Friendly AI

From OpenCog

"Friendly AI" is a nontechnical term introduced by Eliezer Yudkowsky to refer to an AI that is "benevolent to humans" -- one that neither kills nor tortures us, but rather lets us live happily, perhaps even helping us to do so.

Guaranteeing AGI Friendliness in a situation where AGIs are undergoing Strong Self-Modification and rapidly improving their own intelligence beyond the human level is, to say the least, a tough problem.

Why Friendly AI Is Important to Think About

Yudkowsky's recent thoughts on the topic may be found at

Why Friendly AI May Be Impossible to Guarantee

An argument for why Friendly AI may be unachievable may be found at . Note that the author attempted, and failed, to rigorously prove that Friendly AI is unachievable; even so, his qualitative arguments remain interesting.

Mind Ontology Links

Mind Ontology
Supercategory: Ethical Properties of Minds