"Friendly AI" is a nontechnical term introduced by Eliezer Yudkowsky to refer to an AI that is "benevolent to humans" -- one that does not kill or torture us, but rather helps us and lets us live happily, perhaps even actively furthering our well-being.
Guaranteeing AGI Friendliness in a situation where AGIs are undergoing Strong Self-Modification and rapidly improving their own intelligence beyond the human level is, to say the least, a tough problem.
Why Friendly AI Is Important to Think About
Yudkowsky's recent thoughts on the topic may be found at http://www.singinst.org/ourresearch/publications/artificial-intelligence-risk.pdf
Why Friendly AI May Be Impossible to Guarantee
An argument for why Friendly AI may be unachievable may be found at http://www.vetta.org/?p=5 . Note that the author attempted, but was unable, to prove rigorously that Friendly AI is unachievable; his qualitative arguments are nonetheless interesting.
Mind Ontology Links
- Eliezer Yudkowsky's writings on Friendly AI and related topics: http://yudkowsky.net