Directory of Proposed Technologies for AGI




This is a directory of proposed technologies for achieving advanced artificial intelligence by various means. Where possible, arguments both for and against each proposition are presented, along with contact information for carrying the discussions forward.



Proposition

Rather than tackling the usual information-processing problems, such as problem solving and learning, which appear to require far more working machinery than we currently know how to build, a more viable path to advanced AI would start with understanding the functionality needed to achieve self-organizing process control systems.


The Case For by Steve Richfield: It appears that evolution started out with process control and only later developed other computational capabilities. This began with single-celled organisms, then simple multi-celled animals like hydras. There are well-studied structures that perform process control, like the lobster stomatogastric ganglion, where roughly 30 neurons have been diagrammed and observed in operation. In humans, it is the hypothalamus that keeps everything working, yet the hypothalamus is only about 1/350th of the brain. It also appears to be quite susceptible to superstitious learning (a problem known to be unsolvable in process control), as explored at http://www.FixLowBodyTemp.com; superstitious learning appears to cause the metabolic control problems that underlie most chronic illness. There would be great economic value in self-organizing process control chips that could be dropped into all sorts of machines, from bread-making machines to oil refineries, to control their processes better. Make these first, then use your fortune to develop other areas of AI.
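
To make the proposal concrete, here is a minimal sketch in Python of the kind of self-tuning controller such a chip might embed: a proportional-integral loop whose gains adapt online from the observed error rather than being hand-tuned per application. All names, constants, and the adaptation rule here are illustrative assumptions, not a description of any existing chip or product.

 # A self-tuning PI controller: gains adapt from the observed error.
 # Everything here is an illustrative toy, not a real chip design.
 class AdaptivePI:
     def __init__(self, kp=0.5, ki=0.05, adapt_rate=1e-5):
         self.kp, self.ki = kp, ki
         self.adapt_rate = adapt_rate  # how aggressively gains self-organize
         self.integral = 0.0
 
     def update(self, setpoint, measurement, dt):
         error = setpoint - measurement
         self.integral += error * dt
         # MIT-rule-flavored adaptation: nudge each gain in the
         # direction that reduces the squared tracking error.
         self.kp += self.adapt_rate * error * error
         self.ki += self.adapt_rate * error * self.integral
         return self.kp * error + self.ki * self.integral
 
 # Toy usage: drive a first-order process (say, a bread machine's
 # heater, ambient 20 degrees) toward a 65-degree setpoint.
 pid = AdaptivePI()
 temp = 20.0
 for _ in range(1000):
     power = pid.update(setpoint=65.0, measurement=temp, dt=0.1)
     temp += 0.1 * (power - 0.05 * (temp - 20.0))  # toy plant dynamics
 print(round(temp, 1))  # should settle near the 65-degree setpoint

The point of the sketch is that nothing in it is specific to heaters; the same adapting loop could in principle be dropped into very different processes, which is what would give such a chip its economic leverage.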



Proposition

Biological neurons must be reverse engineered well enough to fully understand at least some of their computations before there can be any real chance of developing advanced AI.


Case #1 For by Steve Richfield: The neurons in our brains absolutely must work VERY differently from the neurons in present-day neural networks (NNs), or from the computations in machine learning (ML) implementations. This is because our own learning is often instantaneous, and definitely not the sort of gradual learning so typical of NNs and ML. A half century of “researching” NNs has failed to stumble onto the secret sauce needed to make NNs learn instantly, the way we do. Advanced AI is now stalled on this and other such fundamental challenges. Consider also that neurons have been evolving for hundreds of millions of years, so while the construction of our brains may be rather recent in evolutionary terms, the neurons themselves have had MUCH longer to evolve toward near perfection. If we can’t even see how to fully duplicate the capabilities of a single neuron, what chance do we have of ever engineering MUCH more complex systems?
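
The gap between gradual and instantaneous learning is easy to exhibit in code. In the Python toy below (purely illustrative, and in no way a model of biological neurons), a gradient-trained linear unit needs repeated presentations of a single pattern, while a key-value memory absorbs the same association in one write:

 import numpy as np
 
 rng = np.random.default_rng(0)
 x = rng.normal(size=8)   # a single input pattern
 target = 1.0             # its desired response
 
 # Gradual learning: a linear unit trained by the delta rule needs
 # many repeated presentations of the same single pattern.
 w = np.zeros(8)
 steps = 0
 while abs(w @ x - target) > 0.01:
     w += 0.05 * (target - w @ x) * x
     steps += 1
 print("gradient steps needed:", steps)   # many updates for one association
 
 # "Instant" learning: a key-value memory stores the association in
 # exactly one step and recalls it by nearest-neighbor lookup.
 memory = [(x.copy(), target)]            # one-shot write
 def recall(query):
     key, value = min(memory, key=lambda kv: np.linalg.norm(kv[0] - query))
     return value
 print("one-shot recall:", recall(x))

Whether brains use anything like the second mechanism is exactly the open question; the toy only shows that "learns in one exposure" and "learns by gradient descent" are mechanically different behaviors.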


Case #2 For by Steve Richfield: Both our own neurons and whatever will be happening in future advanced AI systems operate according to presently unknown system(s) of mathematics. As in mechanical engineering before Newton, we now rely on heuristics in feeble attempts to make rudimentary prototype systems work. We don’t even know what is being communicated between biological neurons. If you presume that it is the derivatives of the logarithms of probabilities, then the observed operation of neurons makes much more mathematical sense than if you presume simple probabilities. Our present efforts will look laughable, like trying to “shoot the moon” with a cannon, once a new theory brings optimizing order to the present chaos of heuristics. Wasting effort on better heuristics before real theory emerges is a fool’s errand.
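
To illustrate the mathematical point (this is one reading of the hypothesis above, not established neuroscience), suppose the signal a neuron sends about some proposition A is the derivative of the logarithm of its probability:

 s_A(t) = \frac{d}{dt} \log p_A(t) = \frac{\dot{p}_A(t)}{p_A(t)}

Then for independent evidence, where p_{A \wedge B} = p_A \, p_B, taking logarithms gives \log p_{A \wedge B} = \log p_A + \log p_B, and hence

 s_{A \wedge B}(t) = s_A(t) + s_B(t)

Under this reading, combining probabilities reduces to adding signals, which is precisely the operation a linearly summing cell performs cheaply, whereas communicating raw probabilities would require the hardware to multiply.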



Proposition

With really careful observation of our own thinking processes, we can figure out what we are doing when we think, and then replicate thinking in software.


The Case Against by Steve Richfield: Everything we think we know about the world consists of mental models of complex real-world processes, and in most cases those real-world processes are MUCH more complex than our mental models of them. One of those mental models is our perception of our own thinking. What we think is happening when we think is just a mental model conjured up by our own brains, NOT any sort of direct observation of what we are actually doing. Hence, while it is theoretically possible that someone’s mental model either matches what is actually happening or describes a different but still useful way of accomplishing the same things, that possibility is remote.

Proposition

Proposals for AGI systems should rest on provably general methods.

The Case For by Abram Demski: Generality proofs (which rely on convergence in the limit of unbounded computational resources) are typically not that difficult, and so do not put up an extreme barrier to research progress. Furthermore, they are a good "sanity check", suggesting that the system will not fail on subjectively easy cases in practice (with bounded computational resources), and asking for provable generality usefully discards many possibilities that may seem plausible at first.
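
A classic instance of the kind of limit result meant here (an example of the genre, not one cited in the text above): Solomonoff's universal predictor M provably converges to any computable data source \mu. Stated informally, the total expected squared prediction error is finite, bounded by a small constant c times the Kolmogorov complexity K(\mu) of the source:

 \sum_{n=1}^{\infty} \mathbb{E}_\mu \left[ \left( M(x_{n+1} \mid x_{1:n}) - \mu(x_{n+1} \mid x_{1:n}) \right)^2 \right] \le c \cdot K(\mu) < \infty

The proof is short relative to the strength of the guarantee, yet M itself is incomputable, which is exactly the sense in which such results rely on the limit of unbounded computational resources.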

Contact information for authors

Steve Richfield <Steve.Richfield@gmail.com>
Abram Demski <abramdemski@gmail.com>