CSPDiscussion

Regarding the difficulty of predicting the behavior of an intelligent system based on its architecture, there are of course multiple possibilities... e.g.

1) It's **so** hard that we have no hope to build AGI except by slavishly copying the human brain on a very low level, e.g. cell by cell or molecule by molecule or column by column...

2) It's not as hard as in Case 1 ... but it's so hard that we have no hope to engineer an AGI based on principles differing substantially from those underlying the human brain. However, it can work to build systems qualitatively very similar to the human brain and then tune their details via some combination of theory and systematic experimentation and teaching.

3) It's not as hard as in Case 2 ... one can engineer an AGI based on principles roughly similar but not nearly identical to those underlying the brain, and one can predict the basic nature of the overall behavior based on the architecture, and then tune the details via a combination of theory and systematic experimentation and teaching.

4) Brain-like AGIs are hard to engineer, but there are other principles one can use to build equally or more intelligent minds ... and using these other principles it's not nearly so hard to predict the behavior from the architecture.

5) It's not as hard as Case 3 would suggest: we can build vaguely brain-like systems and predict the behavior based on the architecture, without so much fussing. The reason it hasn't worked yet is pretty much that the hardware sucks, or the robot bodies suck, etc.


Richard Loosemore seems to believe something like Case 2. That's fine, but I haven't heard any principled argument from him as to why Case 2 holds instead of Case 1 or Case 3.

[Note: Loosemore disagrees! He claims, on the contrary, that he has given the principled arguments in his 2007 paper, which can be found at http://susaro.com/publications].

Doug Lenat, for example, seems to believe something like Case 4.

I (Ben Goertzel) tend to believe something like Case 3, and I'm unsure about whether Case 4 holds as well. If Case 5 turns out to hold I won't be totally shocked either, but I think it may be overoptimistic...

My argument for Case 3 is based on specific hypotheses about the behavior that will ensue from a complete implementation of my own AGI design, which I'm currently writing up in a book entitled "Building Better Minds"... But these are hypotheses [backed up by nonrigorous, suggestive arguments] that must be validated (or refuted) by building and teaching the system; they're not mathematical proofs...

-- Ben G