OpenCogPrime:FAQ


Roadmap & Metrics

The milestones proposed in the OCP roadmap seem vague. Why?

To quote a specific question:

Instead of "Tuning of attention allocation to effectively control a mobile, social, exploring agent in a sim world," for example, I'd expect something like "navigate from point A to point B in virtual world V, overcoming challenges including a maze of obstacles" or ".... challenges including hostile monster M (which can only defeated with the aid of allied players)." The current milestones do not seem objectively measurable.

More specific milestones must be formulated in order to guide development, and this will be done. There has already been plenty of discussion and thinking about this; we just haven't taken the time to put it on the wiki site yet.

However, there is some subtlety to this issue. Ben Goertzel wrote in an email exchange that:

IMHO, the value of "objectively measurable milestones" is often overestimated in an AGI context.
It is not at all clear that highly specific milestones should be used to drive development, in a project like this one. Certainly they are useful to guide and inspire development... but, using them to define success, or to drive development in a specific way, may be a dangerous technique.
Over and over again, in the history of AI, we've seen the danger of "overfitting an AI system" to a specifically, narrowly defined goal or set of goals. Over and over again, it turns out that hacks or narrow-AI cleverness of various sorts can be used to achieve the specific goals, without really capturing the spirit in which the goals were originally proposed.
Like it or not, the important assessment of intermediate stages of AGI development is going to be qualitative. Objectively measurable milestones are really useful for testing, tuning and tweaking algorithms, when used in the context of a deep understanding and appreciation of the qualitative intermediate-stage goals.... But they should not really be the crux of the project, IMO...
Of course, this is a separate issue from the AGI design itself, and folks could pursue the OpenCogPrime design in a manner tightly driven by narrowly defined objective milestones. I just doubt this is the most productive approach.

A narrow-AI surely can't overfit a well-selected set of milestones, can it?

Yes, it can. Take the example of 20 simple and heterogeneous milestone tests. All you need to do is create a narrow-AI approach to each of the 20 problems separately, perhaps using the OpenCog tools (which are easily customizable to yield narrow-AI solutions to particular problems), and then wrap these 20 specialized solutions up inside OpenCog's common external interface ... On the most trivial level, one could create a separate MindAgent for each of the 20 test problems, with each of these specialized MindAgents drawing on more general MindAgents such as PLN, MOSES, etc.
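To make the worry concrete, here is a minimal, purely illustrative sketch of what such a trivial wrapper could look like. The class and test names are invented for illustration and are not actual OpenCog code: one hand-built solver per milestone test, registered behind a single common interface so that the bundle superficially looks like one integrated system.

```python
# Purely illustrative sketch of the "trivial" overfitting approach described
# above; the class and test names are invented and do not correspond to any
# actual OpenCog code.

class MilestoneDispatcher:
    """Routes each known milestone test to its own specialized, narrow solver."""

    def __init__(self):
        # One hand-crafted, task-specific solver per test it was tuned for.
        self.solvers = {}

    def register(self, test_name, solver_fn):
        self.solvers[test_name] = solver_fn

    def solve(self, test_name, problem_instance):
        # No general intelligence here: passing the 20 tests only shows that
        # 20 separate narrow solutions were written and wrapped together.
        solver = self.solvers.get(test_name)
        if solver is None:
            raise ValueError("No specialized solver registered for " + test_name)
        return solver(problem_instance)


# Example usage with two invented test names:
dispatcher = MilestoneDispatcher()
dispatcher.register("maze_navigation", lambda maze: "hard-coded A* over the maze grid")
dispatcher.register("object_sorting", lambda scene: "hand-tuned heuristic sort")
print(dispatcher.solve("maze_navigation", None))
```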

Furthermore, it's not really clear where the boundary between this trivial approach and serious AGI design lies.

For instance, suppose two of the 20 test problems involve navigation in complex environments. Is it "cheating" to create a navigation MindAgent, or not? We have already created a navigation Task, for practical use in controlling virtual agents in virtual worlds. I think this is likely a bogus approach from an AGI perspective, and that if one is trying to make a powerful embodied AGI, navigation should probably be learned based on other primitives. However, Eric Baum (author of "What Is Thought?") is one serious AGI thinker who disagrees: he strongly feels that the human brain has an in-built navigation module, and that an engineered AGI system should have one too.

I am leaning toward a middle path, in which certain high-level functions within the current navigation Task are exposed to the AtomTable and Combo interpreter as primitives, but the AI system must learn to compose these functions into a real navigation algorithm. I think this can lead to a more flexible and adaptive navigation algorithm than just using our existing navigation Task (which embodies a combination of A*- and TangentBug-based algorithms) ... but whether this difference would be apparent on a couple of navigation-based test problems is not clear. Probably the navigation Task could be tweaked by hand to do well on a couple of navigation-based test problems more quickly than a deeper, learning-based approach could be implemented and then taught to do well on them.
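As a rough illustration of this "middle path", the sketch below uses invented function names (not the actual navigation Task, AtomTable or Combo API): a few high-level navigation operations are exposed as primitives, and a hand-written composition of them stands in for the kind of program a learning process would be expected to discover on its own.

```python
# Illustrative sketch only: the primitives and the composition below are
# invented stand-ins, not the actual navigation Task, AtomTable or Combo code.

def visible_obstacles(world, position):
    """Primitive: return obstacles currently perceivable from `position` (stubbed)."""
    return world.get("obstacles", [])

def step_toward(position, target):
    """Primitive: take one greedy grid step toward `target`."""
    dx = 1 if target[0] > position[0] else -1 if target[0] < position[0] else 0
    dy = 1 if target[1] > position[1] else -1 if target[1] < position[1] else 0
    return (position[0] + dx, position[1] + dy)

def follow_boundary(position, obstacle):
    """Primitive: skirt around an obstacle, TangentBug-style (stubbed as a side step)."""
    return (position[0], position[1] + 1)

def composed_navigate(world, position, target, max_steps=100):
    """One possible composition of the primitives; a learned program would play
    this role instead of a hand-coded monolithic navigation routine."""
    for _ in range(max_steps):
        if position == target:
            break
        next_cell = step_toward(position, target)
        if next_cell in visible_obstacles(world, position):
            position = follow_boundary(position, next_cell)
        else:
            position = next_cell
    return position

# Example usage in an obstacle-free toy world:
print(composed_navigate({"obstacles": []}, (0, 0), (3, 2)))
```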

As with navigation, so with everything else...

On one subtle issue after another, tailoring your proto-AGI to a bunch of test problems drastically affects the approach you take.

The lesson, IMO, is that test problems should be taken as an important way of validating the non-idioticness of an AGI system ... but they should **not** be taken as a validation of success, and different approaches should **not** be compared based on their performance on test problems.

This may sound strange but I have come to this conclusion based on a lot of thought.

At AGI-09 there will be a session on "Evaluation of AGI Systems", chaired by John Laird (the creator of SOAR, a very famous AI/AGI/cognitive-modeling system), who does not precisely agree with me on the above point ... so we should have some interesting discussions ;-) ....

So, I do understand the importance of this topic, but I also think it's much subtler than it initially appears.

I don't want to propose a set of tests/milestones just by cobbling together "stuff that it would be cool/impressive to have an AGI do"; I want to create a set of tests based on carefully porting theories of human developmental psychology into the OpenCog/virtual-worlds context. This is not a huge deal, but nor is it a trivial task.

Is OpenCog aiming too high, even in its first iteration?

Many projects based on brilliant ideas have failed by aiming too high for the first iteration. In contrast, any successful software project offers value, a tangible benefit compared to what existed before, from the very first pre-release version -- even if the first version is built from only a few of the planned modules, and the implementation is primitive, and the functionality is limited and buggy. And of course, the only general intelligences known to date were built that way. Which better describes the way OpenCog is being developed?

OpenCog, as a framework, is definitely not aiming too high for the first iteration.... We will release a public alpha during Fall 2008, which we believe will offer many tangible advantages over other frameworks supporting diverse AI development. And the framework will then be gradually improved, according to a dynamic roadmap based on the needs of individuals using the framework as the basis for their AI systems.

Regarding OpenCogPrime, in particular, as noted in the OpenCogPrime:Roadmap:

OCP development is divided into several phases.

The first phase has focused on development of the framework itself, and on development of specific AI modules operating within the framework. Each of these modules has tangible benefits compared to competing AI software ... but these benefits exist within the domain of **research software functionality** ... none of these modules, on its own, is supposed to do anything useful for an end user or a commercial company, nor even to do anything that looks whizzy in a demo...

For instance, the MOSES module arguably outperforms any other automated program learning approach, as argued in Moshe Looks' PhD thesis and subsequent published conference papers...

The PLN module (currently being integrated into OpenCog by Joel Pitt and Cesar Maracondes) is the only existing software system that can meaningfully propagate probabilities through the full range of predicate and/or term logic inference steps...

The second phase (which we are undertaking simultaneously with completing the first phase, for several reasons that I believe are valid) involves seeking "artificial toddler" functionality. We note that, as a sub-phase within this phase, we are currently working on OpenCog-based "artificial dog" functionality in the RealXTend platform.

Is an "artificial toddler" too big of a goal for Phase Two? It is hard to prove it's not; but, the goal was chosen pretty carefully. An artificial dog or baby, within the constraints of current virtual world tech, is in a sense "too easy" -- it doesn't provide much avenue for using interesting AI learning or reasoning methods. OTOH an artificial scientist or sales clerk is obviously "too hard" -- there are a lot of steps between here and there, and even though we can describe them all in detail, it makes sense to focus on the earlier ones before struggling too much with the details of the later ones.

If virtual worlds were integrated with really really good robot simulators, then a virtual baby would be a decent Phase Two goal, because sensorimotor integration and motor planning would be serious AI problems. But in current virtual worlds (even after initial crude integration with robot simulators), the sensorimotor foundation is very simplistic, so that baby-level stuff is not that interesting to play with, and it makes more sense to aim for the virtual-toddler level.

Embodiment (Real and Virtual)

Is virtual-world embodiment good enough for AGI?

I see that simple embodiment is not anywhere near enough to put human social contact within the reach of direct experience. Embodiment will help AGI understand "chair" and "table"; it will not help it understand vindictiveness or slander, for example.

It is not yet clear what characteristics a virtual world needs to have to make it "good enough" for AGI.

It may be that current virtual worlds are good enough. It may be that we need to take some relatively minor steps such as integrating virtual worlds with robot simulators. Or it may be (though most involved with OCP doubt it) that we need to go beyond what could be achieved by fusing current virtual worlds with current robot simulators.

More details from an email by Ben Goertzel:

It seems that in many cases, even a quite simplistic virtual world can be helpful to an AGI in understanding the human world.
For instance, slander is about lying, and understanding lying will be easier for an AI system if it is embodied in a world together with other agents, so it can see which of the other agents' statements accurately reflect the state of the shared world.
While it's true that simple embodiment will not allow rich understanding of human motions, I think it will still allow a lot of understanding of human social relationships and attitudes, by giving a shared-experience context for socially-oriented linguistic utterances.
Recall Calvin and Bickerton's theory (in Lingua Ex Machina) that linguistic case roles are derived from human social roles. To the extent this is true, a basic understanding of social roles in common situations (which can certainly be derived from a simple embodiment) may be very helpful in providing AIs with the inductive bias needed to effectively learn human language.

From Bob Mottram:

As a roboticist I can say that a physical body resembling that of a human isn't really all that important. You can build the most sophisticated humanoid possible, but the problems still boil down to how such a machine should be intelligently directed by its software.
What embodiment does provide are *instruments of causation* and closed-loop control. The muscles or actuators cause events to occur, and sensors then observe the results. Both actuation and sensing are subject to a good deal of uncertainty, so an embodied system needs to be able to cope with this adequately, at least maintaining some kind of homeostatic regime. Note that "actuator" and "sensor" could be interpreted broadly, and might not necessarily operate within a physical domain.

Is embodiment really so necessary? Isn't language understanding more important than embodiment?

Embodiment will not help an AGI system understand "president", "Russia" or "newspaperman" -- only reading seems to open these gates.
I don't see what benefit embodiment brings to the creation of an AGI scientist/engineer, whereas reading is critical. Mechanical awareness -- not so much: an AGI could have "immediate" mechanical awareness of not just 3D, but also 4D, 5D, etc. spaces.

Ben Goertzel:

I feel that simple embodiment is the most likely route to enable an AGI to learn human language well enough to read complex texts, including those in scientific and engineering domains.
This is really the main point. I don't think that either statistical corpus analysis or hand-engineering of linguistic rule-bases is going to yield systems that can really understand language. But I think that systematic instruction of a simply-embodied AGI system can get us there, even without all the bells and whistles of a truly humanlike body.

Design

Is there a formal proof that OCP will yield human-level AGI?

The short answer is: no!

In XX and the pages linked therefrom, a mathematical research program that might lead to such a proof is speculatively and broadly outlined. However, such a speculative outline is a very long way from a proof!

Specifically, a bunch of unproven theorem-statements are given there ... along with the proviso that most of the statements will probably need to be tweaked a bit to make them actually true ;-)

There appear to be, at the least, years of mathematical work involved in making those speculations rigorous, and how to prioritize this against actually building, testing and teaching the system is a subtle matter.

Also note: for **no** reasonably complex AGI system will there be any formal proof that it will work, anytime soon ... because modern math just isn't advanced enough in the right ways to let us prove stuff like this. We can barely prove average-case complexity theorems about complex graph algorithms, for example -- and proving useful stuff about complex AI systems is way, way harder.

Is there strong empirical evidence that the OCP design will lead to human-level AGI?

Short answer: no

Note, however, that for **no** AGI system will there be any empirical data showing that the system will work, before it is built and tested.

Is there *any* evidence or proof that the OCP design will lead to human-level AGI?

Short answer: no

Here's the thing: Only for very simple AGI designs is it going to be possible to cobble together a combination of fairly-decent-looking theoretical and empirical arguments to make a substantive case that the system is going to work on the large scale, before actually trying it.

So, it seems, the most natural things would-be AGI builders can do are:

A) throw our hands up in despair where AGI is concerned, and work on something simpler

B) wait for the neuroscientists and cognitive scientists to understand the human mind/brain, and then emulate it in computer software

C) work on pure mathematics, hoping to eventually be able to prove things about interesting AGI systems

D) choose a design that seems to make sense, based on not-fully-rigorous analysis and deep thinking about all the issues involved, and then build it, learn from the experience, and improve the design as you go

The idea underlying OCP is to take approach D.

There are also other possible approaches: for instance, Richard Loosemore is working on a variant of approach B), which involves building an AGI that is as close as possible to the human mind at the cognitive level (so, not a reconstruction of the neural hardware). His reasons for doing so are more than just preference: he believes there is something called the "complex systems problem" (see below) that makes it imperative to use this approach. As part of his attempt to avoid the complex systems problem, his methodology is also unusual: he is building a novel sort of software framework for empirically exploring the behavior of human-cognitive AGI systems. As far as mathematical provability goes, Loosemore actually embraces the fact that provability is not possible, because the complex systems problem says that nobody can "prove", ahead of time, that a particular approach is guaranteed to lead to AGI; instead, he argues that the second-best thing to provability is to emulate the one design for which we have an existence proof (i.e. the human mind).

Before the modern science of aerodynamics existed, any attempt at mechanized flight was based to some extent on "hunches" or "intuitions."

The Wright Brothers followed their hunches and built the plane, rather than listening to the skeptics and giving up, or devoting their lives to developing aerodynamic mathematics instead.

Of course, many others in history have followed their hunches and failed -- such as many earlier AI researchers, and many flight researchers before the Wright Brothers.

It is precisely where there is not enough knowledge for definitive proofs, that the potential for dramatic progress exists ... for those whose intuitive understandings happen to be on target.

What about the "Complex Systems Problem?"

In a 2007 paper, Richard Loosemore proposed the idea that if a system is complex enough to be highly intelligent, it likely falls into the class of "complex systems", with the result that one cannot design a system that reaches full intelligence using engineering methods alone; rather, one must evolve such a system or use some other non-engineering-like design methodology.

Loosemore, in an email, made the following complaint about the OCP design:

As far as I can see there is no explicit attempt to address the complex systems problem (cf http://susaro.com/wp-content/uploads/2008/04/2007_complexsystems_rpwl.pdf). In practice, what that means is that the methodology (not the design per se, but the development methodology) does not make any allowance for the CSP.

However, those of us involved with the OCP design believe that the CSP is addressed in the design, but not by that name.

The CSP is addressed implicitly, via the methodology of **interactive learning**.

The OCP design is complex and has a lot of free parameters, but the methodology of handling parameters is to identify bounds within which each parameter must stay to ensure basic system functionality -- and then let parameters auto-adapt within those bounds, based on the system's experience in the world.

So, the system is intended to be self-tuning, and to effectively explore its own parameter-space as it experiences the world.

The design is fixed by human developers in advance, but the design only narrows down the region of dynamical-system-space in which the system lives. The parameter settings narrow down the region further, and they need to be auto-tuned via experience.

So, the question is not whether the system as designed, and with exact human-tuned parameter values, is going to be a human-level intelligence. The question is whether the region of dynamical-system-space delimited by the overall system design contains systems capable of human-level-intelligence, and whether these can be found via dynamic automatic parameter adaptation guided by embodied, interactive system experience.
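A minimal sketch of this methodology follows; the parameter names, bounds and update rule are invented for illustration and are not the actual OCP parameters or adaptation mechanism. The idea is simply that humans fix hard bounds for each free parameter, and the running system then adjusts values within those bounds based on feedback from its own experience.

```python
import random

# Illustrative sketch only: parameter names, bounds and the acceptance rule are
# invented; they stand in for whatever parameters and feedback signals OCP uses.

PARAMETER_BOUNDS = {
    # name: (lower bound, upper bound), chosen by the human designers to keep
    # the system within the region where it remains basically functional.
    "attention_decay_rate": (0.01, 0.50),
    "inference_confidence_threshold": (0.10, 0.90),
}

def clamp(value, low, high):
    return max(low, min(high, value))

def adapt_parameters(params, performance_delta, step=0.05):
    """Propose a small random change to each parameter, keep it only if recent
    experience improved, and never leave the human-specified bounds."""
    adapted = {}
    for name, value in params.items():
        low, high = PARAMETER_BOUNDS[name]
        proposal = clamp(value + random.uniform(-step, step), low, high)
        adapted[name] = proposal if performance_delta > 0 else value
    return adapted

# Example: start in the middle of each allowed range, then adapt from experience.
params = {name: (low + high) / 2 for name, (low, high) in PARAMETER_BOUNDS.items()}
params = adapt_parameters(params, performance_delta=1.0)
print(params)
```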

Loosemore thinks the CSP is more severe than we do. We believe we can fix the design and let the parameters auto-adapt. Loosemore's main disagreement is really over how far the initial choice of design (in our case, the OCP design) allows one to reasonably travel through the "design space" via this auto-adaptation of parameters, in order to arrive at a final AGI design that works. For him, the design space might be so thinly populated with viable AGI designs that the only way to guarantee we will hit a viable design is to do all the work as near as possible to the human design. We, on the other hand, believe that he is being too conservative in his estimate of how hard it might be to find a viable design that is not human-like.

Of course, on this as on many other conceptual issues related to AGI, at this stage none of us can rigorously prove our perspectives correct. This is early-stage science and to some extent we each have to follow our own intuitions.

See also CSPDiscussion

There's a lot of logic in the OCP design. Isn't this missing the wonderful irrationality at the core of human creativity?

AGI-ers, AFAIK, try to build rational, consistent (and therefore "totalitarian") computer systems. Actually, humans are very much conflict systems, and to behave consistently for any extended period in any area of your life is a supreme and possibly heroic achievement. A conflicted, non-rational system is paradoxically better psychologically as well as socially - and, I would argue, absolutely essential for dealing with AGI decisions/problems, as (most of us will agree) it is for social problems.

According to Ben Goertzel:

I think that non-rationality is often necessary in minds due to resource limitations, but is best minimized as much as possible ...
It's easy to confuse true rationality with narrow-minded implementations of rationality, which are actually NOT fully rational. If your goal is to create amazing new ideas, the most rational course may be to spend some time thinking wacky thoughts that at first sight appear non-rational.
By true rationality I simply mean making judgments in accordance with probability theory, based on one's goals and the knowledge at one's disposal. Note that rationality does not tell you what goals to have, nor does it apply to systems except in the context of specific goals (which may be conceptualized by the system itself, or just by an observer of the system).
I think it would be both counterproductive research-wise, and ethically dangerous, to attempt to replicate human irrationality in our AGI systems.
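The notion of "true rationality" described above can be loosely formalized in standard decision-theoretic terms (this formalization is an illustrative assumption, not a formula from the OCP design): a rational agent chooses the action that maximizes expected goal-satisfaction given its knowledge.

```latex
a^{*} \;=\; \arg\max_{a} \sum_{s} P(s \mid a, K)\, U_{G}(s)
```

Here K denotes the knowledge at the agent's disposal, P(s | a, K) the probability the agent assigns to outcome s if it takes action a, and U_G(s) the degree to which s satisfies its goals G. Rationality in this sense constrains how beliefs and goals combine into judgments, but, as noted above, it says nothing about which goals G to adopt.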

Technical

Implementation

How much computer power is required to create human-level AGI using OCP?

Honestly: we really don't know. Our educated guess is that it's in the range of dozens to low thousands of current PCs.

The thing is, the OCP design tells us what processes we need to implement and hook together to create a powerful artificial mind, but it doesn't tell us how well these processes can be optimized.

A great deal of attention has gone into making the design capable of scaling, because if you don't need to worry about computational resources, creating a viable AGI design is a lot easier ... cf. AIXI or the Gödel Machine...

Ethics

What are the expected benefits of human-level AGI?

...

What are the expected risks of human-level AGI?

...

What is being done to minimise the risks of human-level AGI?

...

Also Refer to MindOntology:Ethical_Properties_of_Minds

Other