
From an email dialogue between Ben Goertzel and Will Pearson, Feb 2009

    The choice of
    scheduler is important; it is something we allow to vary in our
    current computer systems. I have a vision where the scheduler
    specializes to the tasks assigned to it.

I see that you could do this ... and you could create a Scheduler 
object within OpenCog doing this ... but I'm not sure it's the best 
way to go.

My inclination instead is to design a Scheduler object that has the right
tweakable parameters, and let the system adapt these parameters based on its 
situation.  Initially some heuristic procedures for adapting these parameters 
could be supplied, but then the system could adapt these procedures as well...
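The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not OpenCog's actual Scheduler API: the class name, the parameters (`time_slice_ms`, `priority_boost`), and the adaptation heuristic are all assumptions made up for the example.

```python
# Hypothetical sketch of a Scheduler with tweakable parameters that the
# system adapts based on its situation; all names here are illustrative,
# not OpenCog's actual API.
class Scheduler:
    def __init__(self, time_slice_ms=10, priority_boost=1.0):
        # Tweakable parameters the system can adjust at runtime.
        self.time_slice_ms = time_slice_ms
        self.priority_boost = priority_boost

    def adapt(self, avg_wait_ms, target_wait_ms=50.0):
        # A simple heuristic adaptation rule: shrink the time slice when
        # tasks are waiting too long (more preemption), grow it when the
        # queue is keeping up. The system could later learn to replace
        # this heuristic itself.
        if avg_wait_ms > target_wait_ms:
            self.time_slice_ms = max(1, self.time_slice_ms - 1)
        else:
            self.time_slice_ms = min(100, self.time_slice_ms + 1)

sched = Scheduler()
sched.adapt(avg_wait_ms=80.0)  # tasks waiting too long -> smaller slice
print(sched.time_slice_ms)     # 9
```

The point of the design is that the adaptation logic is separated from the scheduling mechanism, so the heuristic in `adapt` can itself be swapped out or learned without rewriting the scheduler.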

    It also
    allows the system to modify itself to protect itself from
    (unintentional as well as intentional) Denial of Resources attacks
    from the programs it runs. This factor can be crucial in how useful a
    system is in the real world.

    This brings me to another philosophical difference I have with
    OpenCog: I assume all programs are potentially malicious, or just full
    of errors that can cause DoR attacks. This, I think, is necessary when
    working with systems that are experimentally self-programmable. And
    experimental self-programmability is necessary for interesting
    self-programmability, as far as I can see.

Well, we thought about this a lot when designing OpenCog.

And we decided that it made sense to defer these issues till a later stage 
rather than messing with them from the start.  It seems clear they *can* be 
dealt with, within the OpenCog infrastructure ... and also clear that this is 
not a practical priority right now.  The priority right now is getting more and 
more intelligent functionality!!

Of course, **ideally** the system should not allow any MindAgent to grind it to 
a halt ...

But at the current stage of development, this doesn't seem extremely important, 
because we can have careful control over which MAs go into a particular 
OpenCog deployment.

It seems there are two kinds of bad MAs to worry about in the future...

An MA that contains nasty code that uses C pointers inappropriately and screws 
with heap memory ... or does something else analogous...

I guess this is not the worst long-term issue, in that once we get around to 
making an OpenCog version that writes its own MAs, it won't be writing them in 
C++.  It will write them in Combo or LISP or some nice language that doesn't 
allow pointer errors....

An MA that contains infinite or very long loops, and thus doesn't cede the 
processor as rapidly as an MA is expected to.

In the case of automatically learned MAs, one can imagine some sort of 
automated program analysis being used to ward off *many* instances of this.  
But I guess it wouldn't catch all of them.

One could also imagine a "watchdog" thread that monitored the other threads, 
checking whether some rogue MA is monopolizing its thread longer than it's 
supposed to, killing the thread if so ... and then penalizing the 
MA somehow.
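The watchdog idea can be sketched roughly as follows. This is a hypothetical illustration, not OpenCog code: agents record when they begin work, and a watchdog flags any that overrun a time budget. (Forcibly killing a thread is platform-specific and unsafe in most runtimes, so this sketch only does the portable part: detecting the overrun and recording a penalty.)

```python
import threading
import time

# Hypothetical watchdog sketch: worker threads record a start-of-work
# timestamp per agent; the watchdog flags any agent holding its slot
# longer than a budget and tallies a penalty against it. Names are
# illustrative, not OpenCog's actual mechanism.
class Watchdog:
    def __init__(self, budget_s=0.05):
        self.budget_s = budget_s
        self.active = {}       # agent name -> work start timestamp
        self.penalties = {}    # agent name -> number of overruns
        self.lock = threading.Lock()

    def begin(self, agent):
        with self.lock:
            self.active[agent] = time.monotonic()

    def end(self, agent):
        with self.lock:
            self.active.pop(agent, None)

    def check(self):
        # Called periodically from the watchdog thread.
        now = time.monotonic()
        with self.lock:
            for agent, started in self.active.items():
                if now - started > self.budget_s:
                    self.penalties[agent] = self.penalties.get(agent, 0) + 1

wd = Watchdog(budget_s=0.01)
wd.begin("RogueAgent")
time.sleep(0.02)     # RogueAgent overruns its budget
wd.check()
print(wd.penalties)  # {'RogueAgent': 1}
```

In a real system the penalty tally would feed back into scheduling, e.g. by reducing the offending MA's future processor allocation.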


Another, related (and more urgent) issue is that the system should allocate 
resources preferentially to those MAs that are more useful ... which will be 
done by the assignment-of-credit mechanism (which is not programmed yet, but is 
described in the OCP wikibook).

But if a MindAgent wants to suck up resources and not do anything useful ... 
it's often going to take quite a while for the system's "assignment of credit" 
mechanisms to get around to realizing the MA is worthless and denying it the 
STICurrency and LTICurrency it needs to operate.
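The economic dynamic described above, including its lag, can be sketched in miniature. This is a hypothetical toy model, not the OCP wikibook's actual mechanism: each MindAgent pays "rent" in currency per cycle to run, and earns currency in proportion to its measured usefulness. A useless agent drains its wallet and eventually cannot afford to run, but only after several cycles, which is exactly the delay noted above.

```python
# Hypothetical toy model of economic assignment of credit: agents pay
# rent per cycle and earn currency proportional to usefulness. Names
# and numbers are illustrative only.
RENT = 10  # currency an agent must pay for one cycle of processor time

def run_cycles(wallets, usefulness, cycles):
    for _ in range(cycles):
        for agent in list(wallets):
            if wallets[agent] < RENT:
                continue                         # too poor to run this cycle
            wallets[agent] -= RENT               # pay for processor time
            wallets[agent] += usefulness[agent]  # credit earned for results
    return wallets

wallets = run_cycles({"useful": 50, "useless": 50},
                     {"useful": 15, "useless": 0}, cycles=5)
print(wallets)  # {'useful': 75, 'useless': 0}
```

Note that the "useless" agent still consumed five full cycles of processor time before going broke; a faster-reacting mechanism (like the watchdog above) would be needed to stop it sooner.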

-- ben