Outsourcing Array Operations From OpenCog
In Deep_Learning_in_OpenCog:_Proposal I give some rough notes on potential mechanisms for implementing deep learning networks inside OpenCog. But the specific representations described there are only "rough notes" quality, and may not be exactly what we want.
So — in an F2F discussion at the iCog office yesterday, Yenatfanta and I decided that, in order to figure out how to make the Atomspace mechanisms work well for "deep learning based vision in OpenCog", it would be best to first figure out how to get some simple, efficient array operations working via OpenCog mechanisms.
THESE ARE ROUGH, EXPERIMENTAL SPECULATIONS PROPOSED FOR DISCUSSION ONLY — NOT TO BE TAKEN AS A SPEC FOR IMPLEMENTATION AT THE MOMENT…
The example cases we chose are:
- Multiply a reasonably large vector by a reasonably large matrix
- Train and apply a single neural net model mapping some input data into some output data
The same mechanisms used for these should be usable for making OpenCog do the math underlying deep learning algorithms as well…
So, here is one way to do the above.
We may introduce an ArrayNode, corresponding to a multi-dimensional array (this could also be called a TensorNode…). An ArrayNode would denote a specific multi-dimensional array, and its name would provide an easy way to find that array.
One way to introduce ArrayNode, without building a lot of new mechanisms, would be to make an ArrayNode “behind the scenes” be a type of GroundedSchemaNode. So for instance if one wants to represent the matrix
  1 2 3
  4 5 6
  7 8 9
one could do this by creating an Atom

  ArrayNode "abc"

which is interpreted to mean the same as

  ExecutionOutputLink
      GroundedSchemaNode "abc.py"
In this case, running the python script “abc.py” would return some python multi-dimensional array object, containing the array
  1 2 3
  4 5 6
  7 8 9
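To make this concrete, here is a minimal sketch of what a script like "abc.py" might contain, assuming the convention is simply "a python function that returns the array"; the function name and the NumPy representation are assumptions for illustration:

```python
import numpy as np

def abc():
    # Return the fixed 3x3 matrix that ArrayNode "abc" stands for,
    # as an ordinary NumPy array living outside the Atomspace.
    return np.array([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]])
```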
A possible shorthand, for very short arrays, would be to allow usage like

  ArrayNode "(3,3)"

where the fact that the ArrayNode name begins with a parenthesis would be interpreted to mean that the name directly encodes the array referred to, so that e.g. the above would mean the same as

  ExecutionOutputLink
      GroundedSchemaNode "makeArrayFromString.py"
      "(3,3)"
which would produce the appropriate python array object corresponding to (3,3).
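A sketch of how a "makeArrayFromString.py"-style script might decode such names, assuming the name is a Python-style tuple literal (the function name is hypothetical):

```python
import ast
import numpy as np

def make_array_from_string(name):
    # Safely parse a node name like "(1,0,1)" into a Python tuple,
    # then wrap it as a NumPy array.
    return np.array(ast.literal_eval(name))
```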
ArrayNodes would then have certain characteristics, which could be stored in the Atomspace, e.g.

  EvaluationLink
      PredicateNode "dimension"
      ArrayNode "abc"
      NumberNode "2"

  EvaluationLink
      PredicateNode "shape"
      ArrayNode "abc"
      ArrayNode "(3,3)"

where ArrayNode "(3,3)" refers to the array (3,3) [since "abc" refers to a 3x3 matrix]
We could also have a method for getting an element of the array referred to by an ArrayNode, e.g.
  ExecutionLink
      PredicateNode "getArrayEntry"
      ListLink
          ArrayNode "abc"
          ArrayNode "(2,3)"
      NumberNode "6"
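In Python terms, the lookup behind "getArrayEntry" might look like the following sketch, assuming 1-based (row, column) indexing; the function name is hypothetical:

```python
import numpy as np

def get_array_entry(arr, index):
    # Look up one entry of a NumPy array, treating the index tuple
    # as 1-based (row, column) -- an assumption about how the
    # "getArrayEntry" predicate would interpret its argument.
    return arr[tuple(i - 1 for i in index)]

# The matrix referred to by ArrayNode "abc":
m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
```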
To multiply the above matrix by the array (1,0,1), we would then say something like
  ExecutionLink
      PredicateNode "multiplyMatrix.py"
      ListLink
          ArrayNode "abc"
          ArrayNode "(1,0,1)"
      ArrayNode "(4,10,16)"
Or, equivalently, if one did
  ExecutionOutputLink
      PredicateNode "multiplyMatrix.py"
      ListLink
          ArrayNode "abc"
          ArrayNode "(1,0,1)"

then the result produced would be the ArrayNode "(4,10,16)".
We would then need argument-passing to work, so that one could say e.g.
  ExecutionOutputLink
      PredicateNode "addArray"
      ListLink
          ArrayNode "(1,1,1)"
          ExecutionOutputLink
              PredicateNode "multiplyMatrix.py"
              ListLink
                  ArrayNode "abc"
                  ArrayNode "(1,0,1)"

then the result would come out as expected (yielding (5,11,17)).
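Behind the scenes, the scripts wrapped by "multiplyMatrix.py" and "addArray" could be as simple as the following NumPy sketch (the Python function names are assumptions for illustration):

```python
import numpy as np

def multiply_matrix(m, v):
    # Matrix-vector product, as multiplyMatrix.py might compute it.
    return np.asarray(m) @ np.asarray(v)

def add_array(a, b):
    # Elementwise addition, as addArray might compute it.
    return np.asarray(a) + np.asarray(b)

abc = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

# Composing the two operations mirrors the nested Atomese above:
result = add_array((1, 1, 1), multiply_matrix(abc, (1, 0, 1)))
# result is (5, 11, 17)
```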
Next, suppose we want to train a neural network to map the rows of matrix A into the rows of matrix B. We may then say something like
  ExecutionOutputLink
      GroundedSchemaNode "makeTrainedModel"
      ListLink
          ConceptNode $X
          ArrayNode "A"
          ArrayNode "B"
Suppose the ConceptNode $X is named "AB". Then, the action of the ExecutionOutputLink here would be to create a GroundedSchemaNode "trained_model_AB.py" (and to create, and save in an appropriate location, the corresponding python file), so that
  ExecutionOutputLink
      GroundedSchemaNode "trained_model_AB.py"
      ArrayNode $Z
will give the output of the trained model when given the input $Z (where $Z, in this example, is a 1D array of the same length as the rows of matrix A).
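As a rough sketch of what "makeTrainedModel" might do, here a plain least-squares fit stands in for whatever neural-net training library the real script would wrap; all names are hypothetical, and a real implementation would also write the trained model out as a python file:

```python
import numpy as np

def make_trained_model(A, B):
    # Fit a linear map W so that each row of A is sent (approximately)
    # to the corresponding row of B.  A real makeTrainedModel would
    # train a neural net here instead of solving least squares.
    W, *_ = np.linalg.lstsq(A, B, rcond=None)

    def trained_model(z):
        # Plays the role of trained_model_AB.py: apply the fitted map
        # to a 1D input of the same length as the rows of A.
        return np.asarray(z) @ W

    return trained_model

# Usage: map rows of A onto rows of B.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[2.0], [3.0], [5.0]])
model = make_trained_model(A, B)
# model([1.0, 0.0]) is approximately [2.0]
```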
If we wanted to update the same trained network via giving it additional training data, we could say
  ExecutionOutputLink
      GroundedSchemaNode "updateTrainedNeuralNet.py"
      ListLink
          GroundedSchemaNode "trained_model_AB.py"
          ArrayNode "A1"
          ArrayNode "A2"
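And a correspondingly simple sketch of what "updateTrainedNeuralNet.py" could amount to, assuming the model reduces to a weight matrix that can be refined by gradient descent on new input/output pairs (the function name, learning rate and step count are illustrative assumptions):

```python
import numpy as np

def update_trained_model(W, A1, A2, lr=0.1, steps=100):
    # Refine an existing weight matrix W so that rows of A1 map
    # closer to the matching rows of A2 -- a stand-in for further
    # training of a saved neural net model on new data.
    W = W.copy()
    for _ in range(steps):
        # Gradient of the mean squared error of (A1 @ W) vs A2.
        grad = A1.T @ (A1 @ W - A2) / len(A1)
        W -= lr * grad
    return W

# Example: nudging an untrained (zero) map toward new targets.
A1 = np.eye(2)
A2 = np.array([[1.0], [2.0]])
W_new = update_trained_model(np.zeros((2, 1)), A1, A2)
```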
OK, that’s about it…
Some important advantages of the above approach seem to be
- fairly concise Atomese code
- all large numeric objects and all expensive numeric operations are offloaded out of the Atomspace
- inputs and outputs of operations can be inspected in the Atomspace as needed
- relatively minimal introduction of new structures and mechanisms in OpenCog, due to reliance on the GroundedSchemaNode mechanism
- many python (or python-wrapped) libraries for numeric operations (including neural net and deep learning operations) exist, so this would quickly give access within OpenCog to a lot of useful stuff