ZeroMQ

Setup

  • Install ZeroMQ (it should already be installed); for Eclipse, add zmq to the project libraries
  • Install Protocol Buffers (it should already be installed):
 ./configure
 make
 sudo make install

For Eclipse, add protobuf to the project libraries.

Compile proto file (if changed)

 protoc --cpp_out=. AtomSpaceMessages.proto

How to compile

  • Add new files and libs to opencog/atomspace/CMakeLists.txt
    • add ZMQMessages.pb.cc, ZMQServer.cc, ZMQClient.cc and ProtocolBufferSerializer.cc to the ADD_LIBRARY (atomspace SHARED ...) block
    • add protobuf to the SET(ATOMSPACE_LINK_LIBS ...) list
    • add ZMQMessages.pb.h, ZMQServer.h, ZMQClient.h and ProtocolBufferSerializer.h to the INSTALL (FILES ...) list
    • run cmake .. in the bin folder
  • Add -DZMQ_EXPERIMENT to CXX_DEFINES in the flags.make file in the following folders (the paths below are from one developer's checkout; substitute your own build directory):
    • /home/erwin/ochack/bin/opencog/atomspace/CMakeFiles/atomspace.dir
    • /home/erwin/ochack/bin/opencog/server/CMakeFiles/cogserver.dir
    • /home/erwin/ochack/bin/opencog/server/CMakeFiles/server.dir

How to use

  • Link with AtomSpace
  • Create a ZMQClient instance and call its methods
ZMQClient* zmqClient = new ZMQClient(); //defaults to localhost
zmqClient->getAtom(h); //fetch the atom behind handle h from the server's atomspace

If the cogserver is running on a different PC

ZMQClient* zmqClient = new ZMQClient("tcp://168.0.0.21:5555");

How it works

ZeroMQ allows you to connect a client process to the atomspace managed by a server (normally the cogserver). Use cases include deploying an agent on a separate machine, easier debugging of agents, connecting components written in different languages (any language that supports protocol buffers can use a simple message-based wrapper to talk to the server, though implementing the full atomspace OO interface takes considerably more work), and connecting tools that need high-performance access to the atomspace (e.g. the atomspace visualizer). You can connect multiple clients to one server, and the clients can be anywhere in the world as long as the server is reachable via TCP/IP.

Message Exchange Pattern

Currently we use a simple request-reply pattern, with socket type REQ for the client and REP for the server.

Pro:

  • Simple to implement for straightforward use cases.

Con:

  • Strictly synchronous: a REQ client must wait for each reply before it can send the next request, so only one request can be in flight at a time.

TODO: Consider DEALER/ROUTER socket types.

We could replace REQ and REP with DEALER and ROUTER to get the most powerful socket combination: DEALER talking to ROUTER. This gives us asynchronous clients talking to asynchronous servers, where both sides have full control over the message format.

TODO: Research whether the PUB/SUB socket types would be usable, probably in reactive API scenarios, i.e. where an API consumer wants the backing store to call back whenever a Node or Link is added, changed or removed.
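
For reference, the bare REQ/REP round trip looks like the sketch below when written against the plain libzmq C API. This is a minimal sketch only, using plain strings instead of the protobuf messages the real ZMQServer/ZMQClient exchange, and the endpoint is illustrative.

 // Minimal REQ/REP round trip (plain strings, not the protobuf messages
 // used by the real ZMQServer/ZMQClient).
 #include <zmq.h>
 #include <cstdio>
 #include <thread>

 static void run_server(void* ctx)
 {
     void* rep = zmq_socket(ctx, ZMQ_REP);
     zmq_bind(rep, "tcp://*:5555");

     zmq_msg_t request;
     zmq_msg_init(&request);
     zmq_msg_recv(&request, rep, 0);          // block until a request arrives
     printf("server got: %.*s\n", (int)zmq_msg_size(&request),
            (char*)zmq_msg_data(&request));
     zmq_msg_close(&request);

     zmq_send(rep, "reply", 5, 0);            // REP must answer before the next recv
     zmq_close(rep);
 }

 static void run_client(void* ctx)
 {
     void* req = zmq_socket(ctx, ZMQ_REQ);
     zmq_connect(req, "tcp://localhost:5555");

     zmq_send(req, "request", 7, 0);          // REQ sends, then blocks for the reply

     zmq_msg_t reply;
     zmq_msg_init(&reply);
     zmq_msg_recv(&reply, req, 0);
     printf("client got: %.*s\n", (int)zmq_msg_size(&reply),
            (char*)zmq_msg_data(&reply));
     zmq_msg_close(&reply);
     zmq_close(req);
 }

 int main()
 {
     void* ctx = zmq_ctx_new();
     std::thread server(run_server, ctx);
     std::thread client(run_client, ctx);
     server.join();
     client.join();
     zmq_ctx_destroy(ctx);
     return 0;
 }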

Server

 if the server is enabled, run zmqloop
 zmqloop:
   check for a RequestMessage
   switch (function)
     case getAtom:
       call getAtom with the arguments from the RequestMessage
       store the return value in a ReplyMessage
     case getName:
       etc...
   send the ReplyMessage
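
A hedged C++ sketch of what zmqloop boils down to is shown below. The RequestMessage/ReplyMessage class names, the function() field and the getAtom/getName constants are assumptions about what protoc generates from the .proto file; the real dispatch lives in ZMQServer.cc and ProtocolBufferSerializer.cc.

 // Sketch of the server-side dispatch loop. The protobuf message and field
 // names below are assumptions about the generated classes, not the actual schema.
 #include <zmq.h>
 #include <string>
 #include "ZMQMessages.pb.h"   // generated by protoc

 void zmqloop(void* ctx)
 {
     void* rep = zmq_socket(ctx, ZMQ_REP);
     zmq_bind(rep, "tcp://*:5555");

     while (true)
     {
         // Receive one serialized RequestMessage.
         zmq_msg_t msg;
         zmq_msg_init(&msg);
         if (zmq_msg_recv(&msg, rep, 0) < 0) { zmq_msg_close(&msg); break; }

         RequestMessage request;
         request.ParseFromArray(zmq_msg_data(&msg), (int)zmq_msg_size(&msg));
         zmq_msg_close(&msg);

         // Dispatch on the function number and fill in the reply.
         ReplyMessage reply;
         switch (request.function())
         {
         case getAtom:    // assumed enum constant generated from the .proto
             // call getAtom on the server's atomspace with the arguments
             // from the request and serialize the result into the reply
             break;
         case getName:    // assumed enum constant
             // etc...
             break;
         }

         // Send the serialized ReplyMessage back to the client.
         std::string out;
         reply.SerializeToString(&out);
         zmq_send(rep, out.data(), out.size(), 0);
     }
     zmq_close(rep);
 }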

Client

 the async queue works the same as usual, but if the client is enabled, call the server instead of Run:
   copy the ASRequest arguments and function number into a RequestMessage
   send the RequestMessage to the server
   wait for the ReplyMessage
   copy the ReplyMessage back into the ASRequest
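
The client side is the mirror image; a hedged sketch of a single remote call is below. Again, the protobuf message and field names (set_function, set_handle, getAtom) are assumptions about the generated classes, and reqSocket is assumed to be an already-connected ZMQ_REQ socket.

 // Sketch of one client-side remote call. The protobuf message and field
 // names are assumptions about the generated classes.
 #include <zmq.h>
 #include <string>
 #include "ZMQMessages.pb.h"   // generated by protoc

 void remoteGetAtom(void* reqSocket, long handleUuid)
 {
     // Copy the ASRequest arguments and function number into a RequestMessage.
     RequestMessage request;
     request.set_function(getAtom);      // assumed enum constant
     request.set_handle(handleUuid);     // assumed field name

     // Send the RequestMessage to the server.
     std::string out;
     request.SerializeToString(&out);
     zmq_send(reqSocket, out.data(), out.size(), 0);

     // Wait for the ReplyMessage.
     zmq_msg_t msg;
     zmq_msg_init(&msg);
     zmq_msg_recv(&msg, reqSocket, 0);

     ReplyMessage reply;
     reply.ParseFromArray(zmq_msg_data(&msg), (int)zmq_msg_size(&msg));
     zmq_msg_close(&msg);

     // Copy the ReplyMessage back into the ASRequest (omitted here).
 }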

Neo4j Graph Backing Store

ZeroMQ and protobuf serialization are used to communicate with the Neo4j Backing Store (currently a work in progress).

Problem solving

It doesn't matter whether you start the server or the client first, but if the server crashes or is restarted you have to restart the client(s) as well. If you get the error "Address already in use", another instance of the server is already running.

TODO

  • febcorpus.scm doesn't load
    • clean exit of atomspaceasync without using sleep or a busy loop

Send a Ctrl+C signal to the thread and break when a NULL message is received:

 char *reply = zstr_recv(client);
 if (!reply)
   if (exitServerloop)
     logger.info "Serverloop exiting"
     break
   else
     logger.error "Serverloop quit unexpectedly"
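
One possible way to get that clean exit, sketched below with the plain libzmq API (not what the current code does, and zmq_ctx_shutdown needs a reasonably recent libzmq): shut the ZeroMQ context down from the main thread, which makes the blocking receive in the server loop fail with ETERM so the loop can break out cleanly.

 // Sketch: clean shutdown by shutting the context down from another thread.
 #include <zmq.h>
 #include <cerrno>

 void zmqloop(void* ctx)
 {
     void* rep = zmq_socket(ctx, ZMQ_REP);
     zmq_bind(rep, "tcp://*:5555");
     while (true)
     {
         zmq_msg_t msg;
         zmq_msg_init(&msg);
         if (zmq_msg_recv(&msg, rep, 0) < 0)
         {
             zmq_msg_close(&msg);
             if (zmq_errno() == ETERM) break;   // context shut down: exit cleanly
             continue;                          // e.g. EINTR: keep waiting
         }
         // ... dispatch on the request and build the reply as above ...
         zmq_msg_close(&msg);
         zmq_send(rep, "", 0, 0);               // placeholder reply keeps the REP state machine valid
     }
     zmq_close(rep);
 }

 // In the shutting-down thread:
 //   zmq_ctx_shutdown(ctx);   // unblocks zmq_msg_recv with ETERM
 //   serverThread.join();
 //   zmq_ctx_destroy(ctx);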

  • What if dowork throws an exception on the server? Catch exceptions, put them in the reply message and rethrow them at the client.
    • Try to handle communication exceptions locally.