Spatial Cognitive Mapping in CABot3
Our current project is the Cell Assembly Robot (CABot), and the version that is currently under construction is CABot3.
The aim of the CABots is to have a complete agent based entirely on simulated neurons.
CABot3 still has some symbolic hooks (hopefully all gone by Oct 31).
The agents take natural language commands from a user ("Explore." in this case), view the environment, plan, and act; we have now integrated the cognitive mapping into CABot3.
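As a rough illustration of this perceive-learn-act cycle, the sketch below uses hypothetical names throughout (CABot3 implements these stages entirely in simulated neurons, not symbolic Python code):

```python
# Hypothetical sketch of one agent cycle: observe the current room,
# record it in the cognitive map, and choose an action for the command.
# All names are illustrative; they are not CABot3's actual components.

class CognitiveMap:
    """Toy stand-in for the learned map: rooms in visit order."""
    def __init__(self):
        self.rooms = []

    def update(self, room):
        # Record each newly visited room while the agent explores.
        if room not in self.rooms:
            self.rooms.append(room)

def agent_step(command, current_room, cmap):
    """One perception-learn-act cycle."""
    cmap.update(current_room)            # learn the map as a side effect
    if command == "Explore.":
        return "move-to-next-corridor"   # keep exploring
    return "stop"
```

A run with the "Explore." command visits a room, adds it to the map, and returns the next exploratory action.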
The rooms are laid out as in the above experiment, but with corridors instead of doors, and each room is uniquely identified by a textured shape.
The test commands (e.g. "Go to the room before the room with the striped pyramid.") are not yet implemented, but the agent can learn the map.
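To show what resolving such a command involves, here is a minimal sketch that treats the learned map as an ordered list of rooms labelled by their distinguishing shapes (a deliberate simplification; CABot3 holds the map in cell assemblies, and the function name and representation here are hypothetical):

```python
# Resolve "the room before the room with <landmark>" against a learned
# path, represented here as the list of room labels in visit order.

def room_before(visited_rooms, landmark):
    """Return the room visited just before the room with the landmark."""
    idx = visited_rooms.index(landmark)  # raises ValueError if unseen
    if idx == 0:
        return None  # the landmark room was the first room visited
    return visited_rooms[idx - 1]
```

For example, on the learned path `["striped cube", "striped pyramid"]`, the room before the striped pyramid resolves to the striped-cube room.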
While this particular version is limited, having the agent in an open-ended game provides us with scope to expand.