The Vision System
- When I actually got the grant (around January 2006), I realized that
I had to implement a vision system, and I'm a computational linguist.
- However, I did my best to follow the biology, and after a month or so of hacking I had a wire-frame object detection system.
- Dan Diaper helped me through the maths of the retinal system and an implementation of on-off center-surround detectors.
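As a rough functional sketch of what an on-center/off-surround detector computes (not the actual CA implementation; the box-shaped surround and its size are my assumptions, standing in for a difference-of-Gaussians profile):

```python
import numpy as np

def center_surround(image, surround_size=3):
    """On-center/off-surround response at each pixel: a bright center
    on a darker surround excites the unit; a uniform field gives zero.
    A crude box approximation to the difference-of-Gaussians profile."""
    h, w = image.shape
    pad = surround_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + surround_size, x:x + surround_size]
            center = patch[pad, pad]
            # Mean of the surround ring (everything but the center pixel).
            surround = (patch.sum() - center) / (surround_size**2 - 1)
            out[y, x] = center - surround
    return out

# A single bright dot on a dark field: strong positive response at the
# dot, weak negative responses in its surround.
img = np.zeros((5, 5))
img[2, 2] = 1.0
resp = center_surround(img)
```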
- I used a primary visual cortex (V1) for line and angle detectors.
- I then passed the information through to an object recognizer (which we still incorrectly call secondary visual cortex, V2).
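The V1 line detectors can be illustrated with small oriented kernels whose strongest responder labels the local orientation. This is only a functional sketch; the real system uses cell assemblies, and these particular kernels are my assumption:

```python
import numpy as np

# Tiny oriented kernels standing in for V1 line detectors.
KERNELS = {
    "horizontal": np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float),
    "vertical":   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]], dtype=float),
}

def dominant_orientation(patch):
    """Return the name of the kernel responding most strongly to a
    3x3 patch -- a winner-take-all over the detector bank."""
    scores = {name: float((k * patch).sum()) for name, k in KERNELS.items()}
    return max(scores, key=scores.get)

# A horizontal bar of activity drives the horizontal detector hardest.
bar = np.array([[0, 0, 0],
                [1, 1, 1],
                [0, 0, 0]], dtype=float)
```

An angle detector in this scheme would simply combine two line detectors with different preferred orientations meeting at a point.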
- Somewhere in CABot1 development, I had to translate this into
something that could be used to recognize solid shapes.
- I added edge detectors to V1 and futzed around with the V2,
but the basic mechanism stayed the same.
- Again, for CABot2, I didn't change much. I just played around
with the weights to make it more effective.
- Note that the CABot1 and CABot2 vision system really only detects
stalactites and pyramids.
- All of the background is thresholded out of the input.
- These are recognized in a position-variant way that is then fed into position-invariant storage.
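The thresholding and the position-variant to position-invariant step can be sketched as follows (the cutoff value and the use of a global maximum for invariance are my assumptions, not the CABot weights):

```python
import numpy as np

def threshold_input(image, cutoff=0.5):
    """Suppress background: anything below the intensity cutoff is
    zeroed, leaving only the foreground shape."""
    return np.where(image >= cutoff, image, 0.0)

def position_invariant(feature_map):
    """Collapse a position-variant feature map to a single presence
    signal by taking the global maximum -- one cheap route from
    where-something-is to whether-something-is."""
    return float(feature_map.max())

# A dim background with a bright shape in the corner.
scene = np.array([[0.1, 0.2, 0.9],
                  [0.1, 0.8, 0.1]])
fg = threshold_input(scene)
```

The same maximum is produced wherever the shape sits in the input, which is the point of the invariant stage.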
- CABot3 is integrating texture recognition via grating cells.
- This has required some modification of the connection strengths,
but the old systems remain largely unchanged.
- The grating cells are in a separate subnet. (Biologically they cross V1 and V2, but Emma Byrne is doing this, so it's a good engineering decision for now.)
- They take input from the line detectors, send activation to
the specific regions of V2 (roughly saying there is an object