Hardware
- Steve Furber came down to London to give a talk on
hardware.
- I've thought about this a bit, before and after.
- As you all might be interested, here are some of
my thoughts.
- Synchronous vs. Asynchronous: I hope there isn't a
real difference. The attractor net research seems to
indicate this, and it fits with my intuitions. (The
first sketch after these notes illustrates the point
with a toy attractor net.)
- Axonal propagation delays: This is tricky. However, the
key point about CAs is that they remain active, so exact
spike arrival times should matter less. Keeping them
active may be wasted computation, but something can
probably be worked out. (The second sketch shows one
standard way to buffer delays.)
- Learning: you need it!
- It doesn't have to be on-chip,
but some space for co-firing information needs to be.
- Having the calculation off-chip (definitely off-neuron)
shouldn't be a problem, and may even be an advantage.
- The learning that the CANT simulations currently do
is applied every cycle. It might be improved by
combining, say, 1000 cycles into one update. (The third
sketch gives the flavour.)
- This could be a good place to deal with the
plasticity-stability dilemma.
- Use: currently, I don't need a billion neurons.
Problems like NLP or cognitive architectures probably
will.
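
First sketch: synchronous vs. asynchronous. A minimal toy, using
a Hopfield-style attractor net as a stand-in (not the CANT model;
the size and noise level are invented). Starting from a corrupted
pattern, a fully clocked update and a one-unit-at-a-time update
both settle into the same stored attractor.

  import numpy as np

  rng = np.random.default_rng(0)

  # Store one random +/-1 pattern with a Hebbian outer-product rule.
  n = 50
  pattern = rng.choice([-1, 1], size=n)
  W = np.outer(pattern, pattern).astype(float)
  np.fill_diagonal(W, 0.0)

  def sync_update(state, steps=10):
      """Update every unit at once from the previous state (clocked)."""
      for _ in range(steps):
          state = np.sign(W @ state)
          state[state == 0] = 1
      return state

  def async_update(state, steps=10):
      """Update one randomly chosen unit at a time (unclocked)."""
      state = state.copy()
      for _ in range(steps * n):
          i = rng.integers(n)
          state[i] = 1 if W[i] @ state >= 0 else -1
      return state

  # Start from the stored pattern with 10 of the 50 bits flipped.
  noisy = pattern.copy()
  noisy[rng.choice(n, size=10, replace=False)] *= -1

  print("sync  overlap:", int(sync_update(noisy) @ pattern))   # 50 = perfect recall
  print("async overlap:", int(async_update(noisy) @ pattern))  # 50 = perfect recall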
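
Second sketch: axonal propagation delays. One standard trick in
spiking simulators and neuromorphic hardware is to schedule each
spike into a small circular buffer indexed by its arrival time
slot. The class name, buffer depth and weight below are invented
for illustration; this says nothing about what the CANT model
itself needs.

  from collections import defaultdict

  MAX_DELAY = 16                     # buffer depth, in time steps

  class DelayBuffer:
      def __init__(self):
          # slot t % MAX_DELAY holds the input due to arrive at step t
          self.slots = [defaultdict(float) for _ in range(MAX_DELAY)]

      def schedule(self, target, weight, delay, now):
          """Queue a weighted spike to arrive `delay` steps after `now`."""
          assert 0 < delay < MAX_DELAY
          self.slots[(now + delay) % MAX_DELAY][target] += weight

      def deliver(self, now):
          """Pop all input that arrives at step `now`."""
          slot = self.slots[now % MAX_DELAY]
          self.slots[now % MAX_DELAY] = defaultdict(float)
          return slot

  # Toy use: a spike fired at step 0 reaches neuron 1 three steps later.
  buf = DelayBuffer()
  buf.schedule(target=1, weight=0.5, delay=3, now=0)
  for t in range(5):
      arriving = buf.deliver(t)
      if arriving:
          print(f"step {t}: input {dict(arriving)}")   # step 3: {1: 0.5}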
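
Third sketch: batched learning. A hedged sketch of "keep co-firing
counts near the neurons, do the weight arithmetic elsewhere and only
once per block of cycles". The firing model and the correlational
update rule are placeholders, not the CANT rule; the point is just
the split between cheap per-cycle bookkeeping and an occasional
batched weight calculation, which is also where plasticity could be
slowed down for stability.

  import numpy as np

  rng = np.random.default_rng(1)

  n = 100
  BLOCK = 1000                    # cycles between weight updates
  RATE = 0.01                     # learning rate for the batched update

  weights = np.zeros((n, n))
  cofire = np.zeros((n, n), dtype=np.int32)   # "on-chip": co-firing counts
  fired_count = np.zeros(n, dtype=np.int32)

  for cycle in range(5 * BLOCK):
      # Stand-in for one simulation cycle: which neurons fired this step.
      fired = rng.random(n) < 0.1
      cofire += np.outer(fired, fired)        # cheap per-cycle bookkeeping
      fired_count += fired

      if (cycle + 1) % BLOCK == 0:
          # "Off-chip" (certainly off-neuron) calculation over the block:
          # a simple correlational update from the accumulated counts.
          p_joint = cofire / BLOCK
          p_fire = fired_count / BLOCK
          weights += RATE * (p_joint - np.outer(p_fire, p_fire))
          np.fill_diagonal(weights, 0.0)
          cofire[:] = 0
          fired_count[:] = 0

  print("mean |weight| after 5 blocks:", float(np.abs(weights).mean()))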