Discussion
- There are a lot of other things that could be done with the
document categorisation, including more tests, comparisons with
other algorithms on this data, using whole documents as inputs (as
opposed to combining words), and using more advanced topologies.
- There seems to be a lot of discussion about using neuromorphic
systems for learning.
- It can be more energy efficient.
  - If you give up biologically plausible learning, you can get
    pretty close to backpropagation (Diehl has a paper on this).
- You can make use of a lot of parallelism.
- We don't really know how the brain learns the things it does.
- Note that the brain does not use the feedforward mechanism we've
presented here.
- Perhaps more importantly, the brain does not solve particular problems
in isolation. The neurons talk to each other and are firing all the
time.
- The machine learning community seems focused on feed forward topologies.
- I think we can get some more leverage out of these.
- Also note that there are some nice properties of spiking neurons
that may lead (eventually) to better systems.
  - The actual timing matters, and provides the system with a lot of
    information. A neuron that spikes at 20 ms, 35 ms, and 42 ms is carrying
    different information than one that spikes at 20 ms, 28 ms, and 42 ms.
    (I don't have particularly solid ideas about how to use that.)
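A minimal sketch of the timing point above: the two example trains share their first and last spike times, yet their inter-spike intervals differ, so a decoder that reads intervals (rather than just spike counts) can tell them apart. The function name is ours, just for illustration.

```python
def intervals(spike_times_ms):
    """Inter-spike intervals of a sorted list of spike times (ms)."""
    return [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]

# The two trains from the text: same endpoints, same spike count.
train_a = [20, 35, 42]
train_b = [20, 28, 42]

print(intervals(train_a))  # [15, 7]
print(intervals(train_b))  # [8, 14]
```

Both trains have three spikes in the same window, so any rate-based readout sees them as identical; only the temporal structure distinguishes them.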
- Also, our mechanism of one spike per input is not standard. Many
people use multiple spikes to represent the value of the input instead
of multiple neurons.
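To make the contrast concrete, here is a toy sketch of the two encodings (the function names and parameters are ours, not from any particular system): our one-spike-per-input scheme uses the input value to pick *which* of several neurons fires once, whereas the multi-spike alternative uses the value to set *how many* spikes a single neuron emits in a time window.

```python
def place_code(value, n_neurons, v_max=1.0):
    """One spike per input: the value selects which neuron fires once."""
    idx = min(int(value / v_max * n_neurons), n_neurons - 1)
    return [1 if i == idx else 0 for i in range(n_neurons)]

def rate_code(value, max_spikes, v_max=1.0):
    """Multi-spike alternative: spike count on one neuron tracks the value."""
    return round(value / v_max * max_spikes)

print(place_code(0.3, 5))   # [0, 1, 0, 0, 0] -- one spike, neuron 1
print(rate_code(0.3, 10))   # 3 -- one neuron, three spikes
```

The trade-off is roughly resolution versus hardware: the place code needs more neurons but only one spike each, while the rate code needs one neuron but more spikes (and more time) per input.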