Diehl and Cook, and Rybka
- I was trying to get some spiking net STDP learning working because
a lot of project students are interested in doing theses in ML.
- A few years back I ran across "Unsupervised learning of digit recognition
  using spike-timing-dependent plasticity" (Diehl and Cook 2015), and
  said, hey, I should try this out.
- I spent a fair bit of time (80 hours?) at it, but didn't really
  make much progress. It turns out that placing a bunch of neurons in
  four-dimensional space does better than chance.
- I was then at a (virtual) conference where Rybka presented some results. I
  said, send me the code. He did. I unpacked it, then modified it for
  the two tasks below.

- There are inputs. In the Iris case there are four features, so there
  are four input neurons, which are really just spike sources.
- The categorisation neurons are like self-organising map (SOM) nodes.
- When they fire, each turns on its own inhibition neuron, which
  inhibits the rest of the categorisation neurons.
- That's the winner-take-all network.
- The synapses from input to categorisation are plastic.
- So: present an input for a while, have a rest period, then present
  the next, and so forth.
- One of the cool things is that each neuron has a dynamic threshold,
  and the full mechanism automatically normalises activity across the
  layer.