Fatiguing Leaky Integrate and Fire Neurons
- Almost all of my neural simulations have been based on
fatiguing Leaky Integrate and Fire Neurons.
- It's a pretty good model of biological neural behaviour,
while also being fast to simulate.
- There is a lot of work being done by others with Leaky
Integrate and Fire Neurons.
- Neurons collect activation from the other neurons that
connect to them. That's the Integrate.
- If a neuron collects enough activation (surpassing a threshold), it
fires, sending activation to the neurons it is connected to. (IF)
- If a neuron fires it loses all its activity.
- If it doesn't fire, some of the activity leaks away. (LIF)
- I simulate in discrete cycles so activity at t is activity at t-1
divided by a leak factor.
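The update rule above can be sketched in a few lines of Python. This is a minimal illustration under my own assumed names and parameter values (LEAK_FACTOR, THRESHOLD are not from the original):

```python
# One discrete LIF cycle, as described above: leak the previous
# activity, integrate incoming activation, fire if over threshold.
LEAK_FACTOR = 1.12   # assumed value: activity at t = activity at t-1 / leak
THRESHOLD = 1.0      # assumed firing threshold

def lif_step(activity, incoming):
    """Return (new_activity, fired) for one simulation cycle."""
    activity = activity / LEAK_FACTOR + incoming
    if activity >= THRESHOLD:
        return 0.0, True    # fired: the neuron loses all its activity
    return activity, False  # did not fire: remaining activity leaks next cycle

a, fired = lif_step(0.8, 0.5)   # 0.8/1.12 + 0.5 surpasses 1.0, so it fires
```

A full simulation would run this per neuron per cycle, summing `incoming` over the connections into each neuron.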
- I'm not familiar with many people simulating fatigue in neurons.
- The idea is that a neuron gets tired.
- I typically model this with a fatigue value that raises the
activation threshold each time a neuron fires; the fatigue decays
when the neuron doesn't fire.
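A sketch of that fatigue mechanism, with assumed constants (FATIGUE_STEP, RECOVERY, and the base threshold are illustrative values, not from the original):

```python
FATIGUE_STEP = 0.25   # assumed: how much each spike raises fatigue
RECOVERY = 0.1        # assumed: fatigue decay per silent cycle
BASE_THRESHOLD = 1.0  # assumed resting threshold

def update_fatigue(fatigue, fired):
    """Raise fatigue when the neuron fires, let it recover otherwise."""
    if fired:
        return fatigue + FATIGUE_STEP
    return max(0.0, fatigue - RECOVERY)

def effective_threshold(fatigue):
    """The threshold the neuron must actually surpass to fire."""
    return BASE_THRESHOLD + fatigue
```

So a neuron that fires on several consecutive cycles needs progressively more activation to keep firing; after a quiet stretch its threshold returns to the base value.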
- Neurons are inhibitory or excitatory but not both.
- Topology. Uni-directional sparse connections.
- I typically use distance-biased connectivity but have used
random connectivity more recently to get a handle on the math.
Some Inaccuracies
- Discrete time steps rather than continuous dynamics
- No synaptic delay
- No refractory periods