Fatiguing Leaky Integrate and Fire Neurons
- I've spent most of my neural modelling time working with fatiguing Leaky
Integrate and Fire (fLIF) neurons.
- The LIF model is quite common.
- I model it via
- Integrate
- Fire
- Leak
- Fatigue is relatively novel in neural models.
- I model it with two constants, one for firing and one for not firing.
- A neuron increases its fatigue level by the first constant when it
fires, and decreases it by the second (down to 0) when it doesn't.
- The fatigue is then added to the threshold.
- So, the more a neuron fires, the harder it becomes for it to fire again.
- The idea is that as fatigue grows, the cell assembly (CA) becomes
harder to support and automatically shuts down.
- In practice, this simple model does not seem to shut CAs down at
reasonable times.
- We also simulate in discrete time cycles, each mapped to 10 ms of
real biological time.
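The integrate/fire/leak cycle with fatigue described above can be sketched in code. This is a minimal illustration, not the author's actual implementation: the parameter values, the divisive leak, and the reset-to-zero after firing are all assumptions; the fatigue bookkeeping (add one constant on firing, subtract the other down to 0, add fatigue to the threshold) follows the text. Each `step()` corresponds to one discrete cycle, i.e. 10 ms of biological time.

```python
class FLIFNeuron:
    """Sketch of a fatiguing Leaky Integrate and Fire (fLIF) neuron."""

    def __init__(self, threshold=4.0, decay=2.0,
                 fatigue_up=0.5, fatigue_down=0.2):
        self.threshold = threshold        # base firing threshold (assumed value)
        self.decay = decay                # leak divisor per cycle (assumed form)
        self.fatigue_up = fatigue_up      # fatigue gained when the neuron fires
        self.fatigue_down = fatigue_down  # fatigue recovered when it doesn't
        self.activation = 0.0
        self.fatigue = 0.0

    def step(self, input_activation):
        """Run one discrete cycle (~10 ms); return True if the neuron fires."""
        # Integrate incoming activation.
        self.activation += input_activation
        # Fire if activation reaches the threshold plus accumulated fatigue,
        # so a fatigued neuron is harder to fire.
        if self.activation >= self.threshold + self.fatigue:
            self.activation = 0.0             # reset after firing (assumption)
            self.fatigue += self.fatigue_up   # firing increases fatigue
            return True
        # Leak: residual activation decays toward zero.
        self.activation /= self.decay
        # Not firing lets fatigue recover, floored at 0.
        self.fatigue = max(0.0, self.fatigue - self.fatigue_down)
        return False
```

Driving such a neuron with constant input shows the intended behaviour: it fires readily at first, then fatigue raises the effective threshold and firing becomes intermittent; once input stops, fatigue decays back to 0.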