The neural model we have been focusing on for the past several years
is the fatiguing Leaky Integrate and Fire (fLIF) model.
It is a relatively simple model that is efficient to simulate,
and far less complex than a compartmental model.
Like a Hopfield neuron, it is an integrate and fire (IF) model: it
integrates activity passed to it along synapses from other neurons,
and fires when that activity passes a threshold. (That is, it's a
spiking model.)
There has been a lot of work with leaky integrate and fire (LIF)
models (e.g. by Maass), where a neuron's accumulated activity leaks
away over time if it does not fire, so stale inputs gradually lose
their influence on when it fires.
There has been much less work with fatigue: the longer a neuron keeps
firing, the more difficult it becomes for it to fire. We model this by
increasing the threshold each time a neuron fires, and reducing it back
toward its resting value when it does not.
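The leak and fatigue mechanics described above can be sketched as a short simulation. This is a minimal illustration, not the authors' actual implementation; all parameter names and values (decay rate, base threshold, fatigue and recovery steps) are assumptions chosen for clarity.

```python
def simulate_flif(inputs, decay=0.9, base_threshold=4.0,
                  fatigue_step=1.0, recovery_step=0.5):
    """Simulate one fatiguing LIF neuron over a sequence of inputs.

    decay          -- fraction of activity retained each step (the leak)
    base_threshold -- resting firing threshold
    fatigue_step   -- threshold increase after each spike (fatigue)
    recovery_step  -- threshold decrease per silent step, floored at base
    """
    activity = 0.0
    threshold = base_threshold
    spikes = []
    for x in inputs:
        activity = activity * decay + x       # integrate, with leak
        if activity >= threshold:
            spikes.append(1)
            activity = 0.0                    # reset after firing
            threshold += fatigue_step         # fatigue: harder to fire next
        else:
            spikes.append(0)
            # recover: threshold drifts back down toward its resting value
            threshold = max(base_threshold, threshold - recovery_step)
    return spikes

# A constant input that is subthreshold on its own, but accumulates:
# the neuron fires intermittently as leak, fatigue, and recovery interact.
print(simulate_flif([3.0] * 6))
```

With these illustrative parameters, a constant input of 3.0 never fires the neuron in a single step (3.0 < 4.0), but two integrated steps do; each spike then raises the threshold, so firing alternates with silent recovery steps.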