Why deep nets are not a panacea
- Firstly, deep nets are only going to pay off for really big problems.
For smaller problems, their capacity far exceeds the data, so they
overfit and generalise poorly.
- Secondly, you need a lot of training data.
- Thirdly, you need to spend some time with the problem to design
an appropriate deep architecture.
- Finally, they only work well for mapping a single set of inputs to an
output, or relatively short sequences of inputs.
- (There is some work on short-term memory buffers to deal with this, but
it seems pretty tentative at the moment.)
- I was at a neuromorphic workshop a couple of years ago, and someone
asked "why do we even bother with neural models when deep nets have
solved all the problems?"
- Aside from all of the above problems, I think there's a whole bunch
of things that brains do that deep nets don't. For example, deep nets
don't deal well with time.
- Other examples include handling many functions at once, operating
over long lifetimes, and working in a largely open domain.
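The first point above — a high-capacity model generalising poorly from a small dataset — can be sketched with a toy example. This uses polynomial regression as a stand-in for a deep net (the function, noise level, and polynomial degree are all hypothetical choices for illustration, not anything from a real experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: 8 noisy samples of a simple underlying function.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.1, 8)

# Held-out points from the same underlying function.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

# A degree-7 polynomial (high capacity relative to 8 points) can fit
# the training data almost exactly, noise included...
coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# ...so training error is tiny, but error off the training points is
# much larger: the model has memorised the data rather than generalised.
print(f"train MSE: {train_err:.2e}, test MSE: {test_err:.2e}")
```

The same capacity-versus-data tension is what the first bullet points at: with a big enough dataset the capacity is an asset, but on a small problem it mostly gets spent fitting noise.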