Multi-Layer Perceptrons
- Multi-Layer Perceptrons (MLPs) trained with backpropagation are a
very popular learning method.
- The perceptron is a simple processing element: it takes several
inputs, forms a weighted sum, and thresholds it to produce a single
output (see the first sketch after this list).
- A single layer of these units, all sharing the same inputs, can only
represent linearly separable functions, so some tasks are beyond it,
XOR being the classic example.
- If there are multiple layers, with the outputs of the first layer
becoming the inputs to the second layer, the network can approximate
most interesting functions (the second sketch after this list computes
XOR this way).
- MLPs are universal approximators: with enough hidden units, even a
single hidden layer can approximate any continuous function on a
compact domain to arbitrary accuracy.
- Using the backpropagation algorithm, MLPs can learn such functions
from sample input-output pairs (see the third sketch after this list).
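
To make the perceptron concrete, here is a minimal Python sketch. The
function name and the hand-chosen weights and bias are illustrative,
not from the original notes; with these values the unit happens to
compute logical AND.

    def perceptron(inputs, weights, bias):
        # Weighted sum of the inputs plus a bias, passed through a
        # hard threshold: fire (1) if the sum is positive, else 0.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total + bias > 0 else 0

    # With these hand-chosen parameters the unit computes logical AND.
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, perceptron(x, weights=(1.0, 1.0), bias=-1.5))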
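The XOR limitation, and the fix that a second layer provides, can be
shown with hand-set weights (again an illustrative sketch, not a
construction from the notes). One hidden unit computes OR of the
inputs, the other AND, and the output unit fires when OR is true but
AND is not, which is exactly XOR, a function no single layer of
threshold units can compute.

    def step(t):
        # Hard threshold activation.
        return 1 if t > 0 else 0

    def two_layer_xor(x1, x2):
        # Hidden layer: one unit computes OR, the other AND.
        h_or = step(1.0 * x1 + 1.0 * x2 - 0.5)
        h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)
        # Output unit: OR but not AND, i.e. XOR.
        return step(1.0 * h_or - 2.0 * h_and - 0.5)

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, two_layer_xor(*x))   # prints 0, 1, 1, 0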
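Finally, a minimal backpropagation sketch, assuming NumPy is available.
The architecture (2 inputs, 4 sigmoid hidden units, 1 output), learning
rate, and epoch count are illustrative choices; the network learns XOR
from its four input-output pairs by gradient descent on the squared
error.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialised weights: 2 inputs -> 4 hidden -> 1 output.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    lr = 1.0  # learning rate (illustrative)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(10000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)    # hidden activations
        out = sigmoid(h @ W2 + b2)  # network outputs

        # Backward pass: chain rule on the squared error.
        d_out = (out - y) * out * (1 - out)  # error signal at the output
        d_h = (d_out @ W2.T) * h * (1 - h)   # error pushed back to hidden

        # Gradient-descent weight updates.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should be close to [[0], [1], [1], [0]]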