Conclusion
- MLPs work. Use them for machine learning problems of moderate complexity.
- Take Home Points
- Perceptrons take weighted inputs; the weights are the parameters that are learned.
- There are a range of transfer functions, including linear,
sigmoid, and step.
- A single layer of perceptrons cannot learn some simple functions (such as XOR), so we use MLPs.
- MLPs can represent almost any function.
- Using backpropagation, they can learn to approximate such functions from examples, though training can get stuck in local minima.
- Reading for this week: the MLP Wiki.
- Reading for next week: Russell and Norvig, Sections 18.1 through 18.3 (Learning from Examples, pp. 704-718).