Conclusion
- These are often useful techniques, but they don't work for
every game.
- The search space for Go is so big, and the evaluation function
so poorly understood, that Alpha-Beta doesn't work. That was the
case when I wrote this slide in 2015. Since then, deep learning
(AlphaGo) has learned a good enough evaluation function to win.
- Take Home Points
- Adversarial games can be represented by a game tree.
- Minimax is a good way to play these.
- In all but the simplest games, you need a heuristic
evaluation function.
- Minimax can be improved (without loss) by alpha-beta pruning
and iterative deepening.
- Reading for this week: Russell and Norvig's Adversarial Search
chapter through section 4 (pp. 164-179).
- Reading for next week: skim Janet Kolodner's Case-Based
Reasoning and read the Case-Based Reasoning Wiki.