Chapter 26. Classification and Regression Trees
Regression and classification trees—known collectively as decision trees (DTs)—are data structures that capture patterns in data so those patterns can be recognized later. A tree is organized as a hierarchy of decisions over individual attributes that together predict a result value. With classification trees, the result is a symbolic class, whereas regression trees return continuous values.
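To make the idea concrete, here is a minimal sketch of such a tree evaluated by hand. The attribute names, thresholds, and leaf results are invented for illustration; the point is the shared structure: internal nodes test one attribute each, and the leaves hold either a symbolic class (classification) or a continuous value (regression).

```python
class Node:
    """Internal node: tests one attribute and branches on the result."""
    def __init__(self, attribute, threshold, low, high):
        self.attribute = attribute    # which attribute to test
        self.threshold = threshold    # split point for the test
        self.low = low                # subtree taken when value <= threshold
        self.high = high              # subtree taken when value > threshold

    def predict(self, sample):
        branch = self.low if sample[self.attribute] <= self.threshold else self.high
        return branch.predict(sample)

class Leaf:
    """Leaf: returns the result, a class label or a continuous value."""
    def __init__(self, result):
        self.result = result

    def predict(self, sample):
        return self.result

# Classification tree: leaves hold symbolic classes.
classifier = Node("temperature", 30.0,
                  low=Leaf("comfortable"),
                  high=Leaf("hot"))

# Regression tree: identical structure, but leaves hold continuous values.
regressor = Node("temperature", 30.0,
                 low=Leaf(0.2),
                 high=Leaf(0.9))

print(classifier.predict({"temperature": 25.0}))  # comfortable
print(regressor.predict({"temperature": 35.0}))   # 0.9
```

In a learned tree the attributes, thresholds, and leaf values would come from the training algorithm rather than being written by hand, but the prediction procedure is exactly this recursive descent.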
Decision trees must be learned from example data, so that data has to be created or gathered beforehand. An expert can prepare it by hand, or a collection of facts can be accumulated from the problem itself. Many tasks, including simulating intelligence, can be seen as interpreting—or classifying—such data. In practice, each problem can be encoded as a set of attributes, and the DT predicts an unknown attribute (the solution).
This chapter discusses the following topics:
These techniques can serve as the decision-making component of game characters. Each situation is represented as a set of attributes, so the DT can suggest the best course of action for that situation. It's also possible to use regression trees to evaluate the benefit of an object, or to predict an outcome, whether positive or negative. The next chapter applies this in the context of weapon selection.
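As a hypothetical sketch of this use, a classification tree's decisions over situation attributes can be flattened into nested tests whose predicted class is an action. The attribute names (`health`, `enemy_distance`), thresholds, and actions here are all invented, not taken from the text:

```python
def choose_action(situation):
    """A small decision tree acting as a character's decision-maker.

    Each test examines one attribute of the situation; the leaves
    are the suggested actions (symbolic classes).
    """
    if situation["health"] <= 25:
        return "retreat"          # low health dominates other concerns
    if situation["enemy_distance"] <= 10:
        return "attack"           # healthy and the enemy is close
    return "patrol"               # healthy, no enemy nearby

print(choose_action({"health": 80, "enemy_distance": 5}))   # attack
print(choose_action({"health": 10, "enemy_distance": 5}))   # retreat
```

In practice such a tree would be learned from logged examples of situations paired with good actions, rather than written out as code.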