LCSs are particularly good at estimating the benefit of actions, which also allows them to exploit the best actions. Because the classifiers are rewarded for their accuracy, the knowledge of the world modeled by an LCS is generally representative of its true nature.
The basic representation of classifiers is relatively easy to understand, being almost as expressive as rule-based systems. Sadly, the binary and ternary values aren't especially friendly to read. Although they may be very efficient, binary sensors are somewhat clumsy to deal with; given heterogeneous data from the interfaces (for instance, floats, integers, arrays), we need a lot of "serialization" to convert them to binary values.
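To make the "serialization" issue concrete, here is a minimal sketch of how heterogeneous sensor values might be quantized into the binary messages a classifier matches against, with the ternary # standing for "don't care." The function names, bit widths, and sensor layout are all illustrative assumptions, not part of any particular LCS implementation.

```python
def encode_float(value, lo, hi, bits=4):
    """Quantize a float in [lo, hi] into a fixed-width binary string."""
    level = int((value - lo) / (hi - lo) * (2**bits - 1))
    level = max(0, min(2**bits - 1, level))  # clamp out-of-range readings
    return format(level, f"0{bits}b")

def encode_int(value, bits=4):
    """Truncate an integer to a fixed-width binary string."""
    return format(value & (2**bits - 1), f"0{bits}b")

def matches(condition, message):
    """Ternary match: '0' and '1' must agree; '#' matches anything."""
    return all(c == "#" or c == m for c, m in zip(condition, message))

# Hypothetical sensors: health as a float in [0, 100], ammo as an integer.
message = encode_float(62.5, 0.0, 100.0) + encode_int(5)   # "10010101"
rule = "10######"   # fires for any upper-middle health, regardless of ammo
print(matches(rule, message))                              # True
```

Note how even this tiny example needs a quantization step per sensor, which is exactly the clumsiness the text describes: every float or integer must be squeezed through an encoder before the classifiers can see it.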
As for game development, LCSs can be applied in many situations. They are primarily intended to provide adaptation in online simulations, although offline problems are safer choices. Control problems are the most popular application, but the technology can also be applied to optimizing a set of rules used for problem solving instead.
Adapting the representation to other categorical data types is relatively straightforward. It's just a matter of using symbols rather than the binary values, but retaining the "don't care" symbol. By providing a hierarchy of symbols (for instance, Marvin is-a player), the LCS can learn precise generalizations. On the other hand, continuous values are not a trivial extension of the same principle, because generalization would prove less straightforward.
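The symbolic extension with a hierarchy can be sketched briefly. This is a minimal illustration, assuming a hand-built is-a table and a tuple-based condition format; only the Marvin is-a player example comes from the text, everything else is hypothetical.

```python
# Illustrative is-a hierarchy, mapping each symbol to its parent category.
ISA = {"marvin": "player", "player": "entity"}

def is_a(symbol, category):
    """True if symbol equals category or is a descendant of it."""
    while symbol is not None:
        if symbol == category:
            return True
        symbol = ISA.get(symbol)  # walk up the hierarchy
    return False

def matches(condition, observation):
    """Each slot is '#' (don't care) or a category covering the observed symbol."""
    return all(c == "#" or is_a(o, c) for c, o in zip(condition, observation))

# A condition that generalizes over any player seen at close range.
condition = ("player", "close", "#")
print(matches(condition, ("marvin", "close", "armed")))   # True
print(matches(condition, ("turret", "close", "armed")))   # False
```

The "don't care" symbol survives unchanged, while the hierarchy lets a single rule cover Marvin and every other player, which is the kind of precise generalization the paragraph describes.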
This means LCSs, or slightly adapted versions of them, can theoretically be applied to almost any game AI task, ranging from obstacle avoidance to learning to assess weapons, including all the reinforcement learning problems we'll discuss in Part VII. All we really need is a form of reward signal or a fitness value, and the creativity of genetic algorithms will find an appropriate set of rules. In practice, however, other techniques may be more appropriate.