This chapter started by creating a custom representation for the desired behaviors:
The representation of the sequences is designed to be linear. Sequences are triggered reactively, but once started they play out blindly, mostly insensitive to further sensory input.
Genetic operators were defined for both initialization and discovery, capable of generating every possible sequence of acceptable length within reasonable time.
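As a rough illustration of such a linear representation and its operators, here is a minimal Python sketch; the action names, length cap, and mutation rates are assumptions, not the chapter's actual values:

```python
import random

# Hypothetical action alphabet for illustration; the chapter's real
# sequences would use the game's own primitive actions.
ACTIONS = ["forward", "back", "strafe_left", "strafe_right", "jump", "fire"]
MAX_LENGTH = 8  # assumed cap on sequence length

def random_sequence(rng=random):
    """Initialization operator: draw a linear sequence of random length."""
    length = rng.randint(1, MAX_LENGTH)
    return [rng.choice(ACTIONS) for _ in range(length)]

def mutate(sequence, rate=0.2, rng=random):
    """Discovery operator: point mutations plus occasional insert/delete,
    so repeated application can reach any sequence up to MAX_LENGTH."""
    result = [rng.choice(ACTIONS) if rng.random() < rate else a
              for a in sequence]
    roll = rng.random()
    if roll < 0.1 and len(result) < MAX_LENGTH:
        result.insert(rng.randrange(len(result) + 1), rng.choice(ACTIONS))
    elif roll < 0.2 and len(result) > 1:
        del result[rng.randrange(len(result))]
    return result
```

Because mutation can change, insert, and delete single actions, any sequence within the length bound is reachable from any other, which is the coverage property claimed above.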
After that, we moved on to ways to optimize the behaviors using genetic algorithms:
The evolutionary process is extremely simple, selecting the fittest and discarding the weakest.
The module is designed to be plugged straight into any other component that can be evolved. This is done by defining an evolvable interface that provides phenotype-specific evolutionary operations.
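The combination of a simple select-the-fittest loop and a phenotype-agnostic interface could be sketched as follows; the interface shape, elitism ratio, and generation count are assumptions for illustration:

```python
import random
from typing import Protocol

class Evolvable(Protocol):
    """Assumed shape of the evolvable interface: each phenotype supplies
    its own representation-specific evolutionary operations."""
    def mutate(self) -> "Evolvable": ...

def evolve(population, fitness, generations=50, elite=0.5, rng=random):
    """Minimal sketch of the simple scheme described above: keep the
    fittest fraction, discard the weakest, refill by mutating survivors."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: max(1, int(len(population) * elite))]
        children = [rng.choice(survivors).mutate()
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)
```

Note that `evolve` only touches the population through `fitness` and `mutate`, which is what lets the module plug into any component that implements the interface.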
Back into the practical world of in-game behaviors, we talked about dodging fire and rocket jumps:
Two fitness functions were defined, and we made sure they had few loopholes for the genetic algorithm to find. We also attempted to make the values as continuous as possible, to guide the evolutionary process.
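To make the idea of a continuous, loophole-resistant fitness concrete, here is a hypothetical dodging fitness in Python; the terms and weights are invented for illustration and are not the chapter's actual functions:

```python
import math

def dodge_fitness(hits_taken, distance_moved, time_alive, max_time=10.0):
    """Hypothetical continuous fitness for dodging fire: reward survival
    and mobility, and penalize each hit smoothly rather than with a
    pass/fail score, so partial solutions still receive a gradient."""
    survival = time_alive / max_time            # normalized to [0, 1]
    mobility = 1.0 - math.exp(-distance_moved)  # saturating movement reward
    damage = 1.0 / (1.0 + hits_taken)           # smooth per-hit penalty
    return survival * damage + 0.25 * mobility
```

The smooth penalty matters: a binary "was hit / was not hit" score would leave the genetic algorithm with nothing to climb, while a graded one rewards taking two hits over taking three.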
The application phase discussed implementation details (for instance, sorting the array, using callbacks, and guaranteeing the sequence finishes), and pointed out the simplicity of the testing phase.
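Those implementation details can be sketched together in one small class; the names and timing scheme are assumptions, but the three ideas (a pre-sorted action array, callback-driven execution, and a guaranteed completion notification) are the ones mentioned above:

```python
class SequencePlayer:
    """Plays a sequence of (time, action) pairs. The array is sorted once
    up front, actions fire through a callback, and the completion callback
    always runs once the final action has executed."""
    def __init__(self, timed_actions, on_finished=None):
        # Sorting by timestamp makes playback a single linear scan.
        self.actions = sorted(timed_actions, key=lambda ta: ta[0])
        self.on_finished = on_finished
        self.index = 0
        self.clock = 0.0

    def update(self, dt, execute):
        """Advance the clock; fire every action whose time has come."""
        if self.index >= len(self.actions):
            return
        self.clock += dt
        while (self.index < len(self.actions)
               and self.actions[self.index][0] <= self.clock):
            execute(self.actions[self.index][1])
            self.index += 1
        if self.index >= len(self.actions) and self.on_finished:
            self.on_finished()
```

Driving such a player from a fixed-step game loop is enough for testing: feed it a sequence, step the clock, and check the actions fired in order and the finish callback ran exactly once.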
The resulting behaviors are relatively close to what human players would do, despite not being perfect. The dodging behavior is useful in practice when triggered at the right moment, but rocket jumping is used just to show off! We need a higher-level AI to decide when to use it.
The technology used in this chapter expresses sequences of actions very well, but fails to provide other functionality we take for granted in expert systems. Part VI will look into a way of expressing arbitrary sequences with rules, resulting in finite state machines. These are more powerful, but less tailored to the problem at hand.