Adaptive AI can be difficult to deal with because it may not learn, it can be hard to control, it can look unrealistic, and it may be suboptimal. The first set of solutions comes from software-engineering methodologies:
Validate the implementations of the AI techniques to rule out implementation bugs.
Provide visual debugging aids and standard tools to express AI data in a convenient fashion.
Guarantee reproducibility by using a seeded random-number generator, or by logging all the data.
Use randomized seeds at each execution to test the AI with a variety of different parameters.
For each AI technique, implement tools that can test learning performance (for instance, cross-validation).
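The reproducibility advice above can be sketched in a few lines. This is a minimal illustration, not code from the text: the function name `run_training_episode` and its pseudo-random "decisions" are hypothetical stand-ins for a real learning run, but the principle is the one stated, namely that a run driven entirely by one seeded generator can be replayed exactly from its seed, while fresh seeds exercise the AI under varied conditions.

```python
import random

def run_training_episode(seed):
    """Hypothetical training episode driven entirely by one seeded RNG,
    so a run can be replayed exactly from its seed alone."""
    rng = random.Random(seed)
    # Stand-in for learned behavior: a trace of pseudo-random "decisions".
    return [rng.randint(0, 9) for _ in range(5)]

# The same seed reproduces the same trace, which makes bugs repeatable...
assert run_training_episode(42) == run_training_episode(42)

# ...while randomized seeds at each execution test the AI with a
# variety of different parameters.
varied_traces = {tuple(run_training_episode(s)) for s in range(10)}
```

Logging all inputs achieves the same guarantee when a single seeded generator is impractical, for example when player input drives the simulation.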
When it comes to modeling the system, some designs are better suited to adaptive AI:
Keep the learning localized, splitting it into components when possible.
Reduce the size of search spaces and keep them as simple as possible.
Use fitness and reward functions that are as smooth as possible to provide hints to the adaptation.
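The last point about smooth reward functions can be made concrete with a small sketch (the function names and the distance-based task are illustrative assumptions, not from the text): a binary reward gives the learner no gradient until the goal is reached, while a smoothly decaying reward ranks closer states higher at every step, hinting at progress.

```python
import math

def sparse_reward(distance_to_goal):
    """Binary reward: no hint about progress until the goal is reached."""
    return 1.0 if distance_to_goal == 0.0 else 0.0

def smooth_reward(distance_to_goal):
    """Shaped reward that decays smoothly with distance, so every
    step toward the goal is rewarded a little more than the last."""
    return math.exp(-distance_to_goal)

# The smooth version distinguishes near from far, guiding adaptation;
# the sparse version treats both states identically.
assert smooth_reward(1.0) > smooth_reward(2.0)
assert sparse_reward(1.0) == sparse_reward(2.0)
```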
Certain methodologies are useful to control the adaptation:
Apply incremental learning in the game, in stages; different aspects of the system are learned separately.
Offline knowledge provides assistance to online adaptation.
Control the adaptation with a learning rate that decreases as performance increases, and stop processing altogether when the results are satisfactory.
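The last control trick, a learning rate that shrinks with performance, might look like the following sketch (the schedule, the `target` threshold, and the linear decay are assumptions chosen for illustration; any decreasing schedule serves the same purpose):

```python
def learning_rate(performance, base_rate=0.5, target=0.95):
    """Hypothetical schedule: the rate shrinks as performance (in [0, 1])
    improves, and drops to zero once the results are satisfactory."""
    if performance >= target:
        return 0.0  # stop adapting; the behavior is good enough
    return base_rate * (1.0 - performance)

# A weak learner adapts quickly; a strong one barely changes;
# a satisfactory one is frozen, so it cannot unlearn good behavior.
assert learning_rate(0.2) > learning_rate(0.8)
assert learning_rate(0.97) == 0.0
```

Freezing the rate at zero also saves processing, since a converged component no longer needs updates at runtime.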
By combining these tricks, designers can not only cope with adaptation but make it an essential part of the system.
Dafty is an animat that contains many learning components that almost work. Dafty attempts to learn capabilities covered in this book with a different approach, but fails. In each case, different tricks from this chapter can be used to identify and repair the problems. The problems are design issues, not implementation bugs.