Solid Software Engineering
To expand on each of these solutions, the first set of techniques for dealing with adaptive behaviors comes from software engineering.
When creating a learning system, it's critical for the algorithm's source code to be robust. Programming errors cause much grief during testing because they are difficult to isolate. Therefore, spending additional time to validate the AI code and reduce the number of bugs is a good investment. This can be achieved with unit testing procedures, among others. Then, when a system fails to learn or produces suboptimal results, the code itself is never in question; the accusing finger points to the design without hesitation.
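As a minimal sketch of this idea, the unit test below exercises a hypothetical angle-wrapping helper of the kind steering code relies on. The function name, constants, and tolerances are illustrative, not taken from any particular engine; the point is that boundary cases get checked before the AI ever runs.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: wrap an angle into [-pi, pi), a classic source
// of subtle bugs in steering and aiming code.
float WrapAngle(float angle) {
    const float kPi = 3.14159265f;
    angle = std::fmod(angle, 2.0f * kPi);
    if (angle >= kPi) angle -= 2.0f * kPi;
    if (angle < -kPi) angle += 2.0f * kPi;
    return angle;
}

// A minimal unit test, run at startup or from the build system.
void TestWrapAngle() {
    assert(std::fabs(WrapAngle(0.0f)) < 1e-5f);
    assert(WrapAngle(3.5f) < 0.0f);   // wraps just past pi
    assert(WrapAngle(-3.5f) > 0.0f);  // wraps just past -pi
}
```

With checks like these in place, a misbehaving bot can be blamed on its design rather than on an off-by-one in the math.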
Testing and Debugging
Debugging code can be achieved using standard tools, such as gdb or Visual Studio's integrated debugger. When it comes to testing the AI, however, it's often more convenient to have visual debugging aids. This may include a console or colored primitives within the game (for instance, lines, circles, or boxes).
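One way to structure such visual aids is a small queue of debug primitives that AI code fills during its update and the renderer flushes once per frame. The interface below is a hypothetical sketch of that pattern, not a real engine API; here the flush simply reports how many primitives were queued.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical debug-draw layer: AI code queues colored primitives,
// and the renderer drains the queue once per frame.
struct DebugLine { float x1, y1, x2, y2; unsigned color; };

class DebugDraw {
public:
    void AddLine(float x1, float y1, float x2, float y2, unsigned color) {
        lines_.push_back({x1, y1, x2, y2, color});
    }
    // In a real engine this would submit geometry to the renderer;
    // here it just clears the queue and returns the count.
    std::size_t Flush() {
        std::size_t n = lines_.size();
        lines_.clear();
        return n;
    }
private:
    std::vector<DebugLine> lines_;
};
```

Because the AI only ever calls AddLine(), the drawing can be compiled out of release builds without touching the AI code itself.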
In addition, a log file proves extremely practical as a dump for AI data in a format that's easy to read and search with text editors (an old favorite among game developers in general). The log may enable an AI developer to deal with bugs that occur on other team members' machines.
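A minimal version of such a log might look like the following. The class name and line format are assumptions; the point is one grep-friendly text line per event, tagged with the frame number so entries from a teammate's machine can be lined up against a local run.

```cpp
#include <cstdarg>
#include <cstdio>
#include <cstring>

// Minimal sketch of an AI log: plain text, one frame-stamped line per
// event, easy to search in any text editor.
class AiLog {
public:
    explicit AiLog(const char* path) { file_ = std::fopen(path, "w"); }
    ~AiLog() { if (file_) std::fclose(file_); }

    // printf-style entry, prefixed with the current frame number.
    void Write(int frame, const char* fmt, ...) {
        if (!file_) return;
        std::fprintf(file_, "[%06d] ", frame);
        va_list args;
        va_start(args, fmt);
        std::vfprintf(file_, fmt, args);
        va_end(args);
        std::fprintf(file_, "\n");
    }
private:
    std::FILE* file_;
};
```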
An even more robust tool would track each of the AI's actions, storing them for later analysis. One of the advantages of this type of subsystem is reproducibility. Each of the AI's actions can be retraced right from the start, so the exact cause of problems can be identified.
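Such a subsystem could be sketched as follows, with hypothetical names: every decision is appended with its frame number, so a failing run can be replayed from the start up to the frame where things went wrong.

```cpp
#include <vector>

// Hypothetical action trace: each decision the AI takes is recorded
// with its frame number for later replay and analysis.
struct TracedAction { int frame; int actionId; };

class ActionTrace {
public:
    void Record(int frame, int actionId) {
        trace_.push_back({frame, actionId});
    }

    // Replay every recorded action from the start, up to and
    // including the given frame, applying a caller-supplied functor.
    template <typename Fn>
    void Replay(int upToFrame, Fn apply) const {
        for (const TracedAction& a : trace_)
            if (a.frame <= upToFrame) apply(a);
    }
private:
    std::vector<TracedAction> trace_;
};
```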
The AI will often use pseudo-random numbers to generate behaviors. Instead of logging each action, the seed of the pseudo-random number generator can be set to a particular value. With careful implementation of the engine, the simulation will always be exactly the same given identical seeds. This allows any bug to be reproduced with less effort and smaller log files.
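A sketch of seed-based reproduction, assuming the engine draws all of its random decisions from a single generator (here C++'s std::mt19937; the function and action count are illustrative): re-running with the logged seed reproduces the exact same sequence of choices.

```cpp
#include <random>
#include <vector>

// Illustrative decision loop: all randomness flows through one seeded
// generator, so the seed alone is enough to reproduce the whole run.
std::vector<int> SimulateDecisions(unsigned seed, int count) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pick(0, 3);  // e.g. four actions
    std::vector<int> decisions;
    for (int i = 0; i < count; ++i)
        decisions.push_back(pick(rng));
    return decisions;
}
```

Two runs with the same seed produce identical decision sequences, so a tester only needs to report the seed, not a full action log.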
A good random-number generator is absolutely essential for AI development, especially for learning systems. Faster and less-predictable alternatives to the default rand() are freely available (such as the Mersenne Twister). By setting a different random seed on every execution of the game, different situations arise during development, stochastically testing the various aspects of the implementation. Such unpredictability often reveals more bugs in the software. The seed can be fixed deterministically when the game is shipped, if necessary.
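That seeding policy might be sketched like this, using the standard library's Mersenne Twister (std::mt19937): a fresh clock-based seed on each run during development, and a fixed seed (the constant below is purely illustrative) for shipped builds.

```cpp
#include <chrono>
#include <random>

// Development builds get a fresh seed per run, exercising different
// code paths; shipped builds can fall back to a fixed seed.
unsigned ChooseSeed(bool shipping) {
    if (shipping)
        return 0xC0FFEEu;  // illustrative fixed seed for deterministic builds
    return static_cast<unsigned>(
        std::chrono::steady_clock::now().time_since_epoch().count());
}

std::mt19937 MakeRng(bool shipping) {
    return std::mt19937(ChooseSeed(shipping));
}
```

The chosen seed should be printed to the log either way, so any run can later be reproduced.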
The AI software engineer uses more tools than most programmers. These vary depending on the techniques used, but testing the general performance of learning is useful in most cases. Such a tool tests the learned model against all the collected data to see how well it performs. This information can be used to perform cross-validation: holding out part of the data set during training to measure how well the learning algorithm generalizes.
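A k-fold cross-validation helper along these lines might look as follows. The sample, model, and error types are left as template placeholders, since they depend on the learning technique; each fold is held out in turn, the model is trained on the rest, and the held-out error is averaged.

```cpp
#include <cstddef>
#include <vector>

// Sketch of k-fold cross-validation over collected samples: train on
// k-1 folds, measure error on the held-out fold, average over folds.
template <typename Sample, typename TrainFn, typename ErrorFn>
double CrossValidate(const std::vector<Sample>& data, int k,
                     TrainFn train, ErrorFn error) {
    double total = 0.0;
    for (int fold = 0; fold < k; ++fold) {
        std::vector<Sample> trainSet, testSet;
        for (std::size_t i = 0; i < data.size(); ++i) {
            if (static_cast<int>(i % k) == fold)
                testSet.push_back(data[i]);   // held out this round
            else
                trainSet.push_back(data[i]);
        }
        auto model = train(trainSet);
        total += error(model, testSet);
    }
    return total / k;
}
```

A low training error combined with a high cross-validation error is the classic sign that the model memorized the data rather than generalizing from it.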
One particularly useful tool automatically extracts the performance of the algorithms over time and displays it as a graph. In fact, it's very practical for developers to analyze aspects of the learning with visualization packages. Instead of having the developer format the information in Excel (possibly using macros), command-line tools such as gnuplot are often a better choice because they can be fully automated and easily called from scripts.
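As a sketch of that automation, the routine below dumps a learning curve to a plain data file and writes a matching gnuplot script next to it; the filenames and labels are illustrative. A build or test script can then produce the graph with a single command such as gnuplot -persist learning.plt.

```cpp
#include <cstdio>

// Write the per-epoch error to a data file, plus a gnuplot script that
// plots it, so graphing is fully scriptable with no manual formatting.
bool WritePerformancePlot(const float* error, int epochs) {
    std::FILE* data = std::fopen("learning.dat", "w");
    if (!data) return false;
    for (int i = 0; i < epochs; ++i)
        std::fprintf(data, "%d %f\n", i, error[i]);
    std::fclose(data);

    std::FILE* script = std::fopen("learning.plt", "w");
    if (!script) return false;
    std::fprintf(script,
        "set xlabel 'Epoch'\n"
        "set ylabel 'Error'\n"
        "plot 'learning.dat' with lines title 'learning curve'\n");
    std::fclose(script);
    return true;
}
```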