As far as the multilayer perceptron is concerned, there is little room for artistic interpretation. With the rule-based system, we had to customize it to our problem. Multilayer perceptrons, on the other hand, are largely standard; only the underlying implementation of the training algorithm changes. So there's little need to worry about the design of the neural network; a standard fully connected feed-forward module will do fine.
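To make the point concrete, here is a minimal sketch of such a standard module: one hidden layer, fully connected, feed-forward only. This is an illustration under assumed choices (NumPy, sigmoid activations, illustrative layer sizes), not a specific engine implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal fully connected feed-forward network with one hidden layer."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights; the extra column holds each neuron's bias.
        self.w1 = rng.normal(0.0, 0.1, (n_hidden, n_in + 1))
        self.w2 = rng.normal(0.0, 0.1, (n_out, n_hidden + 1))

    def forward(self, x):
        x = np.append(x, 1.0)      # append constant 1 as the bias input
        h = sigmoid(self.w1 @ x)   # hidden layer activations
        h = np.append(h, 1.0)      # bias input for the output layer
        return sigmoid(self.w2 @ h)

# Illustrative sizes: four input features, one output (e.g. target desirability).
net = MLP(n_in=4, n_hidden=6, n_out=1)
out = net.forward(np.array([0.2, 0.5, 0.1, 0.9]))
print(out.shape)  # (1,)
```

Whatever the training algorithm, only the weight arrays change; the forward pass stays exactly this shape, which is why the design needs so little customization.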
As far as the behaviors are concerned, it would be ideal for the animats to learn target selection online. However, this may not be directly possible because of the complexity of the data or the unpredictability of the problem. For this reason, if problems are encountered, data can be gathered in the game but learned from offline. This also gives us the opportunity to analyze the data ourselves and check that there is actually something to learn. We can thereby determine whether the features discussed in previous chapters (and in the case study) are relevant. Knowing which features are relevant improves the prediction, reduces the size of the MLP, and speeds up the learning.
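A crude way to check offline whether a logged feature is relevant is to measure its correlation with the outcome. The sketch below uses synthetic data standing in for an in-game log (the feature meanings and the data-generating rule are purely hypothetical); in practice the arrays would come from the gathered samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a log of 200 fights: four features per sample
# (hypothetically: distance, health, ammo, cover), plus a win/loss outcome.
features = rng.random((200, 4))
# Synthetic rule: only feature 0 actually drives the outcome.
outcome = (0.8 * features[:, 0] + rng.normal(0.0, 0.05, 200) > 0.4).astype(float)

# Correlation of each feature with the outcome, as a rough relevance measure.
corrs = [np.corrcoef(features[:, i], outcome)[0, 1]
         for i in range(features.shape[1])]
for i, r in enumerate(corrs):
    print(f"feature {i}: corr = {r:+.2f}")
```

A feature with near-zero correlation is a candidate for removal, shrinking the MLP's input layer; correlation only captures linear relationships, though, so it is a first filter rather than a final verdict.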
Gathering vast quantities of data is a good idea regardless, because it lets us investigate how strong the noise is (and whether preventive measures are necessary). Because of the unpredictability of games, two similar fights will often have different outcomes. As such, the noise is expected to be very high, which often makes learning from the raw data difficult. In this case, statistical analysis may be required first, with the MLP learning from those intermediate results instead.
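The statistical preprocessing can be as simple as averaging repeated outcomes per situation, so the MLP learns a smooth win rate rather than noisy individual results. A sketch under assumed details (a simulated `fight` function and five discretized situations, both hypothetical):

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

def fight(situation):
    """Simulated noisy fight: same situation, different result each time."""
    p_win = 0.2 + 0.6 * situation   # hypothetical underlying win probability
    return rng.random() < p_win

# Repeat each discrete situation many times, collecting the raw outcomes.
stats = defaultdict(list)
for _ in range(1000):
    s = rng.integers(0, 5) / 4.0    # five discrete situations in [0, 1]
    stats[s].append(fight(s))

# Averaged win rates: the intermediate results the MLP would learn from.
targets = {s: float(np.mean(wins)) for s, wins in sorted(stats.items())}
for s, rate in targets.items():
    print(f"situation {s:.2f}: win rate {rate:.2f} over {len(stats[s])} fights")
```

Training on these averaged targets removes most of the outcome noise, at the cost of needing many samples per situation before the estimates are trustworthy.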