Because turns are required for movement too, the animat has the opportunity to learn to correct its aim while moving around. Perceptrons can thereby learn to aim much faster, because more training samples are presented. By the time it comes to aiming at targets, the neural network should already be familiar with the basic concepts.
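This incidental learning can be sketched as an online delta-rule update: every turn made while moving yields a free training sample pairing the requested angle with the observed one. The code below is a minimal illustration, not the book's implementation; the linear model, the 0.5 "physics gain", and all names are assumptions for the sketch.

```python
import random

class TurnPerceptron:
    """Hypothetical single-unit perceptron mapping a desired yaw
    offset to the motor command that should produce it."""

    def __init__(self, lr=0.1):
        self.w = random.uniform(-0.5, 0.5)  # weight on the desired offset
        self.b = 0.0                        # bias term
        self.lr = lr                        # learning rate

    def command(self, desired_offset):
        """Motor command suggested for the requested yaw change."""
        return self.w * desired_offset + self.b

    def learn(self, desired_offset, observed_offset):
        """Delta-rule update from one (requested, observed) sample."""
        error = desired_offset - observed_offset
        self.w += self.lr * error * desired_offset
        self.b += self.lr * error

def simulate(steps=200):
    """Each movement turn produces a training sample as a side effect."""
    net = TurnPerceptron()
    for _ in range(steps):
        desired = random.uniform(-1.0, 1.0)
        cmd = net.command(desired)
        observed = 0.5 * cmd  # assumed physics: commands are halved
        net.learn(desired, observed)
    return net
```

Under these assumptions the weight converges so that the command compensates for the physics gain, without any dedicated training phase.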
The animats are three-dimensional entities, but control their view along two angles: the pitch and the yaw. The perceptron controls and corrects the angles in both of these dimensions. However, controlling the pitch raises many problems during learning. Movement is only possible when the pitch is near horizontal; when looking fully up or down, there is no forward movement (as defined by the player physics). In these cases, the yaw is constrained as well, which prevents the AI from learning altogether. To avoid such learning traps, we only allow the perceptron to control the yaw until satisfactory results are obtained. When these precautions are taken, the worst that can happen is the animat spinning around while learning. To the untrained eye, this often looks like the animats are ballroom dancing!
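One way to enforce this precaution is a guard between the perceptron's output and the animat's view: until yaw aiming is reliable, the pitch is held near horizontal so forward movement is never blocked. The sketch below is hypothetical; the function name and the 0.2-radian safe band are assumptions, not values from the text.

```python
PITCH_LIMIT = 0.2  # radians; assumed band where forward motion still works

def constrain_view(perceptron_yaw, perceptron_pitch, yaw_only=True):
    """Return the (yaw, pitch) actually applied to the animat.

    While yaw_only is set, the perceptron's pitch output is ignored
    entirely; once yaw aiming is trusted, pitch is merely clamped into
    the band where the player physics allows movement.
    """
    if yaw_only:
        pitch = 0.0  # hold the view horizontal during early learning
    else:
        pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, perceptron_pitch))
    return perceptron_yaw, pitch
```

With the guard in place, the perceptron can output arbitrary pitch values during training without ever steering the animat into the no-movement poses that block learning.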
On a more technical note, giving the perceptron full control can lead to other learning traps. The way the yaw is chosen determines which examples are available for learning (that is, the requested angles and the observed angles). So by allowing the perceptron to determine the yaw fully, we also give it full control over what it learns. In this situation, it can converge onto a configuration that always suggests the same turn. To solve this, the AI needs to force variety into the training data (for instance, by generating random angles or adding noise to the actions). This ensures the perceptron always sees a representative sample of the input/output patterns, and forces it to learn balanced knowledge.