Momentum and Friction

When human players aim with the mouse, momentum generally carries the view further than they expect for large angles. Conversely, friction slows down the turn for small angles. So whenever small or large adjustments of the view are necessary, human players can lose accuracy. The trade-off is between finding the target quickly but overshooting, or taking more time to reach the target with higher precision. For the AI, it's often just a matter of specifying the exact target and firing the weapon. Because bots with perfect accuracy aren't as fun to play against, physically plausible turning errors can be added to the animats (see Figure 18.1). This increases the difficulty of the task and puts the animats on a par with human players.

Figure 18.1. Two examples of errors that commonly affect the aiming process.
Explicit Model

A mathematical function is used to model these kinds of errors, expressing the actual angle in terms of the desired angle and the previous angle. This feedback loop represents momentum carried over from frame to frame, and scaling the desired angle adjustment represents the friction. The function itself is defined over the desired angle and the previous output. We also include a parametric error in terms of the desired angle; the noise() function returns a value between –1 and 1:
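Equation 18.1 itself is missing from this excerpt. The following Python sketch shows one plausible form consistent with the description above: b feeds back the previous output (momentum), 1 - b scales the desired angle (friction), and a scales a parametric error proportional to the desired angle. The exact form, the function name, and the default parameter values are assumptions, not the book's equation.

```python
import random

def actual_angle(desired, previous, a=0.1, b=0.6):
    """Hedged sketch of Equation 18.1 (the exact form is not in this excerpt).

    b feeds back the previous output (momentum), (1 - b) scales the desired
    angle (friction), and a adds an error proportional to the desired angle.
    """
    def noise():
        # Returns a value between -1 and 1, as described in the text.
        return random.uniform(-1.0, 1.0)
    return b * previous + (1.0 - b) * desired * (1.0 + a * noise())
```

With a = 0 the model reduces to a pure weighted average of the previous and desired angles, which is what makes the linear approximation in the next section feasible.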
This makes the aiming more realistic, because the AI will also be subject to undershooting, overshooting, and aiming errors.

Linear Approximation

A perceptron is used to approximate the function defined in Equation 18.1. This can be understood as the animat learning to turn in a smoother fashion. Alternatively, it can be seen as learning a faster approximation of an expensive function in the interface's implementation (inaccessible from the animat). In fact, once the function is learned, it could be moved into the interface, hidden from the AI code; the aiming errors would then become a constraint. The equation is well suited to linear approximation. In fact, it amounts to a moving average: the previous angle requests and the current request are weighted together. The major task of the learning will be to capture the a parameter, approximate b, and adjust the weights accordingly (see Figure 18.2).

Figure 18.2. Flow chart representation of the processing of the angles before they are passed to the engine.
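The moving-average view of the perceptron can be sketched as a single linear unit over the desired angle and the previous output. The weight values here are illustrative assumptions (roughly 1 - b and b from the model), not learned values:

```python
def perceptron_turn(desired, previous, w_desired=0.4, w_previous=0.6, bias=0.0):
    # A single linear unit: no activation function is needed, because the
    # target mapping is (approximately) linear.
    return w_desired * desired + w_previous * previous + bias

# The previous output must be fed back in explicitly each frame, so the
# mapping looks direct (reactive) to the perceptron.
yaw = perceptron_turn(30.0, 0.0)    # large request, damped by friction
pitch = perceptron_turn(0.0, 10.0)  # momentum carries the previous turn
```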
To approximate the equation, the previous output needs to be explicit, so we plug it back into the input ourselves; to the perceptron, the mapping then looks direct (reactive).

Methodology

We compute the approximation by training the network iteratively. Random inputs are chosen, and the desired output is computed with Equation 18.1. By grouping these results together, a batch training algorithm can be applied to find the desired values for each of the weights and biases. This can be done during initialization or, better still, offline. The perceptron itself will be modular: Rather than one large perceptron computing all the angles together (that is, yaw and pitch), a smaller perceptron is applied twice, once for each angle. This reduces the memory used to store the perceptron, but requires slightly more programming.

Evaluation

This is not an especially difficult problem. Even though the function is not linear by nature, the linear approximation is close. The behaviors generated by the perceptron are more realistic, because the turning is much smoother (visibly subject to friction and momentum). When the perceptron learns a function with a lot of momentum, there's a certain feeling of motion sickness. That said, beginners often have this problem too, and their aim can be just as unstable. As for the training, it is done very quickly. Few samples (for instance, 10) are required for the weights to be adjusted suitably. The number of iterations required varies greatly depending on the initial random weights.
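The methodology above can be sketched as follows. This is a minimal delta-rule fit rather than necessarily the book's batch algorithm, and it assumes a noiseless linear stand-in for Equation 18.1 (out = b * previous + (1 - b) * desired with b = 0.6), angles normalized to [-1, 1], and an illustrative learning rate:

```python
import random

B = 0.6  # assumed momentum parameter

def target(desired, previous):
    # Noiseless stand-in for Equation 18.1 (a = 0), so a linear fit is exact.
    return B * previous + (1.0 - B) * desired

random.seed(1)
# A handful of random (desired, previous) samples, as the text suggests.
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]

w_desired, w_previous, bias = 0.0, 0.0, 0.0
rate = 0.1

for _ in range(2000):
    for desired, previous in samples:
        out = w_desired * desired + w_previous * previous + bias
        err = target(desired, previous) - out
        # Delta rule: nudge each weight along its input times the error.
        w_desired += rate * err * desired
        w_previous += rate * err * previous
        bias += rate * err

# The weights recover the model's coefficients:
# w_desired ~ 1 - B, w_previous ~ B, bias ~ 0.
```

Because the target is exactly linear here, the weights converge to the model's own coefficients; with noise switched on, the same procedure would find the best linear fit on average.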
