Summary
In practice, perceptrons prove remarkably simple to implement and train. A perceptron consists of an array of floating-point numbers representing the weights of the inputs. To compute the output, each input is scaled by its weight and the sum is passed through an activation function. Learning is achieved by adjusting the weights, choosing a step that reduces the error.

The model has some serious limitations, notably that it can only handle linear problems. However, it is more than suitable in the many cases where a linear approximation is satisfactory. The delta rule in batch mode provably converges to the global minimum of the error, and incremental perceptron training finds a correct solution whenever one exists. The ideal learning rate depends on the problem, but a small value (for instance, 0.1) is a safe guess.

Given the possibility of using perceptrons rather than more complex neural networks, even if this involves simplifying the model, we should consider doing so. Perceptrons are incredibly good at what they do best: linear approximations.
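As a sketch of the ideas above (not the book's code), the weighted sum and the batch delta rule might look like the following in Python. The function name, the linear activation, and the toy training data are illustrative assumptions; the 0.1 learning rate is the safe guess suggested in the text.

```python
def train_delta_batch(samples, epochs=200, rate=0.1):
    """Batch delta rule: accumulate the error-driven weight change
    over all samples, then apply it once per epoch.

    samples: list of ((x1, x2), target) pairs; linear activation.
    """
    w = [0.0, 0.0, 0.0]              # two input weights plus a bias weight
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for (x1, x2), target in samples:
            x = (x1, x2, 1.0)        # bias input fixed at 1
            out = sum(wi * xi for wi, xi in zip(w, x))
            err = target - out       # delta rule: change = rate * err * input
            for i in range(3):
                grad[i] += rate * err * x[i]
        w = [wi + gi for wi, gi in zip(w, grad)]
    return w

# Hypothetical linear target: t = 2*x1 - x2 + 0.5, which the
# perceptron can represent exactly, so batch training recovers it.
data = [((x1, x2), 2 * x1 - x2 + 0.5)
        for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]
w = train_delta_batch(data)
```

Because the error surface is quadratic in the weights, the batch updates descend to its single global minimum, which is the convergence guarantee cited above.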
The next chapter applies perceptrons to improve aiming abilities. The networks allow smoother movement and help prevent over- and underaiming errors. Expanding on the theory in this chapter, Chapter 19 covers perceptrons with multiple layers, which potentially have better capabilities, at the cost of complexity.
Major Pain is an animat that uses a very simple perceptron to learn when to fire: it's okay to fire when there's an enemy and the weapon is ready. The perceptron thus learns a simple AND operation on two inputs, which is a linear problem. The demo and an overview of the AI architecture are available at http://AiGameDev.com/. The source code is available and commented, and the next chapter explains the design of the perceptron module in greater depth.
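To make the firing rule concrete, here is a minimal sketch (assumed names and inputs, not Major Pain's actual source) of incremental perceptron training on the AND problem, using a step activation and the same 0.1 learning rate:

```python
def train_fire_rule(epochs=20, rate=0.1):
    """Incremental perceptron training: update the weights after
    every sample using the perceptron rule w += rate * err * input."""
    w = [0.0, 0.0, 0.0]                      # enemy, ready, bias weights
    # Truth table for AND: fire only when enemy visible AND weapon ready.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (enemy, ready), target in samples:
            x = (enemy, ready, 1)            # bias input fixed at 1
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = target - out
            for i in range(3):
                w[i] += rate * err * x[i]
    return w

def should_fire(w, enemy, ready):
    """Threshold the weighted sum to get the firing decision."""
    return sum(wi * xi for wi, xi in zip(w, (enemy, ready, 1))) > 0
```

Since AND is linearly separable, the incremental rule settles on a correct set of weights within a few epochs, as guaranteed by the convergence result mentioned above.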

