Perceptrons essentially consist of a layer of weights, mapping a set of inputs x = [x1,..., xn] onto a single output y. The arrow above the x denotes that this is a vector consisting of multiple numbers. The mapping from input to output is achieved with a set of linear weights connecting the function inputs directly to the output (see Figure 17.2).
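This mapping can be sketched in a few lines of code. The function name and sample values below are purely illustrative, not taken from the text:

```python
# Minimal sketch of the perceptron mapping described above: the inputs
# x = [x1, ..., xn] are combined through one layer of weights w to
# produce a single output y (bias omitted here for clarity).

def perceptron_output(x, w):
    """Weighted sum of the inputs -- one weight per input."""
    return sum(wi * xi for wi, xi in zip(w, x))

y = perceptron_output([1.0, 0.5], [0.2, -0.4])  # 1.0*0.2 + 0.5*(-0.4) = 0.0
```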
Multiple outputs y = [y1,..., yn] can be handled by using the same principle again; another set of weights can connect all the inputs to a different output. All the outputs and weights should be considered—and actually are—independent. (This, in fact, is one limitation of the perceptron.) For this reason, we will focus on networks with a single output y, which will simplify the explanations.
The weights are denoted w = [w0, w1, ..., wn]; weights 1 through n are connected to the inputs, and the 0th weight w0 = b is unconnected and represents a bias (that is, a constant offset). The bias can also be interpreted as a threshold; if the bias is added into the weighted sum, the output is compared against a threshold of 0; otherwise, the threshold is –b.
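The equivalence between the bias and the threshold can be demonstrated directly. The following is a sketch under our own naming, using a simple step activation:

```python
# Illustrates the bias/threshold equivalence described above: folding the
# bias w0 = b into the weighted sum gives a threshold of 0; leaving it
# out shifts the threshold to -b. Both formulations agree.

def activate_with_bias(x, w, b):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else 0           # threshold at 0

def activate_with_threshold(x, w, b):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= -b else 0          # equivalent threshold at -b

x, w, b = [1.0, 1.0], [0.5, 0.5], -0.75
assert activate_with_bias(x, w, b) == activate_with_threshold(x, w, b)
```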
The choice of data type for the inputs, outputs, and weights has changed over the years, depending on the models and the applications. The options are binary values or continuous numbers. The perceptron initially used binary values (0, 1) for the inputs and outputs, whereas the Adaline allowed inputs to be negative and used continuous outputs. The weights have mostly been continuous (that is, real numbers), although various degrees of precision are used. There is a strong case for using continuous values throughout, as they have many advantages without drawbacks.
As for the data type, we'll be using 32-bit floating-point numbers—at the risk of offending some neural network purists. Indeed, 64 bits is a "standard" policy, but in games, this is rarely worth double the memory and computational power; single-precision floats perform just fine! We can do more important things to improve the quality of our perceptrons, instead of increasing the precision of the weights (for instance, revise the input/output specification, adjust the training procedure, and so forth).
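The memory saving is easy to verify. As a small illustration (the variable names are ours, using Python's standard array module rather than any particular game engine's types):

```python
from array import array

# Single-precision ('f', 32-bit) weights take half the storage of
# double-precision ('d', 64-bit) weights -- the trade-off discussed above.
w32 = array('f', [0.0] * 1000)  # 32-bit floats: 4 bytes per weight
w64 = array('d', [0.0] * 1000)  # 64-bit floats: 8 bytes per weight
print(w32.itemsize, w64.itemsize)  # 4 vs. 8 bytes per element
```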
The next few pages rely on mathematics to explain the processing inside perceptrons, but it is kept accessible. (The text around the equations explains them.) The practical approach in the next chapter serves as an ideal complement for this theoretical foundation.