

The MLP representation extends the single-layer perceptron in several ways:

  • MLPs have middle layers of processing units, with weighted connections between them.

  • The activation function of the hidden units must be nonlinear (typically a sigmoid function).

  • The output is computed by propagating the inputs through the layers one after the other.

  • Because of the additional complexity introduced by the middle layers, training is harder.

  • Backpropagation computes the error gradient in the last layer, then propagates the error backward through the earlier layers.

  • Both batch and incremental algorithms can be applied to this representation, including delta rule solutions and adaptive weight-adjustment strategies.
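The steps above can be sketched in code. The following is a minimal illustration, not the book's implementation: a two-layer perceptron with sigmoid hidden units, a forward pass that propagates inputs layer by layer, and incremental backpropagation updates using the delta rule. The network sizes, learning rate, and the XOR training set are assumptions chosen to show why the hidden layer matters (XOR is not linearly separable, so a single-layer perceptron cannot learn it).

```python
import math
import random

def sigmoid(x):
    # Nonlinear activation required for the hidden units.
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """Minimal two-layer perceptron: inputs -> sigmoid hidden -> sigmoid output."""

    def __init__(self, n_in, n_hidden, n_out, seed=1):
        rng = random.Random(seed)
        # Each weight row has one extra entry acting as the bias weight.
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        # Propagate the inputs through the layers one after the other.
        self.h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0])))
                  for row in self.w1]
        self.o = [sigmoid(sum(w * v for w, v in zip(row, self.h + [1.0])))
                  for row in self.w2]
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        # Error gradient at the output layer (delta rule, sigmoid derivative).
        d_out = [(t - y) * y * (1.0 - y) for y, t in zip(o, target)]
        # Propagate the error backward through the hidden layer.
        d_hid = [h * (1.0 - h) * sum(d * self.w2[k][j]
                                     for k, d in enumerate(d_out))
                 for j, h in enumerate(self.h)]
        # Incremental weight updates, one pattern at a time.
        for k, d in enumerate(d_out):
            for j, v in enumerate(self.h + [1.0]):
                self.w2[k][j] += lr * d * v
        for j, d in enumerate(d_hid):
            for i, v in enumerate(x + [1.0]):
                self.w1[j][i] += lr * d * v

# XOR: the classic test that a single-layer perceptron fails.
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]

def total_error(net):
    return sum((net.forward(x)[0] - t[0]) ** 2 for x, t in data)

net = MLP(2, 4, 1)
before = total_error(net)
for _ in range(5000):
    for x, t in data:
        net.train_step(x, t)
after = total_error(net)
```

After training, the summed squared error drops sharply and the outputs round to the XOR truth table; swapping the incremental inner loop for an accumulated batch update is a straightforward variation of the same gradient computation.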

The result is an AI technique that can learn to approximate functions. In practice, game AI programmers can use it to make decisions, predict an outcome, or recognize a pattern in the current situation. The next chapter applies MLPs to target selection, estimating the benefit of aiming for a point in space.

Practical Demo

Onno is an animat that uses a large neural network to handle shooting in general, including prediction, target selection, and aiming. The information provided to the network combines the features used in the previous chapters. Although the results are moderate, Onno demonstrates the versatility of MLPs and the benefits of decomposing behaviors. The documented source code is available on the web site at
