Perceptrons—and neural networks generally—are an extremely controversial technique, especially with game AI programmers. On one hand, MLPs solve specific problems extremely well and surprisingly efficiently. On the other hand, there are undeniable problems that hamper perceptrons during game development, which partly explains why they aren't applied more often in games.
Perceptrons have a solid mathematical foundation. Recent techniques (such as RProp) prove very robust at finding solutions when one exists. For well-defined problems, perceptrons will often surface as one of the best choices. Conversely, if something does go wrong, we know the problem lies in the parameters, or even in the design, rather than in the underlying theory.
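To make the RProp idea concrete, here is a minimal sketch of one resilient-propagation update. The function name and the toy objective f(w) = w² are illustrative assumptions, not part of any particular library; the key property shown is that RProp uses only the *sign* of each gradient component, adapting a separate step size per weight.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RProp step: per-weight step sizes adapt to gradient sign changes."""
    sign_change = grad * prev_grad
    # Same sign as last iteration: accelerate. Sign flipped: we overshot, slow down.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Only the gradient's sign is used, never its magnitude.
    return w - np.sign(grad) * step, step

# Minimizing f(w) = w^2 (gradient 2w) as a stand-in for a network's error:
w, step, prev_grad = np.array([5.0]), np.array([0.1]), np.zeros(1)
for _ in range(100):
    grad = 2.0 * w
    w, step = rprop_update(w, grad, prev_grad, step)
    prev_grad = grad
```

Because magnitudes are ignored, RProp is insensitive to badly scaled error surfaces, which is one reason it is robust when a solution exists.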
Another advantage of perceptrons is the wide variety of training algorithms available. These enable us to exploit MLPs in many different settings, under various constraints; this provides amazing flexibility. As such, it's the actual nature of perceptrons, rather than the training procedure, that comes into consideration when choosing whether to apply MLPs.
Finally, MLPs work natively with continuous values. They can also deal with noise extremely well, and accept various data types as inputs and outputs.
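A small sketch illustrates both points. The network below is hypothetical, with random placeholder weights rather than a trained model; the point is that a forward pass maps continuous inputs smoothly to continuous outputs, so small input noise produces only small output changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 2-3-1 network; weights are random placeholders, not trained.
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

def mlp_forward(x):
    hidden = np.tanh(W1 @ x + b1)   # bounded, smooth activation
    return W2 @ hidden + b2         # linear output: a continuous value

x = np.array([0.8, -0.3])                      # continuous inputs
x_noisy = x + rng.normal(scale=0.001, size=2)  # slightly perturbed inputs
# Because the mapping is smooth, the output barely moves under noise.
y, y_noisy = mlp_forward(x), mlp_forward(x_noisy)
```

Discrete or categorical data can be handled by the same machinery once encoded as numbers (for example, one value per category on separate inputs).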
Neural networks do not express their knowledge in a readable form; we can print out the values of the weights, but they'll mean nothing unless we spend hours analyzing them. This also implies that perceptrons depend fully on the algorithms used to create them. We can't manually modify the weights without understanding what they mean!
Once learned, an MLP's representation is fixed. Updating it means either starting over or retraining, and retraining offers no guarantee that the knowledge learned previously will be preserved. Consequently, all the effort spent testing and debugging the perceptron has to be duplicated.
There is a lot of experimentation involved in developing a successful MLP. The number of processing units and even the layer count are both parameters that require exploration. The design of the inputs and outputs also needs special care, because it has an enormous impact on the quality of the solution. In addition, MLPs often require scaffolding: pre- and postprocessing that exposes the true nature of the problem (which neural networks often cannot extract alone).
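A common piece of such scaffolding is input standardization. The sketch below is a generic example, not tied to any particular game: raw features on wildly different scales (the sample values are invented) are z-scored so every input the MLP sees has zero mean and unit variance, which training algorithms handle far better.

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Z-score each feature column so the MLP sees zero-mean, unit-variance inputs."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    # eps guards against division by zero for constant features.
    return (X - mean) / (std + eps), mean, std

# Hypothetical raw game features on very different scales
# (e.g. distance in world units vs. a 0..1 health ratio):
X = np.array([[1200.0, 0.3],
              [ 800.0, 0.9],
              [1500.0, 0.1]])
X_scaled, mu, sigma = standardize(X)
```

The stored mean and standard deviation must be reused at runtime so live inputs are scaled exactly as the training data was; a matching postprocessing step can rescale the outputs back into game units.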
Finally, MLPs have real problems with scalability. It's quite difficult for them to cope with complex decision surfaces, or those of high dimensionality; the memory and computation required grow exponentially.