Evaluating this animat involves observing its behavior for a short period. Combining insider knowledge with some constructive criticism lets us identify both advantages and disadvantages of the solution.
The animat demonstrating these principles is known as Marvin; it can be found on the website at http://AiGameDev.com/, along with instructions for compiling the demo. Marvin uses obstacle sensors and steering behaviors to prevent collisions in a reactive fashion. It can also wander around using the various enhancements described.
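The core idea of sensor-driven reactive avoidance can be sketched in a few lines. This is a hypothetical illustration, not Marvin's actual code: the function name, the three-sensor layout, and the `clear_dist` threshold are all assumptions made for the example.

```python
def avoid_obstacles(front, left, right, clear_dist=2.0):
    """Map three distance-sensor readings to a steering turn.

    Returns a turn value in [-1, 1]: negative steers left, positive
    steers right, zero keeps the current heading.
    (Hypothetical sketch; not Marvin's actual implementation.)
    """
    if front >= clear_dist:
        return 0.0  # path ahead is clear, no correction needed
    # Turn toward the side with more free space; the closer the
    # obstacle ahead, the sharper the turn.
    urgency = 1.0 - front / clear_dist
    direction = 1.0 if right > left else -1.0
    return direction * urgency
```

Because each rule is an explicit conditional on the sensor values, the behavior is fully deterministic for a given set of readings.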
The main benefits of this solution stem from its straightforward definition:
Simplicity— From the case study to the actual implementation, the development is extremely straightforward. The pseudo-code is also self-explanatory. We'd be hard-pressed to find a simpler alternative.
Reliability— Because of the simplicity of the task, we can identify most of the possible situations in the requirements. Once these are implemented within the system, it proves capable of dealing with most environments.
Predictability— A great advantage also lies in the predictable nature of the solution. First, because the behavior is purely reactive, there is no ambiguity: the same sensor readings always produce the same response. Second, each rule is written explicitly, which means the developer can understand it.
Efficiency— Thanks to the simplicity of the solution, it has a low computational overhead. For this reason, the sensor queries account for most of the time spent in the procedure. Even the use of the sensors can be minimized trivially; the front sensor can be ignored for a few frames when the animat is clear of danger, and the side sensors can be checked at even larger intervals (if at all).
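The sensor-throttling optimization above can be sketched as a small wrapper that caches an expensive query and only refreshes it every few frames. The class name and `interval` parameter are assumptions for illustration.

```python
class ThrottledSensor:
    """Cache a sensor query and refresh it only every `interval`
    frames, trading a little accuracy for fewer expensive queries.
    (Hypothetical sketch of the optimization described above.)
    """

    def __init__(self, query, interval):
        self.query = query        # expensive function returning a reading
        self.interval = interval  # frames between real queries
        self.frames = 0
        self.cached = query()     # initial reading

    def read(self):
        """Return the cached reading, refreshing it when due."""
        self.frames += 1
        if self.frames >= self.interval:
            self.frames = 0
            self.cached = self.query()
        return self.cached
```

A larger interval for the side sensors than for the front sensor reflects their lower urgency, as noted above.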
On the other hand, the solution might be too simple, leading to the following pitfalls:
Local traps— The layout of the environment can be quite intricate, so successfully avoiding obstacles can be tricky. In some cases, such as in a corner, the animat might get stuck (likewise when the sensors are not very accurate). This can be caused by the animat deciding to turn one way, then realizing that the other way is potentially better, and oscillating between the two. We must iron out such problems during the experimentation phase (for instance, by including a turning momentum computed with a moving average).
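The turning-momentum idea can be sketched as an exponential moving average over the turn decision; the function name and `momentum` weight are assumptions, not the book's actual parameters.

```python
def smooth_turn(raw_turn, prev_turn, momentum=0.8):
    """Blend the new turn decision with the previous one.

    The previous turning direction carries momentum, so the animat
    commits to one way around an obstacle instead of oscillating
    between left and right in a corner.
    (Hypothetical sketch; parameter values are assumed.)
    """
    return momentum * prev_turn + (1.0 - momentum) * raw_turn
```

With `momentum=0.8`, a single contradictory reading only nudges the turn by 20 percent, so brief sensor flips cannot reverse a committed turn on their own.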
Testing— Local traps are only one of many problems that arise during experimentation. Despite the simplicity of the rules, there are many parameters to tune and tweak, which generally increases the development time for the behavior.
Realism— Because the system is based on rigid rules, we might have expected "robotic" motion with jagged paths. However, it is not as bad as expected: the linear interpolation used to blend the steering forces and the underlying momentum applied by the locomotion combine to form relatively convincing movement (though not ideally smooth). The reactivity of the behavior remains a problem, however; it is not very humanlike, since human obstacle avoidance usually relies on higher-level prediction and planning.
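The blending of steering forces mentioned above can be sketched with a plain linear interpolation applied per component. The function names and the blend factor `t` are illustrative assumptions; forces are represented here as simple `(x, y)` tuples.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t in [0, 1]."""
    return a + (b - a) * t

def blend_steering(current, target, t=0.25):
    """Move the current steering force a fraction of the way toward
    the newly computed target force, smoothing out jagged direction
    changes from frame to frame. (Hypothetical sketch.)
    """
    return (lerp(current[0], target[0], t),
            lerp(current[1], target[1], t))
```

Applied every frame, this eases the steering toward each new target rather than snapping to it, which is where much of the visual smoothness comes from.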
Scalability— We modify the behavior by hacking extra lines into the code. This has been manageable so far, but as complexity increases the task will become increasingly troublesome. For more complex behaviors, where 10 or even 20 different situations need to be encoded, this approach would struggle.