Because the big picture is now much clearer, this is a good time to take a retrospective look at the knowledge and experience we've acquired.
The motion of the animats is relatively simple because it does not use any global knowledge of the terrain. For this reason, such reactive movement does not seem as purposeful as movement with planning, although many tricks can make the characters' behaviors more persistent.
This kind of movement is often satisfactory, because the majority of AI characters do not actually need to move far—especially in single-player games. Efficiency, reliability, and realism are the major advantages of this solution. The illusion of intelligence is unlikely to be shattered by careless maneuvers (for instance, getting stuck in doors) in front of human players. That said, it's a good idea to assume that the behaviors may still go wrong, and to plan recovery mechanisms accordingly (for instance, preventing spinning during wall following).
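One simple recovery mechanism is to watch for lack of progress: if the animat's net displacement over its recent history is near zero, it is probably stuck or spinning, and a recovery behavior can take over. The sketch below is illustrative only; the helper name and thresholds are assumptions, not part of any particular engine.

```python
import math

def is_stuck(position_history, min_progress=0.5):
    """Hypothetical stuck/spin detector: compare net displacement over the
    recent position history against a minimum-progress threshold.
    The threshold value is illustrative, not canonical."""
    if len(position_history) < 2:
        return False
    (x0, y0) = position_history[0]
    (x1, y1) = position_history[-1]
    # Net displacement between the oldest and newest recorded positions.
    return math.hypot(x1 - x0, y1 - y0) < min_progress

# Usage: feed it the last second or so of positions each frame.
history = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0), (0.1, 0.05)]
is_stuck(history)  # → True: almost no net progress, so trigger recovery
```

Note that spinning during wall following is caught by exactly this check: the animat keeps moving, but its net displacement stays small.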
Steering behaviors shine in their simplicity. They can be applied within a few minutes to a problem that has been prepared in advance. Two problems remain, however: fine-tuning the parameters during the experimentation phase, and combining behaviors to handle more complex tasks.
Rule-based systems come to the rescue for larger problems (and more general ones), providing a flexible and modular approach—at a relatively small development cost. However, the acquisition of knowledge still proves to be a huge problem that cannot be circumvented. It also seems that rule-based systems are not particularly suited to low-level control, because it can be a challenge to get smooth output with a manageable number of rules.
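The modularity is easy to see in a toy example. The sketch below pairs conditions (predicates over a working memory) with actions and fires the first match; every name and rule here is hypothetical, chosen only to show the structure. It also illustrates the limitation: each rule emits a discrete action, so producing smooth continuous control would require an impractical number of rules.

```python
def run_rules(memory, rules):
    """Fire the first rule whose condition matches the working memory.
    Rule ordering doubles as a simple priority scheme."""
    for condition, action in rules:
        if condition(memory):
            return action
    return "idle"  # default action when no rule matches

# Hypothetical rules: each is independent, so adding or removing one
# does not require touching the others -- the modularity advantage.
rules = [
    (lambda m: m.get("enemy_visible") and m.get("health", 100) < 25, "flee"),
    (lambda m: m.get("enemy_visible"), "attack"),
    (lambda m: m.get("heard_noise"), "investigate"),
]

run_rules({"enemy_visible": True, "health": 10}, rules)  # → "flee"
```

Ordering the rules from most to least specific gives a cheap conflict-resolution strategy, but the hard part remains writing down the knowledge in the first place.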