Sketching Possible Options
Before going any further, it's worth brainstorming feasible options for every aspect of the problem, keeping the range of choices as wide as possible. Each alternative need not be precisely defined; a sketch will do.
The main thing to consider is how to model the animat's state in space. Both position and orientation are important concepts when it comes to movement. Keep in mind, however, that neither may be explicitly required by the AI! The animat does not necessarily need to know its position or orientation to perform well, because it can rely on perceptions from the environment. Indeed, the context implicitly affects both the inputs and outputs provided to the animat, which can be encoded in different formats, as depicted in Figure 8.1.
Figure 8.1. An agent and an object, represented with both absolute coordinates, where the world has the reference origin (top), and relative coordinates, where the agent is the reference origin (bottom).
Both position and orientation attributes can be encoded in different ways. This is purely an AI design issue, because the engine may store this information in yet another format. Additionally, different formats may be chosen to represent each of these quantities, as mentioned in Chapter 5, "Movement in Game Worlds":

- Position can be expressed in absolute coordinates, with the world as the reference origin, or in relative coordinates, with the agent itself as the origin (as shown in Figure 8.1).
- Orientation can likewise be stored as an absolute heading or as a rotation relative to the agent's current facing.
Similar properties can be applied to the senses and actions (see Figure 8.2), as well as the underlying concepts.
The motor actions required for movement can be designed at different levels of abstraction; for example, does the AI request movement step by step, or as full paths? Lower levels of abstraction present simple commands such as "turn left" or "step forward." These may be provided explicitly as actual functions, or more implicitly by taking parameters. A high level of abstraction would offer actions such as "move to" and "turn toward."
It may be possible for the animat to do without the option of turning. Turning can either be implicit in the move functions (turning toward the direction of movement automatically), or we can assume a constant orientation. The second option isn't as realistic, but it is much simpler.
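To make these abstraction levels concrete, here is a minimal sketch in Python. All the names (`Animat`, `turn`, `step`, `move_to`) are hypothetical, not from any particular engine; the point is that the high-level `move_to` makes turning implicit by facing the target automatically.

```python
import math

class Animat:
    """Minimal agent state: position (x, y) and heading in radians."""
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    # --- Low-level commands: one primitive action per call ---
    def turn(self, angle):
        """Rotate by a relative angle (radians)."""
        self.heading = (self.heading + angle) % (2 * math.pi)

    def step(self, distance=1.0):
        """Move forward along the current heading."""
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    # --- High-level command: turning is implicit in the move ---
    def move_to(self, tx, ty):
        """Face the target automatically, then step toward it."""
        dx, dy = tx - self.x, ty - self.y
        self.turn(math.atan2(dy, dx) - self.heading)  # auto-turn
        self.step(math.hypot(dx, dy))

a = Animat()
a.move_to(3.0, 4.0)   # one high-level call replaces a turn plus a step
```

With the low-level interface, the AI would have to compute the angle and distance itself; the high-level interface moves that code to the engine side of the boundary, which is exactly the trade-off discussed next.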
As you can see in Figure 8.2, discrete moves have limited directions and step sizes (top left), and discrete turns are restricted to 90-degree rotations (top right). Continuous steps instead have any magnitude or direction (bottom right), and continuous turns can rotate by arbitrary angles (bottom left).
Naturally, there is a trade-off between simplicity of implementation (on the engine side) and ease of use (by the AI)—so there may be more code on either side of the interface.
Choosing this level of abstraction is similar to the design decision about the commands given to human players. First-person control schemes generally strike a balance between flexibility of control and intuitiveness. The movement interface between the engine and the AI can theoretically be handled in the same fashion as human player input, which actually reduces the work needed to integrate the AI into the game.
As previously emphasized, a sense of free space is needed for the animat to perform navigation. If a simplified version of the environment is not provided before the game starts, the animat can acquire it online with two different queries:

- A point query, checking whether a single location lies in free space.
- A line query, checking whether the straight segment between two points is unobstructed.
The animat can decide where to go based on this information, but more elaborate queries of the environment can be devised to simplify its task (that is, ones based on the expected movement of the animat). On the other hand, much simpler sensors can be used instead, or even in conjunction with the ones just listed.
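The point and line queries above can be sketched as follows. The grid world and function names are hypothetical, and the line query is approximated by sampling points along the segment, where a real engine would trace the geometry exactly.

```python
# A hypothetical grid world: True marks an obstacle cell.
GRID = [
    [False, False, True ],
    [False, True,  False],
    [False, False, False],
]

def point_query(x, y):
    """Point query: is this single location in free space?"""
    if not (0 <= y < len(GRID) and 0 <= x < len(GRID[0])):
        return False  # outside the world counts as blocked
    return not GRID[y][x]

def line_query(x0, y0, x1, y1, samples=16):
    """Line query: is the segment between two points unobstructed?
    Approximated by sampling; a real engine would trace exactly."""
    for i in range(samples + 1):
        t = i / samples
        x = round(x0 + t * (x1 - x0))
        y = round(y0 + t * (y1 - y0))
        if not point_query(x, y):
            return False
    return True

print(point_query(1, 1))       # the center cell is an obstacle
print(line_query(0, 0, 0, 2))  # the left column is clear
```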
It is possible to understand the layout of the environment using just contact sensors and position tracking (see Figure 8.3), although terrain learning is then required to prevent collisions. However, this approach is less realistic because it relies on trial and error rather than prediction: the animat finds out about obstacles only when it is too late.
Figure 8.3. Three different kinds of sensors to allow the animats to sense their environment, using point and line queries or even collision indicators.
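The trial-and-error nature of contact sensing can be sketched as follows. The `TerrainLearner` class and `contact_sensor` callback are hypothetical illustrations: the animat only records an obstacle after bumping into it, which is precisely why this approach needs terrain learning to prevent future collisions.

```python
class TerrainLearner:
    """Learns the layout by trial and error: an obstacle is recorded
    only after the contact sensor fires, i.e., after bumping into it."""
    def __init__(self):
        self.known_obstacles = set()

    def try_move(self, pos, direction, contact_sensor):
        """Attempt a step; on contact, remember the blocked cell."""
        target = (pos[0] + direction[0], pos[1] + direction[1])
        if contact_sensor(target):        # found out too late: we hit it
            self.known_obstacles.add(target)
            return pos                    # the move fails
        return target

# Hypothetical world with a single obstacle at (1, 0).
world = {(1, 0)}
sensor = lambda cell: cell in world

learner = TerrainLearner()
pos = learner.try_move((0, 0), (1, 0), sensor)  # bump: stays at (0, 0)
print(pos, learner.known_obstacles)
```

After the bump, `(1, 0)` is in `known_obstacles`, so future planning can route around it, but only because the collision already happened once.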
Historically, game AI developers just used relative movement to detect collisions: if the creature hasn't moved forward, an obstacle must have been hit. This is a consequence of poor interface design rather than an efficiency measure; the physics engine already knows about collisions, yet the AI must resort to hacks to rediscover them. A better interface would simply pass this existing information from the physics system to the AI.
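The two approaches can be contrasted in a short sketch. Both functions here are hypothetical: the first reproduces the historical position-delta hack, and the second shows a `physics_step` interface that simply returns the collision flag the physics system already computed.

```python
EPSILON = 1e-6

# The hack: infer a collision by comparing positions before and after.
def collided_by_position_delta(before, after):
    """If the creature hasn't moved forward, assume an obstacle was hit."""
    return abs(after[0] - before[0]) < EPSILON and \
           abs(after[1] - before[1]) < EPSILON

# The better interface: the physics step already knows, so report it.
def physics_step(pos, velocity, blocked):
    """Hypothetical engine call returning (new_position, collision_flag)."""
    if blocked:
        return pos, True          # collision info passed straight to the AI
    new_pos = (pos[0] + velocity[0], pos[1] + velocity[1])
    return new_pos, False

pos, hit = physics_step((0.0, 0.0), (1.0, 0.0), blocked=True)
print(hit)                                          # no guesswork needed
print(collided_by_position_delta((0.0, 0.0), pos))  # the hack agrees here
```

The hack also misfires in edge cases the direct flag avoids, such as sliding along a wall, where position changes even though a collision occurred.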