With standard finite-state machines, there is little room for ambiguity. Therefore, a reliable approach to simulation would first convert the hierarchical state machine to a flat one. In this simpler case, stepping through the states is a well-defined process (especially if symbols are processed one by one).
Sadly, this discards some of the benefits of HSMs (such as abstraction and modularity). Also, a deterministically flattened hierarchy assumes that the process of simulating an HSM has been defined already. The following sections discuss different ways to simulate the HSM.
Master and Slave
Typically, the relationship between two nested finite-state machines is known as master/slave. The outer finite-state machine is the master, whereas the nested finite-state machine is the slave. This relationship implies that finite-state machines higher up the hierarchy are in control of the execution. Indeed, the master finite-state machine has the opportunity to override the slave.
In the simulation, the slave is simulated first and the master next, moving up the hierarchy (see Figure 41.5). This allows the master finite-state machine to override the output, and trigger a transition away from the nested state.
Figure 41.5. Flow of control in the hierarchy using the master/slave configuration. The slaves are processed first, allowing the master to override the output.
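This bottom-up pass can be sketched in a few lines. The `Machine` class and its `override` attribute below are illustrative assumptions, not taken from the text; the point is only the ordering — the slave is stepped first, and the master gets the last word on the output.

```python
# Minimal sketch of master/slave simulation order (names are illustrative):
# the slave is stepped first, then the master, which may override the
# slave's output and so take control of the behavior.

class Machine:
    def __init__(self, name, slave=None):
        self.name = name
        self.slave = slave          # nested (child) machine, if any
        self.override = None        # output forced by this machine, if set

    def step(self, symbol):
        output = None
        if self.slave is not None:
            output = self.slave.step(symbol)   # slave runs first...
        if self.override is not None:
            output = self.override             # ...master may override it
        return output if output is not None else f"{self.name}:{symbol}"

slave = Machine("patrol")
master = Machine("soldier", slave=slave)
print(master.step("noise"))      # slave's output: "patrol:noise"
master.override = "investigate"  # master takes control
print(master.step("noise"))      # "investigate"
```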
When using a mechanism for passing input/output symbols, conflicts are rare because all the messages are ordered and interpreted one by one. If there is ambiguity in the execution, the HSM is in fact nondeterministic and should be fixed by design (or using the automated procedure of Chapter 40, "Nondeterministic State Machines"). However, the procedural approach with hard-coded sensors can become troublesome—even as a deterministic model. Transitions at different levels of the hierarchy may be triggered "simultaneously."
There are guidelines for resolving this within the master/slave paradigm. If a conflict can arise (that is, ambiguity about what to do during execution), transitions affecting the master take priority. Because the slave is simulated first, any changes to nested finite-state machines are discarded if the parent finite-state machine takes a transition. (That is, the child finite-state machine is no longer active.)
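The priority rule can be demonstrated with a small sketch. The `FSM` class and the `resolve` helper are hypothetical names introduced here for illustration: the slave steps first, but if the master then takes a transition, the slave's change is rolled back because that child is no longer active.

```python
# Illustrative sketch of conflict resolution in the master/slave paradigm.

class FSM:
    def __init__(self, state, table):
        self.state = state
        self.table = table                   # (state, symbol) -> next state

    def next(self, symbol):
        return self.table.get((self.state, symbol), self.state)

def resolve(master, slave, symbol):
    """Slave steps first; a master transition discards the slave's change."""
    snapshot = slave.state
    slave.state = slave.next(symbol)         # slave goes first
    target = master.next(symbol)
    if target != master.state:               # master transition has priority
        slave.state = snapshot               # nested change is discarded
        master.state = target

master = FSM("idle", {("idle", "alarm"): "combat"})
slave = FSM("wander", {("wander", "alarm"): "hide"})
resolve(master, slave, "alarm")
print(master.state, slave.state)   # combat wander: slave's change discarded
```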
During the simulation, the active finite-state machine at each level can be stored on a stack. The most generic finite-state machine (at the root of the hierarchy) resides at the bottom of the stack, whereas the most detailed finite-state machine lives on the top. The process of zooming in to a state can be seen as a push operation, adding another finite-state machine to the stack. The process of zooming out is a pop operation, because the last finite-state machine is discarded off the stack, returning to the parent.
The finite-state machine in focus is the one currently on the top of the stack (see Figure 41.6). One common approach is to simulate only this finite-state machine until it decides to terminate. Termination can be signaled by reaching a certain state, or simply by issuing a pop command to remove itself from the stack.
Figure 41.6. A stack of finite-state machines and the corresponding hierarchy. The finite-state machine in focus is responsible for popping itself.
This has the advantage that only one finite-state machine needs to be simulated at a time. There's no need to know how many other levels there are in the hierarchy, or even how big the stack is. Careful design can ensure that each state machine is capable of removing itself off the stack.
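The stack discipline described above can be sketched as follows. The class names and the `"pop"`/`("push", machine)` return convention are assumptions made for this example: only the top machine is stepped, pushing corresponds to zooming in to a nested behavior, and popping corresponds to the machine terminating and returning control to its parent.

```python
# Sketch of a stack-based HSM simulation (illustrative names): only the
# machine on top of the stack is stepped; it can push a nested machine
# (zoom in) or pop itself off the stack (zoom out) when it terminates.

class StackHSM:
    def __init__(self, root):
        self.stack = [root]         # root machine lives at the bottom

    def step(self, symbol):
        top = self.stack[-1]        # machine currently in focus
        action = top.step(symbol)   # may return ("push", machine) or "pop"
        if action == "pop":
            self.stack.pop()        # machine removes itself off the stack
        elif isinstance(action, tuple) and action[0] == "push":
            self.stack.append(action[1])

class CountDown:
    """Toy machine that terminates (pops itself) after n steps."""
    def __init__(self, n):
        self.n = n
    def step(self, symbol):
        self.n -= 1
        return "pop" if self.n == 0 else None

hsm = StackHSM(CountDown(99))
hsm.stack.append(CountDown(2))      # zoom in to a nested behavior
hsm.step("tick")
hsm.step("tick")                    # nested machine pops itself
print(len(hsm.stack))               # 1: back to the root machine
```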
Levels of Detail
Each level in the hierarchy is an additional level of complexity; states are refined into substates, containing an expansion of the original task. As such, the different levels of the hierarchy add more detail to the behaviors.
Among the game AI community, there is a certain amount of interest in level-of-detail (LOD) solutions, providing varying quality of behaviors at different computational cost (see Figure 41.7). LOD schemes assume that it's possible to estimate the required detail for each nonplayer character (NPC). This is usually estimated from the distance to the creature, or from its visibility. Given this detail coefficient, the LOD technique should be capable of providing a trade-off between the quality of the behavior and the computational overhead.
HSMs are relatively well suited to this purpose. At design time, the designer can manually assign detail coefficients to each state machine (for instance, based on their computational cost). During the simulation, it's possible to keep track of the total detail in the HSM (that is, the sum of the detail values of all the finite-state machines in the stack). Comparing this total against the potential detail of a nested state enables us to decide when to refine that state. If there is already enough detail, the refinement is skipped and a simpler approximate behavior is used.
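A minimal sketch of this detail check, under the assumption that each machine carries a designer-assigned `detail` cost (the class and function names here are invented for illustration):

```python
# Illustrative detail check: a nested state is only refined (pushed onto
# the stack) if the total detail stays within the budget estimated for
# this NPC (e.g. from distance or visibility).

class Level:
    def __init__(self, detail):
        self.detail = detail        # designer-assigned detail coefficient

def current_detail(stack):
    return sum(m.detail for m in stack)

def maybe_refine(stack, nested, budget):
    if current_detail(stack) + nested.detail <= budget:
        stack.append(nested)        # enough budget: refine the state
        return True
    return False                    # otherwise use the approximation

stack = [Level(1), Level(2)]
print(maybe_refine(stack, Level(3), budget=10))  # True: 1+2+3 <= 10
print(maybe_refine(stack, Level(9), budget=10))  # False: approximation used
```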
The detail can also be controlled proactively, forcing changes in detail when necessary. If a character goes out of view, for example, the required detail can drop drastically. At this point, finite-state machines can be popped off the top of the stack until the detail level matches. Likewise, states can be pushed onto the stack if more detail is suddenly required. Sadly, both push/pop operations can lead to unwanted idiosyncrasies.
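The proactive case — popping machines off the top of the stack until the detail matches a suddenly reduced budget — might look like this (again with an invented `Level` class carrying the detail cost):

```python
# Illustrative sketch of proactively dropping detail: when the required
# detail falls (e.g. the character goes out of view), refined machines
# are popped off the top until the total detail fits the new budget.

class Level:
    def __init__(self, detail):
        self.detail = detail        # designer-assigned detail coefficient

def enforce_budget(stack, budget):
    # Never pop the root machine at the bottom of the stack.
    while len(stack) > 1 and sum(m.detail for m in stack) > budget:
        stack.pop()

stack = [Level(1), Level(2), Level(4)]   # total detail: 7
enforce_budget(stack, budget=3)          # character went out of view
print(len(stack))                        # 2: top machine popped, 1+2 <= 3
```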
It turns out that the intrinsic problem lies with losing detail, and not the technique itself. So most of the problem is about design: Is a LOD solution feasible at all, and if so, how do we lose detail gracefully? LOD behaviors created with HSMs suffer from this problem, too. It's taken the graphics community a couple of decades to handle visual LOD smoothly; essentially, choosing the right representation can reduce the appearance of the "popping" effect, notably allowing continuous LOD techniques. This is a long way off in AI because no convenient uniform representation can handle all aspects of AI (unlike triangles in computer graphics).
One of the advantages of HSMs is that the state machine can handle the changes in detail itself, as shown in Figure 41.8. Each state can consist of two nested states: one corresponds to the rest of the hierarchy, whereas the other is a single state providing an approximation of the behavior. The changes in LOD can be seen as a transition between these two nested states, using a threshold test as the condition. To handle the inconsistencies in changing detail, these transitions can be connected to temporary states responsible for ensuring the graceful degradation (or addition) of the information.
Figure 41.8. Zooming in to a state capable of handling LOD transitions itself. The two states correspond to full detail and approximation, while intermediate states take care of graceful degradation. This could be extended to handle multiple levels of detail.
Essentially, it's possible to achieve LOD smoothly with HSM under the following conditions:
HSMs are relatively simple to deal with as long as the simulation process is well defined. The previous pages describe the most popular interaction semantics. However, these approaches may not always be suitable. In such cases, spending time defining the interaction within the HSM is important to prevent nasty surprises.
One useful way to do this right is to abstract the internal processing of each state. This is the approach used by heterogeneous HSMs. Essentially, each state provides a function to step the execution, which must return within a finite time.
All the previous semantics can be reproduced, but this approach also allows custom processing algorithms to be inserted instead:
So the processing is straightforward to abstract out. All we need is a base finite-state machine class, providing a step function. To create custom HSM semantics, we override the default step and manipulate the nested states as appropriate.
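A minimal sketch of this base class, with illustrative names (`BaseMachine`, `SlaveFirst` are assumptions, not from the text): the base class only promises a `step` function that returns in finite time, and custom semantics are built by overriding it and driving the nested states as appropriate — here, reproducing the slave-first ordering from earlier in the chapter.

```python
# Sketch of a heterogeneous HSM: every machine exposes step(), and custom
# semantics override it to manipulate the nested states as appropriate.

class BaseMachine:
    def step(self, symbol):
        raise NotImplementedError   # each machine defines its own stepping

class SlaveFirst(BaseMachine):
    """Custom semantics: step all nested machines, then this one."""
    def __init__(self, nested=()):
        self.nested = list(nested)

    def step(self, symbol):
        for machine in self.nested:
            machine.step(symbol)     # slaves are processed first
        return self.process(symbol)  # master's own processing comes last

    def process(self, symbol):
        return symbol                # default: pass the symbol through

class Logger(BaseMachine):
    """Toy nested machine that records the symbols it sees."""
    def __init__(self):
        self.seen = []
    def step(self, symbol):
        self.seen.append(symbol)

nested = Logger()
hsm = SlaveFirst([nested])
print(hsm.step("tick"))   # "tick": nested machine stepped first
print(nested.seen)        # ["tick"]
```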