
Modeling Behavioral State Systems

At this point, you have seen quite a few finite state machines in various forms—code to make lights blink, the main event loop state machines, and so forth. Now I want to formalize how FSMs (finite state machines) are used to generate AIs that exhibit intelligence.

To create a truly robust FSM, you need two properties:

  • A reasonable number of states, each of which represents a different goal or motive.

  • Lots of input to the FSM, such as the state of the environment and the other objects within the environment.

The premise of "a reasonable number of states" is easy enough to understand and appreciate. We humans have hundreds, if not thousands, of emotional states, and within each of these we may have further substates. The point is that a game character should, at the very least, be able to move around freely. For example, you may set up the following states:

State 1: Move forward.

State 2: Move backward.

State 3: Turn.

State 4: Stop.

State 5: Fire weapon.

State 6: Chase player.

State 7: Evade player.

States 1 to 4 are straightforward, but states 5, 6, and 7 might need substates to be properly modeled. This means that there may be more than one phase to states 5, 6, and 7. For example, chasing the player might involve turning and then moving forward. Take a look at Figure 12.9 to see the concept of substates illustrated. However, don't assume that substates must be based on states that actually exist—they may be totally artificial for the state in question.

Figure 12.9. A master FSM with substates.


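To make the idea of substates concrete, here's a minimal sketch in the same C style as the rest of the chapter. It breaks a "chase" master state into two phases, turning toward the player and then closing the distance. The function and variable names here (`Do_Chase`, `chase_substate`, and so on) are my own illustrations, not part of the text's code:

```c
// hypothetical substates for a "chase player" master state
#define SUB_TURNING  0
#define SUB_CLOSING  1

int chase_substate = SUB_TURNING;

// runs one tick of the chase; returns 1 while the chase
// is still in progress, 0 once the player is reached
int Do_Chase(int *angle, int target_angle, int *x, int player_x)
{
    switch (chase_substate)
        {
        case SUB_TURNING:
            // phase 1: rotate toward the player one step per tick
            if (*angle < target_angle) (*angle)++;
            else if (*angle > target_angle) (*angle)--;
            else chase_substate = SUB_CLOSING; // facing player, start moving
            break;

        case SUB_CLOSING:
            // phase 2: move forward until we reach the player
            if (*x < player_x) (*x)++;
            else if (*x > player_x) (*x)--;
            else return 0; // chase complete
            break;
        } // end switch

    return 1;
}
```

Notice that neither substate exists anywhere in the master state list; they're artificial states that exist only inside the chase behavior, which is exactly the point made above.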
The point of this discussion of states is that the game object needs to have enough variety to do "intelligent" things. If the only two states are stop and forward, there isn't going to be much action! Remember those stupid remote-control cars that went forward and then reversed in a left turn? What fun was that?

Moving on to the second property of robust FSM AIs, you need to have feedback or input from the other objects in the game world and from the player and environment. If you simply enter a state and run it until completion, that's pretty dumb. The state may have been selected intelligently, but that was 100 milliseconds ago. Now things have changed, and the player just did something that the AI needs to respond to. The FSM needs to track the game state and, if needed, be preempted from its current state into another one.

If you take all this into consideration, you can create an FSM that models commonly experienced behaviors such as aggression, curiosity, and so on. Let's see how this works with some concrete examples, beginning with simple state machines and following up with more advanced personality-based FSMs.

Elementary State Machines

At this point, you should be seeing a lot of overlap in the various AI techniques. For example, the pattern techniques are based on finite state machines at the lowest level, which perform the actual motions or effects. What I want to do now is take finite state machines to another level and talk about high-level states that can be implemented with simple conditional logic, randomness, and patterns. In essence, I want to create a virtual brain that directs and dictates to the creature.

To better understand what I'm talking about, let's take a few behaviors and model them with the aforementioned techniques. On top of these behaviors, we'll place a master FSM to run the show and set the general direction of events and goals.

Most games are based on conflict. Whether conflict is the main idea of the game or it's just an underlying theme, the bottom line is that most of the time the player is running around destroying the enemies and/or blowing things up. As a result, we can arrive at a few behaviors that a game creature might need to survive given the constant onslaught of the human opponent. Take a look at Figure 12.10, which illustrates the relationships between the following states:

Master State 1: Attack.

Master State 2: Retreat.

Master State 3: Move randomly.

Master State 4: Stop or pause for a moment.

Master State 5: Look for something—food, energy, light, dark, other computer-controlled creatures.

Master State 6: Select a pattern and follow it.

Figure 12.10. Building a better brain.


You should be able to see the difference between these states and the previous examples. These states function at a much higher level, and they definitely contain possible substates or further logic to generate them. For example, states 1 and 2 can be accomplished using a deterministic algorithm, while states 3 and 4 are nothing more than a couple of lines of code. On the other hand, state 6 is very complex because it dictates that the creature must be able to perform complex patterns controlled by the Master FSM.

As you can see, your AI is getting fairly sophisticated. State 5 could be yet another deterministic algorithm, or even a mix of deterministic algorithms and preprogrammed search patterns. The point is that you want to model a creature from the top down; that is, first think of how complex you want the AI of the creature to be, and then implement each state and algorithm.

If you refer back to Figure 12.10, you also see that in addition to the Master FSM, which selects the states themselves, there's another part of the AI model that's doing the selection. This is similar to the "will" or "agenda" of the creature. There are a number of ways to implement this module, such as random selection, conditional logic, or something else. For now, just know that the states must be selected in an intelligent manner based on the current state of the game.

The following code fragment implements a crude version of the master state machine. The code is only partially functional because a complete AI would take many pages, but the most important structural elements are there. Basically, you fill in all the blanks and details, generalize, and drop it into your code. For now, just assume that the game's world consists of the AI creature and the player. Here's the code:

// these are the master states
#define STATE_ATTACK  0    // attack the player
#define STATE_RETREAT 1    // retreat from player
#define STATE_RANDOM  2    // move randomly
#define STATE_STOP    3    // stop for a moment
#define STATE_SEARCH  4    // search for energy
#define STATE_PATTERN 5    // select a pattern and execute it

// variables for creature
int creature_state   = STATE_STOP, // state of creature
    creature_counter = 0,   // used to time states
    creature_x       = 320, // position of creature
    creature_y       = 200,
    creature_dx      = 0,   // current trajectory
    creature_dy      = 0;

// player variables
int player_x = 10,
    player_y = 20;

// position of nearest energy pellet (tracked by STATE_SEARCH)
int energy_x = 500,
    energy_y = 300;

// main logic for creature
// process current state
switch (creature_state)
    {
    case STATE_ATTACK:
          {
          // step 1: move toward player
          if (player_x > creature_x) creature_x++;
          if (player_x < creature_x) creature_x--;
          if (player_y > creature_y) creature_y++;
          if (player_y < creature_y) creature_y--;

          // step 2: try and fire cannon, 20% probability
          if ((rand()%5)==1)
              Fire_Cannon(); // assume this is defined elsewhere

          } break;

    case STATE_RETREAT:
           {
           // move away from player
           if (player_x > creature_x) creature_x--;
           if (player_x < creature_x) creature_x++;
           if (player_y > creature_y) creature_y--;
           if (player_y < creature_y) creature_y++;

           } break;

    case STATE_RANDOM:
           {
           // move creature in the random direction
           // that was set when this state was entered
           creature_x+=creature_dx;
           creature_y+=creature_dy;

           } break;

    case STATE_STOP:
            {
            // do nothing!
            } break;

    case STATE_SEARCH:
            {
            // pick an object to search for, such as an
            // energy pellet, and then track it just as
            // you track the player
            if (energy_x > creature_x) creature_x++;
            if (energy_x < creature_x) creature_x--;
            if (energy_y > creature_y) creature_y++;
            if (energy_y < creature_y) creature_y--;

            } break;

    case STATE_PATTERN:
            {
            // continue processing pattern
            } break;

    default: break;

    } // end switch

// update state counter and test if a state transition is
// in order
if (--creature_counter <= 0)
    {
    // pick a new state; use logic, randomness, a script, etc.
    // for now, just random
    creature_state = rand()%6;

    // now depending on the state, we might need some
    // setup...
    if (creature_state == STATE_RANDOM)
        {
        // set up random trajectory
        creature_dx = -4+rand()%8;
        creature_dy = -4+rand()%8;
        } // end if

    // perform setups on other states if needed

    // set time to perform state, use appropriate method...
    // at 30 fps, 1 to 5 seconds for the state
    creature_counter = 30 + 30*(rand()%5);

    } // end if

Let's talk about the code. To begin with, the current state is processed. This involves local logic, algorithms, and even function calls to other AIs, such as pattern processing. After the state has been processed, the state counter is updated and the code tests to see if the state is complete. If so, a new state is selected. If the new state needs to be set up, the setup is performed. Finally, a new state count is selected using a random number and the cycle continues.

There are a lot of improvements that you can make. You could mix the state transitions with the state processing, and you might want to use much more involved logic to make state transitions and decisions.
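As a sketch of what "more involved logic" might look like, the purely random `rand()%6` selection could be replaced with a function that checks the game state first and only falls back to randomness. The `hit_points` variable and the distance threshold below are illustrative assumptions, not part of the chapter's code:

```c
#include <stdlib.h>

#define STATE_ATTACK  0    // attack the player
#define STATE_RETREAT 1    // retreat from player

// pick the next master state based on the creature's condition;
// falls back to a random choice as in the main loop
int Select_State(int hit_points, int dist_to_player)
{
    // badly wounded? always run, regardless of the state timer
    if (hit_points < 20)
        return STATE_RETREAT;

    // healthy and close to the player? press the attack
    if (dist_to_player < 100)
        return STATE_ATTACK;

    // otherwise, choose randomly among all six states
    return rand() % 6;
}
```

A check like this can also run every frame, before the state switch, to preempt the current state the moment conditions change, rather than waiting for the counter to expire.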

Adding More Robust Behaviors with Personality

A personality is nothing more than a collection of predictable behaviors. For example, I have a friend with a very "tough guy" personality. I can guarantee that if you say something that he doesn't like, he'll probably let you know with a swift blow to the head. Furthermore, he's very impatient and doesn't like to think that much. On the other hand, I have another friend who's very small and wimpy. He has learned that due to his size, he can't speak his mind because he might get smacked. So he has a much more passive personality.

Of course, human beings are a lot more complex than these examples suggest, but these are still adequate descriptions of those people. Thus, you should be able to model personality types using logic and probability distributions that track a few behavioral traits and place a probability on each. This probability graph can be used to make state transitions. Take a look at Figure 12.11 to see what I'm talking about.

Figure 12.11. Personality distribution for basic behavioral states.


There are four states or behaviors in this model:

State 1: Attack

State 2: Retreat

State 3: Stop

State 4: Random

Instead of selecting a new state at random as before, you create a probability distribution that defines the personality of each creature as a function of these states. For example, Table 12.2 illustrates the probability distributions of my friends Rex (the tough one) and Joel (the wimpy one).

Table 12.2. Personality Probability Distributions
State Rex p(x) Joel p(x)
ATTACK 50% 15%
RETREAT 20% 40%
STOP 5% 30%
RANDOM 25% 15%

If you look at the hypothetical data, it seems to make sense. Rex likes to attack without thinking, while Joel thinks much more and likes to run if he can. In addition, Rex isn't that much of a planner, so he does a lot of random things—smashes walls, eats glass, and cheats on his girlfriend—whereas Joel knows what he's doing most of the time.

This entire example has been totally artificial, and Rex and Joel don't really exist. But I'll bet that you have a picture of Rex and Joel in your head, or you know people like them. Hence, my supposition is true—the outward behaviors of a person define their personality as perceived by others (at least in a general way). This is a very important asset to your AI modeling and state selection.

To use this technique of probability distribution, you simply set up a table that has, say, 20–50 entries (where each entry is a state), and then fill the table so that the probabilities are what you want. When you select a new state, you'll get one that has a little personality in it. For example, here's Rex's probability table in the form of a 20-element array—that is, each element has a 5 percent weight:

int rex_pers[20] = {1,1,1,1,1,1,1,1,1,1,2,2,2,2,3,4,4,4,4,4};
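Drawing a state from the table then takes a single indexed lookup with a uniform random index; because every entry carries the same 5 percent weight, the lookup reproduces the distribution exactly. Here's a self-contained sketch (repeating Rex's table so the fragment compiles on its own):

```c
#include <stdlib.h>

// Rex's table from above: 1=attack, 2=retreat, 3=stop, 4=random
int rex_pers[20] = {1,1,1,1,1,1,1,1,1,1,2,2,2,2,3,4,4,4,4,4};

// draw a state from a 20-entry personality table;
// each entry carries a 5% weight, so a uniform index
// reproduces the desired probability distribution
int Select_Personality_State(int *table)
{
    return table[rand() % 20];
}
```

To give a creature a different personality, you simply hand this function a different table; the selection code itself never changes.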

In addition to this technique, you might want to add radii of influence. This means that you switch probability distributions based on some variable, like distance to the player or some other object, as shown in Figure 12.12. The figure illustrates that when the game creature gets too far away, it switches to a non-aggressive search mode instead of the aggressive combat mode used when it's in close quarters. In other words, another probability table is used.

Figure 12.12. Switching personality probability distribution based on distance.


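A radius-of-influence switch can be as simple as choosing which table to index based on distance. The two tables and the 200-unit threshold below are assumptions for illustration, not values from the text; the far table is search-heavy and contains no attack entries at all:

```c
#include <stdlib.h>

// hypothetical distributions, entries are state numbers 1..4
// (1=attack, 2=retreat, 3=stop, 4=random)
int close_pers[20] = {1,1,1,1,1,1,1,1,1,1,1,1,2,2,3,4,4,4,4,4};
int far_pers[20]   = {2,2,2,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4};

// pick the personality table based on range to the player,
// then draw a state from it as usual
int Select_State_By_Range(int dist_to_player)
{
    // assumed threshold: aggressive inside 200 units, passive outside
    int *table = (dist_to_player < 200) ? close_pers : far_pers;
    return table[rand() % 20];
}
```

Nothing limits you to two radii; you could keep an array of tables and index it by distance band, giving the creature a smoothly graded temperament.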