Modeling Memory and Learning with Software

Other elements of a good AI are memory and learning. As the AI-controlled creatures in your game run around, they are controlled by state machines, conditional logic, patterns, random numbers, probability distributions, and so forth. However, they always think on the fly. They never look at past history to help make a decision.

For example, what if a creature was in attack mode, and the player kept dodging to the right and the creature kept missing? You'd want the creature to track the player's motions and remember that the player moves right during every attack, and maybe change its targeting a little to compensate.
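A minimal sketch of that kind of dodge tracking in C++ (the DodgeMemory type, its fields, and the -1/0/+1 encoding are all illustrative assumptions, not code from any demo):

```cpp
#include <array>

// Hypothetical dodge memory: counts how often the player dodged
// left (-1), stayed put (0), or dodged right (+1) during past attacks.
struct DodgeMemory {
    std::array<int, 3> counts{};   // [left, none, right]

    void Record(int dodge)         // dodge is -1, 0, or +1
    {
        counts[dodge + 1]++;
    }

    // Aim offset: lead the shot toward the player's favorite dodge.
    int AimBias() const
    {
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (counts[i] > counts[best])
                best = i;
        return best - 1;           // -1 = aim left, +1 = aim right
    }
};
```

After a few attacks, AimBias() returns the direction the player dodges most often, and the targeting code can nudge its lead in that direction.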

As another example, imagine that your game forces creatures to find ammo just as the player does. However, every time the creature wants ammo, it has to search randomly for it (maybe with a pattern). Wouldn't it be more realistic if the AI could remember where ammo was found last and try that position first?

These are just a couple of examples of using memory and learning to make game AI seem more intelligent. Frankly, implementing memory is easy, but few game programmers ever do it because they lack the time or feel it's not worth the effort. No way! Memory and learning are very cool, and your players will notice the difference. It's worth finding areas where simple memory and learning can be implemented with reasonable ease and have a visible effect on the AI's decision-making.

That's the general idea of memory, but how exactly do you use it in a game? It depends on the situation. For example, take a look at Figure 12.13.

Figure 12.13. Using geographical-temporal memory.


Here you see a map of a game world, with a record attached to each room. These records store the following information:

Kills

Damage from player

Ammo found

Time in room

Every time the creature runs through its AI and you want a more robust selection process based on memory and learning, you refer to the record of events—the creature's memory of the room. For example, when the creature enters a room, you might check whether it has sustained a great deal of damage in that room. If so, it might back out and try another room.
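Using the record fields listed above, the per-room memory might be sketched like this (the struct, field names, and damage threshold are illustrative assumptions):

```cpp
// Illustrative per-room memory record.
struct RoomRecord {
    int kills;          // kills the creature scored in this room
    int damage;         // damage taken from the player here
    int ammo_found;     // total ammo picked up here
    int time_in_room;   // cycles spent in this room
};

const int NUM_ROOMS    = 16;   // assumed world size
const int DAMAGE_LIMIT = 50;   // assumed "too dangerous" threshold

RoomRecord room_memory[NUM_ROOMS] = {};

// Should the creature back out of this room?
bool RoomTooDangerous(int room)
{
    return room_memory[room].damage > DAMAGE_LIMIT;
}
```

The AI updates room_memory as events happen (a kill, a hit from the player, an ammo pickup), then consults it whenever a decision depends on past experience.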

For another example, the creature might run out of ammo. Instead of hunting randomly for more ammo, the creature could scan through its memory of all the rooms it has been to and see which one had the most ammo lying around. Of course, the AI has to update the memory every few cycles for this to work, but that's simple to do.
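That scan is just a linear pass over the remembered ammo counts. A sketch, with the array layout assumed for illustration:

```cpp
// Illustrative: each room's remembered ammo count.
const int NUM_ROOMS = 16;
int ammo_memory[NUM_ROOMS] = {};

// Return the room where the creature remembers the most ammo,
// or -1 if it has never seen any ammo at all.
int BestAmmoRoom()
{
    int best = -1, best_ammo = 0;
    for (int room = 0; room < NUM_ROOMS; room++)
    {
        if (ammo_memory[room] > best_ammo)
        {
            best_ammo = ammo_memory[room];
            best = room;
        }
    }
    return best;
}
```

The -1 result matters: a creature that has never seen ammo has to fall back to random hunting, just as before.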

In addition, you can let creatures exchange information! For example, if one creature bumps into another in a hallway, they can merge memory records and learn about each other's travels. Or maybe the stronger creature could perform a force upload on the weaker creature, since the stronger one obviously has a better set of probabilities and experience and is a better survivor. Moreover, if one creature knows the player's last known position, it can influence the other creature's memory with that information and they can converge on the player.
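A merge like that can be as simple as keeping the larger remembered value for each room. This symmetric sketch is just one possible policy (a "force upload" would instead copy the stronger creature's records wholesale):

```cpp
const int NUM_ROOMS = 16;   // assumed world size

// Illustrative merge: when two creatures meet, each keeps the
// "stronger" memory of every room -- here, simply the larger value.
void MergeMemories(int mem_a[], int mem_b[])
{
    for (int room = 0; room < NUM_ROOMS; room++)
    {
        int best = (mem_a[room] > mem_b[room]) ? mem_a[room] : mem_b[room];
        mem_a[room] = best;   // both creatures walk away with the
        mem_b[room] = best;   // union of what they knew
    }
}
```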

There's no limit to the kinds of things you can do with memory and learning. The tricky part is working them into the AI in a fair manner. For example, it's unfair to let the game AI see the whole world and memorize it. The AI should have to explore it just like the player does.

TIP

Many game programmers like to use bit strings or vectors to memorize data. This is much more compact, and it's easy to flip single bits, simulating memory loss or degradation.
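For example, using C++'s std::bitset, one bit per world cell (the cell layout and degradation policy here are assumptions for illustration):

```cpp
#include <bitset>
#include <cstdlib>

// Illustrative bit-string memory: one bit per world cell, set when
// the creature has explored that cell.
const int NUM_CELLS = 64;
std::bitset<NUM_CELLS> visited;

// Simulate memory loss: randomly clear a handful of bits.
void DegradeMemory(int num_bits)
{
    for (int i = 0; i < num_bits; i++)
        visited.reset(std::rand() % NUM_CELLS);
}
```

Sixty-four cells fit in a single 8-byte word, so the whole memory can be copied, merged with bitwise OR, or degraded with a few bit flips at almost no cost.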


As an example of memory, I've created a little a-life ant simulation, DEMO12_7.CPP|EXE (16-bit version, DEMO12_7_16B.CPP|EXE), shown in Figure 12.14.

Figure 12.14. An ant-based memory demo.


The simulation starts off with a number of red ants and piles of blue food. The ants walk around randomly until they find a pile of food. When they do, they eat the food until they're full, and then they roam around again. When the ants get hungry again, they remember where they last found food and head for it (if there's any left).

In addition, when two ants bump into each other, they exchange knowledge about their travels. If an ant can't find food in time, it dies a horrible death. (Watch the simulation; it's a trip.) You can change the number of ants according to your system's processing power. Right now there are 16, but there's only enough room to display the state information and memory images for the first 8. This information is shown on the right side of the screen, detailing the current state, hunger level, hunger tolerance, and a couple of the internal counters.

If you want to add something even more involved, enable the ants to leave waste and create a cyclic system that won't run out of food.
