Understanding behavior

Why AI characters matter

About 50% of games today feature non-player characters (NPCs). Sometimes they are just images used to drive a story, but more often they are animated 2D or 3D characters controlled by some form of AI.

Among gamers and industry professionals, there's a broad consensus that AI characters haven't seen fundamental improvements in quality since roughly 2005. The result is that they still feel scripted, uncanny and robotic.

At VIRTUAL BEINGS, we believe that this lack of innovation is a creative bottleneck for games and virtual worlds as well as an opportunity for those willing to approach the problem with a fresh perspective.

The 12 principles of behavior

Almost every student of character animation today learns about Disney's famous 12 principles of animation. They are timeless laws for how to make animated characters feel believable and engaging.

In many ways, the art of creating AI characters can be seen as an extension of character animation. Disney's principles still apply, yet they also need to be extended because AI characters interact with the virtual world they inhabit.

Based on many years of academic research and in-house R&D, we have identified 12 principles of behavior. They apply not only to the behavior of real animals (including humans), but also to that of cartoony or stylized ones.

KuteEngine, our in-house tech for behavioral AI, powers 3D characters that live up to all 24 principles (Disney's 12 plus the 12 described below). This allows them to feel alive and to act like first-class citizens of the virtual world they inhabit.

1. Behavior is observable

At first sight, behavior seems to be about muscles that move. Let's say you're sitting in a fancy restaurant, waiting for your date to show up. Your fingers are nervously tapping on the table and your heart (also a muscle) is racing.

Does that mean you're 'doing' two things at the same time here? Not quite. Living bodies are full of complicated stuff doing complicated things, but most of this isn't perceivable from the outside. For our purposes, behavior includes only events that are observable without special instruments (such as an MRI scanner). So if your racing heart contributes to your overall nervousness and you end up knocking over your glass of orange juice and ruining your shirt - that would be observable, hence behavior.

For behavioral AI, this first principle entails a welcome simplification: We don't have to try to recreate life itself, just its appearance. Disney called this the illusion of life. For our purposes, we need to go one step further and chase the illusion of interactive life.

2. Behavior is continuous

Living beings behave all the time, from birth all the way to their death. Our language recognizes this by providing us with an arsenal of terms we can apply to someone who isn't showing any movement or making any audible sound. For example, we may say that this person is sleeping, sitting still, holding their breath, playing dead, and so on.

Doesn't this conflict with principle 1? No, because even when an agent is seemingly doing nothing, we can observe something: In the GIF above, you can tell effortlessly that sitting perfectly still under a shower of balloons is a skilled (and probably rehearsed) display of behavior. The mere act of sitting straight requires coordinated use of dozens of muscles. In a more general vein, we may say that agents emit continuous behavior streams. The challenge for behavioral AI is thus to generate such streams by connecting each individual behavior to the behaviors that precede and follow it.
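The idea of a seamless stream can be sketched in a few lines of toy Python. This is purely our illustration (a one-dimensional "pose" stands in for a full skeleton; none of these names come from KuteEngine): each behavior starts from whatever pose the previous one ended in, so the stream never jumps at the seams.

```python
# A toy model of a continuous behavior stream: each behavior is a function
# that produces pose frames starting from whatever pose the previous
# behavior ended in, so the stream stays continuous across behaviors.
from typing import Callable, Iterator, List

Pose = float  # one scalar stands in for a full skeleton pose

def ease_to(target: Pose, frames: int) -> Callable[[Pose], List[Pose]]:
    """A behavior that eases linearly from the current pose to `target`."""
    def run(start: Pose) -> List[Pose]:
        return [start + (target - start) * (i + 1) / frames for i in range(frames)]
    return run

def behavior_stream(start: Pose, behaviors) -> Iterator[Pose]:
    """Chain behaviors so each picks up exactly where the last one ended."""
    pose = start
    for b in behaviors:
        for pose in b(pose):
            yield pose

# 'sit up straight' then 'lean back a little': no discontinuity at the seam
frames = list(behavior_stream(0.0, [ease_to(1.0, 4), ease_to(0.5, 4)]))
```

The second behavior receives the first behavior's final pose as its start, which is the minimal ingredient a real engine also needs: no behavior is generated in isolation.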

3. Behavior is adaptive (responsive, interactive)

There is no real-life behavior that is not interactive. For example, playing with a friend involves responding to their actions, and climbing a rock requires adapting one’s hands to its shape. Even the most self-involved behavior takes place in a context and needs to interact with it. Take breathing as an example, where the respiration rate depends (among other things) on the density of oxygen in the atmosphere. If we take away the context (oxygen), the behavior (breathing) ceases to make sense.

Behavior is how agents relate to the world, and that is why all behavior needs to be interactive. This also means that there is no difference between behavior that is interactive, adaptive or responsive - these words just add different flavors to the fact that behavior is necessarily contextual. For behavioral AI, this means that all behavior needs to be procedurally generated. Unfortunately, that is the exact opposite of what happens in most games today, which tend to assemble behavior streams from canned packages of pre-configured behavior: stand-loop, walk-loop, jump and so on, with awkward transitions between them.
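The breathing example above can be sketched as toy Python, to show what "procedural" means in contrast to a canned loop: the rate is recomputed from context on every breath rather than baked into an animation. The formula is purely illustrative, not physiology or engine code.

```python
# A toy example of procedurally generated (rather than canned) behavior:
# the respiration rate is a function of context (oxygen in the air) and
# the chest motion is computed from that rate, frame by frame.
import math

def breaths_per_minute(oxygen_fraction: float, base_rate: float = 12.0) -> float:
    """Less oxygen in the air -> faster breathing (toy inverse relation)."""
    return base_rate * (0.21 / max(oxygen_fraction, 0.05))

def chest_offset(t: float, rate_bpm: float) -> float:
    """Procedural chest motion at time t (seconds) for the current rate."""
    return math.sin(2.0 * math.pi * (rate_bpm / 60.0) * t)

sea_level = breaths_per_minute(0.21)    # ordinary air: base rate
thin_air = breaths_per_minute(0.105)    # half the oxygen: breathing speeds up
```

Take away the context parameter and the function no longer means anything, which is exactly the point of the principle.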

4. Behavior is constrained

Context imposes lots of constraints on behavior, in the form of conditions that shape it in various ways. By far the most important one is the physical makeup of the world - the resistance it offers to the agent's body, the way it allows sound to propagate, and more.

Constraints can themselves be passive or active, directing an agent's behavior dynamically and somewhat unpredictably. Behavioral AI must hence go beyond mere procedural selection of behavior and offer full-fledged support for procedural animation, allowing the behavior stream to adapt to constraints on the fly.
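A minimal sketch of what on-the-fly adaptation to a constraint looks like, in toy Python (all names are ours, for illustration): a planned foot trajectory is clamped against a ground height that may itself change mid-stride, instead of being baked into the animation.

```python
# A toy constraint-aware adjustment: the planned foot height from an
# animation is corrected on the fly so it never sinks below the ground,
# whose height can itself change dynamically (an active constraint).
def constrained_foot_height(planned_height: float, ground_height: float,
                            clearance: float = 0.0) -> float:
    """Clamp the animated foot to stay on or above the (moving) ground."""
    return max(planned_height, ground_height + clearance)

flat = constrained_foot_height(0.3, 0.0)      # flat ground: plan unchanged
stepped = constrained_foot_height(0.3, 0.5)   # an obstacle raised the ground
```

Real procedural animation involves far more than a clamp (inverse kinematics, balance, contact timing), but the structure is the same: the constraint is consulted every frame, not at authoring time.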

5. Behavior is sequenced

AI textbooks often distinguish 'scripted' from 'unscripted' behavior, implying that the latter is somehow better and more organic. This seems a bit pointless to us, because real agent behavior is always a combination of both. In fact, our brains have dedicated circuitry (notably the cerebellum) to store gigantic databases of parametric motion sequences.

These sequences make it much easier for the brain to deploy standard forms of behavior. At the same time, such sequences are highly adaptable to concrete environments and dynamic context. This makes for a powerful combination. Instead of having to decide freshly each time exactly which muscles to move, when, and how much, to produce, say, a tango, the brain can use templates that leave only a few parameters to be filled in at 'runtime', so to speak. Apart from reducing complexity, this approach also facilitates synchronization of behavior between several individuals, and it explains in part why real behavior can sometimes feel scripted. Modern behavioral AI engines such as KuteEngine take their inspiration from neuroscience and incorporate parametric, adaptive sequencing into their architecture.
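One way to picture such a template in toy Python (the step pattern and parameter names are made up for illustration, not KuteEngine internals): the sequence structure is fixed, while its free parameters are bound only when the behavior is instantiated.

```python
# A parametric motion sequence: a fixed step template whose free
# parameters (step length, tempo) are bound only at 'runtime', and which
# can be shared between agents to synchronize their behavior.
from typing import List, Tuple

TANGO_TEMPLATE = ["slow", "slow", "quick", "quick", "slow"]  # fixed structure

def instantiate(template: List[str], step_length: float,
                beat_seconds: float) -> List[Tuple[float, float]]:
    """Bind the free parameters: one (displacement, duration) per step."""
    duration = {"slow": 2 * beat_seconds, "quick": beat_seconds}
    return [(step_length, duration[s]) for s in template]

leader = instantiate(TANGO_TEMPLATE, step_length=0.6, beat_seconds=0.5)
follower = instantiate(TANGO_TEMPLATE, step_length=0.6, beat_seconds=0.5)
```

Because both dancers instantiate the same template with the same parameters, synchronization falls out for free, which mirrors the point about coordinating behavior between individuals.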

6. Behavior is interruptible

Even the most perfectly planned behaviors won't always survive first contact with reality. And even when they do, agents change their minds all the time and their behaviors have to follow suit. This is an almost trivial observation about the real world but a hard challenge for behavioral AI, mostly because of principle 2: Interruptions can't just break off the behavior stream and start a fresh one.

The requirements for continuity and for rapid interruptibility pull in opposite directions, creating a tension that even an athlete like LeBron James can't always resolve gracefully. Behavioral AI engines are faced with the added challenge that such a perceived lack of control may be precisely what the user of the engine wants to achieve (e.g., for comic effect). KuteEngine realizes this via a layered control architecture that's inspired by robotics.
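The idea of layered control can be sketched as a toy priority stack, loosely in the spirit of robotics subsumption architectures. This is our own illustration of the concept, not KuteEngine's actual design: each layer may claim the behavior stream, and higher-priority layers (reflexes, interruptions) subsume the planned behavior underneath.

```python
# A toy layered controller: each layer may claim control of the behavior
# stream; higher-priority layers (reflexes, interruptions) subsume
# lower ones (the deliberate plan underneath).
from typing import Callable, Dict, List, Optional

Layer = Callable[[Dict], Optional[str]]  # context -> action, or None to pass

def planned_dunk(ctx: Dict) -> Optional[str]:
    return "dunk"                         # the deliberate plan, always ready

def balance_reflex(ctx: Dict) -> Optional[str]:
    return "stumble_recover" if ctx.get("off_balance") else None

def run_layers(layers: List[Layer], ctx: Dict) -> str:
    """The first (highest-priority) layer that returns an action wins."""
    for layer in layers:
        action = layer(ctx)
        if action is not None:
            return action
    return "idle"

controller = [balance_reflex, planned_dunk]  # the reflex outranks the plan
```

When nothing goes wrong the plan runs untouched; the moment the context flags trouble, the reflex layer takes over without the plan having to know about it.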

7. Behavior shows patterned variation

You cannot step twice into the same river, and you cannot display twice the same behavior. Some difference, however small, will always persist - and that's part of what makes natural behavior, well, natural.

Importantly, these different expressions of one and the 'same' behavior tend to be both random and structured. The Weasley twins may hold their heads and open their lips in slightly different ways when they ask 'What?', but they cannot go so far as to, say, close their mouth when it needs to be open, or vice versa. Evolutionary biologists call this phenomenon patterned variation. Whenever it's found, it indicates that the variations are due to underlying generative principles or rules - for example, rules governing how the vocal apparatus can produce the word 'what'. That doesn't mean that behavioral AI engines need to simulate (say) an entire vocal apparatus to produce believable variations. In practice, the dimensionality of possible variations is often limited and can be approximated in more superficial ways.
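In toy Python, patterned variation amounts to bounded randomness: free parameters jitter from take to take, but the hard generative rules are never violated. The parameter names and ranges here are invented for illustration.

```python
# Patterned variation as bounded randomness: free parameters vary on
# every take, but a hard generative rule (the mouth must be open to say
# 'What?') always holds. All parameter names are illustrative.
import random

def vary_what(rng: random.Random) -> dict:
    return {
        "head_tilt_deg": rng.uniform(-8.0, 8.0),   # free to vary widely
        "mouth_open": rng.uniform(0.4, 1.0),       # rule: never below 0.4
    }

rng = random.Random(42)  # seeded for reproducibility
takes = [vary_what(rng) for _ in range(100)]
```

No two takes are identical, yet every one respects the rule, which is exactly the "random and structured" combination described above, achieved without simulating a vocal apparatus.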

8. Behavior is hierarchical

The closer we look at an agent's body while it's displaying behavior, the more we see that several things usually occur at once. This and the following principle help to establish some order here.

Let's start with the observation that, from a kinematic point of view, behavior is almost always hierarchically organized. A handshake illustrates this nicely. Despite its name, this little ritual involves coordination of many body parts that are in hierarchical relationships, where subordinate parts are affected by superior ones.

In the GIF above it all starts with the torso, which positions the arms (which are subordinate to the torso) and leans forward during the shake. Meanwhile, the head (which also depends on the torso) orients towards the other party, and the eyes (which depend on the head) need to look downwards initially to coordinate the initial grip. They then look up and connect with those of the other party.

Once we start looking for hierarchies in behavior, we find them everywhere, and to make things worse, they evolve rapidly over time (recall principle 5). The consequences for behavioral AI are significant, but (fortunately) identical to those of the next principle.

9. Behavior is parallel

What is Peggy Olson from 'Mad Men' doing? She is walking. She is smoking. The fact that there are (at least) two perfectly good answers bothers no one because it's normal to do several things in parallel.

These things don't even have to be in a nested relationship (litmus test: you can smoke without walking, and vice versa). Still, the consequences of principles 8 and 9 for behavioral AI are identical. They entail that the behavior stream must be composed from multiple sub-behaviors that can be hierarchically organized. As an added complication, these sub-behaviors can control distinct or overlapping motor domains of the body (eyes, mouth, limbs, ...). For an example of distinct domains, look no further than Peggy, whose smoking behavior doesn't interfere at all with her walk. For overlapping domains, imagine that Peggy were walking as well as shaking from fear - two behaviors that would affect the same body parts, but in distinct and potentially complex ways.
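The distinct-versus-overlapping distinction can be sketched in toy Python, with the body reduced to named channels (our illustration, not engine code): sub-behaviors touching different channels merge without interference, while those touching the same channel must be blended somehow.

```python
# Composing parallel sub-behaviors over motor domains: distinct domains
# (legs vs. the smoking arm) merge without interference, while
# overlapping domains (walking and trembling both touch the torso)
# must be blended - additively here, as one simple choice.
from typing import Dict

Channels = Dict[str, float]  # body channel -> offset; a stand-in for poses

def compose(*behaviors: Channels) -> Channels:
    """Sum the contributions of all sub-behaviors, channel by channel."""
    out: Channels = {}
    for b in behaviors:
        for channel, value in b.items():
            out[channel] = out.get(channel, 0.0) + value
    return out

walk = {"legs": 1.0, "torso": 0.25}
smoke = {"right_arm": 0.75}        # distinct domain: no interference
tremble = {"torso": 0.25}          # overlapping domain: blends with walk

pose = compose(walk, smoke, tremble)
```

Additive blending is only the simplest option; real overlapping behaviors usually need weighting or prioritization, but the channel decomposition is the structural point.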

10-12. Behavior is cognitively caused, monitored, and readable

The final three principles can be discussed together for the purposes of this overview, as they are about the relationship between behavior and cognition.

The things that emit behavior (i.e., agents) are also the things that have central nervous systems which control their behavior. And the things that see this behavior (i.e., other agents) also automatically interpret this behavior. We have been hardwired by evolution to 'read' (unobservable) cognitive causes into observable behavior and thereby give it intentionality and meaning.

Thus, in the GIF above, you don't just see an anchorwoman who is lifting and then lowering her arm - you see a lady who is trying to high-five her colleague, failing to solicit her attention, and ultimately ashamed of her failure. Tons of psychological studies have shown that such attributions are automatic and irrepressible. For behavioral AI, this implies that it's impossible to separate the behavior emitted by artificial agents from the meaning it elicits. Behaviors always express something, whether you want them to or not. Behavioral AI engine development is therefore not just an engineering challenge, but also (and foremost) a psychological one. It's about convincing the player of the (artificial) meaningfulness of generated behavior.


We hope that this overview has given you a greater appreciation of the sheer complexity of behavior in the real world and the challenges of translating it into artificial behavior. But if you think about it, behavior is really all we have to connect with our fellow creatures, to understand them and be understood by them in turn. That's why we're passionate about it, and why we've set ourselves the goal to bring believable interactive behavior within the reach of any 3D developer.
