An Introduction to Behavior Trees – Part 3

This post is the third and last part of the Introduction to Behavior Trees series. This part explains some implementation details of BTs.

Decorator Examples

Inverter

Like the NOT operator, the Inverter decorator negates the result of its child node, i.e., a SUCCESS state becomes FAILURE, and FAILURE becomes SUCCESS. Notice that the Inverter does not change RUNNING or ERROR states, as described in the algorithm below.

state = Tick(child)

if state == SUCCESS:
    return FAILURE
else if state == FAILURE:
    return SUCCESS
else:
    return state
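
As a concrete illustration, here is a minimal Python sketch of an Inverter. The State enum, the Decorator base class, and the tick(agent) signature are assumptions for this post, not part of any specific library; the later sketches reuse the same names.

from enum import Enum

class State(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3
    ERROR = 4

class Decorator:
    """Base class for decorator nodes: wraps exactly one child node."""
    def __init__(self, child):
        self.child = child

class Inverter(Decorator):
    def tick(self, agent):
        state = self.child.tick(agent)
        if state == State.SUCCESS:
            return State.FAILURE
        if state == State.FAILURE:
            return State.SUCCESS
        return state  # RUNNING and ERROR pass through unchanged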

Succeeder

The Succeeder is a decorator that always returns SUCCESS, no matter what its child returns. This is especially useful for debugging and testing purposes. The algorithm below shows the pseudo-code for this node.

state = Tick(child)

return SUCCESS

Failer

The inverse of the Succeeder, this decorator returns FAILURE for any child result, as shown below.

state = Tick(child)

return FAILURE
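
Both nodes are trivial to implement. A minimal Python sketch of the pair, reusing the State and Decorator names assumed in the Inverter sketch above:

class Succeeder(Decorator):
    """Ticks the child but always reports SUCCESS."""
    def tick(self, agent):
        self.child.tick(agent)
        return State.SUCCESS

class Failer(Decorator):
    """Ticks the child but always reports FAILURE."""
    def tick(self, agent):
        self.child.tick(agent)
        return State.FAILURE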

Repeater

The Repeater decorator sends the tick signal to its child whenever the decorator itself receives a tick, and re-ticks the child every time the child returns SUCCESS or FAILURE. Additionally, a maximum number of repetitions can be provided.

j = 0
while j < M:
    state = Tick(child)
    if state != SUCCESS and state != FAILURE:
        return state

    j++

return SUCCESS
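
A Python sketch of the Repeater, again reusing the assumed State and Decorator names; max_loops is a hypothetical constructor parameter for the maximum number of repetitions:

class Repeater(Decorator):
    def __init__(self, child, max_loops):
        super().__init__(child)
        self.max_loops = max_loops

    def tick(self, agent):
        # Re-tick the child until it has completed max_loops times;
        # RUNNING and ERROR are propagated immediately.
        for _ in range(self.max_loops):
            state = self.child.tick(agent)
            if state not in (State.SUCCESS, State.FAILURE):
                return state
        return State.SUCCESS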

Repeat Until Fail

This decorator keeps ticking its child until the child returns FAILURE. When this happens, the decorator returns SUCCESS.

repeat
    state = Tick(child)
    if state != SUCCESS and state != FAILURE:
        return state

until state == FAILURE

return SUCCESS

Repeat Until Succeed

Similar to the previous one, this decorator ticks the child until it returns SUCCESS.

repeat
    state = Tick(child)
    if state != SUCCESS and state != FAILURE:
        return state

until state == SUCCESS

return SUCCESS
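
Python sketches of both repeat-until decorators, under the same assumptions as the previous examples. As in the pseudo-code, a RUNNING or ERROR state breaks the loop and is propagated upwards:

class RepeatUntilFail(Decorator):
    def tick(self, agent):
        while True:
            state = self.child.tick(agent)
            if state not in (State.SUCCESS, State.FAILURE):
                return state              # propagate RUNNING or ERROR
            if state == State.FAILURE:
                return State.SUCCESS      # child finally failed: report SUCCESS

class RepeatUntilSucceed(Decorator):
    def tick(self, agent):
        while True:
            state = self.child.tick(agent)
            if state not in (State.SUCCESS, State.FAILURE):
                return state
            if state == State.SUCCESS:
                return State.SUCCESS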

Limiter

This decorator imposes a maximum number of calls its child can receive within the whole execution of the Behavior Tree, i.e., after a certain number of calls, its child will never be called again. The Limiter pseudo-code is described in the algorithm below.

if totalCalls < C_max:
    state = Tick(child)
    totalCalls++

    return state

return FAILURE
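
A Python sketch of the Limiter. Here the call counter lives on the node itself, which ties the node to one agent; the "Handling Persistent Data" section below discusses moving this kind of state into a per-agent memory pool.

class Limiter(Decorator):
    def __init__(self, child, max_calls):
        super().__init__(child)
        self.max_calls = max_calls
        self.total_calls = 0  # per-node counter (see "Handling Persistent Data")

    def tick(self, agent):
        if self.total_calls < self.max_calls:
            self.total_calls += 1
            return self.child.tick(agent)
        return State.FAILURE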

Max Time

Max Time limits the maximum time its child can spend running. If the child does not complete its execution before the maximum time, the child task is terminated and a FAILURE is returned, as shown in the algorithm below.

if totalTime < T_max:
    state = Tick(child)
    return state
else:
    Stop(child)
    return FAILURE
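
A Python sketch of Max Time using wall-clock time. The stop() method on the child is an assumption: the nodes in your tree need some hook for interrupting a running action.

import time

class MaxTime(Decorator):
    def __init__(self, child, max_seconds):
        super().__init__(child)
        self.max_seconds = max_seconds
        self.started_at = None

    def tick(self, agent):
        if self.started_at is None:
            self.started_at = time.monotonic()   # first tick of this run
        if time.monotonic() - self.started_at < self.max_seconds:
            state = self.child.tick(agent)
            if state in (State.SUCCESS, State.FAILURE):
                self.started_at = None           # child finished: reset the timer
            return state
        self.child.stop(agent)                   # assumed interruption hook
        self.started_at = None
        return State.FAILURE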

Prioritizing Behaviors

Considering the composite nodes described in the previous post, the Behavior Tree traversal is performed with a dynamic depth-first algorithm (notice that, except for the Parallel node, all composite nodes run one child at a time, in order from left to right). This procedure allows a definition of priority among behaviors in the tree: the left branch of the tree (starting from the root) contains the high-priority behaviors while the right branch contains the low-priority behaviors.

To exploit this, the designer should put important behaviors such as self-preservation and collision avoidance in the left branches and low-priority behaviors such as idle or rest in the right branches. Notice that, depending on the agent, it may be necessary to add an unconditional behavior with the lowest priority so that the tree always has at least this behavior to execute.
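
As an illustration of this ordering, here is a small Python sketch. The Priority (selector) node follows the semantics from the previous post (return at the first non-FAILURE child), and the Action leaves are placeholders for real behaviors; all names besides the State enum assumed earlier are hypothetical.

class Action:
    """Placeholder leaf that always succeeds; stands in for a real behavior."""
    def __init__(self, name):
        self.name = name

    def tick(self, agent):
        print(agent, "runs", self.name)
        return State.SUCCESS

class Priority:
    """Ticks children left to right and returns at the first non-FAILURE result."""
    def __init__(self, *children):
        self.children = children

    def tick(self, agent):
        for child in self.children:
            state = child.tick(agent)
            if state != State.FAILURE:
                return state
        return State.FAILURE

# Leftmost children have the highest priority; the unconditional "idle" on
# the far right guarantees the tree always has something to execute.
root = Priority(Action("avoid collision"),
                Action("attack enemy"),
                Action("patrol"),
                Action("idle"))
root.tick("robot-1")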

Treating Running States

A common question when implementing a Behavior Tree is: what should happen on the next tick after a node has returned a RUNNING state? There are two answers: start the tree traversal from the running node, or start it over from the first node.

The major drawback of starting the tick at the node that returned RUNNING is that this node may keep running its action for too long, blocking the execution of more important behaviors. For instance, suppose a robot is performing a path-following action; halfway along, the robot finds a hole, but it cannot avoid it because the tree is busy running the path-following action.

Therefore, the better option is to always start the traversal over from the root; if a more important behavior wants to run, the previously running node is stopped.

Composite Node Extensions

Node*: Remembering Running Nodes

When we start the tree traversal at the root on every tick, even after some node has returned RUNNING, we can fall into the following situation. Suppose an agent is performing a patrol behavior, illustrated by the figure below. Now suppose the agent completes the first action (“go to point A”) at tick 1 and the second action (“go to point B”) starts, returning RUNNING. At tick 2, the traversal starts from the top and reaches the first action again, which sends the robot back to point A, so the second action never completes and the third one is never even called.

[Figure: a patrol behavior as a Sequence of “go to” actions; restarting the traversal from the root each tick re-runs the first action, so the second never completes]

The * extension of the Priority and Sequence nodes overcomes this problem by recording the last child that returned RUNNING. The figure below shows the same example, now using a Sequence with the * extension. After the first action (“go to point A”) completes, the second action (“go to point B”) executes and returns RUNNING. On the next tick, the Sequence node does not execute the first action but jumps directly to the second one.

[Figure: the same patrol behavior using a Sequence* node, which resumes from the running “go to point B” action on the next tick]

This extension only applies to the Sequence and Priority nodes. The Parallel node does not suffer from this problem, because it executes its children concurrently rather than sequentially. The pseudo-code for both * nodes is shown below.

Priority* node:

for i = lastRunning to N:
    state = Tick(child[i])

    if state != FAILURE:
        return state

return FAILURE

Sequence* node:

for i = lastRunning to N:
    state = Tick(child[i])

    if state != SUCCESS:
        return state

return SUCCESS
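
A Python sketch of the Sequence* variant, showing where lastRunning is recorded and reset (a detail the pseudo-code leaves implicit); the Priority* variant is symmetric, swapping the roles of SUCCESS and FAILURE. The State enum is the one assumed in the decorator sketches above.

class MemSequence:
    """Sequence*: remembers the child that returned RUNNING and resumes from it."""
    def __init__(self, *children):
        self.children = children
        self.last_running = 0

    def tick(self, agent):
        for i in range(self.last_running, len(self.children)):
            state = self.children[i].tick(agent)
            if state == State.RUNNING:
                self.last_running = i      # resume from this child on the next tick
                return state
            if state != State.SUCCESS:
                self.last_running = 0      # failure or error: start over next time
                return state
        self.last_running = 0              # the whole sequence completed
        return State.SUCCESS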

Node~: Probabilistic Choice

When an agent performs exactly the same sequence of actions in a given situation, the agent may become predictable. The simplest way to avoid this predictability is to use random choices in some nodes. For example, an agent with a grasp behavior can randomly choose the left or right hand to grasp an object.

The ~ extension allows Sequence and Priority nodes to choose their children randomly instead of selecting them sequentially. A ~ node randomly chooses one of its children to execute, then excludes it from subsequent choices, repeating until all children have been selected.

[Figure: a composite node with the ~ extension picking its children in random order]

A common use of this extension is to choose children with an equiprobable distribution, but other distributions can be used by weighting the choices. The pseudo-code for both ~ nodes is shown below.

Priority~ node:

for i = 1 to N:
    selectedChild = RandomlyChoose(remaining children)
    state = Tick(selectedChild)

    if state != FAILURE:
        return state

return FAILURE

Sequence~ node:

for i = 1 to N:
    selectedChild = RandomlyChoose(remaining children)
    state = Tick(selectedChild)

    if state != SUCCESS:
        return state

return SUCCESS
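
A Python sketch of a Priority~ node with an equiprobable choice; a weighted choice could replace the shuffle. The Sequence~ variant is analogous, stopping at the first non-SUCCESS child.

import random

class RandomPriority:
    """Priority~: ticks its children in a random order instead of left to right."""
    def __init__(self, *children):
        self.children = list(children)

    def tick(self, agent):
        order = list(self.children)
        random.shuffle(order)  # equiprobable; weight the choice for other distributions
        for child in order:
            state = child.tick(agent)
            if state != State.FAILURE:
                return state
        return State.FAILURE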

Multi-Parenting

Behavior Trees have the advantage of allowing reuse of conditions, actions, or even whole subtrees. Subtrees can be reused in two ways: by creating a new instance of the branch, or by using the same branch with multiple parents. Duplicating subtrees has a major drawback in memory usage, because it consumes more memory than necessary to represent the same thing, especially for large subtrees. On the other hand, multiple parents save memory but hurt the visualization of the tree, decreasing readability.

A solution to this problem is a hybrid approach: subtrees are duplicated only at the level of control and visualization, while multi-parenting is used at the level of implementation and execution. Notice that the duplication of subtrees is only virtual and does not affect the real structure of the tree.

Handling Persistent Data

Autonomous intelligent agents, whether virtual or real, need an internal state to store their beliefs about the world. These beliefs may include information from the perception system (e.g., the last known positions of enemies and allies) and other computed information (e.g., the agent's world position).

Similarly, some nodes in a Behavior Tree may need to store information (e.g., the total time running a child node, the time since the last action, the total number of failures), but we do not want to store this inside the node itself, because that would tie the node to a specific agent and force a different tree for every agent. Running a different tree for each agent is a waste of resources and may be impractical.

[Figure: storing node data per agent so that a single Behavior Tree can be shared by many agents]

A common approach to fulfill the requirement for persistent data is to use memory pools, which store information about the world and allow behaviors to read and write information. Notice that these memory pools are maintained individually for each agent, allowing a single Behavior Tree to be shared by hundreds of agents.
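
A minimal Python sketch of such a per-agent memory pool. The keying by node id and the agent.memory attribute used in the comment are assumptions about how the tree and agents are wired together, not a prescription.

class MemoryPool:
    """Per-agent storage: nodes read and write values keyed by (node id, key)."""
    def __init__(self):
        self._data = {}

    def get(self, node_id, key, default=None):
        return self._data.get((node_id, key), default)

    def set(self, node_id, key, value):
        self._data[(node_id, key)] = value

# With this in place, a node such as the Limiter keeps its call count in the
# agent's pool instead of on the node itself, so one tree serves many agents:
#
#   calls = agent.memory.get(id(self), "calls", default=0)
#   agent.memory.set(id(self), "calls", calls + 1)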
