This blog series is a part of the write-up assignments of my A.I. for Games class in the Master of Entertainment Arts & Engineering program at University of Utah. The series will focus on implementing different kinds of A.I. algorithms in C++ with the openFrameworks library, following most of the topics in the book Artificial Intelligence for Games by Ian Millington and John Funge.

In this post, I will talk about the implementations of some basic movement steering algorithms.


First of all, I want to talk about how I am going to structure my algorithms, game objects, and the overall execution flow.


Since I am going to follow the structure above and treat movement algorithms as services, I have an ISteeringAlgorithm base class (interface) with a single pure virtual function.

class ISteeringAlgorithm {
public:
    virtual ISteeringOutput GetSteering() const = 0;
};

The other algorithms will inherit from this base class and implement the steering algorithm. This structure decouples the algorithms from the game object class pretty well and makes it really easy to request movement. Inside the update function of my game object class, it simply needs to make a request to the steering algorithm and then use the result to update its kinematic component.

void cGameObject::Update(float i_dt) {
    if (m_steeringAlgorithm) {
        AI::ISteeringOutput steering = m_steeringAlgorithm->GetSteering();
        m_kinematic->Update(steering, i_dt);
    }
}

As for the kinematic class, it simply takes a steering output and a delta time as inputs and uses them to update its state. The kinematic class has no idea, and does not care, whether the steering output came from a steering algorithm or something else.

void cKinematic::Update(AI::ISteeringOutput i_steering, float i_dt)
{
    // Update position/orientation
    m_position += m_velocity * i_dt; // +0.5f * i_steering.m_v2.m_linear * i_dt * i_dt;
    m_orientation += m_rotation * i_dt; // +0.5f * i_steering.m_float.m_angular * i_dt * i_dt;
    m_orientation = ofWrapRadians(m_orientation); // Wrap between -PI ~ PI

    // Update velocity and rotation
    m_velocity += i_steering.m_v2.m_linear * i_dt;
    m_rotation += i_steering.m_float.m_angular * i_dt;

    // Cap maximum velocity
    if (m_velocity.length() > m_maxSpeed) m_velocity = m_velocity.normalize() * m_maxSpeed;
}


The first task is to use openFrameworks to draw a boid shape on the screen. To achieve this, I pass the position and orientation of the object into a draw function, which draws a circle with a triangle pointing in the direction of the orientation.



The first steering algorithm I am going to show is just a simple kinematic motion that traverses along the window’s boundary and takes a 90-degree turn when encountering corners.

    The next one is simple kinematic seek. Given a target kinematic, our boid will try to move to the target's position by directly updating its velocity and orientation. You can see that since it's only kinematic motion, the boid starts bouncing back and forth when it reaches the target position.


Dynamic Arrive

Now let’s see what dynamic motions look like! First, we will be looking at the dynamic seek and arrive. The first video I am showing below has a setting of max acceleration = 100.0f, max speed = 150.0f, target radius = 1.0f, slow radius = 2.0f.

    As you can see, the boid's motion has a large amount of sliding after reaching the target. This is probably because of the small values of the slow radius and the target radius. We can try raising the values a bit and maybe also give it a larger acceleration so that it can stop faster. The video below has a setting of max acceleration = 300.0f, max speed = 200.0f, target radius = 10.0f, slow radius = 100.0f.

    This already looks more natural. Let's see what happens if, instead of just returning the original steering output, we return a negative acceleration to try to fully stop the boid!

    You can easily see that this simple modification gives the boid a really nice, smooth stop. However, there is still a big mystery here: what exactly does the “time to target” do? Looking at the algorithm, we can see that it is used to calculate how fast the boid reaches the target velocity, so it is scaling our acceleration. Let's see what happens if we change the time to target to a large number, say 50.0f.

    As expected, our boid accelerates much more slowly. What is more interesting is that it also overshoots the target tremendously! In my opinion, the “time to target” is sort of like a “reaction time”: the larger the value, the more time the algorithm needs to react.

Now it is time to integrate our rotation algorithm to make the boid gradually look in the direction it is going. To achieve this, I implemented blended behavior and combined dynamic arrive with “look where you're going”.

    You can see that it's overshooting remarkably. I added the same fix as in dynamic arrive, which applies a negative acceleration when the angular distance to the target is within the target threshold. After some parameter tuning, it finally looked better. However, I also noticed that when the boid's velocity is zero, it keeps rotating. This is because even when we are not delegating to dynamic align, the boid still has a leftover rotation value. To fix this issue, I added a piece of code that applies a negative angular acceleration to the boid when its velocity is zero.

After adding this fix, the boid's rotation motion looks a lot more accurate!

Wander Steering

Next, I am going to check out the wander steering behavior. To smoothly change the orientation of my boid, I am using a random binomial distribution ranging from -1 to +1 (the difference between two uniform distributions ranging from 0 to 1) to modify the boid's angular acceleration. The results look decent.

// Setting the random seed
ofSeedRandom(ofGetMonth() * ofGetHours() * ofGetMinutes() * ofGetSeconds());
// Random binomial distribution
float randNum = ofRandom(0.0f, 1.0f) - ofRandom(0.0f, 1.0f); // -1.0f ~ 1.0f

Because I am not directly changing the angular velocity or orientation, the motion looks quite smooth. Note that since ofRandom wraps around rand() from C++, which does not produce a truly uniform distribution, what I called a random binomial distribution is not a true binomial distribution either. However, it is sufficient for the simulation here. Also, note that the wandering behavior I showed in the video above is close to the kinematic wander shown below. Let's take a look at what the full wander behavior looks like!

Kinematic wander and full wander behavior.

At first, the boid wasn't rotating at all. It took me a while to realize that this was because the wandering behavior delegates to the dynamic facing behavior, and the target angle threshold was too high for it to ever be reached. We can fix this problem quite easily by setting the target radius to 0. The video shown below has a setting of wander offset = 100.0f and wander radius = 10.0f.

We can see that the motion of the boid is usually quite calm and smooth. It does go into a circling motion from time to time, though. We can try some different settings of wander offset and radius and see how the algorithm responds. The setting below has a wander offset = 150.0f and a wander radius = 12.0f, which means the target is further away from the boid; that should make the motion even calmer.

Although this is only one test, the result matches my prediction. However, the boid can still get stuck in circular motion sometimes. As an attempt to address this issue and further reduce the chance of getting stuck turning in one direction, I use a sine wave to shift the mean value of the binomial distribution between -1 and +1. After adding this, we barely see the boid doing circular motion anymore!

But which kind of wandering behavior should we use? Well, it definitely depends on the game and the object type. While more frantic motion might work for a pack of zombies, a smoother and calmer motion might work better for vehicles, like ships, which do not have the ability to make drastic turns.


Now, let’s take a look at the flocking behavior, which is the first behavior so far that affects multiple entities as a group. The flocking behavior relies on blending three steering behaviors, separation, cohesion, and velocity matching. To simulate this behavior,  there are still some algorithms missing, such as pursue and dynamic separation, so I will be implementing them first.


Note that pursue is not strictly necessary for flocking to work; we could simply use dynamic arrive for cohesion. However, I want a larger degree of freedom to control the behavior and give the flock a sense of going for a goal in front of them. In the video below, I let the target use a wandering behavior while the player object tries to pursue the target.

    The difference between pursuing and simple seeking is huge and easily observable. Instead of just heading toward the target's position, pursue predicts the future position of the target and heads toward that point instead. Interestingly, just a simple prediction already makes the behavior look more intelligent and lifelike!

Actual Flocking

Since dynamic separation is tricky to demonstrate on its own, I decided to dive directly into the flocking behavior. First, let's see what it looks like if we use the same weight of 1 for all three behaviors.

    Well, definitely not realistic. Apparently, cohesion is way too dominant now. I can see several potential causes off the top of my head.

  1. I didn’t give any of them initial velocity,.
  2. Separation threshold is too low, and that’s causing them to clump up easily.
  3. The decay coefficient for separation is too low, so it cannot generate a strong enough force to keep the boids away from each other.

First, let’s increase the separation threshold (10 to 100) and give them some initial velocity (200, 0). Also, to have a larger space to observe the behavior, I am going to increase the windows size from 720p to 1080p.

    You can see that we are still getting the same behavior. After some parameter tuning, and after realizing that there was a bug in my separation algorithm, the behavior is starting to look like flocking.

    After some more trial and error, I finally made the flocking behavior look somewhat decent and believable. The weights I settled on are 1.0 for separation, 0.6 for velocity matching, and 0.7 for cohesion. I also increased the separation threshold so that the boids are less likely to clump together, with a decay coefficient of 5000.0 and a maximum acceleration of 400.0!

    I was actually surprised that I could get this result without spending too much time tuning the parameters, since it involves not just the weight settings, but also the settings of dynamic pursue, dynamic align, and velocity matching. However, since I borrowed most of the parameters from previous experiments, it makes sense that they perform decently well.

One question I had about flocking was whether it needs a leader or not. As you can see in the videos above, there is no leader of the flock. However, we can try assigning one of the boids as the leader and observe how the behavior changes. The leader is the boid with the pink color.

    Currently, the leader has a weight of 1.0 for its wandering behavior, while the followers always have weights of 1.0 for pursuing the leader. The leader does not actually seem to care about the flock. I could adjust the weights to make the followers pursue the leader more closely, and make the leader stick to the flock more than to its wandering behavior. As you can see below, these two adjustments are quite effective. I also changed the boundary reaction from wrapping around to bouncing off the walls, so the boids won't panic when one of them suddenly disappears and reappears on the far end.

    Now let’s try something a little bit crazy, let’s see how our algorithms react to more objects in a flock! In the videos below, there are 50 boids in the scene including the leader (if there is one).

    Interestingly, the flocking behavior without the leader seems to adjust well, while the one with the leader has some trouble. The flock with a leader tends to form a circle, sometimes with a single boid trapped inside. I am going to tune the parameters to make the followers separate from each other more, and make the leader stick with the group more.

    To be honest, the result looks quite nice! It is really amazing to see how these basic algorithms, combined in simple ways, can produce such organic and emergent behavior.