This blog series is a part of the write-up assignments of our Game Engineering II class in the Master of Entertainment Arts & Engineering program at University of Utah.

The main purpose of this assignment is to add camera and game object representations that provide easy-to-use interfaces for the gameplay programmer, and to hook up some key inputs to move the camera and game objects.


I made a new cGameObject class that’s just a rigid body with a renderer. This class lives inside our application project, and I assume that graphics should have no knowledge of it; therefore the game object needs to provide everything (mesh, effect, transform) that our graphics system needs for rendering. I provided some simple setters and getters for easy access to some of the properties.

	class cGameObject
	{
	public:
		void Update(float i_time);

		// rigid body data setters and getters
		inline Physics::sRigidBodyState GetRigidBody() const;
		inline void SetPosition(float i_x, float i_y, float i_z);
		inline void SetPosition(const Math::sVector & i_newPosition);
		inline void SetVelocity(const Math::sVector & i_newVelocity);
		inline void SetVelocity(float i_x, float i_y, float i_z);
		inline void SetAcceleration(const Math::sVector & i_newAcceleration);
		inline void SetAcceleration(float i_x, float i_y, float i_z);
		inline void SetRotation(const Math::cQuaternion & i_newOrientation);

		inline Math::sVector GetPosition() const;
		inline Math::sVector GetVelocity() const;
		inline Math::sVector GetAcceleration() const;
		inline Math::cQuaternion GetRotation() const;

		// render data
		inline void SetRenderer(Graphics::cMeshRenderer* i_renderer);
		inline Graphics::cMeshRenderer* GetRenderer() const;

	private:
		Physics::sRigidBodyState m_rigidBody;
		Graphics::cMeshRenderer* m_renderer = nullptr;
	};


The renderer class is a holder for our mesh and effect; adding this extra layer of abstraction provides an easy way to swap out the visual aspects of our game objects. I decided not to initialize the mesh and effect within the renderer class, since different renderers might share the same meshes or effects.

		class cMeshRenderer
		{
		public:
			cMeshRenderer(Graphics::cMesh* i_mesh, Graphics::cEffect* i_effect);

			inline void CleanUp();

			inline void SetMesh(Graphics::cMesh* i_mesh);
			inline void SetEffect(Graphics::cEffect* i_effect);

			inline Graphics::cMesh* GetMesh() const;
			inline Graphics::cEffect* GetEffect() const;

		private:
			Graphics::cMesh* m_mesh = nullptr;
			Graphics::cEffect* m_effect = nullptr;
		};

Position Extrapolation

Our game and graphics systems run on different threads, and we set our game to run at an extremely low frame rate (15 FPS). If we render the game objects at exactly the positions set in each game update, the motion will look choppy on screen.

To solve this, note that even though the game only updates every 1/15 of a second, our application is submitting data to graphics at whatever rate the graphics system is rendering. Therefore, if we extrapolate positions during the submission function, using the velocity that was just set on each object, we can display smoother motion. As long as the position doesn’t change too drastically within one update, there won’t be any noticeable extrapolation error!
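The idea can be sketched as a tiny standalone function (using a local sVector as a stand-in for Math::sVector; the name PredictPosition and the timing parameter are my own, not necessarily the engine’s):

```cpp
struct sVector { float x, y, z; };

// Extrapolate a position along the most recently set velocity by the time
// that has passed since the last simulation update. Called at render rate
// during submission, even though positions only change at 15 FPS.
sVector PredictPosition(const sVector& i_position, const sVector& i_velocity,
	float i_secondsSinceLastUpdate)
{
	return { i_position.x + i_velocity.x * i_secondsSinceLastUpdate,
		i_position.y + i_velocity.y * i_secondsSinceLastUpdate,
		i_position.z + i_velocity.z * i_secondsSinceLastUpdate };
}
```

Because the velocity used is the one the game just set, the prediction stays consistent with the next real update as long as the motion is roughly linear within a tick.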


(Mesh and effect submission interface)



(Two game objects at different positions)


(Use arrow keys to move the square object)


(Use the Z key to change one of the objects’ meshes)



Memory Usage

Now that we are using more data (a matrix transform for each draw call), we will examine the memory usage again. Our new bucket sizes are:

x86 (OpenGL): 300 bytes
x64 (D3D): 320 bytes

Each uses 120 bytes more than before. The increase comes from a new per-draw-call constant buffer, which holds a 4 x 4 matrix: 16 * 4 = 64 bytes. There are also two new arrays of length 2 needed to render the meshes, one for positions (4 floats each) and one for orientations (a 3-float quaternion each): 64 + (4 + 3) * 4 * 2 = 120 bytes, with no memory wasted.
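As a sanity check, the arithmetic can be expressed as plain structs (a sketch; the struct names are my own and the actual engine layout may differ):

```cpp
#include <cstddef>

struct sMatrix4x4 { float m[4][4]; };   // per-draw-call constant buffer: 64 bytes
struct sPosition { float x, y, z, w; }; // 4 floats per object: 16 bytes
struct sOrientation { float x, y, z; }; // 3-float quaternion per object: 12 bytes

constexpr size_t objectCount = 2;
// 64 + (16 + 12) * 2 = 120 extra bytes per bucket
constexpr size_t extraBytes =
	sizeof(sMatrix4x4)
	+ (sizeof(sPosition) + sizeof(sOrientation)) * objectCount;
```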

Other Stuff

Depth Buffering

After reading a thread on our discussion board, I realized that our current graphics system doesn’t enable depth buffering by default. I guess I was just lucky that the last thing I rendered on screen happened to be the closest one to the camera, so I didn’t notice anything fishy.

In order to achieve this, we need to change some initialization parameters for our effect class so it can set its render state to use depth buffering if needed. I added another parameter, a cRenderState enum (which is just a uint8_t), to our effect factory and initialization functions. By doing it this way instead of providing multiple booleans in the interface, I am assuming that the gameplay programmer has knowledge of the render state enum.
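A sketch of what packing such flags into a single uint8_t could look like (the flag names here are my guesses, not necessarily the actual engine’s):

```cpp
#include <cstdint>

// Render state options packed as bit flags into one uint8_t, so a single
// parameter can carry several on/off settings to the effect factory.
namespace RenderStates
{
	constexpr uint8_t AlphaTransparency = 1 << 0;
	constexpr uint8_t DepthBuffering = 1 << 1;
}

// The effect can query its state when binding, e.g. to decide whether to
// enable the depth test.
inline bool IsDepthBufferingEnabled(uint8_t i_renderState)
{
	return (i_renderState & RenderStates::DepthBuffering) != 0;
}
```

A caller would then pass something like `RenderStates::DepthBuffering | RenderStates::AlphaTransparency` as one argument instead of a list of booleans.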

Below is the result of putting the square at z = -1.0f and the diamond at z = 0.0f; since our camera is orthographic, it can easily be seen that there is a sense of depth now!


x64 , x86 Performance Difference

Even after extrapolation, the movement in x86 is still a little jittery compared to x64. After running some profiling (instrumentation), we can see that SubmitDataToBeRendered takes twice as long (inclusive) in x86 as in x64. This might be a result of differences between OpenGL and Direct3D, but I still haven’t found the real cause.





Background Color Extrapolation

In the previous article, I mentioned that the background color animation is jittery because our game updates at a slower rate. That can also be solved with extrapolation in the submission function. Below is the result.
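The same trick as for positions applies per color channel (a sketch; the function name and clamping behavior are mine, assuming color channels live in [0, 1]):

```cpp
// Extrapolate an animated background-color channel along its current rate
// of change by the time since the last simulation update, clamped to the
// valid [0, 1] range.
float ExtrapolateChannel(float i_value, float i_ratePerSecond,
	float i_secondsSinceLastUpdate)
{
	float result = i_value + i_ratePerSecond * i_secondsSinceLastUpdate;
	if (result < 0.0f) result = 0.0f;
	if (result > 1.0f) result = 1.0f;
	return result;
}
```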


Optional Challenges

To add horizontal rotation to our camera, we need to set its angular speed in the input update function and then actually update the orientation inside the time-based update function. We also need to change the camera movement from moving along the world axes to moving along the camera’s own forward and right vectors.
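The forward/right movement can be sketched with a plain yaw angle instead of the engine’s quaternion (all names here are my own; I assume an OpenGL-style convention where the camera looks down -z at yaw = 0):

```cpp
#include <cmath>

struct sVector { float x, y, z; };

// A camera constrained to horizontal (yaw) rotation around the world up
// axis. Movement keys translate along the camera's own forward and right
// vectors, which are derived from the yaw angle, instead of the world axes.
struct sSimpleCamera
{
	sVector position{ 0.0f, 0.0f, 0.0f };
	float yawRadians = 0.0f;

	sVector Forward() const
	{
		return { -std::sin(yawRadians), 0.0f, -std::cos(yawRadians) };
	}
	sVector Right() const
	{
		return { std::cos(yawRadians), 0.0f, -std::sin(yawRadians) };
	}
	void MoveForward(float i_distance)
	{
		const sVector f = Forward();
		position.x += f.x * i_distance;
		position.z += f.z * i_distance;
	}
	void MoveRight(float i_distance)
	{
		const sVector r = Right();
		position.x += r.x * i_distance;
		position.z += r.z * i_distance;
	}
};
```

The input update would only adjust `yawRadians` (via an angular speed), and the time-based update would then apply the speed, keeping input handling and simulation separate as described above.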



Hold down “Z” to change the square object into a diamond. Use “WASD” to move the camera, and “J” and “L” to rotate the camera horizontally. Use the arrow keys to move the white square object.

Windows x86 (OpenGL)
Windows x64 (Direct 3D)