This blog series is part of the write-up assignments of our Game Engineering II class in the Master of Entertainment Arts & Engineering program at the University of Utah.
The main purpose of this assignment is to make our Graphics.cpp truly platform independent, which means no #ifdef preprocessor guards and no separate .cpp files for D3D and OpenGL.
To achieve this, we look at the original Graphics.cpp. Since we have already moved most of the initialization code into separate mesh and effect classes, the only platform-dependent (graphics-library-dependent) parts left are some view initialization, view clearing, buffer swapping, and a little bit of cleanup.
What I did was create a new .h file called RenderView, which contains functions under the Graphics namespace to handle all of these operations, and extract the implementations into two different .cpp files for D3D and OpenGL. After doing this, we no longer need two different Graphics.cpp files. Now we can easily initialize the view, clear it with our background color, swap the buffers when rendering a frame, and clean up when exiting the application.
Now we can easily change the background color of our application to any color we want by passing the color into the Clear() function.
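A minimal sketch of what such a RenderView.h interface could look like. The function names, parameter lists, and the cResult type are my guesses for illustration, not the assignment's actual code; each function would be implemented once per platform in a D3D .cpp and an OpenGL .cpp:

```cpp
// RenderView.h -- platform-independent declarations; the definitions live
// in two platform-specific .cpp files (one for D3D, one for OpenGL)
namespace Graphics
{
    // Create the views / render targets
    // (swap-chain views on D3D; mostly a no-op on OpenGL)
    cResult InitializeViews();
    // Clear the back buffer to the given color at the start of a frame
    void ClearView( const float i_r, const float i_g, const float i_b, const float i_a );
    // Present the rendered frame (swap the front and back buffers)
    void SwapBuffers();
    // Release any platform-specific resources on shutdown
    cResult CleanUpViews();
}
```

With an interface like this, the single platform-independent Graphics.cpp only ever calls these functions and never touches D3D or OpenGL directly.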
To initialize a mesh, we need to pass in some parameters. The triangle count and the number of vertices per triangle are used for converting meshes from the right-handed to the left-handed coordinate system, as well as for calculating the total number of vertices. We also need pointers to the actual vertex array and index array, along with the size of the index array.
We don’t really need to pass in the vertexCountPerTriangle, since it’s almost always going to be 3.
Now let’s look at the size of our mesh class with sizeof():
OpenGL: 16 Bytes
D3D: 32 Bytes
This looks bad for D3D, but the reason is that we build D3D under x64, where pointers are 8 bytes each. After adding one more unsigned int, the memory usage grew from 24 to 32 bytes, which includes 4 bytes wasted to alignment padding. However, since we need the extra unsigned int, there is no way to eliminate the padding. On the other hand, no memory is wasted on the OpenGL side, but we also cannot make the class any more compact.
To initialize a new effect class, we simply pass in the paths to the vertex shader and fragment shader that we want to use. Now let’s look at the size of our effect class:
OpenGL: 16 Bytes
D3D: 40 Bytes
The biggest difference here is that our RenderState class under x64 contains three more pointers (3 * 8 = 24 bytes) compared to only a single uint8_t under OpenGL. Our shader handles are always uint32_t (4 bytes each). I tried to eliminate some of the alignment padding by rearranging the order of some member variables, but it didn’t change anything in our case.
Since we can easily get the simulation time and system time from a static variable that holds the values needed to render a frame, we can use the simulation time with sine and cosine functions to modify the color and pass it into the clear-view function.
This assignment is not too different from the previous one: all we needed to do was identify the code that differs between platforms and then provide a new interface for it. The only concern now is that two functions in our OpenGL version of the interface are empty and simply return success. I will look for a better way to handle this in the future and see if I can fix it. Switching to an index buffer was also not that complicated: OpenGL was easy enough since I’ve done it before, and D3D is not that different.