This blog series is part of the write-up assignments for our Game Engineering II class in the Master of Entertainment Arts & Engineering program at the University of Utah.
The main purpose of this assignment is to extract some of the code inside Graphics.cpp into two separate structs/classes that represent a mesh (geometry data) and an effect (shading).
The most important part of this assignment is to make the interfaces of our new mesh and effect classes platform-independent and easy to extend and maintain, and to decide what should be handled by the graphics class versus what should be handled by the separate mesh/effect classes.
The mesh should be a representation of geometry data of some sort, and we want to be able to draw it on the screen as easily as possible when needed. In OpenGL, drawing requires binding the vertex array (by its ID) to the current OpenGL state, after which we can draw by specifying the first index and the vertex count. Therefore, those parameters, along with the vertex data, should be set up during the initialization stage.
In our initialization stage, our mesh class takes a pointer to the actual vertex data, the number of triangles in that data, and the number of vertices per triangle. We then generate and bind the vertex array and buffer, and set up the vertex attribute with the proper stride and offset (since we're not dealing with textures or other attributes yet, we only care about the 3D coordinates).
Finally, we need to clean up the mesh when our engine shuts down. Here, we first unbind by binding vertex array 0, and then we delete the vertex array and buffer objects.
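Putting the initialize/draw/clean-up steps together, the mesh interface might be sketched like this. This is a rough illustration assuming OpenGL; the names (`cMesh`, `Initialize`, `Draw`, `CleanUp`) and the exact signatures are my own choices, not necessarily the ones from the actual assignment code, and it requires a live GL context to run:

```cpp
#include <GL/glew.h>

struct sVertex { float x, y, z; };

class cMesh
{
public:
    bool Initialize( const sVertex* i_vertexData,
                     unsigned int i_triangleCount, unsigned int i_verticesPerTriangle )
    {
        m_vertexCount = static_cast<GLsizei>( i_triangleCount * i_verticesPerTriangle );
        // Generate and bind the vertex array and buffer
        glGenVertexArrays( 1, &m_vertexArrayId );
        glBindVertexArray( m_vertexArrayId );
        glGenBuffers( 1, &m_vertexBufferId );
        glBindBuffer( GL_ARRAY_BUFFER, m_vertexBufferId );
        glBufferData( GL_ARRAY_BUFFER, m_vertexCount * sizeof( sVertex ),
                      i_vertexData, GL_STATIC_DRAW );
        // Set up the position attribute with the proper stride and offset
        // (only 3D coordinates; no texture coordinates or other attributes yet)
        glEnableVertexAttribArray( 0 );
        glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE,
                               sizeof( sVertex ), reinterpret_cast<void*>( 0 ) );
        return glGetError() == GL_NO_ERROR;
    }
    void Draw() const
    {
        // Bind the vertex array, then draw from the first index with the stored count
        glBindVertexArray( m_vertexArrayId );
        glDrawArrays( GL_TRIANGLES, 0, m_vertexCount );
    }
    void CleanUp()
    {
        // Unbind the vertex array before deleting it and the buffer
        glBindVertexArray( 0 );
        glDeleteBuffers( 1, &m_vertexBufferId );
        glDeleteVertexArrays( 1, &m_vertexArrayId );
    }
private:
    GLuint m_vertexArrayId = 0;
    GLuint m_vertexBufferId = 0;
    GLsizei m_vertexCount = 0;
};
```

The point of the interface is that callers only ever see `Initialize`/`Draw`/`CleanUp`; all of the GL-specific binding lives inside the class, so a Direct3D version could expose the same three calls.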
Our effect class should contain the shading information that can be bound and used to render our meshes at runtime.
Our effect class contains the shader handles for the vertex shader and fragment shader, along with a render state that holds some state parameters for the frame being rendered.
When initializing, we use the paths to the two shaders that were passed in to load the shaders and store their handles.
When binding, we use the stored shaders and also bind the render state to set some properties for the current mesh being rendered, such as culling, depth testing, etc.
When cleaning up, we simply release the shader resources through the handles stored inside our effect class.
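The effect interface described above might look like the following sketch, again assuming OpenGL. The class name, the `LoadShader()` helper, and the particular render-state flags shown are illustrative assumptions rather than the actual assignment code, and a GL context is required:

```cpp
#include <GL/glew.h>

class cEffect
{
public:
    bool Initialize( const char* i_vertexShaderPath, const char* i_fragmentShaderPath )
    {
        // LoadShader() is a hypothetical helper assumed to read the file at the
        // given path, compile it, and return the resulting shader handle
        m_vertexShaderId = LoadShader( i_vertexShaderPath, GL_VERTEX_SHADER );
        m_fragmentShaderId = LoadShader( i_fragmentShaderPath, GL_FRAGMENT_SHADER );
        m_programId = glCreateProgram();
        glAttachShader( m_programId, m_vertexShaderId );
        glAttachShader( m_programId, m_fragmentShaderId );
        glLinkProgram( m_programId );
        GLint linked = GL_FALSE;
        glGetProgramiv( m_programId, GL_LINK_STATUS, &linked );
        return linked == GL_TRUE;
    }
    void Bind() const
    {
        // Use the stored shaders...
        glUseProgram( m_programId );
        // ...and bind the render state (example flags: culling and depth testing)
        glEnable( GL_CULL_FACE );
        glEnable( GL_DEPTH_TEST );
    }
    void CleanUp()
    {
        // Release the shader resources through the stored handles
        glDeleteProgram( m_programId );
        glDeleteShader( m_vertexShaderId );
        glDeleteShader( m_fragmentShaderId );
    }
private:
    GLuint LoadShader( const char* i_path, GLenum i_type );  // assumed helper, not shown
    GLuint m_programId = 0;
    GLuint m_vertexShaderId = 0;
    GLuint m_fragmentShaderId = 0;
};
```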
Combining our new mesh and effect classes, we can easily render what we want.
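With interfaces like the ones described above (assuming hypothetical `cMesh` and `cEffect` classes exposing `Draw()` and `Bind()`), the per-frame render code could reduce to something like:

```cpp
// Illustrative only: bind the effect's shaders and render state,
// then draw the mesh's geometry
void RenderFrame( cEffect& i_effect, cMesh& i_mesh )
{
    i_effect.Bind();
    i_mesh.Draw();
}
```

This is the payoff of the separation: the graphics system only orchestrates which effect and mesh to use, while the binding details stay hidden inside the two classes.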
Since shaders are harder to debug than normal code, there are some tools that can alleviate this pain.
For Direct3D code, we can use the Visual Studio graphics debugging tool. We can use PrtScrn to capture frames; inside each frame, we can see all sorts of details, such as the shaders used for a draw call or how a pixel is rendered to produce its resulting color. We can also see the results of the different stages of the render pipeline.
Before running the GPU capture for OpenGL, I changed the vertices to render a different shape. Since we cannot use the Visual Studio built-in tool for OpenGL programs, we need to download another tool, such as RenderDoc, which is used here. We can use F12 or PrtScrn to capture frames and look into their details. We can easily see detailed information about the vertices and the actual gl_Position in each draw call, which is extremely powerful and useful for debugging.
I made a diamond shape (sort of).
This assignment forces us to understand and abstract code, and to create a platform-independent interface for creating and using meshes and effects. That can be tricky given that I'm not very familiar with the different graphics library APIs. However, I think the practice of thinking at a higher level and providing an easy way for others, or for other parts of the engine, to use the rendering system is probably more important.
Special thanks to
for pointing out the problem when I was trying to set up the solution on my own laptop.