This blog series is part of the write-up assignments for my Real-Time Game Rendering class in the Master of Entertainment Arts & Engineering program at the University of Utah. The series focuses on C++, the Direct3D 11 API, and HLSL.
In this post, I will talk about how I added 2D sprite support to my rendering engine.
To display 2D sprites on screen, we don’t really need an actual “mesh” to represent them. We can simply use a quad with only 4 vertices to draw any 2D UI we need. To put it differently, we only need one vertex buffer and one vertex input layout to draw all of our 2D UI! Below is my singleton sprite class, which is instanced only once throughout the whole game (notice the static GetSprite() function).
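The singleton pattern described above can be sketched as follows; the class and member names are illustrative, not the engine’s actual API:

```cpp
// Sketch of a singleton sprite (quad) class; names are illustrative.
class cSprite
{
public:
    // The single shared quad instance, created on first use
    static cSprite& GetSprite()
    {
        static cSprite s_instance; // constructed once, lives for the whole game
        return s_instance;
    }

    // A singleton should not be copied
    cSprite(const cSprite&) = delete;
    cSprite& operator=(const cSprite&) = delete;

private:
    cSprite() = default;
    // In the real class, the shared vertex buffer and input layout live here:
    // ID3D11Buffer* m_vertexBuffer = nullptr;
    // ID3D11InputLayout* m_inputLayout = nullptr;
};
```

Every call to `GetSprite()` returns a reference to the same object, so all sprites in the game share one quad.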
For the actual vertex data, I am assuming that the values passed into the shaders are already normalized positions in projected space. Therefore, I can just use R8G8_SNORM for vertex positions (range [-1, 1]) and R8G8_UNORM for texture UVs (range [0, 1]).
Inside the sprite class’s initialization function, we need to initialize the vertex buffer with some data. However, we have no idea what kinds, sizes, or positions the sprites of this game will have, so we need to keep it general. One reasonable approach is to make the quad cover the whole screen, which looks like the data shown below because we are using SNORM and UNORM for positions and UVs respectively. We can manipulate the actual display positions, orientations, and scales later with data inside our constant buffer.
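A minimal sketch of that full-screen quad data, assuming a vertex struct with the two formats above (the struct layout and strip ordering are my illustration, not necessarily the engine’s exact bytes):

```cpp
#include <cstdint>

// One vertex of the full-screen quad:
// position as R8G8_SNORM, UV as R8G8_UNORM.
struct sSpriteVertex
{
    int8_t  x, y; // SNORM: 127 -> 1.0f, -127 -> -1.0f
    uint8_t u, v; // UNORM: 255 -> 1.0f, 0 -> 0.0f
};

// Four vertices in triangle-strip order covering the whole screen.
// In D3D, clip space has +y up while texture space has +v down,
// so the top of the screen (y = +1) pairs with v = 0.
const sSpriteVertex s_fullScreenQuad[4] =
{
    { -127, -127,   0, 255 }, // bottom-left
    { -127,  127,   0,   0 }, // top-left
    {  127, -127, 255, 255 }, // bottom-right
    {  127,  127, 255,   0 }, // top-right
};
```

Each vertex is only 4 bytes, so the whole quad costs 16 bytes of vertex data.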
Now we have a quad to draw our sprites on, but which textures are we using? What about positions and orientations? To tie all of this together, I created another sprite object class. This class holds a handle to a material, which contains the texture we need, along with other data such as the scale, the position, and the render order (layer). Note that the scale is relative to the whole screen, and the position is in projected space. This leaks lower-level engine concepts upward into game code and can cause other problems that will be discussed later.
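A sketch of such a sprite object, with illustrative field names (`tMaterialHandle` stands in for whatever handle type the engine uses):

```cpp
#include <cstdint>

// Hypothetical stand-in for the engine's material handle type
using tMaterialHandle = uint32_t;

// Per-sprite data; field names are illustrative
struct sSpriteObject
{
    tMaterialHandle material;   // texture + shaders to draw with
    float positionX, positionY; // in projected (clip) space, [-1, 1]
    float scaleX, scaleY;       // relative to the whole screen (1.0 = full screen)
    float rotation;             // radians
    uint8_t renderOrder;        // layer: higher draws later, i.e. on top
};
```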
Now let’s talk about the sprite shaders and how the engine code passes information to the graphics hardware. We already have one matrix for the local-to-world transformation and another for local-to-projected. However, when transforming UI we don’t really care about the third dimension, so using a full 4×4 matrix is wasteful; a 2×2 matrix is all we need to perform the rotation. To achieve this, we can pack all of the information needed from C++ into the shader constants.
Notice the seemingly arbitrary indices for the position and scaling values. They are set this way because HLSL packs constant-buffer matrices in column-major order by default. By placing them in those positions inside the matrix, we can conveniently access them in the shader code with something like g_transform_localToWorld.xy. More importantly, we can pack one column into a 2×2 matrix to perform our rotation!
```hlsl
// Scale, rotate, and then translate
float2 newPos = Transform((float2x2)g_transform_localToWorld, i_vertexPosition_local * g_transform_localToWorld.zw)
    + g_transform_localToWorld.xy;
```
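The same scale → rotate → translate math can be sketched on the CPU side in plain C++; the types and the `Transform` helper below are my stand-ins for the engine’s shader helpers, used only to show the arithmetic:

```cpp
#include <cmath>

struct float2 { float x, y; };

// 2x2 rotation matrix stored by columns, matching column-major packing
struct float2x2 { float m00, m10, m01, m11; };

float2x2 MakeRotation(float radians)
{
    const float c = std::cos(radians), s = std::sin(radians);
    // columns: (c, s) and (-s, c)
    return { c, s, -s, c };
}

// Matrix * vector, mirroring the shader's Transform() helper
float2 Transform(const float2x2& m, const float2& v)
{
    return { m.m00 * v.x + m.m01 * v.y,
             m.m10 * v.x + m.m11 * v.y };
}

// Scale, rotate, and then translate, in that order
float2 TransformSprite(const float2& localPos, const float2& scale,
                       float rotation, const float2& translation)
{
    const float2 scaled  = { localPos.x * scale.x, localPos.y * scale.y };
    const float2 rotated = Transform(MakeRotation(rotation), scaled);
    return { rotated.x + translation.x, rotated.y + translation.y };
}
```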
Compared to the other meshes in the scene, sprite objects will always be directly in front of the camera and cover any other objects in the world. Therefore, we can separate them into different render commands and draw them after all of the meshes in the scene have been drawn. This lets us turn off depth testing and depth writes for our sprites, which means whichever sprite is drawn last ends up on top of the other sprites. To gain more control over this, I pack the render order stored in the sprite objects into the render commands and use it as the primary sort key.
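One way to make the layer the primary sort factor is to pack it into the most significant bits of a single sort key; this is a sketch under that assumption, with illustrative names:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of a sortable render command; real commands carry draw data too
struct sRenderCommand
{
    uint64_t sortKey;
};

// Put the layer in the top bits so it dominates everything else in the key
uint64_t MakeSpriteSortKey(uint8_t layer, uint64_t otherBits)
{
    return (static_cast<uint64_t>(layer) << 56) | (otherBits & 0x00FFFFFFFFFFFFFFull);
}

// Lower keys draw first, so lower layers end up underneath higher ones
void SortCommands(std::vector<sRenderCommand>& commands)
{
    std::sort(commands.begin(), commands.end(),
              [](const sRenderCommand& a, const sRenderCommand& b)
              { return a.sortKey < b.sortKey; });
}
```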
If we look at the GPU timeline when rendering the sprites, we set the vertex buffer for our sprites once before the first sprite is rendered, and set the primitive topology to triangle strip instead of triangle list. After setting those correctly, we only need to call Draw(4, 0) and the quad draws correctly.
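The D3D11 calls for that step look roughly like this (a sketch assuming the device context and the quad’s vertex buffer already exist; this fragment requires `d3d11.h` and a Windows build environment):

```cpp
#include <d3d11.h>

// Sketch of the sprite draw; names are illustrative
void DrawSpriteQuad(ID3D11DeviceContext* context, ID3D11Buffer* vertexBuffer)
{
    // Two bytes for position (R8G8_SNORM) + two bytes for UV (R8G8_UNORM)
    const UINT stride = 4;
    const UINT offset = 0;
    // Only needs to happen before the first sprite of the frame
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    // Four vertices, starting at vertex 0
    context->Draw(4, 0);
}
```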
Below is the result of my new sprite system.
Since we’re setting the positions directly in projected space and storing them inside the sprite object, the UI will get squeezed or stretched if the screen resolution changes. Therefore, we need a method to address this issue.
Resolution Independent Sprite Size
Instead of directly using a ratio relative to the screen size, we can store a fixed desired size in the sprite object and calculate the scale from the current screen resolution. This way, the sprites maintain a fixed on-screen size regardless of resolution.
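Since the sprite’s scale is relative to the whole screen, the resolution-independent scale is just the ratio of the desired pixel size to the current resolution. A minimal sketch (names are illustrative):

```cpp
struct sScale { float x, y; };

// Derive the screen-relative scale from a desired pixel size and the
// current resolution, so the sprite keeps the same size in pixels
sScale ComputeSpriteScale(float desiredWidthPx, float desiredHeightPx,
                          float screenWidthPx, float screenHeightPx)
{
    // 1.0 means "covers the whole screen", so the scale is the pixel ratio
    return { desiredWidthPx / screenWidthPx,
             desiredHeightPx / screenHeightPx };
}
```

For example, a sprite that should always be 960×540 pixels gets a scale of 0.5×0.5 at 1920×1080, and a larger scale at lower resolutions, so its pixel size stays constant.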