This blog series is part of the write-up assignments for my Real-Time Game Rendering class in the Master of Entertainment Arts & Engineering program at the University of Utah. The series focuses on C++, the Direct3D 11 API, and HLSL.

In this post, I will talk about how I added texture support to our rendering pipeline and applied textures to my meshes.

Texture File Processing

The first thing I need to do is add the texture path to my material file and modify my material builder so that it takes the texture into account. By doing this, we don't need to explicitly specify which textures the texture builder should build for us, since each texture gets registered during the material building process.

MaterialTexturePath.JPG
Texture path in material Lua file
MAterialBuilderRegisterTextureToBuilt.JPG
Material asset register texture to build

Now that the material format has more entries than before, the read/write code needs to change accordingly. Previously, I only had a color struct and a single string path; since the string path sat at the end of the file, I didn't need to worry about its length. Now that there are two string paths, I also need to write each string's length to the binary file so that, at load time, we can quickly point pointers to the correct memory locations.

MaterialBuilderWriteTexturePath.JPG
Writing texture path to binary format
MAterialReadTexturePath.JPG
Material class reading the binary format
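To make the two-string layout concrete, here is a minimal sketch of what the write/read pair might look like. The struct and function names here are my own, not the engine's actual code, and the exact field order is an assumption:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical color struct standing in for the material's constant data.
struct sColor { float r, g, b, a; };

// Append a length-prefixed string so the loader can jump straight to the
// next field without scanning for a terminator.
void WriteString(std::vector<uint8_t>& io_buffer, const std::string& i_path)
{
    const uint16_t length = static_cast<uint16_t>(i_path.size());
    const uint8_t* lengthBytes = reinterpret_cast<const uint8_t*>(&length);
    io_buffer.insert(io_buffer.end(), lengthBytes, lengthBytes + sizeof(length));
    io_buffer.insert(io_buffer.end(), i_path.begin(), i_path.end());
    io_buffer.push_back('\0'); // keep paths usable as C strings after load
}

std::vector<uint8_t> WriteMaterial(const sColor& i_color,
                                   const std::string& i_effectPath,
                                   const std::string& i_texturePath)
{
    std::vector<uint8_t> buffer;
    const uint8_t* colorBytes = reinterpret_cast<const uint8_t*>(&i_color);
    buffer.insert(buffer.end(), colorBytes, colorBytes + sizeof(i_color));
    WriteString(buffer, i_effectPath);
    WriteString(buffer, i_texturePath);
    return buffer;
}

// At load time the stored lengths let us point directly into the loaded block.
void ReadMaterial(const uint8_t* i_data, sColor& o_color,
                  const char*& o_effectPath, const char*& o_texturePath)
{
    std::memcpy(&o_color, i_data, sizeof(o_color));
    const uint8_t* current = i_data + sizeof(o_color);
    uint16_t length;
    std::memcpy(&length, current, sizeof(length));
    o_effectPath = reinterpret_cast<const char*>(current + sizeof(length));
    current += sizeof(length) + length + 1; // skip length, characters, null
    std::memcpy(&length, current, sizeof(length));
    o_texturePath = reinterpret_cast<const char*>(current + sizeof(length));
}
```

The null terminator is technically redundant once lengths are stored, but it lets the in-place pointers be handed to any API that expects C strings without copying.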

Vertex Data and Input Layout

Even though everything is built and ready, the vertex data and shaders are still unaware of the texture's existence. First, I need to change our vertex format, and then add the correct description to the D3D input layout so that the graphics API understands how to interpret the vertex data.

NewVertexFormat.JPG
New vertex struct in C++
LayoutDescTextureUV.JPG
New description for vertex UV
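As a sketch, assuming a vertex that carries a position, an 8-bit color, and the new UV pair, the struct and the offset its layout description must match could look like this (the field layout here is an assumption, not the engine's exact format):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical vertex format with the new texture coordinates appended.
// (This section uses 32-bit floats for UVs; a later section switches to halves.)
struct sVertex
{
    float x, y, z;      // POSITION
    uint8_t r, g, b, a; // COLOR, 8-bit normalized
    float u, v;         // TEXCOORD0, the new entry
};

// The matching D3D11 input-layout element for the UV would then be roughly:
//   { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0,
//     offsetof(sVertex, u), D3D11_INPUT_PER_VERTEX_DATA, 0 }
// AlignedByteOffset must agree with the struct layout exactly:
static_assert(offsetof(sVertex, u) == 16,
              "UV offset must match the input layout description");
```

Using `offsetof` for `AlignedByteOffset` (or a `static_assert` like the one above) keeps the layout description from silently drifting out of sync when the struct changes.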

Our C++ code and the D3D API now know about the new texture coordinate data, but the shaders still have no clue about it. First, I need to add the input layout to our HLSL shader code so that the GPU knows what to expect in the data. Next, I declare some helper functions for platform-independence purposes. After that, I register an actual Texture2D and a sampler state. Once these are done, I can start sampling from the texture in my shaders.

ShaderInputLayout
Shader code input layout
DecalreShaderTextureFunctions.JPG
Texture functions in shader code
DeclareTextureandSamplerState.JPG
Texture and sampler state declaration
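For illustration, a platform-independent declaration block might look something like the sketch below. The macro and variable names are hypothetical, not necessarily the ones used in class, and the assumed platform defines (`EAE6320_PLATFORM_D3D` / `EAE6320_PLATFORM_GL`) are my own naming:

```hlsl
// Declare the texture and sampler per platform, and wrap sampling in a
// macro so the same shader source can be shared between D3D and OpenGL.
#if defined( EAE6320_PLATFORM_D3D )
    Texture2D g_diffuseTexture : register( t0 );
    SamplerState g_diffuseSamplerState : register( s0 );
    #define SampleTexture2d( i_texture, i_samplerState, i_uv ) \
        i_texture.Sample( i_samplerState, i_uv )
#elif defined( EAE6320_PLATFORM_GL )
    uniform sampler2D g_diffuseTexture;
    // GLSL bundles the sampler state with the texture, so the extra
    // argument is simply ignored on this platform.
    #define SampleTexture2d( i_texture, i_samplerState, i_uv ) \
        texture2D( i_texture, i_uv )
#endif

// In the fragment shader the call site is then identical on both platforms:
//   float4 sampledColor = SampleTexture2d( g_diffuseTexture, g_diffuseSamplerState, i_uv );
```

The key design point is that the call site takes a sampler argument even though OpenGL discards it, so shader authors never need platform `#if`s in their own code.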

Results

WholeTextureScene
A textured Sting from The Lord of the Rings
PlanetsTexture
Textured planet spheres

Note that I am using some very low-poly spheres in the scene; if you look closely, you can see the polygonal silhouettes. Still, even though the meshes are super low-resolution, with the help of textures and interpolation the spheres look a lot like actual planets.

One problem I encountered was that, before converting the textures (JPG or PNG) into .tga files and adding them to my engine, I had to rotate or flip some of the images so that they would map to texel space correctly. It turned out that Maya uses a different UV coordinate system from D3D: Maya places the UV origin at the bottom-left corner, while D3D's texel space puts (0, 0) at the top-left. The problem was fixed after I changed v to 1 − v inside my MeshBuilder before it exports the final binary (platform-dependent) file.
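The fix itself is a one-liner; here is a minimal sketch, assuming UVs are stored as a pair of floats (the struct and function names are illustrative, not my MeshBuilder's actual code):

```cpp
// Flip V when exporting, since Maya's UV origin is the bottom-left corner
// while D3D's texel space puts (0, 0) at the top-left.
struct sTextureCoord { float u, v; };

sTextureCoord ConvertMayaUvToD3d(const sTextureCoord i_uv)
{
    return { i_uv.u, 1.0f - i_uv.v };
}
```

Doing the flip once at build time keeps the runtime free of per-vertex fix-ups.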

Reduce Texture Coordinates Memory Usage

Currently, I am using 32-bit single-precision floating-point numbers for the texture UV coordinates. In most cases, however, 16-bit half-precision floats are sufficient, and players won't be able to tell the difference. Therefore, I am going to convert my texture coordinates to use 16-bit half-precision floats instead.

To make this work, I'll need to change the MeshBuilder, the vertex layouts for D3D and OpenGL, and the vertex format. After this is done, we can check the mesh file sizes and see whether they got smaller.
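Here is a minimal sketch of the float-to-half conversion in the MeshBuilder, handling only the normal-range cases that UVs in [0, 1] fall into (no denormal output). A real build tool might use a library routine instead, and the function name is my own:

```cpp
#include <cstdint>
#include <cstring>

// Convert a 32-bit float (1 sign, 8 exponent, 23 mantissa bits) into a
// 16-bit half (1 sign, 5 exponent, 10 mantissa bits) by re-biasing the
// exponent (127 -> 15) and truncating the mantissa.
uint16_t FloatToHalf(const float i_value)
{
    uint32_t bits;
    std::memcpy(&bits, &i_value, sizeof(bits));
    const uint32_t sign = (bits >> 16) & 0x8000;
    const int32_t exponent =
        static_cast<int32_t>((bits >> 23) & 0xff) - 127 + 15;
    const uint32_t mantissa = (bits >> 13) & 0x3ff;
    if (exponent <= 0)
        return static_cast<uint16_t>(sign); // flush tiny values to zero
    if (exponent >= 31)
        return static_cast<uint16_t>(sign | 0x7c00); // overflow to infinity
    return static_cast<uint16_t>(sign | (exponent << 10) | mantissa);
}
```

With only 10 mantissa bits, a half float gives roughly three decimal digits of precision across [0, 1], which is why the visual difference in the UVs is imperceptible.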

MeshBuilderConvertingTextureUVHalfPrecision.JPG
Mesh builder outputting half-precision floating point numbers
uint16TextureUV.JPG
New texture UV in vertex format
NewHalfPrecisionLayoutUVDesc.JPG
New DXGI format for D3D11 input layout
NewHalfPrecisionLayoutUVGLDESC.JPG
Using GL_HALF_FLOAT instead of GL_FLOAT in OpenGL

Now that I have everything set up, let's take a look at the effect and the file sizes shown below. You can see that the sizes were reduced by almost 10%! Not bad at all! We can also look at the two pictures of the Earth at the bottom, where one uses 16-bit UVs and the other uses 32-bit UVs. I swear they really use different-precision floats for their UVs, but seriously, I'm sure no one will be able to tell.

MeshBinFileSizedWith32Bit.JPG
Mesh files with 32-bit texture UVs
MeshBinFileSizedWith16Bit.JPG
Mesh files with 16-bit texture UVs
EarthCloseup32Bits.JPG
Earth with 32-bit UVs
EarthCloseup16Bits.JPG
Earth with 16-bit UVs