This blog series is part of the write-up assignments for my Real-Time Game Rendering class in the Master of Entertainment Arts & Engineering program at the University of Utah. The series will focus on C++, the Direct3D 11 API, and HLSL.
In this post, I will demonstrate the visual differences introduced by enabling or disabling mipmaps, alpha cutoff, and different texture filtering modes.
By setting the sampler state bits in our game, we can change how textures are filtered. Currently, our filtering option is set to D3D11_FILTER_MIN_MAG_MIP_LINEAR for D3D11 and GL_LINEAR_MIPMAP_LINEAR for OpenGL, both of which use linear interpolation for minification, magnification, and mip-level sampling.
Below are the results of sampling with nearest-neighbor and linear interpolation. You can see that linear sampling makes the texture look smooth and coherent from close up to far away. Nearest-neighbor point sampling, on the other hand, seems to preserve more detail, but it also looks more jittery and can suffer from aliasing when a single fragment covers multiple texels, such as at the rear end.
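As a rough CPU-side sketch of what the sampler hardware is doing, here is point versus linear sampling for a one-channel 1D texture. The helper names and the clamp addressing are my own for illustration, not part of the project:

```cpp
#include <cmath>
#include <vector>

// Texel centers sit at (i + 0.5) / N in normalized UV space.

// Nearest-neighbor (point) sampling: snap to the closest texel.
float SampleNearest(const std::vector<float>& texels, float u)
{
    const int n = static_cast<int>(texels.size());
    int i = static_cast<int>(u * n);  // floor for u in [0, 1)
    if (i < 0) i = 0;
    if (i > n - 1) i = n - 1;         // clamp addressing mode
    return texels[i];
}

// Linear sampling: blend the two nearest texels by distance,
// which is what smooths the texture out.
float SampleLinear(const std::vector<float>& texels, float u)
{
    const int n = static_cast<int>(texels.size());
    float x = u * n - 0.5f;           // shift so texel centers land on integers
    int i0 = static_cast<int>(std::floor(x));
    float t = x - static_cast<float>(i0);
    int i1 = i0 + 1;
    if (i0 < 0) i0 = 0;               // clamp at the edges
    if (i1 > n - 1) i1 = n - 1;
    return texels[i0] * (1.0f - t) + texels[i1] * t;
}
```

Between two texel centers, point sampling jumps abruptly from one value to the other, while linear sampling ramps smoothly between them.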
Next, let’s take a look at how mipmaps change the way textures look. Mipmaps are also really useful for eliminating aliasing. Instead of skipping mipmap generation during the build process, I set the mip level count to 1 so that the sampler never switches to another mip level, which lets us examine the difference between enabling and disabling mipmaps.
Visual Studio’s graphics debugger provides the option to inspect all of the mipmap levels that we generated.
Sometimes we want to render textures with sharper details (such as tree leaves). Instead of using standard alpha blending, we can simply discard fragments whose alpha is below a certain threshold. In the pictures below, you can see that the alpha-blended version looks blurrier, while the cutoff version displays sharper outlines.
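The two behaviors can be contrasted with a small single-channel sketch: blending always weights the source by its alpha, while cutoff keeps or discards the whole fragment (in HLSL the discard is typically done with clip(alpha - cutoff)). The 0.5 threshold below is just a common choice, not the project's actual value:

```cpp
// Standard alpha blending for one fragment over a background value:
// out = srcAlpha * src + (1 - srcAlpha) * dst.
// Every partially transparent fragment contributes, so edges blur.
float Blend(float srcColor, float srcAlpha, float dstColor)
{
    return srcAlpha * srcColor + (1.0f - srcAlpha) * dstColor;
}

// Alpha cutoff: the fragment either fully survives or is discarded,
// which is what produces the hard, sharp outline.
bool AlphaCutoffKeeps(float srcAlpha, float cutoff = 0.5f)
{
    return srcAlpha >= cutoff;
}
```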
We can achieve some simple animation effects with nicely authored tileable textures, and use them for something like lava, or a waterfall seen from far away. Changing the UV values according to the elapsed simulation time is the easiest way to go. For more complicated effects, we can also have the artists author uneven UV values, which makes different parts of the texture progress at different rates. To make things even more interesting, we can introduce sine and cosine functions. We can also make a texture rotate, as in the black hole example shown below. However, because the texture was authored to be viewed at a certain angle, it looks a bit odd at some orientations as it rotates.
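Both effects boil down to a tiny bit of per-vertex UV math. Here is a hedged sketch of each as a plain function; the scroll speed parameter and the choice to rotate around the texture center (0.5, 0.5) are my assumptions for illustration:

```cpp
#include <cmath>

struct Float2 { float u, v; };

// Scrolling UVs: offset by a velocity times the elapsed simulation time.
// With a tileable texture and wrap addressing, this loops seamlessly.
Float2 ScrollUV(Float2 uv, float seconds, Float2 scrollSpeed)
{
    return { uv.u + scrollSpeed.u * seconds,
             uv.v + scrollSpeed.v * seconds };
}

// Rotating UVs: rotate around the texture center (0.5, 0.5) so the
// image spins in place, as in the black hole example.
Float2 RotateUV(Float2 uv, float radians)
{
    const float c = std::cos(radians);
    const float s = std::sin(radians);
    const float x = uv.u - 0.5f;  // move center to the origin
    const float y = uv.v - 0.5f;
    return { 0.5f + c * x - s * y,   // standard 2D rotation,
             0.5f + s * x + c * y }; // then move back
}
```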
(Update: since it’s hard to tell whether the mesh or the texture is rotating, I added another video without alpha blending so that the effect can be seen more clearly.)
With a sprite sheet as a texture, we can author the shader to sample different parts of the texture at a certain rate, which makes it look like an animation is playing. To achieve this, we calculate which sprite should be rendered based on the current time, and then convert that to the correct UV values. Since the UVs are linearly interpolated across the triangle, we can do everything in the vertex shader, which requires only four calculations (for a quad), compared to who knows how many in the fragment shader.
While this is cheap and nice, we don’t have much control over playback (play, pause, rate, etc.). It would be good for something like a flock of birds flying in the far distance that will just keep flapping their wings until the end of time and doesn’t need to be high-res or more dynamic than that.