This blog series is part of the write-up assignments for my Real-Time Game Rendering class in the Master of Entertainment Arts & Engineering program at the University of Utah. The series will focus on C++, the Direct3D 11 API, and HLSL.

In this post, I will talk about why some rendering results that we see aren’t actually what we had in mind, and also how to achieve color correctness when rendering.

## Human Perception

To discuss the topic of color correctness, we need to first understand that human eyes don’t perceive colors and brightness linearly. In a low-light environment, human vision relies mainly on the rod cells in the retina. While rod cells are more sensitive to light than cone cells, they play little role in color vision, which is the main reason why colors are much less apparent in dim light, and not apparent at all at night.

This means that if we have a picture whose gradient is perfectly interpolated from 0 to 255, we will actually perceive more bright parts than dark parts (Fig. 1), even though the underlying color values are interpolated evenly, as Fig. 2 shows.

This happens because of the nonlinear transformation between linear light intensities and the sRGB values that are actually stored. From the curves below, you can see that the encoding allocates more of its range to the lower values, which is why we see more dark area in the images above.

## The Problem

When we perform any operations in the shaders, we assume that the values are in linear space; when they aren’t, the math produces incorrect results. Take alpha blending for example: we imagine we are blending two values half-and-half with 0.5 * A + 0.5 * B. However, if A and B are gamma-encoded sRGB values, we are not actually taking half of each value’s light contribution, because of the gamma correction curve.
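To make the error concrete, here is a small numeric sketch (my own illustration, not engine code) that uses the common gamma-2.2 approximation of the sRGB curve and blends black and white both the naive way and the linear-space way:

```cpp
#include <cassert>
#include <cmath>

// Gamma-2.2 approximation of the sRGB curve, for illustration only.
float Decode(const float i_encoded) { return std::pow(i_encoded, 2.2f); }
float Encode(const float i_linear) { return std::pow(i_linear, 1.0f / 2.2f); }

// Naive blend: operate directly on the encoded (stored) values.
float BlendNaive(const float i_a, const float i_b)
{
	return 0.5f * i_a + 0.5f * i_b;
}

// Correct blend: decode to linear light, average, then re-encode.
float BlendLinear(const float i_a, const float i_b)
{
	return Encode(0.5f * Decode(i_a) + 0.5f * Decode(i_b));
}
```

Blending black (0.0) and white (1.0) naively yields an encoded 0.5, which the display shows as only about 0.5^2.2 ≈ 0.22 of white’s light; the linear-space blend instead yields an encoded value of about 0.73, which really does emit half as much light as white.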

To solve this problem, we need to make sure that shaders only handle linear space color values. But what about the art assets and the colors in our game code?

## Linear or sRGB

Usually, art assets used in the game that are exported from software such as Maya or Photoshop store their colors in sRGB space. This means that we’ll need to convert the values into linear space before sending them to Direct3D. However, we can decide for ourselves whether other colors, such as the light color or the material color, should be specified in linear or sRGB space.

To reduce confusion and keep things consistent, I decided to assume that any color specified by a human, including mesh color, texture color, and others, will be in sRGB space. Therefore, I will need to transform all of them before sending them to Direct3D!

We still want to keep the color values in sRGB space in our human-readable Lua files so we can debug them against the values in our content-creation software. Therefore, we transform the values into linear space when building the binary files that the engine actually uses.

By changing the back buffer format from UNORM to UNORM_SRGB, we are telling the graphics hardware to correctly convert the linear values that our shaders output into sRGB before writing them into the back buffer.
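As a sketch, the relevant change is the swap chain’s buffer format. (Note: older blt-model swap chains accept an _SRGB format directly; the newer flip-model swap chains do not, and instead require creating the back buffer’s render-target view with an _SRGB format.)

```cpp
// Blt-model swap chain: the back-buffer format itself can be _SRGB.
DXGI_SWAP_CHAIN_DESC swapChainDesc = {};
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
// ... remaining fields unchanged from the existing swap-chain setup ...
```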

## Texture Building

Now we need to make sure that textures are built with the correct format. If a texture has channels that represent actual color values, then we want to convert it from sRGB space to linear. However, we wouldn’t want to do the same for textures such as normal maps or roughness maps. Therefore, I needed to come up with a system that provides enough information to the texture builder so that it knows what to do.

To achieve this, I set up a new enum to represent different texture usages, which contains only two values at the moment. We no longer specify directly which textures need to be built; instead, the material file registers them with the texture builder, passing the correct usage as a command-line argument. We can then check that argument inside the texture builder and override the format to sRGB when needed.
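The identifiers below are illustrative rather than the engine’s actual names; the idea is a small usage enum plus a format override in the texture builder, keyed off the registered command-line argument:

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical usage values that the material file registers
// with the texture builder.
enum class eTextureUsage
{
	Color,	// channels are sRGB color -> build as an _SRGB format
	Data,	// normals, roughness, etc. -> keep a linear UNORM format
};

// Hypothetical mapping from the command-line argument to a
// DXGI-style format name, overriding to sRGB only for color textures.
std::string ChooseFormat(const char* const i_usageArgument)
{
	if (std::strcmp(i_usageArgument, "color") == 0)
	{
		return "BC7_UNORM_SRGB";
	}
	return "BC7_UNORM";
}
```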

## Transformation Formula

We can follow the sRGB transformation formulas on Wikipedia, which can be converted into C++ code fairly easily.
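For reference, the decode curve is piecewise: linear = srgb / 12.92 when srgb ≤ 0.04045, and ((srgb + 0.055) / 1.055)^2.4 otherwise, with encoding as its inverse. A direct C++ translation might look like this (function names are my own):

```cpp
#include <cassert>
#include <cmath>

// The standard piecewise sRGB transfer functions,
// operating on normalized [0, 1] channel values.
float SrgbToLinear(const float i_srgb)
{
	return (i_srgb <= 0.04045f)
		? i_srgb / 12.92f
		: std::pow((i_srgb + 0.055f) / 1.055f, 2.4f);
}

float LinearToSrgb(const float i_linear)
{
	return (i_linear <= 0.0031308f)
		? i_linear * 12.92f
		: 1.055f * std::pow(i_linear, 1.0f / 2.4f) - 0.055f;
}
```

The two functions are inverses of each other, which is easy to sanity-check with a round trip.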

## Result

Below is a comparison of the scene with and without correct linear color handling. The results match our expectations well: with correct color handling, we expect a more drastic transition from the bright side to the dark side, which you can see in the earth pictures shown below.

As for the color gradient: if the engine is color correct, the interpolation should be smooth and even, in contrast to the gradient in the first picture, which maintains the same color over a larger area and then changes very suddenly.