Day Night Cycle using LUT in Fragment Shader of Materials

Shahriar Shahrabi
19 min read · Jun 6, 2023


I will cover how to use look-up tables, aka LUTs, in the fragment shader of an object's material to make a day-night transition or adjust the feel of your scene. The example code is written for Blender and tested in Unreal, but you can just as easily use it in Unity.

As usual you can find the files for this on my Github: https://github.com/IRCSS/Day-Night-LUT-Shader

And the demo scene is available on my Sketchfab: a daytime version, and a nighttime version.

For Puzzling Places we wanted to try out a puzzle where the location switches between day and night as the player progresses through the puzzle. We could of course have two different textures, one for day and one for night, and interpolate between them as the player progresses. However, that would mean two 8K textures sampled in the fragment shader at all times. That is both a huge memory cost and a lot of bandwidth pressure.

Luckily you don’t need a second texture. You can use Look Up Tables to change the look and feel of your scene.

Look-up tables are conversion maps which take your standard texture colors and transform them into a modified color space. Instead of applying a transformation matrix to the color, it is as if you had baked that transformation into a table: you convert your transformation into a series of samples, then reconstruct the original transformation function through sampling and interpolation.
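As a one-dimensional analogy (a hypothetical sketch of my own, not from the article's files): instead of evaluating a tone curve per pixel, you can bake it into a tiny strip once and read the result back with a single texture fetch.

uniform sampler2D uCurveLUT; // assumed: a 256x1 strip holding the baked curve

float applyBakedCurve(float x)
{
    // Instead of computing e.g. pow(x, 1.0 / 2.2) here every frame,
    // read the precomputed result; x in [0,1] maps to the strip's U axis.
    return texture(uCurveLUT, vec2(x, 0.5)).r;
}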

In artistic terms, you have baked a bunch of adjustment layers into a table that says how each color should look at the end.

Because most color transformations we do on a scene are linear and not high frequency, we can get away with a very low resolution LUT. Although our standard 8-bit-per-channel color has a depth of 256 possible variations, we can use as few as 16 samples per channel to successfully cover most color transformations. For non-mathsy people, I might have already lost you with the low frequency and linear color transformations, but hopefully things will be clearer by the end.
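To put that in numbers: a 16x16x16 LUT laid out as a 256x16 strip is only 4,096 texels, while a single 8K (8192x8192) texture is about 67 million texels, so the LUT is roughly 16,000 times smaller than a second full-size texture would be.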

First let’s get the artistic side out of the way and cover how we can create the LUT itself, before we move on to the shader bits.

Generating a Night Time Scene

We need to figure out how to change the colors of a scene so that the player experiences the same scene as night time.

Your first thought might be this: make everything darker! This is what a lot of modern films do (which also makes good-looking CG cheaper), and it is a horrible idea. I frequently find myself staring at frames of a film, really struggling to make out what is going on, because it is so damn dark.

Look at these two scenes from Disney's Aladdin.

The night version is darker for sure. But not by that much. And it is clearly a night scene. As a matter of fact, we can check its values:

The night version is nearly the same as the day version. If we look at the histograms this becomes even more evident.

The first image is the night version's histogram, the second the day version's

The night version doesn’t have a lot of bright values (or highlights). Its total range is more or less condensed to the first half of the value range. But if you look at the middle of the distribution curve, you will see that the actual median has only shifted by 10 percent! So on average the image is only 10 percent darker. And more importantly most of the pixels are still above 10 percent brightness! It is also worth noticing that there are barely any really dark pixels and there are almost no pure blacks.

For comparison, here is a scene from the movie Batman. The image is way too dark; the median is probably around 5 percent brightness.

An even better example is this night scene from Nausicaä of the Valley of the Wind. Its values are comparable to a day scene's, while it still reads as a night scene.

The trick is the following: make everything 10 to 20 percent darker, but make sure you are only moving the middle of the curve. You are making the average pixels darker, but you are not pushing the dark pixels to be even darker! If you do that, you will push a lot of pixels to pure black, and that is rarely desired.
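To make the idea concrete, here is a minimal sketch of such a curve as a shader function (my own illustration of the principle, not the article's actual grade): brighter values get pulled down by roughly 15 percent, while the darkest values are left alone.

// Hypothetical night-darkening curve: darken midtones and highlights,
// but leave near-black values untouched so nothing is crushed to pure black.
vec3 nightCurve(vec3 color)
{
    // Luminance-based weight: 0 for dark pixels, 1 above the shadows.
    float lum    = dot(color, vec3(0.2126, 0.7152, 0.0722));
    float weight = smoothstep(0.1, 0.4, lum);

    // Only the weighted (brighter) part of the range is darkened.
    return color * mix(1.0, 0.85, weight);
}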

In the presence of light, textures typically read as more "clear": there is more detail, which translates to a higher amount of micro contrast on the surfaces. By pushing down the highlights and mid-level brightness, you are also effectively reducing contrast.

It is also important to add a ton of blue and purple.

At night, whatever lighting we get is either skylight or moonlight. Skylight has a strong blue component, and we perceive moonlight to also be blue. I say we perceive, because moonlight is not actually blue: in darker environments our cone receptors (which see color) shut down, and our rod receptors are slightly more sensitive to blue light. So in low lighting we perceive the moonlight as blue. We will still use the color blue as an indicator of night time though. After all, our mental model of the world is more important than how things physically work.

Practically, what I usually like to do is use a Curves adjustment layer in Photoshop to push down the brightness of pixels in the middle and at the top of the range, while placing control points on the lower part of the curve to make sure the darker areas don't get too dark. Then I use Selective Color to add a lot of blue to the shadows (all ambient lighting is blue here), a lot of blue to the highlights (we assume the main light source is the moon), and to reduce contrast a bit in the reds and yellows.

In 3D what you are getting at the end is a color change that kind of looks like this:

As a final step, it is time to "bake" these color transformations into a LUT, which is a rather straightforward task. First you need the identity LUT, which is the LUT with no color transformation in it.

Identity LUT. This is like the number 1: nothing happens if you apply it to an image

If you apply this LUT to your image, it is as if you multiplied a number by 1: nothing changes. You import this LUT into Photoshop and place its layer below all the adjustment layers. This way, you have baked all those changes onto the default color space. Then you simply export it in a format with no compression applied. The scene in the video, for example, was generated using the LUT below.

An example of a modified LUT. The night scene in the video was generated by this.

Use in the Shader

Now you have your LUT. The next step is to apply it.

What you need is code that takes your normal, unmodified texture color as an INPUT, then uses it as UVs to sample the LUT. Whatever comes out of sampling the LUT texture is your OUTPUT: the original texture receiving the same color adjustments you baked into your LUT.

So you take your texture, which has RGB values, and convert them to the UVs used to sample the LUT; you are going from three dimensions to two. The LUT is set up so that the Red and Blue channels together make up the horizontal U dimension, and the Green channel makes up the vertical V. You can see that in the LUT image itself: as you move vertically the amount of green changes, but as you move horizontally, depending on which slice you are in, both Red and Blue can change.

A quick side note on the settings of the textures used for sampling: for now, set the filtering to Closest (called Point in some environments) and the extension/wrap mode to Clip.

This is basically the whole shader:

It has various parts which I will go over bit by bit, but here is the overview (a GLSL-style skeleton follows the list):

  1. Conversion from whatever space the LUT is in to linear space. This can be sRGB, but it is up to you when you build your LUT
  2. The calculation of the V channel of UV using the Green channel of the original texture
  3. The Red channel contribution to the calculation of the U of UV
  4. An offset you need to add to sample the correct pixel
  5. The Blue channel contribution to the calculation of the U of UV
  6. Doing your own bilinear filtering between the two slices of blue along the U dimension
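In GLSL terms, the same pipeline looks roughly like this skeleton (a sketch with assumed placeholder names like uLUT and LUT_DIM, to be filled in step by step below):

uniform sampler2D uLUT;     // the LUT strip (e.g. 256x16 for a 16x16x16 LUT)
const float LUT_DIM = 16.0; // number of samples per channel

vec3 applyLUT(vec3 inColor)
{
    // 1. Convert between linear and the space the LUT was authored in.
    // 2. Compute the V of the UV from the Green channel.
    // 3. Compute the Red channel's contribution to the U of the UV.
    // 4. Add a half-pixel offset so we sample pixel centers.
    // 5. Add the Blue channel's slice offset to the U of the UV.
    // 6. Do our own filtering between two blue slices and mix the reads.
    return inColor; // placeholder; the filled-in version appears later
}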

Now we can go over them one by one. Step one is straightforward, so I will mostly skip it.
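For reference, a step-one conversion could look like the sketch below; whether you convert, and in which direction, depends on the space your LUT was authored in, and the gamma-2.2 approximation is an assumption rather than the article's exact transform.

// Hedged helpers for step one: moving between linear and an sRGB-like space.
// The exact transfer function depends on your pipeline.
vec3 linearToSRGB(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }
vec3 sRGBToLinear(vec3 c) { return pow(c, vec3(2.2)); }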

Calculating the V of the UV

We can start with V (the vertical component) because it is very easy to calculate. If you look at your LUT, you will see that the V component is simply the green channel. Green goes from 0 to 1 and V also goes from 0 to 1, so you can almost use it directly.

There are a few gotchas. First of all, the natural LUT typically has pure black at the top left. At the top left, although the V value is 1, the Green channel of RGB is 0. You either need to flip the LUT image vertically, or do the following calculation.

v_of_uv = 1. - INColor.G;

The second point has to do with the way sampling works. Let's import the standard LUT image as a texture in Blender and write a shader that applies the LUT to itself. We can lerp back and forth between the original and the applied LUT to spot logic errors in our code. Ideally, after the LUT is applied to itself, our natural LUT image should remain exactly the same. Here is how that looks with our naive implementation:

Notice the black bar at the top of the V dimension. This happens because this is a very low resolution texture, and for sampling you want to sample the center of a pixel. So if you are going from pixel 0 to pixel 15 (you have 16 pixels), you need to sample at 0.5, 1.5, 2.5, 3.5, etc. Since the coordinates are normalized, that is 0.5/16 to 15.5/16. Right now we are sampling from 0 to 1. Since we are using the Point, or Closest, filter for sampling, it rounds to the nearest pixel center. That means at 1 it is actually looking for a pixel centered at 16.5, which doesn't exist. So we get black.

The solution is to remap the value so that 1 is mapped to the last pixel (for 16x16, that is pixel 15), then add half a pixel so that you are always sampling pixel centers. This means:

v_of_uv = INColor.G * ((LUT_Dimension - 1) / LUT_Dimension) + 0.5 / LUT_Dimension;

Before the code above, your green channel went from 0 to 1, sampling pixels 0 to 16. After the transformation above, the 0 to 1 range has been remapped to 0 to 15/16. You then add the half-pixel offset of 0.5/16, which means you are now sampling from 0.5 to 15.5. (If you did not flip the LUT image vertically, use 1 - INColor.G in place of INColor.G here.)

That is all you need to do for calculating the V channel of the LUT. In Blender’s material system, it looks like this:

Calculating the U of UV

This is a bit more complicated. The calculation has two parts: you take the amount of Red in your color and the amount of Blue, and from the two you calculate your U. Let's start with the blues.

Calculation of the Blue contribution: A LUT is really a 3D cube; as a matter of fact, you can store your LUT as a 3D texture. But our layout is not like that. We have sliced the cube into N cuts and laid them out side by side. So the first thing we need to do is figure out which of these cuts is relevant for our current InColor. The amount of blue determines this. In other words, when you look at the natural LUT image, you will see that as you go from left to right through the different RG cuts, the amount of blue in each cut increases.

In the natural LUT, the slice all the way to the left has a blue value of 0; the one all the way to the right has a blue value of 1. So we take InColor.B and, based on which of these slices we are in, we start with an offset for the U dimension.

As an example, if our LUT were 4x4x4 and blue was between 0 and 0.25, we would want to sample the first slice. If B was 0.25 to 0.5, the second slice; 0.5 to 0.75, the third slice, and so on.

We have a problem though: our InColor.B is a float, so it can take any value, but our LUT is made of discrete steps of NxN, so we need to find out in which of these cells that blue value lands. The maths for this is super simple:

U_of_UV = Offset_from_Blue + Contribution_From_Red;

Offset_from_Blue = Floor(InColor.B * LUT_Dimension) / LUT_Dimension;

We are using a standard trick here. For the 4x4x4 case, if our number is between 0 and 0.25, multiplying InColor.B by 4 gives us a number between 0 and 1, and flooring it gives us 0. Which means everything between 0 and 0.25 gets an offset of 0; between 0.25 and 0.5, an offset of 1; between 0.5 and 0.75, 2; and between 0.75 and 1, 3. Of course we want this normalized, so it will be 0/4, 1/4, 2/4, etc.

There is still a problem with this implementation. In the case of a 4x4x4 LUT, we never want an offset of 4, because there are only 4 cuts. Remember, we are still going to add the contribution of the red channel, and when Blue is 1, we shouldn't get a 4/4 offset; that would push our U above 1 once the Red contribution is added.

What we actually want is that, when we normalize, a Blue value of 1 gives us the index 3, because that is the starting point to which we add the red offset to read the correct color (it is the left corner of the 4th square).

To do this we need a little change:

Offset_from_Blue = Floor(InColor.B * (LUT_Dimension - 1)) / LUT_Dimension;

Here is how that looks in node form:

And that is all we have to do for the Blue channel. Now for the red.

Calculation of the Red contribution: Now for the second half. You add the offset from the Blue channel to the contribution from the Red channel to get the final coordinate on the U axis.

Red goes between 0 and 1 in LUT space. However, since there are N cuts laid side by side, within one cut the red actually goes from 0 to 1/N. So it goes from 0/N to 1/N, then from 1/N to 2/N, then 2/N to 3/N, and so forth until (N-1)/N to N/N (which is 1). That means the first thing we need to do is divide the Red channel by N.

Contribution_From_Red = InColor.R / LUT_Dimension;

If we add this to the Blue offset, we should theoretically have our U coordinate. However, just like in the V coordinate calculation, this gives us a black bar on the right, for the same reason. At a Red value of 1, we would be sampling pixel 16 (for the 16x16x16 LUT), which with the Closest filter means a pixel centered at 16.5, a pixel that doesn't exist. So we need to remap this too, so that Red = 1 gives us (LUT_Dimension - 1)/LUT_Dimension. We then add half a pixel of the 16x16 square so that we sample the center of the 16th pixel, which has a coordinate of 15.5 (normalized to 0-1).

The last point here is that the U direction of the LUT texture is stretched: since the X direction is 16 times longer (or whatever LUT_Dimension is), half a pixel in U is 16 times smaller in normalized coordinates than in V. So to get the value of half a pixel we do 0.5/(LUT_Dimension * LUT_Dimension), which for a 16x16x16 LUT is 0.5/256.

Contribution_From_Red = (InColor.R / LUT_Dimension) * ((LUT_Dimension - 1) / LUT_Dimension) + 0.5 / pow(LUT_Dimension, 2);

And in nodes:

All put together:

U_of_UV = Offset_from_Blue + Contribution_From_Red;

OR

U_of_UV = Floor(InColor.B * (LUT_Dimension - 1)) / LUT_Dimension + (InColor.R / LUT_Dimension) * ((LUT_Dimension - 1) / LUT_Dimension) + 0.5 / pow(LUT_Dimension, 2);
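Collected into one place, the point-sampled version of the lookup could read like this in GLSL (a sketch with an assumed dimension parameter; the blue interpolation fix comes in the next section):

// Point-sampled LUT lookup: correct per texel, but banded, since there is
// no interpolation yet between neighbouring LUT entries.
vec3 sampleLUTPoint(sampler2D lut, vec3 inColor, float dim)
{
    // V: Green remapped so 0..1 hits pixel centers 0.5..dim-0.5.
    // (Use 1.0 - inColor.g if the LUT image was not flipped vertically.)
    float v = inColor.g * ((dim - 1.0) / dim) + 0.5 / dim;

    // U: the Blue channel chooses the slice we start from...
    float offsetFromBlue = floor(inColor.b * (dim - 1.0)) / dim;

    // ...and the Red channel positions us inside that slice. The half
    // texel along the strip is 0.5 / (dim * dim), since U is dim times
    // more compressed than V.
    float contributionFromRed = (inColor.r / dim) * ((dim - 1.0) / dim)
                              + 0.5 / (dim * dim);

    return texture(lut, vec2(offsetFromBlue + contributionFromRed, v)).rgb;
}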

Discretization Artifacts

Here is how the natural LUT applied to a scene looks with our shader code:

The image with nothing applied
The image with the LUT. Notice how there are these banding artifacts inside

The image is a bit yellower due to the fake lighting, but more importantly, notice the banding and the weird artifacts on the wall. This comes from the fact that a standard RGB color has 256 variations per channel (8 bits). By bringing it down to 16x16x16, you reduce each channel to 16 variations (4 bits). So you have 16 times fewer possible values per channel, which causes the banding.

One solution is of course to move to a 32x32x32 LUT, which will double the precision. But if you want to get back to the native, banding-free precision, you need a 256x256x256 LUT, which has as many pixels as a 4K texture. For our case, where we are using an 8K texture, a LUT like that is still better than a second 8K texture with the color variation baked in, but it still won't make for good performance.

Luckily, there is something we still haven't done here: we are not interpolating our LUT readings. While our LUT is only 16x16x16, our input still has 256 values per channel. Since the color change between each pair of LUT pixels is only linear, we can basically get the same result by using linear interpolation. This will cost us 4 texture reads compared to a single tap, but on a 16x16x16 texture, and considering the texture cache, this is very cheap.

For the non-mathsy folks, what I am talking about is similar to zooming into an image. You can increase the resolution of a 16x16 image in Photoshop, and there are different methods for deciding what color the new pixels get. If you go from 16x16 to 32x32, you have twice as many pixels, and you need to somehow decide the colors of the in-between ones. The simplest method is Closest, or Point, filtering; this is what we have been doing so far with our blue channel, as well as the red and green. In this method you just take the color of the closest pixel. But there is also bilinear filtering. In this case, you take the colors of the two closest pixels and mix them, based on how far each is from the new pixel, to get its color.

Bilinear filtering can guess the in-between color by linearly interpolating between the two closest pixels, based on the distance of the desired position to each of them

Usually when you sample a texture in a shader, the hardware takes care of the filtering for you. However, you can't use the hardware interpolation here, because it can accidentally sample the neighbouring slice, and it has no idea how to interpolate the blue channel. This is because of the unique way our LUT is laid out: we have these cuts placed next to each other, and in the case of blue, for example, the neighbouring pixel is a whole cut away. You can of course use a 3D LUT and rely on the interpolation there, but here we have our LUT laid out in 2D.

All our channels need bilinear interpolation for our LUT not to cause banding. For the red and green channels, setting the interpolation to Cubic is good enough. You get some minimal artifacts, but they are so unnoticeable that, for our use case at the moment, it is fine. The artifacts come from the borders between the different cuts in the U dimension: for InColor.R values very close to 1.0, the filter will accidentally sample the neighbouring slice.

At some point it is worth solving this properly, either by looking at how to remap the UVs so that the hardware interpolation works, or by doing the filtering ourselves by hand. For now, the most important thing is to set up the blue channel interpolation so that we can get rid of the banding there.

Before we did this to calculate the U for reading the LUT:

U_of_UV = Offset_from_Blue + Contribution_From_Red;

Offset_from_Blue = Floor(InColor.B * (LUT_Dimension - 1)) / LUT_Dimension;

What we want to do now is to also read the value of the blue in the slice above the one we are reading, and interpolate between the two. Imagine you have a 4x4x4 LUT; its four slices sample blue at 0, 1/3, 2/3 and 1. Say you have a blue value of 0.125. In our current system, we read the LUT value from the slice where blue is 0 and assign that as the final color. This is not good, because a whole range of blue values reads the same blue from the LUT. What we want instead is to read the blue at 0, then the blue at 1/3, and then lerp between the two by 37.5 percent, since 0.125 * 3 = 0.375. So the calculation is this.

Read the Blue Value at the Lower end:

Offset_from_Blue_Lower = Floor(InColor.B * (LUT_Dimension - 1)) / LUT_Dimension;

Read the Blue Value at the Higher End:

Offset_from_Blue_Higher = Ceil(InColor.B * (LUT_Dimension - 1)) / LUT_Dimension;

Then mix them together based on how far your InColor.B * (LUT_Dimension - 1) is from the lower end:

Final_Color = Lerp(LUT_Read_At_Lower, LUT_Read_At_Higher, Frac(InColor.B * (LUT_Dimension - 1)));

As you can see in the shader code, the main change is that we calculate the blue contribution in two parts, one for the lower band and one for the upper band.

We then do two different texture reads and interpolate based on Frac(InColor.B * (LUT_Dimension - 1)).
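Putting it all together, the skeleton from earlier can now be filled in. This is a hedged GLSL sketch of the final lookup (uLUT and LUT_DIM are still my own placeholder names; the article's actual implementation lives in the Blender/Unreal node graphs and the GitHub repo):

// Assumed: the 256x16 LUT strip. Per the article, its filtering is set to
// Cubic/Linear so red and green are smoothed by the hardware, while the
// blue channel is interpolated manually below.
uniform sampler2D uLUT;
const float LUT_DIM = 16.0; // assumed: a 16x16x16 LUT

vec3 applyLUT(vec3 inColor)
{
    float dim = LUT_DIM;

    // V from the Green channel, remapped to sample pixel centers.
    // (Use 1.0 - inColor.g if the LUT image was not flipped vertically.)
    float v = inColor.g * ((dim - 1.0) / dim) + 0.5 / dim;

    // Red contribution to U: the position within a single slice,
    // with a half-texel offset of 0.5 / (dim * dim) along the strip.
    float red = (inColor.r / dim) * ((dim - 1.0) / dim) + 0.5 / (dim * dim);

    // Blue picks the slice: read the slice below and above our value...
    float b       = inColor.b * (dim - 1.0);
    float uLower  = floor(b) / dim + red;
    float uHigher = ceil(b) / dim + red;

    vec3 readLower  = texture(uLUT, vec2(uLower, v)).rgb;
    vec3 readHigher = texture(uLUT, vec2(uHigher, v)).rgb;

    // ...and lerp between the two reads to remove the banding in blue.
    return mix(readLower, readHigher, fract(b));
}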

And that is the whole shader covered.

Improvements

This is almost the perfect LUT shader. The last thing left, which I won't be doing, is a proper interpolation for the red and green channels, instead of relying on hardware interpolation. As stated before, the problem area is the border between the different slices.

The problematic area

For my case this is good enough. The artifact only pops up for Red values between 15/16 and 16/16, so it is visually very minor.

One thing to keep in mind is that in whatever environment you use this code, you should make sure to set up your texture settings. In Unreal, for example, you need to take care of things like compression, mipmaps, color space, etc. Since LUTs are special textures, you can't leave them with the default settings. In Unreal you can use the compression setting meant for UI, for example, which applies the correct compression. Here are the settings we used for our Unreal texture.

What About the Lights?

You might have noticed that there are a bunch of lights turning on in the night version. This is made using a black and white mask which applies a different color transformation to the original texture. This transformation uses a workflow similar to generating the night feel; that is, it is based on our mental model of how we think light affects objects: a bit brighter on average, more yellow, more micro contrast, more highlights.

You can generate the black and white map itself by baking the scene lights to a texture, or you can paint it by hand. You can create the color transformation for how the light affects the scene colors either in Photoshop, baking it to a LUT, or directly in the shader if it is not expensive. In the end I simply did it in the shader, which you can see in the GitHub file.
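As a sketch of how such a mask could combine the two looks in a shader (the names and the lit-color grade below are hypothetical; the actual version is in the GitHub file):

uniform sampler2D uLightMask; // assumed: baked or hand-painted light mask

// Hypothetical "lit" grade: a bit brighter, warmer, slightly more contrast.
vec3 litGrade(vec3 c)
{
    c = (c - 0.5) * 1.1 + 0.5;        // a touch more contrast
    return c * vec3(1.25, 1.15, 0.9); // brighter and warmer
}

vec3 shadeNight(vec3 baseColor, vec2 uv)
{
    vec3 night = applyLUT(baseColor);       // the night LUT from above
    float lit  = texture(uLightMask, uv).r; // 1 where a light is on
    return mix(night, litGrade(baseColor), lit);
}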

Update 29.11.2023: You can also look-dev your scene in Blender. In the article above, I do the actual color transformation and look dev in Photoshop; Photoshop has tools like Selective Color which allow a robust workflow, but Blender's RGB Curves also enable a lot of color correction. If you have done your look dev in a shader in Blender, you can bake it into a LUT too, as long as the color transformation you are applying does not use any geometry-specific information. To do this:

  1. First import the natural LUT as an image (check out the Images as Planes add-on if you haven't)
  2. In the LUT plane's shader, copy-paste your shader color transformation and apply it to the LUT texture before feeding it into the material output
  3. Duplicate the LUT texture node and make it a single-user copy. Alternatively, you can just create a new image with the same dimensions.
  4. While this new texture is selected, bake the output to it in Cycles.
  5. Save the texture. What you saved is the LUT including your shader color transformation.

Two important points: when baking, make sure to turn off the noise threshold and set the sample count to 1, and also turn off the denoiser. On top of that, in the Color Management tab, make sure that the View Transform is set to Standard. Last point: the original LUT texture should be sampled using Closest here. By default it is set to Linear, which might not correctly bake your LUT.

As usual, thanks for reading, you can follow me on any of my socials, which are all linked on my website: https://ircss.github.io/

Further Reading

  1. I found this article very helpful while writing my own shader: https://defold.com/tutorials/grading/
