Padding Transparent Textures for MIPs and Game Engines
For rendering and game dev, transparent textures usually have incorrect color values in the areas where alpha is zero. This can cause various problems with mipmapping, decal projections, or anywhere that blending takes the RGB value of a semi-transparent pixel into account.
I wrote a small Python program to pad textures outward from opaque pixels to take care of these rendering issues. You can find both the .exe and the code here: https://github.com/IRCSS/Transparent-Padder?tab=readme-ov-file
Note: This is obviously targeted at the case where you don't have the correct RGB values on the transparent borders, which is independent of premultiplied vs. straight alpha workflows. More on that at the end of the article.
In the world of game dev, transparency doesn’t usually mean what it means to everyone else. The average user of GIFs and memes is only concerned with having certain areas of an image be see-through. A game engine, however, usually does a lot more with images than simply blending them over a background.
Some examples include techniques like mipmapping and anti-aliasing, which require sampling neighboring pixels. Technical artists commonly treat the alpha channel as just another place to save information independent of RGB. There’s a lot happening behind the scenes too, such as compressing image data per channel, applying fancy blurs, or adding visual effects like tears — things that go beyond simple blending of transparent images.
For the needs of game devs, parts of this ecosystem are lacking. Saving out an actual alpha channel from Photoshop is very annoying. Transparent images you find online often have dirty pixels in the transparent areas, or a premultiplied background color on semi-transparent border pixels, causing visual seams in the engine.
Here’s a concrete example of how this can lead to problems. I have a texture atlas of butterflies that’s being sampled in a shader. At first glance, it seems like a fine texture with no problems.
However, once you zoom out a bit, you'll start noticing a weird edge where the opaque pixels meet the transparent ones.
What’s happening is that the shader is sampling pixels from areas that are technically transparent. In Unreal, if we turn off the alpha channel and look at just the RGB channel, we’ll find that the texture actually looks like this:
That explains the white border. But why is it only happening as you zoom out? As you zoom out, the engine uses a more blurred and downsampled version of your texture to avoid aliasing. This blurring and sampling reduces the maximum frequency of detail in the sampled texture, helping match it to the resolution and theoretical detail limits of what your screen can display at that distance. You probably know this process by the term Mips.
Could we change a setting to ensure that our mips never sample transparent pixels? Yes, by more aggressively considering an area transparent as we go down the mip chain. But that has an unintended effect: the more you zoom out, the more your texture “shrinks” as transparent areas eat into the opaque pixels. You might recognize this phenomenon from games where tree leaves shrink as you move away from them.
Alternatively, if you set the mip generation to be more conservative about what’s considered opaque, the shape of your object will maintain its integrity as you zoom out — but you’ll notice the weird border even more.
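To make that trade-off concrete, here's a tiny worked example, assuming an alpha-tested material with a configurable cutoff (the alpha of 0.25 comes from the downsampled texel in the sketch above):

```python
# The downsampled border texel from the earlier sketch had alpha 0.25.
# An alpha-tested material keeps a texel only if its alpha clears a cutoff.
mip_alpha = 0.25

for cutoff in (0.5, 0.1):
    state = "kept" if mip_alpha >= cutoff else "discarded"
    print(f"cutoff {cutoff}: border texel {state}")
# cutoff 0.5: border texel discarded -> edges vanish, the shape shrinks
# cutoff 0.1: border texel kept      -> the shape keeps its size, but the
#             white-shifted RGB from the earlier sketch becomes visible
```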
The topic of how to correctly deal with alpha during mip generation and image compression is a whole beast of its own. But the core issue is that pixels neighboring transparent areas often have nothing to do with the opaque pixels — and that’s a problem we can solve.
Average Pad Blend
Let’s run the image through the little tool I wrote. Our transparent texture now looks like this:
In summary, the application bleeds the opaque pixels outward, ensuring there's local coherence around the borders of the butterflies. In other words, the colors around them actually relate to the neighboring opaque pixels.
Here's how that looks:
In more depth, it's doing three different things at the same time:
1. Weed out bad pixels on the border
As mentioned before, a lot of transparent images you find online have dirty borders. They might contain dark pixels premultiplied in semi-transparent areas. You can clean these up by comparing semi-transparent pixels with their neighboring opaque pixels, and filtering based on a threshold. On top of that, you can apply a very slight blur to the edges, overwriting the premultiplied RGB values with more relevant colors — and as a bonus, improving anti-aliasing.
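Here's a sketch of that cleanup idea (hypothetical `clean_border` name and threshold; the slight edge blur is omitted, and the published tool's code is the real reference):

```python
import numpy as np

def clean_border(rgba, threshold=0.35):
    """If a semi-transparent pixel's color is far from the average of
    its opaque 8-neighbors, overwrite it with that average.
    rgba: float array (H, W, 4), alpha in [0, 1]."""
    rgb, alpha = rgba[..., :3], rgba[..., 3]
    out = rgba.copy()
    h, w = alpha.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not (0.0 < alpha[y, x] < 1.0):
                continue  # only touch semi-transparent border pixels
            patch_a = alpha[y-1:y+2, x-1:x+2]
            patch_rgb = rgb[y-1:y+2, x-1:x+2]
            opaque = patch_a >= 1.0
            if not opaque.any():
                continue
            neighbor_avg = patch_rgb[opaque].mean(axis=0)
            # Dirty pixel: RGB deviates too much from its opaque neighbors.
            if np.abs(rgb[y, x] - neighbor_avg).max() > threshold:
                out[y, x, :3] = neighbor_avg
    return out
```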
2. Average pixels on the transparent border
A common way to deal with problems like this is to define a kernel with a certain radius and average the RGB values around the border among all opaque pixels within that radius.
Why average instead of simply taking the nearest pixel? Because sometimes borders contain high-frequency details, noise, or bad pixels. To avoid any single pixel causing weird colors on the edge, we average.
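A minimal sketch of that kernel pass, written as a brute-force loop for clarity (hypothetical `pad_with_kernel` name; the shipped tool may vectorize this differently):

```python
import numpy as np

def pad_with_kernel(rgba, radius=4):
    """Fill each fully transparent pixel with the average RGB of the
    non-transparent pixels inside a (2*radius+1)^2 window around it."""
    rgb, alpha = rgba[..., :3], rgba[..., 3]
    out = rgba.copy()
    h, w = alpha.shape
    for y in range(h):
        for x in range(w):
            if alpha[y, x] > 0.0:
                continue  # only fill fully transparent pixels
            y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
            visible = alpha[y0:y1, x0:x1] > 0.0
            if visible.any():
                # Averaging beats "nearest pixel": no single noisy
                # border texel can dominate the padded color.
                out[y, x, :3] = rgb[y0:y1, x0:x1][visible].mean(axis=0)
    return out
```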
3. Flood fill and extrude out
Averaging with a kernel only adds a border as wide as the kernel. Depending on the texture size and the typical mip level it’s viewed from, that may be enough. But there’s no reason to stop there.
Ideally, the entire image should avoid garbage RGB values in the transparent pixels. You could increase the kernel size, but that would tank performance and reduce relevance: a larger kernel averages too many unrelated pixels, making the result less accurate and introducing color shifts between the averaged transparent pixels and the nearby opaque ones. Instead, the tool flood fills: it repeatedly extrudes the already-padded border outward, so every transparent pixel eventually inherits colors from its nearest filled neighbors.
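Here's a sketch of that flood fill, assuming a simple per-pass dilation where each unfilled pixel averages its already-filled cardinal neighbors (hypothetical `flood_pad` name, not the tool's literal implementation):

```python
import numpy as np

def flood_pad(rgba):
    """Iterative dilation: each pass, unfilled pixels that touch a
    filled pixel take the average color of those filled neighbors,
    until every pixel carries a meaningful RGB value."""
    rgb = rgba[..., :3].astype(np.float32)
    filled = rgba[..., 3] > 0.0
    if not filled.any():
        return rgb  # nothing to bleed from
    while not filled.all():
        acc = np.zeros_like(rgb)
        cnt = np.zeros(filled.shape, dtype=np.float32)
        # Gather filled neighbors from the four cardinal directions.
        # (np.roll wraps at the borders, which suits tiling textures.)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            shifted = np.roll(filled, (dy, dx), axis=(0, 1))
            acc += np.roll(rgb, (dy, dx), axis=(0, 1)) * shifted[..., None]
            cnt += shifted
        grow = ~filled & (cnt > 0)
        rgb[grow] = acc[grow] / cnt[grow][:, None]
        filled |= grow
    return rgb  # recombine with the original alpha before saving
```

This keeps the kernel small while still giving every transparent pixel a locally relevant color, no matter how far it sits from the opaque shapes.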
That's the gist of what the application does. You load a PNG with transparency; it pads the RGB values and writes out a TGA with a proper alpha channel!
Before we close, here’s a bonus side use for the application:
On Premultiplied Alpha
If your transparent objects have been correctly authored as premultiplied alpha and your rendering ecosystem correctly handles them (blending, mip generation, and every shader that deals with them), then you don't have this problem. But keep in mind that premultiplication doesn't really solve the problem I mentioned; it just means someone already premultiplied the transparent images with the correct RGB values as they were authoring the asset. That means they had the correct values to begin with, which you don't when you just download a transparent image! The question of premultiplied vs. straight alpha is a question of *when* part of the blending equation happens.
In my experience, both of the above are big ifs. Most transparent images you find online have dirty alphas, and rendering pipelines and formats are all over the place regarding what they expect from you. Of course, if you are a mid-size studio authoring all your assets yourself and you control both the rendering and the asset creation, then premultiplied is probably the way to go (except for some edge cases where an asset gets a special shader that needs to tap into relevant info in neighbouring pixels at all times). But as an indie, I use assets from all over the place and shader snippets that others have written, and they don't have that consistency. So at least in our use case, I prefer authoring straight alpha assets.
Even in a premultiplied-only pipeline, the tool I presented is useful. If you get a dirty alpha asset, you can clean and pad it with this tool, then convert it to premultiplied alpha, now with correct, relevant colors on the borders.
Here's a brief look at how shader code needs to account for whether an asset is premultiplied. The most common case of dealing with alpha is blending:
DestinationColor.rgb = (SourceColor.rgb * SourceColor.a) + (DestinationColor.rgb * (1 - SourceColor.a));
The above is how you deal with straight alpha. If the alpha is premultiplied, you can use the source term as-is, since SourceColor.rgb has already been multiplied by SourceColor.a during asset authoring:
DestinationColor.rgb = (SourceColor.rgb) + (DestinationColor.rgb * (1 - SourceColor.a));
Another example where you need to keep straight vs. premultiplied alpha in mind is any processing that uses a kernel (blurring, up- and downsampling, mip generation, tearing, advection, etc.). With straight alpha, you can simply add the contributions of all the pixels in the kernel and divide by their count. But with premultiplied alpha, you need to adjust the total weight you divide by, so that transparent pixels don't contribute to the final value that comes out of the function.
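A sketch of that weighting difference (hypothetical `kernel_average` helper, illustrative rather than library code):

```python
import numpy as np

def kernel_average(texels, premultiplied):
    """texels: (N, 4) float RGBA samples under one kernel footprint.
    Returns the filtered straight-alpha RGB plus the averaged alpha."""
    rgb, alpha = texels[:, :3], texels[:, 3]
    avg_alpha = alpha.mean()
    if not premultiplied:
        # Straight alpha: dividing by the texel count is fine, *if*
        # the transparent texels carry relevant RGB (hence the padding).
        return rgb.mean(axis=0), avg_alpha
    # Premultiplied: alpha is already baked into RGB, so divide by the
    # summed alpha instead of the count. Texels with alpha 0 then
    # contribute nothing, no matter what garbage their RGB holds.
    weight = max(alpha.sum(), 1e-6)
    return rgb.sum(axis=0) / weight, avg_alpha
```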
Anyway, that whole topic is a different blog post, so let's move on.
Padding UV Islands for Mips
The mip issue I described isn’t unique to transparency. Averaging neighboring pixels can cause problems in any texture if the surrounding pixels have unrelated colors.
For regular textures, people usually add padding between UV islands to avoid seams in higher mip levels. 3D software packages typically fill this space using algorithms similar to the one used here — some better than others.
As of this writing, Blender offers two methods of padding:
- One finds adjacent faces and floods from those
- The other extends the border pixel values outward as constants
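These correspond to Blender's bake margin settings. A hedged bpy snippet, assuming the property names of recent Blender 3.x versions (double-check against your version's API docs):

```python
import bpy  # Blender's Python API

bake = bpy.context.scene.render.bake
bake.margin = 16                     # padding width in pixels
bake.margin_type = 'ADJACENT_FACES'  # or 'EXTEND' for constant extension
```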
Simplygon, on the other hand, fills the empty areas across the whole texture based on the closest UV neighbor.
In any case, you might end up with a texture that has no padding. You could rebake it in your 3D software, or use this tool. I added the option to pass in a regular texture without transparency, like this:
And a checkbox to provide a separate UV map like this:
Then, running it through the tool, you’ll get this:
Hope this is of some use. Your game engine of choice might already include some or all of this functionality as part of its import process, so make sure to check that.
As usual, thanks for reading. You can follow me on various socials under: https://ircss.github.io/