When working with textures that have alpha, there are a few caveats you should be aware of. Let's look at some of them. I'm using this synthesized texture, made in Photoshop, as an example. It's simply a gray circle with a dark red outline (the colors don't really matter), but most importantly there is colored noise added to the almost fully transparent pixels around the red rim.
When you import this texture into Unity, what you will see is some white and colored fringing. Note that the image below is rendered with bilinear filtering to reveal the problem fully transparent pixels cause during filtering.
On the left you see the texture rendered against a black background, and on the right you see just the color channel without alpha. The white zero-alpha pixels come from Photoshop: when the file is saved, all fully transparent pixels get a white color (that's simply how Photoshop works).
Notice how the upper side of the transparent rendering (left) has white/gray fringing. This is because bilinear filtering starts to blend into white as it blends toward zero alpha.
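To see why, consider what non-premultiplied bilinear filtering actually does at such an edge. The pixel values below are hypothetical, but the effect is the same one visible in the screenshot:

```python
# Sketch of non-premultiplied bilinear filtering bleeding white into a
# visible edge. The pixel values are made up for illustration.

def lerp(a, b, t):
    """Linearly interpolate two RGBA tuples channel by channel."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# An opaque dark-red rim pixel next to a fully transparent pixel
# that Photoshop has saved with a white color.
rim         = (0.5, 0.0, 0.0, 1.0)
transparent = (1.0, 1.0, 1.0, 0.0)

# Sampling halfway between them blends every channel independently,
# so the invisible white leaks into the visible result:
sample = lerp(rim, transparent, 0.5)
print(sample)  # (0.75, 0.5, 0.5, 0.5) - a half-transparent pinkish gray
```

The filtered sample is half transparent but no longer dark red, which is exactly the fringe you see along the rim.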
The compressed version doesn't differ much from the uncompressed RGBA32 version. The compressor gets a bit confused by some of the colored noise around the edges, but not too much.
In Unity, you can fix the white fringe by enabling the "Alpha is Transparency" setting. This removes most of the white fringing, but obviously not the colored noise, because that's part of the picture I made. However, what you now see in the color channel on the right is quite radical. It looks like a C64 loading screen or something. Why? Because of how Unity calculates color for the zero-alpha pixels: it simply extends the color from the nearest non-zero-alpha pixels into the zero-alpha ones. Because I added colored noise to the rim of the image, Unity is fooled into using those noise colors instead of the dark red rim color.
The purpose of this is to continue the color of the image into the zero-alpha pixels to help with filtering, mipmap generation and compression. But it doesn't always work as expected, mainly because the almost-zero-alpha pixels can contain weird colors that aren't obvious from the image. If a pixel has 1% alpha in Photoshop, you simply can't tell what color it is, which is why it's easy to end up with unwanted color information in the image.
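Unity's exact implementation isn't public, but the general technique (often called alpha dilation) can be sketched like this. The function name and the brute-force nearest-pixel search are purely illustrative:

```python
# Minimal sketch of nearest-neighbor color extension (alpha dilation):
# every zero-alpha pixel gets the RGB of the nearest visible pixel.
# This is an illustration of the general technique, not Unity's code.

def dilate_colors(pixels, width, height):
    """pixels is a flat row-major list of (r, g, b, a) tuples."""
    visible = [(x, y) for y in range(height) for x in range(width)
               if pixels[y * width + x][3] > 0]
    out = list(pixels)
    for y in range(height):
        for x in range(width):
            r, g, b, a = pixels[y * width + x]
            if a == 0 and visible:
                # Brute-force nearest visible pixel; real importers use
                # faster flood-fill style passes instead.
                nx, ny = min(visible,
                             key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
                nr, ng, nb, _ = pixels[ny * width + nx]
                # Copy the color but keep the pixel fully transparent.
                out[y * width + x] = (nr, ng, nb, 0.0)
    return out
```

Note that each transparent pixel receives exactly one discrete color from one source pixel. That is why noisy rim colors end up smeared into large flat regions, as in the screenshot above.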
While the RGBA32 version looks better than the original, the compressed version now looks much worse than before. The reason is that the color channels now contain more of the noise color, and it gets more weight in the compression. Most texture compression algorithms are block based, and they compress the RGB channels and the alpha separately. To the color compression it does not matter whether a pixel is fully transparent or not; the colors get compressed without regard to alpha. This means that if only a few pixels in a block are visible, they still have to share the block's bandwidth with all the transparent pixels.
In DTU, we have a method to fix the colors of near-zero-alpha pixels (TextureUtils.FixAlphaPixels). Instead of working only with zero-alpha pixels and extending color from the nearest non-zero-alpha pixels, there is an alpha threshold, and all pixels below that threshold are processed. You can set the algorithm to work on everything below 10% alpha, for example. In this example I've used a threshold as high as 50% to really get rid of some of the colored noise. (The noise in the example image was exaggerated in the first place to make the issue more obvious.)
The DTU algorithm finds the closest pixel considered visible (above the alpha threshold), and then averages all the colors found at that range around the pixel being fixed. This gives a nice smooth blend between different pixels instead of continuing one discrete color like Unity's default implementation does.
When this image is compressed, the quality does not suffer that much, as the colored noise is reduced considerably and is not spread into the zero-alpha pixels.
The difference between Unity's built-in alpha fix and the one in DTU is quite noticeable.
So is DTU's method superior to the built-in one? Yes and no. While the quality is much better, it is also quite slow due to the nearest-pixel search (optimizations are WIP). It's also destructive, meaning that to keep the fixed pixels you need to store the fixed texture in the asset database. In contrast, Unity's approach is a texture importer setting, and the fix is done at import time, leaving the original image intact.
But the DTU method is useful when processing textures before storing them. It can be used to fix the top/left-lit images used for normal map generation in case those images contain alpha, or to process an atlas before it is saved to the asset database.
Documentation and discussion for Draconus Texture Utils available in the Unity Asset Store.