This is going to be an odd comment, but I immediately recognised the parrot in the test images. It's the scarlet macaw photo from 2004 that appears in many Wikipedia articles about colour graphics.
I think this is the original, photographed and contributed by Adrian Pingstone: https://commons.wikimedia.org/wiki/File:Parrot.red.macaw.1.a...
But this particular derivative is the one that appears most often in the Wikipedia articles: https://commons.wikimedia.org/wiki/File:RGB_24bits_palette_s...
This parrot has appeared in several other articles around the web. For example, here's one from a decade or so ago: https://retroshowcase.gr/index.php?p=palette
Parrots are often used in articles and research papers about computer graphics and I think I know almost all the parrots that have ever appeared in computing literature. This particular one must be the oldest computing literature parrot I know!
By the way, I've always been fascinated by dithering, ever since I first noticed it in newspapers as a child. Here was a clever human invention that could produce rich images with so little: something I saw every day and could instinctively understand as an optical illusion of smooth gradients, long before I knew what it was called.
This also used to be a really common test image: https://en.wikipedia.org/wiki/Lenna
But it's apparently a cropped centerfold from Playboy.
The original Lenna is controversial, but I'm delighted to share the "ethically sourced Lenna": https://mortenhannemose.github.io/lena/
What's the impetus behind replacing the image with something even sexier?
This feels better than the original anyway. I never liked the yellow color that one had. Maybe it was an artistic choice, but to me it just looked degraded, like when white plastic is left exposed to the sun.
Oh my goodness that is delightful
It was shot by an actual Hooker, too.
This was recently shared on HN: https://visualrambling.space/dithering-part-1/
For anyone interested in seeing how dithering can be pushed to the limits, play 'Return of the Obra Dinn'. Dithering will always remind you of this game after that.
- https://visualrambling.space/dithering-part-1
- https://store.steampowered.com/app/653530/Return_of_the_Obra...
A related bit of tech trivia is that digital audio also often involves dithering, and not just decimated or compressed audio. Even very high-quality, studio-mastered audio benefits from an audio-specific kind of dithering called noise shaping. Depending on the content, studio mixing engineers may choose different noise-shaping algorithms.
https://en.wikipedia.org/wiki/Noise_shaping
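To make that concrete, here is a minimal sketch in Python/NumPy (not any particular mastering tool's algorithm; the function and variable names are just illustrative) of TPDF dither combined with simple first-order error-feedback noise shaping while truncating float samples to 16-bit PCM:

    import numpy as np

    def dither_to_16bit(x, rng=np.random.default_rng(0)):
        """x: float samples in [-1.0, 1.0]; returns int16 PCM."""
        scale = 2 ** 15 - 1              # 16-bit full scale
        out = np.empty(len(x), dtype=np.int16)
        prev_err = 0.0                   # quantization error fed back into the next sample
        for i, sample in enumerate(x):
            target = sample * scale - prev_err                        # first-order error feedback
            tpdf = rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)    # triangular (TPDF) dither
            q = int(np.clip(np.round(target + tpdf), -scale - 1, scale))
            prev_err = q - target        # error (incl. dither) gets pushed toward higher frequencies
            out[i] = q
        return out

    # Example: a very quiet sine that would otherwise quantize into audible
    # harmonic distortion at 16 bits.
    t = np.arange(48000) / 48000.0
    pcm = dither_to_16bit(1e-4 * np.sin(2 * np.pi * 100 * t))

Real mastering tools use higher-order, psychoacoustically weighted noise-shaping filters, which is where the choice between different algorithms mentioned above comes in.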
Even though it looks less accurate, I prefer the look of the Ordered Bayer image. It looks artistically lo-fi, while the others look more like a highly compressed image to me. Considering we can simply represent images in full colour today, the only reason I'd dither is for the aesthetic.
Dithering is super useful in dark scenes in games and movies.
Adding random noise to the screen makes bands of color with harsh transitions imperceptible, and the dithering itself isn't perceptible either.
I'm sure there are better approaches nowadays but in some of my game projects I've used the screen space dither approach used in Portal 2 that was detailed in this talk: https://media.steampowered.com/apps/valve/2015/Alex_Vlachos_...
It's only a 3-line function, but the jump in visual quality in dark scenes was dramatic. It always makes me sad when I see streamed content or games with bad banding, because the fix is so simple and cheap!
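For illustration, here's a rough CPU-side sketch of the same general idea in Python/NumPy (the actual thing in the talk is a few lines of shader code; this is not Valve's exact noise function): add noise of roughly one 8-bit step per pixel to the float image before quantizing.

    import numpy as np

    def quantize_with_dither(img, rng=np.random.default_rng(1)):
        """img: HxWx3 float array in [0, 1]; returns uint8 with banding hidden."""
        noise = (rng.random(img.shape) - 0.5) / 255.0          # +/- half an 8-bit step
        return (np.clip(img + noise, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)

    # A dark, slow gradient: quantizing without the noise shows obvious bands.
    dark = np.linspace(0.0, 0.05, 1920)[None, :, None].repeat(1080, 0).repeat(3, 2)
    banded = (dark * 255.0 + 0.5).astype(np.uint8)     # visible steps
    smooth = quantize_with_dither(dark)                # steps broken up into fine noise

In a real renderer you'd do this in the final post-process shader, and usually animate the noise pattern over time so it averages out.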
One thing that's important to note is that dithering on/off comparisons are a bit tricky to make: resizing a screenshot of a dithered scene stops the dithering from working unless one pixel in the image ends up corresponding exactly to one pixel on your screen.
Also by the author: https://www.makingsoftware.com/
Recent discussions:
Making Software - https://news.ycombinator.com/item?id=43678144
How does a screen work? - https://news.ycombinator.com/item?id=44550572
What is a color space? - https://news.ycombinator.com/item?id=45013154
I love the authors style!
Dithering is still very common in rendering pipelines. 8 bits per channel is not enough to capture subtle gradients, and you'll get tons of banding, particularly in mostly monochrome gradients produced by light sources. So you render everything to a floating-point buffer and apply dithering when writing out the final 8-bit image.
Unlike the examples in this post, this dithering is basically invisible at high resolutions, but it’s still very much in use.
The figures in this article are really great. How were they made? If I were to try to recreate them, I might render things individually and then lay them out in Illustrator to get that 3D isometric look, but I assume there's a better way.
You still see dithering from time to time as a cheap form of transparency; it's been a few years since Mario Odyssey, but that's the last time I recall it really standing out: https://xcancel.com/chriswade__/status/924071608976924673
What an insanely beautiful website. Reminds me of the golden days of the internet, remastered tastefully.
What's with the dithering trend? Why do I keep hearing about it everywhere at least once a week? Where did this originate from?
This is the best explanation I’ve come across. I enjoy dithering as a playful way to compress file size when it makes sense.
> Before we all mute the word 'dithering'
Is this a reply to something?
Yes, it's referencing a tweet which briefly made the rounds a few weeks ago:
https://x.com/TukiFromKL/status/1981024017390731293
Many people believed that the author was claiming to have invented a particular illustration style which involved dithering.
Slightly frustrating that the author started out with color images and then switched to grayscale.
We really don't anymore.
Back in the late 90s, maybe. GIFs and other paletted image formats were popular.
I even experimented with them. I designed various formats for The Palace. The most popular was 20-bit (6,6,6,2 RGBA; there was also 5,5,5,5, but the lack of color was intense: 15 bits versus 18 is quite a difference). This allowed fairly high color with anti-aliasing, i.e. edges that were semi-transparent.
We've had a couple of other recent discussions on dithering: https://news.ycombinator.com/item?id=45750954 and https://news.ycombinator.com/item?id=45698323. I commented specifically about the history of blue-noise dithering at https://news.ycombinator.com/item?id=45728231.
The article points out that, historically, RAM limitations were a major incentive for dithering on computer hardware. (It's the reason Heckbert discussed in his dissertation, too.) Palettizing your framebuffer is clearly one solution to this problem, but I wonder if chroma subsampling hardware might have been a better idea?
The ZX Spectrum did something vaguely like this: the screen was 256×192 pixels, and you could set the pixels independently to foreground and background colors, but the colors were provided by "attribute bytes" which each provided the color pairs for an 8×8 region http://www.breakintoprogram.co.uk/hardware/computers/zx-spec.... This gave you a pretty decent simulation of a 16-color gaming experience while using only 1.125 bits per pixel instead of the 4 you would need on an EGA. So you got a near-EGA-color experience on half the RAM budget of a CGA, and you could move things around the screen much faster than on even the CGA. (The EGA, however, had a customizable palette, so the ZX Spectrum game colors tend to be a lot more garish. The EGA also had 4.6× as many pixels.)
Occasionally in ZX Spectrum game videos like https://www.youtube.com/watch?v=Nx_RJLpWu98 you will see color-bleeding artifacts where two sprites overlap or a sprite crosses a boundary between two background colors. For applications like CAD the problem would have been significantly worse, and for reproducing photos it would have been awful.
The Nintendo did something similar, but I think it had four colors per tile instead of two.
So, suppose it was 01987 and your hardware budget permitted 8 bits per pixel. The common approach at the time was to set a palette and dither to it. But suppose that, instead, you statically allocated five of those bits to brightness (a Y channel providing 32 levels of grayscale before dithering) and the other three to a 4:2:0 subsampled chroma (https://www.rtings.com/tv/learn/chroma-subsampling has nice illustrations). Each 2×2 4-pixel block on the display would have one sample of chroma, which could be a 12-bit sample: 6 bits of U and 6 bits of V. Moreover, you can interpolate the U and V values from one 2×2 block to the next. As long as you're careful to avoid drawing text on backgrounds that differ only in chroma (as in the examples in that web page) you'd get full resolution for antialiased text and near-photo-quality images.
That wouldn't liberate you completely from the need for dithering, but I think you could have produced much higher quality images that way than we in fact did with MCGA and VGA GIFs.
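As a sketch of that bit budget (a hypothetical format, nothing that actually shipped; Python/NumPy, with an illustrative BT.601-style RGB/YUV conversion): 5 bits of Y per pixel plus one 6-bit U and one 6-bit V sample per 2×2 block comes out to exactly 8 bits per pixel.

    import numpy as np

    def encode(rgb):                              # rgb: HxWx3 floats in [0,1], H and W even
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        u = (rgb[..., 2] - y) * 0.565 + 0.5       # B-Y, shifted into [0,1]
        v = (rgb[..., 0] - y) * 0.713 + 0.5       # R-Y, shifted into [0,1]
        y5 = np.clip(np.round(y * 31), 0, 31).astype(np.uint8)        # 5 bits per pixel
        h, w = y.shape
        # One chroma sample per 2x2 block: average, then quantize to 6 bits each.
        u6 = np.clip(np.round(u.reshape(h//2, 2, w//2, 2).mean((1, 3)) * 63), 0, 63).astype(np.uint8)
        v6 = np.clip(np.round(v.reshape(h//2, 2, w//2, 2).mean((1, 3)) * 63), 0, 63).astype(np.uint8)
        return y5, u6, v6                         # 4*5 + 6 + 6 = 32 bits per 2x2 block

    def decode(y5, u6, v6):
        y = y5 / 31.0
        u = (u6 / 63.0).repeat(2, 0).repeat(2, 1) - 0.5   # nearest-block upsample;
        v = (v6 / 63.0).repeat(2, 0).repeat(2, 1) - 0.5   # interpolating would look better
        r = y + v / 0.713
        b = y + u / 0.565
        g = (y - 0.299 * r - 0.114 * b) / 0.587
        return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

You'd still want to dither the 5-bit Y channel, but the chroma error is much harder to see than the error from squeezing everything through a 256-entry palette.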
They answered the question in the first two sentences: We don't need it, it's just an aesthetic nowadays.
Dithering isn't only applied to 2D graphics; it can be applied to any type of spatial or temporal data to reduce the noise floor, or to tune aliasing and quantization distortion noise into other parts of the frequency spectrum. It's also common in audio.
Dithering can be for aesthetic reasons, especially the old-school kind that is very pronounced. However, dithering is actually still useful in all sorts of signal processing, particularly when there are perceptible artifacts of quantization. This occurs all the time: you can trivially observe it by making gradients between close-looking colors, something you can see on the web right now. There are many techniques to avoid banding like this, but dithering lets you hide banding without needing increased bit depth or strategically chosen stop colors, by trading off spatial resolution for (perceived) color resolution, which works excellently for gradients because they're all low frequency.
And frankly, it turns out 256 colors is quite a lot of colors especially for a small image, so with a very good quantization algorithm and a very good dithering algorithm, you can seriously crunch a lot of things down to PNG8 with no obvious loss in quality. I have done this at many of my employers, armed with other tricks, to dramatically reduce page load sizes.
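A minimal sketch of that crunch using Pillow (the file names are hypothetical; the Dither enum needs a reasonably recent Pillow, and a dedicated tool such as pngquant usually does an even better job of both palette selection and dithering):

    from PIL import Image

    src = Image.open("hero_banner.png").convert("RGB")     # hypothetical input
    png8 = src.quantize(
        colors=256,                                # pick a 256-color palette
        dither=Image.Dither.FLOYDSTEINBERG,        # error-diffusion dither into that palette
    )
    png8.save("hero_banner_8bit.png", optimize=True)       # palettized PNG8 output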
It's not just aesthetic; I keep seeing games with color banding because they don't bother to dither before quantizing.
From the article:
> We don't really need dithering anymore because we have high bit-depth colors so its largely just a retro aesthetic now.
By the way, dithering in video creates additional problems because you want some kind of stability between successive frames.
The article is simply wrong: dithering is still widely used, and no, we do not have enough color depth to avoid it. Go render a blue sky gradient without dithering and you will see obvious bands.
Yep, even high-quality 24-bit uncompressed imagery often benefits from dithering, especially if it's synthetically generated. Even natural imagery that has been processed or manipulated, even mildly, will probably benefit from dithering. And if it's a digital photograph, it was probably already dithered during the de-Bayering process.
You can do it with a static dither pattern (I've done it, and it works well). It's a bit of a trade-off between banding and noise, but at least static content stays static and is thus easily compressible.
Yeah, the article is wrong about that.
It would be nice if you had some examples.
A very simple black-to-white gradient can be at most 256 pixels wide before it starts banding on the majority of computers, which use SDR displays. HDR only gives you a couple of extra bits, and each bit doubles how wide the gradient can be before it runs out of unique color values. If the two color endpoints of the gradient are closer together, you get banding sooner. Dithering effectively eliminates gradient banding.
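A small sketch of that arithmetic (Python/NumPy, with ordered Bayer dithering as just one example technique): a 1024-pixel black-to-white gradient has only 256 grey levels to spend, so each level repeats for 4 pixels and forms a band; adding a sub-level ordered offset before rounding breaks the bands up.

    import numpy as np

    BAYER_4x4 = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) / 16.0 - 0.5    # offsets within one grey level

    width, height = 1024, 64
    gradient = np.tile(np.linspace(0.0, 1.0, width), (height, 1))   # ideal float gradient

    banded   = np.round(gradient * 255).astype(np.uint8)            # 4-pixel-wide bands
    offsets  = np.tile(BAYER_4x4, (height // 4, width // 4))        # tile the 4x4 pattern
    dithered = np.round(gradient * 255 + offsets).astype(np.uint8)  # bands broken up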
Acerola recently made a video about how Silksong has banding in dark colors due to poor dithering (and how to fix it): https://www.youtube.com/watch?v=au9pce-xg5s
Highly recommended for any graphics programmer who might think dithering is unnecessary or simply an "aesthetic choice".
To lend more credibility, the devs added more dithering in the next patch.
A great many can be found here: https://en.wikipedia.org/wiki/Dither
(also a very nice explanation of why dithering is a fundamental signal processing step applicable to many fields, not just an "aesthetic".)
The average desktop computer is running with 8 bit color depth the vast majority of the time, so find or generate basically any wide basic gradient and you'll see it.
I think you mean 24 bit. 8 bit would only be 256 colors total.
8 bits for each of R, G, and B. So a grey-scale gradient indeed has only 256 colors available. Any gradient will also have about that many at most.
In most gradients, the transitions in R, G, and B are at different places.
True. Also, in most gradients, the full range of R, G, and B is not used.
In rgb(50, 60, 70) to rgb(150, 130, 120), there are only 200 total transitions.
In terms of color spaces, sRGB (the typical baseline default RGB of desktop computing) is quite naive and inefficient. Pretty much its only upsides are conceptual and mathematical simplicity. There are much more efficient color spaces that use dynamic non-linear curves and are based on how the rods and cones in human eyes sense color.
The current hotness for wide color gamuts and High Dynamic Range is ICtCp (https://en.wikipedia.org/wiki/ICtCp), which is conceptually similar to the LMS color space (https://en.wikipedia.org/wiki/LMS_color_space).
True!
See e.g. https://xcancel.com/theo/status/1978161273214058786?s=46
Playdead Games did a really nice presentation about dithering for games, it gets passed around and I'm sure it's been on HN already: https://loopit.dk/banding_in_games.pdf