
error-diffusion algorithm

Error diffusion is used to map a picture to a display area with the same width and height. The error between the input intensity value and the displayed intensity at a given point is dispersed, or diffused, to the pixel positions to the right of and below the current pixel position. Starting with a matrix M of intensity values obtained by scanning a picture, we want to construct an array I of pixel intensity values for an area of the screen. This is done by first scanning across the rows of the matrix, from left to right and top to bottom, and determining the nearest available pixel-intensity level for each element of the matrix. Then the error between the value stored in the matrix and the displayed intensity level at each pixel position is distributed to neighboring elements in the matrix.
Once the elements of matrix I have been assigned intensity-level values, we then map the matrix to some area of a display device. We cannot disperse the error past the last matrix column (j = n) or below the last matrix row (i = m). The parameters used to distribute the error should satisfy:
α + β + γ + δ ≤ 1

The values α = 7/16, β = 3/16, γ = 5/16, and δ = 1/16 (the Floyd-Steinberg weights) produce fairly good results.
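For reference, here are those weights written out as C++ constants (the names are just placeholders):

// Floyd-Steinberg diffusion weights; they sum to exactly 1
const float ALPHA = 7.0f / 16.0f;  // pixel to the right:     (i, j+1)
const float BETA  = 3.0f / 16.0f;  // pixel below and left:   (i+1, j-1)
const float GAMMA = 5.0f / 16.0f;  // pixel directly below:   (i+1, j)
const float DELTA = 1.0f / 16.0f;  // pixel below and right:  (i+1, j+1)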

Hope this helps a little... I'll write you some small example source later today, when I'm off work. Ciao

------------------
Dance with me......

I have some questions:

So, if I wanted to dither a 16-bit 640x480 screen,
array 'I' would probably be an array of 3-byte RGB values like:

BYTE intensity_values[640][480][3];

according to what I understand from your post.

So, how do you determine 'the nearest available pixel-intensity level'?
How do you compute the error from these two values?

Thanks for any additional info.

-Tank2k

[This message has been edited by Tank2k (edited September 10, 1999).]

Okay..... I will try to show you how a mathematician would do it.

I don't know how to write subscripts here, so I'll put the indices in parentheses instead.

I(i,j) = I_k                       (I_k is the nearest available intensity level to M(i,j))
Error = M(i,j) - I(i,j)
M(i,j+1)   = M(i,j+1)   + α*Error
M(i+1,j-1) = M(i+1,j-1) + β*Error
M(i+1,j)   = M(i+1,j)   + γ*Error
M(i+1,j+1) = M(i+1,j+1) + δ*Error

Do this for the whole image, using (for example) the weight values I gave before.
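Here is a quick C++ sketch of that loop for a single 8-bit grayscale channel, quantized in place to a given number of levels; the function name, float buffer, and rounding are just assumptions for illustration:

// Error-diffuse an 8-bit grayscale image in place down to 'levels' intensity levels,
// scanning left to right, top to bottom, and spreading the error with the weights above.
void DiffuseErrors(float* M, int width, int height, int levels)
{
    const float step = 255.0f / (levels - 1);

    for (int i = 0; i < height; ++i)
    {
        for (int j = 0; j < width; ++j)
        {
            float old_value = M[i * width + j];

            // nearest available intensity level, clamped to the legal range
            int level = (int)(old_value / step + 0.5f);
            if (level < 0)          level = 0;
            if (level > levels - 1) level = levels - 1;
            float new_value = level * step;

            M[i * width + j] = new_value;          // I(i,j)
            float error = old_value - new_value;   // M(i,j) - I(i,j)

            // never disperse past the last column or below the last row
            if (j + 1 < width)                    M[i * width + j + 1]       += 7.0f / 16.0f * error;
            if (i + 1 < height && j > 0)          M[(i + 1) * width + j - 1] += 3.0f / 16.0f * error;
            if (i + 1 < height)                   M[(i + 1) * width + j]     += 5.0f / 16.0f * error;
            if (i + 1 < height && j + 1 < width)  M[(i + 1) * width + j + 1] += 1.0f / 16.0f * error;
        }
    }
}

For an RGB image you would just run the same per-channel update on each color component.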

More questions?

------------------
Dance with me......

I think I understand now how the error is distributed from your
last post, but can you do a simple step through of dithering 1 pixel
to its neighbors?

Let's say I want to dither a 24-bit image to a 16-bit image.
The first 24-bit pixel has RGB values (0,0,255). So the error
value for blue, for example, would be:

error = 31 - 255 = -224

I would then distribute this blue error value to its 4 neighboring pixels.
Let's say all the neighbor blue values have the same value 255. I have the
array Image defined as Image[640][480] with red,green,blue members.

Image[x+1][y  ].blue += error * 7/16 = 255 + -98 = 157
Image[x-1][y+1].blue += error * 3/16 = 255 + -42 = 213
Image[x  ][y+1].blue += error * 5/16 = 255 + -70 = 185
Image[x+1][y+1].blue += error * 1/16 = 255 + -14 = 241

So when I advance to the next pixel to dither, the blue value will
already be 157 from my last calculation.

I tried several attempts to dither a blue to black gradient, however I
must be doing something wrong as I can clearly see distinct bands.

-Tank2k

[This message has been edited by Tank2k (edited September 12, 1999).]

Here's the basic pipeline for each pixel:

a) retrieve the pixel color

b) find the closest mapping in the destination color space (16-bit, 8-bit paletted, etc.)

c) add the error components carried over from previous pixels into each appropriate color component.

d) take the difference, in the source color space, between the original color value (plus the error added in step c) and the remapped color value.

e) convert the error into the destination color space.

f) distribute the error to adjacent pixels in any manner you prefer.

g) repeat for the next pixel.

So for example, let's do a 24-bit -> 16-bit conversion.

source: 24-bit, 8 bits per channel color space

destination: 16-bit, 5 bits per channel color space

(step A)

source pixel: 0, 0, 255 (RGB format)

(step B)

remapped into the destination we get:

destination pixel: 0, 0, 31 (RGB format)

(step C)
Since this was the first pixel, there are no error values from previous pixels to add in.

(step D)
To take the difference between the original color value and the remapped color value in the source color space, we have to convert the remapped color value back into the source color space. Let's do this for each component:

remapped pixel color: 0, 0, 31

delta_R = original_R - remapped_R*convert;
delta_G = original_G - remapped_G*convert;
delta_B = original_B - remapped_B*convert;

convert = a scaling factor which remaps the destination color space back into the source color space; in this case it is 255/31 = 8.2258

delta_R = 0 - 0*8.2258;
delta_G = 0 - 0*8.2258;
delta_B = 255 - 31*8.2258;

delta_R = 0;
delta_G = 0;
delta_B = 0.0002;

So we have a tiny blue error component, mostly due to the limited precision of our conversion factor.

(step E)
remapped_error = error/convert;

Convert all of the color components this way.

(step F)
Let's do a simple error diffusion where all the error just goes to the next pixel. So add the errors into the next pixel's error buffer. Since we don't need to keep track of any pixel other than the next one, we can use a preallocated, reusable fixed buffer; there is no need to allocate one for each pixel.

(step G)
Ya, now you do the next pixel, and so on.
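If it helps, here is a rough C++ sketch of this pipeline for the 24-bit -> 16-bit (5 bits per channel) case, pushing all of the error into the next pixel as in step F. The function name, buffer layout, and 5-5-5 packing are just assumptions for illustration:

// Convert a 24-bit RGB image (8 bits per channel) to 16-bit 5-5-5,
// pushing all of the quantization error into the next pixel.
// 'src' holds width*height*3 bytes, 'dst' receives width*height 16-bit words.
void ConvertWithDiffusion(const unsigned char* src, unsigned short* dst,
                          int width, int height)
{
    const float convert = 255.0f / 31.0f;   // destination -> source scale
    float err[3] = { 0.0f, 0.0f, 0.0f };    // reusable error buffer (step F)

    for (int p = 0; p < width * height; ++p)
    {
        int out[3];
        for (int c = 0; c < 3; ++c)
        {
            // steps A and C: original value plus error carried from the previous pixel
            float value = src[p * 3 + c] + err[c] * convert;

            // step B: closest 5-bit value, clamped to 0..31
            int remapped = (int)(value / convert + 0.5f);
            if (remapped < 0)  remapped = 0;
            if (remapped > 31) remapped = 31;
            out[c] = remapped;

            // steps D and E: error in source space, converted to destination space
            err[c] = (value - remapped * convert) / convert;
        }

        // pack 5-5-5 RGB into the destination word
        dst[p] = (unsigned short)((out[0] << 10) | (out[1] << 5) | out[2]);
    }
}

Here the carried error is kept in destination (5-bit) units, matching step E; you could just as well keep it in source units and skip the extra conversion.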

Hope this helps, Good luck!

-ddn

Just a couple remarks:

It doesn't seem like step (c) is needed since step (f) already distributes
the error into the neighboring pixels.

I don't clearly understand step (f) since the difference seems to be
always zero since the 16-bit color value times 8.2258 will always
equal the original 24-bit color value if we didn't lose any precision
when dividing in the first place.

I haven't got my code working yet, but I feel I am very close. I think
I may be using 16-bit values instead of 24-bit values in places where I
shouldn't be using them.

Anyway, does anyone know of any Windows graphics programs that let you apply
different kinds of dithering algorithms to a color image? I would like
to compare results to see if I am doing this right.

-Tank2k

[This message has been edited by Tank2k (edited September 14, 1999).]

Yep, PaintShop Pro

------------------
Drago

Perhaps someone can help me out. I am looking for any information
on how to implement a very simple error-diffusion dithering algorithm
in C/C++. If you have a graphics reference book lying about, please
let me know if it has anything of interest about this subject.

-Tank2k

Ya sorry, it was late at night, and there were a couple of important omissions. To your question about whether there will be any error: remember, your destination pixel format is 5 bits, not a float, so the fractional components are lost. Another thing: you will have to clip the result after adding the error, so it doesn't exceed the bounds of your bit depth. So clamp each component to between 0 and 31. Good luck!
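Something like this hypothetical helper, applied to each component after the carried error has been added and before the quantized value is written out:

// clamp a 5-bit color component to the legal range after error has been added
inline int Clamp5Bit(int value)
{
    if (value < 0)  return 0;
    if (value > 31) return 31;
    return value;
}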

-ddn
