
OpenGL 4 depth of field

5 comments, last by JoeJ 1 year, 8 months ago

I am having some difficulty implementing a good depth of field algorithm.

So far I'm making the pixels blurrier based on distance – the further from the camera, the blurrier. It's pretty naive.

What I've done, though, is pass a focal length into the shader. What I'm trying to do is make the blur depend on the focal length and the pixel's depth value. Any good ideas on how to implement this? My math is not the best.
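For reference, a common way to tie blur amount to focal distance and depth is the thin-lens circle-of-confusion formula. A minimal sketch follows; note that aperture and focal_len are hypothetical parameters not present in the shader below, and focus_dist plays the role that the model_distance uniform plays there:

```glsl
// Hypothetical helper: thin-lens circle of confusion (CoC).
// aperture  = lens aperture diameter (assumed parameter)
// focal_len = lens focal length (assumed parameter)
// focus_dist = distance to the plane in perfect focus
// dist      = eye-space distance to the fragment
float circle_of_confusion(float dist, float focus_dist,
                          float aperture, float focal_len)
{
    // CoC = A * f * |d - s| / (d * (s - f)), the thin-lens approximation.
    // It is zero at the focus distance and grows on either side of it.
    return aperture * focal_len * abs(dist - focus_dist)
         / (dist * (focus_dist - focal_len));
}
```

The CoC (scaled into pixels) can then drive the blur radius directly, rather than a hand-tuned falloff.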

The shader in question is:

#version 430

uniform sampler2D depth_tex; // texture uniform
uniform sampler2D colour_tex; // texture uniform

uniform int img_width;
uniform int img_height;
uniform float model_distance;
uniform float near;
uniform float far;




vec2 img_size = vec2(img_width, img_height);

in vec2 ftexcoord;

layout(location = 0) out vec4 frag_colour;

// Convert a depth-buffer sample to an eye-space distance (inverse of to_depth)
float to_distance(float depth_colour)
{
    float dist = (2.0*near*far) / (far + near - depth_colour*(far - near));
    return dist;
}

// Convert an eye-space distance back to a depth value (inverse of to_distance)
float to_depth(float dist)
{
    float depth = (far*(dist - 2.0*near) + near*dist)/(dist*(far - near));
    return depth;
}

// https://www.shadertoy.com/view/Xltfzj
void main()
{
    const float pi_times_2 = 6.28318530718; // Pi*2
    
    float directions = 16.0; // BLUR directions (Default 16.0 - More is better but slower)
    float quality = 10.0; // BLUR quality (Default 4.0 - More is better but slower)
    float size = 8.0; // BLUR size (radius)
   
    vec2 radius = vec2(size/img_size.x, size/img_size.y);
    
    vec4 blurred_colour = texture(colour_tex, ftexcoord);
    
    for (float d = 0.0; d < pi_times_2; d += pi_times_2 / directions)
        for (float i = 1.0 / quality; i <= 1.0; i += 1.0 / quality)
            blurred_colour += texture(colour_tex, ftexcoord + vec2(cos(d), sin(d)) * radius * i);
    
    // Output to screen
    blurred_colour /= quality * directions - 15.0;

	vec4 unblurred_colour = texture(colour_tex, ftexcoord);

	float depth_colour = texture(depth_tex, ftexcoord).r;
    depth_colour = pow(depth_colour, 100.0);
 
    frag_colour.rgb = vec3(mix(unblurred_colour, blurred_colour, depth_colour));
    frag_colour.a = 1.0;
}
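One thing worth double-checking in the listing above: the formula in to_distance() is the linearisation that expects NDC depth in [-1, 1], while a sample from a depth texture is in [0, 1] under the default glDepthRange. A possible remap, assuming the default depth range:

```glsl
// Possible refinement (assumes the default glDepthRange of [0, 1]):
// remap the depth-texture sample to NDC before linearising, so the
// recovered distance matches the projection used to write the depth.
float to_distance_from_texture(float depth_sample)
{
    float ndc_z = depth_sample * 2.0 - 1.0; // [0, 1] -> [-1, 1]
    return (2.0 * near * far) / (far + near - ndc_z * (far - near));
}
```

Without the remap, the recovered distances are compressed toward the near plane, which shifts where the image appears in focus.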

The whole code can be found at:

https://github.com/sjhalayka/obj_ogl4


I don't know the math regarding near field / focus / far field. (But by typing these words I actually remember something related to the bleeding problem: many games addressed it by using separate blurred buffers for the near and far stuff.)

And I never thought about DOF in detail at all, but one thing you do seems clearly wrong to me:

Your blur is of constant radius.
Then you mix the sharp image with the blurred depending on distance.
(The result is more like bloom than DOF.)

A better approach would be:
Make the blur radius depend on distance.
Display just that - no mixing with the sharp image. In places where it's meant to be sharp, the blur radius is zero.
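A minimal sketch of that suggestion, assuming focus_dist is the distance that should stay sharp (the existing model_distance uniform could serve here) and max_size is an assumed cap on the blur radius in pixels:

```glsl
// Sketch: per-fragment blur radius that is zero at the focus distance
// and grows with |dist - focus_dist|. focus_dist and max_size are
// assumed parameters, not part of the original shader.
float radius_from_distance(float dist, float focus_dist, float max_size)
{
    float t = clamp(abs(dist - focus_dist) / focus_dist, 0.0, 1.0);
    return max_size * t; // 0 in focus, up to max_size pixels out of focus
}
```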

Thanks again for all of your help. I will work on the code for the radius based on distance.

For what it's worth, I have it working now, with a constant blur radius.

#version 430

uniform sampler2D depth_tex; // texture uniform
uniform sampler2D colour_tex; // texture uniform

uniform int img_width;
uniform int img_height;
uniform float model_distance;
uniform float near;
uniform float far;




vec2 img_size = vec2(img_width, img_height);

in vec2 ftexcoord;

layout(location = 0) out vec4 frag_colour;

float to_distance(float depth_colour)
{
    float dist = (2.0*near*far) / (far + near - depth_colour*(far - near));	
    return dist;
}

float to_depth(float dist)
{
    float depth = (far*(dist - 2.0*near) + near*dist)/(dist*(far - near));
    return depth;
}

// https://www.shadertoy.com/view/Xltfzj
void main()
{
    const float pi_times_2 = 6.28318530718; // Pi*2
    
    float directions = 16.0; // BLUR directions (Default 16.0 - More is better but slower)
    float quality = 10.0; // BLUR quality (Default 4.0 - More is better but slower)
    float size = 8.0; // BLUR size (radius)
   
    vec2 radius = vec2(size/img_size.x, size/img_size.y);
    
    vec4 blurred_colour = texture(colour_tex, ftexcoord);
    
    for (float d = 0.0; d < pi_times_2; d += pi_times_2 / directions)
        for (float i = 1.0 / quality; i <= 1.0; i += 1.0 / quality)
            blurred_colour += texture(colour_tex, ftexcoord + vec2(cos(d), sin(d)) * radius * i);
    
    // Output to screen
    blurred_colour /= quality * directions - 15.0;

	vec4 unblurred_colour = texture(colour_tex, ftexcoord);

	float depth_colour = texture(depth_tex, ftexcoord).r;
 
    float distance_to_pixel = to_distance(depth_colour);

    float x = clamp(abs(distance_to_pixel - model_distance) / model_distance, 0.0, 1.0);

    x = max(1.0 - x, 0.0);

    x = 1.0 - pow(x, 1.0/10.0);



    frag_colour.rgb = vec3(mix(unblurred_colour, blurred_colour, x));
    frag_colour.a = 1.0;
}
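The focus factor computed in main() above can be pulled into a helper for reuse in later variants. This is the same math, just factored out, with a note on what the 1.0/10.0 exponent does:

```glsl
// Same focus factor as in the shader above, factored into a function.
// Returns 0 at the focal plane and approaches 1 away from it; the 0.1
// exponent sharpens the transition, so even small offsets from the
// focal plane push the result close to fully blurred.
float focus_factor(float dist, float focus_dist)
{
    float x = clamp(abs(dist - focus_dist) / focus_dist, 0.0, 1.0);
    x = max(1.0 - x, 0.0);     // 1 at the focal plane, 0 far away
    return 1.0 - pow(x, 0.1);  // invert and sharpen
}
```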


Using a variable radius produces basically the same result. Good intuition on your part, though!

#version 430

uniform sampler2D depth_tex; // texture uniform
uniform sampler2D colour_tex; // texture uniform

uniform int img_width;
uniform int img_height;
uniform float model_distance;
uniform float near;
uniform float far;




vec2 img_size = vec2(img_width, img_height);

in vec2 ftexcoord;

layout(location = 0) out vec4 frag_colour;

float to_distance(float depth_colour)
{
    float dist = (2.0*near*far) / (far + near - depth_colour*(far - near));	
    return dist;
}

float to_depth(float dist)
{
    float depth = (far*(dist - 2.0*near) + near*dist)/(dist*(far - near));
    return depth;
}

// https://www.shadertoy.com/view/Xltfzj
void main()
{
    const float pi_times_2 = 6.28318530718; // Pi*2
    
    float directions = 16.0; // BLUR directions (Default 16.0 - More is better but slower)
    float quality = 10.0; // BLUR quality (Default 4.0 - More is better but slower)
    float size = 8.0; // BLUR size (radius)

	vec4 unblurred_colour = texture(colour_tex, ftexcoord);
	float depth_colour = texture(depth_tex, ftexcoord).r;
 
    float distance_to_pixel = to_distance(depth_colour);
    float x = clamp(abs(distance_to_pixel - model_distance) / model_distance, 0.0, 1.0);
    x = max(1.0 - x, 0.0);
    x = 1.0 - pow(x, 1.0/10.0);

    vec2 radius = vec2(size/img_size.x, size/img_size.y);
    
    vec4 blurred_colour = texture(colour_tex, ftexcoord);
    
    for (float d = 0.0; d < pi_times_2; d += pi_times_2 / directions)
        for (float i = 1.0 / quality; i <= 1.0; i += 1.0 / quality)
            blurred_colour += texture(colour_tex, ftexcoord + vec2(cos(d), sin(d)) * radius * x * i);
    
    // Output to screen
    blurred_colour /= quality * directions - 15.0;

    frag_colour.rgb = blurred_colour.rgb; // variable radius - no mix with the sharp image needed
    frag_colour.a = 1.0;
}


Oops, I had commented out the -15.0 term, which made things darker than they should have been.

Yeah, JoeJ was right all along!

This is better:

Now you need to fix the bleeding. The sharp voxels should not affect the distant background.

I do not understand the optical process related to pinhole vs. aperture at all, so it's difficult to propose a math model.

But I see these rules:

It's always fine to add a sample which is more distant than the current pixel, so its weight could be 1.
It's bad to add a sample which is closer than the current pixel, so its weight should be 0.

That's probably naive. Maybe the rules reverse in the near field; some research to understand the optics is needed.
But I would try this, see what artifacts it causes, and then try to make a smooth function to calculate a weight from the current and sampled distances.
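A rough sketch of those rules applied inside the sampling loop from the shaders above (it reuses pi_times_2, directions, quality, radius, ftexcoord, depth_tex, colour_tex and to_distance from those listings). smoothstep serves as the smooth weight function; transition is an assumed tuning constant, and the fallback covers the case where every tap is rejected:

```glsl
// Sketch: depth-aware blur weights to reduce bleeding. Each tap is
// weighted 1 if it is farther than the centre fragment, fading to 0
// as it gets closer, per the rules above.
vec4 blurred = vec4(0.0);
float total_weight = 0.0;
float centre_dist = to_distance(texture(depth_tex, ftexcoord).r);
float transition = 0.1 * centre_dist; // assumed transition width

for (float d = 0.0; d < pi_times_2; d += pi_times_2 / directions)
    for (float i = 1.0 / quality; i <= 1.0; i += 1.0 / quality)
    {
        vec2 uv = ftexcoord + vec2(cos(d), sin(d)) * radius * i;
        float sample_dist = to_distance(texture(depth_tex, uv).r);

        // 1 when the sample is at or beyond the centre distance,
        // 0 when it is closer by more than 'transition'.
        float w = smoothstep(centre_dist - transition, centre_dist,
                             sample_dist);

        blurred += texture(colour_tex, uv) * w;
        total_weight += w;
    }

// Normalise by the accumulated weight; fall back to the sharp sample
// if every tap was rejected.
blurred = (total_weight > 0.0) ? blurred / total_weight
                               : texture(colour_tex, ftexcoord);
```

Normalising by the summed weights (instead of a fixed divisor) also removes the need for the -15.0 fudge term.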

This topic is closed to new replies.
