
Texture Mapping Issue when Rendering Volumetric Clouds

Started by isu diss September 15, 2024 09:30 AM
6 comments, last by isu diss 2 days, 14 hours ago

Hi, I'm trying to generate clouds based on The Real-Time Volumetric Cloudscapes of Horizon Zero Dawn, but I'm stuck on how to correctly sample the Shape (3D texture, 128×128×128), Detail (3D texture, 32×32×32) and Weathermap (2D texture, 1024×1024) textures. Here's my implementation; could anyone give me hints on how to resolve it?

float3 GetUVWMap1(in float3 worldPos)
{
    float3 uvw;
    float radius = length(worldPos) - InnerRadius; // height above the inner (planet) radius

    float theta = atan2(worldPos.y, worldPos.x); // azimuthal angle
    float phi = acos(worldPos.z / radius); // polar angle

    // Normalize spherical coordinates to [0, 1] range for UVW mapping
    float u = theta / (2.0f * PI);
    float v = phi / (PI * 0.5f);
    float w = radius / (inMinMaxCloudLayer.y - inMinMaxCloudLayer.x);
    
    return float3(u, v, w);
}

float2 GetUVMap2(in float3 worldPos)
{
    // Direction from a point roughly one planet radius below the camera, mapped onto a disc:
    // the angle from 'up' becomes the disc radius, the azimuth the direction around the disc
    float3 pos = normalize(worldPos - float3(g_sceneCB.CameraPos.x, -(g_sceneCB.CameraPos.y + InnerRadius), g_sceneCB.CameraPos.z));
    float r = atan2(sqrt(pos.x * pos.x + pos.z * pos.z), pos.y) / PI;
    float phi = atan2(pos.z, pos.x);
    float2 coord = float2(r * cos(phi) + 0.5f, r * sin(phi) + 0.5f);
    return coord;
}

float3 SampleWeather(in float3 inPos)
{
    float3 wind_direction = float3(1, 0, 0);
    float cloud_speed = 500000;
    float cloud_offset = 700000;

    // Skew the sample position along the wind with height, then scroll the whole map over time
    float height_fraction = GetHeightFractionForPoint(inPos);
    inPos += height_fraction * wind_direction * cloud_offset;
    inPos += wind_direction * cloud_speed * g_sceneCB.mTime;

    return txWeatherMap.SampleLevel(ssClouds, GetUVMap2(inPos), 0).rgb * 2;
}

float SampleCloudDensity(float3 inP, float3 inWeatherData)
{
    float3 coord = GetUVWMap1(inP);

    // Base cloud shape from the low-frequency Perlin-Worley noises
    float4 low_frequency_noises = txShape.SampleLevel(ssClouds, coord, 0).rgba;
    float low_freq_fbm = (low_frequency_noises.g * 0.625f) + (low_frequency_noises.b * 0.25f) + (low_frequency_noises.a * 0.125f);
    float base_cloud = Remap(low_frequency_noises.r, -(1.0f - low_freq_fbm), 1.0f, 0.0f, 1.0f);

    // Shape the density over height and apply coverage from the weather map
    float density_height_gradient = GetDensityHeightGradientForPoint(inP, inWeatherData);
    base_cloud *= density_height_gradient;
    float cloud_coverage = inWeatherData.r;
    float base_cloud_with_coverage = Remap(base_cloud, cloud_coverage, 1.0f, 0.0f, 1.0f);
    base_cloud_with_coverage *= cloud_coverage;

    // Erode the edges with high-frequency detail noise, blending the noise over height
    float3 high_frequency_noises = txDetail.SampleLevel(ssClouds, coord * 0.1f, 0).rgb;
    float high_freq_fbm = (high_frequency_noises.r * 0.625f) + (high_frequency_noises.g * 0.25f) + (high_frequency_noises.b * 0.125f);
    float height_fraction = GetHeightFractionForPoint(inP);
    float high_freq_noise_modifier = lerp(high_freq_fbm, (1.0f - high_freq_fbm), saturate(height_fraction * 10.0f));
    float final_cloud = Remap(base_cloud_with_coverage, high_freq_noise_modifier * 0.2f, 1.0f, 0.0f, 1.0f);
    return saturate(final_cloud);
}
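
(Remap, GetHeightFractionForPoint and GetDensityHeightGradientForPoint aren't shown above. For reference, here are minimal sketches of the usual versions from the presentation, assuming a spherical cloud shell centred at the origin; my actual helpers may differ slightly.)

float Remap(float value, float old_min, float old_max, float new_min, float new_max)
{
    // Standard linear remap from the cloudscapes presentation
    return new_min + ((value - old_min) / (old_max - old_min)) * (new_max - new_min);
}

float GetHeightFractionForPoint(in float3 inPos)
{
    // 0 at the bottom of the cloud layer, 1 at the top (spherical shell around the origin)
    float r = length(inPos);
    return saturate((r - inMinMaxCloudLayer.x) / (inMinMaxCloudLayer.y - inMinMaxCloudLayer.x));
}

float GetDensityHeightGradientForPoint(in float3 inPos, in float3 inWeatherData)
{
    // Simplified gradient: fade density in near the layer bottom and out near the top.
    // A full implementation would blend stratus/cumulus gradients using the weather map's cloud-type channel.
    float h = GetHeightFractionForPoint(inPos);
    return saturate(Remap(h, 0.0f, 0.1f, 0.0f, 1.0f)) * saturate(Remap(h, 0.7f, 1.0f, 1.0f, 0.0f));
}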
No matter what I do, this distortion happens.

After some tweaks, I managed to achieve this. I'm so happy with the result, but texture distortions are still there. Can anyone pinpoint how to resolve them?


isu diss said:
but texture distortions are still there

Maybe it's intended, because the texture is meant to be projected onto a sky dome mesh?

@joej, you always comment on my posts, thank you very much. I'm at a dead end with this behavior; could you please give me some hints? I'm making a DXR app and use the sky and clouds as environment maps. Here's a video clip of the app running:

https://www.youtube.com/watch?v=iKSYpS4_byA&ab_channel=isudiss

isu diss said:
could you please give me some hints?

Not really, but it looks as if you had put the texture on a half-sphere mesh, like so:

If you then positioned it further down so the circular base aligns with the horizon, it might look right.

If we look at a player walking on Earth, we would eventually position the skydome like that:

You get what I mean.
But I don't know of any existing conventions for how to do this exactly.
I assume your GetUV*() functions are related to how the texture is meant to be projected onto such a skydome mesh, or onto some shape closer to a flat disc. (The flatter disc might compensate for the stretch caused by the perspective projection.)
If you got these from code examples or papers, you would need to track back to the source and hope they provide further explanation.
Or you tweak them so they fit your own projection method.
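
For example, a flat-disc style projection could look something like this (an untested sketch; the function name and parameters are made up by me):

float2 GetFlatDiscUV(float3 viewDir, float cloudPlaneHeight, float tileScale)
{
    // Intersect the view ray with a horizontal plane cloudPlaneHeight above the camera
    // and use the hit point's XZ as texture coordinates. This flattens the dome into a
    // disc and reduces the stretching near the horizon.
    float t = cloudPlaneHeight / max(viewDir.y, 0.0001f);
    float2 hitXZ = viewDir.xz * t;
    return hitXZ * tileScale + 0.5f;
}

Only directions above the horizon (viewDir.y > 0) give meaningful coordinates here, which ties into the point about the center region below.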

I also assume that only a center region of your current texture should be used. At the boundary of this region the stretch is most noticeable, and the pixels beyond the boundary are actually wrong and should neither be shown nor calculated.

So I guess your cloud rendering itself is right, but you're now left with a projection problem.

Maybe dropping the FOV of the camera a bit might help.


I used the following for my raymarching loop. I can't seem to find the fault in the logic. @joej @magnus wootton, can you spot something?


static float InnerRadius = 6320000;
static float OutterRadius = 6420000;
static const float2 inMinMaxCloudLayer = float2(InnerRadius+1500, InnerRadius+5000);

void GenerateCameraRay(uint2 index, out float3 origin, out float3 direction)
{
    // Pixel centre -> NDC in [-1, 1] (render target assumed to be 1024x768)
    float2 xy = index + 0.5f;
    float2 screenPos = (xy / float2(1024.0f, 768.0f)) * 2.0 - 1.0;

    screenPos.y = -screenPos.y; // flip Y so +Y is up in NDC

    // Unproject to world space and build the primary ray
    float4 world = mul(float4(screenPos, 0, 1), g_sceneCB.mInvWVP);

    world.xyz /= world.w;
    origin = g_sceneCB.CameraPos.xyz;
    direction = normalize(world.xyz - origin);
}


[numthreads(16, 16, 1)]
void ComputeClouds(uint3 DTID : SV_DispatchThreadID)
{
    float3 Eye;
    float3 Direction;
    GenerateCameraRay(DTID.xy, Eye, Direction);

    Eye.y += -InnerRadius + 100;
    float3 SunDir = normalize(g_sceneCB.LightDirection).xyz;
    SunDir.z *= -1;
    float cosTheta = dot(SunDir, Direction);

    float ViewRay_A = RaySphereIntersection(Eye, Direction, float3(0, 0, 0), inMinMaxCloudLayer.x);
    float ViewRay_B = RaySphereIntersection(Eye, Direction, float3(0, 0, 0), inMinMaxCloudLayer.y);
    float3 Pos_A = Eye + ViewRay_A * Direction;
    // Take more samples when the ray crosses the layer at a grazing angle (longer in-layer distance)
    int NumSamples = lerp(64, 250, saturate(abs(1 - ((ViewRay_B - ViewRay_A) / (inMinMaxCloudLayer.y - inMinMaxCloudLayer.x)))));
    float SampleLength = (ViewRay_B - ViewRay_A) / NumSamples;
    float3 tmpPos = Pos_A + .5f * SampleLength * Direction; // start half a step into the layer
...
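
(RaySphereIntersection isn't included above; a minimal sketch of what it is assumed to do, returning the nearest positive hit distance along the ray, or 0 on a miss:)

float RaySphereIntersection(in float3 rayOrigin, in float3 rayDir, in float3 sphereCenter, in float sphereRadius)
{
    // Solve |rayOrigin + t * rayDir - sphereCenter|^2 = sphereRadius^2 for t (rayDir assumed normalized)
    float3 oc = rayOrigin - sphereCenter;
    float b = dot(oc, rayDir);
    float c = dot(oc, oc) - sphereRadius * sphereRadius;
    float disc = b * b - c;
    if (disc < 0.0f)
        return 0.0f; // ray misses the sphere
    float s = sqrt(disc);
    float tNear = -b - s;
    float tFar = -b + s;
    return (tNear > 0.0f) ? tNear : max(tFar, 0.0f); // use the far hit when the origin is inside the sphere
}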