
"in game" lightmap creation

Started by Ed Welch
25 comments, last by Programmer71 1 year, 6 months ago

Does anyone know of a game engine where the lightmaps are created automatically in the game? (i.e., during scene loading, not in the editor)


I guess such a feature does not exist in off-the-shelf engines.
The problem is, you need to spend a lot of effort on global parametrization to support lightmapping.
And after you make that investment, you want high-quality lightmaps to be worth it.
But if you do it on the client and it must finish in a short time, you'll get only low quality. So baking offline seems the better option.

The only game I know of which does this is Trackmania. But that's a proprietary engine, of course, and it was just shadows, no GI.

PlayCanvas has this feature:

https://developer.playcanvas.com/ru/user-manual/graphics/lighting/runtime-lightmaps/
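
Per that manual page, the bake can be triggered from script at load time. A minimal sketch in TypeScript, assuming a recent engine version (exact flag and property names may differ between versions):

```typescript
// Sketch of runtime lightmap baking in PlayCanvas, per the manual page above.
// Treat names as a guide; check them against your engine version.
import * as pc from 'playcanvas';

function bakeRuntimeLightmaps(app: pc.Application): void {
    // Meshes must be flagged as lightmapped (usually done in the editor;
    // shown in code here, as a procedural scene would need it).
    const renders = app.root.findComponents('render') as pc.RenderComponent[];
    for (const render of renders) {
        render.lightmapped = true;
    }

    // Bake lightmaps at load time. Passing null bakes all lightmapped
    // models; BAKE_COLORDIR also stores a dominant light direction.
    app.lightmapper.bake(null, pc.BAKE_COLORDIR);
}
```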

SerialFr said:
PlayCanvas has this feature

Interesting. But it's direct lighting only, so no GI, which is what I associate with lightmaps.
But I see it makes sense for web/mobile, where you want to keep download size and hardware requirements low.

JoeJ said:

I guess such a feature does not exist in off-the-shelf engines.
The problem is, you need to spend a lot of effort on global parametrization to support lightmapping.
And after you make that investment, you want high-quality lightmaps to be worth it.
But if you do it on the client and it must finish in a short time, you'll get only low quality. So baking offline seems the better option.

The only game I know of which does this is Trackmania. But that's a proprietary engine, of course, and it was just shadows, no GI.

Thanks for the answers, guys.

I was thinking that lightmaps don't necessarily need ray tracing, which is slow. Instead they could use some GI algorithm and be done on the GPU to speed things up. (Though I have to admit I don't know much about GI algorithms.) This would be useful for procedurally generated scenes, which can't use offline creation.

Ed Welch said:

I was thinking that lightmaps don't necessarily need ray tracing, which is slow. Instead they could use some GI algorithm and be done on the GPU to speed things up.

Try baking your lightmap for a sample scene in your favorite 3D modeling software (which will of course use the GPU for that) and measure how long it takes. Judging by how long it takes me to bake AO maps (not even GI) in Blender with a high-end GPU, I doubt this is realistically possible, or only with awful quality, as @joej outlined.

Ed Welch said:
I was thinking that lightmaps don't necessarily need ray tracing, which is slow. Instead they could use some GI algorithm and be done on the GPU to speed things up.

Like others have said, computing GI lightmaps is very expensive and not likely to work fast enough on a client machine. You are basically treating every surface in the scene as a camera image sensor, which is a harder problem than rendering a single viewpoint with a normal camera (and there are typically more pixels to deal with). Plus, you need a global UV parameterization to map between surface points and lightmap texels, which is difficult to get with arbitrary geometry (you usually can't reuse the existing UVs because of tiling). Monte Carlo ray tracing is pretty much the only way to calculate accurate GI. Anything else is just a hack or makes some kind of accuracy/speed trade-off. And Monte Carlo ray tracing won't be that fast on GPUs, because the incoherence of the rays causes divergence.
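
To make the cost concrete, here is a rough sketch of the per-texel work, with the scene intersection injected as a callback, since that's the part a real baker runs on the GPU against a BVH (all names are illustrative, not from any particular engine):

```typescript
// Monte Carlo irradiance estimate for one lightmap texel (sketch).
type Vec3 = [number, number, number];

const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (v: Vec3, s: number): Vec3 => [v[0] * s, v[1] * s, v[2] * s];

// Cosine-weighted direction in the hemisphere around the unit normal n,
// using the branchless orthonormal basis of Duff et al.
function cosineSampleHemisphere(n: Vec3): Vec3 {
    const r = Math.sqrt(Math.random());
    const phi = 2 * Math.PI * Math.random();
    const x = r * Math.cos(phi);
    const y = r * Math.sin(phi);
    const z = Math.sqrt(Math.max(0, 1 - x * x - y * y));
    const s = n[2] >= 0 ? 1 : -1;
    const a = -1 / (s + n[2]);
    const b = n[0] * n[1] * a;
    const t: Vec3 = [1 + s * n[0] * n[0] * a, s * b, -s * n[0]];
    const bt: Vec3 = [b, s + n[1] * n[1] * a, -n[1]];
    return [
        x * t[0] + y * bt[0] + z * n[0],
        x * t[1] + y * bt[1] + z * n[1],
        x * t[2] + y * bt[2] + z * n[2],
    ];
}

// trace() returns incoming radiance along a ray, or null on a miss.
// (A real baker would also offset the origin to avoid self-intersection.)
function bakeTexel(
    position: Vec3,
    normal: Vec3,
    samples: number,
    trace: (origin: Vec3, dir: Vec3) => Vec3 | null,
): Vec3 {
    let sum: Vec3 = [0, 0, 0];
    for (let i = 0; i < samples; i++) {
        // Cosine weighting folds the cos(theta)/pdf factor into a constant,
        // so each sample contributes the incoming radiance directly.
        const radiance = trace(position, cosineSampleHemisphere(normal));
        if (radiance) sum = add(sum, radiance);
    }
    return scale(sum, 1 / samples);
}
```

Hundreds of such samples per texel, times millions of texels, is why this amounts to a full render in which every surface point is a pixel.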

Aressera said:
Monte Carlo ray tracing is pretty much the only way to calculate accurate GI.

Wait and see… >:D

It's not the only way, but it is the easiest one, mostly because it needs no parametrization, no probes for a radiance cache, etc.

Ed Welch said:
This would be useful for procedurally generated scenes, which can't use offline creation.

What do those scenes look like, and how large are they? How do you plan to get lightmap UVs? What's your requirement on quality? (Just diffuse, or some directional data for specular reflections too? One, N, or infinite bounces? How large can a single lightmap texel be?)

JoeJ said:
probes for a radiance cache

That's why I said “accurate” GI. Stuff like that makes simplifications or approximations to the rendering equation, such as limiting light bounces or spatial resolution, or other tricks/hacks.

Aressera said:
That's why I said “accurate” GI. Stuff like that makes simplifications or approximations to the rendering equation, such as limiting light bounces or spatial resolution, or other tricks/hacks.

The rendering equation can (and should) remain unchanged and accurate.
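
For reference, the equation in question (Kajiya's form):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
    + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i
```

Any method, cached or not, estimates this same integral; what differs is how the scene behind L_i is represented and sampled.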

Approximation ideally only affects the scene representation, through spatial quantization.
On the other hand, spatial quantization done right gives you a prefiltered representation of the scene, which has much higher accuracy for integration than ray-traced point sampling. (It's like cone tracing vs. ray tracing, or mip maps vs. nearest-neighbor sampling.)

Path tracing can support only a limited maximum number of bounces, so its accuracy is limited too.
The radiosity method with caches in realtime gives practically infinite bounces (each frame adds one bounce, and the accumulation is accurate, not hacky).

Thus I can argue: the radiosity method is more accurate than path tracing regarding global light transport. And the radiosity method in realtime is even more accurate than offline, because we get more bounces.
If we continued the discussion, we would end up listing many pros and cons for either method, but the above are facts.
We'll always use tricks or hacks in realtime applications, likely causing some acceptable bias or error, whatever method we use. But in theory the radiosity method is accurate and correct, and clearly the better choice for realtime, if we can solve the surface parametrization problem for probe placement.
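
To make the “one bounce per frame” point concrete, here is a toy sketch of one solver step over surface probes (scalar radiosity for brevity; all names hypothetical):

```typescript
// One Jacobi-style iteration of a cached radiosity solver (toy sketch).
// Running this once per frame adds one light bounce per frame; in the
// limit it converges to the full solution of B = E + rho * F * B.
interface Probe {
    emission: number;   // self-emitted light (one channel for brevity)
    albedo: number;     // diffuse reflectance
    radiosity: number;  // current estimate, refined every frame
}

// formFactors[i][j]: precomputed geometric coupling (visibility, cosines,
// distance) between probe i and probe j; each row sums to <= 1.
function radiosityStep(probes: Probe[], formFactors: number[][]): void {
    const next = probes.map((p, i) => {
        let gathered = 0;
        for (let j = 0; j < probes.length; j++) {
            gathered += formFactors[i][j] * probes[j].radiosity;
        }
        // B_i = E_i + rho_i * sum_j F_ij * B_j
        return p.emission + p.albedo * gathered;
    });
    next.forEach((b, i) => { probes[i].radiosity = b; });
}
```

The accumulation itself involves no approximation beyond the probe discretization, which is the point above.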

The only reason anybody thinks caching causes lower accuracy is that this latter problem is still open. All we've seen so far is volume probes in regular grids (e.g. RTX GI, back to VCT, etc.). So the probes are not on the surface, and most of them may even sit in empty space, pretty useless. That's very inaccurate and also very inefficient, hence the bad impression. The implementations are inaccurate and bad, not the idea.

Afaict, the first paper proposing the radiosity method (Goral et al., 1984) came out shortly before Kajiya's paper introducing the rendering equation.
As I recall, the solution in that paper was to use polygons for the cache, subdividing them with each iteration of the solver to increase accuracy.
That's pretty good, and it was often used for offline rendering. But path tracing became the better option as geometry resolution kept increasing, and extending the radiosity method to support specular reflections isn't trivial either.

At this point we also lack a precise definition of the term ‘Global Illumination’, which may be restricted to global light transport assuming Lambertian diffuse material for everything, or may include specular reflections, refractions, and everything else a complex BRDF defines. Thus we cannot say which method is the better GI method in general.
However, for games it's clear that multi-bounce diffuse GI plus single-bounce reflections is good enough.
And even if it isn't (e.g. because everybody buys a 4090), we should still use a cached method for performance reasons if we can.
Metro Exodus is actually a first example of this. It uses ‘path tracing’ for the first bounce, but then falls back to (a kind of) RTX GI probe lookup. That's the proper way, because as RT becomes faster (if it does), we can decide to path trace the first 2, then 3 bounces, and so forth, to get the best compromise between accuracy and performance.
However, once we have proper surface caches, you won't see a difference between path tracing just 1 or 3 bounces, so I personally think HW RT is useful mostly for accurate direct lighting, not so much for GI.
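
A sketch of that hybrid, with the bounce cutoff as a tunable parameter (scalar radiance and hypothetical callbacks again, in the style of the earlier sketches):

```typescript
// Hybrid of path tracing and a radiance cache (sketch): trace the first
// `maxBounces` bounces exactly, then terminate into a cache lookup.
type Vec3 = [number, number, number];

interface Hit {
    position: Vec3;
    normal: Vec3;
    emitted: number;  // one channel for brevity
    albedo: number;
}

function shade(
    origin: Vec3,
    dir: Vec3,
    bounce: number,
    maxBounces: number,
    trace: (o: Vec3, d: Vec3) => Hit | null,
    sampleDir: (n: Vec3) => Vec3,              // cosine-weighted sampler
    cacheLookup: (p: Vec3, n: Vec3) => number, // prefiltered multi-bounce GI
): number {
    const hit = trace(origin, dir);
    if (!hit) return 0; // environment term omitted

    if (bounce >= maxBounces) {
        // Past the cutoff: stop tracing and read cached irradiance instead.
        // Raising maxBounces as RT hardware gets faster buys accuracy
        // without changing anything else in the renderer.
        return hit.emitted + hit.albedo * cacheLookup(hit.position, hit.normal);
    }
    // Below the cutoff: continue the path like a normal path tracer.
    return hit.emitted + hit.albedo * shade(
        hit.position, sampleDir(hit.normal),
        bounce + 1, maxBounces, trace, sampleDir, cacheLookup,
    );
}
```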

All that said to keep you from joining the ‘path tracing is beyond doubt’ camp :D
I worked on a fast GI algorithm for 10 years, and I could do Portal RTX on a PS4. It's O(N) and biased, but the visual accuracy would be mostly the same. I only miss out on sharp reflections, refractions, and shadows, but I get much lower lag and no noise.
But by now I've already spent >5 years on tools to get a practical, high-quality solution for the probe placement problem. Global parametrization for open worlds really is hard, but it's the only way if we want efficient realtime GI.
