Issue with packed uints in shader on Intel HD-Chips


JoeJ said:
This makes clear why you want to use constant ram if you can. A situation where this often is no longer possible is having too many lights or bone matrices, or bindless rendering techniques.

Thanks, that makes things a lot clearer. Then the solution here is just to make the cbuffer as large as supported. The number of animated tiles is never going to become really large. I suppose I could use a shader-switch to change the number of supported elements in increments. Do you happen to also know if there is an overhead in having a large cbuffer, let's say 4096 float4s, if only 8 of those are currently in use?
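For reference, this is roughly the setup I mean, as a hedged HLSL sketch (all names made up; D3D11 allows 4096 float4 constants per cbuffer):

// Hypothetical layout: one counter plus padding, then the remaining
// space filled with tile data up to the 4096-constant limit.
cbuffer AnimatedTiles : register(b1)
{
    uint   g_numAnimatedTiles;  // how many entries are valid this frame
    float3 _pad;                // keep 16-byte alignment
    float4 g_tileData[4095];    // rest of the 64 KB buffer
};

float4 LoadTile(uint index)
{
    // only indices below g_numAnimatedTiles are expected to hold valid data
    return (index < g_numAnimatedTiles) ? g_tileData[index] : float4(0, 0, 0, 0);
}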

JoeJ said:
Further i guess Load turns texture memory access into the same as a general memory access we see with StructuredBuffer. But yeah, not sure. VRAM memory access, related pipelined execution, caching, etc., is where my knowledge is bad. Otherwise GPU performance is easier to predict and understand than CPU perf. to me, because there is no branch prediction, speculative execution and such black boxes.

Interesting, I find CPU-performance much easier to understand. Especially after learning ASM for my JIT-compiler, but even before :D True, there are systems in the background that you don't control directly, but the basics of what's faster seem much clearer to me - use less memory, access memory in a linear fashion, precompute results (as long as it doesn't violate the former two), cache expensive calls in local variables, etc… For GPU, even if I know what's basically the right thing, I sometimes find it hard to execute - MAD-instructions, as an example. The one class we had on (CUDA) compute-shaders, where the guy explained how to optimize the performance of a shader by 128x just by changing the way that memory is accessed and how the batches are executed, I couldn't replicate myself :D Maybe with a bit more experience - I never got around to implementing compute-shaders in my engine so far, not really all that important for 2d graphics which I'm working on right now.
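(For context, this is the kind of thing I mean - a hedged HLSL sketch using the mad() intrinsic; whether the compiler actually emits a fused MAD still depends on hardware and driver:)

// a * b + c can map to a single fused multiply-add;
// HLSL also exposes this directly as the mad() intrinsic
float ScaleAndBias(float v, float scale, float bias)   // hypothetical helper, just for illustration
{
    return mad(v, scale, bias);
}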


Juliean said:

Ultraporing said:
I tried your sample and it worked for my NVIDIA card. I did a bit of googling and found an example for the PointClamp state, and noticed when comparing yours with theirs that you use for AddressW WRAP instead of CLAMP like in their point wrap. No clue if it has something todo with it.

I noticed that too, but I also wouldn't expect that to cause the issue. The calculated z-coordinate would have been (0.0f + 0.5f)/1.0f = 0.5f, which should not be in a range where it either needs to be wrapped or clamped. Unless the wrap-calculation causes some issues with the precision of the float that is being read? Idk really :D

Ultraporing said:
Sadly my experience with shader programming is almost non existent so I will stop posting here to not spam the topic too much.

Oh not at all, I appreciate all input.

Since my intuition won't leave me alone and I had a similar issue to the following, which cost me nearly a week of debugging at an old job, I'll just throw this out there.

Did you test some simple divisions and calculations, like in your broken sample, as a standalone test with known values and results on both machines? I mean the Intel one and the one it runs fine on, to verify the consistency of the operations between the two pieces of hardware.
I had a problem a few years back with purely CPU-processed code (20-year-old legacy software and my personal hell) that packed float values into INTs and then ran division operations on them on different hardware. It ran fine on my new laptop with Win10, but it would not work on a client's machine and did funky stuff like turning a value which should have been e.g. 0.8f into 0f after the division. I think I fixed it by removing the division operations and only using multiplication.

My stomach tells me it's some banal incompatibility with unpredictable results due to driver/OS/hardware (maybe even firmware) changes in the background, like you suggested.

Good luck


@ultraporing Yeah, I suppose it would go in that direction. But since my fix works and is an improvement overall for the shader, I don't really feel like investigating it much further. Especially since the test-machine is slow AF. But thanks for all the input, if I ever run into a similar issue (or feel like it), I'll probably investigate in that direction.

Juliean said:
Do you happen to also know if there is an overhead in having a large cbuffer, let's say 4096 float4s, if only 8 of those are currently in use?

Hmmm, i can only draw conclusions…

Assume multiple different shaders, each having different cbuffers, would run on the same CU.
In this case using large cbuffers would reduce occupancy, because when exceeding the physical limit, multiple shaders could not run on the same CU.

But i have never heard of constant ram being a factor for occupancy. It's just registers and LDS. So i would conclude different shaders never run on the same CU at the same time at all, and thus we can use all ram which is there without worries.

However, this can't be the whole story. Because then all GPUs would have the same exact size of constant ram as defined by API limits, because having more ram would be pointless then. And i doubt that's the case.

So, after all that thinking… i don't know :D But i'm very sure there is no good reason to minimize constant ram usage in general. Much more likely you want to ‘utilize’ it.

Juliean said:
use less memory, access memory in a linear fashion, precompute results

That's the same on GPU too, but there are more related details and it's more complex.

On CPU you're good with reading like so:

a[0]
a[1]
a[2] ...

On GPU, assuming it has only 2 threads executing in lockstep, and writing the threads horizontally and execution order vertically, the ideal pattern is this:

a[0] | a[1]
a[2] | a[3]

But not this, which is worse:

a[0] | a[2]
a[1] | a[3]

Though, you can care about this only with compute shaders. For PS, VS, etc. the second option is ofc. all you can do.
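As a rough compute-shader sketch of those two patterns (written in HLSL here, all names made up, each thread reading several elements):

StructuredBuffer<float>   g_input  : register(t0);
RWStructuredBuffer<float> g_output : register(u0);

#define GROUP_SIZE 64
#define ELEMS      4

[numthreads(GROUP_SIZE, 1, 1)]
void CSMain(uint3 gtid : SV_GroupThreadID, uint3 gid : SV_GroupID)
{
    uint base = gid.x * GROUP_SIZE * ELEMS;
    float sum = 0;

    // good: in each iteration neighbouring threads read neighbouring elements
    // (the a[0] | a[1] pattern), so the wave touches one contiguous block
    for (uint i = 0; i < ELEMS; ++i)
        sum += g_input[base + i * GROUP_SIZE + gtid.x];

    // worse: each thread walks its own private range (the a[0] | a[2] pattern),
    // so accesses within one iteration are strided across the wave
    //for (uint i = 0; i < ELEMS; ++i)
    //    sum += g_input[base + gtid.x * ELEMS + i];

    g_output[gid.x * GROUP_SIZE + gtid.x] = sum;
}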

Also, on GPU you want to ‘manually pre cache’ your data if you access it more often than once and from different threads in inner loops.
So you may read a block of data from VRAM to LDS,
then do your processing stuff only using LDS, and you may write your results to another LDS array with random access,
then, when you're done you write back your results from LDS to VRAM with (ideally) linear access.

So there's lots of extra effort to deal with memory most efficiently. And the goal always is to have no VRAM access in inner loops.
Again, that's CS only, and depends on algorithm ofc.
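A hedged sketch of that pattern in HLSL (groupshared is the LDS; the "processing" here is just a placeholder neighbour average, all names made up):

StructuredBuffer<float>   g_src : register(t0);
RWStructuredBuffer<float> g_dst : register(u0);

#define GROUP_SIZE 128
groupshared float s_tile[GROUP_SIZE];   // the LDS block shared by the thread group

[numthreads(GROUP_SIZE, 1, 1)]
void CSMain(uint3 dtid : SV_DispatchThreadID, uint3 gtid : SV_GroupThreadID)
{
    // 1) linear read from VRAM into LDS
    s_tile[gtid.x] = g_src[dtid.x];
    GroupMemoryBarrierWithGroupSync();

    // 2) all the (possibly random-access) work happens on LDS only
    uint left  = (gtid.x + GROUP_SIZE - 1) % GROUP_SIZE;
    uint right = (gtid.x + 1) % GROUP_SIZE;
    float result = (s_tile[left] + s_tile[gtid.x] + s_tile[right]) / 3.0f;

    // 3) single linear write back to VRAM
    g_dst[dtid.x] = result;
}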

Juliean said:
cache expensive calls in local variables

I have not worked for years on GPU and don't know how compilers have improved, but on GPU you usually want to inline everything, so no little helper functions at all.
Almost nobody does this, but it's true. When optimizing a shader, the first impression can be that having sub-functions even helps, but after being done with optimizing, inlining all the code always ended up faster for me.
So if hlsl has pragmas not only to roll and unroll, but also to inline, try it out for low effort gains. Vulkan had neither of these when i worked on it.
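For what it's worth, the attributes that do exist in HLSL are the loop ones; an explicit inline attribute isn't really there (more on why that's moot below):

// [unroll] / [loop] control loop handling; there is no inline pragma,
// functions end up inlined during compilation anyway
float SumWeights(float4 w)
{
    float sum = 0;

    [unroll]                    // force full unrolling of a small fixed loop
    for (int i = 0; i < 4; ++i)
        sum += w[i];

    return sum;
}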

Caching to local variables itself also is more pain, because it may increase your register count by one, eventually reducing the occupancy tier. So i do a lot of bit packing on GPU, and also use LDS for the cache instead of local variables.
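As an example of the kind of bit packing meant here (hedged HLSL sketch: two half-precision values squeezed into one uint register via f32tof16/f16tof32):

uint PackHalf2(float a, float b)
{
    // two 16-bit floats share a single 32-bit register
    return f32tof16(a) | (f32tof16(b) << 16);
}

float2 UnpackHalf2(uint packed)
{
    return float2(f16tof32(packed & 0xFFFF), f16tof32(packed >> 16));
}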

Juliean said:
I never got around to implementing compute-shaders in my engine so far, not really all that important for 2d graphics which I'm working on right now.

Probably.
It's a pain in the ass. The code is hard to maintain, HW details affect ideal choices on algorithms, languages are meh, etc.

But i really love the concept of parallel algorithms. The programming process itself becomes interesting and rewarding, even for simple stuff. It's something else and new.

But sadly, current APIs are not cross platform, or outdated, or dying, and low level. So the power of GPUs is not available to the general programmer. I have plenty of experience, but i still shy away from porting any of my (slow) preprocessing tools to compute for those reasons.
It's really a pity, and imo the largest failure of the tech industry to come up with a proper standard still after so many years. : (

JoeJ said:
So if hlsl has pragmas not only to roll and unroll, but also to inline, try it out for low effort gains. Vulkan had neither of these when i worked on it.

From my understanding, there are no functions after compilation of a shader is done. Everything will always be inlined. There is no “call” instruction on the GPU as there is on the CPU, so inlining functions is a necessity. That to me usually means that having a sub-routine on the GPU is purely a matter of whether it makes the code more readable or not.

JoeJ said:
Probably. It's a pain in the ass. The code is hard to maintain, HW details affect ideal choices on algorithms, languages are meh, etc. But i really love the concept of parallel algorithms. The programming process itself becomes interesting and rewarding, even for simple stuff. It's something else and new. But sadly, current APIs are not cross platform, or outdated, or dying, and low level. So the power of GPUs is not available to the general programmer. I have plenty of experience, but i still shy away from porting any of my (slow) preprocessing tools to compute for those reasons. It's really a pity, and imo the largest failure of the tech industry to come up with a proper standard still after so many years. : (

Yeah, that's also an argument for why I haven't started yet. Since my engine is (theoretically) cross-platform, I would want to have a way to only write compute-shaders once, like I do now with my own meta-shader language. But that requires a lot more time than just getting started would, since I'd have to look at the differences between the languages, and I'm not even sure if this is technically possible to get good cross-platform results for compute-shaders; as you suggested, it's probably not.

Juliean said:
Yeah, that's also an argument for why I haven't started yet. Since my engine is (theoretically) cross-platform, I would want to have a way to only write compute-shaders once, like I do now with my own meta-shader language. But that requires a lot more time than just getting started would, since I'd have to look at the differences between the languages, and I'm not even sure if this is technically possible to get good cross-platform results for compute-shaders; as you suggested, it's probably not.

If you decide to implement Compute Shaders sometime in the future, Heterogeneous-Computing Interface for Portability (HIP) looks promising.
With this you would only need to Translate your Meta Language to HIP, and from there you can just use HCC (AMD) or NVCC (NVIDIA) compilers.


Ultraporing said:
With this you would only need to Translate your Meta Language to HIP, and from there you can just use HCC (AMD) or NVCC (NVIDIA) compilers.

Oh, the thing with me is, I don't use external libraries at all, unless virtually required (like DX/GL) :D I know this is (objectively) stupid, writing your own implementation for things like the (awesome-sounding) library you posted, but that's just how I roll :D After all, my project(s) are just for fun, and I have great fun reinventing the wheel like that, always have. Though I might take a look at the library for some inspiration if I ever end up tackling that feature, so thanks for posting after all.

Juliean said:
From my understanding, there are no functions after compilation of a shader is done. Everything will always be inlined. There is no “call” instruction on the GPU as there is on the CPU, so inlining functions is a necessity. That to me usually means that having a sub-routine on the GPU is purely a matter of whether it makes the code more readable or not.

Oops, agreed about the missing call, which makes my expectation of getting an inline pragma pretty stupid, admittedly.
I don't know the reason for the problem, never compared any ISA output. In fact there should be no difference, generating the same output no matter if we use functions or not. But i got higher register usage from using functions so often that at some point i stopped using them almost entirely.
I really hope this has been ‘fixed’ since then.

Optimizing for register usage is a black art for other reasons too. Slight syntax changes which have no expected effect often do have a big effect.
It went so far that i once improved performance by a factor of two by including a pointless branch like this: if (localThreadID < 256). The branch is always true on any device, but it still motivated the compiler to reduce register usage by something like 10.
That's almost a compiler bug, producing correct code but bad performance, fixed by writing really bad code.
Again i hope that's much better now than 5 years ago.
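For completeness, the trick roughly looked like this (written as HLSL here for consistency with the thread; whether it helps is purely anecdotal and compiler-dependent):

[numthreads(256, 1, 1)]
void CSMain(uint3 gtid : SV_GroupThreadID)
{
    if (gtid.x < 256)   // always true for a 256-wide group...
    {
        // ...but it sometimes nudged the compiler into a lower register allocation.
        // actual shader body goes here
    }
}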

Juliean said:
and I'm not even sure if this is technically possible to get good cross-platform results for compute-shaders

For this case i would not expect problems.
Personally i have used the same shader code for OpenCL 1.2 and Vulkan (GLSL). I did not write some transpiler tools myself, but used a regular C preprocessor and some defines to deal with different syntax.
There is little difference between OpenCL 1.2 and compute shaders, and i assume it's even much less for compute shaders across GLSL / HLSL / Metal.
So you would only need to figure out the subtle differences in syntax to generate code for multiple APIs. It should not be any harder than with pixel shaders.
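A tiny sketch of how such a preprocessor shim can look (macro names invented, and the real differences go further than this of course):

#ifdef TARGET_HLSL
    #define LDS            groupshared
    #define GROUP_BARRIER  GroupMemoryBarrierWithGroupSync
#else // TARGET_GLSL
    #define LDS            shared
    #define GROUP_BARRIER  barrier
#endif

// the shared kernel text below only uses the common macros
LDS float s_cache[128];
// ... GROUP_BARRIER(); etc.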

For general GPU usage i would just prefer a simple API with a C alike language and features such as indirect dispatch and device generated dispatch. Something like OpenCL 2.0 or Cuda.
I don't want to use a bloated gfx API for that. Although, that's currently the best option regarding cross platform.
Maybe SYCL gets wide support, now that Intel builds on it. Otherwise my only hope is future C++.

JoeJ said:
I don't know the reason for the problem, never compared any ISA output. In fact there should be no difference, generating the same output no matter if we use functions or not. But i got higher register usage from using functions so often that at some point i stopped using them almost entirely.

Not sure either; maybe there are subtle differences between the code you wrote with the function and without. While the forced inlining might sound like an advantage, it could turn out to be a disadvantage too, if the function is large, used in many places, or maybe inside a loop. This could negatively affect register usage, and the overhead of a “call” (at least on a traditional CPU) could be far less than the increased code size etc…

JoeJ said:
Optimizing for register usage is a black art for other reasons too. Slight syntax changes which have no expected effect often do have a big effect. It went so far that i once improved performance by a factor of two by including a pointless branch like this: if (localThreadID < 256). The branch is always true on any device, but it still motivated the compiler to reduce register usage by something like 10. That's almost a compiler bug, producing correct code but bad performance, fixed by writing really bad code. Again i hope that's much better now than 5 years ago.

Yeah, exactly, that's where I totally lack understanding on the GPU so far. I'll have to include a disassembler in my engine sometime, so that I can start looking at the generated ASM, now that I probably understand a bit better what's going on. Though FWIW, I should also mention that for writing shaders for in-game effects, I tend to prefer using a visual interface (like a shader-graph etc…), which leaves less room to optimize in that regard as well :D Only really complicated shaders, like the tilemap from this thread, do I write in my cross-platform language.

JoeJ said:
Personally i have used the same shader code for OpenCL 1.2 and Vulkan (GLSL). I did not write some transpiler tools myself, but used a regular C preprocessor and some defines to deal with different syntax.

I'm not even sure where my tool (currently for shaders) would even fit in :D It's definitely more complicated than a preprocessor, but I'm not sure if it counts as a transpiler. I've basically defined a JSON-esque interface, where blocks are then translated to the respective language-equivalent. There's probably some old blog post floating around where I showcase this. In reality, I haven't updated the OpenGL backend in a very long time, since I haven't really needed to so far. Only if I ever go for Android/iOS support, I suppose, would I have to deal with that again.

JoeJ said:
For general GPU usage i would just prefer a simple API with a C alike language and features such as indirect dispatch and device generated dispatch. Something like OpenCL 2.0 or Cuda. I don't want to use a bloated gfx API for that. Although, that's currently the best option regarding cross platform. Maybe SYCL gets wide support, now that Intel builds on it. Otherwise my only hope is future C++.

That would be the least of my worries (I hope). I have a pretty well-rounded, platform-agnostic wrapper for GFX-stuff so far, and adding the capability to execute compute-shaders that way should (hopefully) be easy. I have to say, I'm getting intrigued about trying it out. Though I have to work more on the game right now. I'll add it to the list of technical things that I want to try (like DX12 or Android support) in the future :D

Ultraporing said:
If you decide to implement Compute Shaders sometime in the future, Heterogeneous-Computing Interface for Portability (HIP) looks promising.

HIP so far is AMD only, and can't run on Windows due to some restrictions of the Windows driver model, afaik. So it's AMD + Linux only, and thus even worse than Cuda.
This also is one reason SYCL is not yet widely supported, as AMD (or somebody else?) implements it on top of HIP, afaik. SYCL is modern C++ for GPUs, so quite interesting…

But it's all vendor specific, even if those vendors declare their standards to be open for others. OpenCL tried to solve this, but NV refused to implement 2.0 on consumer drivers.
NV now does support OpenCL 3.0, but 3.0 is actually a step back, making the advanced features of 2.0 (device side enqueue) optional, which NV still does not implement.
AMD has recently discontinued OpenCL support for their CPUs. So OpenCL is basically a dying API with decreasing support. CL 1.2 still has wide support, but is outdated, lacking both indirect dispatch and device command lists.

Microsofts AMP seems abandoned too, but this was Windows only anyway.

So the options are:
NV: Cuda
Intel: SYCL
AMD: HIP (Linux only)

We can't use any of this. But for games we don't really want to anyway. For games we'd better use the compute options of our selected gfx API, because only then do we have fine-grained control over async execution and robust interop with gfx.
OpenGL and Vulkan have good cross-platform support, so our situation isn't bad. The only problem is that the APIs do not evolve. We lack an alternative to device-side enqueue or Cuda's dynamic parallelism, and there is no sign of progress.

Still, for game devs the situation is ok. It only sucks we depend on some API which might be outdated after some years, forcing us to port our code just to keep up.
But for anyone else the burden to deal with complex gfx APIs and using silly ‘shading languages’ to write general purpose code is quite big.

