
I know C but am planning on using a 3d engine. Should I use C# or C++?


Yes, C++17 is widely adopted, but it does not have much that studios really needed. Templates and a few type rules got some tightening up that feels more intuitive to me. The library changes were mostly things for which developers already used system-specific libraries with platform wrappers rather than the language's generic versions, which offer less nuance. I don't think any console vendors accept submissions built on older pre-C++17 compilers, so the features are present whether you use them or not.

C++20 is getting used in ever-increasing places, although modules and concepts are not widely used yet. Even so, I have seen far less reluctance to use new features than there was two decades ago. Back then the issues were severe and critical, like an executable growing 10x in size from a template glitch or 2x in size from extensive exception handling. These days the changes are mostly incremental refinement. New features are rightly suspect until a few people have looked under the hood, but they are generally acceptable compared to the alternative implementations. If you don't actually need them, an older idiom is more likely to survive code review, but it is common enough that people are learning about C++20 features in code reviews and seeing that they work out.


@JoeJ

JoeJ said:

but C++ seems stable …..

That depends on what you mean by stable. I use C++ almost exclusively, but I would be overjoyed if something better came along. I loved C++ when it first came out, but now it seems very convoluted and it constantly changes. I have picked up a few modern C++ things (like standardized threading) but a lot of it I could do without. I think the worst part is that teachers now encourage newer programmers to use the standard library without really understanding what's going on beneath the surface. I have seen many bugs, and even more performance issues, because of this. That being said, I'm using C++20, but the only reason I switched was to use semaphores. The rest I could do without. Some things in modern C++, like std::shared_ptr, are downright bad (at least for general usage). This is coming from someone who's been a big fan of reference counting since the early 90s. Somehow they managed to implement it so the pointer is actually 2x the size of a normal pointer. That must have taken some thought.
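For anyone who hasn't used them, here is a minimal sketch of those C++20 semaphores; the gate/worker names are just for illustration:

#include <semaphore>
#include <thread>
#include <vector>

// At most 4 threads may hold the semaphore at once.
std::counting_semaphore<4> gate{4};

void worker()
{
	gate.acquire();   // blocks while 4 threads are already inside
	// ... bounded work here ...
	gate.release();
}

int main()
{
	std::vector<std::jthread> threads;   // jthread joins on destruction
	for (int i = 0; i < 16; ++i)
		threads.emplace_back(worker);
}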

Gnollrunner said:
That being said, I'm using C++20, but the only reason I switched was to use semaphores. The rest I could do without.

I'm the opposite: I couldn't do without C++20. From simple things like spaceship (plus defaulted operator==), to coroutines, constraints and concepts… I personally feel that people who don't get much from the newer standards might not be using C++ to its full capacity in the first place. Adopting every new standard has transformed C++ into an entirely new (read: better) language compared to when I started, pre-C++11, and every new standard has iterated on many of the features that I regularly use and added more useful things.
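As a taste of the spaceship operator: one defaulted line replaces six hand-written comparison operators (Version here is just an illustrative type):

#include <compare>

struct Version
{
	int major, minor, patch;
	// Defaulting <=> also implicitly gives a defaulted operator==.
	auto operator<=>(const Version&) const = default;
};

static_assert(Version{1, 2, 0} < Version{1, 3, 0});
static_assert(Version{1, 2, 0} == Version{1, 2, 0});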

Gnollrunner said:
Some things in modern C++, like std::shared_ptr, are downright bad (at least for general usage). This is coming from someone who's been a big fan of reference counting since the early 90s. Somehow they managed to implement it so the pointer is actually 2x the size of a normal pointer. That must have taken some thought.

Well, how else would you implement reference counting that works without storing the counter inside the object, and without relying on all objects being constructed via make_shared?
I'm personally also using ref-counting that stores the count inside the objects themselves, but I'm also working in a closed ecosystem for an engine that I maintain completely. That's the main problem with the standard library: it needs to work in a wide range of situations, where one cannot be expected to always be able to modify the entire source freely to fit a specific requirement. Sure, specific implementations might be able to perform better. I'm also using a custom “vector” class, mainly since std::vector<bool> is a complete fuckup, and also so that I can specify the size type, which lets the vector require the size of 2 instead of 3 pointers when fewer than 2^32 elements need to be stored. That being said, I believe those are rather minor optimizations, and very rarely should a size of 2x uint64_t on a shared_ptr make a noticeable difference in performance (otherwise, 32-bit would still theoretically be faster than 64-bit, because its pointers are also only half the size, wouldn't it?).
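The size difference is easy to check. On the major standard libraries (MSVC, libstdc++, libc++) these assertions hold, though the standard does not actually mandate the sizes, so treat this as an illustration rather than a portable guarantee:

#include <memory>

// shared_ptr carries the object pointer plus a pointer to the control block
// that holds the strong and weak counts; unique_ptr is just the pointer.
static_assert(sizeof(std::shared_ptr<int>) == 2 * sizeof(int*));
static_assert(sizeof(std::unique_ptr<int>) == sizeof(int*));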

Juliean said:
That being said, I believe those are rather minor optimizations, and very rarely should a size of 2x uint64_t on a shared_ptr make a noticeable difference in performance (otherwise, 32-bit would still theoretically be faster than 64-bit, because its pointers are also only half the size, wouldn't it?).

I'm going to avoid getting into rathole language arguments here. I think a lot of things come down to programming style and mindset. If what you're doing works for you, that's great. The same goes for old school C++ programmers.

That being said, I did want to address this bit. I use a lot of pointer-heavy structures. In fact, I even implement a half-pointer (32 bits), which is basically a relative-addressing, 8-byte-aligned pointer for referencing objects in the same 16 gig heap. While I don't expect a standard library to go that far, the 2x pointer size would be a huge hit over a normal reference-counting pointer. Imagine a large DAG structure. I have something like this, where particular nodes are referenced outside of the structure, so when it's deleted I save certain key nodes I want to keep. Reference counting works well for this. However, a 2x pointer size would be highly sub-optimal. Not everything is about speed. In this case it's about space, and that can indirectly lead to performance issues.
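To sketch the half-pointer idea (all names here are hypothetical, not my actual implementation): store a 32-bit offset in 8-byte units from the heap base, so 2^32 units cover 32 GiB, which a 16 gig heap fits into comfortably.

#include <cassert>
#include <cstdint>

template <typename T>
class HalfPtr
{
public:
	HalfPtr(const void* heapBase, const T* p)
		: offset_(static_cast<uint32_t>(
			(reinterpret_cast<uintptr_t>(p) -
			 reinterpret_cast<uintptr_t>(heapBase)) >> 3))
	{
		assert(reinterpret_cast<uintptr_t>(p) % 8 == 0);  // must be 8-byte aligned
	}

	T* get(const void* heapBase) const
	{
		return reinterpret_cast<T*>(
			reinterpret_cast<uintptr_t>(heapBase) + (uintptr_t(offset_) << 3));
	}

private:
	uint32_t offset_;  // 4 bytes instead of 8
};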

There are several ways to implement reference counting. Some involve special heaps, macros, etc. My personal opinion is that however it's done, a 2x pointer-size penalty is unacceptable for the general case. It really doesn't get you anything except the ability to use a separate control block, which is generally something programmers try to avoid. It also may have some slight advantage with weak pointers and large objects, but that's about as far as it goes. Meanwhile, you pay the 2x penalty no matter what. I believe you are also stuck with thread-safe counts even if you don't need them.
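For contrast, a bare-bones intrusive version along these lines (just a sketch: no weak pointers, and the count here is atomic, which you could make a plain int if you don't need thread safety):

#include <atomic>
#include <utility>

// The count lives inside the object, so the smart pointer itself
// is exactly one pointer wide.
struct RefCounted
{
	std::atomic<int> refs{0};
};

template <typename T>  // T must derive from RefCounted
class IntrusivePtr
{
public:
	explicit IntrusivePtr(T* p = nullptr) : p_(p) { if (p_) ++p_->refs; }
	IntrusivePtr(const IntrusivePtr& o) : p_(o.p_) { if (p_) ++p_->refs; }
	IntrusivePtr& operator=(IntrusivePtr o) { std::swap(p_, o.p_); return *this; }
	~IntrusivePtr() { if (p_ && --p_->refs == 0) delete p_; }
	T* operator->() const { return p_; }

private:
	T* p_;  // sizeof(IntrusivePtr<T>) == sizeof(T*)
};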

I thought this was an interesting video, only because it demonstrates why you should not count on std::shared_ptr.

Juliean said:
I'm the opposite: I couldn't do without C++20. From simple things like spaceship (plus defaulted operator==), to coroutines, constraints and concepts… I personally feel that people who don't get much from the newer standards might not be using C++ to its full capacity in the first place. Adopting every new standard has transformed C++ into an entirely new (read: better) language compared to when I started, pre-C++11, and every new standard has iterated on many of the features that I regularly use and added more useful things.

Those elements could be done already; they're just a little easier now, or the requirements are made explicit. It isn't like we went from a non-Turing-complete language to a Turing-complete language. It's also not like we're getting new CPU instructions out of it.

Just as something I've reminded beginner programmers of many times: all looping constructs can be reduced to a while loop. Do/while, for, range-based for, all of them boil down to the same while loop, with the compiler simplifying elements along the way. The programming language is easier to use with them, and they can open up optimization opportunities, but fundamentally they don't create anything new. The same has been true of everything I've seen in C++ over the decades.
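To make that reduction concrete (process and n are placeholders):

void process(int);  // placeholder

// This for loop...
void demo(int n)
{
	for (int i = 0; i < n; ++i)
		process(i);
}

// ...is the same program as this while loop, just with the pieces moved around:
void demoAsWhile(int n)
{
	int i = 0;
	while (i < n)
	{
		process(i);
		++i;
	}
}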

Concepts and requires are useful for template selection, but that could be done already through a mix of SFINAE and enable_if<>, along with some other compile-time constructs. It is a more graceful solution to those problems when they're encountered, but it's not a problem we often experience. It will certainly help compile times in some situations, can guide specialization, and can make template programmers' lives easier, but it isn't something radically new, as the underlying behavior was already available through more complex means.
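For instance, the same integral-only constraint written both ways (twice_old and twice_new are illustrative names):

#include <type_traits>

// Pre-C++20: SFINAE via enable_if removes the overload from consideration.
template <typename T,
          typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
T twice_old(T v) { return v + v; }

// C++20: a requires-clause states the same constraint directly,
// with much clearer error messages when it fails.
template <typename T>
	requires std::is_integral_v<T>
T twice_new(T v) { return v + v; }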

The spaceship operator is something that programmers could already implement, and most of the benefit is for partial-ordering situations. It is useful for systems like database NULLs and numeric NaNs that are incomparable, but in most games NaNs are considered a major bug, and NULL database entries are generally handled long before getting to the client. Even so, that solution already existed; the one incorporated in the standard makes it easier in the general case but isn't something fundamentally new.
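A concrete illustration of the partial-ordering case with NaN:

#include <compare>
#include <limits>

int main()
{
	const double nan = std::numeric_limits<double>::quiet_NaN();
	// Three-way comparison of doubles yields std::partial_ordering,
	// which can represent "neither less, equal, nor greater" -
	// something the boolean operators cannot express.
	const auto c = 1.0 <=> nan;
	const bool isUnordered = (c == std::partial_ordering::unordered);  // true
	return isUnordered ? 0 : 1;
}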

Coroutines and the other parallel-processing facilities are generalizations of platform libraries, and systems of a certain size and maturity either implement their own wrappers that cover the details of each platform, or use the platform's libraries directly for better nuance. That has always been the bane of the general-purpose libraries in the standard: because they serve the general purpose, the specialty purpose goes unmet, and games typically need those specialty purposes.

Several features are a new option for compilation, since modules can be implemented as a major departure from the compilation model, but fundamentally nothing about the programming itself has changed. You could access externally available functions before and you still can; it's just potentially less work for the compiler and more options for toolchain creators.

Several also add optimization opportunities, things the compiler can know about to produce better results, but they don't add anything fundamentally new.

That's been the big thing about the language standards: the 2011 and even the 2003 standards didn't add anything fundamentally new to programming in the language. There are no new operations that didn't exist before, and compilers still have access to the same hardware assembly opcodes. Instead, each standard makes some constructs easier for programmers to access, sometimes much easier. In other cases it provided a generic way to access what was formerly platform-specific infrastructure, with operations like parallel processing degrading into no-op commands when not supported. In each case it simplified access to other elements and enabled optimization opportunities, but nothing was fundamentally new to programming.

Even lambdas, which were a big change, didn't introduce anything new. It was different typing, but you could implement the same feature with either a small function or a functor. With the new syntax and language feature there was more opportunity for additional optimization, but it degrades into a worst case of a compiler-generated anonymous function or functor, exactly what you could implement before.
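The lambda/functor equivalence, spelled out (AddN is just the hand-written form):

// This lambda, capturing n by value...
int make()
{
	const int n = 5;
	auto addN = [n](int x) { return x + n; };
	return addN(1);  // 6
}

// ...is essentially sugar for this functor, which is roughly
// what the compiler generates behind the scenes:
struct AddN
{
	int n;
	int operator()(int x) const { return x + n; }
};

int makeByHand()
{
	AddN addN{5};
	return addN(1);  // 6
}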

That's also exactly why there is little difficulty getting new features adopted in companies. They aren't fundamentally new concepts. Instead, they're a more efficient way to write exactly what was written before, in programmer efficiency, generated-code efficiency, or both. Unlike the compilers of 20 years ago, these have had far more time (and money) to bake in the oven of experimental systems before being incorporated into the mainstream.

@Statusphere By the way, for C++20 features MSVC has the most complete compiler at the moment, not Clang.

It will be years before we see C++20 on consoles, and that's not really down to the console makers but mostly to the standards themselves: they are still not fully sure how modules should work at the implementation level.


frob said:

Several features are a new option for compilation, since modules can be implemented as a major departure from the compilation model, but fundamentally nothing about the programming itself has changed. You could access externally available functions before and you still can; it's just potentially less work for the compiler.

Several also add optimization opportunities, things the compiler can know about to produce better results, but they don't add anything fundamentally new.

Modules are the only thing I would love to see support for, since they can dramatically change compile times; we are talking an order of magnitude here. But I also know it's not simple to move the codebases we work in over to them, sadly.
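For reference, the shape of a minimal module, which is what makes that compile-time win possible; file names and extensions vary by toolchain, so this is just a sketch:

// math.ixx (MSVC) / math.cppm (Clang): compiled once into a binary module interface
export module math;

export int add(int a, int b) { return a + b; }

// consumer.cpp: importing loads the precompiled interface
// instead of re-parsing a header in every translation unit
import math;

int three() { return add(1, 2); }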

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion

Does anyone know what the plan is regarding future GPU support? AFAIK this should come with C++26, but I never found any detailed information.

It's my last hope.
OpenCL 3 is a step back from 2. CL 2 is still not supported by Nvidia. AMD has dropped CPU support, so I'm not sure whether they might abandon CL entirely in the future.
Microsoft AMP did not really take off, and CUDA is vendor locked.

So compute shaders via a gfx API are the only option, but they're too cumbersome to use in most cases.

Thus, GPU power remains accessible only to game engine experts or vendor-locked enterprise software. That's unacceptable, and after decades there is still no solution in sight.

frob said:
Those elements could be done already; they're just a little easier now, or the requirements are made explicit. It isn't like we went from a non-Turing-complete language to a Turing-complete language. It's also not like we're getting new CPU instructions out of it.

No, I agree, but most of those features are a huge boost in convenience/productivity, which is reason enough not to want to work without them, in my book. Sure, we could always loop over a collection:

for (std::vector<int>::iterator itr = vector.begin(); itr != vector.end(); ++itr)
{
	function(*itr);
}

But it's just so much easier to type and read:

for (const int x : vector)
{
	function(x);
}

Isn't it? I would never ever want to work in a C++ without range-based for loops.
That's pretty much how most new features are for me. My definition of a drastically new feature is one that allows me to do something I had already been doing, but way simpler. And C++, starting from 11, has constantly been delivering that. I would say the same thing about other languages; for example, my favourite features of C# are interpolated strings and the null-conditional operator. Not because they allow me to do things I couldn't do before, but because they make my life a lot easier.

Juliean said:
for (std::vector<int>::iterator itr = vector.begin(); itr != vector.end(); ++itr) { function(*itr); }

But that's really a bad example, because STL container iterators are much more cumbersome than the same code in C:

for (int *ptr = array; ptr < array+size; ptr++)
{
	function (*ptr);
}

Not so much more code than your modern version.

The pattern that has affected my coding the most from adopting modern features is lambdas combined with templates to replace cumbersome callbacks:

#if 0 // slow:
	void BlockParticlesAdjacencyIterator (const int block, 
		const float interactRadius, const float delta_x, const float cellMaxRad, 
		std::function<bool (const int pI)> OpenOuterLoop,
		std::function<void (const int aBegin, const int aEnd)> IterateInnerLoop,
		std::function<void (const int pI)> CloseOuterLoop
		) 
#else // fast:
	template <typename L0, typename L1, typename L2> 
	void BlockParticlesAdjacencyIterator (const int block, 
		const float interactRadius, const float delta_x, 
		L0 OpenOuterLoop, L1 IterateInnerLoop, L2 CloseOuterLoop) 
#endif
{
	// code here does all the things needed to iterate adjacent particles within a given radius
	// that's complicated - needs to traverse the tree to find adjacent blocks, cache results to minimize traversals, iterate particles per sub cell
	// so I really only want to write this iteration code once
}

An example using this:

		auto OpenOuterLoop = [&](const int pI)
		{
			// here we may do setup stuff, like initializing variables to zero where we accumulate stuff from neighbors to a current particle
		};

		auto IterateInnerLoop = [&](const int aBegin, const int aEnd) 
		{
			for (int apI = aBegin; apI < aEnd; apI++) //if (apI != pI) // iterate adjacent particles of a given block
			{
				Particle &ap = particles[apI];
							
				// do some interaction of particles, accumulate, etc...

			}
		};
		
		auto CloseOuterLoop = [&](const int pI)
		{
			// here we may write results to memory, or do other closing stuff on the current particle
		};

		BlockParticlesAdjacencyIterator (block, interactRadius, delta_x, 
			OpenOuterLoop, IterateInnerLoop, CloseOuterLoop);

Before modern C++, I had to write iterator objects to do this. This was often harder; e.g., when traversing a tree, maintaining all the state in such an object and modelling all potential interactions with user code was MUCH more work than writing the tree traversal in place.

Now I can do it the other way around: no more iterator object, but writing little lambdas to handle interactions and decisions. I really like this and use it often.

If there were no lambdas, I could create small structs or classes within my user function. But that's only possible since C++11 too, so we can just use lambdas.
The traditional alternatives would cause many callbacks and objects at global scope, cluttering the code base, although we need the stuff inside of just one function.
That's so far where modern C++ really shines for me, but I still lag behind and don't know about most of the features you guys mention.

The interesting detail is performance. You can see from the #if that I had tried std::function before, which works and is equally convenient.
But the cost was too big. Using templates, the results with Clang were the same performance as writing the iteration code inline. With MSVC this still has some extra cost, like 5% in my example of fluid simulation.

If you have some concerns, please let me know ; )
Templates will cause a larger executable, but so would copy-pasting the iteration code into many functions. So I'm very happy with this pattern.

