
C++ size_t for everything?


That's because of an implementation-defined difference.

On that particular implementation uint32_t is 32 bits, but size_t is 64 bits. The extra line in the assembly turns your 32-bit array index into a 64-bit one. Different platforms may need different manipulations for that sort of thing; on a system where both types are 32 bits, the disassembly would likely be identical.
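Here is a minimal sketch of the two indexing variants (the function names are just for illustration, and the exact instructions depend on your compiler and target, but on a typical 64-bit desktop the first version is the one that needs the extra widening step):

#include <cstddef>
#include <cstdint>

int load_u32(const int* data, std::uint32_t i)
{
    // On a 64-bit target the compiler typically has to zero-extend the
    // 32-bit index to pointer width before it can form the address.
    return data[i];
}

int load_size_t(const int* data, std::size_t i)
{
    // size_t already matches pointer width on that target, so no extra
    // widening instruction is needed.
    return data[i];
}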

But that isn't a flaw in the language. That is one of its greatest strengths.

 

It isn't just the sizes of things that are implementation defined; how access works in general is, too.

Consider how Arduino works, or how many other microcontrollers work. You may not have any experience with them, so now is a great time to learn. On Von Neumann architectures like the PC you can mix code and data freely in your program. On Harvard architectures, used by Arduino's chips and many other microcontrollers, code and data are kept in separate memories. If you want to read data that was stored alongside the code, there is a separate step to pull it across: instead of using a hard-coded array directly, it takes several additional instructions, as the sketch below shows.
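On AVR-based Arduinos that extra step usually looks something like this (a sketch assuming the avr-gcc toolchain and its <avr/pgmspace.h> facilities; the names table and read_entry are just for illustration, and other microcontroller toolchains have their own equivalents):

#include <avr/pgmspace.h>
#include <stdint.h>

// The table lives in program (flash) memory rather than in SRAM.
const uint8_t table[] PROGMEM = { 3, 14, 15, 92, 65 };

uint8_t read_entry(uint8_t i)
{
    // A plain table[i] would read from the data address space and give
    // garbage; pgm_read_byte performs the special program-memory access
    // (LPM on AVR) that fetches the byte from flash instead.
    return pgm_read_byte(&table[i]);
}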

These implementation-defined details are good for the language.

On the PC, following the Von Neumann architecture, you get more control over data and memory organization, but you pay a cost: parallel execution is harder in the hardware, the shared bus becomes a bottleneck, and self-modifying code becomes a danger. In fact, self-modifying code is something the modern PC had to protect itself against. While it enabled some interesting tricks like processor detection (which I used myself in the 80s and 90s), it was heavily abused by hackers.

On the flip side, many microcontrollers prefer the Harvard architecture. The chips have two separate memory paths, which makes them simpler to design. Program memory and data memory can have different costs, such as flash/EEPROM for one and SRAM/DRAM for the other. The fabrication cost is slightly higher and the die is a bit bigger, but microcontrollers are usually very inexpensive and generally not space constrained, so a slight increase to both isn't an issue.

 

The language leaves many details up to the implementation. The language committee is really smart, and they take input from every manufacturer who wants to talk with them. They don't say 'the PC does this, so that's what the standard should say'. Instead they consider every device from mainframes to microcontrollers, from tried-and-true mass-produced hardware to experimental computer models and academic theory. When there is disagreement about whether something should be specified, they generally leave it up to the implementation to decide.
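If your own code does happen to depend on one of those implementation choices, you can at least state the assumption explicitly so a port to a different platform fails loudly at compile time; a minimal sketch:

#include <climits>
#include <cstddef>

// Neither of these is guaranteed by the standard; each line documents a
// choice the implementation made that this particular code relies on.
static_assert(CHAR_BIT == 8, "code assumes 8-bit bytes");
static_assert(sizeof(std::size_t) == sizeof(void*),
              "code assumes size_t matches pointer width");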

 

There are compilers that turn the same C++ code into correct machine code for many different chips. Because the language leaves those details up to the implementation, you can use the same language to program your PC, your cell phone, your little toy robots, or your LED light strips.
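That is also why a plain size_t-indexed loop like the sketch below builds unchanged on all of them; size_t is simply whatever width the implementation needs it to be:

#include <cstddef>

// The same source builds whether size_t is 16, 32, or 64 bits wide.
long long sum(const int* data, std::size_t count)
{
    long long total = 0;
    for (std::size_t i = 0; i < count; ++i)
        total += data[i];
    return total;
}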

