Integer math on 8-bit architecture, itoa()

Started by dpadam450
8 comments, last by Nagle 2 years, 5 months ago

Long shot here, but I'm working with some microcontrollers, and the 8-bit ones are ½ the price or less. However, one thing I need to do is output a score to a 7-segment LED display (a character array of length 8). So if I can treat 4 bytes on an 8-bit architecture as a single integer and perform an itoa() by converting it to a char[] array, then I can do this. I haven't thought much about how to do it, but I have a feeling it will be very complicated. I only need to store values up to 99,999,999.

Obviously on a full 32-bit/64-bit architecture we just take the integer, perform some modulus (%) operations, and count how many 1s, 10s, 100s, 1000s, etc. it contains. But on 8-bit I can't just run a modulus on 32-bit numbers when the architecture operates on 8-bit values.
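
For reference, a minimal sketch of what I mean on a full-width target (names and signature are mine; assumes the value fits in a uint32_t, i.e. at most 99,999,999):

#include <stdint.h>

/* Peel off decimal digits with % and /, filling out[] most significant first. */
void score_to_chars(uint32_t value, char out[8])
{
    for (int i = 7; i >= 0; --i)
    {
        out[i] = (char)('0' + (value % 10u));
        value /= 10u;
    }
}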

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal


Some 8-bit CPUs have support for extended arithmetic (e.g. from 8 to 16 bits) IIRC; others don't.
Often there were tooling functions in ROM to help with this as well.

So I guess you should tell us more about your system and what programming language you use.

How do you figure that? Many popular 8-bit chips have toolchains that support C++, including 32-bit and 64-bit data types. Quite a few toolchains even offer software floating-point math libraries, so you can read float and double values from a hardware device and process them on the 8-bit chip in C++ without worry.

What is the microcontroller? What is the toolchain to program it? Have you checked your assumptions against the documentation?

The language specification is independent of the underlying hardware. Even on modern desktop PCs, some operations are directly supported by a single instruction while others require a longer chain of CPU operations. When dealing with microcontrollers or RISC processors, many of the C++ operations take a few more CPU instructions, but they do the work with no observable side effects, which is exactly what the language requires.

It's been a long time since I programmed 8-bit CPUs.

The CPUs that I used had a carry bit for overflow on, e.g., addition, and that carry bit is taken into account on the next addition. So adding is simply: clear carry, add bytes 0, add bytes 1, add bytes 2, add bytes 3, with some load and store operations to fetch the data and store the results.

This also works for subtraction AFAIK, with the carry acting as a borrow, but I forgot the details.
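
A minimal sketch of that carry chain in portable C++ (names are mine; byte 0 is the least significant byte):

#include <stdint.h>

/* a += b across 4 bytes, mimicking clear-carry followed by add-with-carry. */
uint8_t add32(uint8_t a[4], const uint8_t b[4])
{
    uint8_t carry = 0;
    for (int i = 0; i < 4; ++i)
    {
        uint16_t sum = (uint16_t)a[i] + b[i] + carry; /* widen so the carry bit survives */
        a[i] = (uint8_t)sum;
        carry = (uint8_t)(sum >> 8);
    }
    return carry; /* nonzero means the full 32-bit result overflowed */
}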

div and mod are an option (and it's fun to work out how to do them), but it's much simpler to write a loop that repeatedly subtracts 10,000,000, then repeatedly subtracts 1,000,000, and so on, until you're left with a value < 10. If that's not fast enough, you can also try subtracting 50,000,000 first for the highest digit.
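
As a sketch (written with a plain uint32_t for readability; on the real chip each compare and subtract would itself be a multi-byte operation like the add above):

#include <stdint.h>

/* Convert value (at most 99,999,999) to 8 ASCII digits by repeated subtraction. */
void itoa8(uint32_t value, char out[8])
{
    static const uint32_t pow10[8] = {
        10000000u, 1000000u, 100000u, 10000u, 1000u, 100u, 10u, 1u
    };
    for (int i = 0; i < 8; ++i)
    {
        char digit = '0';
        while (value >= pow10[i]) /* repeatedly subtract this power of ten */
        {
            value -= pow10[i];
            ++digit;
        }
        out[i] = digit;
    }
}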

Another direction to consider is BCD (binary coded decimal) mode. Basically, one 8-bit byte holds 2 decimal digits. You can add and subtract as normal, and since the value is already stored as decimal digits, conversion to LED segments is fairly trivial :p
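
Even without a hardware BCD mode you can emulate it; a rough sketch of adding two packed-BCD bytes (two digits per byte, names mine):

#include <stdint.h>

/* Returns a + b + carry in packed BCD; carry is updated for chaining to the next byte. */
uint8_t bcd_add(uint8_t a, uint8_t b, uint8_t& carry)
{
    uint8_t lo = (uint8_t)((a & 0x0F) + (b & 0x0F) + carry);
    if (lo > 9) lo = (uint8_t)(lo + 6);                /* decimal-adjust the low digit */
    uint8_t hi = (uint8_t)((a >> 4) + (b >> 4) + (lo >> 4));
    carry = (uint8_t)(hi > 9 ? 1 : 0);
    if (hi > 9) hi = (uint8_t)(hi + 6);                /* decimal-adjust the high digit */
    return (uint8_t)(((hi & 0x0F) << 4) | (lo & 0x0F));
}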

I'm using MPLAB/Microchip with the XC compiler. Maybe I'll have to check the XC website and see if they have an internal library for this.

but it's much simpler to write a loop that repeatedly subtracts 10,000,000

Makes sense. That loop would kind of suck, though, if your number were 9000. You would subtract 1000 nine times to realize the digit is 9, but that's not just 9 subtractions; it's 9×4 8-bit subtractions plus whatever logic merges the carries across the other bytes. 9999 would be 9 subtractions for each digit, each spanning every byte. That's a lot of processing.



NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

I've seen 8-bit embedded systems do even more complex math than just modulus. There are plenty of encryption libraries for those systems, as they're used in door scanners, for example. The trick is to not think of an integer as a 32-bit value but as an array of whatever the largest type is that your architecture supports. Then you can write your own “long integer” math and handle 32-bit, 64-bit, 128-bit, or even 2048-bit numbers as what they are: just numbers.

This is an example from our crypto lib, working on integers of any size, for elliptic-curve data signing. You could easily adapt this to work with your embedded system:

#include <stdint.h>

typedef uint64_t uint64; /* limb type; on an 8-bit target you might use uint8_t limbs instead */
typedef uint8_t uint8;

/* a = b + c over `size` limbs, least-significant limb first; returns the final carry. */
template<int size> inline uint64 Add(uint64 a[size], uint64 b[size], uint64 c[size])
{
    uint64 carry = 0;
    for(uint8 i = 0; i < size; i++)
    {
        uint64 tmp = (b[i] + c[i] + carry);
        /* The sum wrapped iff tmp < b[i]; when tmp == b[i] the carry state is unchanged. */
        if(tmp != b[i]) carry = ((tmp < b[i]) ? 1 : 0);
        a[i] = tmp;
    }
    return carry;
}
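
A hypothetical call, treating four 64-bit limbs as one 256-bit number:

uint64 a[4];                       /* result */
uint64 b[4] = {1, 2, 3, 4};        /* least-significant limb first */
uint64 c[4] = {5, 6, 7, 8};
uint64 overflow = Add<4>(a, b, c); /* nonzero if the 256-bit sum wrapped */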

dpadam450 said:
That loop would kind of suck, though, if your number were 9000. You would subtract 1000 nine times to realize the digit is 9

For more speed, try subtracting larger values. [EDIT: For the 4th digit, counting from the back:] First try 5000, then try 2000, then try 1000, and done (see the sketch at the end of this post).

But yeah, simple to implement doesn't mean it will be the fastest thing on the planet. Depending on your needs and on the speed this achieves, that may not be necessary, though.

If you aim for geeky hacking rather than a quick solution, then by all means write a custom div/mod-10 function on a 4-byte array. It's going to be a lot of fun making it work.
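
One possible reading of that 5/2/1 idea as code (my interpretation; the 2× step is tried twice here, since 5 + 2 + 1 alone only reaches a digit of 8):

#include <stdint.h>

/* Extract one decimal digit of value for the power of ten p, using at most
   four compare-and-subtract steps instead of up to nine subtractions. */
char digit_521(uint32_t& value, uint32_t p)
{
    char d = '0';
    if (value >= 5 * p) { value -= 5 * p; d += 5; }
    if (value >= 2 * p) { value -= 2 * p; d += 2; }
    if (value >= 2 * p) { value -= 2 * p; d += 2; }
    if (value >= p)     { value -= p;     d += 1; }
    return d;
}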

dpadam450 said:
MPLAB/Microchip with XC compiler

I've checked their documentation; look up “Integer Data Types”. They support 1-, 2-, 4-, and 8-byte (8-bit, 16-bit, 32-bit, and 64-bit) types on all the 8-bit chips, and on some chips they also support a 3-byte / 24-bit int.

Just use the standard types int8_t, int16_t, int24_t (when supported), int32_t, and int64_t for integers of those specific widths, or the “uint” variety if you need unsigned. They also support the language-standard types int_least#_t, for types of at least that many bits, and int_fast#_t, for the fastest internal storage with at least that many bits. These have been part of the language standard for quite a long time: supported in many compilers before 1999, when they were added to the official C standard, and later incorporated into C++.
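
For example, plain 32-bit arithmetic in source is all you need; the compiler expands it into 8-bit operations for you (assuming the toolchain's <stdint.h>; the names are mine):

#include <stdint.h>

uint32_t score = 99999999u;                  /* the stated maximum fits in 32 bits */
uint8_t last_digit = (uint8_t)(score % 10u); /* compiler emits the multi-byte div/mod */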

Rely on the compiler: its authors have already written the code to handle all those details for you. Yes, it takes a little more processing to work with data types larger than the CPU's default register size, but you don't need to reinvent the wheel. That's the entire point of using higher-level languages rather than writing everything per-machine in assembly language or bytecode.

Even the low-end PIC CPUs have 16-bit arithmetic. The 8051 is about the lowest-end CPU still manufactured, and it really does have an 8-bit CPU. But nobody uses those for new designs; that part came out in 1980.

This topic is closed to new replies.
