# Binary Vs Decimal Floating Point

Good Evening All,

I’m learning Python for the first time and have been looking into binary floating-point limitations. I realise binary floating point was around long before decimal floating point was introduced.

As a mathematician I tremble at the thought of using binary floating point, with its possible (albeit small) inaccuracies. From some reading I have been doing, it seems like decimal floating point is far more accurate.

My question is: is there any reason these days why you would use binary over decimal, apart from speed, of course?

Apologies if this is a silly question and thanks for the help in advance!

More accurate? Aren’t they saying the same thing?
Hardware-wise it’s easier to tell if a signal is on / off than to match it to ten different levels. Your computer does not have instructions for operations on decimal floating point values.
If you’re using floats, it’s because you want a fast approximation. If that’s not what you want, then you do not want to use floats at all.
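As a rough illustration of the speed point, here is a sketch using the standard `timeit` module; the absolute numbers depend entirely on the machine, so treat them as illustrative only:

```python
import timeit

# Time a million float multiplications against a million Decimal
# multiplications of the same values.
float_time = timeit.timeit(
    "x * y", setup="x, y = 0.1, 0.2", number=1_000_000
)
decimal_time = timeit.timeit(
    "x * y",
    setup="from decimal import Decimal; x, y = Decimal('0.1'), Decimal('0.2')",
    number=1_000_000,
)
print(f"float:   {float_time:.3f}s")
print(f"Decimal: {decimal_time:.3f}s")
```

On typical CPython builds the `Decimal` version is noticeably slower, since binary floats map to hardware instructions while decimal arithmetic is done in software.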

You cannot represent infinitely many numbers using finite space.
If you use 1000 bytes of information, then no matter what base you use, you get the same fixed number of possible values, which you can then spread out over your range.

From what I believe, decimal FP can represent certain decimals (such as 0.1) exactly, rather than as the estimate that binary FP would use in this particular example.
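This difference is easy to see in Python with the built-in `decimal` module; a small sketch:

```python
from decimal import Decimal

# A binary float stores the nearest representable binary fraction to 0.1:
print(f"{0.1:.20f}")   # prints 0.10000000000000000555, not 0.1 exactly

# A decimal float stores 0.1 exactly:
print(Decimal("0.1"))  # prints 0.1

# The rounding error surfaces in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```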

Thank you for your reply, though. I appreciate your point that, for certain decimals that never end, we will never be able to store them completely accurately due to space limitations…

NONE of them are accurate because you cannot tell whether it’s that value or one that is very close to it.

You can reason about the operations performed and establish that all of them were exact. But that’s just using a float as an inconvenient integer. Use integers.
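The usual form of the “use integers” advice is to count in the smallest unit you care about. This is my own illustration, not something from the thread, using money in cents:

```python
# Keep money in integer cents so every addition is exact.
price_cents = 19_99                     # $19.99
tax_cents = price_cents * 8 // 100      # 8% tax, truncated to whole cents
total_cents = price_cents + tax_cents

print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $21.58
```

Every value here is an exact integer, so there is no rounding error to reason about; the only rounding is the deliberate truncation in the tax line.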

It doesn’t matter if you use decimal or binary, they spread out equally thin. You just end up with the representable values in different locations. You could switch places on your use of decimal and binary and say the same things.

I’m sorry, but I have to disagree with you when you say none are accurate. The accuracy depends on the number you need to store and the number of bits of precision available. 0.1, for example, can never be stored exactly in base 2, whereas it can be stored exactly in base 10.

The point I’m trying to make is: if you wish to deal with numbers that are not integers, what would be the best trade-off? I appreciate that with decimals that go on for a long time (or never end at all), both decimal and binary floating point will be estimates rather than exact. However, I believe that for all other decimals (where the number of digits is less than the number of digits of precision available), decimal floating point has a better chance of storing the value exactly than binary. For binary to be exact, the denominator of the rational form of a decimal must be a power of 2 (such as 3/16), whereas there is no such limitation for decimal floating point (just that the number of digits is less than the specified precision).
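The power-of-2 condition can be checked in Python with the `fractions` module, which can recover the exact rational value a float actually stores. A sketch, with `exact_in_binary` being a helper name of my own invention:

```python
from fractions import Fraction

# Fraction(some_float) shows the exact binary value the float holds.
print(Fraction(3 / 16))   # 3/16: the denominator is a power of 2, stored exactly
print(Fraction(0.1))      # a huge fraction with denominator 2**55, not 1/10

def exact_in_binary(frac: Fraction) -> bool:
    """True if the reduced denominator is a power of 2."""
    d = frac.denominator
    return d & (d - 1) == 0

print(exact_in_binary(Fraction(3, 16)))  # True
print(exact_in_binary(Fraction(1, 10)))  # False
```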

You have exactly the same limitation, just in a different base, and arranging for your intermediate values to specifically have nice representations in one base is really, really weird, and sounds a whole lot like you could be using integers to the same effect.

You don’t know the accuracy, that information is not included.

You either are okay with approximating, or you’d use an exact representation

Only if you’re very carefully choosing that value, but you could choose one for the binary representation as well. So if you instead pick a value at random, then no, it’s not more likely, because they have the same number of representable values, unless you’re allowing more storage for one than the other, but that’s not exactly fair, is it?

And why base 10, anyway? Shouldn’t you take the opportunity to multiply that by a couple more primes? You went from 2 to 2×5; what about 3, 7, 11, 13, 17, 19…? Where do you stop? There’s always more, and you’re still stuck with the same amount of information, meaning you’re no more likely to be able to represent a random value. And even if you are, you can’t tell that apart from something that wasn’t exactly represented, because you don’t know whether or not it was an exact match.
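If exactness for denominators with arbitrary prime factors is what you’re after, the exact representation Python already offers is the `fractions` module; a sketch:

```python
from fractions import Fraction

# Rational arithmetic is exact whatever the denominator's prime factors.
total = Fraction(1, 3) + Fraction(1, 7)
print(total)   # 10/21, exact

# Neither binary nor decimal floating point can store 1/3 exactly:
print(1 / 3)   # 0.3333333333333333, an approximation
```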

I was making the silly mistake of assuming that I knew what the hypothetical values were. Thus, as you said, I cannot be sure whether the stored number would be the exact value or merely an approximation. You’re right, of course, that theoretically there are the same number of values that would be “nice” for binary representation as there would be for decimal representation.

The reason for comparing binary and decimal, though, is that, as far as I’m aware, those are the two most common representations used in programming!

Thanks for the help.

This topic was automatically closed 18 hours after the last reply. New replies are no longer allowed.