This gets into CPU architecture and how floating-point numbers are stored in your computer.

Here is a look at how the decimals 0.1 through 0.9 are represented in 32 bits:

```
0.1 => "0 01111011 10011001100110011001101"
0.2 => "0 01111100 10011001100110011001101"
0.3 => "0 01111101 00110011001100110011010"
0.4 => "0 01111101 10011001100110011001101"
0.5 => "0 01111110 00000000000000000000000"
0.6 => "0 01111110 00110011001100110011010"
0.7 => "0 01111110 01100110011001100110011"
0.8 => "0 01111110 10011001100110011001101"
0.9 => "0 01111110 11001100110011001100110"
```
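If you want to generate a table like this yourself, here's a minimal Python sketch using the standard `struct` module (the `float_to_bits` helper name is just for illustration):

```python
import struct

def float_to_bits(x):
    # Pack x as a big-endian 32-bit IEEE 754 single, then unpack
    # the same 4 bytes as an unsigned integer to get the raw bits.
    [n] = struct.unpack(">I", struct.pack(">f", x))
    bits = f"{n:032b}"
    # Split into sign (1 bit) | exponent (8 bits) | mantissa (23 bits)
    return f"{bits[0]} {bits[1:9]} {bits[9:]}"

for i in range(1, 10):
    print(f"0.{i} => {float_to_bits(i / 10)}")
```

Running this reproduces the table above, rounding and all.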

The first bit is the sign bit: 0 for positive, 1 for negative. The next 8 bits are the exponent, and the last 23 bits are the mantissa (the significant digits).

Notice a few things about the last 23 bits. Do you see how clean 0.5 looks?

That's because it's easy for a CPU to precisely store powers of 2; representing a half, a quarter, or an eighth is very easy to do in binary.
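You can see the difference in Python (any language with IEEE 754 floats behaves the same): sums of powers of two compare exactly, while tenths do not.

```python
# Dyadic fractions (sums of powers of two) are stored exactly,
# so the comparison holds with no rounding error:
print(0.5 + 0.25 + 0.125 == 0.875)  # True

# Tenths are not exactly representable, so this famously fails:
print(0.1 + 0.2 == 0.3)             # False
```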

In our human decimal system it's easy to represent 1/10th, 1/100th, etc. It's not so easy to represent 1/3, though: 0.3333333333 goes on forever.

The computer struggles to precisely store numbers like 0.1.

Do you notice a pattern recurring in a lot of the numbers? `00110011001100110011`

That is the computer trying to represent a tenth. Just like 0.33333 repeats forever for us, the CPU would have to repeat its sequence forever to represent 0.1 exactly.
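You can watch that repeating sequence appear by doing binary long division on 1/10. A small Python sketch, using exact `fractions` arithmetic so the expansion itself isn't subject to rounding:

```python
from fractions import Fraction

# Binary long division: repeatedly double the remainder and
# emit a 1 whenever it reaches one, just like decimal long division.
x = Fraction(1, 10)
digits = []
for _ in range(24):
    x *= 2
    digits.append(int(x >= 1))
    if x >= 1:
        x -= 1

print("0." + "".join(map(str, digits)))  # 0.000110011001100110011001
```

The `0011` block repeats forever; the CPU has to cut it off after 23 mantissa bits.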

Unfortunately, the computer only has so much room to store that sequence, so some precision is lost once the CPU runs out of bits.
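A quick Python sketch of that precision loss: round-trip 0.1 through a 32-bit float and print the value that actually got stored.

```python
import struct

# Pack 0.1 into 4 bytes (32-bit single), then unpack it again.
# The value that comes back is the nearest representable single,
# not exactly one tenth.
stored = struct.unpack(">f", struct.pack(">f", 0.1))[0]
print(f"{stored:.20f}")  # 0.10000000149011611938
```

Those trailing digits are the rounding error from cutting the repeating sequence off at 23 bits.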

Also note how numbers related by a factor of two (0.1, 0.2, 0.4, and 0.8, and likewise 0.3 and 0.6) have identical last-23-bit patterns; only the exponent changes. Doubling a binary number just shifts it, the same way multiplying by 10 shifts a decimal number.

Some more food for thought on rounding, floats, and bits:

3.1444444444 is an approximation of pi with 10 digits of precision, but it is certainly not an accurate one. Just because you use more decimal places (bits) doesn't make your number more accurate.
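As a quick sanity check in Python, comparing both approximations against `math.pi` shows that plain 3.14, with far fewer digits, is actually closer:

```python
import math

# More digits do not imply more accuracy:
print(abs(3.1444444444 - math.pi))  # error ~0.00285
print(abs(3.14 - math.pi))          # error ~0.00159, closer with fewer digits
```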