I ran the above code and was surprised by the result. When I divide by 10, the result is what I expect, with just one decimal place for every number. However, when I multiply by 0.1, some results have many decimal places and are mathematically incorrect.

Can someone help explain this? The error seems to only happen with some numbers, e.g., 3, 6, and 7. Thank you for your help.
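(The original code isn't shown; a minimal loop that reproduces the symptom described, under that assumption, might look like:)

```python
# Compare dividing by 10 against multiplying by 0.1 for the same integers
for i in range(1, 11):
    print(i / 10, i * 0.1)

# For most i the two agree, but a few differ:
print(3 / 10)    # 0.3
print(3 * 0.1)   # 0.30000000000000004
```

Note that 3, 6, and 7 are exactly the inputs where `i * 0.1` picks up extra digits, matching the behavior described.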

Floats are approximations. If you consider them different, then you are doing exact comparisons, and that does not make sense between two approximations.

If you want exact operations, then use an exact representation, for example Fraction from the fractions module (which is obviously slower, which is why float is a thing).
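For example, a quick sketch with Fraction, which keeps everything as exact rationals so no rounding happens at any step:

```python
from fractions import Fraction

# One tenth, stored exactly as the rational number 1/10
tenth = Fraction(1, 10)

print(3 * tenth)                      # exactly 3/10
print(3 * tenth == Fraction(3, 10))   # True: no rounding error anywhere
```

The trade-off is speed: every operation manipulates a numerator/denominator pair instead of a single hardware float.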

The first bit is a sign bit: 0 for positive, 1 for negative.
Notice some things about the last 23 bits. Do you see how clean 0.5 looks?
That's because it's easy for a CPU to precisely store powers of 2; representing a half, a quarter, or an eighth is very easy to do in binary.
In our human decimal system it's easy to represent 1/10th, 1/100th, etc. It's not so easy to represent 1/3, though: 0.3333333333 goes on forever.
The computer struggles to precisely store numbers like 0.1.
Do you notice a pattern occurring in a lot of the numbers? 00110011001100110011
That is the computer trying to represent a tenth. Just like 0.33333 repeats forever for us, the CPU would have to repeat its sequence forever to represent 0.1.

Unfortunately, the computer only has so much room to store that sequence, so some precision is lost once the CPU runs out of bits.

Also note how the sequences of the numbers that looked odd to you look very similar in the CPU; the numbers that didn't look odd to you also look similar to each other.
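You can see those bit patterns yourself. A small sketch using the standard struct module to view the 32-bit single-precision encoding (the 23-bit fraction described above):

```python
import struct

def bits32(x):
    # Pack a Python float into IEEE 754 single precision,
    # then format the raw 32 bits as a binary string
    [n] = struct.unpack(">I", struct.pack(">f", x))
    return f"{n:032b}"

print(bits32(0.5))  # fraction bits are all zero: a clean power of two
print(bits32(0.1))  # fraction bits show the repeating 1100 pattern, cut off
```

0.5 ends in 23 zero bits, while 0.1 shows the repeating pattern truncated at the end of the fraction field.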

Some more food for thought on rounding/floats/bits
3.1444444444 is an approximation of pi to 10 decimal places of precision, but it is certainly not an accurate one. Just because you use more decimal places (bits) doesn't make your number more accurate.
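A quick check of that point, comparing both approximations against math.pi: the short, "low precision" 3.14 is actually closer to pi than the long 3.1444444444.

```python
import math

# Precision is not accuracy: the value with more digits has the larger error
print(abs(3.1444444444 - math.pi))  # roughly 2.9e-3
print(abs(3.14 - math.pi))          # roughly 1.6e-3, despite far fewer digits
```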