[Question] Project: Reggie's Linear Regression (Apply Python lists & loops)

The project link:
https://www.codecademy.com/paths/data-science/tracks/dspath-python-unit-project/modules/dspath-brute-force-lr/informationals/pwp-linear-regression

The Question:
At “Part 2: Try a bunch of slopes and intercepts!”, in cell In [8], I define
possible_ms = [m / 10 for m in range(-100, 101)]
possible_bs = [b / 10 for b in range(-200, 201)]
instead of
possible_ms = [m * 0.1 for m in range(-100, 101)]
possible_bs = [b * 0.1 for b in range(-200, 201)]

and as a result, best_m, best_b, and smallest_error
become 0.4, 1.6, 5.0 instead of 0.3, 1.7, 5.0.

I am new to coding; could anyone help explain this difference? Is it related to the data type? As I understand it, when you divide by an int the result is automatically converted to a float, so what is the difference between /10 and *0.1?

This has to do with computer hardware and architecture.
The pydocs linked explain this in detail.
Floating point precision
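
For example, here's a quick demo in plain Python (nothing project-specific). Dividing by the exact integer 10 is rounded once to the nearest representable float, while multiplying by 0.1 starts from a stored value that is already slightly off 0.1, so the two expressions can land on floats that differ in the last binary digit:

m = 3
print(m / 10)             # 0.3
print(m * 0.1)            # 0.30000000000000004
print(m / 10 == m * 0.1)  # False

# So the two list comprehensions build lists whose entries are not all
# identical, which is why the brute-force loop can settle on a different
# (but equally good) slope and intercept.
possible_ms_div = [m / 10 for m in range(-100, 101)]
possible_ms_mul = [m * 0.1 for m in range(-100, 101)]
print(possible_ms_div == possible_ms_mul)  # False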

Floats are approximations, so you're not supposed to treat tiny differences as meaningful. Either make your comparisons in a way that accounts for that inexactness (compare within a tolerance), or, if you need exact values, don't use float at all and instead use a representation that can exactly represent what you're working with (for example decimal.Decimal or fractions.Fraction).
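
For instance, the standard library has a tolerance-aware comparison built in; a minimal sketch:

import math

a = 3 / 10   # 0.3
b = 3 * 0.1  # 0.30000000000000004

print(a == b)              # False: exact comparison is too strict for floats
print(math.isclose(a, b))  # True: equal within the default relative tolerance of 1e-09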

In this case you don't need the values to be exact, but you do need to think about what you're doing when you interpret the result. You may be forgetting to check whether each answer is still a best fit, and instead only noticing that the numbers are different, which is the wrong thing to look at: smallest_error is 5.0 in both of your runs, so both (m, b) pairs fit the data equally well, and the tiny float differences only change which pair the search happens to keep.
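
If you want to check that directly, here is a minimal sketch. I'm assuming the datapoints list used in the project and a total-absolute-error function like the one you wrote in Part 1; swap in your own names and data if they differ:

datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]

def calculate_all_error(m, b, points):
    # Sum of the vertical distances |y - (m*x + b)| over every point.
    return sum(abs(y - (m * x + b)) for x, y in points)

print(calculate_all_error(0.3, 1.7, datapoints))  # 5.0 (or within float error of it)
print(calculate_all_error(0.4, 1.6, datapoints))  # 5.0 (or within float error of it)

Both candidate lines produce the same total error, so neither answer is wrong; the grid search just breaks the tie differently depending on the tiny float differences in the candidate slopes and intercepts.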