The project link:

https://www.codecademy.com/paths/data-science/tracks/dspath-python-unit-project/modules/dspath-brute-force-lr/informationals/pwp-linear-regression

The Question:

In “Part 2: Try a bunch of slopes and intercepts!”, cell In [8],

I defined

`possible_ms = [m / 10 for m in range(-100, 101)]`

`possible_bs = [b / 10 for b in range(-200, 201)]`

instead of

`possible_ms = [m * 0.1 for m in range(-100, 101)]`

`possible_bs = [b * 0.1 for b in range(-200, 201)]`

and as a result, `best_m`, `best_b`, and `smallest_error` change from **0.3, 1.7, 5.0** to **0.4, 1.6, 5.0**.

I am new to coding, so could anyone explain this difference? Is it related to the data type? As I understand it, dividing an `int` with `/` automatically produces a `float`, so what is the difference between `/ 10` and `* 0.1`?
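As a quick check of my own (this snippet is not from the lesson), the two expressions do not always produce bit-identical floats, which I suspect is why the grid search can land on a different `(m, b)` pair even though `smallest_error` stays the same:

```python
# My own experiment: compare m / 10 against m * 0.1 for every m
# in the same range the lesson uses. Due to binary floating-point
# rounding, the two results are not always exactly equal.
mismatches = [m for m in range(-100, 101) if m / 10 != m * 0.1]

print("values of m where m / 10 != m * 0.1:", mismatches)
print(repr(3 / 10))   # exact decimal printed for comparison
print(repr(3 * 0.1))  # slightly different rounding than 3 / 10
```

On my machine this shows that, for example, `3 / 10` and `3 * 0.1` round to different nearest floats, so tiny differences like this could flip which candidate slope/intercept is recorded first when several pairs tie on the error.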