I'm currently going through the *Analyze Financial Data with Python* course, and I noticed a small difference between my code for an offline project and the Codecademy (CC) code that made a pretty big difference in the end result.

The focus of the project (*Reggie’s Linear Regression*) is running a linear regression on the *datapoints* list. Here’s my code:
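Roughly, it's the standard grid search over candidate slopes and intercepts. Since I can't paste the exact file here, this is a minimal self-contained sketch — the `datapoints` values and helper names are the course's usual setup as I remember them, not a verbatim copy:

```python
# Minimal sketch of the grid search (datapoints assumed from the course setup).
datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]

def get_y(m, b, x):
    return m * x + b

def calculate_error(m, b, point):
    x_point, y_point = point
    return abs(get_y(m, b, x_point) - y_point)

def calculate_all_error(m, b, points):
    return sum(calculate_error(m, b, p) for p in points)

# The comprehensions in question -- I divide i by 10:
possible_ms = [i / 10 for i in range(-100, 101)]
possible_bs = [i / 10 for i in range(-200, 201)]

smallest_error = float("inf")
best_m = best_b = 0
for m in possible_ms:
    for b in possible_bs:
        error = calculate_all_error(m, b, datapoints)
        if error < smallest_error:
            best_m, best_b, smallest_error = m, b, error

print(best_m, best_b, smallest_error)
```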

This produces a best intercept of 1.6, a best slope of 0.4, and a smallest error of 5.

The only difference in the CC code is in the list comprehensions: where I divided *i* by 10, they multiplied it by 0.1.

That change produced a best intercept of 1.7, a best slope of 0.3, and a smallest error of 4.999… (a long run of repeating 9s).
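As far as I can tell, `i / 10` and `i * 0.1` don't always round to the same float, so the two candidate grids aren't actually identical. A quick standalone check (nothing here is from the course code):

```python
# Values of i where dividing by 10 and multiplying by 0.1 disagree.
mismatches = [i for i in range(-100, 101) if i / 10 != i * 0.1]

print(bool(mismatches))   # True -- the two grids are not identical
print(3 / 10, 3 * 0.1)    # 0.3 0.30000000000000004
```

So both versions sweep *almost* the same candidate values, but some of them differ in the last bit, which is enough to nudge the error sums and flip which pair wins.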

So… why does this happen, and how should I deal with it in the future?

As a side note, if you uncomment my little debugging block, you’ll see that 1.7 and 0.3 actually produce an error of 5.00000000001 in my version.
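For what it's worth, if you redo the error sums in exact arithmetic with `fractions.Fraction` (again assuming the datapoints as I remember them from the course), both candidate lines come out exactly tied at an error of 5 — so which one "wins" comes down purely to float rounding noise in the grid values:

```python
from fractions import Fraction

# Assumed course data -- not copied from my actual file.
datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]

def exact_error(m, b):
    """Sum of absolute errors, computed with exact rational arithmetic."""
    return sum(abs(Fraction(y) - (m * Fraction(x) + b)) for x, y in datapoints)

print(exact_error(Fraction(4, 10), Fraction(16, 10)))  # 5  (my winner)
print(exact_error(Fraction(3, 10), Fraction(17, 10)))  # 5  (CC's winner)
```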