Linear Regression is supervised machine learning

Hi there,

I've been struggling a bit with the snippet below. For lines 33-38, why doesn't it return a list of [b, m] pairs? Instead it only gave one [b, m] at the 1000th iteration, even though every i is looped over in the function.

Also, why are b and m in the formula updated on each pass instead of being reset to b = 0 and m = 0 every time the loop runs? I thought that if it wanted to build the values up cumulatively, it would need something like b += and m +=?

Thanks,
Jane

Because each iteration of your for loop updates the values of your variables b and m, and you're returning them once, after the loop completes. If, for some reason, you needed each value of b and m calculated during the loop, you would need to track them somewhere, for example by appending them to a separate list and returning that.
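
For illustration, here's a minimal sketch of that idea, assuming your runner looks roughly like the usual tutorial version and calls your existing step_gradient(b, m, points, learning_rate). The names gradient_descent_runner, points, and history are placeholders, not taken from your snippet:

```python
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    history = []  # hypothetical list to collect every intermediate pair
    for i in range(num_iterations):
        b, m = step_gradient(b, m, points, learning_rate)
        history.append([b, m])  # record this iteration's [b, m]
    return history  # num_iterations pairs, instead of only the last one
```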

Same answer: your for loop updates the values of b and m on every pass. I presume your step_gradient function takes the current b and m, adjusts them based on your learning_rate, and returns the new values. So, when the adjusted values of b and m come back from step_gradient, the loop reassigns them to the variables.
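
For concreteness, here is a plausible sketch of what such a step_gradient might look like for ordinary least squares. This is an assumption about your code, not a copy of it; it computes the gradient of the mean squared error over the data points and returns the adjusted pair:

```python
def step_gradient(b, m, points, learning_rate):
    # assumed signature; points is taken to be an iterable of (x, y) pairs
    b_gradient = 0.0
    m_gradient = 0.0
    n = float(len(points))
    for x, y in points:
        error = y - (m * x + b)             # residual for this point
        b_gradient += -(2 / n) * error      # d(MSE)/db contribution
        m_gradient += -(2 / n) * x * error  # d(MSE)/dm contribution
    # step downhill; the caller rebinds b and m to the returned values
    new_b = b - learning_rate * b_gradient
    new_m = m - learning_rate * m_gradient
    return new_b, new_m
```

The key point is the reassignment in the loop, b, m = step_gradient(b, m, points, learning_rate): each call starts from the previous iteration's values, which is what makes the updates cumulative without any b += or m +=.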

If b = 0 and m = 0 were reset on each iteration of your loop, you'd effectively only ever be running step_gradient once, from the same starting point, and would never converge on a final result.
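
To make that concrete, a hypothetical broken variant of the loop:

```python
# hypothetical: what resetting on every iteration would do
for i in range(num_iterations):
    b, m = 0, 0  # throws away all progress so far
    b, m = step_gradient(b, m, points, learning_rate)
# b and m now hold the result of a single step from (0, 0),
# no matter how large num_iterations is
```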

If these are confusing you, you can always review the Python material on loops and functions. 🙂

Thanks a lot, it's been helpful. The knowledge gap was probably that I forgot that if we want a list of [b, m] pairs, we need to use something like list1.append. Since that isn't done here, it makes perfect sense that only one pair is returned.

No problem, I thought it was unlikely you'd need the full list of b and m values prior to the final calculated result. 🙂