FAQ: Linear Regression - Put it Together II

Community FAQs on Codecademy Exercises

This community-built FAQ covers the “Put it Together II” exercise from the lesson “Linear Regression”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science

Machine Learning

FAQs on the exercise Put it Together II

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head here.

Looking for motivation to keep learning? Join our wider discussions.

Learn more about how to use this guide.

Found a bug? Report it!

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

In this section of code on exercise 10, I don’t understand what the for loop is doing:

# Your gradient_descent function here:
def gradient_descent(x, y, learning_rate, num_iterations):
    b = 0
    m = 0
    for i in range(num_iterations):
        b, m = step_gradient(b, m, x, y, learning_rate)
    return [b, m]

If I take out the for loop, the line no longer fits the data as well, but I don’t see what changes from one iteration to the next inside the step_gradient function. Could someone explain this to me?
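
For anyone puzzled by the same thing: step_gradient returns a slightly improved b and m each time it is called, and the loop feeds those improved values straight back in. Here is a minimal, self-contained sketch (made-up data, and a step_gradient written from the usual mean-squared-error formulas, which may differ in detail from the lesson’s version) that prints the intermediate values:

def step_gradient(b, m, x, y, learning_rate):
    # gradients of the mean squared error with respect to b and m
    n = len(x)
    b_gradient = -(2 / n) * sum(y[i] - (m * x[i] + b) for i in range(n))
    m_gradient = -(2 / n) * sum(x[i] * (y[i] - (m * x[i] + b)) for i in range(n))
    # step each parameter a small distance against its gradient
    return b - learning_rate * b_gradient, m - learning_rate * m_gradient

x = [1, 2, 3, 4]
y = [3, 5, 7, 9]                        # exactly y = 2x + 1
b, m = 0, 0
for i in range(5):
    b, m = step_gradient(b, m, x, y, 0.05)
    print(i, round(b, 4), round(m, 4))  # b and m creep toward 1 and 2

Every print shows new numbers; that per-iteration movement is exactly what the loop contributes. Without it, step_gradient runs once and the line barely moves away from b = 0, m = 0.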


In the step_gradient function definition, why do we define b_gradient and m_gradient again?

Unfortunately step 2 will not let me proceed, giving this error: “Does gradient_descent take 4 parameters?”


I actually had the same issue on a different step. You see, I had just the following simple code:

def gradient_descent(x, y, learning_rate, num_iterations):
  b = 0
  m = 0
  return b, m

And this passed the second step. But later, when I implemented the correct solution (take my word for it; I don’t know whether I’m allowed to just post my solution on the forum), I got stuck on step 4 with the same “Does gradient_descent take 4 parameters?” error. That made no sense to me, since I had made it to step 4 and at no point did I change the number of parameters for that function.

I had to use the “View Full Solution” option just to proceed, which really sucked because their solution was virtually identical to mine (one tiny difference being that their solution used return [b, m] while mine used return b, m, both of which are valid ways to return two values from a function as far as I know).

I filed a bug report for this, but it is frustrating that there isn’t a way to view the normal console and the localhost page at the same time; the console might have given a more informative answer if there was a real error or a failed test case, rather than the bogus question Codecademy produced. Failing that, Step 4 could add a hint showing the exact syntax Codecademy needs before it lets you proceed to Step 5.

EDIT
I was horribly mistaken. After trying again, and still being confused, I realized I must have been on auto-pilot. In my solution’s for loop, I defaulted to my usual style and wrote for x in range(num_iterations). That obviously cannot work, since x is the parameter that holds the x-axis data, and the loop variable shadows it.

I still think the inability to view the console, and the misleading error message “Does gradient_descent() take 4 parameters?”, are issues that deserve some kind of resolution. But I was very wrong about my code being correct. Apologies.
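
For anyone who hits the same wall, here is a sketch of the shadowing pitfall described above (assuming a step_gradient like the ones earlier in this thread):

def gradient_descent(x, y, learning_rate, num_iterations):
    b = 0
    m = 0
    for x in range(num_iterations):  # BUG: the loop variable x shadows the x-data parameter
        # step_gradient now receives 0, 1, 2, ... instead of the x values
        b, m = step_gradient(b, m, x, y, learning_rate)
    return [b, m]

Renaming the loop variable to i (or anything that isn’t a parameter name) fixes it.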

I also only hit the issue at step 4, but it turned out the mistake was that I had written len() instead of range(). The correct solution is:

def gradient_descent(x, y, learning_rate, num_iterations):
    b = 0
    m = 0
    for i in range(num_iterations):
        b, m = step_gradient(b, m, x, y, learning_rate)
    return [b, m]
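
The distinction matters because num_iterations is a plain integer: range(num_iterations) produces the sequence 0, 1, …, num_iterations - 1, while len() only measures the length of an existing sequence. A quick illustration of both:

num_iterations = 100
for i in range(num_iterations):  # iterates i = 0, 1, ..., 99
    pass
for i in len(num_iterations):    # TypeError: object of type 'int' has no len()
    pass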

Other than by setting a maximum number of iterations, how else can we guard our computer against a gradient descent loop spiralling off into divergence?
As I understand it, to ensure convergence we need each successive parameter increment to be smaller than the last (since the increment is determined by the gradient of the loss function, and that tends to 0 as the loss is minimised).
So we could perhaps include an if-statement like this at the end of the step_gradient function, comparing absolute values since gradients can be negative:

if abs(get_gradient_at_b(b)) < abs(get_gradient_at_b(b_current)):
    return b
else:
    return "Divergence alert!"

(the full condition would also include m, but I’m keeping the example short)
This would then feed into gradient_descent via the call to step_gradient in the for loop.

Hope this makes sense…
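
One hedged sketch of that idea, moved into gradient_descent itself so step_gradient keeps a single return type (this assumes a step_gradient like the ones earlier in this thread; the tolerance value is an arbitrary choice):

def gradient_descent(x, y, learning_rate, num_iterations, tolerance=1e-6):
    b, m = 0.0, 0.0
    last_step = float("inf")
    for i in range(num_iterations):
        new_b, new_m = step_gradient(b, m, x, y, learning_rate)
        step = abs(new_b - b) + abs(new_m - m)
        if step > last_step:
            # updates are growing, so the learning rate is too large and we are diverging
            raise RuntimeError("Divergence alert! Try a smaller learning_rate.")
        b, m = new_b, new_m
        if step < tolerance:
            break                # updates are negligible, so stop early
        last_step = step
    return [b, m]

Raising an exception (or breaking out) keeps the return value as [b, m] in every case, whereas returning the string "Divergence alert!" would trip up any caller expecting two numbers.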

@alexanderloklindt thanks for your tip. I was having the same problem because I had replaced the y parameter with learning_rate in the step_gradient function. The instructions were a little unclear to me: I took them to mean that the last argument should be replaced with learning_rate, when in fact learning_rate needs to be added as a new last argument.
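
In other words, it comes down to the function signature; something along these lines (the parameter names here are illustrative and may differ slightly from the lesson’s):

# misreading of the instructions (y replaced by learning_rate):
# def step_gradient(b_current, m_current, x, learning_rate): ...
# what the exercise actually wants (learning_rate appended after y):
def step_gradient(b_current, m_current, x, y, learning_rate):
    ...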

The whole point of this course is to learn Python and some very important finance topics; I get it.
But most of the things I have learned in this course so far can be done in Excel in a much simpler way. My intention is not to offend anybody; Python is a very powerful tool if you know how to use it, which is why I took the course.
Anyway, I ran the regression in Excel and in the Python installation on my PC, and I got slightly different results:
Python: b = 49.60215 and m = 10.463427
Excel: b = 50.2272 and m = 10.388
Did anyone else get these answers, or is it just me?
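
For what it is worth, a gap like that is plausible: Excel’s trendline/LINEST solves least squares exactly in closed form, while gradient descent with a fixed learning rate and iteration count only approximates that answer, typically landing a little short on the intercept. A sketch with hypothetical data shows the effect:

import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1.0, 13.0)                      # hypothetical monthly data
y = 10.4 * x + 50 + rng.normal(0, 5, size=12)

m_exact, b_exact = np.polyfit(x, y, 1)        # closed form, what Excel computes
print("exact:  ", round(b_exact, 4), round(m_exact, 4))

b, m = 0.0, 0.0
n = len(x)
for _ in range(1000):                         # gradient descent approximation
    b_gradient = -(2 / n) * np.sum(y - (m * x + b))
    m_gradient = -(2 / n) * np.sum(x * (y - (m * x + b)))
    b -= 0.01 * b_gradient
    m -= 0.01 * m_gradient
print("descent:", round(b, 4), round(m, 4))   # close to, but a little short of, exact

Increasing the iteration count (or tuning the learning rate) shrinks the gap, so neither tool is wrong; they just stop at different levels of precision.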

This part is very confusing. I don’t understand what the for loop is for, and I am also very confused by m and b. We define variables with those same names in several places, but they don’t seem to be the same thing to me. Am I the only one who followed the instructions and finished this but still doesn’t know what is happening at each step?


Yes, same here. Gradient refers to slope, which is also what m is. So it is unclear why we are finding the gradient at m.

Isn’t that redundant?
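
One way to untangle the two uses of “slope”: m is the slope of the line in the x-y plane, while the gradient the lesson computes is the slope of the loss surface, i.e. how much the mean squared error changes when you nudge b or m. A small numerical sketch of the distinction (made-up data; loss here is a hypothetical helper, not a lesson function):

def loss(b, m, x, y):
    # mean squared error of the line y = m*x + b on the data
    return sum((yi - (m * xi + b)) ** 2 for xi, yi in zip(x, y)) / len(x)

x, y = [1, 2, 3], [2, 4, 6]  # data lying exactly on y = 2x
m, b = 1.0, 0.0              # m is the slope of the *line*
eps = 1e-6
# numerical gradient at m: the slope of the *loss curve* as m varies
grad_at_m = (loss(b, m + eps, x, y) - loss(b, m, x, y)) / eps
print(grad_at_m)             # about -9.33: increasing m would reduce the loss

So get_gradient_at_m is not redundant: it measures how the fit improves as m changes, which is a different slope from m itself.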