FAQ: K-Nearest Neighbor Regressor - Weighted Regression

This community-built FAQ covers the “Weighted Regression” exercise from the lesson “K-Nearest Neighbor Regressor”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Get Started with Machine Learning
Data Science

FAQs on the exercise Weighted Regression

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.


Why is the denominator the sum of 1/each distance?


I think it’s something like a weighted average, but inverted. Several webpages I checked call it “Inverse Distance Weighting”. I will use some pseudo-code to explain.

Imagine you’re doing a weighted average over a list of numbers and their counts. The weighted average equals:
numbers = [the list of numbers]
counts = [the list of corresponding counts]
weighted_average = sum([number * count for (number, count) in zip(numbers, counts)]) / sum(counts)

You could imagine sum(counts) as
total = 0
for count in counts:
    total += count

Now we want to do an inverse weighted average, which means we convert each count to 1/count. The larger the count, the smaller 1/count.

Thus, the denominator becomes 1/count_1 + 1/count_2 + 1/count_3 + ... + 1/count_n
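To make this concrete, here is a small runnable sketch of inverse distance weighting with made-up distances and ratings (the numbers are just an illustration, not from the exercise):

```python
# Hypothetical distances and ratings for the 3 nearest neighbors
distances = [2.0, 4.0, 8.0]
ratings = [7.0, 8.0, 9.0]

# Each rating is weighted by 1/distance, so closer neighbors count more.
numerator = sum(rating / distance for rating, distance in zip(ratings, distances))
denominator = sum(1 / distance for distance in distances)

weighted_rating = numerator / denominator
print(weighted_rating)
```

Dividing by the sum of the 1/distance weights is what normalizes the result back into the range of the neighbors’ ratings, just like dividing by sum(counts) in an ordinary weighted average.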

That’s how I understand the inverse weighted average! Hope it helps. :grinning:


Thanks! The key idea is to make the weights smaller when the distance is greater, so that’s why we take the inverse of each distance as the weight for each rating. Great explanation!


In exercise 1 we had a score of 6.86. With the weighted regressor we got a score of 6.849139678439045. The IMDb rating for Incredibles 2 is 7.8, so the weighted algorithm actually performed slightly worse than the original.

Could someone explain why? Thank you.