FAQ: Logistic Regression - Logistic Regression

This community-built FAQ covers the “Logistic Regression” exercise from the lesson “Logistic Regression”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Machine Learning

FAQs on the exercise Logistic Regression

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.


EDIT: I found the answer to my own question below in slide 9/11 of this lesson. It does exactly what I described in my last paragraph. This lesson doesn’t explicitly show us gradient descent being performed on the log loss because the concept is already covered in the Linear Regression lesson.

If the feature coefficient is 0 and you multiply it by the feature value, wouldn’t that just give you 0? That is in fact what a later part of the lesson uses to show the worst possible log loss.
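A quick sanity check on that point: with every coefficient (and the intercept) set to 0, the log-odds is 0 for every sample, the sigmoid maps that to a probability of 0.5, and the log loss comes out to ln 2 ≈ 0.693 regardless of the true label. A minimal sketch in plain Python (helper names like `log_loss` are mine, not the lesson's exact code):

```python
import math

def sigmoid(z):
    # Map a log-odds value z to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# With all coefficients and the intercept at 0, the log-odds
# z = b0 + b1*x1 + ... is 0 no matter what the feature values are,
# so the model predicts probability 0.5 for every sample.
p = sigmoid(0.0)
print(p)  # 0.5

def log_loss(y, p):
    # Log loss for one sample with true label y and predicted probability p
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# At p = 0.5 the loss is -log(0.5) ≈ 0.693 whether y is 0 or 1:
# the model is maximally uncertain about every prediction.
print(log_loss(1, p))  # ≈ 0.6931
print(log_loss(0, p))  # ≈ 0.6931
```

So zero coefficients don't produce a log loss of 0; they produce a prediction of 0.5 everywhere, which is the "know nothing" starting point that gradient descent then improves on.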

Or are we supposed to start with the worst possible log loss and then iterate, each pass with newly adjusted coefficients producing a smaller log loss, again and again until the log loss is minimized, thereby finally giving us the best coefficients?
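That iterative picture is essentially what gradient descent does. Here is a minimal, self-contained sketch for a single feature; the data, variable names, and hyperparameters (`X`, `y`, `learning_rate`, the iteration count) are made up for illustration and are not the exercise's own:

```python
import numpy as np

# Hypothetical toy data: one feature, binary labels
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0, 0, 0, 1, 1])

b0, b1 = 0.0, 0.0       # start from zero coefficients (log loss = ln 2)
learning_rate = 0.1

for _ in range(1000):
    z = b0 + b1 * X
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid: log-odds -> probability
    # Gradients of the mean log loss with respect to b0 and b1;
    # for the sigmoid + log loss combination these simplify to (p - y) terms
    grad_b0 = np.mean(p - y)
    grad_b1 = np.mean((p - y) * X)
    # Step the coefficients downhill on the loss surface
    b0 -= learning_rate * grad_b0
    b1 -= learning_rate * grad_b1

print(b0, b1)  # coefficients that give a much smaller log loss than the start
```

Each pass computes predictions with the current coefficients, measures how wrong they are, and nudges the coefficients in the direction that shrinks the log loss, exactly the "again and again until minimized" loop described above.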

In the context of this exercise, what are the correct values for the question: “What is the lowest possible probability that can be predicted, and what is the highest possible probability that can be predicted? Enter your answer in the variables lowest and highest, respectively.”
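For reference, the sigmoid maps any real log-odds into the open interval (0, 1): its output gets arbitrarily close to 0 and 1 but never reaches them, so those two limiting bounds are presumably what `lowest` and `highest` expect. A quick numerical check:

```python
import math

def sigmoid(z):
    # Map a log-odds value z to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# As z grows very negative or very positive, the predicted
# probability approaches 0 or 1 but never actually reaches them
print(sigmoid(-20))  # very close to 0
print(sigmoid(20))   # very close to 1

# The limiting bounds on any predicted probability
lowest = 0
highest = 1
```

Strictly speaking the sigmoid never outputs exactly 0 or 1 for a finite log-odds, but 0 and 1 are the bounds the predictions approach.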