FAQ: Naive Bayes Classifier - Smoothing

This community-built FAQ covers the “Smoothing” exercise from the lesson “Naive Bayes Classifier”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science

FAQs on the exercise Smoothing

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head here.

Looking for motivation to keep learning? Join our wider discussions.

Learn more about how to use this guide.

Found a bug? Report it!

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

I’m not able to understand why the smoothing equation provided in this exercise works… why do we add N to the denominator and 1 to the numerator?

1 Like
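(A minimal sketch of the intuition in Python, under the assumption that N is the vocabulary size; the counts below are made up, not the exercise’s actual numbers. Adding 1 to the numerator pretends every vocabulary word was seen at least once, so an unseen word no longer forces the whole product of probabilities to 0; adding N to the denominator is what keeps the N smoothed probabilities summing to 1.)

```python
# Sketch of Laplace (add-one) smoothing for P(word | class) in Naive Bayes.
# word_count and total_words_in_class would come from the training reviews;
# vocabulary_size (N) is the number of distinct words across all reviews.

def smoothed_probability(word_count, total_words_in_class, vocabulary_size):
    # Unsmoothed estimate: word_count / total_words_in_class, which is 0 for
    # any word never seen in this class and would zero out the whole product.
    # With +1 in the numerator and +N in the denominator, the N smoothed
    # probabilities still sum to 1: sum of (count + 1) = total + N.
    return (word_count + 1) / (total_words_in_class + vocabulary_size)

# Hypothetical numbers: "amazing" appears 2 times in positive reviews,
# positive reviews contain 10 words total, and the vocabulary has 6 words.
print(smoothed_probability(2, 10, 6))  # (2 + 1) / (10 + 6) = 0.1875
print(smoothed_probability(0, 10, 6))  # an unseen word still gets 1 / 16
```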

The wording is very confusing in this lesson. For P(review | positive) you state:
'if we assume that the review is positive, what is the probability that the words “This”, “crib”, “was”, and “amazing” are the only words in the review?'

How is this the probability that these are the only words in the review if those words literally are the review?
Isn’t this the probability that these are the only words in all reviews?
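(For reference, a minimal sketch of how that likelihood is typically computed in Naive Bayes: P(review | positive) is the product of the per-word probabilities P(word | positive), and those per-word probabilities are estimated from all positive training reviews, which is where “assume the review is positive” comes in. The probabilities below are hypothetical, not the lesson’s actual values.)

```python
# Sketch: P(review | positive) as a product of per-word probabilities.
# Each P(word | positive) would be estimated from ALL positive training
# reviews; the review being classified is just the words we multiply over.
# The probabilities below are made up for illustration.

p_word_given_positive = {
    "This": 0.10,
    "crib": 0.05,
    "was": 0.12,
    "amazing": 0.08,
}

review = ["This", "crib", "was", "amazing"]

p_review_given_positive = 1.0
for word in review:
    p_review_given_positive *= p_word_given_positive[word]

print(p_review_given_positive)  # 0.10 * 0.05 * 0.12 * 0.08 = 4.8e-05
```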