FAQ: Bag-of-Words Language Model - BoW Ow

This community-built FAQ covers the “BoW Ow” exercise from the lesson “Bag-of-Words Language Model”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Natural Language Processing

FAQs on the exercise BoW Ow

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head here.

Looking for motivation to keep learning? Join our wider discussions.

Learn more about how to use this guide.

Found a bug? Report it!

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

This sentence should be revised:

“The probability of the following word is always just the most frequently used words.”

When I read it the first time, I thought Codecademy was indeed telling me that “the probability of the following word is always just the most frequently used words.”

Rather, Codecademy should have been telling me:

“The BoW model bases its prediction of future words on the frequency of past words.”

The word “probability” refers to an objective truth about the world, while the word “prediction” refers to something that can be faulty. Illustration: if someone were to say that the probability of flipping heads on a quarter is 20%, that would be a false statement about the probability, but it would still be their prediction.
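To make that concrete, here is a rough sketch in Python (the toy corpus and variable names are mine, not the exercise’s) of what “prediction of future words based on frequency of past words” amounts to for a unigram bag-of-words:

```python
from collections import Counter

# Toy training corpus (made up for illustration, not from the exercise).
training_text = "the dog chased the cat and the cat chased the dog"
counts = Counter(training_text.split())

# Relative frequencies stand in for the model's estimated probabilities.
total = sum(counts.values())
estimated_probs = {word: n / total for word, n in counts.items()}
print(estimated_probs)  # {'the': 0.3636..., 'chased': 0.1818..., ...}

# With no context to condition on, the single best guess for the next
# word is always just the most frequent training word.
print(counts.most_common(1)[0][0])  # 'the'
```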

Also, the word “perplexity” does not mean “it’s not a very accurate model for language prediction.” Perplexity might be related to inaccuracy in language prediction, but in ordinary English it means “to make unable to grasp something clearly or to think logically and decisively about something.”
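For reference, in NLP “perplexity” names a specific evaluation metric rather than the dictionary sense: it is the exponentiated average negative log-probability a model assigns to held-out text, so a model that is “more perplexed” by a test sentence is assigning that sentence lower probability. A minimal sketch, assuming a unigram model with add-one smoothing (the function name and corpora are made up for illustration):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, smoothing=1.0):
    """Perplexity of an add-one-smoothed unigram model on test_tokens."""
    counts = Counter(train_tokens)
    vocab_size = len(counts) + 1                  # +1 bucket for unseen words
    total = sum(counts.values())

    log_prob_sum = 0.0
    for token in test_tokens:
        prob = (counts[token] + smoothing) / (total + smoothing * vocab_size)
        log_prob_sum += math.log(prob)

    # exp of the negative average log-probability per token:
    # lower perplexity = the model finds the test text less surprising.
    return math.exp(-log_prob_sum / len(test_tokens))

train = "the dog chased the cat and the cat chased the dog".split()
print(unigram_perplexity(train, "the cat chased the dog".split()))
print(unigram_perplexity(train, "a smartphone rang loudly".split()))  # much higher
```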

Now, you do say why it’s a bad model for language prediction:

i) The true probability of future words is not based on the frequency of past words.

ii) It does not account for the immediate context of words (see the sketch after this list).

iii) New words and new synonyms in the test data are not trained for.
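To illustrate points ii) and iii) (again a rough sketch with made-up sentences, not the lesson’s data): two sentences with opposite meanings produce the identical bag of words, and a word never seen in training simply has a count of zero:

```python
from collections import Counter

# Point ii: word order (immediate context) is discarded, so sentences
# with opposite meanings look identical to a bag-of-words model.
a = Counter("the dog bit the man".split())
b = Counter("the man bit the dog".split())
print(a == b)  # True -- same bag, very different meaning

# Point iii: a word that never appeared in training has a count of zero,
# so an unsmoothed model assigns it probability 0.
train_counts = Counter("the dog chased the cat".split())
print(train_counts["smartphone"])  # 0
```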

I’m not sure what these three things have to do with the word “perplexity.” It perplexes me why this word is used.