FAQ: Bag-of-Words Language Model - Building a BoW Vector

This community-built FAQ covers the “Building a BoW Vector” exercise from the lesson “Bag-of-Words Language Model”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Natural Language Processing

FAQs on the exercise Building a BoW Vector

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.


Can someone explain why including the zero index [0] in the function call below results in the returned tokens not being printed?
print(text_to_bow_vector(text, features_dictionary)[0])

I think it’s because the function returns both the BoW vector and the tokens as a tuple, and since we only want the BoW vector (which is at index 0), we add [0].
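
Here is a minimal runnable sketch of why that happens. The function name matches the exercise, but the body and the simplified `preprocess_text` helper are my guesses, not the lesson’s actual code:

```python
def preprocess_text(text):
    # Simplified stand-in for the lesson's tokenizing/lemmatizing step.
    return text.lower().split()

def text_to_bow_vector(some_text, features_dictionary):
    # One slot per feature, all starting at zero.
    bow_vector = [0] * len(features_dictionary)
    tokens = preprocess_text(some_text)
    for token in tokens:
        if token in features_dictionary:
            feature_index = features_dictionary[token]
            bow_vector[feature_index] += 1
    # Two values after `return` are packed into a single tuple:
    # (bow_vector, tokens)
    return bow_vector, tokens

features_dictionary = {"five": 0, "fantastic": 1, "fish": 2}
text = "Five fantastic fish flew"

print(text_to_bow_vector(text, features_dictionary))
# ([1, 1, 1], ['five', 'fantastic', 'fish', 'flew'])
print(text_to_bow_vector(text, features_dictionary)[0])
# [1, 1, 1]
```

Without `[0]` the whole tuple prints, tokens included; `[0]` selects just the first element of that tuple, the BoW vector, so the tokens never reach `print`.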

Ok, so if I’m understanding this correctly:

  1. a features_dictionary is a dictionary mapping every unique word in a given training text to an index
    a. it is a bag_of_words without the word counts
  2. a bag_of_words_vector is the representation of a text’s bag_of_words in terms of that features dictionary
    a. so then is the bag_of_words_vector the same as a training_vector?
  3. a test_vector is the representation of a test text (which must be preprocessed and essentially turned into another bag_of_words) by that same features dictionary

Ok so if the above is correct, then is what I asked in 2a (that the bag_of_words_vector is the same as a training_vector) also correct?
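
To make that concrete, here is how I picture the dictionary being built. The function name mirrors the exercise, but the body and sample texts are my own sketch:

```python
def create_features_dictionary(documents):
    # Give each unique word across the training text(s) the next free index.
    features_dictionary = {}
    index = 0
    for document in documents:
        for token in document.lower().split():  # simplified preprocessing
            if token not in features_dictionary:
                features_dictionary[token] = index
                index += 1
    return features_dictionary

training_documents = ["five fantastic fish", "fantastic fish flew"]
print(create_features_dictionary(training_documents))
# {'five': 0, 'fantastic': 1, 'fish': 2, 'flew': 3}
```

If that holds, a training_vector and a test_vector are the same kind of object, a BoW vector; the only difference is which text each one is built from.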

I’m doing an exercise right now that has a features dictionary containing words not found in a training text. Is it not true that a features dictionary will always account for every unique word in a training text?

(if you can’t tell, my brain is swimming here!)
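
A quick experiment with a made-up dictionary shows what happens in both mismatch directions:

```python
features_dictionary = {"five": 0, "fantastic": 1, "fish": 2, "submarine": 3}
test_tokens = "five fish flew".lower().split()

bow_vector = [0] * len(features_dictionary)
for token in test_tokens:
    if token in features_dictionary:  # "flew" fails this check and is skipped
        bow_vector[features_dictionary[token]] += 1

# "submarine" never appears in the text, so its slot simply stays 0.
print(bow_vector)  # [1, 0, 1, 0]
```

So a dictionary built from a training text does cover every unique training word, but nothing breaks if it also contains extra words (their counts just stay 0), as long as the vector-building loop checks membership as above.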

Why do we add tokens to the return at the end of the function? I know it has an impact, but what kind of impact, and why does it help shape the vector that is sent back?
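
The tokens do not shape the vector at all; they are returned alongside it purely so the caller can inspect how the text was preprocessed (useful for debugging, and, I assume, for the exercise’s tests). A minimal sketch, with simplified preprocessing and made-up inputs:

```python
def text_to_bow_vector(some_text, features_dictionary):
    bow_vector = [0] * len(features_dictionary)
    tokens = some_text.lower().split()  # simplified stand-in for preprocess_text
    for token in tokens:
        if token in features_dictionary:
            bow_vector[features_dictionary[token]] += 1
    return bow_vector, tokens  # tokens come along only for inspection

vector, tokens = text_to_bow_vector("fish fish flew", {"fish": 0, "flew": 1})
print(tokens)  # ['fish', 'fish', 'flew'] -> lets you verify the preprocessing
print(vector)  # [2, 1] -> identical whether or not tokens are returned
```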