FAQ: Bag-of-Words Language Model - Spam A Lot No More

This community-built FAQ covers the “Spam A Lot No More” exercise from the lesson “Bag-of-Words Language Model”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Natural Language Processing

FAQs on the exercise Spam A Lot No More

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.


A problem I see with this exercise, and with the previous one, is that I don’t know what data I’m parsing or what the result is at each step. While working through it, I felt like I was blindly following simple instructions, and the final result meant nothing to me because I had no idea what data the code was actually processing.


Maybe, like @fierynvova, I’m missing something here. The module seems to stop at creating training and test vectors. As for the labels the Naive Bayes classifier needs in order to actually use everything we’ve built over the last several steps, we’re not given any information on what they are or how to create them. Or am I missing something?


@digital4879061526 If it’s any help, the data is referenced in the 11th lesson as

The spam data for this lesson were taken from the UCI Machine Learning Repository.

Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [archive.ics.uci.edu/ml/]. Irvine, CA: University of California, School of Information and Computer Science.

Maybe the labels come from emails we are sure are spam (say, from a known database) and emails we are sure are not – somewhere there was a human in the loop validating that, or creating synthetic data by writing “real” and “spammy” texts for the exercises. There’s a whole industry of annotators (Mechanical Turk and the like), and labeling huge datasets is often punted to some form of crowdsourcing; it’s a genuinely complex issue. (I’m a Data Engineer contractor currently working for Amazon AWS.)
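To make the missing piece concrete, here is a minimal sketch of how labels would plug into the pipeline. The documents and the `labels` list (1 = spam, 0 = ham) are made up for illustration – the exercise never shows its actual labels – but the mechanics of feeding vectors plus labels to a Naive Bayes classifier are standard scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical data: in the real exercise, labels would come with the
# UCI spam dataset; here they are invented for illustration.
training_documents = [
    "Win a free prize now!",        # spam
    "Meeting notes attached.",      # ham
    "Claim your reward today!",     # spam
    "Lunch at noon?",               # ham
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# Learn the vocabulary and build training vectors in one step.
vectorizer = CountVectorizer()
training_vectors = vectorizer.fit_transform(training_documents)

# Fit the classifier on the (vector, label) pairs.
classifier = MultinomialNB()
classifier.fit(training_vectors, labels)

# Transform new text with the SAME fitted vectorizer, then predict.
test_vectors = vectorizer.transform(["Free reward, claim now!"])
print(classifier.predict(test_vectors))  # → [1], i.e. spam
```

The key point is that the labels are a separate list, aligned index-by-index with the training documents; the vectorizer never sees them.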

from sklearn.feature_extraction.text import CountVectorizer

training_documents = ["Five fantastic fish flew off to find faraway functions.", "Maybe find another five fantastic fish?", "Find my fish with a function please!"]
test_text = ["Another five fish find another faraway fish."]

bow_vectorizer = CountVectorizer()
# Learn the vocabulary from the training documents first; calling
# transform() on an unfitted vectorizer raises NotFittedError.
bow_vectorizer.fit(training_documents)
bow_vector = bow_vectorizer.transform(test_text)
print(bow_vector.toarray())


[[2 0 1 1 2 1 0 0 0 0 0 0 0 0 0]]

In this example, in what order does `bow_vectorizer.fit(training_documents)` assign indices? That is, after `bow_vectorizer.fit(training_documents)` runs, what index position does each unique word from the training documents get?