FAQ: Text Preprocessing - Tokenization

This community-built FAQ covers the “Tokenization” exercise from the lesson “Text Preprocessing”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Natural Language Processing

FAQs on the exercise Tokenization

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

You can also find further discussion and get answers to your questions over in Language Help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to Language Help and Tips and Resources. If you want feedback or inspiration for a project, check out Projects.

Looking for motivation to keep learning? Join our wider discussions in Community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in Bug Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

https://www.codecademy.com/courses/natural-language-processing/lessons/text-preprocessing/exercises/tokenization

What is the difference between tokenizing at the “word level” versus the “sentence level”? In other words, when would I decide to tokenize at the sentence level instead of always tokenizing at the word level?
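At the word level, each word and punctuation mark becomes its own token; at the sentence level, each full sentence is one token. Here is a minimal sketch of the contrast (not from the exercise itself; it assumes NLTK’s punkt tokenizer data has been downloaded):

import nltk
from nltk.tokenize import word_tokenize, sent_tokenize

nltk.download('punkt')  # tokenizer models; only needed once

text = "ECGs record the heart's electrical conduction. They can reveal arrhythmias."

print(word_tokenize(text))  # word level: ['ECGs', 'record', 'the', ...]
print(sent_tokenize(text))  # sentence level: one string per sentence

Roughly speaking, word-level tokens suit tasks built on individual terms (word counts, part-of-speech tagging), while sentence-level tokens suit tasks where sentence boundaries matter (summarization, sentence-by-sentence sentiment).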

Why do we import only one function and not the whole library, as in the previous exercise with re.sub()?

from nltk.tokenize import word_tokenize
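Both styles work; importing just the function binds only the one name you need, so you can call it without the package prefix. A quick sketch of the two styles (a hypothetical snippet, not from the lesson, and it assumes nltk.download('punkt') has been run):

import nltk                                  # pull in the whole package...
tokens = nltk.word_tokenize("Hello there!")  # ...and qualify the call

from nltk.tokenize import word_tokenize      # or import just one function...
tokens = word_tokenize("Hello there!")       # ...and call it by its bare name

The earlier exercise imported the whole re module and wrote re.sub(...); here the lesson grabs only word_tokenize so it can be called directly. It is mostly a style choice about namespace clarity, not performance.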

Why is one token wrapped in double quotes (") and the other in single quotes (')?

Sentence Tokenization:
["An electrocardiogram is used to record the electrical conduction through a person's heart.", 'The readings can be used to diagnose cardiac arrhythmias.']
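The mixed quoting comes from Python’s printed representation of each string, not from NLTK. Python wraps strings in single quotes by default, but switches to double quotes when the string itself contains a single quote (the apostrophe in person's). A tiny demonstration, reusing the output above:

sentences = [
    "An electrocardiogram is used to record the electrical conduction through a person's heart.",
    'The readings can be used to diagnose cardiac arrhythmias.',
]

print(sentences)  # the first string prints in double quotes because of its apostrophe

Both entries are ordinary str objects; only the printed representation differs.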