FAQ: Text Preprocessing - Part-of-Speech Tagging

This community-built FAQ covers the “Part-of-Speech Tagging” exercise from the lesson “Text Preprocessing”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Natural Language Processing

FAQs on the exercise Part-of-Speech Tagging

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in Language Help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to Language Help and Tips and Resources. If you are wanting feedback or inspiration for a project, check out Projects.

Looking for motivation to keep learning? Join our wider discussions in Community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in Bug Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

I’m having trouble understanding this line of code:

most_likely_part_of_speech = pos_counts.most_common(1)[0][0]

I know it’s supposed to set most_likely_part_of_speech to the most frequently occurring part of speech among the synonyms of a word, but I don’t know exactly what the argument 1 is for or what it returns. I’m also not sure what [0][0] is accessing. Any help would be appreciated!

1 Like

Probably a little late but here’s what I think after some research:

First, .most_common() is a method of Counter, which is a subclass of dict for counting hashable elements. A Counter can be built from lists, strings, tuples, or dicts; in this case, the Counter was built in Step 3, so it maps part-of-speech tags to counts.
The argument (1) tells .most_common() to return the top 1 “most common” items, as a list of (element, count) tuples – so here it returns the single most frequent pos (it would actually also work if you put 2 here, I’ll explain). The first [0] selects the first tuple in that list – even though there is only one, it still needs to be selected (which is why passing 2 in the parentheses also works; we only ever index the first result). The second [0] then selects the first item inside that tuple, which is the element itself – the pos tag, like 'n' or 'v'. If you instead wanted the count of the most frequent pos, you would use [0][1].

The lesson was a little vague on this one and I hope this helps!
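To make the indexing concrete, here is a small standalone demo (the Counter below is a hand-built stand-in, not the exercise’s actual pos_counts, which is filled from wordnet synsets):

```python
from collections import Counter

# Stand-in for the exercise's pos_counts: tag -> how often it appeared
pos_counts = Counter({"n": 4, "v": 2, "a": 1})

print(pos_counts.most_common(1))        # [('n', 4)] -- a list of (element, count) tuples
print(pos_counts.most_common(1)[0])     # ('n', 4)   -- the first tuple in that list
print(pos_counts.most_common(1)[0][0])  # 'n'        -- the element (the pos tag)
print(pos_counts.most_common(1)[0][1])  # 4          -- its count
```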

3 Likes

I was just wondering: I get that “n” means a noun and “v” is a verb, but what do “r” and “a” mean, and are there any others like them?

4 Likes

I too have the same problem.

I have absolutely no clue what this question wants me to do. The wording is all over the place. Where am I writing code (inside the function or outside)? What text is being processed for lemmatization?

Navigate to the file script.py. At the top of the file, we imported get_part_of_speech for you. Use get_part_of_speech() to improve your lemmatizer.
Under the line with the lemmatized variable, use the get_part_of_speech() function in a list comprehension to lemmatize all the words in tokenized_string. Save the result to a new variable called lemmatized_pos.

1 Like
# Under the line with the lemmatized variable,
# use the get_part_of_speech() function in a list comprehension to lemmatize all
# the words in tokenized_string.
# Save the result to a new variable called lemmatized_pos.
lemmatized_pos = [lemmatizer.lemmatize(
    token, get_part_of_speech(token)) for token in tokenized_string]

First, thanks for clarifying that .most_common() is a method of Counter! Got a bit confused about that. On the .most_common() method, I do understand that (1) returns the most common item from the counter, but I don’t get the two [0]s after that. I guess I’ll just print everything (added inside the function get_part_of_speech()):

print(pos_counts.most_common(1))
print(pos_counts.most_common(2))
print(pos_counts.most_common(1)[0])
print(pos_counts.most_common(1)[1])
print(pos_counts.most_common(1)[0][1])
print(pos_counts.most_common(1)[0][0])

First outcome, an error on the 4th print: if we only ask for the top item with most_common(1), the returned list has a single element, so indexing [1] is out of range. That is why print(pos_counts.most_common(1)[1]) throws an IndexError.

[('n', 1)]
[('n', 1), ('v', 0)]
('n', 1)
Traceback (most recent call last):
  File "script.py", line 10, in <module>
    lemmatized_pos = [lemmatizer.lemmatize(token, get_part_of_speech(token)) for token in tokenized_string]
  File "script.py", line 10, in <listcomp>
    lemmatized_pos = [lemmatizer.lemmatize(token, get_part_of_speech(token)) for token in tokenized_string]
  File "/home/ccuser/workspace/text-preprocessing-part-of-speech-tagging/part_of_speech.py", line 19, in get_part_of_speech
    print(pos_counts.most_common(1)[1])
IndexError: list index out of range

If we fix that:

[('n', 1)]
[('n', 1), ('v', 0)]
('n', 1)
('v', 0)
1
n
[('v', 13)]
[('v', 13), ('n', 1)]
('v', 13)
('n', 1)
13
v
[('v', 3)]
[('v', 3), ('n', 0)]
('v', 3)
('n', 0)
3
v
[('n', 3)]
[('n', 3), ('r', 1)]
('n', 3)
('r', 1)
3
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 1)]
[('n', 1), ('v', 0)]
('n', 1)
('v', 0)
1
n
[('v', 6)]
[('v', 6), ('n', 0)]
('v', 6)
('n', 0)
6
v
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('r', 3)]
[('r', 3), ('a', 2)]
('r', 3)
('a', 2)
3
r
[('v', 2)]
[('v', 2), ('n', 0)]
('v', 2)
('n', 0)
2
v
[('n', 2)]
[('n', 2), ('v', 0)]
('n', 2)
('v', 0)
2
n
[('n', 3)]
[('n', 3), ('r', 1)]
('n', 3)
('r', 1)
3
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 8)]
[('n', 8), ('v', 0)]
('n', 8)
('v', 0)
8
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 3)]
[('n', 3), ('v', 0)]
('n', 3)
('v', 0)
3
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('r', 5)]
[('r', 5), ('n', 1)]
('r', 5)
('n', 1)
5
r
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
[('n', 2)]
[('n', 2), ('v', 0)]
('n', 2)
('v', 0)
2
n
[('n', 4)]
[('n', 4), ('v', 2)]
('n', 4)
('v', 2)
4
n
[('n', 0)]
[('n', 0), ('v', 0)]
('n', 0)
('v', 0)
0
n
The lemmatized words are: ['Indonesia', 'be', 'found', 'in', '1945', '.', 'It', 'contain', 'the', 'most', 'populate', 'island', 'in', 'the', 'world', ',', 'Java', ',', 'with', 'over', '140', 'million', 'people', '.']

Hope this clarifies things for everyone! :)

1 Like

For folks who are unsure about the usage of things like Counter, it’s always best to check the docs.


Notably, the return from .most_common() is a list of tuples of the form [(element, count), ...]. @lucasvinzon’s print statements above show how the return is organised for this particular problem; the first index is the index into the list and the second is the index into the tuple.

Accessing [0][0] from the return of this method provides you with the most common element in the Counter, [0][1] would be the count of this element, and something like [1][0] would be the second most common element (assuming more than one was returned; see the docs/answers above for how to return multiple elements and their counts).

3 Likes

So… what does ‘r’ mean? n is noun, v is verb, and I guess a is adjective. Not sure what ‘r’ would be. It feels like an oversight not to include that in the lesson, and I’m a bit annoyed about it to be honest.

1 Like

If anybody finds this confusing like I did: pos_counts.most_common(1) is going to give you the most common part of speech and its count, from the Counter that was created in the previous step (pos_counts) when you did pos_counts['n'] etc. Then, when you include the [0] after the (1), you’re accessing the first (and only) key-value pair that you requested with the (1). The [0] after the first [0] is accessing the key, which in this case corresponds to a part of speech (n, v, a, or r). That is how you get the most common part of speech.

This bit of the lesson was very confusing.
In the end, when I pressed “give me the solution”, the task was marked as “completed” and the “next” button turned yellow, but absolutely nothing changed on screen in script.py. Weird…

Hi, @ruby4594022995, @abhinandangodara2225, @dripdroobul91,

I couldn’t find any documentation on what “n”, “v”, “a”, and “r” mean, but I found the following lines in the wordnet source code:

# { Part-of-speech constants
ADJ, ADJ_SAT, ADV, NOUN, VERB = "a", "s", "r", "n", "v"
# }

So it seems “r” means adverb and “a” means adjective (and “s” is a satellite adjective, a subtype of adjective).
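Putting those tags together with the .most_common() discussion above, here is a minimal sketch of how a helper like get_part_of_speech() might work. The synsets_pos() stub below is hypothetical, standing in for nltk’s wordnet lookups; the real helper would collect synset.pos() for each synset of the word.

```python
from collections import Counter

def synsets_pos(word):
    # Hypothetical stub in place of nltk wordnet lookups: returns the
    # pos tag of each "synset" of the word ("n", "v", "a", "s", "r").
    stub = {"run": ["v", "v", "v", "n"], "quickly": ["r", "r"], "happy": ["a"]}
    return stub.get(word, [])

def get_part_of_speech(word):
    # Tally how often each part-of-speech tag appears among the synsets.
    pos_counts = Counter(synsets_pos(word))
    if not pos_counts:
        return "n"  # fall back to noun when the word has no synsets
    # most_common(1) -> [(tag, count)]; [0][0] extracts the tag itself
    return pos_counts.most_common(1)[0][0]

print(get_part_of_speech("run"))      # v
print(get_part_of_speech("quickly"))  # r
print(get_part_of_speech("xyzzy"))    # n (the default)
```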

The following Stack Overflow question may also be helpful:

1 Like

(This is regarding the Getting Started with Natural Language Processing class.)

I am in the first lesson of section 2; I’m not having a problem online, but when I try to run the same code in PyCharm, this line gives me an error:

from part_of_speech import get_part_of_speech

It appears there is no module “part_of_speech”? I tried installing it through the IDE settings and also using pip.

Please advise.

1 Like