FAQ: Web Scraping with Beautiful Soup - The BeautifulSoup Object

This community-built FAQ covers the “The BeautifulSoup Object” exercise from the lesson “Web Scraping with Beautiful Soup”.

Paths and Courses
This exercise can be found in the following Codecademy content:

FAQs on the exercise The BeautifulSoup Object

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.


Create a BeautifulSoup object out of the webpage content and call it soup. Use "html.parser" as the parser.
Print out soup! Look at how it contains all of the HTML of the page! We will learn how to traverse this content and find what we need in the next exercises.

Is this correct? And why do we parse on the last line with the Beautiful Soup object rather than on the webpage response?

import requests
from bs4 import BeautifulSoup

webpage_response = requests.get('https://s3.amazonaws.com/codecademy-content/courses/beautifulsoup/shellter.html')

webpage = webpage_response.content
soup = BeautifulSoup(webpage, "html.parser")


Because Beautiful Soup has special methods for parsing HTML that would take a lot of work to write ourselves. We hand the raw HTML (webpage_response.content) to the BeautifulSoup constructor, and the resulting soup instance carries all of those parsing and searching methods.
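To make the distinction concrete: webpage_response is a requests Response object, its .content attribute is the raw HTML bytes, and BeautifulSoup turns those bytes into a searchable tree. A minimal offline sketch (a literal byte string stands in for the fetched page, so the markup here is hypothetical, not the lesson's actual page):

```python
from bs4 import BeautifulSoup

# Stand-in for webpage_response.content (hypothetical markup).
webpage = b"<html><body><h1>Shellter</h1></body></html>"

# The raw bytes have no HTML-aware methods...
print(type(webpage))       # <class 'bytes'>

# ...but the parsed soup object does.
soup = BeautifulSoup(webpage, "html.parser")
print(soup.h1.get_text())  # Shellter
```

That is why the parser goes on the last line: parsing is something we do to the downloaded HTML, not to the response object itself.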

What is a parser? Why do we use “html.parser” instead of the other options mentioned?
I’m not really understanding what this does exactly

This program has just scraped a page from the web. Now we need to read it. That's where parsing begins: strip away all the HTML markup and get to the raw content. A parser can tell HTML tags apart from content text. Even though we are running a Python program, we are parsing HTML. Keep that in close sight.
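Incidentally, "html.parser" names Python's built-in parser module, which Beautiful Soup wraps. A bare-bones use of that module shows exactly what a parser does: it tells tags apart from the text between them.

```python
from html.parser import HTMLParser

# A tiny subclass that records which pieces of the input are tags
# and which are text between tags.
class TagAndTextLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("tag", tag))

    def handle_data(self, data):
        if data.strip():
            self.events.append(("text", data.strip()))

logger = TagAndTextLogger()
logger.feed("<p>Hello, <b>world</b>!</p>")
print(logger.events)
# [('tag', 'p'), ('text', 'Hello,'), ('tag', 'b'), ('text', 'world'), ('text', '!')]
```

Beautiful Soup builds on events like these to assemble the tree that soup lets you navigate.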


Thanks for explaining @mtf


Different parsers have different advantages and disadvantages. The Beautiful Soup documentation compares the built-in html.parser with lxml (faster, but needs installing) and html5lib (the most lenient — it parses the way a browser does, but is the slowest).


I am still not getting an error. Is this not the correct way? Where am I going wrong?
import requests

from bs4 import BeautifulSoup

webpage_response = requests.get("http://rainbow.com/rainbow.html", "html.parser")

webpage = webpage_response.content

soup = BeautifulSoup(webpage, "html.parser")

I don’t see any difference between parsing or not parsing; the result still has HTML tags.


Yeah, I noticed this too.

requests.get shouldn’t be given "html.parser". That argument belongs in the BeautifulSoup call instead.
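For reference, a corrected version of that snippet with the parser argument moved to BeautifulSoup where it belongs. The network fetch is commented out and a literal stands in for the downloaded bytes so this sketch runs offline (the markup shown is hypothetical, not the actual rainbow.com page):

```python
from bs4 import BeautifulSoup

# Corrected fetch: requests.get takes only the URL here. Its second
# positional parameter is query params, not a parser, which is why
# the original call didn't crash -- it just did nothing useful.
# webpage_response = requests.get("http://rainbow.com/rainbow.html")
# webpage = webpage_response.content

# Offline stand-in for the fetched bytes:
webpage = b"<html><body><h1>Rainbow</h1></body></html>"

# "html.parser" belongs here, as BeautifulSoup's second argument.
soup = BeautifulSoup(webpage, "html.parser")
print(soup.h1.string)  # Rainbow
```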

I’ve just noticed the difference between the following two lines:

1: soup = BeautifulSoup(webpage.content, "html.parser")

2: soup = BeautifulSoup(webpage.content)

The first one passes two arguments to BeautifulSoup, while the second passes only one. Nevertheless, the results turn out the same. So, when can we omit "html.parser"?

This is what I did and it worked for me, only that the webpage you used isn’t what’s required.
I think you should use this instead:

Yeah, I don’t see what the difference was before and after using Beautiful Soup. It still looks like a bunch of HTML with no discernible difference.
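That's expected: printing soup shows the same markup, because parsing doesn't change the text, it changes the type. The payoff is the methods the parsed object gains. A quick sketch with a literal HTML string in place of a fetched page:

```python
from bs4 import BeautifulSoup

html = "<html><body><p>First</p><p>Second</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

# Printing the soup reproduces the markup...
print(str(soup) == html)  # True for this simple snippet

# ...but soup is a navigable tree, so we can search it.
print([p.get_text() for p in soup.find_all("p")])  # ['First', 'Second']
```

The upcoming exercises use exactly those navigation methods, which raw HTML text doesn't have.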