FAQ: Random Forests - Bagging

This community-built FAQ covers the “Bagging” exercise from the lesson “Random Forests”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science

Machine Learning

FAQs on the exercise Bagging

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.


The way we implemented the random numbers here does not guarantee that the rows we select in one set will be unique. Does that matter at all?
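One way to see this concretely is to draw a single bootstrap sample and count how many rows come out unique. This is a minimal sketch using Python's built-in `random` module (the dataset and variable names are just for illustration, not the lesson's code):

```python
import random

random.seed(1)  # fix the seed so the count below is reproducible

n = 1000
rows = list(range(n))  # stand-in for 1000 row indices

# One bootstrap sample: n draws *with* replacement
sample = [random.choice(rows) for _ in range(n)]

unique = len(set(sample))
print(unique / n)  # roughly 0.63 — about a third of the rows are duplicates
```

So no, the rows are not unique within a sample — and that's by design. On average a bootstrap sample of size n contains about 63% of the distinct rows (the limit of 1 − (1 − 1/n)ⁿ), and that per-tree variation is exactly what makes the trees in the forest differ from one another.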

Why is it okay to use replacement? We’re creating artificial subsets, so to speak, and will (in all likelihood) be training the model on duplicated data points that don’t reflect the real data.

How should that improve our predictive power compared to just splitting the dataset without replacement?


I guess that if we simply split the data, we won’t have enough different trees in the forest. Correct me if I’m wrong.


From my understanding, sampling with replacement is necessary in order to create a ‘forest’ of any desired number of trees in which every tree is trained on a data set of the same size.

For example, say we have 1000 data points. If replacement isn’t used, and we consider a subset of 100 points to create each tree, then the next tree would be created from only 900 data points, and every subsequent tree would be created from a progressively smaller dataset.

This would not only limit the number of possible trees in the ‘forest’ but also degrade the validity of the model, because the ‘removed’ data points could be statistically relevant (perhaps even outliers) and would no longer be considered once an earlier tree had used them up.

Hope that helps clarify a bit! :slight_smile:
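The shrinking-pool scenario described above can be sketched in a few lines. This is a toy illustration of the two schemes, not the lesson's actual code:

```python
import random

random.seed(2)

n = 1000
data = list(range(n))  # stand-in for 1000 data points

# With replacement: every tree samples from the FULL dataset,
# so no tree "uses up" points for the next one — you can build
# as many equally sized bootstrap samples as you like.
trees = [[random.choice(data) for _ in range(n)] for _ in range(10)]
assert all(len(t) == n for t in trees)

# Without replacement (the shrinking-pool scenario): a pool of
# 1000 points carved into disjoint chunks of 100 is exhausted
# after 10 trees — no 11th tree is possible.
pool = list(range(n))
random.shuffle(pool)
chunks = [pool[i:i + 100] for i in range(0, n, 100)]
print(len(chunks))  # 10 — the forest is capped at 10 trees
```

Note this models the specific "points are removed for good" reading above; drawing a fresh without-replacement subsample from the full dataset for each tree is a different (and valid) scheme, as the next question raises.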


For a dataset with n points: Option 1 is selecting n points with replacement as the training data for each tree. Option 2 is randomly selecting a fraction of n (say 50%) without replacement, always starting the random selection from the entire dataset for each tree. Why is Option 1 preferable to Option 2? What problems does Option 2 incur? Any further reading on this? Thanks!
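For what it's worth, Option 2 is a legitimate technique in its own right — it is sometimes called "pasting", and scikit-learn's `BaggingClassifier` supports it via `bootstrap=False` with `max_samples` below 1.0. A quick sketch of how the two samples differ (toy data, names chosen just for illustration):

```python
import random

random.seed(0)

n = 1000
data = list(range(n))

# Option 1: bootstrap — n draws WITH replacement.
# Each sample is full-size but contains duplicates,
# and roughly 37% of the points are left out.
boot = [random.choice(data) for _ in range(n)]
print(len(set(boot)))   # ~632 unique points

# Option 2: subsample — n/2 draws WITHOUT replacement,
# restarting from the full dataset for each tree.
# Smaller sample, but every point in it is unique.
sub = random.sample(data, n // 2)
print(len(set(sub)))    # exactly 500 unique points, no duplicates
```

Both inject randomness across trees; a common argument for Option 1 is that each tree still sees a full-size training set, and the left-out (~37%) points give a free "out-of-bag" validation set per tree.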

Why do we use random.seed()? I can’t understand it. Can anyone explain it? Thank you! :slight_smile:
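`random.seed()` fixes the starting state of Python's pseudo-random number generator, so the "random" numbers come out the same on every run. That makes exercises reproducible: your sampled rows (and therefore your trees) match the expected answer. A quick demonstration:

```python
import random

# Seeding resets the generator to a known state,
# so the same seed always yields the same sequence.
random.seed(42)
a = [random.randint(0, 9) for _ in range(5)]

random.seed(42)   # reset to the same state
b = [random.randint(0, 9) for _ in range(5)]

print(a == b)  # True — identical "random" sequences
```

Without a call to `random.seed()`, Python seeds the generator from a system source, so every run produces different numbers.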