FAQ: Decision Trees - How a Decision Tree is Built

This community-built FAQ covers the “How a Decision Tree is Built” exercise from the lesson “Decision Trees”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Machine Learning Fundamentals

FAQs on the exercise How a Decision Tree is Built

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in #get-help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to #get-help and #community:tips-and-resources. If you want feedback or inspiration for a project, check out #project.

Looking for motivation to keep learning? Join our wider discussions in #community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in #community:Codecademy-Bug-Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

Hello.
In the lesson you say:
" Now, let’s compare that with a different feature we could have split on first, persons_2. In this case, the left branch will have a Gini impurity of 1 - (505/917)^2 - (412/917)^2 = 0.4949 "
Where do 505, 412, and 917 come from?
And why isn’t it 1 - ((505/917)^2 + (412/917)^2), as per the Gini impurity formula 1 - (p1^2 + p2^2)?
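
(I notice that 505 + 412 = 917, so presumably 505 and 412 are the two class counts in that left branch and 917 is the branch total, but the lesson doesn’t say.) As for the formula, a quick check shows the two forms give the same number, since 1 - a - b = 1 - (a + b):

p1, p2 = 505 / 917, 412 / 917   # assumed class proportions in the left branch
print(1 - p1**2 - p2**2)        # 0.49485... ≈ the lesson’s 0.4949
print(1 - (p1**2 + p2**2))      # identical value, just grouped differently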

Further on in the exercise, after plugging everything from the hints into the preloaded gini and info_gain functions, I get different answers for all three steps.

1. Calculate gini and info gain for a root node split at safety_low<=0.5

# Rows where safety_low <= 0.5, i.e. safety_low == 0
y_train_sub = y_train[x_train['safety_low']==0]
x_train_sub = x_train[x_train['safety_low']==0]
gi = gini(y_train_sub)
print(f'Gini impurity at root: {gi}')

2. Information gain when using feature persons_2

# Split the full training set on persons_2
left = y_train[x_train['persons_2']==0]
right = y_train[x_train['persons_2']==1]
print(f'Information gain for persons_2: {info_gain(left, right, gi)}')

3. Which feature split maximizes information gain?

# Try every feature as a candidate split and rank by information gain
info_gain_list = []
for i in x_train.columns:
    left = y_train_sub[x_train_sub[i]==0]
    right = y_train_sub[x_train_sub[i]==1]
    info_gain_list.append([i, info_gain(left, right, gi)])
info_gain_table = pd.DataFrame(info_gain_list).sort_values(1, ascending=False)
print(f'Greatest impurity gain at:{info_gain_table.iloc[0,:]}')
print(info_gain_table)

Output:

Gini impurity at root: 0.49534472145275465
Information gain for persons_2: 0.16699155320608106
Greatest impurity gain at:0 persons_2
1 0.208137

  • 0.495 doesn’t match the expected 0.418
  • safety_low doesn’t give the largest information gain; persons_2 does
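
One thing I’m wondering: should steps 1 and 3 use the full training set rather than the safety_low == 0 subset? The root node sees all of the data before any split, so maybe the expected 0.418 is just gini(y_train). A sketch of what I mean, assuming the preloaded gini and info_gain work the way I used them above:

# Gini at the root is computed on ALL training labels, before any split
gi = gini(y_train)
print(f'Gini impurity at root: {gi}')

# Candidate root splits would then be evaluated on the full data, not a subset
info_gain_list = []
for i in x_train.columns:
    left = y_train[x_train[i]==0]
    right = y_train[x_train[i]==1]
    info_gain_list.append([i, info_gain(left, right, gi)])

If that’s right, it would also explain the second mismatch: my loop was ranking splits of the subset, not splits at the root.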