FAQ: Hypothesis Testing - Dangers of Multiple T-Tests


#1

This community-built FAQ covers the “Dangers of Multiple T-Tests” exercise from the lesson “Hypothesis Testing”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science

FAQs on the exercise Dangers of Multiple T-Tests

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.



#2

Hi there, when I run tts, a_b_pval = ttest_ind(a, b), the result for a_b_pval is 2.76676293987e-05, which clearly is not a p-value. What went wrong?


#3

2.76676293987e-05 is scientific notation for 0.0000276676293987. That is a perfectly valid p-value, and a very small one: it is far below the usual .05 threshold, so the result of that particular t-test would be considered highly significant.
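
Nothing went wrong; that is just how Python displays very small floats. A quick sketch using the value quoted above:

pval = 2.76676293987e-05   # the value from the post above
print(pval)                # Python echoes small floats in scientific notation
print(f"{pval:.10f}")      # 0.0000276676 -- the same number in decimal form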


#4

What error are we calculating using the error probability function provided? As I understand it, the p-value is essentially the probability of a type 1 error. So, over multiple t-tests:

p(type 1 error) = p_value_1 * p_value_2 * … * p_value_n

thus:

p(not(type 1 error)) = 1 - (p_value_1 * p_value_2 * … * p_value_n).

In this way, multiple t-tests would actually decrease your chance of a type 1 error.

I think my issue here actually boils down to two questions. First, what source of error does the provided error probability function calculate? Second, can p(statistical significance) decrease while p(type 1 error) also decreases?


#5

Why is the solution:
error_prob = (1-(0.95**3))

I thought it would be:
error_prob = (1-(a_b_pval * a_c_pval * b_c_pval))

I.e., the total error is a function of the individual errors, not of the threshold for acceptable error?


#6

In agreement with the above, with one small correction:

error_prob = 1 - (1 - a_b_pval) * (1 - a_c_pval) * (1 - b_c_pval)

This does give a higher error, roughly 0.07, but that is not the solution Codecademy went with.
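
For reference, here is that calculation sketched in code. Only a_b_pval is the value quoted earlier in this thread; the other two p-values are hypothetical stand-ins, so the exercise's actual numbers will differ:

a_b_pval = 2.76676293987e-05   # from the earlier post
a_c_pval = 0.02                # hypothetical stand-in
b_c_pval = 0.05                # hypothetical stand-in

error_prob = 1 - (1 - a_b_pval) * (1 - a_c_pval) * (1 - b_c_pval)
print(error_prob)   # ~0.069 with these inputs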


#7

This is what I thought too. I found this entire exercise completely confusing.


#8

I think the main confusion here is due to these lessons using the term “p-value” interchangeably for both the significance level (the threshold below which we will call a result significant) and the actual p-value returned by running a T-test.

Here are the concepts to remember with T-tests:

  • We are comparing samples from different populations to see if the populations are significantly different

  • We determine a significance level (or p-value threshold) prior to conducting the T-tests; it acts as the cut-off point for deciding whether a result is significant

  • A T-test returns two values: a test statistic (tstat) and a p-value. The test statistic is a number representing the difference between the population means relative to the variation in your samples. The larger its magnitude, the less likely the null hypothesis is true; the closer it is to 0, the more likely there is no significant difference. The p-value is the likelihood of getting a test statistic at least as extreme as the one returned, if the null hypothesis is true (see the sketch below).
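
A quick sketch of that relationship, using hand-made samples (not the exercise's data):

from scipy.stats import ttest_ind

# hand-made samples for illustration only
base      = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
similar   = [10.0, 10.2, 9.9, 10.1, 10.3, 9.8]   # same mean as base
different = [13.1, 12.8, 13.2, 13.0, 12.9, 13.3] # mean far from base

print(ttest_ind(base, similar))     # tstat near 0, p-value near 1
print(ttest_ind(base, different))   # large |tstat|, tiny p-value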

The p-value itself is not the probability of a Type I error, but rather the probability of getting a test statistic (tstat) at least as extreme as the one observed, if the null hypothesis is true (i.e., if the populations have the same mean and the observed differences arose merely by chance). The smaller the p-value, the stronger the evidence that the difference is significant.

Prior to running the T-tests, however, we decide that a p-value of .05 or less will indicate significance; thus we accept a 5% risk of rejecting the null hypothesis when it is actually true. We would reject the null hypothesis equally whether the p-value was .04 or .00004. Thus, we have a fixed risk of Type I error per T-test, determined before running the experiment. This is the error the lesson is referring to.

This 5% accepted risk compounds with each additional T-test we run to compare every pair of samples, and that is why running multiple T-tests can be problematic.
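
For the exercise's three pairwise comparisons, the compounding works out like this (a worked sketch of the solution's formula):

# probability of at least one Type I error across three T-tests,
# each run at a 5% significance level
significance = 0.05
num_tests = 3   # a vs b, a vs c, b vs c

error_prob = 1 - (1 - significance) ** num_tests
print(error_prob)   # 0.142625 -- about a 14% chance of a false positive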

Hope this helps!