FAQ: Significance Thresholds - Problems with Multiple Hypothesis Tests

This community-built FAQ covers the “Problems with Multiple Hypothesis Tests” exercise from the lesson “Significance Thresholds”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Master Statistics with Python

FAQs on the exercise Problems with Multiple Hypothesis Tests

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in Language Help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to Language Help and Tips and Resources. If you want feedback or inspiration for a project, check out Projects.

Looking for motivation to keep learning? Join our wider discussions in Community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in Bug Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

Heya, I’m trying to understand why a test would necessarily have some probability of producing a false positive. Is it because we consider it improbable that the value in question comes from the null hypothesis’s binomial distribution, while it’s technically still possible that it does?

For instance, let’s say our significance threshold is 5%. That means we’ll conclude (with good warrant) that any outcome with a probability of 5% or less under the null hypothesis comes from a distribution with a different mean. But if the null hypothesis is actually true, there’s still a 5% chance of seeing an outcome that extreme anyway. So by attributing such an outcome to another mean, we’re discounting the possibility that it really did come from the null distribution, and there’s a 5% chance that our declaration of ‘significance’ is wrong. Is that right?
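One concrete way to check this intuition is with a quick simulation. Below is a minimal sketch (not from the lesson; the sample size, seed, and use of scipy.stats.binomtest are my own choices): it repeatedly generates data for which the null hypothesis is true by construction, then counts how often the test still comes out ‘significant’. The resulting false positive rate should sit near the 0.05 threshold (slightly below it in practice, because the binomial test is discrete).

import numpy as np
from scipy.stats import binomtest

np.random.seed(42)

alpha = 0.05           # significance threshold
n_flips = 500          # coin flips per simulated experiment (illustrative value)
num_experiments = 10000

false_positives = 0
for _ in range(num_experiments):
    # Generate data where the null hypothesis (p = 0.5) is true by construction
    heads = int(np.random.binomial(n=n_flips, p=0.5))
    # Test H0: p = 0.5 -- any rejection here is, by definition, a type I error
    if binomtest(heads, n=n_flips, p=0.5).pvalue < alpha:
        false_positives += 1

# Proportion of simulated tests that produced a false positive; should be close to alpha
print(false_positives / num_experiments)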

Hi!

I am trying to understand the following:

Change the code to create the plot so that it shows the probability of at least one type I error for multiple tests with a significance threshold of 0.10 (instead of 0.05).

Inspect your new plot. Now how many tests would lead to a probability of a type I error of 50%?

In the solution code:

num_tests_50percent = 15

However, 0.90**6 = 0.5314, which would give 1 - 0.5314 ≈ 47%, so shouldn’t the answer be around 6?

With 15 tests, 0.90**15 is approximately 0.2, so the probability of at least one type I error would be about 80%, not 50%.

Can you please provide me with an answer?
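For anyone comparing numbers here, this is a minimal sketch of the calculation the plot is built on (variable names are mine, not necessarily the lesson’s), assuming the standard formula for the probability of at least one type I error across independent tests:

import numpy as np

alpha = 0.10                     # significance threshold from the exercise
num_tests = np.arange(1, 31)     # candidate numbers of independent tests

# P(at least one type I error) = 1 - P(no type I error in any of the tests)
#                              = 1 - (1 - alpha)**num_tests
prob_error = 1 - (1 - alpha) ** num_tests

# First number of tests where that probability reaches 50%
print(num_tests[prob_error >= 0.50][0])

This prints 7: 1 - 0.90**6 ≈ 0.47 is just under 50%, while 1 - 0.90**7 ≈ 0.52 is just over. That is consistent with the arithmetic in the question above, and it suggests the hard-coded 15 in the solution does not match the updated 0.10 threshold.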