FAQ: Learn Sample Size Determination with SciPy - A/B Testing: Don't Interfere With Your Tests

This community-built FAQ covers the “A/B Testing: Don’t Interfere With Your Tests” exercise from the lesson “Learn Sample Size Determination with SciPy”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science
Analyze Data with Python

FAQs on the exercise A/B Testing: Don’t Interfere With Your Tests

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in #get-help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to #get-help and #community:tips-and-resources. If you want feedback or inspiration for a project, check out #project.

Looking for motivation to keep learning? Join our wider discussions in #community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in #community:Codecademy-Bug-Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

Can someone explain to me why taking more samples past what the calculator tells you is the minimum number of samples is improper?

Take, for example, the coin toss they mentioned. Sure, if you are flipping a coin 10 times, then extending the trials to 15 or 20 could skew the results to make it look like one side of the coin is favored over the other. However, isn’t that only an issue with very small numbers? Say instead you flip the coin 100 times. Should you increase it to 500? The 100 flips would most likely be close to 50/50, but 500 flips would be even more indicative of the true probability of the outcome (50/50). And isn’t that the whole point of A/B testing, to say with statistical confidence that this is the result? Isn’t that the point of having a minimum number of trials, so that adding any more wouldn’t change the significance because they wouldn’t move the overall outcome? Referencing the coin flip again, if you add more tosses after 500 trials of a coin flip, you would hardly expect the result to fluctuate at all. Why would adding more trials matter?

How can I replicate the graph from section 5 (A/B Testing: Don’t Interfere With Your Tests)?

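I don’t have the lesson’s exact data, so this can’t reproduce the graph precisely, but a plot with the same general shape can be simulated. Everything below (the 7.5% conversion rate, the sample count, the seed) is my own assumption for illustration:

```python
# Simulated cumulative conversion-rate plot (assumed numbers, not the
# lesson's actual data): the true rate here is taken to be 7.5%.
import random
import matplotlib.pyplot as plt

random.seed(0)
true_rate = 0.075  # assumed true conversion rate
n_samples = 500

conversions = 0
cumulative_rates = []
for i in range(1, n_samples + 1):
    conversions += 1 if random.random() < true_rate else 0
    cumulative_rates.append(conversions / i)  # running conversion rate so far

plt.plot(range(1, n_samples + 1), cumulative_rates)
plt.axhline(true_rate, linestyle="--", label="assumed true rate (7.5%)")
plt.xlabel("number of samples")
plt.ylabel("cumulative conversion rate")
plt.legend()
plt.show()
```

Note how the line swings wildly at small sample counts and only settles as samples accumulate, even though every point is a cumulative rate.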

I’m also having some trouble grasping the concept.

I’ll add to your remark the following question I have:

Why does the graph fluctuate that strongly if it shows the cumulative results? I understand that if it showed only the nth result, the line would look erratic. But the article states that it shows the cumulative results, and as such I don’t understand why it jumps at certain points along the x-axis.

What exactly is meant, then, when it supposedly reaches “statistical significance” in the green portion of the graph? Does it mean that in those specific trials it reached an outcome of > 7.5% while simultaneously having passed the 210-sample threshold? That seems nonsensical because, again, we’d expect them to be measuring the cumulative results.

In that sense, I join Jacob’s remark: does it mean instead that, in the green portion, the cumulative results have an outcome of > 7.5%? Surely then, the more cumulative results we have, the more robust the results should be? But, then again, if they are indeed cumulative, we would not expect the line to be jumping up and down on the y-axis. It should be steadily converging to some stable outcome.

I’m confused…

To add to the earlier questions, are we basically seeing the same tendency towards error here as when we perform multiple t-tests?
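That analogy seems apt to me: checking a running test over and over is much like running many separate tests and reporting whichever one looks significant. Here’s a quick simulation of my own (not from the lesson) where every t-test compares two samples drawn from the same distribution, so any “significant” result is a false positive:

```python
# Repeated t-tests on identical populations: the chance of at least one
# false positive grows with the number of tests (family-wise error).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
trials, tests_per_trial, alpha = 500, 10, 0.05

trials_with_false_positive = 0
for _ in range(trials):
    any_significant = False
    for _ in range(tests_per_trial):
        a = rng.normal(0, 1, 30)  # both groups drawn from N(0, 1),
        b = rng.normal(0, 1, 30)  # so there is no real difference
        if ttest_ind(a, b).pvalue < alpha:
            any_significant = True
    trials_with_false_positive += any_significant

rate = trials_with_false_positive / trials
print(f"At least one false positive in {rate:.0%} of trials "
      f"(roughly 1 - 0.95**10 = {1 - 0.95**10:.0%} expected)")
```

With ten tests at alpha = 0.05, the family-wise error rate climbs to around 40%, far above the 5% a single test promises, and peeking at a running A/B test suffers from the same effect.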

I can’t help but feel that the course content for hypothesis testing and A/B testing needs some improvement:

  1. The theoretical background for these topics is currently not sufficient, and even in the forums there seems to be confusion about them.
  2. The examples and their results need to be better explained. In both of these topics, I have often seen examples that lack an explanation of their results.

What do you all think? I’m hoping to start the conversation so that it gets some traction and leads to improvements in the lessons for these topics.


I also think that the materials provided by Codecademy regarding hypothesis testing are insufficient for proper understanding. In fact, I myself had to refer to various other resources (statistics textbooks, Wikipedia, etc.) to learn. At the very least, I would appreciate it if they could link us to such resources.

We can try the coin toss example by running code like this:

import random
from scipy.stats import binomtest  # binom_test is deprecated and removed in newer SciPy

count = 0
for i in range(1, 2001):
    count += random.randint(0, 1)  # one fair coin flip: 1 = heads, 0 = tails
    if binomtest(count, n=i, p=0.5).pvalue < 0.05:
        print("We find a significant difference on the " + str(i) + "th try.")

I’ve actually run it about a dozen times, and of course the results vary randomly. Sometimes it doesn’t detect any significant difference, but interestingly, sometimes significant differences are found even after 1,000 or more trials.

We cannot keep trying indefinitely. If we suspect that the coin is biased and decide to keep flipping arbitrarily without deciding on the sample size in advance, we might mistakenly conclude that our suspicion is true the moment a significant difference comes out, and stop there. By setting the sample size in advance, such a risk can be reduced (although it cannot be reduced to zero).
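To put a number on that, here’s a variation of the simulation above that compares the two policies on the same fair coin: stopping as soon as any peek looks significant, versus testing once at a sample size fixed in advance. The sample size and experiment count are arbitrary choices of mine, picked only to keep the run fast:

```python
# Compare false-positive rates: peeking after every flip vs. a single
# test at a pre-set sample size. The coin is fair, so every "significant"
# result is a false alarm.
import random
from scipy.stats import binomtest

random.seed(1)
N, ALPHA, EXPERIMENTS = 200, 0.05, 100

peeking_errors = 0
fixed_errors = 0
for _ in range(EXPERIMENTS):
    heads = 0
    peeked = False
    for i in range(1, N + 1):
        heads += random.randint(0, 1)
        if not peeked and binomtest(heads, n=i, p=0.5).pvalue < ALPHA:
            peeked = True  # a peeking experimenter would stop here
    peeking_errors += peeked
    if binomtest(heads, n=N, p=0.5).pvalue < ALPHA:
        fixed_errors += 1  # single test at the pre-set sample size

print(f"peeking every flip: {peeking_errors / EXPERIMENTS:.0%} false positives")
print(f"fixed sample size:  {fixed_errors / EXPERIMENTS:.0%} false positives")
```

The fixed-size policy stays near the nominal 5% error rate, while the peeking policy flags a “biased” fair coin far more often, which is exactly the risk of interfering with a running test.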