FAQ: Learn Sample Size Determination with SciPy - Differing Survey Results

This community-built FAQ covers the “Differing Survey Results” exercise from the lesson “Learn Sample Size Determination with SciPy”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Science
Analyze Data with Python

FAQs on the exercise Differing Survey Results

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in #get-help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to #get-help and #community:tips-and-resources. If you want feedback or inspiration for a project, check out #project.

Looking for motivation to keep learning? Join our wider discussions in #community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in #community:Codecademy-Bug-Reporting

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

For the same numbers entered in other online calculators (e.g. https://www.optimizely.com/sample-size-calculator/), why is the sample size different? The Codecademy tool gives n = 170 for this question, while the Optimizely calculator gives n = 74 for the same inputs.


Apparently the two calculators are based on different formulas.

Codecademy tool:

import numpy as np
from scipy.stats import norm

def sample_size_abtest(baseline_rate, minimal_detectable_effect, confidence_level, beta=0.1):

  # Difference
  d = abs(baseline_rate * minimal_detectable_effect)

  # Type I error rate (significance level)
  alpha = 1 - confidence_level

  # Z values
  z_alpha = norm.isf(alpha / 2)
  z_beta = norm.isf(beta)

  # calculate sample size
  n = 2 * (((z_alpha + z_beta) * np.sqrt(baseline_rate * (1 - baseline_rate)) / d) ** 2)

  return round(n)

print(sample_size_abtest(0.35, 0.4, 0.85))  # 172

It seems the tool then rounds this result to two significant figures, which gives 170.
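If that guess is right, 172 becomes 170 under two-significant-figure rounding. A minimal sketch of such rounding (the helper name is my own, not from the tool's source):

```python
import math

def round_sig(x, sig=2):
    # Round x to `sig` significant figures by shifting the rounding
    # position according to the number's order of magnitude.
    if x == 0:
        return 0
    return round(x, -int(math.floor(math.log10(abs(x)))) + (sig - 1))

print(round_sig(172))  # 170
print(round_sig(74))   # 74
```

Note that 74 already has only two significant figures, so it would pass through unchanged.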

The online calculator (which you linked to):

import numpy as np

def sample_size_abtest(baseline_rate, minimal_detectable_effect, confidence_level):

  # Rates assumed by the null and alternative hypotheses
  p0 = baseline_rate
  p1 = baseline_rate * (1 + minimal_detectable_effect)

  # Sum of variances?
  o = p0 * (1 - p0) + p1 * (1 - p1)

  # Difference
  d = abs(p1 - p0)

  # calculate sample size
  n = 2 * confidence_level * o * np.log(1 + np.sqrt(o) / d) / (d ** 2)

  return round(n)


print(sample_size_abtest(0.35, 0.4, 0.85))  # 74

To add a little more: the code in the previous post is my Python rewrite of the original JavaScript behind each calculator.

The formula used by Codecademy’s tool is similar to the one in a statistics book I recently read, but slightly different (Codecademy’s may be a simplified version). The code based on the book’s formula is:

import numpy as np
from scipy.stats import norm

def sample_size_abtest(baseline_rate, minimal_detectable_effect, confidence_level, beta=0.1, alternative='two-sided'):

  # Rates assumed by the null and alternative hypotheses
  p0 = baseline_rate
  p1 = baseline_rate * (1 + minimal_detectable_effect)
  # Midpoint
  p = (p0 + p1) / 2

  # Difference
  d = abs(p1 - p0)

  # Type I error rate (significance level)
  alpha = 1 - confidence_level

  # Z values
  if alternative == 'two-sided':
    z_alpha = norm.isf(alpha / 2)
  else:
    z_alpha = norm.isf(alpha)
  z_beta = norm.isf(beta)

  # standard deviations
  sigma0 = np.sqrt(2 * p * (1 - p))
  sigma1 = np.sqrt(p0 * (1 - p0) + p1 * (1 - p1))

  # calculate sample size
  n = ((z_alpha * sigma0 + z_beta * sigma1) / d) ** 2

  return round(n)


print(sample_size_abtest(0.35, 0.4, 0.85))  # 182

The Optimizely calculator’s approach seems to be Optimizely’s own method; I don’t know which theory the formula comes from. In addition, there was an oversight in my previous post. The corrected code is:

import numpy as np

def sample_size_abtest(baseline_rate, minimal_detectable_effect, confidence_level):

  # Rates assumed by the null and alternative hypotheses
  p0 = baseline_rate
  p1 = baseline_rate * (1 + minimal_detectable_effect)
  p2 = baseline_rate * (1 - minimal_detectable_effect)

  # Variances?
  o1 = p0 * (1 - p0) + p1 * (1 - p1)
  o2 = p0 * (1 - p0) + p2 * (1 - p2)

  # Difference
  d = abs(baseline_rate * minimal_detectable_effect)

  # calculate both candidate sample sizes and keep the larger (more conservative) one
  n1 = 2 * confidence_level * o1 * np.log(1 + np.sqrt(o1) / d) / (d ** 2)
  n2 = 2 * confidence_level * o2 * np.log(1 + np.sqrt(o2) / d) / (d ** 2)

  return round(max(n1, n2))


print(sample_size_abtest(0.35, 0.4, 0.85))  # 74
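For a quick side-by-side, the three formulas in this thread can be evaluated in one short script with the same inputs (baseline 0.35, MDE 0.4, confidence 0.85, beta 0.1). The variable names here are my own shorthand for the functions above:

```python
import numpy as np
from scipy.stats import norm

p0, mde, conf, beta = 0.35, 0.4, 0.85, 0.1
p1 = p0 * (1 + mde)          # rate under the alternative hypothesis
d = abs(p1 - p0)             # minimum detectable difference
alpha = 1 - conf
z_a, z_b = norm.isf(alpha / 2), norm.isf(beta)

# Codecademy-style: variance taken at the baseline rate only
n_cc = 2 * (((z_a + z_b) * np.sqrt(p0 * (1 - p0)) / d) ** 2)

# Textbook-style: separate variances under H0 (pooled midpoint) and H1
p = (p0 + p1) / 2
s0 = np.sqrt(2 * p * (1 - p))
s1 = np.sqrt(p0 * (1 - p0) + p1 * (1 - p1))
n_book = ((z_a * s0 + z_b * s1) / d) ** 2

# Optimizely-style: summed variances inside a log term
o = p0 * (1 - p0) + p1 * (1 - p1)
n_opt = 2 * conf * o * np.log(1 + np.sqrt(o) / d) / d ** 2

print(round(n_cc), round(n_book), round(n_opt))  # 172 182 74
```

So the calculators disagree not because of a bug, but because they implement different formulas with different assumptions about the variance terms.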