# Calculating Financial Statistics - Variance

When I calculate variance with `np.var` and with my own function (`calculate_variance`), the results are the same, as expected. However, the results differ when I use `statistics.variance`:

```python
import numpy as np
import statistics

returns_disney = [0.22, 0.12, 0.01, 0.05, 0.04]
returns_cbs = [-0.13, -0.15, 0.31, -0.06, -0.29]

variance_disney = np.var(returns_disney)
variance_cbs = np.var(returns_cbs)

def calculate_variance(dataset):
    mean = sum(dataset) / len(dataset)
    numerator = 0
    for i in dataset:
        numerator += (mean - i) ** 2
    variance = numerator / len(dataset)
    return variance

print('The np.var variance of Disney stock returns is', variance_disney)
print('The ownformula variance of Disney stock returns is', calculate_variance(returns_disney))
print('The statistics variance of Disney stock returns is', statistics.variance(returns_disney))
print('The np.var variance of CBS stock returns is', variance_cbs)
print('The ownformula variance of CBS stock returns is', calculate_variance(returns_cbs))
print('The statistics variance of CBS stock returns is', statistics.variance(returns_cbs))
```

I get:
```
The np.var variance of Disney stock returns is 0.0056560000000000004
The ownformula variance of Disney stock returns is 0.0056560000000000004
The statistics variance of Disney stock returns is 0.08408329203831164
The np.var variance of CBS stock returns is 0.04054399999999999
The ownformula variance of CBS stock returns is 0.04054399999999999
The statistics variance of CBS stock returns is 0.050679999999999996
```

Why are the results from `statistics.variance` different?

As a follow-up: I found that `numpy.var` divides by n when calculating the variance (population variance), whereas `statistics.variance` divides by n − 1 (sample variance, i.e. Bessel's correction).
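The divisor explains the discrepancy, and the two libraries can be made to agree explicitly. A minimal sketch: NumPy's `ddof` parameter (delta degrees of freedom) switches `np.var` to the n − 1 divisor, and the `statistics` module offers `statistics.pvariance` as the population (divide-by-n) counterpart of `statistics.variance`:

```python
import statistics

import numpy as np

returns_disney = [0.22, 0.12, 0.01, 0.05, 0.04]

# Population variance: divide by n. This is NumPy's default (ddof=0).
pop_var_np = np.var(returns_disney)
pop_var_stats = statistics.pvariance(returns_disney)

# Sample variance: divide by n - 1 (Bessel's correction).
# statistics.variance uses this; NumPy matches it with ddof=1.
sample_var_np = np.var(returns_disney, ddof=1)
sample_var_stats = statistics.variance(returns_disney)

print(pop_var_np, pop_var_stats)        # the two population variances agree
print(sample_var_np, sample_var_stats)  # the two sample variances agree
```

In general, `np.var(data, ddof=k)` divides the sum of squared deviations by `len(data) - k`, so `ddof=0` matches `statistics.pvariance` and `ddof=1` matches `statistics.variance`.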