Please have a look at the following link:

https://www.codecademy.com/paths/data-science/tracks/learn-statistics-with-python/modules/variance-and-standard-deviation/lessons/standard-deviation/exercises/using-standard-deviation

It is written:

" *…By finding the number of standard deviations a data point is away from the mean, we can begin to investigate how unusual that datapoint truly is. In fact, you can usually expect around 68% of your data to fall within one standard deviation of the mean, 95% of your data to fall within two standard deviations of the mean, and 99.7% of your data to fall within three standard deviations of the mean… If you have a data point that is over three standard deviations away from the mean, that’s an incredibly unusual piece of data!* "

In the context of the exercise, after calculating the mean and the standard deviation and looking back at the pumpkin array/dataset, you expect 95% of the data to fall within two standard deviations of the mean, i.e. 95% of the pumpkins to weigh between (around) 209 and 2653.

So, although you might have been suspicious from the start that the first data point in the pumpkin dataset, 68, was not “normal” (it is the minimum and particularly far from the mean), the standard deviation now gives you a statistical way to back that up: 68 is also well below 209, the lower two-standard-deviation bound, which suggests that a pumpkin weighing 68 belongs not merely to the outer 5% of the data but to a much smaller percentage.
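As a quick sketch of that reasoning in Python: the full pumpkin array isn’t reproduced in this post, so the mean and standard deviation below are back-derived from the stated two-standard-deviation interval of roughly 209 to 2653, and are assumptions, not the exercise’s exact values.

```python
# Assumed values, recovered from the interval (209, 2653) quoted above:
# mean - 2*std = 209 and mean + 2*std = 2653.
lower, upper = 209, 2653
mean = (lower + upper) / 2   # midpoint of the interval
std = (upper - lower) / 4    # the interval spans 4 standard deviations

# z-score: how many standard deviations the suspicious pumpkin
# sits away from the mean (negative means below the mean)
suspicious_weight = 68
z = (suspicious_weight - mean) / std

print(f"mean ~ {mean}, std ~ {std}")
print(f"z-score of the 68-weight pumpkin: {z:.2f}")
```

With these assumed values the 68-weight pumpkin lands more than two standard deviations below the mean, which is exactly the “unusual piece of data” the lesson text describes.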

Maybe, after the above calculations, the judges would be obliged to investigate whether the pumpkin is “fake”, i.e. whether it was entered by a team on purpose to manipulate the competition’s results (to inflate the standard deviation and win, as the exercise’s instructions imply)!

Although I cannot describe what a “fake” pumpkin would look like, the possibility is worth investigating.