Sorry about the indentation; here's the code. I don't understand the purpose of multiplying 1.0 by numberOfHits / NUMBER_OF_TRIALS. Does it have to do with decimal precision?
import random

NUMBER_OF_TRIALS = 1000000
numberOfHits = 0

for i in range(NUMBER_OF_TRIALS):
    # Random point with x and y in [-1, 1)
    x = random.random() * 2.0 - 1
    y = random.random() * 2.0 - 1
    if x < 0:
        numberOfHits += 1
    elif not (x > 1 or x < 0 or y > 1 or y < 0):
        # Slope of the line through (0, 1) and (1, 0)
        slope = (1.0 - 0) / (0 - 1.0)
        # x-intercept of the line through (x, y) with that slope
        x1 = x + -y * slope
        if x1 <= 1:
            numberOfHits += 1

print("The probability in Region 1 and 3 is " +
      str(1.0 * numberOfHits / NUMBER_OF_TRIALS))
This depends on your Python version. Since numberOfHits and NUMBER_OF_TRIALS are both integers, in Python 2 the division would result in 0:

print 624542 / 1000000  # outputs 0 in Python 2

because Python 2 rounds down when a division involves two integers. By including a float (1.0) in the expression, we make sure that doesn't happen, so the code works under both Python 2 and Python 3 (without the 1.0 it works correctly in Python 3, but not in Python 2).
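For reference, here is a minimal sketch of two other common ways to avoid the integer-division pitfall (using the variable names from the code above):

from __future__ import division  # makes / behave as true division in Python 2 as well
ratio = numberOfHits / NUMBER_OF_TRIALS

# or, without the __future__ import, convert one operand explicitly:
ratio = float(numberOfHits) / NUMBER_OF_TRIALS

Either form gives a float result in both Python 2 and Python 3, just like multiplying by 1.0.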
The last one shows that we can reach and pass 1, so it will be up to our code to detect when the value is equal to or greater than 1, and either swap it out for 1.0 or convert it to an integer.
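A minimal sketch of that check, assuming the value lives in a variable called result (a name picked here just for illustration):

if result >= 1:
    result = 1.0  # clamp anything that reaches or passes 1 back to 1.0
    # or: result = int(result), if an integer is what the rest of the code expects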