In this exercise, we used the
Decimal data type to control how many decimal places a number has after two numbers are added/multiplied together. Here is the code I used for this exercise:
```python
# Import Decimal below:
from decimal import Decimal

# Fix the floating point math below:
two_decimal_points = Decimal('0.2') + Decimal('0.69')
print(two_decimal_points)

four_decimal_points = Decimal('0.53') * Decimal('0.65')
print(four_decimal_points)
```
I don’t understand how using
Decimal in the
two_decimal_points expression results in a value with two decimal places, while the
four_decimal_points expression results in a value with four decimal places. It looks like we’re doing the same thing in both expressions, so why do they end up with different numbers of decimal places?
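Here’s a smaller experiment (using only the standard library `decimal` module, not part of the exercise code) that isolates the behavior I’m asking about:

```python
from decimal import Decimal

# Each Decimal remembers how many fractional digits it was created with.
# Addition: the result keeps the larger fractional-digit count of the two
# operands -- 0.2 has one fractional digit, 0.69 has two, so the sum has two.
print(Decimal('0.2') + Decimal('0.69'))   # 0.89

# Multiplication: the fractional-digit counts of the operands add up --
# two digits times two digits gives four fractional digits in the product.
print(Decimal('0.53') * Decimal('0.65'))  # 0.3445
```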