How can we control the rounding of our equation?

If we want to round the final results to one decimal, how would we do that? Basically, is there a logic to how many decimal places the number is being rounded to, and/or is there a way for us to specify how many places we would like the number to be rounded to?

Do you mean using the Decimal module to do that?

Hi mtf, Yes…

If that’s the only way to round the number to a specific place. Do you know of a better way than Decimal, or is that the best way?

Not sure there is a qualifier for which is better, Decimal or round(), other than the fact that round() returns the same type it is given, whereas Decimal() always returns a Decimal object (whose repr displays in string form, e.g. Decimal('3.14')).
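To illustrate that point, a minimal sketch showing that round() hands back whatever numeric type it was given:

```python
from decimal import Decimal

# round() on a float hands back a float
f = round(3.14159, 2)
print(f, type(f))

# round() on a Decimal hands back a Decimal
d = round(Decimal('3.14159'), 2)
print(d, type(d))
```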

Python round() does not round up or down the way we might expect: it uses round-half-to-even ("banker's rounding"), and binary float representation adds its own surprises.
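A quick sketch of both surprises:

```python
# Halfway cases go to the nearest even digit (banker's rounding),
# not always up:
print(round(0.5))   # 0
print(round(1.5))   # 2
print(round(2.5))   # 2

# Binary floats add another wrinkle: 2.675 is actually stored as
# 2.67499999..., so it rounds down, not up:
print(round(2.675, 2))  # 2.67
```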

When it comes to significant figures (sigdigs, to some) Decimal is a useful tool. Note that its default rounding mode is ROUND_HALF_EVEN, not the '5/4' rule; classic half-up rounding can be selected with getcontext().rounding = ROUND_HALF_UP.

>>> from decimal import *
>>> from math import pi
>>> Decimal(pi)
Decimal('3.141592653589793115997963468544185161590576171875')
>>> getcontext().prec = 6
>>> Decimal(pi) / Decimal(1)
Decimal('3.14159')
>>> getcontext().prec = 7
>>> Decimal(pi) / Decimal(1)
Decimal('3.141593')          # rounded up on 6
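Getting back to the original question (rounding to one decimal place rather than to significant digits), Decimal.quantize() targets a fixed number of decimal places; a minimal sketch, using the rounding-mode names from the decimal module:

```python
from decimal import Decimal, ROUND_HALF_UP

# quantize() rounds to the exponent of its argument: here, one decimal place
a = Decimal('3.14159').quantize(Decimal('0.1'))
print(a)   # 3.1

# the default context rounding is half-even, so an exact tie can round down...
b = Decimal('2.65').quantize(Decimal('0.1'))
print(b)   # 2.6

# ...while ROUND_HALF_UP gives the classic '5/4' behaviour
c = Decimal('2.65').quantize(Decimal('0.1'), rounding=ROUND_HALF_UP)
print(c)   # 2.7
```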

I believe the course could be slightly improved with additional information on the usage of Decimal().
I have three points to share.

  1. Decimal can accept value types other than strings. The importance of strings as the value type for the exercise was not highlighted, so consider my surprise when I forgot the quotes the first time.

To solve my problem, I referenced Decimal objects on the following page.

This was very enlightening and led to quite a few more questions, and to points 2 and 3 below.

  2. Where you place Decimal matters. Try:
Decimal(0.234 + 0.12)

You will get back a Decimal carrying the float sum's full binary expansion, dozens of digits long.

Now, if you use the following, it will play nicely.

Decimal(str(0.234 + 0.12))
  3. Precision can be used to set the significant digits and round the value, as identified by mtf. However, the placement of the precision statement matters: it must be set before the arithmetic is performed, because the Decimal constructor itself is exact and ignores context precision.
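A small sketch of that point, assuming nothing beyond the standard decimal module: precision is applied when an operation runs, and unary plus is the usual trick to force an existing value through the current context.

```python
from decimal import Decimal, getcontext

getcontext().prec = 4

exact = Decimal(0.2345)   # the constructor is exact; precision is ignored
print(exact)              # prints a long string of binary-float digits

rounded = +exact          # unary plus applies the 4-digit context
print(rounded)            # 0.2345

getcontext().prec = 28    # restore the default
```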

If it is of use to anyone, I am placing a script I used to play with Decimal below.

from decimal import *

## Print default precision
print("Default Precision: " + str(getcontext().prec))

## Test 1. As intended by the exercise, using string types for values.
print("Test 1")
test01 = Decimal('0.2345') + Decimal('0.12')
print(test01)
## This results in a four decimal place number: 0.3545.

## Test 2
print("Test 2")
test02 = Decimal(0.2345) + Decimal(0.12)
print(test02)
## This results in a Decimal rounded to the context precision of
## 28 significant digits, carrying binary-float noise.

## Test 3
print("Test 3")
test03 = Decimal(0.2345 + 0.12)
print(test03)
## This results in a Decimal over 50 digits long.
## The constructor is exact, so context precision does not apply to it.

## Test 4
print("Test 4")
test04 = Decimal(str(0.2345 + 0.12))
print(test04)
## Converting the sum to a string first, and then we see something interesting.

## Test 5
print("Test 5")
print("Current Precision: " + str(getcontext().prec))
test05 = Decimal(0.2345) + Decimal(0.12)
print(test05)
## Following, precision is set after the addition is performed.  No change.
getcontext().prec = 4
print("Current Precision: " + str(getcontext().prec))
## Reset precision to the default value.
getcontext().prec = 28
print("Current Precision: " + str(getcontext().prec))

## Test 6
print("Test 6")
## Precision is set before the arithmetic is performed.
## This rounds the sum to 4 significant digits.
getcontext().prec = 4
print("Current Precision: " + str(getcontext().prec))
test06 = Decimal(0.2345) + Decimal(0.12)
print(test06)
## Now we see why placement of precision is key.
getcontext().prec = 28