If we want to round the final results to one decimal, how would we do that? Basically, is there a logic to how many decimal places the number is being rounded to, and/or is there a way for us to specify how many places we would like the number to be rounded to?

Do you mean using the Decimal module to do that?

Hi mtf, Yes…

If that’s the only way to round the number to a specific place. Do you know of a better way than Decimal, or is that the best way?

Not sure there is a qualifier for which is better, `Decimal` or `round()`, other than the fact that the latter returns the same type it begins with, while a Decimal result displays its value in string form (e.g. `Decimal('3.14')`).

Python `round()` does not round up or down the way we might expect; it rounds ties to the nearest even digit.
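A quick illustration of the surprise (Python 3):

```python
# Python 3's round() uses banker's rounding (round half to even),
# which surprises anyone expecting "round half up":
print(round(0.5))   # 0, not 1
print(round(1.5))   # 2
print(round(2.5))   # 2, not 3

# A second argument gives decimal places, returning the input's type.
# Binary float representation adds its own wrinkle here:
print(round(2.675, 2))  # 2.67, because 2.675 is stored as 2.67499...
```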

When it comes to significant figures (sig figs, to some), Decimal is a useful tool. Note that its default context rounds ties half to even (`ROUND_HALF_EVEN`) rather than the simple 5/4 rule, though the rounding mode is configurable.

```
>>> from decimal import *
>>> from math import pi
>>> Decimal(pi)
Decimal('3.141592653589793115997963468544185161590576171875')
>>> getcontext().prec = 6
>>> Decimal(pi) / Decimal(1)
Decimal('3.14159')
>>> getcontext().prec = 7
>>> Decimal(pi) / Decimal(1)
Decimal('3.141593') # rounded up on 6
>>>
```
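Since the original question asked about a fixed number of decimal places (rather than significant digits), `Decimal.quantize()` is also worth knowing; a minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

value = Decimal('3.14159')

# quantize() rounds to the exponent of its argument:
print(value.quantize(Decimal('0.1')))    # 3.1   (one decimal place)
print(value.quantize(Decimal('0.001')))  # 3.142 (three decimal places)

# The rounding mode can be passed explicitly per call:
print(Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP))  # 3
```

This rounds to decimal places independently of the context precision, which governs significant digits.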

I believe the course could be slightly improved with additional information on the usage of Decimal().

I have three points to share.

- Decimal accepts value types other than strings, but the exercise never highlighted why a string is the preferred input. So, consider my surprise the first time I forgot the quotes.

To solve my problem, I consulted the Decimal documentation on the following page.

https://docs.python.org/3/library/decimal.html

This was very enlightening and led to quite a few more questions, and to points 2 and 3 below.

- Where you place Decimal matters. Try:

```
Decimal(0.234 + 0.12)
```

You get back a Decimal that captures the full binary expansion of the float sum, dozens of digits long.

Now, if you use the following, it will play nicely.

```
Decimal(str(0.234 + 0.12))
```

- Precision can be used to set the number of significant digits and round the value, as mtf identified. However, placement of the precision statement matters: context precision applies when arithmetic is performed, so it must be set before the calculation runs (the `Decimal()` constructor itself ignores it, as Test 3 below shows).

If it is of use to anyone, I am placing a script I used to play with Decimal below.

```
from decimal import *
## Print default precision
print("Default Precision: " + str(getcontext().prec))
## Test 1. As intended by the exercise, using string values.
print("Test 1")
test01 = Decimal('0.2345') + Decimal('0.12')
## This will result in a four decimal place number: 0.3545.
print(len(str(test01))-2)
print(test01)
## Test 2
print("Test 2")
test02 = Decimal(0.2345) + Decimal(0.12)
## Result: a Decimal rounded to the default precision of 28 significant digits.
print(len(str(test02))-2)
print(test02)
## Test 3
print("Test 3")
test03 = Decimal(0.2345 + 0.12)
## Result: a Decimal carrying the float sum's full binary expansion (54 digits).
## The Decimal() constructor ignores context precision entirely.
print(len(str(test03))-2)
print(test03)
## Test 4
print("Test 4")
test04 = Decimal(str(0.2345 + 0.12))
## Converting the sum to a string first gives the float's short repr,
## not its full binary expansion.
print(len(str(test04))-2)
print(test04)
## Test 5
print("Test 5")
print("Current Precision: " + str(getcontext().prec))
test05 = Decimal(0.2345) + Decimal(0.12)
## Precision is set after the arithmetic has already run, so no change.
getcontext().prec = 4
print("Current Precision: " + str(getcontext().prec))
print(len(str(test05))-2)
print(test05)
## Reset Precision to default value.
getcontext().prec = 28
print("Current Precision: " + str(getcontext().prec))
## Test 6
print("Test 6")
## Precision is set before the arithmetic runs.
## The result is rounded to 4 significant digits.
getcontext().prec = 4
print("Current Precision: " + str(getcontext().prec))
test06 = Decimal(0.2345) + Decimal(0.12)
## Now we see why placement of precision is key.
print(len(str(test06))-2)
print(test06)
getcontext().prec = 28
```
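A tidier way to handle the reset-to-default dance at the end of that script is `localcontext()`, which scopes a precision change to a `with` block so the default is restored automatically:

```python
from decimal import Decimal, getcontext, localcontext

with localcontext() as ctx:
    ctx.prec = 4                    # applies only inside this block
    inside = Decimal(0.2345) + Decimal(0.12)

print(inside)                       # 0.3545
print(getcontext().prec)            # back to the default, 28

# Outside the block the default precision governs arithmetic again:
outside = Decimal(0.2345) + Decimal(0.12)
print(outside)                      # the long 28-digit result
```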

Thank you, much appreciated!

Greetings. Can someone help me understand why my code is not working? Thanks much!

```
from decimal import Decimal
two_decimal_points = Decimal(str(0.20 + 0.69))
print(two_decimal_points)
four_decimal_points = Decimal(str(0.5300 * 0.6500))
print(four_decimal_points)
```

It’s giving me approximately 13 decimal places.

This is a complex module to get our heads around; there is more to it than just the Decimal class. We could sidestep the bookkeeping with a wildcard import, but named imports keep the namespace tidy (the whole module is loaded either way). Below we’ve imported just the components we need…

```
>>> from decimal import Decimal, Context, setcontext, getcontext
>>> context = Context(prec=4)
>>> setcontext(context)
>>> Decimal(0.5300) * Decimal(0.6500)
Decimal('0.3445')
>>>
```

Note how the math is done on two Decimal instances, not on a single float expression. I’m assuming this calls on the `__mul__()` method of the Decimal class.
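That assumption is easy to confirm: the `*` operator on two Decimal instances dispatches to `Decimal.__mul__()`, which we can call explicitly:

```python
from decimal import Decimal

a, b = Decimal('0.53'), Decimal('0.65')
print(a * b)           # 0.3445
print(a.__mul__(b))    # the same product, calling the method directly
print(type(a * b))     # <class 'decimal.Decimal'>
```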

See below…

```
>>> Decimal(0.5300 * 0.6500)
Decimal('0.344500000000000028421709430404007434844970703125')
>>>
```

Let’s put this in dynamic terms.

```
>>> a = Decimal(0.5300)
>>> b = Decimal(0.6500)
>>> x = a * b
>>> type(x)
<class 'decimal.Decimal'>
>>> y = x.__float__() # another method of the Decimal class
>>> type(y)
<class 'float'>
>>>
```

It took some digging through `dir()` to determine what methods are available. Still going to need to do some more reading to smooth out all the wrinkles in my understanding. Not sure this will help you, but pitch in and share anything new you discover or learn.

gotcha. just needed to separate them out and call Decimal on each one. thanks so much.