Why did we use the "double" type instead of "float"?

[Lesson Page For Question](Learn C# | Codecademy)

This lesson is pretty simple stuff. I've been doing C# coding in Unity for a few weeks, but it's been heavily focused on game development, so I wanted to go back and try to get a better grasp on the basics.

In this lesson we are asked to create variables, one of them being a decimal-type variable. It's a two-decimal-place number, so I thought float should work and would be the clear choice for a small decimal number. So why do I get the question wrong unless I use double, which I think overshoots the amount of accuracy we need?

If I'm wrong, could someone explain why I'd use double instead of float here? Thanks!

Generally speaking, it allows us to refer to very small numbers and very large numbers. float is single-precision floating point; double is double precision.

Floating-point numeric types - C# reference - C# | Microsoft Learn
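A small sketch of what "precision" means in practice (the variable names here are just for illustration): a float keeps roughly 6-7 significant decimal digits, while a double keeps roughly 15-17, so the same literal survives differently in each:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // float (single precision) holds roughly 6-7 significant digits;
        // double (double precision) holds roughly 15-17.
        float f = 1.23456789f;   // too many digits for a float to keep exactly
        double d = 1.23456789;   // a double stores all of them

        Console.WriteLine(f);    // the trailing digits have been rounded away
        Console.WriteLine(d);    // prints the full value
    }
}
```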

Wait, I don't get it. So float only goes as far as, like, 1.1 for example, and double can do 1.11 and so on? If not, what do you mean by "single precision floating point"? That arrangement of words is foreign to me, so sorry if it's something I should know.

The difference between single and double precision is that double uses twice as much memory, and decimal uses twice as much as double.

single   4 bytes
double   8 bytes
decimal  16 bytes
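Those sizes can be checked directly with C#'s sizeof operator:

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof reports the storage size of each numeric type in bytes
        Console.WriteLine(sizeof(float));   // 4
        Console.WriteLine(sizeof(double));  // 8
        Console.WriteLine(sizeof(decimal)); // 16
    }
}
```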

Ahhh okay, I see, so that clears that up for me, I appreciate it. So I guess to clarify when to use what: could you give me an example of when I'd use double instead of float? For example, in the lesson we used "double weight = 65.22;". Why did we use double? In my C# game development I've used float for basically every non-whole number with no issues, to my knowledge.


It would be a guess as to why, but I suspect the author makes a habit of working that way with all floating-point numbers. The only time to really consider whether double is unnecessary would be when looking to optimize memory, and possibly speed up processing. It would clearly be speculation on my part to argue this.
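One concrete detail worth adding (not something the lesson states, just how the language itself works): in C#, a bare decimal literal such as 65.22 is a double by default, so "double weight = 65.22;" compiles as-is, while assigning the same literal to a float needs the f suffix:

```csharp
class LiteralDemo
{
    static void Main()
    {
        double weight = 65.22;    // a bare literal like 65.22 is a double literal
        // float w = 65.22;       // compile error: cannot implicitly convert double to float
        float w = 65.22f;         // the f suffix makes the literal a float

        System.Console.WriteLine(weight);
        System.Console.WriteLine(w);
    }
}
```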

Ohhhhhhh, okay wow, that makes a ton of sense. I'm just so used to using float that it really threw me off. Thank you so much for all your help. I'm gonna mark your answer as the solution, but as a send-off: to optimize memory and possibly speed, if the number is relatively small I'd use float to save memory? And the bigger it gets, use double and then decimal? Correct? But technically all 3 can be used regardless, it's just that the bigger the number, the more accuracy and memory you would need for it to work smoothly.

Yes, if memory becomes a problem. You are not in error by using a lower precision as most numbers go; there's no point using higher precision if it's not necessary. Like I suggested above, it is likely a habit for the author: that way all floats are double (for consistency, and even muscle memory means fewer typos) and there is no mixing of the nomenclature.

It won’t really be saving memory but reserving more, where needed.

I would cross that bridge when I got to it as far as worrying about memory, and it’s not about accuracy (though it is) as much as it is about precision.

Precision is the number of decimal places we wish to allow. Accuracy in science and maths is dictated by the numbers we are given from our measurements: we cannot have a result more accurate than the least precise number in the equation.

2.0 * 3.14  =>  6.28  =>  6.3
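That worked example can be written out in C# (Math.Round here just mimics reporting the result to the single decimal place of 2.0; this is an illustration, not a significant-figures library):

```csharp
using System;

class SigFigDemo
{
    static void Main()
    {
        double product = 2.0 * 3.14;               // 6.28
        Console.WriteLine(product);
        Console.WriteLine(Math.Round(product, 1)); // reported as 6.3
    }
}
```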

Fantastic, man. Such a simple concept, but I feel a lot better now that I have a general understanding of the purposes. Thanks for your time and effort!
