Question about float / double / decimal

Hey there. I just started learning programming on Codecademy and decided to start with C#. In the course, I have to use WriteLine to print the weight of a dog, a fractional number, “65.22”. It should be small enough to fit in a float, and decimal should be big enough to hold it too, right? But I get an error using either of them, and it only works with double. Does anybody know why that is?

Hello @py9357380041, welcome to the forums! Can you post your code with a screenshot of the error, please?

Sure!

Same error if I use decimal; it only works with double.

I think this occurs because the default type for a fractional literal in C# is double. To create a float or decimal instead, you have to mark the literal explicitly, by adding an `f` suffix to the end of it for a float, or an `m` suffix for a decimal. Without the suffix, the compiler would have to convert the double 65.22 to a float, and since that's a lossy conversion (meaning data could be lost, because a float holds less information than a double), you have to actually tell the compiler you want it converted.
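A minimal sketch of what the compiler accepts and rejects, using the dog-weight example from the question (the variable names are just for illustration):

```csharp
using System;

// A fractional literal like 65.22 defaults to double, so this compiles as-is:
double weight = 65.22;
Console.WriteLine(weight);

// float weightF = 65.22;   // error CS0664: literal of type double cannot be
//                          // implicitly converted to float; use an 'F' suffix
float weightF = 65.22f;     // 'f' (or 'F') suffix makes the literal a float
decimal weightM = 65.22m;   // 'm' (or 'M') suffix makes the literal a decimal

Console.WriteLine(weightF);
Console.WriteLine(weightM);
```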


The Decimal, Double, and Float types differ in how they store their values. The main difference is that Float and Double are binary floating-point types, while Decimal stores its value as a decimal floating-point type. Decimals therefore have much higher precision and are usually used in monetary (financial) applications that require a high degree of accuracy. Performance-wise, however, Decimals are slower than the Double and Float types. Precision is the main difference: float is a single-precision (32-bit) floating-point type, double is a double-precision (64-bit) floating-point type, and decimal is a 128-bit decimal floating-point type.

  • Float: ~7 significant digits (32-bit)
  • Double: 15-16 significant digits (64-bit)
  • Decimal: 28-29 significant digits (128-bit)
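Those digit limits are easy to see by storing a long literal in each type; a small sketch (the exact digits printed can vary with the runtime's formatting, so no output is claimed here):

```csharp
using System;

float f = 1.23456789f;                        // only ~7 significant digits survive
double d = 1.23456789012345678;               // ~15-16 significant digits survive
decimal m = 1.2345678901234567890123456789m;  // ~28-29 significant digits survive

Console.WriteLine(f);  // trailing digits beyond float's precision are rounded away
Console.WriteLine(d);
Console.WriteLine(m);
```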

Decimals are much slower (up to 20× in some tests) than a double or float. A decimal cannot be compared with a float or double without a cast, whereas floats and doubles can be compared with each other directly. Decimals also preserve trailing zeros (1.20m and 1.2m are equal, but print differently).
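The accuracy difference shows up with simple base-10 fractions like 0.1, which the binary types cannot represent exactly but decimal can. A short sketch:

```csharp
using System;

double d = 0.1 + 0.2;     // binary floating point: the result is not exactly 0.3
decimal m = 0.1m + 0.2m;  // decimal floating point: the result is exactly 0.3

Console.WriteLine(d == 0.3);   // False
Console.WriteLine(m == 0.3m);  // True
```

This is why financial code reaches for decimal: sums of base-10 amounts come out exact instead of accumulating tiny binary rounding errors.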