When to use float, double & decimal?
The decimal, double, and float types differ in how they store their values. Precision is the main difference: float is a single-precision (32-bit) binary floating-point type, double is a double-precision (64-bit) binary floating-point type, and decimal is a 128-bit decimal floating-point type.
Float - 32 bit (~7 significant digits)
Double - 64 bit (15-16 significant digits)
Decimal - 128 bit (28-29 significant digits)
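A minimal C# sketch of the precision difference (the class and variable names are only illustrative, and the exact console formatting can vary slightly between .NET versions):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // 1/3 cannot be represented exactly in any of these types,
            // but each keeps a different number of significant digits.
            float   f = 1f / 3f;   // ~7 significant digits
            double  d = 1d / 3d;   // 15-16 significant digits
            decimal m = 1m / 3m;   // 28-29 significant digits

            Console.WriteLine(f);  // ~0.3333333
            Console.WriteLine(d);  // ~0.3333333333333333
            Console.WriteLine(m);  // 0.3333333333333333333333333333
        }
    }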
Decimal
For financial applications it is better to use decimal because it offers a higher level of accuracy and makes it easier to avoid rounding errors.
Decimal is not a processor-native (intrinsic) type, so arithmetic on it is slower than on double or float. It is a 128-bit floating-point value that can represent values in the range of negative 79,228,162,514,264,337,593,543,950,335 to positive 79,228,162,514,264,337,593,543,950,335.
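As a rough illustration of why decimal suits money (a hypothetical example; the names are made up): 0.1 and 0.2 have no exact binary representation, so the double sum drifts, while the decimal sum stays exact.

    using System;

    class MoneyDemo
    {
        static void Main()
        {
            double dSum = 0.1 + 0.2;          // binary floating point: not exactly 0.3
            Console.WriteLine(dSum == 0.3);   // False

            decimal mSum = 0.1m + 0.2m;       // base-10 storage: exactly 0.3
            Console.WriteLine(mSum == 0.3m);  // True

            // The range quoted above is decimal.MaxValue.
            Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335
        }
    }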
Double
Double is probably the most commonly used data type for real values, except when handling money.
A double is a processor-native type. It is a 64-bit floating-point value that can represent values in the range of approximately negative 1.79769313486232e308 to positive 1.79769313486232e308.
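A small sketch (illustrative names) showing that real-number literals default to double in C#, and that the range quoted above corresponds to double.MaxValue:

    using System;

    class DoubleDemo
    {
        static void Main()
        {
            var gravity = 9.80665;                // a literal with no suffix is a double
            Console.WriteLine(gravity.GetType()); // System.Double

            Console.WriteLine(double.MaxValue);   // ~1.79769313486232E+308
        }
    }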
Float
It is used mostly in graphics libraries because of their very high demands for processing power, and in other situations that can tolerate rounding errors.
A float is a processor-native type. It is a 32-bit floating-point value that can represent values in the range of approximately negative 3.402823e38 to positive 3.402823e38.
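A short sketch of the ~7-digit limit (illustrative names; the exact output formatting depends on the runtime):

    using System;

    class FloatDemo
    {
        static void Main()
        {
            float f = 123456789f;   // needs the 'f' suffix; without it the literal is a double
            Console.WriteLine(f);   // rounded to about 7 significant digits (stored as 123456792)

            Console.WriteLine(float.MaxValue);  // ~3.402823E+38
        }
    }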