Discussion:
Strongly typed numeric literals
Anton Shepelev
2017-08-21 09:36:18 UTC
Permalink
Hello, all

Does anybody know why C# requires that the programmer specify the
exact type of all the numeric literals, e.g. 5.2f, 4.17m?  Are there
any drawbacks to the approach taken by Pascal, where the compiler
deduces the type of the literal suitable for the expression in which
it is used?
--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]
Arne Vajhøj
2017-08-21 13:18:51 UTC
Permalink
Post by Anton Shepelev
Does anybody know why C# requires that the programmer specify the
exact type of all the numeric literals, e.g. 5.2f, 4.17m?  Are there
any drawbacks to the approach taken by Pascal, where the compiler
deduces the type of the literal suitable for the expression in which
it is used?
To know you would probably need to ask Anders Hejlsberg.

But my guess is that method overloading is one of the reasons.

void m(float v) { }
void m(double v) { }
void m(decimal v) { }

// If the literal 1.00 took its type from the context,
// all three of these would have to be legal:
float   f = 1.00;
double  d = 1.00;
decimal dec = 1.00;

// ...and then which overload should this call pick?
m(1.00);

would be tough to make consistent.

And the introduction of:

var v = 1.00;

did not make it easier.
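
For reference, under the current rules the suffix alone decides what
var infers; a minimal sketch (plain C#, the names are just placeholders):

var a = 1.00;   // double - the default for a suffix-less real literal
var b = 1.00f;  // float
var c = 1.00m;  // decimal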

Arne
Marcel Mueller
2017-08-27 14:00:53 UTC
Permalink
Post by Anton Shepelev
Does anybody know why C# requires that the programmer specify the
exact type of all the numeric literals, e.g. 5.2f, 4.17m?
It does not. Normally the implicit conversions are sufficient.
But there are some situations where the compiler cannot guess the
right value. E.g. fractional constants like 5.3 cannot be represented
exactly as a binary floating-point value and therefore have different
approximations depending on the exact type, such as single or double
precision.
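
A minimal sketch of that effect (plain C#; the exact printed digits
depend on the runtime's formatting, but the inequality holds):

using System;

class LiteralApproximation
{
    static void Main()
    {
        float  f = 5.3f;  // nearest float  to 5.3
        double d = 5.3;   // nearest double to 5.3

        Console.WriteLine((double)f);      // roughly 5.30000019..., the float rounding widened to double
        Console.WriteLine(d);              // 5.3
        Console.WriteLine((double)f == d); // False: the two roundings differ
    }
}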
Post by Anton Shepelev
Are there any drawbacks to the approach taken by Pascal, where the
compiler deduces the type of the literal suitable for the expression
in which it is used?
Normally .NET does the same, although it does not deduce the type of
the literal; rather, it converts the constant to the appropriate type.
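
A small sketch of the conversions that do happen silently (integer
literals widen implicitly; only suffix-less real literals are fixed
to double):

long    l   = 42;   // implicit int -> long
double  d   = 42;   // implicit int -> double
decimal dec = 42;   // implicit int -> decimal
double  x   = 5.3;  // a suffix-less real literal is a double
// float   f = 5.3;  // does not compile: no implicit double -> float
// decimal n = 5.3;  // does not compile: no implicit double -> decimal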


Marcel
Anton Shepelev
2017-09-04 14:26:10 UTC
Permalink
Post by Marcel Mueller
Post by Anton Shepelev
Does anybody know why C# requires that the programmer specify the
exact type of all the numeric literals, e.g. 5.2f, 4.17m?
It does not. Normally the implicit conversions are
sufficient.
C# can't use them even in the declaration of constants whose type is
known at compile time:

const decimal NoWay = 3.58; // error: the double literal 3.58 is not implicitly convertible to decimal
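
For comparison, two forms that do compile (plain C#; the names are
just placeholders):

const decimal Works = 3.58m; // the m suffix makes the literal a decimal
const double  Fine  = 3.58;  // a suffix-less real literal is a double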
Post by Marcel Mueller
But there are some situations where the compiler cannot guess the
right value. E.g. fractional constants like 5.3 cannot be represented
exactly as a binary floating-point value and therefore have different
approximations depending on the exact type, such as single or double
precision.
Indeed, but the type is in most cases known beforehand, so the right
interpretation is obvious.
--
() ascii ribbon campaign - against html e-mail
/\ http://preview.tinyurl.com/qcy6mjc [archived]