Krupski
March 16th, 2009, 05:24 AM
Hi all,
I just ran into something that *I* think is strange... maybe someone could shed some light on what's going on.
If I do something like this:
double n = 1e-9; // note: small number
printf("The number is %f\n", n);
...it outputs "The number is 0.000000", which is WRONG!
Now, if I force the matter like this:
double n = 1e-9;
printf("The number is %.20f\n", n);
...then it prints the proper number (albeit with lots of trailing zeros).
If I print a large POSITIVE number, the output grows accordingly without any changes to the format string. For example:
double n = 1e12; // note: large number
printf("The number is %f\n", n);
...properly prints "The number is 1000000000000.000000".
Is this normal? Why does the %f format "arbitrarily" choose to output 6 digits after the decimal point, yet "grow" as needed to display digits before the decimal point?
Thanks!
-- Roger