Floating point precision in .NET

It is quite astonishing how many people are still getting confused by floating point precision...

... which is why I think I need to write a few lines about it in the context of .NET.

This short article will not discuss the well-known format in which 32-bit floating point numbers are stored and used according to the IEEE-754 specification, but rather the magic .NET performs to make such numbers look as expected (without errors).

Let's start by considering the following piece of code:

Single s = 1.22F;
Debug.Assert(s == 1.22F);
Debug.Assert(s == 1.22);

While the first assert is certainly true, the second one is false. Why is that? Well, here we are comparing a single precision floating point value (Single) with a double precision (Double) one.

For such a comparison both values have to be of the same type, which in this case is Double. Now, a Double is more precise than a Single, and the information that was lost when rounding 1.22 to the nearest Single cannot be restored by widening it back to Double. The Single closest to 1.22, widened to Double, is therefore not the same number as the Double closest to 1.22. In this unlucky case we have hit exactly such a spot, and the comparison actually fails.
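We can make the difference visible by widening both sides to Double and printing them with the round-trip format (a quick sketch; the exact digits depend on the runtime's formatting):

Double widened = 1.22F; // the Single nearest to 1.22, widened to Double
Double literal = 1.22;  // the Double nearest to 1.22

Console.WriteLine(widened.ToString("R")); // prints something like 1.2200000286102295
Console.WriteLine(literal.ToString("R")); // prints 1.22
Console.WriteLine(widened == literal);    // False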

So far, so good. If we want to debug this we face yet another problem: in the debugger both values appear to be equal (to be precise: both show 1.22). How can that be?

The answer is quite simple: in the output everything is displayed as a string (we do not have a bytes view, unfortunately). The string representation is .NET specific (while the number format is not!), and by default it rounds to fewer digits than the value actually carries. This causes some confusion.
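By default a Single is formatted with at most seven significant digits, which is exactly what hides the error; asking for nine digits exposes it (a small sketch):

Single s = 1.22F;
Console.WriteLine(s.ToString());     // 1.22 - the default format hides the error
Console.WriteLine(s.ToString("G9")); // 1.22000003 - nine digits expose the stored value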

Let's write a simple C program to see what exactly is going on (on the machine):

#include <stdio.h>

int main() {
        float s = 0.0f;
        unsigned char *p;

        while (s < 1.0f) {
                s += 0.01f;
                /* reinterpret the float's storage as four raw bytes */
                p = (unsigned char*)&s;
                /* print the value and its four bytes, concatenated without separators */
                printf("%f (%d%d%d%d)\n", s, p[0], p[1], p[2], p[3]);
        }

        return 0;
}

Great! Now let's run it ...

0.010000 (102153560)
0.020000 (1021516360)
0.030000 (14319424560)
0.040000 (102153561)
0.050000 (2042047661)
0.060000 (14219411761)
0.070000 (409214361)
0.080000 (921516361)
0.090000 (2348118461)
0.100000 (20320420461)
[...]

Well, so far no rounding errors (at least none visible in C's default representation, since printf's %f rounds to six decimal places). Smart people would now ask for more digits (with %.9f, say, we would most probably already see some rounding error), but here we do not care about this.

We will now write a similar program in C#:

// Requires System.Runtime.InteropServices for StructLayout / FieldOffset
void Main()
{
	Num num = new Num();
	float s = 0f;
	
	while (s < 1f)
	{
		s += 0.01f;
		num.Number = s;
		// Dump() is a LINQPad extension; outside LINQPad use Console.WriteLine instead
		String.Format("{0} ({1})", s, num).Dump();
	}
}

// Explicit layout overlays the four bytes on the Single - the C# analogue of a C union
[StructLayout(LayoutKind.Explicit, Pack = 1)]
struct Num
{
	[FieldOffset(0)] public Single Number;
	[FieldOffset(0)] public Byte A;
	[FieldOffset(1)] public Byte B;
	[FieldOffset(2)] public Byte C;
	[FieldOffset(3)] public Byte D;
	
	// Concatenate the byte values, matching the output of the C program
	public override String ToString()
	{
		return A.ToString() + B.ToString() + C.ToString() + D.ToString();
	}
}

We could have also used an unsafe context, but then again, in C we could have also used a union. Either way, the code does pretty much exactly the same thing as the C program.
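For completeness, the unsafe variant might look roughly like this (a minimal sketch; it needs the /unsafe compiler switch):

unsafe
{
	float s = 0.06f;
	byte* p = (byte*)&s;
	// same byte view as before, this time via a raw pointer
	Console.WriteLine("{0} ({1}{2}{3}{4})", s, p[0], p[1], p[2], p[3]);
}

What is the outcome of our C# program?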

0,01 (102153560)
0,02 (1021516360)
0,03 (14319424560)
0,04 (102153561)
0,05 (2042047661)
0,05999999 (14219411761)
0,06999999 (409214361)
0,07999999 (921516361)
0,08999999 (2348118461)
0,09999999 (20320420461)

Aha! The digit representation is different. However (and this is really important!), we get the same checksum (which is simply the concatenation of the integer values of the four bytes).

So what did we learn? Converting a value to single precision rounds it, and the result, widened back to double precision, will in general differ from the corresponding double value by some eps. So one should make sure to never use exact equality in cases where values might differ by some eps - always include some tolerance in such cases.
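Such a tolerance-based comparison could look like this (a sketch; the eps of 1e-6 is my assumption here and has to be chosen to fit the problem at hand):

const Double Eps = 1e-6; // example tolerance, not a universal constant

Single s = 1.22F;
Debug.Assert(Math.Abs(s - 1.22) < Eps); // passes, unlike the exact comparison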

If we require exact decimal values then we might want to think about using Decimal. This type works in base 10 (with 28-29 significant digits) and can store 1.22 precisely.
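A quick check (the M suffix denotes a Decimal literal):

Decimal d = 1.22M;
Debug.Assert(d == 1.22M);     // exact - Decimal stores 1.22 without rounding
Debug.Assert(d + d == 2.44M); // decimal arithmetic stays exact here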

The number of decimal digits (here we had only 2) says nothing about the achievable accuracy. Binary floating point numbers cannot perform exact decimal arithmetic, since they cannot store all decimal numbers (like 1.22) exactly.
