In C# there are three floating-point numeric types:
- `float`: single-precision, 32-bit/4-byte
- `double`: double-precision, 64-bit/8-byte
- `decimal`: 128-bit/16-byte
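A quick way to confirm those sizes (`sizeof` on the built-in numeric types is a constant expression and works in safe code):

```csharp
using System;

// Sizes in bytes of the three floating-point types.
Console.WriteLine($"float:   {sizeof(float)} bytes");   // 4
Console.WriteLine($"double:  {sizeof(double)} bytes");  // 8
Console.WriteLine($"decimal: {sizeof(decimal)} bytes"); // 16
```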
If you don't need the extra precision, will 32-bit floats perform better? Intuition suggests that the smaller type should be faster.
I tested this hypothesis with a Mandelbrot set calculation: I implemented the same algorithm twice, the only difference being the variable types:
...
// Double-precision complex number used by the baseline implementation.
public struct Complex
{
    public double Real { get; set; }
    public double Imaginary { get; set; }
    ...
}
...
// Single-precision counterpart; identical apart from the field types.
public struct ComplexF
{
    public float Real { get; set; }
    public float Imaginary { get; set; }
    ...
}
...
// Part of the calculation: maps each pixel to a point on the complex
// plane and runs the escape-time kernel for it (float version).
private static int[] MandelbrotF()
{
    var output = new int[Height * Width];
    for (int h = 0, idx = 0; h < Height; h++, idx += Width)
    {
        float cy = MinYf + h * ScaleYf;
        for (int w = 0; w < Width; w++)
        {
            float cx = MinXf + w * ScaleXf;
            output[idx + w] = Mandelbrot_0F(new ComplexF(cx, cy));
        }
    }
    return output;
}
...
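The kernel `Mandelbrot_0F` is elided above; it is the usual escape-time iteration. A minimal sketch (the iteration cap and bailout value here are illustrative placeholders, not necessarily the ones behind the numbers below):

```csharp
// Illustrative escape-time iteration for a single point c.
private static int Mandelbrot_0F(ComplexF c)
{
    const int MaxIterations = 1000; // placeholder cap
    float zr = 0f, zi = 0f;
    int i = 0;
    // Iterate z = z*z + c until |z|^2 exceeds 4 or the cap is reached.
    while (i < MaxIterations && zr * zr + zi * zi <= 4f)
    {
        float tmp = zr * zr - zi * zi + c.Real;
        zi = 2f * zr * zi + c.Imaginary;
        zr = tmp;
        i++;
    }
    return i;
}
```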
Turns out, `float` is slower. I tested it many times on different machines, and it was always behind `double`.
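The measurement harness isn't shown here; the averages and standard deviations below come from a loop along these lines (a minimal `Stopwatch`-based sketch, with a placeholder run count):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// Illustrative measurement helper: runs a benchmark several times and
// reports the mean time in milliseconds and the standard deviation as a
// percentage of the mean.
static class Timing
{
    public static (double AvgMs, double StdDevPct) Measure(Func<int[]> benchmark, int runs = 10)
    {
        var times = new double[runs];
        for (int i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            benchmark();
            sw.Stop();
            times[i] = sw.Elapsed.TotalMilliseconds;
        }

        double avg = times.Average();
        double variance = times.Sum(t => (t - avg) * (t - avg)) / runs;
        return (avg, Math.Sqrt(variance) / avg * 100.0);
    }
}
```

`Timing.Measure(MandelbrotF)` produces a `float` row; the `double` version is measured the same way.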
`dotnet run -c Release`

| Platform | Precision | Avg Time (ms) | StdDev (%) | Sum |
|---|---|---|---|---|
| Apple M1 Pro | double | 392.70 | 0.14 | 78513425 |
| Apple M1 Pro | float | 395.49 | 0.38 | 78520223 |
| Intel® Core™ i5-8257U CPU @ 1.40GHz | double | 355.99 | 13.13 | 78513425 |
| Intel® Core™ i5-8257U CPU @ 1.40GHz | float | 372.59 | 1.13 | 78520223 |
| AMD 5900x | double | 185.80 | 0.97 | 78513425 |
| AMD 5900x | float | 207.49 | 0.75 | 78520223 |
`dotnet build -c Release`

| Platform | Precision | Avg Time (ms) | StdDev (%) | Sum |
|---|---|---|---|---|
| Apple M1 Pro | double | 392.42 | 0.11 | 78513425 |
| Apple M1 Pro | float | 395.35 | 0.13 | 78520223 |
| Intel® Core™ i5-8257U CPU @ 1.40GHz | double | 327.30 | 3.82 | 78513425 |
| Intel® Core™ i5-8257U CPU @ 1.40GHz | float | 370.90 | 1.48 | 78520223 |
- Tested with .NET 7.0.307
- The Sum column is the sum of all elements of the resulting array (see the snippet below); it shows that the two types do not produce identical results
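That checksum is just a fold over the output, along the lines of:

```csharp
using System;
using System.Linq;

// Illustrative checksum: sums every iteration count produced by the
// float version; the double version is summed the same way.
long sum = MandelbrotF().Sum(v => (long)v);
Console.WriteLine(sum); // e.g. 78520223 for the float run
```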