# Ticket #4562: mc_paste_happening.txt

File mc_paste_happening.txt, 8.1 KB (added by newbie-02, 8 weeks ago)

the paste as it happens, with the inserted nonsense:

Comparing Floating Point Numbers, 2012 Edition
Posted on February 25, 2012 by brucedawson

This post is a more carefully thought out and peer reviewed version of a floating-point comparison article I wrote many years ago. This one gives solid advice and some surprising observations about the tricky subject of comparing floating-point numbers. A compilable source file with license is available.

We’ve finally reached the point in this series that I’ve been waiting for. In this post I am going to share the most crucial piece of floating-point math knowledge that I have. Here it is:

[Floating-point] math is hard.

You just won’t believe how vastly, hugely, mind-bogglingly hard it is. I mean, you may think it’s difficult to calculate when trains from Chicago and Los Angeles will collide, but that’s just peanuts to floating-point math.

Seriously. Each time I think that I’ve wrapped my head around the subtleties and implications of floating-point math I find that I’m wrong and that there is some extra confounding factor that I had failed to consider. So, the lesson to remember is that floating-point math is always more complex than you think it is. Keep that in mind through the rest of the post where we talk about the promised topic of comparing floats, and understand that this post gives some suggestions on techniques, but no silver bullets.

Previously on this channel…

This is the fifth chapter in a long series. The first couple of posts in the series are particularly important for understanding this one. A (mostly) complete list of the other posts includes:

1: Tricks With the Floating-Point Format – an overview of the float format
2: Stupid Float Tricks – incrementing the integer representation
3: Don’t Store That in a Float – a cautionary tale about time
3b: They sure look equal… – ranting about Visual Studio’s float failings
4: Comparing Floating Point Numbers, 2012 Edition (return *this;)
5: Float Precision–From Zero to 100+ Digits – non-obvious answers to how many digits of precision a float has
6: C++ 11 std::async for Fast Float Format Finding – running tests on all floats in just a few minutes
7: Intermediate Floating-Point Precision – the surprising complexities of how expressions can be evaluated
8: Floating-point complexities – some favorite quirks of floating-point math
9: Exceptional Floating Point – using floating point exceptions for fun and profit
10: That’s Not Normal–the Performance of Odd Floats – the performance implications of infinities, NaNs, and denormals
11: Doubles are not floats, so don’t compare them – a common type of float comparison mistake
12: Float Precision Revisited: Nine Digit Float Portability – moving floats between gcc and VC++ through text
13: Floating-Point Determinism – what does it take to get bit-identical results
14: There are Only Four Billion Floats–So Test Them All! – exhaustive testing to avoid embarrassing mistakes
15: Please Calculate This Circle’s Circumference – the intersection of C++, const, and floats
16: Intel Underestimates Error Bounds by 1.3 quintillion – the headline is not an exaggeration, but it’s not as bad as it sounds
Comparing for equality

Floating point math is not exact. Simple values like 0.1 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations or the precision of intermediates can change the result. That means that comparing two floats to see if they are equal is usually not what you want. GCC even has a (well intentioned but misguided) warning for this: “warning: comparing floating point with == or != is unsafe”.
Here’s one example of the inexactness that can creep in:

float f = 0.1f;
float sum;
sum = 0;

for (int i = 0; i < 10; ++i)
    sum += f;
float product = f * 10;
printf("sum = %1.15f, mul = %1.15f, mul2 = %1.15f\n",
        sum, product, f * 10);

[inserted nonsense begins — an excerpt from an unrelated numerical-analysis text:]

    μ(x_n) − f(μ(x_n)) / f′(x_n).

If the initial interval x_0 is large enough to contain the root ξ, any subsequent interval x_n is guaranteed to contain ξ too, thanks to a combination of both the containment property and the mean-value theorem applied to f around μ(x):

    0 = f(ξ) = f(μ(x)) + (ξ − μ(x)) · f′(η),   with η ∈ x.

4.2. Error-free transformations

An immediate consequence of the properties of Section 2.8 about correcting terms is that, more often than not, the error is also an FP number that can be computed with FP operations, with no need for multiple-precision software. The sequence of operations that returns both the result of a floating-point operation and the error of that operation is called an error-free transformation (EFT). The first ideas seem to go back to Gill (1951), in the context of fixed-point arithmetic. Algorithms 1 (page 219) and 2 (below) are explicitly mentioned by Møller (1965). Algorithm 1 also appears in the summation algorithm of Kahan (1965), which is Algorithm 20 (page 274). Several EFTs ar
This code tries to calculate ‘one’ in three different ways: repeated adding, and two slight variants of multiplication. Naturally we get three different results, and only one of them is 1.0: