Significant digits in Python

Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: almost every language has a floating-point datatype. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with numerous examples of how computer builders can better support floating-point.

Categories and Subject Descriptors: (Primary) C.0 [Computer Systems Organization]: General -- instruction set design; D.3.4 [Programming Languages]: Processors -- compilers, optimization; G.1.0 [Numerical Analysis]: General -- computer arithmetic, error analysis, numerical algorithms (Secondary)

D.2.1 [Software Engineering]: Requirements/Specifications -- languages; D.3.4 [Programming Languages]: Formal Definitions and Theory -- semantics; D.4.1 [Operating Systems]: Process Management -- synchronization

General Terms: Algorithms, Design, Languages

Additional Key Words and Phrases: denormalized number, exception, floating-point, floating-point standard, gradual underflow, guard digit, NaN, overflow, relative error, rounding error, rounding mode, ulp, underflow

Introduction

Computer system builders frequently need information about floating-point arithmetic. There are, however, remarkably few sources of detailed information about it. One of the few books on the subject, Floating-Point Computation by Pat Sterbenz, is long out of print. This paper is a tutorial on those aspects of floating-point arithmetic (floating-point hereafter) that have a direct connection to systems building. It consists of three loosely connected parts. The first part, Rounding Error, discusses the implications of using different rounding strategies for the basic operations of addition, subtraction, multiplication and division. It also contains background information on the two methods of measuring rounding error, ulps and relative error. The second part discusses the IEEE floating-point standard, which is being rapidly accepted by commercial hardware manufacturers. Included in the IEEE standard is the rounding method for basic operations. The discussion of the standard draws on the material of the first part. The third part discusses the connections between floating-point and the design of various aspects of computer systems. Topics include instruction set design, optimizing compilers and exception handling.

I have tried to avoid making statements about floating-point without also giving reasons why the statements are true, especially since the justifications involve nothing more complicated than elementary calculus. Those explanations that are not central to the main argument have been grouped into a section called "The Details," so that they can be skipped if desired. In particular, the proofs of many of the theorems appear in this section. The end of each proof is marked with the z symbol. When a proof is not included, the z appears immediately following the statement of the theorem.

Rounding Error

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation. The section Relative Error and Ulps describes how it is measured.

Since most floating-point calculations have rounding error anyway, does it matter if the basic arithmetic operations introduce a little bit more rounding error than necessary? That question is a main theme throughout this section. The section discusses guard digits, a means of reducing the error when subtracting two nearby numbers. Guard digits were considered sufficiently important by IBM that in 1968 it added a guard digit to the double precision format in the System/360 architecture (single precision already had a guard digit), and retrofitted all existing machines in the field. Two examples are given to illustrate the utility of guard digits.

The IEEE standard goes further than just requiring the use of a guard digit. It gives an algorithm for addition, subtraction, multiplication, division and square root, and requires that implementations produce the same result as that algorithm. Thus, when a program is moved from one machine to another, the results of the basic operations will be the same in every bit if both machines support the IEEE standard. This greatly simplifies the porting of programs. Other uses of this precise specification are given in later sections.

Floating-point Formats

Several different representations of real numbers have been proposed, but by far the most widely used is the floating-point representation. Floating-point representations have a base β (which is always assumed to be even) and a precision p. If β = 10 and p = 3, then the number 0.1 is represented as 1.00 × 10^-1. If β = 2 and p = 24, then the decimal number 0.1 cannot be represented exactly, but is approximately 1.10011001100110011001101 × 2^-4.
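This is easy to verify in Python, whose float is an IEEE double (β = 2, p = 53): the hexadecimal form exposes the repeating binary pattern, and the decimal module shows the exact value that is actually stored.

```python
from decimal import Decimal

# float is an IEEE double (beta = 2, p = 53). The hex form shows the
# repeating 1001... pattern; Decimal shows the exact stored value.
print(float.hex(0.1))    # 0x1.999999999999ap-4
print(Decimal(0.1))      # 0.1000000000000000055511151231257827...
```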

In general, a floating-point number will be represented as ± d.dd...d × β^e, where d.dd...d is called the significand and has p digits. More precisely, ± d0.d1 d2...d(p-1) × β^e represents the number

(1)  ±(d0 + d1 β^-1 + ... + d(p-1) β^-(p-1)) β^e,  where 0 ≤ di < β.

The term floating-point number will be used to mean a real number that can be exactly represented in the format under discussion. Two other parameters associated with floating-point representations are the largest and smallest allowable exponents, emax and emin. Since there are β^p possible significands, and emax - emin + 1 possible exponents, a floating-point number can be encoded in

⌈log2(emax - emin + 1)⌉ + ⌈log2(β^p)⌉ + 1

bits, where the final +1 is for the sign bit. The precise encoding is not important for now.

There are two reasons why a real number might not be exactly representable as a floating-point number. The most common situation is illustrated by the decimal number 0.1. Although it has a finite decimal representation, in binary it has an infinite repeating representation. Thus when β = 2, the number 0.1 lies strictly between two floating-point numbers and is exactly representable by neither of them. A less common situation is that a real number is out of range, that is, its absolute value is larger than β × β^emax or smaller than 1.0 × β^emin. Most of this paper discusses issues due to the first reason. However, numbers that are out of range will be discussed in later sections.

Floating-point representations are not necessarily unique. For example, both 0.01 × 10^1 and 1.00 × 10^-1 represent 0.1. If the leading digit is nonzero (d0 ≠ 0 in equation (1) above), then the representation is said to be normalized. The floating-point number 1.00 × 10^-1 is normalized, while 0.01 × 10^1 is not. When β = 2, p = 3, emin = -1 and emax = 2 there are 16 normalized floating-point numbers, as shown in Figure D-1. The bold hash marks correspond to numbers whose significand is 1.00. Requiring that a floating-point representation be normalized makes the representation unique. Unfortunately, this restriction makes it impossible to represent zero! A natural way to represent 0 is with 1.0 × β^(emin - 1), since this preserves the fact that the numerical ordering of nonnegative real numbers corresponds to the lexicographic ordering of their floating-point representations. When the exponent is stored in a k bit field, that means that only 2^k - 1 values are available for use as exponents, since one must be reserved to represent 0.
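As a quick sketch, the 16 normalized numbers of this toy format can be enumerated directly in Python (the variable names here are illustrative; every value is exact in binary doubles):

```python
# Normalized numbers for beta = 2, p = 3, emin = -1, emax = 2:
# significand 1.d1d2 (binary), exponent e in [emin, emax].
beta, p, emin, emax = 2, 3, -1, 2
numbers = sorted(
    (1 + d1 / beta + d2 / beta**2) * float(beta) ** e
    for d1 in (0, 1)
    for d2 in (0, 1)
    for e in range(emin, emax + 1)
)
print(len(numbers))              # 16
print(numbers[0], numbers[-1])   # 0.5 7.0
```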

Note that the × in a floating-point number is part of the notation, and different from a floating-point multiply operation. The meaning of the × symbol should be clear from the context. For example, the expression (2.5 × 10^-3) × (4.0 × 10^2) involves only a single floating-point multiplication.


FIGURE D-1 Normalized numbers when β = 2, p = 3, emin = -1, emax = 2

Relative Error and Ulps

Since rounding error is inherent in floating-point computation, it is important to have a way to measure this error. Consider the floating-point format with β = 10 and p = 3, which will be used throughout this section. If the result of a floating-point computation is 3.12 × 10^-2, and the answer when computed to infinite precision is .0314, it is clear that this is in error by 2 units in the last place. Similarly, if the real number .0314159 is represented as 3.14 × 10^-2, then it is in error by .159 units in the last place. In general, if the floating-point number d.d...d × β^e is used to represent z, then it is in error by |d.d...d - (z/β^e)| β^(p-1) units in the last place. The term ulps will be used as shorthand for "units in the last place." If the result of a calculation is the floating-point number nearest to the correct result, it still might be in error by as much as .5 ulp. Another way to measure the difference between a floating-point number and the real number it is approximating is relative error, which is simply the difference between the two numbers divided by the real number. For example, the relative error committed when approximating 3.14159 by 3.14 × 10^0 is .00159/3.14159 ≈ .0005.

To compute the relative error that corresponds to .5 ulp, observe that when a real number is approximated by the closest possible floating-point number d.dd...dd × β^e, the error can be as large as 0.00...00β' × β^e, where β' is the digit β/2, there are p units in the significand of the floating-point number, and p units of 0 in the significand of the error. This error is ((β/2)β^-p) × β^e. Since numbers of the form d.dd...dd × β^e all have the same absolute error, but have values that range between β^e and β × β^e, the relative error ranges between ((β/2)β^-p) × β^e/β^e and ((β/2)β^-p) × β^e/β^(e+1). That is,

(2)  (1/2)β^-p ≤ (1/2) ulp ≤ (β/2)β^-p

In particular, the relative error corresponding to .5 ulp can vary by a factor of β. This factor is called the wobble. Setting ε = (β/2)β^-p to the largest of the bounds in (2) above, we can say that when a real number is rounded to the closest floating-point number, the relative error is always bounded by ε, which is referred to as machine epsilon.

In the example above, the relative error was .00159/3.14159 ≈ .0005. In order to avoid such small numbers, the relative error is normally written as a factor times ε, which in this case is ε = (β/2)β^-p = 5(10)^-3 = .005. Thus the relative error would be expressed as (.00159/3.14159)/.005 ≈ 0.1ε.

To illustrate the difference between ulps and relative error, consider the real number x = 12.35. It is approximated by x̄ = 1.24 × 10^1. The error is 0.5 ulps, the relative error is 0.8ε. Next consider the computation 8x̄. The exact value is 8x = 98.8, while the computed value is 8x̄ = 9.92 × 10^1. The error is now 4.0 ulps, but the relative error is still 0.8ε. The error measured in ulps is 8 times larger, even though the relative error is the same. In general, when the base is β, a fixed relative error expressed in ulps can wobble by a factor of up to β. And conversely, as equation (2) above shows, a fixed error of .5 ulps results in a relative error that can wobble by β.
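The same experiment can be reproduced with Python's decimal module standing in for the β = 10, p = 3 format (the ulp value 0.1 is hard-coded for numbers in [10, 100)):

```python
from decimal import Decimal, getcontext

# beta = 10, p = 3: x = 12.35 is approximated by 12.4 (0.5 ulp); scaling by
# 8 leaves the relative error the same but multiplies the ulp error by 8.
getcontext().prec = 3
x, ulp = Decimal("12.35"), Decimal("0.1")   # ulp for values in [10, 100)
xbar = +x                                   # unary plus rounds to 3 digits: 12.4
ulp_err_1 = (xbar - x).copy_abs() / ulp
ulp_err_8 = (8 * xbar - Decimal("98.8")).copy_abs() / ulp   # 8x = 98.8 exactly
rel_1 = (xbar - x).copy_abs() / x
rel_8 = (8 * xbar - Decimal("98.8")).copy_abs() / Decimal("98.8")
print(ulp_err_1, ulp_err_8)   # 0.5 vs 4: ulp error grew 8-fold
print(rel_1, rel_8)           # identical relative errors
```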

The most natural way to measure rounding error is in ulps. For example, rounding to the nearest floating-point number corresponds to an error of less than or equal to .5 ulp. However, when analyzing the rounding error caused by various formulas, relative error is a better measure; a good illustration of this appears in a later section. Since ε can overestimate the effect of rounding to the nearest floating-point number by the wobble factor of β, error estimates of formulas will be tighter on machines with a small β.

When only the order of magnitude of rounding error is of interest, ulps and ε may be used interchangeably, since they differ by at most a factor of β. For example, when a floating-point number is in error by n ulps, that means that the number of contaminated digits is log_β n. If the relative error in a computation is nε, then

(3)  contaminated digits ≈ log_β n

Guard Digits

One method of computing the difference between two floating-point numbers is to compute the difference exactly and then round it to the nearest floating-point number. This is very expensive if the operands differ greatly in size. Assuming p = 3, 2.15 × 10^12 - 1.25 × 10^-5 would be calculated as

x = 2.15 × 10^12
y = .0000000000000000125 × 10^12
x - y = 2.1499999999999999875 × 10^12

which rounds to 2.15 × 10^12. Rather than using all these digits, floating-point hardware normally operates on a fixed number of digits. Suppose that the number of digits kept is p, and that when the smaller operand is shifted right, digits are simply discarded (as opposed to rounding). Then 2.15 × 10^12 - 1.25 × 10^-5 becomes

x = 2.15 × 10^12
y = 0.00 × 10^12
x - y = 2.15 × 10^12

The answer is exactly the same as if the difference had been computed exactly and then rounded. Take another example: 10.1 - 9.93. This becomes

x = 1.01 × 10^1
y = 0.99 × 10^1
x - y = .02 × 10^1

The correct answer is .17, so the computed difference is off by 30 ulps and is wrong in every digit! How bad can the error be?
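The truncating behavior described above can be sketched with the decimal module (sub_no_guard is a hypothetical helper, not part of any standard library): the smaller operand is aligned to the larger one's exponent, and digits shifted past the p-th place are simply discarded before subtracting.

```python
from decimal import Decimal, ROUND_DOWN

# p-digit subtraction with NO guard digit: truncate the smaller operand to
# the larger operand's last retained digit position, then subtract exactly.
def sub_no_guard(x, y, p=3):
    e = x.adjusted()                     # exponent of x's leading digit
    q = Decimal(1).scaleb(e - (p - 1))   # quantum of p-digit numbers at e
    y_trunc = y.quantize(q, rounding=ROUND_DOWN)
    return x - y_trunc

print(sub_no_guard(Decimal("10.1"), Decimal("9.93")))   # 0.2, not 0.17
```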

Theorem 1

Using a floating-point format with parameters β and p, and computing differences using p digits, the relative error of the result can be as large as β - 1.

Proof

A relative error of β - 1 in the expression x - y occurs when x = 1.00...0 and y = .ρρ...ρ, where ρ = β - 1. Here y has p digits (all equal to ρ). The exact difference is x - y = β^-p. However, when computing the answer using only p digits, the rightmost digit of y gets shifted off, and so the computed difference is β^(-p+1). Thus the error is β^(-p+1) - β^-p = β^-p(β - 1), and the relative error is β^-p(β - 1)/β^-p = β - 1. z

When β = 2, the relative error can be as large as the result, and when β = 10, it can be 9 times larger. Or to put it another way, when β = 2, equation (3) shows that the number of contaminated digits is log2(1/ε) = log2(2^p) = p. That is, all of the p digits in the result are wrong! Suppose that one extra digit is added to guard against this situation (a guard digit). That is, the smaller number is truncated to p + 1 digits, and then the result of the subtraction is rounded to p digits. With a guard digit, the previous example becomes

x = 1.010 × 10^1
y = 0.993 × 10^1
x - y = .017 × 10^1

and the answer is exact. With a single guard digit, the relative error of the result may be greater than ε, as in 110 - 8.59.

x = 1.10 × 10^2
y = .085 × 10^2
x - y = 1.015 × 10^2

This rounds to 102, compared with the correct answer of 101.41, for a relative error of .006, which is greater than ε = .005. In general, the relative error of the result can be only slightly larger than ε. More precisely,

Theorem 2

If x and y are floating-point numbers in a format with parameters β and p, and if subtraction is done with p + 1 digits (i.e. one guard digit), then the relative rounding error in the result is less than 2ε.

This theorem will be proven in the Details section. Addition is included in the above theorem since x and y can be positive or negative.

Cancellation

The last section can be summarized by saying that without a guard digit, the relative error committed when subtracting two nearby quantities can be very large. In other words, the evaluation of any expression containing a subtraction (or an addition of quantities with opposite signs) could result in a relative error so large that all the digits are meaningless (Theorem 1). When subtracting nearby quantities, the most significant digits in the operands match and cancel each other. There are two kinds of cancellation: catastrophic and benign.

Catastrophic cancellation occurs when the operands are subject to rounding errors. For example, in the quadratic formula, the expression b² - 4ac occurs. The quantities b² and 4ac are subject to rounding errors since they are the results of floating-point multiplications. Suppose that they are rounded to the nearest floating-point number, and so are accurate to within .5 ulp. When they are subtracted, cancellation can cause many of the accurate digits to disappear, leaving behind mainly digits contaminated by rounding error. Hence the difference might have an error of many ulps. For example, consider b = 3.34, a = 1.22, and c = 2.28. The exact value of b² - 4ac is .0292. But b² rounds to 11.2 and 4ac rounds to 11.1, hence the final answer is .1, which is an error of 70 ulps, even though 11.2 - 11.1 is exactly equal to .1. The subtraction did not introduce any error, but rather exposed the error introduced in the earlier multiplications.
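The example can be checked with the decimal module set to p = 3 digits:

```python
from decimal import Decimal, getcontext

# beta = 10, p = 3: each product is rounded to three digits, and the
# subtraction then exposes those rounding errors (exact answer: .0292).
getcontext().prec = 3
a, b, c = Decimal("1.22"), Decimal("3.34"), Decimal("2.28")
b2 = b * b          # 11.1556 rounds to 11.2
fourac = 4 * a * c  # 11.1264 rounds to 11.1
print(b2, fourac, b2 - fourac)   # 11.2 11.1 0.1
```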

Benign cancellation occurs when subtracting exactly known quantities. If x and y have no rounding error, then by Theorem 2 if the subtraction is done with a guard digit, the difference x - y has a very small relative error (less than 2ε).

A formula that exhibits catastrophic cancellation can sometimes be rearranged to eliminate the problem. Again consider the quadratic formula

(4)  r1 = (-b + √(b² - 4ac))/(2a),  r2 = (-b - √(b² - 4ac))/(2a)

When b² >> ac, then b² - 4ac does not involve a cancellation, and √(b² - 4ac) ≈ |b|. But the other addition (subtraction) in one of the formulas will have a catastrophic cancellation. To avoid this, multiply the numerator and denominator of r1 by -b - √(b² - 4ac) (and similarly for r2) to obtain

(5)  r1 = 2c/(-b - √(b² - 4ac)),  r2 = 2c/(-b + √(b² - 4ac))

If b² >> ac and b > 0, then computing r1 using formula (4) will involve a cancellation. Therefore, use formula (5) for computing r1 and (4) for r2. On the other hand, if b < 0, use (4) for computing r1 and (5) for r2.
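This selection rule is, in effect, the standard numerically stable quadratic solver. A sketch in Python using IEEE doubles (stable_roots is an illustrative helper, and the test polynomial is chosen so the roots differ by 16 orders of magnitude):

```python
import math

# Always form the NON-cancelling combination -(b + sign(b)*sqrt(d))/2,
# then obtain the other root from the product of roots, c/a = r1*r2.
def stable_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)    # assumes real roots
    q = -(b + math.copysign(d, b)) / 2
    return q / a, c / q

# x^2 - 1e8 x + 1 = 0: roots are approximately 1e8 and 1e-8.
r_big, r_small = stable_roots(1.0, -1e8, 1.0)
naive_small = (1e8 - math.sqrt(1e16 - 4)) / 2   # catastrophic cancellation
print(r_small, naive_small)   # the naive value is off by ~25%
```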

The expression x² - y² is another formula that exhibits catastrophic cancellation. It is more accurate to evaluate it as (x - y)(x + y). Unlike the quadratic formula, this improved form still has a subtraction, but it is a benign cancellation of quantities without rounding error, not a catastrophic one. By Theorem 2, the relative error in x - y is at most 2ε. The same is true of x + y. Multiplying two quantities with a small relative error results in a product with a small relative error.
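A small double-precision illustration (the particular values 10^8 + 1 and 10^8 are chosen so that the squares no longer fit in 53 bits):

```python
# x*x needs 54 bits and rounds; the subtraction then exposes that error.
# (x - y)*(x + y) subtracts the exact inputs first and stays exact here.
x, y = 100000001.0, 100000000.0
direct = x * x - y * y           # 200000000.0 after x*x rounds
factored = (x - y) * (x + y)     # 200000001.0, the exact answer
print(direct, factored)
```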

In order to avoid confusion between exact and computed values, the following notation is used. Whereas x - y denotes the exact difference of x and y, x ⊖ y denotes the computed difference (i.e., with rounding error). Similarly ⊕, ⊗, and ⊘ denote computed addition, multiplication, and division, respectively. All caps indicate the computed value of a function, as in LN(x) or SQRT(x). Lowercase functions and traditional mathematical notation denote their exact values, as in ln(x) and √x.

Although (x ⊖ y) ⊗ (x ⊕ y) is an excellent approximation to x² - y², the floating-point numbers x and y might themselves be approximations to some true quantities x̂ and ŷ. For example, x̂ and ŷ might be exactly known decimal numbers that cannot be expressed exactly in binary. In this case, even though x ⊖ y is a good approximation to x - y, it can have a huge relative error compared to the true expression x̂ - ŷ, and so the advantage of (x + y)(x - y) over x² - y² is not as dramatic. Since computing (x + y)(x - y) is about the same amount of work as computing x² - y², it is clearly the preferred form in this case. In general, however, replacing a catastrophic cancellation by a benign one is not worthwhile if the expense is large, because the input is often (but not always) an approximation. But eliminating a cancellation entirely (as in the quadratic formula) is worthwhile even if the data are not exact. Throughout this paper, it will be assumed that the floating-point inputs to an algorithm are exact and that the results are computed as accurately as possible.

The expression x² - y² is more accurate when rewritten as (x - y)(x + y) because a catastrophic cancellation is replaced with a benign one. We next present more interesting examples of formulas exhibiting catastrophic cancellation that can be rewritten to exhibit only benign cancellation.

The area of a triangle can be expressed directly in terms of the lengths of its sides a, b, and c as

(6)  A = √(s(s - a)(s - b)(s - c)),  where s = (a + b + c)/2.

Suppose the triangle is very flat; that is, a ≈ b + c. Then s ≈ a, and the term (s - a) in formula (6) subtracts two nearby numbers, one of which may have rounding error. For example, if a = 9.0 and b = c = 4.53, the correct value of s is 9.03 and A is 2.342. Even though the computed value of s (9.05) is in error by only 2 ulps, the computed value of A is 3.04, an error of 70 ulps.

There is a way to rewrite formula (6) so that it will return accurate results even for flat triangles [Kahan 1986]. It is

(7)  A = √((a + (b + c))(c - (a - b))(c + (a - b))(a + (b - c)))/4,  where a ≥ b ≥ c.

If a, b, and c do not satisfy a ≥ b ≥ c, rename them before applying (7). It is straightforward to check that the right-hand sides of (6) and (7) are algebraically identical. Using the values of a, b, and c above gives a computed area of 2.35, which is 1 ulp in error and much more accurate than the first formula.

Although formula (7) is much more accurate than (6) for this example, it would be nice to know how well (7) performs in general.

Theorem 3

The rounding error incurred when using (7) to compute the area of a triangle is at most 11ε, provided that subtraction is performed with a guard digit, ε ≤ .005, and that square roots are computed to within 1/2 ulp.
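Both formulas can be tried on the flat-triangle example above, with the decimal module standing in for the β = 10, p = 3 format (here a ≥ b ≥ c already holds, so no renaming is needed):

```python
from decimal import Decimal, getcontext

# beta = 10, p = 3; exact s = 9.03, exact area = 2.342 (rounds to 2.34).
getcontext().prec = 3
a, b, c = Decimal(9), Decimal("4.53"), Decimal("4.53")
s = (a + (b + c)) / 2                 # computed s = 9.05, off by 2 ulps
heron = (s * (s - a) * (s - b) * (s - c)).sqrt()
kahan = ((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))).sqrt() / 4
print(heron, kahan)   # 3.04 (70 ulps off) vs 2.35 (1 ulp off)
```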

The condition ε ≤ .005 is satisfied by virtually every actual floating-point format. Signed zero provides a perfect way to resolve the sign ambiguity along a branch cut: numbers of the form x + i(+0) have one sign, and numbers of the form x + i(-0) on the other side of the branch cut have the other sign. In fact, the natural formulas for computing these functions will give these results.

Back to the identity √(1/z) = 1/√z. If z = -1 = -1 + i0, then

1/z = 1/(-1 + i0) = (-1 - i0)/((-1 + i0)(-1 - i0)) = (-1 - i0)/((-1)² - 0²) = -1 + i(-0),

and so √(1/z) = √(-1 + i(-0)) = -i, while 1/√z = 1/i = -i. Thus IEEE arithmetic preserves this identity for all z. Some more sophisticated examples are given by Kahan [1987]. Although distinguishing between +0 and -0 has advantages, it can occasionally be confusing. For example, signed zero destroys the relation x = y ⟺ 1/x = 1/y, which is false when x = +0 and y = -0. However, the IEEE committee decided that the advantages of utilizing the sign of zero outweighed the disadvantages.

Denormalized Numbers

Consider normalized floating-point numbers with β = 10, p = 3, and emin = -98. The numbers x = 6.87 × 10^-97 and y = 6.81 × 10^-97 appear to be perfectly ordinary floating-point numbers, which are more than a factor of 10 larger than the smallest floating-point number 1.00 × 10^-98. They have a strange property, however: x ⊖ y = 0 even though x ≠ y! The reason is that x - y = .06 × 10^-97 = 6.0 × 10^-99 is too small to be represented as a normalized number, and so must be flushed to zero. How important is it to preserve the property

(10)  x = y ⟺ x - y = 0 ?

It is very easy to imagine writing the code fragment if (x ≠ y) then z = 1/(x - y), and much later having a program fail due to a spurious division by zero. Tracking down bugs like this is frustrating and time consuming. On a more philosophical level, computer science textbooks often point out that even though it is currently impractical to prove large programs correct, designing programs with the idea of proving them often results in better code. For example, introducing invariants is quite useful, even if they aren't going to be used as part of a proof. Floating-point code is just like any other code: it helps to have provable facts on which to depend. For example, when analyzing the triangle-area formula above, it was very helpful to know that x/2 ≤ y ≤ 2x implies x ⊖ y = x - y. Similarly, knowing that (10) is true makes writing reliable floating-point code easier. If it is only true for most numbers, it cannot be used to prove anything.

The IEEE standard uses denormalized numbers, which guarantee (10), as well as other useful relations. They are the most controversial part of the standard and probably accounted for the long delay in getting 754 approved. Most high performance hardware that claims to be IEEE compatible does not support denormalized numbers directly, but rather traps when consuming or producing denormals, and leaves it to software to simulate the IEEE standard. The idea behind denormalized numbers goes back to Goldberg [1967] and is very simple. When the exponent is emin, the significand does not have to be normalized, so that when β = 10, p = 3 and emin = -98, 1.00 × 10^-98 is no longer the smallest floating-point number, because 0.98 × 10^-98 is also a floating-point number.

There is a small snag when β = 2 and a hidden bit is being used, since a number with an exponent of emin will always have a significand greater than or equal to 1.0 because of the implicit leading bit. The solution is similar to that used to represent 0: the smallest exponent field value is given a special interpretation. More formally, if the bits in the significand field are b1, b2, ..., b(p-1), and the value of the exponent field is e, then when e > emin - 1, the number being represented is 1.b1b2...b(p-1) × 2^e, whereas when e = emin - 1, the number being represented is 0.b1b2...b(p-1) × 2^(e+1). The +1 in the exponent is needed because denormals have an exponent of emin, not emin - 1.

Recall the example of β = 10, p = 3, emin = -98, x = 6.87 × 10^-97 and y = 6.81 × 10^-97 presented at the beginning of this section. With denormals, x - y does not flush to zero but is instead represented by the denormalized number .6 × 10^-98. This behavior is called gradual underflow. It is easy to verify that (10) always holds when using gradual underflow.
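IEEE doubles implement exactly this behavior, which is easy to observe from Python by scaling the example down to the bottom of the double range:

```python
import math
import sys

tiny = sys.float_info.min        # smallest normalized double, ~2.225e-308
x, y = 6.87 * tiny, 6.81 * tiny  # ordinary normalized numbers near the bottom
diff = x - y                     # ~0.06 * tiny: below the normalized range
print(diff)                      # a denormal (subnormal), not 0.0
print(math.ulp(0.0))             # smallest denormal: 5e-324
```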


FIGURE D-2 Flush To Zero Compared With Gradual Underflow

The figure illustrates denormalized numbers. The top number line shows normalized floating-point numbers. Notice the gap between 0 and the smallest normalized number 1.0 × β^emin. If the result of a floating-point calculation falls into this gulf, it is flushed to zero. The bottom number line shows what happens when denormals are added to the set of floating-point numbers. The "gulf" is filled in, and when the result of a calculation is less than 1.0 × β^emin, it is represented by the nearest denormal. When denormalized numbers are added to the number line, the spacing between adjacent floating-point numbers varies in a regular way: adjacent spacings are either the same length or differ by a factor of β. Without denormals, the spacing abruptly changes from β^(-p+1) × β^emin to β^emin, which is a factor of β^(p-1), rather than the orderly change by a factor of β. Because of this, many algorithms that can have large relative error for normalized numbers close to the underflow threshold are well-behaved in this range when gradual underflow is used.

Without gradual underflow, the simple expression x - y can have a very large relative error for normalized inputs, as was seen above for x = 6.87 × 10^-97 and y = 6.81 × 10^-97. Large relative errors can happen even without cancellation, as the following example shows [Demmel 1984]. Consider dividing two complex numbers, a + ib and c + id. The obvious formula

(a + ib)/(c + id) = (ac + bd)/(c² + d²) + i · (bc - ad)/(c² + d²)

suffers from the problem that if either component of the denominator c + id is larger than about √(β^emax), the formula will overflow, even though the final result may be well within range. A better method of computing the quotients is to use Smith's formula:

(11)  (a + ib)/(c + id) = ((a + b(d/c)) + i(b - a(d/c)))/(c + d(d/c))   if |d| < |c|
                        = ((b + a(c/d)) + i(-a + b(c/d)))/(d + c(c/d))  if |d| ≥ |c|

Applying Smith's formula to (2 · 10^-98 + i · 10^-98)/(4 · 10^-98 + i(2 · 10^-98)) gives the correct answer of 0.5 with gradual underflow. It yields 0.4 with flush to zero, an error of 100 ulps. It is typical for denormalized numbers to guarantee error bounds for arguments all the way down to 1.0 × β^emin.
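A sketch of Smith's formula in Python (smith_div is an illustrative helper name): dividing through by the larger of |c| and |d| keeps the intermediate quantities in range.

```python
# Smith's formula: scale by whichever of c, d is larger in magnitude so
# that no intermediate product overflows or underflows unnecessarily.
def smith_div(a, b, c, d):
    if abs(d) < abs(c):
        r = d / c
        den = c + d * r
        return (a + b * r) / den, (b - a * r) / den
    else:
        r = c / d
        den = d + c * r
        return (a * r + b) / den, (b * r - a) / den

# The paper's example: (2e-98 + i*1e-98) / (4e-98 + i*2e-98) = 0.5
print(smith_div(2e-98, 1e-98, 4e-98, 2e-98))   # (0.5, 0.0)
```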

Exceptions, Flags and Trap Handlers

When an exceptional condition like division by zero or overflow occurs in IEEE arithmetic, the default is to deliver a result and continue. Typical of the default results are NaN for 0/0 and √-1, and ∞ for 1/0 and overflow. The preceding sections gave examples where proceeding from an exception with these default values was the reasonable thing to do. When any exception occurs, a status flag is also set. Implementations of the IEEE standard are required to provide users with a way to read and write the status flags. The flags are "sticky" in that once set, they remain set until explicitly cleared. Testing the flags is the only way to distinguish 1/0, which is a genuine infinity, from an overflow.
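Python's binary floats do not expose the IEEE flags directly, but the decimal module mimics this flag model and can serve as a sketch of sticky status flags:

```python
from decimal import Decimal, DivisionByZero, Inexact, localcontext

# decimal keeps one sticky flag per exception class; disabling the trap
# makes division by zero deliver Infinity and set the flag instead.
with localcontext() as ctx:
    ctx.prec = 3
    ctx.traps[DivisionByZero] = False
    third = Decimal(1) / Decimal(3)    # inexact: 0.333
    inf = Decimal(1) / Decimal(0)      # genuine infinity, flag set
    inexact_set = bool(ctx.flags[Inexact])
    divzero_set = bool(ctx.flags[DivisionByZero])
print(third, inf, inexact_set, divzero_set)
```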

Sometimes continuing execution in the face of exception conditions is not appropriate. An earlier section gave the example of x/(x² + 1). When x is large enough that x² overflows, the denominator becomes infinite, resulting in a final answer of 0, which is totally wrong. Although for this formula the problem can be solved by rewriting it as 1/(x + x⁻¹), rewriting may not always solve the problem. The IEEE standard strongly recommends that implementations allow trap handlers to be installed. Then when an exception occurs, the trap handler is invoked instead of setting the flag. The value returned by the trap handler will be used as the result of the operation. It is the responsibility of the trap handler to either clear or set the status flag; otherwise, the value of the flag is allowed to be undefined.

The IEEE standard divides exceptions into five classes: overflow, underflow, division by zero, invalid operation and inexact. There is a separate status flag for each class of exception. The meaning of the first three exceptions is self-evident. Invalid operation covers the situations listed in the table below, and any comparison that involves a NaN. The default result of an operation that causes an invalid exception is to return a NaN, but the converse is not true. When one of the operands to an operation is a NaN, the result is a NaN, but no invalid exception is raised unless the operation also satisfies one of the conditions listed there.

TABLE D-4  Exceptions in IEEE 754*

Exception       Result when traps disabled    Argument to trap handler
overflow        ±∞ or ±xmax                   round(x · 2^-α)
underflow       0, ±2^emin or denormal        round(x · 2^α)
divide by zero  ±∞                            operands
invalid         NaN                           operands
inexact         round(x)                      round(x)

*x is the exact result of the operation, α = 192 for single precision, 1536 for double, and xmax = 1.11...11 × 2^emax.

The inexact exception is raised when the result of a floating-point operation is not exact. In the β = 10, p = 3 system, 3.5 ⊗ 4.2 = 14.7 is exact, but 3.5 ⊗ 4.3 = 15.0 is not exact (since 3.5 · 4.3 = 15.05), and raises an inexact exception. A later section discusses an algorithm that uses the inexact exception. A summary of the behavior of all five exceptions is given in Table D-4 above.

There is an implementation issue connected with the fact that the inexact exception is raised so often. If floating-point hardware does not have flags of its own, but instead interrupts the operating system to signal a floating-point exception, the cost of inexact exceptions could be prohibitive. This cost can be avoided by having the status flags maintained by software. The first time an exception is raised, set the software flag for the appropriate class, and tell the floating-point hardware to mask off that class of exceptions. Then all further exceptions will run without interrupting the operating system. When a user resets that status flag, the hardware mask is re-enabled

Trap Handlers

One obvious use for trap handlers is for backward compatibility. Old codes that expect to be aborted when exceptions occur can install a trap handler that aborts the process. This is especially useful for codes with a loop like do S until (x >= 100). Since comparing a NaN to a number with <, ≤, >, ≥, or = (but not ≠) always returns false, this code will go into an infinite loop if x ever becomes a NaN.

There is a more interesting use for trap handlers that comes up when computing products such as x1 · x2 ··· xn that could potentially overflow. One solution is to use logarithms, and compute exp(log x1 + ··· + log xn) instead. The problem with this approach is that it is less accurate, and that it costs more than the simple expression, even if there is no overflow. There is another solution using trap handlers called over/underflow counting that avoids both of these problems [Sterbenz 1974].

The idea is as follows. There is a global counter initialized to zero. Whenever the partial product pk = x1 · x2 ··· xk overflows for some k, the trap handler increments the counter by one and returns the overflowed quantity with the exponent wrapped around. In IEEE 754 single precision, emax = 127, so if pk = 1.45 × 2^130, it will overflow and cause the trap handler to be called, which will wrap the exponent back into range, changing pk to 1.45 × 2^-62 (see below). Similarly, if pk underflows, the counter would be decremented, and the negative exponent would get wrapped around into a positive one. When all the multiplications are done, if the counter is zero then the final product is pn. If the counter is positive, the product overflowed; if the counter is negative, it underflowed. If none of the partial products are out of range, the trap handler is never called and the computation incurs no extra cost. Even if there are over/underflows, the calculation is more accurate than if it had been computed with logarithms, because each pk was computed from pk-1 using a full precision multiply. Barnett [1987] discusses a formula where the full accuracy of over/underflow counting turned up an error in earlier tables of that formula.

IEEE 754 specifies that when an overflow or underflow trap handler is called, it is passed the wrapped-around result as an argument. The definition of wrapped-around for overflow is that the result is computed as if to infinite precision, then divided by 2^α, and then rounded to the relevant precision. For underflow, the result is multiplied by 2^α. The exponent α is 192 for single precision and 1536 for double precision. This is why 1.45 × 2^130 was transformed into 1.45 × 2^-62 in the example above.

Rounding Modes

In the IEEE standard, rounding occurs whenever an operation has a result that is not exact, since (with the exception of binary decimal conversion) each operation is computed exactly and then rounded. By default, rounding means round toward nearest. The standard requires that three other rounding modes be provided, namely round toward 0, round toward +∞, and round toward -∞. When used with the convert to integer operation, round toward -∞ causes the convert to become the floor function, while round toward +∞ is ceiling. The rounding mode affects overflow, because when round toward 0 or round toward -∞ is in effect, an overflow of positive magnitude causes the default result to be the largest representable number, not +∞. Similarly, overflows of negative magnitude will produce the largest negative number when round toward +∞ or round toward 0 is in effect.

One application of rounding modes occurs in interval arithmetic (another is mentioned in ). When using interval arithmetic, the sum of two numbers x and y is an interval [z1, z2], where z1 is x + y rounded toward -∞, and z2 is x + y rounded toward +∞. The exact result of the addition is contained within the interval [z1, z2]. Without rounding modes, interval arithmetic is usually implemented by computing z1 = (x + y)(1 - ε) and z2 = (x + y)(1 + ε), where ε is machine epsilon. This results in overestimates for the size of the intervals. Since the result of an operation in interval arithmetic is an interval, in general the input to an operation will also be an interval. If two intervals [x1, x2] and [y1, y2] are added, the result is [z1, z2], where z1 is x1 + y1 with the rounding mode set to round toward -∞, and z2 is x2 + y2 with the rounding mode set to round toward +∞.

When a floating-point calculation is performed using interval arithmetic, the final answer is an interval that contains the exact result of the calculation. This is not very helpful if the interval turns out to be large (as it often does), since the correct answer could be anywhere in that interval. Interval arithmetic makes more sense when used in conjunction with a multiple precision floating-point package. The calculation is first performed with some precision p. If interval arithmetic suggests that the final answer may be inaccurate, the computation is redone with higher and higher precisions until the final interval is a reasonable size.

Flags

The IEEE standard has a number of flags and modes. As discussed above, there is one status flag for each of the five exceptions: underflow, overflow, division by zero, invalid operation and inexact. There are four rounding modes: round toward nearest, round toward +∞, round toward 0, and round toward -∞. It is strongly recommended that there be an enable mode bit for each of the five exceptions. This section gives some simple examples of how these modes and flags can be put to good use. A more sophisticated example is discussed in the section .

Consider writing a subroutine to compute x^n, where n is an integer. When n > 0, a simple routine like

    PositivePower(x,n) {
        while (n is even) {
            x = x*x
            n = n/2
        }
        u = x
        while (true) {
            n = n/2
            if (n==0) return u
            x = x*x
            if (n is odd) u = u*x
        }
    }

will do the job. If n < 0, then a more accurate way to compute x^n is not to call PositivePower(1/x, -n) but rather 1/PositivePower(x, -n), because the first expression multiplies n quantities each of which has a rounding error from the division (i.e., 1/x). In the second expression these are exact (i.e., x), and the final division commits just one additional rounding error. Unfortunately, there is a slight snag in this strategy. If PositivePower(x, -n) underflows, then either the underflow trap handler will be called, or else the underflow status flag will be set. This is incorrect, because if x^-n underflows, then x^n will either overflow or be in range. But since the IEEE standard gives the user access to all the flags, the subroutine can easily correct for this. It simply turns off the overflow and underflow trap enable bits and saves the overflow and underflow status bits. It then computes 1/PositivePower(x, -n). If neither the overflow nor the underflow status bit is set, it restores them together with the trap enable bits. If one of the status bits is set, it restores the flags and redoes the calculation using PositivePower(1/x, -n), which causes the correct exceptions to occur.
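
In C, the routine and the 1/PositivePower(x, -n) strategy for negative n might look like this (a sketch; the flag juggling described above is omitted):

```c
/* Binary exponentiation following the PositivePower pseudocode above. */
double positive_power(double x, long n) {    /* requires n >= 1 */
    while (n % 2 == 0) {
        x = x * x;
        n = n / 2;
    }
    double u = x;
    for (;;) {
        n = n / 2;
        if (n == 0) return u;
        x = x * x;
        if (n % 2 == 1) u = u * x;
    }
}

/* For n < 0, compute 1/PositivePower(x, -n): one rounding error from
 * the final division instead of one error per factor. */
double power(double x, long n) {
    if (n == 0) return 1.0;
    return n > 0 ? positive_power(x, n) : 1.0 / positive_power(x, -n);
}
```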

Another example of the use of flags occurs when computing arccos via the formula

arccos x = 2 arctan √((1 - x)/(1 + x)).

If arctan(∞) evaluates to π/2, then arccos(-1) will correctly evaluate to 2·arctan(∞) = π, because of infinity arithmetic. However, there is a small snag, because the computation of (1 - x)/(1 + x) will cause the divide by zero exception flag to be set, even though arccos(-1) is not exceptional. The solution to this problem is straightforward. Simply save the value of the divide by zero flag before computing arccos, and then restore its old value after the computation.

Systems Aspects

The design of almost every aspect of a computer system requires knowledge about floating-point. Computer architectures usually have floating-point instructions, compilers must generate those floating-point instructions, and the operating system must decide what to do when exception conditions are raised for those floating-point instructions. Computer system designers rarely get guidance from numerical analysis texts, which are typically aimed at users and writers of software, not at computer designers. As an example of how plausible design decisions can lead to unexpected behavior, consider the following BASIC program

    q = 3.0/7.0
    if q = 3.0/7.0 then print "Equal":
    else print "Not Equal"

When compiled and run using Borland's Turbo Basic on an IBM PC, the program prints Not Equal. This example will be analyzed in the next section.

Incidentally, some people think that the solution to such anomalies is never to compare floating-point numbers for equality, but instead to consider them equal if they are within some error bound E. This is hardly a cure-all, because it raises as many questions as it answers. What should the value of E be? If x < 0 and y > 0 are within E, should they really be considered equal, even though they have different signs? Furthermore, the relation defined by this rule, a ~ b ⇔ |a - b| < E, is not an equivalence relation, because a ~ b and b ~ c do not imply that a ~ c.
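
The non-transitivity is easy to exhibit (approx_eq is a hypothetical helper):

```c
#include <math.h>

/* "Equal within E" comparison: useful locally, but not an equivalence
 * relation, as the chained test below shows. */
int approx_eq(double a, double b, double E) {
    return fabs(a - b) < E;
}
```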

Instruction Sets

It is quite common for an algorithm to require a short burst of higher precision in order to produce accurate results. One example occurs in the quadratic formula (-b ± √(b² - 4ac))/2a. As discussed in the section , when b² ≈ 4ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula. By performing the subcalculation of b² - 4ac in double precision, half the double precision bits of the root are lost, which means that all the single precision bits are preserved.

The computation of b2 - 4ac in double precision when each of the quantities a, b, and c are in single precision is easy if there is a multiplication instruction that takes two single precision numbers and produces a double precision result. In order to produce the exactly rounded product of two p-digit numbers, a multiplier needs to generate the entire 2p bits of product, although it may throw bits away as it proceeds. Thus, hardware to compute a double precision product from single precision operands will normally be only a little more expensive than a single precision multiplier, and much cheaper than a double precision multiplier. Despite this, modern instruction sets tend to provide only instructions that produce a result of the same precision as the operands
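
The effect is easy to see with values chosen (for illustration) so that b² rounds in single precision but is exact in double:

```c
/* b*b - 4ac two ways: all-single, and with the subcalculation in double. */
float disc_single(float a, float b, float c) {
    return b*b - 4.0f*a*c;
}
float disc_mixed(float a, float b, float c) {
    return (float)((double)b * (double)b - 4.0 * (double)a * (double)c);
}
```

With a = 1, b = 10001, c = 25000000, the true discriminant is 100020001 - 100000000 = 20001, but single precision rounds b² to 100020000 and the all-single version returns 20000.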

If an instruction that combines two single precision operands to produce a double precision product was only useful for the quadratic formula, it wouldn't be worth adding to an instruction set. However, this instruction has many other uses. Consider the problem of solving a system of linear equations,

a11x1 + a12x2 + · · · + a1nxn= b1
a21x1 + a22x2 + · · · + a2nxn= b2
· · ·
an1x1 + an2x2 + · · ·+ annxn= bn

which can be written in matrix form as Ax = b, where A is the matrix of coefficients aij and b is the vector of right-hand sides.

Suppose that a solution x(1) is computed by some method, perhaps Gaussian elimination. There is a simple way to improve the accuracy of the result called iterative improvement. First compute

(12) ξ = Ax(1) - b

and then solve the system

(13) Ay = ξ

Note that if x(1) is an exact solution, then ξ is the zero vector, as is y. In general, the computation of ξ and y will incur rounding error, so Ay ≈ ξ ≈ Ax(1) - b = A(x(1) - x), where x is the (unknown) true solution. Then y ≈ x(1) - x, so an improved estimate for the solution is

(14) x(2) = x(1) - y

The three steps (12), (13), and (14) can be repeated, replacing x(1) with x(2), and x(2) with x(3). This argument that x(i+1) is more accurate than x(i) is only informal. For more information, see [Golub and Van Loan 1989].

When performing iterative improvement, ξ is a vector whose elements are the difference of nearby inexact floating-point numbers, and so can suffer from catastrophic cancellation. Thus iterative improvement is not very useful unless ξ = Ax(1) - b is computed in double precision. Once again, this is a case of computing the product of two single precision numbers (A and x(1)), where the full double precision result is needed.
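
A sketch for a 2×2 system (hypothetical helpers; a real implementation would factor A once and iterate): the solve runs in single precision, but the residual Ax(1) - b uses double precision products of single precision data, which is exactly the instruction discussed above.

```c
#include <math.h>

/* Solve the 2x2 system (A stored row-major) by Cramer's rule in
 * single precision. */
void solve2f(const float A[4], const float b[2], float x[2]) {
    float det = A[0]*A[3] - A[1]*A[2];
    x[0] = (b[0]*A[3] - b[1]*A[1]) / det;
    x[1] = (A[0]*b[1] - A[2]*b[0]) / det;
}

/* One step of iterative improvement: residual in double precision,
 * then x(2) = x(1) - y where A y = residual. */
void improve2f(const float A[4], const float b[2], float x[2]) {
    double r0 = (double)A[0]*x[0] + (double)A[1]*x[1] - b[0];
    double r1 = (double)A[2]*x[0] + (double)A[3]*x[1] - b[1];
    float rf[2] = {(float)r0, (float)r1};
    float y[2];
    solve2f(A, rf, y);
    x[0] -= y[0];
    x[1] -= y[1];
}
```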

To summarize, instructions that multiply two floating-point numbers and return a product with twice the precision of the operands make a useful addition to a floating-point instruction set. Some of the implications of this for compilers are discussed in the next section.

Languages and Compilers

The interaction of compilers and floating-point is discussed in Farnum [1988], and much of the discussion in this section is taken from that paper.

Ambiguity

Ideally, a language definition should define the semantics of the language precisely enough to prove statements about programs. While this is usually true for the integer part of a language, language definitions often have a large grey area when it comes to floating-point. Perhaps this is due to the fact that many language designers believe that nothing can be proven about floating-point, since it entails rounding error. If so, the previous sections have demonstrated the fallacy in this reasoning. This section discusses some common grey areas in language definitions, including suggestions about how to deal with them.

Remarkably enough, some languages don't clearly specify that if x is a floating-point variable (with say a value of 3.0/10.0), then every occurrence of (say) 10.0*x must have the same value. For example Ada, which is based on Brown's model, seems to imply that floating-point arithmetic only has to satisfy Brown's axioms, and thus expressions can have one of many possible values. Thinking about floating-point in this fuzzy way stands in sharp contrast to the IEEE model, where the result of each floating-point operation is precisely defined. In the IEEE model, we can prove that (3.0/10.0)*10.0 evaluates to 3 (Theorem 7). In Brown's model, we cannot.

Another ambiguity in most language definitions concerns what happens on overflow, underflow and other exceptions. The IEEE standard precisely specifies the behavior of exceptions, and so languages that use the standard as a model can avoid any ambiguity on this point.

Another grey area concerns the interpretation of parentheses. Due to roundoff errors, the associative laws of algebra do not necessarily hold for floating-point numbers. For example, the expression (x+y)+z has a totally different answer than x+(y+z) when x = 10^30, y = -10^30 and z = 1 (it is 1 in the former case, 0 in the latter). The importance of preserving parentheses cannot be overemphasized. The algorithms presented in theorems 3, 4 and 6 all depend on it. For example, in Theorem 6, the formula xh = mx - (mx - x) would reduce to xh = x if it weren't for parentheses, thereby destroying the entire algorithm. A language definition that does not require parentheses to be honored is useless for floating-point calculations.

Subexpression evaluation is imprecisely defined in many languages. Suppose that ds is double precision, but x and y are single precision. Then in the expression ds = x*y, is the product performed in single or double precision? Another example: in x + m/n, where m and n are integers, is the division an integer operation or a floating-point one? There are two ways to deal with this problem, neither of which is completely satisfactory. The first is to require that all variables in an expression have the same type. This is the simplest solution, but has some drawbacks. First of all, languages like Pascal that have subrange types allow mixing subrange variables with integer variables, so it is somewhat bizarre to prohibit mixing single and double precision variables. Another problem concerns constants. In the expression 0.1*x, most languages interpret 0.1 to be a single precision constant. Now suppose the programmer decides to change the declaration of all the floating-point variables from single to double precision. If 0.1 is still treated as a single precision constant, then there will be a compile time error. The programmer will have to hunt down and change every floating-point constant.

The second approach is to allow mixed expressions, in which case rules for subexpression evaluation must be provided. There are a number of guiding examples. The original definition of C required that every floating-point expression be computed in double precision [Kernighan and Ritchie 1978]. This leads to anomalies like the example at the beginning of this section. The expression 3.0/7.0 is computed in double precision, but if q is a single-precision variable, the quotient is rounded to single precision for storage. Since 3/7 is a repeating binary fraction, its computed value in double precision differs from its stored value in single precision. Thus the comparison q = 3/7 fails. This suggests that computing every expression in the highest precision available is not a good rule.

Another guiding example is inner products. If the inner product has thousands of terms, the rounding error in the sum can become substantial. One way to reduce this rounding error is to accumulate the sums in double precision (this will be discussed in more detail in the section ). If sum is a double precision variable, and x[] and y[] are single precision arrays, then the inner product loop will look like sum = sum + x[i]*y[i]. If the multiplication is done in single precision, then much of the advantage of double precision accumulation is lost, because the product is truncated to single precision just before being added to a double precision variable.
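
The loss is easy to demonstrate with a product that rounds in single precision (values chosen for illustration: 10001 × 10001 = 100020001, which single precision rounds to 100020000):

```c
/* Accumulate x*y n times into a double, once with a single precision
 * multiply and once with a double precision multiply. */
double accumulation_gap(int n) {
    float x = 10001.0f, y = 10001.0f;
    double s_single = 0.0, s_double = 0.0;
    for (int i = 0; i < n; i++) {
        s_single += x * y;            /* product truncated to single */
        s_double += (double)x * y;    /* product exact in double */
    }
    return s_double - s_single;       /* grows by 1.0 per iteration here */
}
```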

A rule that covers both of the previous two examples is to compute an expression in the highest precision of any variable that occurs in that expression. Then q = 3.0/7.0 will be computed entirely in single precision and will have the boolean value true, whereas sum = sum + x[i]*y[i] will be computed in double precision, gaining the full advantage of double precision accumulation. However, this rule is too simplistic to cover all cases cleanly. If dx and dy are double precision variables, the expression y = x + single(dx - dy) contains a double precision variable, but performing the sum in double precision would be pointless, because both operands are single precision, as is the result.

A more sophisticated subexpression evaluation rule is as follows. First assign each operation a tentative precision, which is the maximum of the precisions of its operands. This assignment has to be carried out from the leaves to the root of the expression tree. Then perform a second pass from the root to the leaves. In this pass, assign to each operation the maximum of the tentative precision and the precision expected by the parent. In the case of q = 3.0/7.0, every leaf is single precision, so all the operations are done in single precision. In the case of sum = sum + x[i]*y[i], the tentative precision of the multiply operation is single precision, but in the second pass it gets promoted to double precision, because its parent operation expects a double precision operand. And in y = x + single(dx - dy), the addition is done in single precision. Farnum [1988] presents evidence that this algorithm is not difficult to implement.
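
The two passes can be sketched over a toy expression tree (hypothetical Node structure; 1 = single, 2 = double):

```c
#include <stddef.h>

typedef struct Node { struct Node *l, *r; int prec; } Node;

/* Pass 1, leaves to root: tentative precision = max of operand
 * precisions. Leaves keep their declared precision. */
int bottom_up(Node *n) {
    if (n->l != NULL) {
        int pl = bottom_up(n->l), pr = bottom_up(n->r);
        n->prec = pl > pr ? pl : pr;
    }
    return n->prec;
}

/* Pass 2, root to leaves: promote each operation to what its parent
 * expects. */
void top_down(Node *n, int expected) {
    if (n->prec < expected) n->prec = expected;
    if (n->l != NULL) {
        top_down(n->l, n->prec);
        top_down(n->r, n->prec);
    }
}
```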

The disadvantage of this rule is that the evaluation of a subexpression depends on the expression in which it is embedded. This can have some annoying consequences. For example, suppose you are debugging a program and want to know the value of a subexpression. You cannot simply type the subexpression to the debugger and ask it to be evaluated, because the value of the subexpression in the program depends on the expression it is embedded in. A final comment on subexpressions: since converting decimal constants to binary is an operation, the evaluation rule also affects the interpretation of decimal constants. This is especially important for constants like 0.1 which are not exactly representable in binary.

Another potential grey area occurs when a language includes exponentiation as one of its built-in operations. Unlike the basic arithmetic operations, the value of exponentiation is not always obvious [Kahan and Coonen 1982]. If ** is the exponentiation operator, then (-3)**3 certainly has the value -27. However, (-3.0)**3.0 is problematical. If the ** operator checks for integer powers, it would compute (-3.0)**3.0 as -3.0³ = -27. On the other hand, if the formula x^y = e^(y log x) is used to define ** for real arguments, then depending on the log function, the result could be a NaN (using the natural definition of log(x) = NaN when x < 0). If the FORTRAN CLOG function is used however, then the answer will be -27, because the ANSI FORTRAN standard defines CLOG(-3.0) to be iπ + log 3 [ANSI 1978]. The programming language Ada avoids this problem by only defining exponentiation for integer powers, while ANSI FORTRAN prohibits raising a negative number to a real power.

In fact, the FORTRAN standard says that

Any arithmetic operation whose result is not mathematically defined is prohibited.

Unfortunately, with the introduction of ±∞ by the IEEE standard, the meaning of not mathematically defined is no longer totally clear cut. One definition might be to use the method shown in section . For example, to determine the value of a^b, consider non-constant analytic functions f and g with the property that f(x) → a and g(x) → b as x → 0. If f(x)^g(x) always approaches the same limit, then this should be the value of a^b. This definition would set 2^∞ = ∞, which seems quite reasonable. In the case of 1.0^∞, when f(x) = 1 and g(x) = 1/x the limit approaches 1, but when f(x) = 1 - x and g(x) = 1/x the limit is e^-1. So 1.0^∞ should be a NaN. In the case of 0^0, f(x)^g(x) = e^(g(x) log f(x)). Since f and g are analytic and take on the value 0 at 0, f(x) = a1x + a2x² + · · · and g(x) = b1x + b2x² + · · ·. Thus lim(x→0) g(x) log f(x) = lim(x→0) x log(x(a1 + a2x + · · ·)) = lim(x→0) x log(a1x) = 0. So f(x)^g(x) → e⁰ = 1 for all f and g, which means that 0⁰ = 1. Using this definition would unambiguously define the exponential function for all arguments, and in particular would define (-3.0)**3.0 to be -27.

The IEEE Standard

The section discussed many of the features of the IEEE standard. However, the IEEE standard says nothing about how these features are to be accessed from a programming language. Thus, there is usually a mismatch between floating-point hardware that supports the standard and programming languages like C, Pascal or FORTRAN. Some of the IEEE capabilities can be accessed through a library of subroutine calls. For example the IEEE standard requires that square root be exactly rounded, and the square root function is often implemented directly in hardware. This functionality is easily accessed via a library square root routine. However, other aspects of the standard are not so easily implemented as subroutines. For example, most computer languages specify at most two floating-point types, while the IEEE standard has four different precisions (although the recommended configurations are single plus single-extended or single, double, and double-extended). Infinity provides another example. Constants to represent ±∞ could be supplied by a subroutine. But that might make them unusable in places that require constant expressions, such as the initializer of a constant variable.

A more subtle situation is manipulating the state associated with a computation, where the state consists of the rounding modes, trap enable bits, trap handlers and exception flags. One approach is to provide subroutines for reading and writing the state. In addition, a single call that can atomically set a new value and return the old value is often useful. As the examples in the section show, a very common pattern of modifying IEEE state is to change it only within the scope of a block or subroutine. Thus the burden is on the programmer to find each exit from the block, and make sure the state is restored. Language support for setting the state precisely in the scope of a block would be very useful here. Modula-3 is one language that implements this idea for trap handlers [Nelson 1991]

There are a number of minor points that need to be considered when implementing the IEEE standard in a language. Since x - x = +0 for all x, (+0) - (+0) = +0. However, -(+0) = -0, thus -x should not be defined as 0 - x. The introduction of NaNs can be confusing, because a NaN is never equal to any other number (including another NaN), so x = x is no longer always true. In fact, the expression x ≠ x is the simplest way to test for a NaN if the IEEE recommended function isnan is not provided. Furthermore, NaNs are unordered with respect to all other numbers, so x ≤ y cannot be defined as not x > y. Since the introduction of NaNs causes floating-point numbers to become partially ordered, a compare function that returns one of <, =, >, or unordered can make it easier for the programmer to deal with comparisons.

Although the IEEE standard defines the basic floating-point operations to return a NaN if any operand is a NaN, this might not always be the best definition for compound operations. For example when computing the appropriate scale factor to use in plotting a graph, the maximum of a set of values must be computed. In this case it makes sense for the max operation to simply ignore NaNs.
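
Both points are easy to express in C (hypothetical helpers; C99's own fmax already ignores a NaN operand):

```c
#include <math.h>
#include <stddef.h>

/* x != x is true exactly when x is a NaN. */
int is_nan(double x) {
    return x != x;
}

/* Max for plot scaling: NaN elements are simply skipped. */
double max_ignoring_nan(const double *a, size_t n) {
    double m = -HUGE_VAL;
    for (size_t i = 0; i < n; i++)
        if (!is_nan(a[i]) && a[i] > m)
            m = a[i];
    return m;
}
```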

Finally, rounding can be a problem. The IEEE standard defines rounding very precisely, and it depends on the current value of the rounding modes. This sometimes conflicts with the definition of implicit rounding in type conversions or the explicit round function in languages. This means that programs which wish to use IEEE rounding can't use the natural language primitives, and conversely the language primitives will be inefficient to implement on the ever increasing number of IEEE machines.

Optimizers

Compiler texts tend to ignore the subject of floating-point. For example Aho et al. [1986] mentions replacing x/2.0 with x*0.5, leading the reader to assume that x/10.0 should be replaced by 0.1*x. However, these two expressions do not have the same semantics on a binary machine, because 0.1 cannot be represented exactly in binary. This textbook also suggests replacing x*y - x*z by x*(y - z), even though we have seen that these two expressions can have quite different values when y ≈ z. Although it does qualify the statement that any algebraic identity can be used when optimizing code by noting that optimizers should not violate the language definition, it leaves the impression that floating-point semantics are not very important. Whether or not the language standard specifies that parentheses must be honored, (x+y)+z can have a totally different answer than x+(y+z), as discussed above. There is a problem closely related to preserving parentheses that is illustrated by the following code

    eps = 1;
    do eps = 0.5*eps; while (eps + 1 > 1);

This is designed to give an estimate for machine epsilon. If an optimizing compiler notices that eps + 1 > 1 ⇔ eps > 0, the program will be changed completely. Instead of computing the smallest number x such that 1 ⊕ x is still greater than 1 (x ≈ ε ≈ β^-p), it will compute the largest number x for which x/2 is rounded to 0 (x ≈ β^emin). Avoiding this kind of "optimization" is so important that it is worth presenting one more very useful algorithm that is totally ruined by it.

Many problems, such as numerical integration and the numerical solution of differential equations, involve computing sums with many terms. Because each addition can potentially introduce an error as large as 0.5 ulp, a sum involving thousands of terms can have quite a bit of rounding error. A simple way to correct for this is to store the partial summand in a double precision variable and to perform each addition using double precision. If the calculation is being done in single precision, performing the sum in double precision is easy on most computer systems. However, if the calculation is already being done in double precision, doubling the precision is not so simple. One method that is sometimes advocated is to sort the numbers and add them from smallest to largest. However, there is a much more efficient method which dramatically improves the accuracy of sums, namely

Theorem 8 (Kahan Summation Formula)

Suppose that S = x1 + x2 + · · · + xN is computed using the following algorithm

    S = X[1];
    C = 0;
    for j = 2 to N {
        Y = X[j] - C;
        T = S + Y;
        C = (T - S) - Y;
        S = T;
    }

Then the computed sum S is equal to Σ xj(1 + δj) + O(Nε²) Σ |xj|, where |δj| ≤ 2ε.

Using the naive formula, the computed sum is equal to Σ xj(1 + δj) where |δj| < (n - j)ε.

An optimizer that believed floating-point arithmetic obeyed the laws of algebra would conclude that C = [T-S] - Y = [(S+Y)-S] - Y = 0, rendering the algorithm completely useless. These examples can be summarized by saying that optimizers should be extremely cautious when applying algebraic identities that hold for the mathematical real numbers to expressions involving floating-point variables.
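
A C sketch of the formula; the volatile qualifier is one blunt way to discourage an optimizer from simplifying (T - S) - Y to zero (compiler flags like -ffast-math would still break it):

```c
/* Kahan (compensated) summation of x[0..n-1], n >= 1. */
double kahan_sum(const double *x, int n) {
    volatile double s = x[0], c = 0.0;   /* s: running sum, c: correction */
    for (int j = 1; j < n; j++) {
        double y = x[j] - c;
        double t = s + y;
        c = (t - s) - y;     /* the low-order bits lost in s + y */
        s = t;
    }
    return s;
}
```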

Another way that optimizers can change the semantics of floating-point code involves constants. In the expression 1.0E-40*x, there is an implicit decimal to binary conversion operation that converts the decimal number to a binary constant. Because this constant cannot be represented exactly in binary, the inexact exception should be raised. In addition, the underflow flag should be set if the expression is evaluated in single precision. Since the constant is inexact, its exact conversion to binary depends on the current value of the IEEE rounding modes. Thus an optimizer that converts 1.0E-40 to binary at compile time would be changing the semantics of the program. However, constants like 27.5 which are exactly representable in the smallest available precision can be safely converted at compile time, since they are always exact, cannot raise any exception, and are unaffected by the rounding modes. Constants that are intended to be converted at compile time should be done with a constant declaration, such as const pi = 3.14159265.

Common subexpression elimination is another example of an optimization that can change floating-point semantics, as illustrated by the following code

    C = A*B;
    RndMode = Up
    D = A*B;

Although A*B can appear to be a common subexpression, it is not, because the rounding mode is different at the two evaluation sites. Three final examples: x = x cannot be replaced by the boolean constant true, because it fails when x is a NaN; -x = 0 - x fails for x = +0; and x < y is not the opposite of x ≥ y, because NaNs are neither greater than nor less than ordinary floating-point numbers.

Despite these examples, there are useful optimizations that can be done on floating-point code. First of all, there are algebraic identities that are valid for floating-point numbers. Some examples in IEEE arithmetic are x + y = y + x, 2 × x = x + x, 1 × x = x, and 0.5 × x = x/2. However, even these simple identities can fail on a few machines such as CDC and Cray supercomputers. Instruction scheduling and in-line procedure substitution are two other potentially useful optimizations.

As a final example, consider the expression dx = x*y, where x and y are single precision variables, and dx is double precision. On machines that have an instruction that multiplies two single precision numbers to produce a double precision number, dx = x*y can get mapped to that instruction, rather than compiled to a series of instructions that convert the operands to double and then perform a double to double precision multiply.

Some compiler writers view restrictions which prohibit converting (x + y) + z to x + (y + z) as irrelevant, of interest only to programmers who use unportable tricks. Perhaps they have in mind that floating-point numbers model real numbers and should obey the same laws that real numbers do. The problem with real number semantics is that they are extremely expensive to implement. Every time two n bit numbers are multiplied, the product will have 2n bits. Every time two n bit numbers with widely spaced exponents are added, the number of bits in the sum is n + the space between the exponents. The sum could have up to (emax - emin) + n bits, or roughly 2·emax + n bits. An algorithm that involves thousands of operations (such as solving a linear system) will soon be operating on numbers with many significant bits, and be hopelessly slow. The implementation of library functions such as sin and cos is even more difficult, because the value of these transcendental functions aren't rational numbers. Exact integer arithmetic is often provided by lisp systems and is handy for some problems. However, exact floating-point arithmetic is rarely useful

The fact is that there are useful algorithms (like the Kahan summation formula) that exploit the fact that (x + y) + z ≠ x + (y + z), and work whenever the bound

    a ⊕ b = (a + b)(1 + δ)

holds (as well as similar bounds for -, × and /). Since these bounds hold for almost all commercial hardware, it would be foolish for numerical programmers to ignore such algorithms, and it would be irresponsible for compiler writers to destroy these algorithms by pretending that floating-point variables have real number semantics.

Exception Handling

The topics discussed up to now have primarily concerned systems implications of accuracy and precision. Trap handlers also raise some interesting systems issues. The IEEE standard strongly recommends that users be able to specify a trap handler for each of the five classes of exceptions, and the section gave some applications of user defined trap handlers. In the case of invalid operation and division by zero exceptions, the handler should be provided with the operands, otherwise, with the exactly rounded result. Depending on the programming language being used, the trap handler might be able to access other variables in the program as well. For all exceptions, the trap handler must be able to identify what operation was being performed and the precision of its destination.

The IEEE standard assumes that operations are conceptually serial and that when an interrupt occurs, it is possible to identify the operation and its operands. On machines which have pipelining or multiple arithmetic units, when an exception occurs, it may not be enough to simply have the trap handler examine the program counter. Hardware support for identifying exactly which operation trapped may be necessary

Another problem is illustrated by the following program fragment

x = y*z;
z = x*w;
a = b + c;
d = a/x;

Suppose the second multiply raises an exception, and the trap handler wants to use the value of a. On hardware that can do an add and multiply in parallel, an optimizer would probably move the addition operation ahead of the second multiply, so that the add can proceed in parallel with the first multiply. Thus when the second multiply traps, a = b + c has already been executed, potentially changing the result of a. It would not be reasonable for a compiler to avoid this kind of optimization, because every floating-point operation can potentially trap, and thus virtually all instruction scheduling optimizations would be eliminated. This problem can be avoided by prohibiting trap handlers from accessing any variables of the program directly. Instead, the handler can be given the operands or result as an argument.

But there are still problems. In the fragment

x = y*z;
z = a + b;

the two instructions might well be executed in parallel. If the multiply traps, its argument z could already have been overwritten by the addition, especially since addition is usually faster than multiply. Computer systems that support the IEEE standard must provide some way to save the value of z, either in hardware or by having the compiler avoid such a situation in the first place.

W. Kahan has proposed using presubstitution instead of trap handlers to avoid these problems. In this method, the user specifies an exception and the value he wants to be used as the result when the exception occurs. As an example, suppose that in code for computing (sin x)/x, the user decides that x = 0 is so rare that it would improve performance to avoid a test for x = 0, and instead handle this case when a 0/0 trap occurs. Using IEEE trap handlers, the user would write a handler that returns a value of 1 and install it before computing sin x/x. Using presubstitution, the user would specify that when an invalid operation occurs, the value 1 should be used. Kahan calls this presubstitution, because the value to be used must be specified before the exception occurs. When using trap handlers, the value to be returned can be computed when the trap occurs
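
The difference between the two styles can be sketched in Python, using the language's ZeroDivisionError as a loose stand-in for an IEEE invalid-operation trap (the function names here are illustrative, not part of any standard API):

```python
import math

def sinc_trap_style(x):
    """'Trap handler' style: the replacement value is computed when the
    exception actually occurs."""
    try:
        return math.sin(x) / x
    except ZeroDivisionError:
        return 1.0

# 'Presubstitution' style: the replacement value for each exception class
# is chosen before any computation happens, like a table indexed by the
# exception type.
PRESUBSTITUTED = {ZeroDivisionError: 1.0}

def sinc_presub_style(x):
    try:
        return math.sin(x) / x
    except tuple(PRESUBSTITUTED) as exc:
        return PRESUBSTITUTED[type(exc)]

print(sinc_trap_style(0.0), sinc_presub_style(0.0))
```

Both return 1 at x = 0; the difference is purely in when the substituted value is determined, which is what makes presubstitution easy to implement in hardware.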

The advantage of presubstitution is that it has a straightforward hardware implementation. As soon as the type of exception has been determined, it can be used to index a table which contains the desired result of the operation. Although presubstitution has some attractive attributes, the widespread acceptance of the IEEE standard makes it unlikely to be widely implemented by hardware manufacturers

The Details

A number of claims have been made in this paper concerning properties of floating-point arithmetic. We now proceed to show that floating-point is not black magic, but rather is a straightforward subject whose claims can be verified mathematically. This section is divided into three parts. The first part presents an introduction to error analysis, and provides the details for the earlier section on rounding error. The second part explores binary to decimal conversion, filling in some gaps from the earlier discussion. The third part discusses the Kahan summation formula, which was used as an example earlier in the paper.

Rounding Error

In the discussion of rounding error, it was stated that a single guard digit is enough to guarantee that addition and subtraction will always be accurate (Theorem 2). We now proceed to verify this fact. Theorem 2 has two parts, one for subtraction and one for addition. The part for subtraction is

Theorem 9

If x and y are positive floating-point numbers in a format with parameters β and p, and if subtraction is done with p + 1 digits (i.e. one guard digit), then the relative rounding error in the result is less than 2ε.

Proof

Interchange x and y if necessary so that x > y. It is also harmless to scale x and y so that x is represented by x0.x1 ... x(p-1) × β^0. If y is represented as y0.y1 ... y(p-1), then the difference is exact. If y is represented as 0.y1 ... yp, then the guard digit ensures that the computed difference will be the exact difference rounded to a floating-point number, so the rounding error is at most ε. In general, let y = 0.0 ... 0y(k+1) ... y(k+p) and let ȳ be y truncated to p + 1 digits. Then

(15) y - ȳ < (β - 1)(β^(-p-1) + β^(-p-2) + ... + β^(-p-k)).

From the definition of guard digit, the computed value of x - y is x - ȳ rounded to be a floating-point number, that is, (x - ȳ) + δ, where the rounding error δ satisfies

(16) |δ| ≤ (β/2)β^(-p).

The exact difference is x - y, so the error is (x - y) - (x - ȳ + δ) = ȳ - y + δ. There are three cases. If x - y ≥ 1 then the relative error is bounded by

(17) (ȳ - y + |δ|)/1 ≤ β^(-p)[(β - 1)(β^(-1) + ... + β^(-k)) + β/2] < β^(-p)(1 + β/2).

Secondly, if x - ȳ < 1, then δ = 0. Since the smallest that x - y can be is

1.0 ... 0 - 0.0 ... 0ρ ... ρ > (β - 1)(β^(-1) + ... + β^(-k)), where ρ = β - 1,

in this case the relative error is bounded by

(18) (ȳ - y)/((β - 1)(β^(-1) + ... + β^(-k))) < β^(-p).

The final case is when x - y < 1 but x - ȳ ≥ 1. The only way this could happen is if x - ȳ = 1, in which case δ = 0. But if δ = 0, then (18) applies, so that again the relative error is bounded by β^(-p) < β^(-p)(1 + β/2). ∎

When β = 2, the bound is exactly 2ε, and this bound is achieved for x = 1 + 2^(2-p) and y = 2^(1-p) - 2^(1-2p) in the limit as p → ∞. When adding numbers of the same sign, a guard digit is not necessary to achieve good accuracy, as the following result shows.

Theorem 10

If x ≥ 0 and y ≥ 0, then the relative error in computing x + y is at most 2ε, even if no guard digits are used.

Proof

The algorithm for addition with k guard digits is similar to that for subtraction. If x ≥ y, shift y right until the radix points of x and y are aligned. Discard any digits shifted past the p + k position. Compute the sum of these two p + k digit numbers exactly. Then round to p digits. We will verify the theorem when no guard digits are used; the general case is similar. There is no loss of generality in assuming that x ≥ y ≥ 0 and that x is scaled to be of the form d.dd ... d × β^0. First, assume there is no carry out. Then the digits shifted off the end of y have a value less than β^(-p+1), and the sum is at least 1, so the relative error is less than β^(-p+1)/1 = 2ε. If there is a carry out, then the error from shifting must be added to the rounding error of (1/2)β^(-p+2). The sum is at least β, so the relative error is less than (β^(-p+1) + (1/2)β^(-p+2))/β = (1 + β/2)β^(-p) ≤ 2ε. ∎

It is obvious that combining these two theorems gives Theorem 2. Theorem 2 gives the relative error for performing one operation. Comparing the rounding error of x² - y² and (x + y)(x - y) requires knowing the relative error of multiple operations. The relative error of x ⊖ y is δ1 = [(x ⊖ y) - (x - y)] / (x - y), which satisfies |δ1| ≤ 2ε. Or to write it another way,

(19) x ⊖ y = (x - y)(1 + δ1), |δ1| ≤ 2ε.

Similarly,

(20) x ⊕ y = (x + y)(1 + δ2), |δ2| ≤ 2ε.

Assuming that multiplication is performed by computing the exact product and then rounding, the relative error is at most .5 ulp, so

(21) u ⊗ v = uv(1 + δ3), |δ3| ≤ ε

for any floating-point numbers u and v. Putting these three equations together (letting u = x ⊖ y and v = x ⊕ y) gives

(22) (x ⊖ y) ⊗ (x ⊕ y) = (x - y)(1 + δ1)(x + y)(1 + δ2)(1 + δ3).

So the relative error incurred when computing (x - y)(x + y) is

(23) [(x ⊖ y) ⊗ (x ⊕ y) - (x² - y²)] / (x² - y²) = (1 + δ1)(1 + δ2)(1 + δ3) - 1.

This relative error is equal to δ1 + δ2 + δ3 + δ1δ2 + δ1δ3 + δ2δ3 + δ1δ2δ3, which is bounded by 5ε + 8ε². In other words, the maximum relative error is about 5 rounding errors (since ε is a small number, ε² is almost negligible).
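
These bounds are easy to probe numerically. The following Python sketch compares both expressions against an exact rational-arithmetic reference; the input pairs are arbitrary nearby values chosen to provoke cancellation:

```python
from fractions import Fraction

def exact_diff_sq(x, y):
    # Fraction represents any float exactly, so this is the true x**2 - y**2.
    return Fraction(x) ** 2 - Fraction(y) ** 2

pairs = [(1.000000007, 1.000000006),
         (123456.789, 123456.788),
         (0.3333333333, 0.3333333332)]

err_direct = err_factored = Fraction(0)
for x, y in pairs:
    exact = exact_diff_sq(x, y)
    err_direct += abs(Fraction(x * x - y * y) - exact)       # x*x - y*y
    err_factored += abs(Fraction((x + y) * (x - y)) - exact)  # (x+y)*(x-y)

print(float(err_direct), float(err_factored))
```

For nearby x and y the factored form stays within a few rounding errors of the true value, while the direct form loses accuracy to the rounding of the two squares, just as the analysis below explains.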

A similar analysis of (x ⊗ x) ⊖ (y ⊗ y) cannot result in a small value for the relative error, because when two nearby values of x and y are plugged into x² - y², the relative error will usually be quite large. Another way to see this is to try and duplicate the analysis that worked on (x ⊖ y) ⊗ (x ⊕ y), yielding

(x ⊗ x) ⊖ (y ⊗ y) = [x²(1 + δ1) - y²(1 + δ2)](1 + δ3)
 = ((x² - y²)(1 + δ1) + (δ1 - δ2)y²)(1 + δ3).

When x and y are nearby, the error term (δ1 - δ2)y² can be as large as the result x² - y². These computations formally justify our claim that (x - y)(x + y) is more accurate than x² - y².

We next turn to an analysis of the formula for the area of a triangle. In order to estimate the maximum error that can occur when computing the area with this formula, the following fact will be needed.

Theorem 11

If subtraction is performed with a guard digit, and y/2 ≤ x ≤ 2y, then x - y is computed exactly.

Proof

Note that if x and y have the same exponent, then certainly x ⊖ y is exact. Otherwise, from the condition of the theorem, the exponents can differ by at most 1. Scale and interchange x and y if necessary so that 0 ≤ y ≤ x, and x is represented as x0.x1 ... x(p-1) and y as 0.y1 ... yp. Then the algorithm for computing x ⊖ y will compute x - y exactly and round to a floating-point number. If the difference is of the form 0.d1 ... dp, the difference will already be p digits long, and no rounding is necessary. Since x ≤ 2y, x - y ≤ y, and since y is of the form 0.d1 ... dp, so is x - y. ∎
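
For β = 2 this is easy to test on particular values, since Python's Fraction represents any double exactly. A small sketch; the pairs are arbitrary values satisfying the hypothesis y/2 ≤ x ≤ 2y:

```python
from fractions import Fraction

def subtraction_is_exact(x, y):
    """Check that the computed x - y equals the mathematical x - y."""
    return Fraction(x - y) == Fraction(x) - Fraction(y)

# Pairs satisfying y/2 <= x <= 2y, so Theorem 11 promises exactness.
pairs = [(1.75, 1.0), (0.1, 0.15), (3.141592653589793, 2.0)]
for x, y in pairs:
    assert y / 2 <= x <= 2 * y
    print(x, y, subtraction_is_exact(x, y))
```

Every pair in range prints True; this is the binary case of the theorem (often called Sterbenz's lemma).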

When β > 2, the hypothesis of Theorem 11 cannot be replaced by y/β ≤ x ≤ βy; the stronger condition y/2 ≤ x ≤ 2y is still necessary. The analysis of the error in (x - y)(x + y), immediately following the proof of Theorem 10, used the fact that the relative error in the basic operations of addition and subtraction is small (namely, equations (19) and (20)). This is the most common kind of error analysis. However, analyzing the triangle area formula requires something more, namely Theorem 11, as the following proof will show.

Theorem 12

If subtraction uses a guard digit, and if a, b and c are the sides of a triangle (a ≥ b ≥ c), then the relative error in computing (a + (b + c))(c - (a - b))(c + (a - b))(a + (b - c)) is at most 16ε, provided ε < .005.

Proof

Let's examine the factors one by one. From Theorem 10, b ⊕ c = (b + c)(1 + δ1), where δ1 is the relative error, and |δ1| ≤ 2ε. Then the value of the first factor is

(a ⊕ (b ⊕ c)) = (a + (b ⊕ c))(1 + δ2) = (a + (b + c)(1 + δ1))(1 + δ2),

and thus

(a + b + c)(1 - 2ε)² ≤ [a + (b + c)(1 - 2ε)](1 - 2ε) ≤ a ⊕ (b ⊕ c) ≤ [a + (b + c)(1 + 2ε)](1 + 2ε) ≤ (a + b + c)(1 + 2ε)².

This means that there is an η1 so that

(24) (a ⊕ (b ⊕ c)) = (a + b + c)(1 + η1)², |η1| ≤ 2ε.

The next term involves the potentially catastrophic subtraction of c and a ⊖ b, because a ⊖ b may have rounding error. Because a, b and c are the sides of a triangle, a ≤ b + c, and combining this with the ordering c ≤ b ≤ a gives a ≤ b + c ≤ 2b ≤ 2a. So a - b satisfies the conditions of Theorem 11. This means that a - b = a ⊖ b is exact, hence c ⊖ (a - b) is a harmless subtraction which can be estimated from Theorem 9 to be

(25) (c ⊖ (a ⊖ b)) = (c - (a - b))(1 + η2), |η2| ≤ 2ε.

The third term is the sum of two exact positive quantities, so

(26) (c ⊕ (a ⊖ b)) = (c + (a - b))(1 + η3), |η3| ≤ 2ε.

Finally, the last term is

(27) (a ⊕ (b ⊖ c)) = (a + (b - c))(1 + η4)², |η4| ≤ 2ε,

using both Theorem 9 and Theorem 10. If multiplication is assumed to be exactly rounded, so that x ⊗ y = xy(1 + ζ) with |ζ| ≤ ε, then combining (24), (25), (26) and (27) gives

(a ⊕ (b ⊕ c)) ⊗ (c ⊖ (a ⊖ b)) ⊗ (c ⊕ (a ⊖ b)) ⊗ (a ⊕ (b ⊖ c)) = (a + (b + c))(c - (a - b))(c + (a - b))(a + (b - c)) E,

where

E = (1 + η1)²(1 + η2)(1 + η3)(1 + η4)²(1 + ζ1)(1 + ζ2)(1 + ζ3).

An upper bound for E is (1 + 2ε)⁶(1 + ε)³, which expands out to 1 + 15ε + O(ε²). Some writers simply ignore the O(ε²) term, but it is easy to account for it. Writing (1 + 2ε)⁶(1 + ε)³ = 1 + 15ε + εR(ε), R(ε) is a polynomial in ε with positive coefficients, so it is an increasing function of ε. Since R(.005) = .505, R(ε) < 1 for all ε < .005, and hence E ≤ (1 + 2ε)⁶(1 + ε)³ < 1 + 16ε. To get a lower bound on E, note that 1 - 15ε - εR(ε) < E, and so when ε < .005, 1 - 16ε < (1 - 2ε)⁶(1 - ε)³. Combining these two bounds yields 1 - 16ε < E < 1 + 16ε. Thus the relative error is at most 16ε. ∎

Theorem 12 certainly shows that there is no catastrophic cancellation in the formula. So although it is not necessary to show that the formula is numerically stable, it is satisfying to have a bound for the entire formula, which is what Theorem 3 gives.
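
The practical payoff of Theorem 12 shows up on needle-like triangles. The following Python comparison pits the naive Heron formula against the rearranged product above; the function names and the thin-triangle side lengths are illustrative choices, and exact rational arithmetic supplies the reference:

```python
import math
from fractions import Fraction

def heron(a, b, c):
    # Naive Heron formula: s - a cancels catastrophically for thin triangles.
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def kahan_area(a, b, c):
    a, b, c = sorted((a, b, c), reverse=True)   # ensure a >= b >= c
    q = (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    return math.sqrt(q) / 4

a, b, c = 100000.0, 99999.99979, 0.00029        # a thin, needle-like triangle
A, B, C = Fraction(a), Fraction(b), Fraction(c)
q_exact = (A + (B + C)) * (C - (A - B)) * (C + (A - B)) * (A + (B - C))
ref = math.sqrt(q_exact) / 4                    # reference: only sqrt rounds

print(heron(a, b, c), kahan_area(a, b, c), ref)
```

The rearranged formula stays within a few ulps of the reference; Heron's formula does not, because its subtraction s - a is contaminated before the cancellation.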

Proof of Theorem 3

Let

q = (a + (b + c))(c - (a - b))(c + (a - b))(a + (b - c))

and

Q = (a ⊕ (b ⊕ c)) ⊗ (c ⊖ (a ⊖ b)) ⊗ (c ⊕ (a ⊖ b)) ⊗ (a ⊕ (b ⊖ c)).

Then Theorem 12 shows that Q = q(1 + δ), with |δ| ≤ 16ε. It is easy to check that

(28) √(1 + δ) = 1 + δ1, with |δ1| ≤ .52|δ|,

provided |δ| ≤ .04/(.52)² ≈ .15, and since |δ| ≤ 16ε ≤ 16(.005) = .08, δ does satisfy the condition. Thus √Q = √q (1 + δ1), with |δ1| ≤ .52|δ| ≤ 8.5ε. If square roots are computed to within .5 ulp, then the error when computing √Q is (1 + δ1)(1 + δ2), with |δ2| ≤ ε. If β = 2, then there is no further error committed when dividing by 4. Otherwise, one more factor 1 + δ3 with |δ3| ≤ ε is necessary for the division, and using the method in the proof of Theorem 12, the final error bound of (1 + δ1)(1 + δ2)(1 + δ3) is dominated by 1 + δ4, with |δ4| ≤ 11ε. ∎

To make the heuristic explanation immediately following the statement of Theorem 4 precise, the next theorem describes just how closely µ(x) approximates a constant

Theorem 13

If µ(x) = ln(1 + x)/x, then for 0 ≤ x ≤ 3/4, 1/2 ≤ µ(x) ≤ 1 and the derivative satisfies |µ'(x)| ≤ 1/2.

Proof

Note that µ(x) = 1 - x/2 + x²/3 - ... is an alternating series with decreasing terms, so for x ≤ 1, µ(x) ≥ 1 - x/2 ≥ 1/2. It is even easier to see that because the series for µ is alternating, µ(x) ≤ 1. The Taylor series of µ'(x) is also alternating, and if x ≤ 3/4 it has decreasing terms, so -1/2 ≤ µ'(x) ≤ -1/2 + 2x/3, or -1/2 ≤ µ'(x) ≤ 0, thus |µ'(x)| ≤ 1/2. ∎

Proof of Theorem 4

Since the Taylor series for ln(1 + x),

ln(1 + x) = x - x²/2 + x³/3 - ...,

is an alternating series, 0 < x - ln(1 + x) < x²/2, so the relative error incurred when approximating ln(1 + x) by x is bounded by x/2. If 1 ⊕ x = 1, then |x| < ε, so the relative error is bounded by ε/2. When 1 ⊕ x ≠ 1, define x̂ via 1 ⊕ x = 1 + x̂. Then since 0 ≤ x < 1, (1 ⊕ x) ⊖ 1 = x̂. If division and logarithms are computed to within 1/2 ulp, then the computed value of the expression ln(1 + x)/((1 + x) - 1) is

(29) [ln(1 ⊕ x)/((1 ⊕ x) ⊖ 1)](1 + δ1)(1 + δ2) = [ln(1 + x̂)/x̂](1 + δ1)(1 + δ2) = µ(x̂)(1 + δ1)(1 + δ2),

where |δ1| ≤ ε and |δ2| ≤ ε. To estimate µ(x̂), use the mean value theorem, which says that

(30) µ(x̂) - µ(x) = (x̂ - x)µ'(ξ)

for some ξ between x and x̂. From the definition of x̂, it follows that |x̂ - x| ≤ ε, and combining this with Theorem 13 gives |µ(x̂) - µ(x)| ≤ ε/2, or |µ(x̂)/µ(x) - 1| ≤ ε/(2|µ(x)|) ≤ ε, which means that µ(x̂) = µ(x)(1 + δ3), with |δ3| ≤ ε. Finally, multiplying by x introduces a final δ4, so the computed value of

x · ln(1 ⊕ x)/((1 ⊕ x) ⊖ 1)

is

x[ln(1 + x)/x](1 + δ1)(1 + δ2)(1 + δ3)(1 + δ4) = ln(1 + x)(1 + δ1)(1 + δ2)(1 + δ3)(1 + δ4).

It is easy to check that if ε < 0.1, then

(1 + δ1)(1 + δ2)(1 + δ3)(1 + δ4) = 1 + δ,

with |δ| ≤ 5ε. ∎
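
The formula of Theorem 4 is easy to express in Python; math.log1p is the library version of the same idea, used here only as a reference, and the function name below is our own:

```python
import math

def log1p_goldberg(x):
    """ln(1 + x) computed accurately for small x, following Theorem 4."""
    w = 1.0 + x
    if w == 1.0:
        return x            # |x| below half an ulp of 1: ln(1 + x) = x here
    return x * math.log(w) / (w - 1.0)

x = 3e-17                   # so small that 1.0 + x rounds to exactly 1.0
naive = math.log(1.0 + x)   # catastrophically wrong: gives 0.0
print(naive, log1p_goldberg(x), math.log1p(x))
```

The naive expression loses every digit of the answer, while the rearranged one is accurate to within about 5ε, exactly as the proof above predicts.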

An interesting example of error analysis using formulas (19), (20), and (21) occurs in the quadratic formula. An earlier section explained how rewriting the equation will eliminate the potential cancellation caused by the ± operation. But there is another potential cancellation that can occur when computing d = b² - 4ac. This one cannot be eliminated by a simple rearrangement of the formula. Roughly speaking, when b² ≈ 4ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula. Here is an informal proof (another approach to estimating the error in the quadratic formula appears in Kahan [1972]).

If b² ≈ 4ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula.

Proof. Write (b ⊗ b) ⊖ (4a ⊗ c) = (b²(1 + δ1) - 4ac(1 + δ2))(1 + δ3), where |δi| ≤ ε. Using d = b² - 4ac, this can be rewritten as (d(1 + δ1) - 4ac(δ2 - δ1))(1 + δ3). To get an estimate for the size of this error, ignore second order terms in δi, in which case the absolute error is d(δ1 + δ3) - 4acδ4, where |δ4| = |δ1 - δ2| ≤ 2ε. Since d ≪ 4ac, the first term d(δ1 + δ3) can be ignored. To estimate the second term, use the formula ax² + bx + c = a(x - r1)(x - r2), so ar1r2 = c. Since b² ≈ 4ac, then r1 ≈ r2, so the second error term is 4acδ4 ≈ 4a²r1²δ4. Thus the computed value of √d is

√(d + 4a²r1²δ4).

The inequality

√d - 2|a||r1|√|δ4| ≤ √(d + 4a²r1²δ4) ≤ √d + 2|a||r1|√|δ4|

shows that

√(d + 4a²r1²δ4) = √d + E, where |E| ≤ 2|a||r1|√|δ4|,

so the absolute error in √d/(2a) is about r1√|δ4|. Since δ4 ≈ β^(-p), √|δ4| ≈ β^(-p/2), and thus the absolute error of √d/(2a) destroys the bottom half of the bits of the roots r1 ≈ r2. In other words, since the calculation of the roots involves computing with √d/(2a), and this expression does not have meaningful bits in the position corresponding to the lower order half of ri, then the lower order bits of ri cannot be meaningful. ∎

Finally, we turn to the proof of Theorem 6. It is based on the following fact, which is proven in the section

Theorem 14

Let 0 < k < p, and set m = β^k + 1, and assume that floating-point operations are exactly rounded. Then (m ⊗ x) ⊖ (m ⊗ x ⊖ x) is exactly equal to x rounded to p - k significant digits. More precisely, x is rounded by taking the significand of x, imagining a radix point just left of the k least significant digits and rounding to an integer.
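
For IEEE double precision (β = 2, p = 53) the natural choice is k = 27, giving m = 2^27 + 1; this is the classic Dekker/Veltkamp splitting. A minimal Python sketch:

```python
import math

def split(x, k=27):
    """Split x into xh + xl: xh carries the high p - k bits, xl the rest."""
    m = 2.0 ** k + 1                # m = 2^k + 1
    mx = m * x                      # m ⊗ x
    xh = mx - (mx - x)              # (m ⊗ x) ⊖ (m ⊗ x ⊖ x): x rounded to p - k digits
    xl = x - xh                     # the discarded low-order part, computed exactly
    return xh, xl

x = 0.1
xh, xl = split(x)
frac, _ = math.frexp(xh)            # significand of xh, in [1/2, 1)
print(xh, xl, (frac * 2 ** 26).is_integer())
```

The final expression confirms that xh fits in 53 - 27 = 26 significant bits, and xh + xl reconstructs x exactly; this is the splitting step used to compute exact products via Theorem 6.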

Proof of Theorem 6

By Theorem 14, xh is x rounded to p - k = ⌊p/2⌋ places. If there is no carry out, then certainly xh can be represented with ⌊p/2⌋ significant digits. Suppose there is a carry-out. If x = x0.x1 ... x(p-1) × β^e, then rounding adds 1 to x(p-k-1), and the only way there can be a carry-out is if x(p-k-1) = β - 1, but then the low order digit of xh is 1 + x(p-k-1) = 0, and so again xh is representable in ⌊p/2⌋ digits.

To deal with xl, scale x to be an integer satisfying β^(p-1) ≤ x ≤ β^p - 1. Let x = x̄h + x̄l, where x̄h is the p - k high order digits of x, and x̄l is the k low order digits. There are three cases to consider. If x̄l < (β/2)β^(k-1), then rounding x to p - k places is the same as chopping, and xh = x̄h and xl = x̄l. Since x̄l has at most k digits, if p is even, then x̄l has at most k = p/2 = ⌊p/2⌋ digits. Otherwise, β = 2 and x̄l is representable with k - 1 = ⌊p/2⌋ significant bits. The second case is when x̄l > (β/2)β^(k-1), and then computing xh involves rounding up, so xh = x̄h + β^k, and xl = x - xh = x - x̄h - β^k = x̄l - β^k. Once again, x̄l - β^k has at most k digits, so it is representable with ⌈p/2⌉ digits. Finally, if x̄l = (β/2)β^(k-1), then xh = x̄h or x̄h + β^k depending on whether there is a round up. So xl is either (β/2)β^(k-1) or (β/2)β^(k-1) - β^k = -β^k/2, both of which are represented with 1 digit. ∎

Theorem 6 gives a way to express the product of two working precision numbers exactly as a sum. There is a companion formula for expressing a sum exactly. If |x| ≥ |y|, then

x + y = (x ⊕ y) + ((x ⊖ (x ⊕ y)) ⊕ y)

[Dekker 1971; Knuth 1981, Theorem C in section 4.2.2]. However, when using exactly rounded operations, this formula is only true for β = 2, and not for β = 10 as the example x = .99998, y = .99997 shows.
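
This companion formula is the well-known Fast2Sum algorithm. A Python sketch, with Fraction used to verify exactness (the function name is ours; the guarantee requires β = 2 and |x| ≥ |y|):

```python
from fractions import Fraction

def fast_two_sum(x, y):
    """Return (s, err) with s = fl(x + y) and s + err == x + y exactly.
    Requires |x| >= |y| and binary floating-point."""
    assert abs(x) >= abs(y)
    s = x + y              # x ⊕ y
    err = y - (s - x)      # (x ⊖ (x ⊕ y)) ⊕ y, rearranged: the rounding error of s
    return s, err

for x, y in [(1e16, 1.0), (0.1, 0.0001), (1.0, -1e-20)]:
    s, err = fast_two_sum(x, y)
    assert Fraction(s) + Fraction(err) == Fraction(x) + Fraction(y)
print(fast_two_sum(1e16, 1.0))
```

Here the sum 1e16 + 1.0 rounds to 1e16, and the recovered error term 1.0 carries exactly the part that rounding discarded.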

Binary to Decimal Conversion

Since single precision has p = 24, and 2^24 < 10^8, you might expect that converting a binary number to 8 decimal digits would be sufficient to recover the original binary number. However, this is not the case.

Theorem 15

When a binary IEEE single precision number is converted to the closest eight digit decimal number, it is not always possible to uniquely recover the binary number from the decimal one. However, if nine decimal digits are used, then converting the decimal number to the closest binary number will recover the original floating-point number

Proof

Binary single precision numbers lying in the half open interval [10³, 2¹⁰) = [1000, 1024) have 10 bits to the left of the binary point, and 14 bits to the right of the binary point. Thus there are (2¹⁰ - 10³)2¹⁴ = 393,216 different binary numbers in that interval. If decimal numbers are represented with 8 digits, then there are (2¹⁰ - 10³)10⁴ = 240,000 decimal numbers in the same interval. There is no way that 240,000 decimal numbers could represent 393,216 different binary numbers. So 8 decimal digits are not enough to uniquely represent each single precision binary number.

To show that 9 digits are sufficient, it is enough to show that the spacing between binary numbers is always greater than the spacing between decimal numbers. This will ensure that for each decimal number N, the interval [N - ½ulp, N + ½ulp] contains at most one binary number. Thus each binary number rounds to a unique decimal number which in turn rounds to a unique binary number. To show that the spacing between binary numbers is always greater than the spacing between decimal numbers, consider an interval [10^n, 10^(n+1)]. On this interval, the spacing between consecutive decimal numbers is 10^((n+1)-9). On [10^n, 2^m], where m is the smallest integer so that 10^n < 2^m, the spacing of binary numbers is 2^(m-24), and the spacing gets larger further on in the interval. Thus, it is enough to check that 10^((n+1)-9) < 2^(m-24). But in fact, since 10^n < 2^m, then 10^((n+1)-9) = 10^n · 10^(-8) < 2^m · 10^(-8) < 2^m · 2^(-24). ∎

The same argument applied to double precision shows that 17 decimal digits are required to recover a double precision number.
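
Both claims are easy to exercise in Python. For double precision, 17 digits always round-trip where 16 may not; for single precision (emulated here by rounding through a 4-byte value with the struct module) the counting argument above guarantees that the search below finds an 8-digit failure somewhere in [1000, 1024):

```python
import struct

# Double precision: 17 significant digits round-trip, 16 need not.
x = 1.0 + 2.0 ** -52
assert float(f"{x:.17g}") == x
assert float(f"{x:.16g}") != x

def f32(v):
    """Round v to IEEE single precision and back."""
    return struct.unpack("f", struct.pack("f", v))[0]

# Search [1000, 1024), where consecutive singles are 2^-14 apart, for a
# value that an 8-digit decimal string fails to recover.
failure = None
for i in range(393_216):
    v = f32(1000.0 + i * 2.0 ** -14)
    if f32(float(f"{v:.8g}")) != v:
        failure = v
        break
print(failure)
```

Because 393,216 singles share only 240,000 eight-digit decimals on that interval, two singles must collide on the same decimal string, so the loop is guaranteed to terminate with a counterexample.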

Binary-decimal conversion also provides another example of the use of flags. Recall from the earlier section that to recover a binary number from its decimal expansion, the decimal to binary conversion must be computed exactly. That conversion is performed by multiplying the quantities N and 10^|P| (both exact if p
