
Precisions

How should precision of operations and intermediate results be determined?

Possible rules:

(i)
If a and b are of different widths, say a wider than b, then c := a + b is done by converting b to the width of a, adding, and then storing (perhaps narrowing or widening) to the width of c.

(ii)
If a and b are of widths less than that of c, then c := a + b is done by converting a and b to the width of c, adding, and then storing in c.

(iii)
Regardless of the widths of a, b, and c, c := a + b is done by adding in the machine's widest format, then storing (perhaps narrowing) in c.

(iv)
Require everything that is not obvious to be explicit, e.g. something like
 storeasdouble(&c, doubleadd(coercetodouble(a),
                             coercetodouble(b)))
(v)
Runtime typed-value system. Variables like a, b, c have no types, but every numeric value has a type. The appropriate type of each operation is computed at runtime, as needed.

The old C rules were (iii). This is better than (i), but perhaps wasteful compared to (ii), which is hardly ever (if ever) done.

Rule (iv) has the advantage of allowing for the possibility that a low-precision input to a function may in fact give a high-accuracy answer. log(1.0[+-0.1]*10^600) is a low-precision input, uncertain even in its 2nd decimal digit, but the answer is 1381.55+-0.10, correct to about 5 significant digits.

Rule (v) is usually available only in interpreted languages with a very loose or nonexistent type system. If used in full generality, it is not efficient in time or space, since every number has to be tagged with its width (etc.) and the tags repeatedly checked.

IS THERE A WAY TO DO THIS RIGHT? Figuring out the best width for preserving what appears to be the most reasonable precision intended by a computation may require two passes over an expression by the compiler. This allows propagation both upward and downward in the expression tree. This is not an intolerable burden for the compiler; indeed, some languages (e.g. Ada) already require this kind of activity.

If methods are overloaded, there is a potential for additional scans to achieve method resolution.

More eloquently, from ``Java Hurts'' (Kahan/Darcy):

By themselves, numbers possess neither precision nor accuracy. In context, a number can be less accurate or (like integers) more accurate than the precision of the format in which it is stored. Anyway, to achieve results at least about as accurate as data deserve, arithmetic precision well beyond the precision of data and of many intermediate results is often the most efficient choice albeit not the choice made automatically by programming languages like Java. Ideally, arithmetic precision should be determined not bottom-up (solely from the operands' precisions) but rather top-down from the provenance of the operands and the purposes to which the operation's result, an operand for subsequent operations, will be put. Besides, in isolation that intermediate result's ``accuracy'' is often irrelevant no matter how much less than its precision.

What matters in floating-point computation is how closely a web of mathematical relationships can be maintained in the face of roundoff, and whether that web connects the program's output strongly enough to its input no matter how far the web sags in between. A web of relationships just adequate for reliable numerical output is no more visible to the untrained eye than is a spider's web to a fly. Under these circumstances, we must expect most programmers to leave the choice of every floating-point operation's precision to a programming language rather than infer a satisfactory choice from a web invisible without an error-analysis unlikely to be attempted by most programmers. Error-analysis is always tedious, often fruitless; without it programmers who despair of choosing precision well, but have to choose it somehow, are tempted to opt for speed because they know benchmarks offer no reward for accuracy. The speed-accuracy trade-off is so tricky we would all be better off if the choice of precision could be automated, but that would require error-analysis to be automated, which is provably impossible in general.


Richard J. Fateman
Thu Aug 13 13:55:33 PDT 1998