Instead of the conventional representation of a real number by a single approximation (the nearest representable floating-point number), represent a real number by TWO fp numbers: a lower and an upper bound.
A large body of literature, with many variations, has emerged on numerical computing with real intervals. It describes operations on such numbers: +, -, *, /, and more elaborate ones such as elementary transcendental functions, Newton-iteration-based root finding, etc.
Sample rules:
[a,b] + [c,d] = [a+c, b+d]
[a,b] * [c,d] = [ min(a*c, a*d, b*c, b*d), max(a*c, a*d, b*c, b*d)]
To do this right, in the lower-bound calculation we round all operations toward -infinity (ToNEGV), and in the upper-bound calculation we round all operations toward +infinity (ToPOSV).
Careful programs can produce results with guaranteed [but rarely tight] error bounds. We are in an era when a factor of 4 or even 10 in speed and a factor of 2 in memory may be a fair trade for some assurance that the result is not totally bogus.
Aside: Can you simulate rounding modes without hardware support? If you assume that all operations are accurate to within 1/2 ULP, then you can ``round directionally after the fact'' by taking the nearly correct answer and subtracting or adding 1 ULP, thereby getting a guaranteed lower or upper bound. This is slow and not a good idea for interval arithmetic.