UCLA Computer Science Department


D. Stott Parker, Jr.
UCLA Computer Science Dept.
3532 Boelter Hall
(310) 825-6871 (ofc)
(310) 825-1322 (sec)
(310) 794-5056 (fax)

Monte Carlo Arithmetic


Monte Carlo Arithmetic: exploiting randomness in floating-point arithmetic

(Technical Report CSD-970002) (abstract) (ps) (dvi)


Monte Carlo Arithmetic: a framework for the statistical analysis of roundoff error

Revised March 1999.
(Technical Report CSD-970014) (abstract) (ps)


Monte Carlo Arithmetic

Seminar slides from a talk given at Hong Kong University of Science and Technology, April 1997, and thereafter; revised April 1999.


Monte Carlo Arithmetic demo distribution

a semi-portable ANSI C implementation of all the MCA examples in the papers above. It requires only that the host support both single- and double-precision rounded floating-point arithmetic.

For further information about implementations of MCA for C, C++, and Fortran please contact Stott Parker.


Monte Carlo Arithmetic
exploiting randomness in floating-point arithmetic

D. Stott Parker, CSD-970002.ps.gz

Monte Carlo Arithmetic (MCA) is an extension of standard floating-point arithmetic that exploits randomness in basic floating-point operations. MCA includes random rounding --- which forces roundoff errors to be randomly distributed --- and precision bounding --- which limits the number of significant digits in a given value by random perturbation. Random rounding can be used to produce roundoff errors that are truly random and uncorrelated, and that have zero expected bias. Precision bounding can be used to vary precision dynamically, to implement inexact values (values known to only a few significant digits), and most importantly to detect catastrophic cancellation, which is the primary way that significant digits are lost in numerical computation.
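
As a concrete illustration, here is a minimal ANSI C sketch (not the distributed MCA demo code) of the random perturbation underlying both operations: noise uniform on (-1/2, 1/2) is added just below bit t of the argument, where t is the virtual precision. The name mca_inexact is illustrative only.

    /* Randomly perturb the trailing bits of x, below virtual precision t. */
    #include <math.h>
    #include <stdlib.h>

    double mca_inexact(double x, int t)
    {
        int e;
        double xi;
        if (x == 0.0) return x;                    /* leave exact zeros alone      */
        frexp(x, &e);                              /* x = m * 2^e, 0.5 <= |m| < 1  */
        xi = ((double) rand() / RAND_MAX) - 0.5;   /* uniform on (-1/2, 1/2)       */
        return x + ldexp(xi, e - t);               /* noise at scale 2^(e - t)     */
    }

Roughly speaking, random rounding applies such a perturbation to the result of each arithmetic operation, while precision bounding applies it to the operands beforehand.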

Randomization has both theoretical and practical benefits. It has the effect of transforming any floating-point computation into a Monte Carlo computation, and roundoff analysis into statistical analysis. Unlike much previous work in this area, MCA makes no assumptions about the resulting roundoff error distributions, such as that they are normal. By running a program multiple times, one directly measures the sensitivity of particular outputs to random perturbations of particular inputs. MCA thus gives a way to implement roundoff analysis, using random variables for roundoff errors, so that the roundoff distributions can be studied explicitly. It encourages an empirical approach to evaluating numerical quality, and gives a way to exploit the Monte Carlo method in numerical computation.
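
The sketch below (reusing mca_inexact from above; the function f is just a placeholder computation) illustrates this loop: perturb the input, repeat the computation, and report the sample mean, the standard deviation, and a rough count of significant digits.

    #include <stdio.h>
    #include <math.h>

    double mca_inexact(double x, int t);           /* from the sketch above */

    double f(double x) { return (x - 1.0) * (x + 1.0); }   /* any computation */

    int main(void)
    {
        enum { N = 100 };
        double x = 1.000001, sum = 0.0, sumsq = 0.0, mean, sd;
        int i;
        for (i = 0; i < N; i++) {
            double y = f(mca_inexact(x, 24));      /* 24-bit virtual precision */
            sum += y;  sumsq += y * y;
        }
        mean = sum / N;
        sd = sqrt(fabs(sumsq - N * mean * mean) / (N - 1));
        printf("mean = %.17g   std dev = %.3g\n", mean, sd);
        if (sd > 0.0)   /* significant decimal digits, roughly log10(|mean|/sd) */
            printf("approx. significant digits: %.1f\n", log10(fabs(mean) / sd));
        return 0;
    }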

MCA also generally gives a different perspective on the study of error. For example, while floating-point summation is not associative, Monte Carlo summation is ``statistically associative'' up to the standard error of the sum. A statistical approach avoids anomalies of floating-point arithmetic.
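
For example, a Monte Carlo summation routine might look like the sketch below (again built on mca_inexact; mc_sum is an illustrative name, not part of the MCA distribution). Each partial sum is randomly rounded, so summing the same data in different orders yields samples that agree to within the standard error of the sum.

    double mca_inexact(double x, int t);           /* from the sketch above */

    /* Sum a[0..n-1] with randomly rounded additions; step = +1 or -1
       chooses left-to-right or right-to-left order. */
    double mc_sum(const double *a, int n, int step, int t)
    {
        double s = 0.0;
        int i = (step > 0) ? 0 : n - 1;
        for (; 0 <= i && i < n; i += step)
            s = mca_inexact(s + a[i], t);
        return s;
    }

Calling mc_sum(a, n, +1, 53) and mc_sum(a, n, -1, 53) repeatedly gives two samples whose means typically differ by no more than a few standard errors, which is the sense of "statistically associative" above.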

This work summarizes ways in which MCA has promise as a tool in numerical computation. It seems particularly promising as a way for the person on the street to estimate the number of significant digits in a floating-point value, and to experiment with the effect of changing the precision used in numerical computation. Numerical modeling is becoming a part of life for more and more people, and few of these people either enjoy or are skilled at formal error analysis; MCA gives them a way to estimate the quality of their numerical models.


Monte Carlo Arithmetic: a framework for the statistical analysis of roundoff error

D. Stott Parker, Brad Pierce, Paul R. Eggert

IEEE Computing in Science and Engineering, 2001.
CSD-970014.ps.gz (revised June 1999).

Monte Carlo Arithmetic (MCA) is an extension of standard floating-point arithmetic that exploits randomness in basic floating-point operations. MCA uses randomization to implement random rounding -- which forces rounding errors to be randomly distributed -- and random unrounding -- which randomly extends the inputs of arithmetic operations to arbitrary precision. In computation with MCA, one can use either random rounding or random unrounding, or both. Using neither gives ordinary floating-point arithmetic.

Random rounding can be used to produce rounding errors that are genuinely random, and that have zero expected bias. Random unrounding detects catastrophic cancellation, which is the primary way that significant digits are lost in numerical computation. Randomization also can be used to vary precision dynamically, giving what we call `virtual precision', and to implement inexact values (values known to only a few significant digits).
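
The sketch below suggests how this detection works (it reuses mca_inexact from the first sketch as a crude stand-in for random unrounding; the constants are arbitrary). Subtracting two nearly equal values leaves a spread of results comparable to the results themselves, signalling that few significant digits survive.

    #include <stdio.h>
    #include <math.h>

    double mca_inexact(double x, int t);           /* from the first sketch */

    int main(void)
    {
        enum { N = 100 };
        double a = 1.0000001, b = 1.0000000, sum = 0.0, sumsq = 0.0, mean, sd;
        int i;
        for (i = 0; i < N; i++) {
            double d = mca_inexact(a, 24) - mca_inexact(b, 24);
            sum += d;  sumsq += d * d;
        }
        mean = sum / N;
        sd = sqrt(fabs(sumsq - N * mean * mean) / (N - 1));
        /* sd comparable to |mean| signals catastrophic cancellation */
        printf("mean = %g   std dev = %g\n", mean, sd);
        return 0;
    }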

Randomization has various strengths, both theoretical and practical. It can transform any floating-point computation into a Monte Carlo computation, and transform roundoff analysis into statistical analysis. Running a program multiple times measures the sensitivity of particular outputs to perturbations in the computation. Also, MCA gives a way to avoid some anomalies of floating-point arithmetic. For example, while floating-point summation is not associative, Monte Carlo summation is `statistically associative' up to the standard error of the sum. Generally, MCA gives a different perspective on the study of error.

Figures from this paper, showing how randomized floating-point arithmetic can both expose instability and find useful applications in numerical computation.

D. Stott Parker (stott@cs.ucla.edu)
Mon Oct 13 23:22:39 PDT 2003