Random testing:
this technical sense; however, it is certainly not the most used method.) If the technical meaning contrasts "random" with "systematic," it is in the sense that fluctuations in physical measurements are random (unpredictable or chaotic) vs. systematic (causal or lawful).

Why is it desirable to be "unsystematic" on purpose in selecting test data for a program? (1) Because there are efficient methods of selecting random points algorithmically, by computing pseudorandom numbers; thus a vast number of tests can be easily defined. (2) Because statistical independence among test points allows statistical prediction of significance in the observed results. In the sequel it will be seen that (1) may be compromised because the required result of an easily generated test is not so easy to generate. (2) is the more important quality of random testing, both in practice and for the theory of software testing.

To make an analogy with the case of physical measurement, it is only random fluctuations that can be "averaged out" to yield an improved measurement over many trials; systematic fluctuations might in principle be eliminated, but if their cause (or even their existence) is unknown, they forever invalidate the measurement. The analogy is better than it seems: in program testing, with systematic methods we know what we are doing, but not what it means; only by giving up all systematization can the significance of testing be known.

Random testing at its best can be illustrated by a simple example. Suppose that a subroutine is written to compute the (floating-point) cube root of an integer parameter. The method to be used has been shown to be accurate to within 2x10
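The cube-root example can be sketched in Python. Pseudorandom integer inputs are cheap to generate in bulk, illustrating point (1), and cubing the returned value serves as an oracle for checking each result. This is only a sketch: the subroutine under test, the input bound, and the tolerance are hypothetical stand-ins (the article's exact accuracy figure does not appear in full here).

```python
import random

def cube_root(n):
    # Stand-in for the subroutine under test; the article's actual
    # routine and its algorithm are not given, so exponentiation is
    # used here purely for illustration.
    return float(n) ** (1.0 / 3.0)

def random_test(trials=1000, bound=10**6, tol=2e-6, seed=42):
    # trials, bound, tol, and seed are all assumed values.
    # A seeded generator makes the pseudorandom test sequence
    # reproducible while remaining statistically independent
    # point to point.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        n = rng.randint(0, bound)      # pseudorandom test point
        r = cube_root(n)
        # Oracle: cube the returned value and compare with the input,
        # within a relative tolerance.
        if abs(r ** 3 - n) > tol * max(1, n):
            failures += 1
    return failures
```

Note that even in this easy case the oracle is the delicate part: the "required result" is checked indirectly, by cubing the output, rather than generated alongside the input.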