Maxwell changed the paradigm of his time. In like manner, by introducing relativity (among other things), Albert Einstein showed that no preferred frame of reference exists and caused a paradigm shift in our global view of the universe. In the 1920s, quantum mechanics introduced another paradigm shift in our understanding of the extremely small. Science is still wrestling with the task of integrating these three paradigm shifts into a single "Unified Theory of Everything." When it succeeds, the scientific paradigm will change again.
We work in a computational paradigm that is a direct translation of the infinitesimal Isaac Newton used in his formulation of calculus. Newton based calculus on the concept of infinitely precise real numbers, manipulated symbolically or used with whatever precision is necessary to perform satisfactory approximations. The presently accepted numerical computational paradigm says that the IEEE-754 standard for floating-point arithmetic provides enough precision to meet Newton's criteria. Unfortunately, that assumption fails under many circumstances.
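The failure is easy to demonstrate. A minimal sketch in Python (any IEEE-754 double-precision environment behaves the same way; the specific values are illustrative):

```python
# Two classic IEEE-754 failures, visible in any double-precision system.

# 1. Representation error: 0.1 has no exact binary representation,
#    so even one addition drifts away from the "infinitely precise" real.
a = 0.1 + 0.2
print(a == 0.3)   # False
print(a)          # 0.30000000000000004

# 2. Catastrophic cancellation: subtracting nearly equal values
#    leaves only the rounding noise of the original operands.
x = 1.0 + 1e-15
y = 1.0
print(x - y)      # about 1.11e-15 -- the true difference was 1e-15
```

Neither result is a bug in the hardware; both follow directly from representing Newton's reals in 64 bits.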
The flaw is not a secret; it is well known. Symbolic software packages such as Derive, Maple, Mathematica, MACSYMA, and others have been developed to circumvent these problems. These packages not only allow symbolic calculation, but also let users perform numerical calculations in arbitrary precision as a better approximation to Newton's infinitesimals.
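Python's standard library offers a small-scale analogue of the numeric modes in these packages; this sketch is illustrative only, not a substitute for a full symbolic system:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact rational arithmetic: no rounding error is ever introduced.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

# User-selected precision: carry 50 significant digits instead of
# the roughly 16 that IEEE-754 doubles provide.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
```

The rational form gives exactness at the cost of unbounded operand growth; the decimal form trades exactness for a precision the user chooses rather than one fixed by the hardware.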
Some languages have been extended to allow arbitrarily precise approximations. LISP allows arbitrarily large integers. SCHEME (a dialect of LISP) uses arbitrarily precise rational numbers. REXX lets users specify the precision of the floating-point numbers used. Packages for Ada allow arbitrarily precise rationals. A FORTRAN extension called ACRITH allows high precision. A technique called interval arithmetic allows the estimation of errors.
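To make the interval-arithmetic idea concrete, here is a minimal sketch (the class and the ±1e-17 uncertainty are illustrative; a production library would also use directed rounding so the bounds themselves cannot round inward):

```python
# Interval arithmetic: each value carries guaranteed lower and upper
# bounds, and every operation widens the bounds to enclose the result.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Sign changes mean any corner can be the extreme value.
        corners = (self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi)
        return Interval(min(corners), max(corners))

    def width(self):
        return self.hi - self.lo

x = Interval(0.1 - 1e-17, 0.1 + 1e-17)   # 0.1 known only to within 1e-17
y = x + x + x                             # the bounds track the error
print(y.lo <= 0.3 <= y.hi, y.width())
```

The answer is no more accurate than before, but the width of the interval tells us how much error the computation has accumulated, which is exactly the estimate large-scale codes lack.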
Why then are all the large-scale programs written without these aids? Why is every significant computation performed on a supercomputer done without any means of even estimating the amount of error in the computations? Why, even though scientists and engineers know their calculations are guaranteed to be inaccurate at some point in the approximation, do we invest billions of dollars in these systems?
A second reason is that some of the problems being attacked may not have numerical solutions. At the turn of the century, Henri Poincaré showed that many real-world problems have an extraordinary sensitivity to initial conditions and computational perturbations. A couple of decades later, Kurt Gödel showed that, in any sufficiently powerful mathematical system, problems can be properly formulated that have no solutions within that system. Building on that concept, Alan Turing showed that some computer programs will have no predictable result. (His halting problem shows that a program may halt or not, but we are unable to say whether a program that has not halted in a finite period of time will halt in some subsequent period of time.)
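Poincaré's sensitivity is easy to reproduce. The logistic map x → rx(1−x) at r = 4 is a standard chaotic example (the starting point 0.3 and the perturbation of one part in 10^12 are arbitrary choices for illustration):

```python
# The logistic map is a standard chaotic system: iterate x -> r*x*(1-x).
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two starting points differing by one part in 10^12 lose all
# resemblance within a few dozen iterations.
a = logistic(0.3)
b = logistic(0.3 + 1e-12)
print(abs(a - b))   # typically on the order of the attractor itself
```

For such a problem, more digits merely postpone the divergence; no fixed precision can outrun the exponential growth of the initial perturbation.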
All these observations of the world indicate that we need to investigate the solvability of our problems before we leap in to solve them. Once again, the people in charge are reluctant to go to their funding sources and say: "We need additional money to determine if the problems we have been working on the past decade are solvable."
It is always easier to claim that the next generation of supercomputer will have the power necessary to make the existing code perform the miraculous transformation to accuracy. More FLOPS is a cry Congress can rally around. As each new generation of hardware comes on-line, the excuse can always be, "Just three orders of magnitude more."
Does a conspiracy of silence exist? We are running code we can't verify for any but a trivial subset of operating parameters. We are attempting to solve problems we have not verified as solvable within our numerical framework. We continue to look for faster hardware to answer our problems. You decide.