A friend of mine who now works with parallel supercomputers related an anecdote from his days in the aerospace industry. (Names are withheld to protect the guilty.) One of the aerospace corporation's bread-and-butter engineering codes (about a million lines of dusty-deck FORTRAN) had an interesting quirk: it returned nonsense results on seemingly random occasions.
My friend dove into the poorly annotated code and found a simple bug in one of the subroutines. He went to his supervisor to show him the problem and explained that it would be easy to fix. The supervisor forbade the repair: he had no idea what the fix might do to the rest of the tangled program. That problem was obvious and identifiable, but how many more subtle, silent problems lurked in that maze of subroutines and GOTOs?
As far as my friend knows, that unchanged code is still designing the airplanes we fly. His story is not unique; anyone who has been in the industry for any length of time has similar stories. (Send them in and we might fill Feedback with numerical horror stories -- names withheld, of course.) Over the past few years, readers have heard the recurrent theme of numerical instability from P.J. Plauger and me. Many other sources echo our concerns. You need only open a book on numerical analysis to find the first chapter or two devoted to computational reliability, numerical representations, and numerical stability.
Why then are unreliable methods embedded deeply in huge numerical systems used by universities, government agencies, and Fortune 500 companies? Why are supercomputers spitting out billions of numbers per second without any assessment of the accuracy of those numbers? The answer is complicated. We like to think of scientists and engineers as open-minded people striving to use the best methods for solving their problems. In fact, scientists generally view the world through the same filters we all use. These filters are known as the current scientific "paradigm" or model. We are taught certain facts as we develop, and we use these facts to filter information as it comes in.
If we did not have this paradigm to serve as a basis for our selection of what is important, we would be unable to function. If we did not have an accumulated background of knowledge from which we could start our analysis of the world, each individual would have to start over and rediscover every piece of necessary knowledge. In this way, our paradigm is indispensable. If, on the other hand, we rely on the paradigm blindly and imbue it with infallibility, we are in danger of missing important concepts. Past paradigms include the concept of a universal, frictionless, massless fluid: the ether that filled space and let electromagnetic radiation propagate. Without the ether as a medium, scientists could not explain how light waves traveled. The ether theory persisted until James Clerk Maxwell showed that perpendicular electric and magnetic fields did not need a substance for propagation.