High-performance computing (HPC) typically involves running mathematical simulations on computer systems. A few examples of commercial HPC include simulating car crashes for structural design, modeling molecular interactions for new drug design, and analyzing airflow over automobiles or airplanes. In government and research institutions, scientists are simulating galaxy creation, fusion energy, and global warming, as well as working to create more accurate short- and long-term weather forecasts.
For most HPC simulations, the CPU instruction mix contains a large proportion of floating-point calculations and relatively few integer calculations. A principle often associated with CPUs is Moore's Law, which states that the density of transistors that can be put onto a chip doubles about every two years. Recently, however, the growth in single-processor performance has slowed. Most commercial applications running today do not take advantage of the new multi-core designs and are thus over-served by the increasing transistor density associated with Moore's Law; these applications simply do not need the increased amounts of computing power available today.
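To make the instruction-mix point concrete, here is a minimal sketch (not taken from any particular production code) of the kind of loop that dominates many simulations: a one-dimensional heat-diffusion update written in C. The array size, step count, and diffusion coefficient are illustrative assumptions; the point is that nearly every instruction in the inner loop is a floating-point multiply or add.

    #include <stdio.h>

    #define N     1000000   /* grid points (illustrative size) */
    #define STEPS 100       /* time steps (illustrative count) */

    /* Minimal 1D heat-diffusion sketch: the inner loop is almost entirely
     * floating-point multiplies and adds, which is typical of the
     * instruction mix in HPC simulation codes. */
    static double u[N], u_next[N];

    int main(void)
    {
        /* Initialize the rod with a single hot spot in the middle. */
        for (int i = 0; i < N; i++)
            u[i] = 0.0;
        u[N / 2] = 100.0;

        const double alpha = 0.25;   /* diffusion coefficient (assumed) */

        for (int step = 0; step < STEPS; step++) {
            /* Floating-point stencil update dominates the runtime. */
            for (int i = 1; i < N - 1; i++)
                u_next[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

            /* Copy back; a production code would swap pointers instead. */
            for (int i = 1; i < N - 1; i++)
                u[i] = u_next[i];
        }

        printf("center value after %d steps: %f\n", STEPS, u[N / 2]);
        return 0;
    }

In a loop like this, the integer work is limited to loop counters and address arithmetic; the arithmetic that determines the runtime is floating point, which is why HPC hardware is judged largely on floating-point throughput.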
Despite this, HPC users continue to demand more performance as they tackle more complex problems and look for faster turnaround. Other markets demanding more performance include those delivering compelling new content over the Web and enterprise customers expanding their offerings to employees or customers. Figure 1 illustrates this trend. In this article, I delve further into the concepts and issues surrounding HPC, specifically looking at the CPU and RAM in relation to HPC.