O(N): Scalable Throughput and the Free Lunch
The key to scalable throughput is to express lots of latent concurrency that isn't explicitly coded in the program, that scales to match its inputs (number of messages, size of data, and so on), and that can be efficiently mapped down at execution time to the variable number N of cores available on a given machine.
We find the scalable concurrency opportunities principally by exploiting natural parallelism:
- Exploit parallelism in algorithm structures: For example, recursive sorting can exploit the natural parallelism in its divide-and-conquer structure.
- Exploit parallelism in data structures: For example, tree traversal can often exploit the independence in each node's subtrees. Compilation can exploit independence at several levels in the structure of source code, from coarse-grained independence among source files to finer-grained independence among classes or methods within a file.
This lets us decompose the application into a "sea of chores": independent chunks of work that may be big or small, blocking or nonblocking, structured subdivisions or fully independent (though we often want to look first for the CPU-bound work). We then rely on the runtime system to "rightsize" the application by assigning those chores efficiently to whatever hardware parallelism is available on a given user's system.
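To make that concrete, here is a minimal sketch of the recursive-sorting example from the first bullet above, using Java's fork/join framework (the ParallelSort name and CUTOFF threshold are illustrative assumptions, not anything prescribed here): each half of a subrange is an independent chore.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch: recursive parallel mergesort. The two halves of each range are
// independent chores; CUTOFF keeps chores coarse enough to be worth
// scheduling, sorting small subranges serially instead.
class ParallelSort extends RecursiveAction {
    private static final int CUTOFF = 8_192;   // illustrative threshold
    private final int[] a;
    private final int lo, hi;                  // sorts a[lo..hi)

    ParallelSort(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected void compute() {
        if (hi - lo <= CUTOFF) {               // small chore: just do it
            Arrays.sort(a, lo, hi);
            return;
        }
        int mid = (lo + hi) >>> 1;
        invokeAll(new ParallelSort(a, lo, mid),    // two independent chores
                  new ParallelSort(a, mid, hi));
        merge(mid);                                // combine sorted halves
    }

    private void merge(int mid) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = new java.util.Random(42).ints(1_000_000).toArray();
        ForkJoinPool.commonPool().invoke(new ParallelSort(data, 0, data.length));
        System.out.println(data[0] + " .. " + data[data.length - 1]);
    }
}

The sequential cutoff is the simplest form of rightsizing: below it, shipping a chore to another core would cost more than just doing the work in place.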
OpenMP supports some constrained O(N) styles, but it is primarily intended for use with integer-indexed loops over arrays and doesn't work well with iterator-based containers in STL, .NET, or Java. Instead, as mentioned in last month's column [2], today one common idiom for expressing such a sea of work items is to explicitly schedule chores for execution on a thread pool (for example, using Java's ThreadPoolExecutor or .NET's BackgroundWorker). Unfortunately, this incurs significant context-switch overhead, so you have to ensure that a work item will be worth shipping over to a worker thread. In the future, this constraint will be relaxed as languages and runtimes improve.
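As a minimal sketch of that idiom in Java (the expensiveCpuBoundWork chore body is a hypothetical placeholder, and sizing the pool to the available hardware threads is one reasonable policy, not the only one):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: explicitly scheduling a sea of chores on a thread pool.
public class ChorePool {
    public static void main(String[] args) throws Exception {
        // One worker per hardware thread; the same code runs unchanged
        // whether the machine has 1 core or 64.
        int n = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(n);

        List<Future<Long>> results = new ArrayList<>();
        for (int chore = 0; chore < 100; chore++) {
            final int id = chore;
            // Each chore must be coarse enough to repay the cost of
            // queuing it and context-switching to a worker thread.
            results.add(pool.submit(() -> expensiveCpuBoundWork(id)));
        }

        long total = 0;
        for (Future<Long> f : results) total += f.get();  // gather results
        System.out.println("total = " + total);

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    // Hypothetical placeholder for a CPU-bound work item.
    static long expensiveCpuBoundWork(int id) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i ^ id;
        return sum;
    }
}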
O(N) is the key to re-enabling the free lunch [3] and keeping lots of cores busy, because it lets us express applications that can run on yesterday's single-core machine, run better on today's four-core machine, run better still on tomorrow's 64-core machine, and so on, until we exhaust the application's inherent limit of CPU-boundedness. For a thread-pool-driven program, on a single-core machine the system can spin up a single worker thread that runs the program by serially pulling chore after chore from the sea and executing each; on an eight-core machine, it can spin up eight threads; and so on [4].
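That pull-chores-from-the-sea behavior can be sketched by hand, assuming a simple shared queue of Runnable chores (illustrative only; a real pool must do more, such as handling blocking chores as note [4] describes):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: a hand-rolled version of what a pool does. One worker per
// hardware thread drains a shared sea of chores; on a single-core
// machine this degenerates to one thread running the chores serially.
public class SeaOfChores {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> sea = new LinkedBlockingQueue<>();
        for (int i = 0; i < 40; i++) {
            final int id = i;
            sea.add(() -> System.out.println("chore " + id + " on "
                    + Thread.currentThread().getName()));
        }

        int n = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[n];
        for (int w = 0; w < n; w++) {
            workers[w] = new Thread(() -> {
                Runnable chore;
                while ((chore = sea.poll()) != null) {  // pull until empty
                    chore.run();
                }
            }, "worker-" + w);
            workers[w].start();
        }
        for (Thread t : workers) t.join();
    }
}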
As with O(K), O(N) can have runtime costs even on a single-core machine. In addition to the costs mentioned for O(K), there can be extra work inherent in divide-and-conquer techniques (e.g., reductions such as piecing together a grand total from intermediate results), and the cost of locking shared data can now also increase. However, by applying concurrency, we can often get good scalability that far offsets the overhead.
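For example, a parallel sum pays for an extra combining step at every level of the recursion, work that a serial loop never does. A minimal fork/join sketch, where the CUTOFF value is an illustrative assumption:

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch: the reduction overhead of divide-and-conquer. Each recursive
// level does extra work (adding two partial sums) that a plain serial
// loop would not incur.
class ParallelSum extends RecursiveTask<Long> {
    private static final int CUTOFF = 100_000;  // illustrative threshold
    private final long[] a;
    private final int lo, hi;                   // sums a[lo..hi)

    ParallelSum(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= CUTOFF) {                // small enough: sum serially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        ParallelSum left = new ParallelSum(a, lo, mid);
        left.fork();                            // left half runs concurrently
        long right = new ParallelSum(a, mid, hi).compute();
        return left.join() + right;             // the reduction: piece together
    }

    public static void main(String[] args) {
        long[] data = new long[10_000_000];
        Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println(total);              // prints 10000000
    }
}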
Using O(N) is highly desirable for software that expects to run on a variety of current and future hardware having variable amounts of hardware parallelism, which now happens to be the target for all mainstream desktop and server software. If your application doesn't currently have key CPU-bound operations that are amenable to full O(N) parallelization, don't give up: Consider finding new desirable features that are amenable to O(N), or at least more O(K), and you will still be able to deliver software that runs well on today's hardware, better on tomorrow's hardware, and better still on future systems.
Notes
[1] I use "cores" as a simple shorthand measure of execution hardware parallelism. For applications running on one local machine, the appropriate measure is usually "total hardware threads," meaning #sockets × #cores/socket × #threads/core; for distributed applications, the appropriate measure is usually "nodes."
[2] H. Sutter. "The Pillars of Concurrency" (DDJ, 32(7), August 2007, http://ddj.com/dept/64bit/200001985).
[3] H. Sutter. "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" (DDJ, 30(3), March 2005, http://ddj.com/dept/webservices/184405990).
[4] The details are more complex. For example, a pool should spin up extra worker threads to keep the system busy whenever one chore blocks to wait for some event and so temporarily idles its worker thread.
Herb is a software architect at Microsoft and chair of the ISO C++ Standards committee. He can be contacted at www.gotw.ca.