Reducing Context Switches
We can eliminate some of the context switches by being smarter about the last line, which combines the results. Calling value() three times in sequence may wake the calling thread twice only for it to immediately sleep again; if your futures library has a "wait for a group of futures" facility, use it instead, because a well-written version can eliminate the needless wakeups.
We can also avoid a switch by observing that the calling thread has nothing to do but wait for the results anyway, so it's often a good idea to keep the tail chunk of work and do it ourselves rather than needlessly idle the original thread.
Example 4 shows both techniques in action:
// Example 4: Improved parallel code for MyApp 2.0
//
int NetSales() {
  // perform the subcomputations concurrently
  future<int> wholesale = pool.run( [] { return CalcWholesale(); } );
  future<int> retail = pool.run( [] { return CalcRetail(); } );
  int returns = TotalReturns();   // keep the tail work
  // now block for the results: wait once, not twice
  wait_all( wholesale, retail );
  return wholesale.value() + retail.value() - returns;
}
Naturally, what matters most is not the absolute overhead, but its size relative to the work. We want the cost of each chunk of work to be significantly larger than the overhead of running it asynchronously instead of synchronously.
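For instance, here is a minimal sketch of one way to apply that rule of thumb, written in standard C++ with std::async standing in for the pool used above. The kMinChunkSize threshold and the SumChunked function are hypothetical, invented for illustration; a real value would come from measurement on the target system:

// Sketch: only go asynchronous when the chunk is big enough
#include <future>
#include <numeric>
#include <vector>

// Hypothetical threshold: below this, the spawn/switch overhead
// would swamp the work. Illustrative only; tune by measurement.
constexpr std::size_t kMinChunkSize = 10'000;

long long SumChunked(const std::vector<int>& v) {
    const std::size_t half = v.size() / 2;
    if (half < kMinChunkSize) {
        // Chunks too small: just compute sequentially.
        return std::accumulate(v.begin(), v.end(), 0LL);
    }
    // Chunks large enough: run the first half asynchronously and
    // keep the tail chunk on the calling thread, as in Example 4.
    auto first = std::async(std::launch::async, [&] {
        return std::accumulate(v.begin(), v.begin() + half, 0LL);
    });
    long long second = std::accumulate(v.begin() + half, v.end(), 0LL);
    return first.get() + second;
}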
The Cost of Unrealized Concurrency
A key cost in today's environments is the cost of unrealized concurrency: What does our parallel code cost, compared to the sequential code, when the parallel algorithm actually ends up running sequentially? For example, what happens if our parallel-ready code executes on a machine with just one core, so that the tasks run one after another anyway (e.g., there's only one pool thread)? We've paid the overhead of expressing concurrency, and we pay it even on systems where we get no benefit from it.
If Example 1 is the code we shipped in MyApp version 1.0 and Example 4 is what we'll ship in MyApp 2.0, then an existing customer with a legacy single-core machine may find that the new application is actually slower than the old one, even though the new application will run better when more cores are available. To mitigate this on low- and single-core machines, we can reduce the overhead by adjusting granularity to use fewer and larger chunks of work, or even switch to a sequential implementation.
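One common form of that mitigation, sketched here in standard C++: query the hardware at run time and take the sequential path when there is only one core to use. The std::async calls stand in for the pool used in Example 4, and NetSalesSequential is a hypothetical name for the Example 1 code:

// Sketch: fall back to the sequential path on single-core machines
#include <future>
#include <thread>

int CalcWholesale();
int CalcRetail();
int TotalReturns();

int NetSalesSequential() {    // the MyApp 1.0 path (Example 1)
    return CalcWholesale() + CalcRetail() - TotalReturns();
}

int NetSales() {
    // Unrealized concurrency is pure overhead, so skip the parallel
    // machinery when there's at most one core (0 means "unknown",
    // which we conservatively treat the same way here).
    if (std::thread::hardware_concurrency() <= 1)
        return NetSalesSequential();

    // Otherwise run the subcomputations concurrently, keeping the
    // tail chunk on the calling thread as in Example 4.
    auto wholesale = std::async(std::launch::async, CalcWholesale);
    auto retail    = std::async(std::launch::async, CalcRetail);
    int returns = TotalReturns();
    return wholesale.get() + retail.get() - returns;
}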