In the first two Effective Concurrency columns, "The Pillars of Concurrency" (DDJ, August 2007) and "How Much Scalability Do You Have or Need?" (DDJ, September 2007), we saw the three pillars of concurrency and what kinds of concurrency they express:
- Pillar 1: Isolation via asynchronous agents. This is all about doing work asynchronously to gain isolation and responsiveness, using techniques like messaging. Some Pillar 1 techniques, such as pipelining, also enable limited scalability, but this category principally targets isolation and responsiveness.
- Pillar 2: Scalability via concurrent collections. This is all about re-enabling the free lunch: being able to ship applications that get faster on machines with more cores, by using those cores to get the answer sooner through parallelism in algorithms and data.
- Pillar 3: Consistency via safely shared resources. While doing all of the above, we need to avoid races and deadlocks when using shared objects in memory or any other shared resources.
My last few columns have focused on Pillar 3, and I've referred to lock levels or lock hierarchies as a way to control deadlock. A number of readers have asked me to explain what those are and how they work, so I'll write one more column about Pillar 3 (for now).
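As a preview of the idea, here is a minimal sketch of one way a leveled mutex can be built on top of std::mutex. The class name leveled_mutex and its internals are my illustration here, not code from the earlier columns: each mutex is assigned a numeric level, a thread-local variable tracks the lowest level the current thread currently holds, and any attempt to acquire a lock at an equal or higher level fails fast instead of silently risking a deadlock cycle.

```cpp
#include <cassert>
#include <climits>
#include <mutex>

// Illustrative sketch, not the column's code: a mutex that asserts
// lock-ordering discipline at runtime.
class leveled_mutex {
public:
    explicit leveled_mutex(int level) : level_(level) {}

    void lock() {
        // A thread may only acquire a mutex at a strictly lower level
        // than the lowest level it already holds; acquiring at an equal
        // or higher level would permit an ordering cycle, hence deadlock.
        assert(level_ < current_level && "lock hierarchy violation");
        m_.lock();
        previous_level_ = current_level;  // safe to write: we now own m_
        current_level = level_;
    }

    void unlock() {
        current_level = previous_level_;  // restore before releasing m_
        m_.unlock();
    }

private:
    std::mutex m_;
    const int level_;
    int previous_level_ = 0;
    // Lowest level held by the current thread; starts above all levels
    // so the first acquisition always succeeds.
    static thread_local int current_level;
};

thread_local int leveled_mutex::current_level = INT_MAX;
```

Because lock() and unlock() match the standard lockable interface, std::lock_guard works unchanged, and every thread is forced to acquire locks in one global order, so no cycle of waiting threads can form:

```cpp
leveled_mutex table_mutex(2);  // higher level: must be acquired first
leveled_mutex log_mutex(1);    // lower level: acquired second

void update() {
    std::lock_guard<leveled_mutex> hold_table(table_mutex);
    std::lock_guard<leveled_mutex> hold_log(log_mutex);  // OK: 1 < 2
    // Acquiring table_mutex while holding only log_mutex would assert,
    // turning a potential deadlock into a deterministic test failure.
}
```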