Different Ways to Spell "critical section"
A "critical section" is a region of code that shall execute in isolation with respect to some or all other code in the program, and every kind of synchronization you've ever heard of is a way to express critical sections. Today's most common synchronization tool is the humble lock, and every region of code that executes under a lock is a critical section. For example, consider again the earlier code:
mut.lock();      // enter critical section (acquire lock)
... read/write x ...
mut.unlock();    // exit critical section (release lock)
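To see the lock-based style end to end, here is a minimal self-contained sketch using C++11's std::mutex and std::thread. The names mut and x match the fragment above; the two writer threads and the increment are purely illustrative and not part of the original example.

#include <iostream>
#include <mutex>
#include <thread>

std::mutex mut;
int x = 0;                        // shared data protected by mut

void writer() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> hold(mut);   // enter critical section (acquire lock)
        ++x;                                     // read/write x
    }                                            // exit critical section (release lock)
}

int main() {
    std::thread t1(writer), t2(writer);
    t1.join();
    t2.join();
    std::cout << x << "\n";       // always 200000: each ++x ran in isolation
}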
The next most common synchronization tools, used by wizards and gurus only, are varieties of lock-free coding. Beyond a handful of well-known patterns such as Double-Checked Locking, lock-free styles are typically too hard to use directly in normal programming, and they are usually used inside the implementations of other abstractions (for example, in the implementation of a mutex class, an internally synchronized lock-free hashed container, or an OS kernel facility).
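As one concrete illustration of the well-known patterns just mentioned, here is a sketch of Double-Checked Locking written with C++11 atomics. The Widget type, the instance pointer, and getInstance are hypothetical names for illustration only, not anything from the article's examples.

#include <atomic>
#include <mutex>

struct Widget { /* ... */ };

std::atomic<Widget*> instance(nullptr);
std::mutex initMut;

Widget* getInstance() {
    Widget* p = instance.load(std::memory_order_acquire);   // first check, no lock
    if (p == nullptr) {
        std::lock_guard<std::mutex> hold(initMut);           // slow path: take the lock
        p = instance.load(std::memory_order_relaxed);        // second check, under the lock
        if (p == nullptr) {
            p = new Widget;
            instance.store(p, std::memory_order_release);    // publish the fully constructed object
        }
    }
    return p;
}

int main() {
    Widget* w = getInstance();    // safe to call from any number of threads
    (void)w;
}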
The first lock-free style uses atomic variables (Java/.NET volatile, C++0x atomic) that enjoy special semantics with compiler and processor support. Consider an example similar to the aforementioned code, but written in a lock-free style, where myTurn is an atomic variable protecting x:
while( !myTurn ) { }   // enter critical section (spin read)
... read/write x ...
myTurn = false;        // exit critical section (write)
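Here is the same spin-on-a-flag idea made self-contained with a C++11 std::atomic<bool>. The names myTurn and x match the fragment; the ping-pong between two threads is illustrative only, and the default (sequentially consistent) atomic operations supply the acquire and release semantics described below.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> myTurn(true);   // true: thread A may touch x; false: thread B may
int x = 0;

void threadA() {
    while (!myTurn.load()) { }    // enter critical section (spin read)
    ++x;                          // read/write x
    myTurn.store(false);          // exit critical section (write hands the turn to B)
}

void threadB() {
    while (myTurn.load()) { }     // wait until A hands over the turn
    ++x;
    myTurn.store(true);
}

int main() {
    std::thread a(threadA), b(threadB);
    a.join();
    b.join();
    std::cout << x << "\n";       // 2: the two increments never overlap
}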
The second lock-free coding style is to use explicit fences (also called "barriers") such as Linux mb(), or special functions with ordering semantics such as Win32 InterlockedCompareExchange. These tools express ordering constraints by placing explicit checkpoints where key variables are used in your code. Protecting a shared variable using these tools is difficult because the fences have to be written correctly at every point you use the variable (as opposed to once at the declaration of the variable to declare it inherently atomic), and their semantics are subtle and tend to vary from platform to platform. Here is one way to write the corresponding example, where myTurn is now just an ordinary variable (that must still be readable and writable atomically) and is given special semantics by applying a simple fence at each point of use so that it still correctly protects x:
while( !myTurn ) { }   // enter critical section
mb();                  // (atomic spin read + fence)
... read/write x ...
mb();                  // exit critical section
myTurn = false;        // (fence + atomic write)
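For readers without a platform mb(), here is a portable rendering of the same fence-based style, substituting C++11 std::atomic_thread_fence for the platform-specific barrier. One assumption here: myTurn is declared as a std::atomic so its raw loads and stores stay indivisible, while the explicit fences supply the ordering.

#include <atomic>

std::atomic<bool> myTurn(true);
int x = 0;

void useX() {
    while (!myTurn.load(std::memory_order_relaxed)) { }     // enter critical section
    std::atomic_thread_fence(std::memory_order_acquire);    // (spin read + fence)

    ++x;                                                    // read/write x

    std::atomic_thread_fence(std::memory_order_release);    // exit critical section
    myTurn.store(false, std::memory_order_relaxed);         // (fence + atomic write)
}

int main() {
    useX();   // single-threaded call just to exercise the path
}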
The key point is that all lock-based and lock-free styles are just different ways to express the same fundamental concept: the exclusively held critical section.
Memory Reordering
Figure 1 shows the canonical form and rules governing a critical section. Like any transaction, a critical section follows the basic acquire-work-release structure.
Compilers and processors routinely execute your code out of the order you specified in your source file, in order to make it run faster. For example, compiler optimizers want to help you by hoisting invariant calculations out of a loop; that means moving the instructions out of the loop body and actually executing them before the beginning of the loop. Processors want to help you by hiding the cost of accessing memory; that means moving expensive memory instructions ahead so that they can start sooner and overlap by having several in flight at the same time.
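To make the hoisting example concrete, here is a small sketch (the function and variable names are mine, for illustration only) of the kind of transformation an optimizer may apply on its own:

// What you wrote, and roughly what actually executes after optimization.
void scaleAll(double* v, int n, double a, double b) {
    for (int i = 0; i < n; ++i)
        v[i] *= a * b;            // written: a*b "computed" on every iteration

    // Effectively executed after hoisting:
    //   double s = a * b;        // invariant computation moved before the loop
    //   for (int i = 0; i < n; ++i)
    //       v[i] *= s;
}

int main() {
    double v[4] = { 1, 2, 3, 4 };
    scaleAll(v, 4, 2.0, 3.0);
}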
All of this reordering is fine as long as your program can't tell the difference, and the definition of "the program can't tell the difference" is "everybody respects the critical sections." Specifically:
- The programmer shall correctly eliminate races using critical sections.
- All reordering transformations shall respect the critical sections (and normal sequential control-flow dependencies, as has always been done for plain old sequential optimizations).
So, for a reordering transformation to be valid, it must respect the program's critical sections by obeying the one key rule of critical sections: Code can't move out of a critical section. (It's always okay for code to move in.) We enforce this golden rule by requiring symmetric one-way fence semantics for the beginning and end of any critical section, illustrated by the arrows in Figure 1 and sketched in code after the two rules below:
- Entering a critical section is an acquire operation, or an implicit acquire fence: Code can never cross the fence upward, that is, move from an original location after the fence to execute before the fence. Code that appears before the fence in source code order, however, can happily cross the fence downward to execute later.
- Exiting a critical section is a release operation, or an implicit release fence: This is just the inverse requirement that code can't cross the fence downward, only upward. It guarantees that any other thread that sees the final release write will also see all of the writes before it. [2]
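Here is a minimal sketch of those two rules in code: a toy spinlock whose lock() is an acquire operation and whose unlock() is a release operation, so that code inside the critical section cannot move out past either end. The class and function names are mine, not the article's; a real mutex would also deal with fairness and contention.

#include <atomic>
#include <thread>

class SpinLock {
    std::atomic<bool> locked;
public:
    SpinLock() : locked(false) { }

    void lock() {                                            // entering: acquire operation
        while (locked.exchange(true, std::memory_order_acquire)) { }
    }
    void unlock() {                                          // exiting: release operation
        locked.store(false, std::memory_order_release);
    }
};

SpinLock sl;
int x = 0;

void update() {
    sl.lock();      // acquire fence: later code can't move above this line
    ++x;            // stays inside the critical section
    sl.unlock();    // release fence: earlier code can't move below this line
}

int main() {
    std::thread t1(update), t2(update);
    t1.join();
    t2.join();
}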