What's Under the Hood
The Cell contains:
- One general-purpose 64-bit processor, the Power Processing Element (PPE).
- Eight simpler processors, the Synergistic Processing Elements (SPEs).
- And a bus, the Element Interconnect Bus (EIB), that connects the PPE and SPEs.
The PPE is a 64-bit processor with a PowerPC instruction set, 64 KB of L1 cache, and 512 KB of L2. Like Intel's Hyper-Threading, it supports simultaneous multithreading, but its design is remarkably simpler than that of a Pentium or an Opteron.
SPEs are different. They have 128-bit registers and SIMD (single instruction, multiple data) instructions that can simultaneously process the four 32-bit words inside each register. Plus, there are so many registers (128) that you can unroll loops many times before running out of them. This is ideal for dataflow-based applications.
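For a taste of what SPE code looks like, here is a minimal SIMD sketch using the C intrinsics from the SDK's spu_intrinsics.h header; the function and array names are our own, and the arrays are assumed to be 16-byte aligned:

```c
#include <spu_intrinsics.h>

/* Multiply-accumulate y[i] += a * x[i] over n floats (n a multiple of 4).
   Recasting the arrays as vector float lets each spu_madd process
   four 32-bit lanes in a single instruction. Pointers must be
   16-byte aligned for the casts to be valid on the SPE. */
void saxpy_spe(float a, const float *x, float *y, int n)
{
    const vector float *vx = (const vector float *) x;
    vector float       *vy = (vector float *) y;
    vector float        va = spu_splats(a);   /* replicate a into all 4 lanes */
    int i;

    for (i = 0; i < n / 4; i++)
        vy[i] = spu_madd(va, vx[i], vy[i]);   /* 4 fused multiply-adds at once */
}
```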
But the most radical peculiarity for programmers is that SPEs have no cache memory. Instead, each has a 256-KB scratchpad memory called the "local store" (LS). This makes SPEs small and efficient, because caches cost silicon area and electrical power, but it complicates things for programmers. All the variables you declare are allocated in the LS and must fit there. Larger data structures in main memory can be accessed one block at a time; it is your responsibility to load/store blocks from/to main memory via explicit DMA transfers.

You have to design your algorithms to operate on one small, LS-sized block of data at a time: when they finish with a block, they commit the results to main memory and fetch the next block. In a way, this feels like the old DOS days, when everything had to fit in the (in)famous 640 KB. On the other hand, an SPE's local store (256 KB) is much larger than most L1 data caches (a Xeon has just 32 KB). This is one of the reasons why a single SPE outperforms the highest-clocked Pentium Xeon core by a factor of three on many benchmarks.
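To make the DMA discipline concrete, here is a hedged sketch of the basic get/compute/put pattern using the MFC calls from the SDK's spu_mfcio.h; the block size, alignment, and names are our own assumptions:

```c
#include <spu_mfcio.h>

#define BLOCK 16384   /* 16 KB, the maximum size of a single DMA transfer */

/* LS buffer; 128-byte alignment gives the best DMA performance. */
static float block[BLOCK / sizeof(float)] __attribute__((aligned(128)));

/* Process one main-memory block. 'ea' is the 64-bit effective address
   of the block in main memory, typically passed in by the PPE. */
void process_block(unsigned long long ea)
{
    unsigned int tag = 0;                  /* DMA tag group 0 */

    mfc_get(block, ea, BLOCK, tag, 0, 0);  /* main memory -> LS */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();             /* block until the transfer completes */

    /* ... compute on block[] here ... */

    mfc_put(block, ea, BLOCK, tag, 0, 0);  /* LS -> main memory */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}
```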
The PPE, SPEs, and memory controllers are connected by the EIB. The EIB contains four data rings, two of which run clockwise and two counter-clockwise. It operates at 1.6 GHz and reaches aggregate transfer rates in excess of 200 GB/second. It employs point-to-point connections, similar to the networks in high-performance clusters and supercomputers. Therefore, Cell programmers face issues of process mapping and congestion control, the traditional problems of parallel computing. Additionally, the larger the blocks, the higher their EIB transfer efficiency. So programmers are pressured to keep data structures small enough to fit in the LS, but large enough to be transferred efficiently on the EIB.
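The standard answer to this tension is double buffering: while the SPE computes on one block, the MFC streams the next one into a second buffer, hiding transfer latency behind computation. A minimal sketch, with our own naming, building on the MFC calls above (write-back via mfc_put is omitted for brevity):

```c
#include <spu_mfcio.h>

#define BLOCK 16384

static float buf[2][BLOCK / sizeof(float)] __attribute__((aligned(128)));

/* Stream nblocks blocks from effective address 'ea', overlapping the
   DMA of block i+1 (on one tag) with computation on block i (on the
   other tag). */
void process_stream(unsigned long long ea, int nblocks)
{
    int cur = 0, i;

    mfc_get(buf[0], ea, BLOCK, 0, 0, 0);   /* prefetch the first block on tag 0 */
    for (i = 0; i < nblocks; i++) {
        if (i + 1 < nblocks)               /* start the next transfer early */
            mfc_get(buf[1 - cur],
                    ea + (unsigned long long)(i + 1) * BLOCK,
                    BLOCK, 1 - cur, 0, 0);

        mfc_write_tag_mask(1 << cur);      /* wait only for the current block */
        mfc_read_tag_status_all();

        /* ... compute on buf[cur] ... */

        cur = 1 - cur;                     /* swap buffers and tags */
    }
}
```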
Unfortunately, the compiler won't help you with parallelization, choice of optimal data structure size, scheduling of transfers, SIMDization, loop unrolling, and the like. You have to do that manually.
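For instance, getting value out of those 128 registers means unrolling loops yourself. A hedged sketch, extending the SIMD loop shown earlier (n is assumed to be a multiple of 16 floats):

```c
#include <spu_intrinsics.h>

/* The same multiply-accumulate loop, manually unrolled 4x: the extra
   independent spu_madd calls keep more vector values live in the SPE's
   128 registers and help hide instruction latency. */
void saxpy_unrolled(float a, const float *x, float *y, int n)
{
    const vector float *vx = (const vector float *) x;
    vector float       *vy = (vector float *) y;
    vector float        va = spu_splats(a);
    int i;

    for (i = 0; i < n / 4; i += 4) {
        vy[i]     = spu_madd(va, vx[i],     vy[i]);
        vy[i + 1] = spu_madd(va, vx[i + 1], vy[i + 1]);
        vy[i + 2] = spu_madd(va, vx[i + 2], vy[i + 2]);
        vy[i + 3] = spu_madd(va, vx[i + 3], vy[i + 3]);
    }
}
```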
The quickest way to get started with Cell programming is with the Cell SDK (www.ibm.com/developerworks/power/cell), which contains a full system simulator. To profile applications (including data transfers), you need a real system: a Mercury Computer Systems development board (www.mc.com) or a Sony PlayStation 3. Mercury's board has two DD3 Cell processors clocked at 3.2 GHz, running Linux kernel 2.6.16 with the GCC 4 compiler set. The PlayStation 3 has a single Cell, and the Fedora Core 5 distribution reportedly runs on it (ps3.qj.net).
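To give a flavor of the host-side code, here is a minimal PPE program that loads and runs an SPE binary. It is sketched against the libspe2 interface shipped with recent SDK releases; the embedded program handle spe_kernel is a placeholder for your own SPE executable:

```c
#include <stdio.h>
#include <libspe2.h>

/* SPE image embedded into the PPE binary at link time (for example,
   with ppu-embedspu); "spe_kernel" is a placeholder name. */
extern spe_program_handle_t spe_kernel;

int main(void)
{
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_context_ptr_t ctx = spe_context_create(0, NULL);

    if (ctx == NULL) {
        perror("spe_context_create");
        return 1;
    }
    spe_program_load(ctx, &spe_kernel);                 /* copy the image into the LS */
    spe_context_run(ctx, &entry, 0, NULL, NULL, NULL);  /* blocks until the SPE stops */
    spe_context_destroy(ctx);
    return 0;
}
```

Because spe_context_run occupies the calling thread until the SPE stops, programs that use several SPEs typically spawn one pthread per context.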