A Simple Example
Consider this pseudo-C example function:
T8 f0( T1 t1, T2 t2 )
{
    T3 t3; T4 t4; T5 t5; T6 t6; T7 t7; T8 t8;

    t3 = f1( t1 );
    t4 = f2( t2 );
    t5 = f3( t2 );
    t6 = f4( t3, t4, t5 );
    t7 = f5( t3, t4 );
    t8 = f6( t6, t7 );
    return t8;
}
If you analyze the code in the function and assume, for now, that the called functions have no relevant side effects, you can produce an informal dependency diagram such as Figure 4.
In Figure 4:
- "Boxes" represent data objects
- Circles represent sequential user functions
- Arrows represent data flow
In this informal example:
- Diverging arrows represent "distribution"
- Converging arrows represent "collection"
Functions become executable when all of their inputs have been collected. So in this example, f4 cannot execute until f1, f2, and f3 have all completed, but the order in which they complete does not concern f4. Anecdotally at least, most people appear to grasp the logical dependencies, and hence the logical concurrency, more rapidly and readily from Figure 4 than from the sequential text it represents.
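To make the dependency structure concrete, here is a minimal sketch, written in plain C++ rather than Blueprint's notation, that expresses the same graph with futures. The types and functions are trivial integer stand-ins for T1...T8 and f1...f6; the point is only that f4 and f5 each block on exactly the inputs they collect, regardless of completion order.

// Hedged sketch: the f0 dependency graph expressed with C++ futures.
// All types and function bodies are illustrative stand-ins, not Blueprint output.
#include <future>
#include <iostream>

int f1(int t1)                 { return t1 + 1; }
int f2(int t2)                 { return t2 * 2; }
int f3(int t2)                 { return t2 - 1; }
int f4(int t3, int t4, int t5) { return t3 + t4 + t5; }
int f5(int t3, int t4)         { return t3 * t4; }
int f6(int t6, int t7)         { return t6 - t7; }

int f0(int t1, int t2) {
    // f1, f2, and f3 have no mutual dependencies, so they may run concurrently.
    // shared_future lets both f4 and f5 consume t3 and t4 (a non-destructive read).
    auto sf3 = std::async(std::launch::async, f1, t1).share();
    auto sf4 = std::async(std::launch::async, f2, t2).share();
    auto sf5 = std::async(std::launch::async, f3, t2).share();

    // f4 collects t3, t4, and t5; f5 collects only t3 and t4. Each get() blocks
    // until that particular input is ready; completion order is irrelevant.
    auto ft6 = std::async(std::launch::async,
                          [=] { return f4(sf3.get(), sf4.get(), sf5.get()); });
    auto ft7 = std::async(std::launch::async,
                          [=] { return f5(sf3.get(), sf4.get()); });

    // f6 collects t6 and t7.
    return f6(ft6.get(), ft7.get());
}

int main() {
    std::cout << f0(3, 4) << '\n';
}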
Separation of Concerns
More importantly, information concerning the data objects, and the operations performed by the functions f1...f6, is not required in order to do this; likewise, the data objects and functions that operate on them are not concerned with scheduling. This highlights the fact that at a high enough level of abstraction, algorithmic logic and scheduling logic are separable -- and in this instance, one is textual and the other visual.
The concurrency description is localized rather than dispersed across the application, and is uncluttered by algorithmic information that is not relevant to concurrency. When we combine the two descriptions, we have a concurrent description that makes no assumptions about its target hardware (this will be addressed later). Figure 5 shows a formal, machine-translatable Blueprint implementation of the informal dependency diagram.
Among other things, the symbolism disambiguates branching and merging. Branching typically implies "sharing" but could equally well mean "competing"; merging may imply "collection" (wait for all), but could also mean "multiplexing" (wait for any). The symbolism also addresses issues such as destructive versus non-destructive reads. Figure 5 illustrates how "event operators" (e.g., collectors and distributors) precisely describe the required execution constraints.
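As a rough illustration of the two merge semantics, and assuming nothing about Blueprint's actual generated code, the distinction can be modeled with a pair of small classes: a collector that fires its action once all of its inputs have arrived, and a multiplexer that fires on the first arrival and ignores the rest. The class names here are hypothetical.

// Hedged sketch of "wait for all" versus "wait for any" merge semantics.
#include <atomic>
#include <functional>

class Collector {                     // "collection": wait for all N inputs
    std::atomic<int> remaining;
    std::function<void()> action;
public:
    Collector(int n, std::function<void()> a)
        : remaining(n), action(std::move(a)) {}
    void arrive() {
        // Whichever input arrives last triggers the action.
        if (remaining.fetch_sub(1) == 1) action();
    }
};

class Multiplexer {                   // "multiplexing": wait for any one input
    std::atomic<bool> fired{false};
    std::function<void()> action;
public:
    explicit Multiplexer(std::function<void()> a) : action(std::move(a)) {}
    void arrive() {
        // Only the first arrival wins; later inputs are ignored.
        if (!fired.exchange(true)) action();
    }
};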
The translator compiles the diagrams into runtime calls, much as high-level language compilers generate assembler/object code; and since developer code and generated code are kept separate, the details of the generated code need not concern the developer. Because locks and other synchronization mechanisms are machine-managed, their granularity is not limited by what a human can reasonably keep track of; the generated code can therefore lock at a very fine granularity and minimize the losses predicted by Amdahl's law.
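To see why shrinking the serialized fraction matters, recall Amdahl's law: the speedup on n cores is 1 / ((1 - p) + p/n), where p is the fraction of execution that can run in parallel. The short program below (with illustrative numbers only, not measurements of Blueprint) shows how much headroom finer-grained locking can buy.

// Amdahl's law: why reducing lock contention (raising p) pays off.
#include <cstdio>

double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    // Suppose coarse locks leave 20% of execution serialized, while
    // fine-grained machine-generated locks shrink that to 2% (illustrative).
    std::printf("p=0.80, 16 cores: %.2fx\n", amdahl(0.80, 16)); // ~4.0x
    std::printf("p=0.98, 16 cores: %.2fx\n", amdahl(0.98, 16)); // ~12.3x
}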
Conclusion
What Blueprint's visual approach aims to do is separate an application's scheduling logic from its algorithmic/business logic. It replaces "order" with "connectivity" and thus exposes concurrency in a way that avoids "reading". In particular, it allows branching and merging operations to be unambiguously specified, which in turn allows the scheduling logic to be decomposed into simultaneously visible, but independently analyzable, strands of execution. The order in which your eyes scan a Blueprint circuit only matters if you are trying to extract some particular information, such as data flow or event flow; the parallelism and dependencies can be seen at a glance (the branching and merging are obvious).
For More Information
- Multi-Core OO: Part 1
- Multi-Core OO: Part 2
- Multi-Core OO: Part 3
- Multi-Core OO: Part 4
- Multi-Core OO: Part 5