While interest in aspect-oriented software development (AOSD) has grown substantially since I first heard of it in former PARC scientist Gregor Kiczales’s keynote at OOPSLA ’97, it’s still primarily a topic for researchers rather than in-the-trenches developers. Most proponents tout the value of aspects for adding cross-cutting features—security, logging, persistence, debugging, tracing, distribution, performance monitoring—to a base system. These are certainly laudable properties. However, a far greater purpose for aspects lies just ahead.
Aspects could prove most powerful in application development, allowing us to add new user features or use cases to an application for many years to come. I believe the AOSD concept will let us gracefully extend an existing software system iteration by iteration and release by release for the system’s entire lifetime. This will substantially improve software quality, and it will also have a dramatic impact on the way we develop. In fact, it will have a dramatic impact on all the traditional measures of software: cost, quality and time to market.
As early as 1978, when I still worked as a software architect, I suggested that techniques we now call aspect-oriented would cut lifecycle costs by more than 20 percent. This is a huge reduction, and to be honest, I think even that number is conservative.
Thus, it’s not just for posterity’s sake that I now present a case study I conducted many years ago. My conclusions then suggested that we were missing programming language support for ideas now underlying aspect orientation. In part 2 of this article, I’ll discuss how “the missing link” that has now been realized in tools from Kiczales’s PARC team (AspectJ) and Harold Ossher’s IBM team (Hyper/J) could seamlessly link to use case–driven development and provide us with a mature AOSD approach.
The Case Study
In 1978, while I was at Ericsson, we successfully designed a system that could be modified for years to come: It was composed entirely of customizable components interconnected through well-defined interfaces. To introduce a new feature or use case, all we usually had to do was add a new component or change an existing one. Our telecommunications switching system was considered superior to any competing product, and everyone seemed happy about it. However, I wasn’t satisfied, for two reasons: One, a component usually contained not only code to realize part of a dominant use case, but also small pieces of many other use cases; and two, a use case was usually realized by code allocated to several interconnected components.
These two decomposition effects are now referred to as tangling and scattering. Indeed, in 1967, these well-known effects were used as ammunition against component technology. Fortunately, components won out. Still, tangling and scattering had to be dealt with. Every time a use case was modified or introduced, we had to change several components. Similarly, every time the underlying system software was upgraded (for example, making the recovery mechanism more fine-grained or adding a logging capability), we had to tweak multiple components.
Articulating the Problem
To get some real metrics behind my 1978 critique, I conducted a small case study of a telecom switching system consisting of hundreds of subsystems. Most of them were reusable for different customers (usually entire countries at that time), but typically each customer had his own set of communication protocols. Therefore, we designed two customer-specific subsystems for each protocol: one for incoming calls, and one for outgoing calls. In my study, I selected a subsystem for outgoing calls and found the following:
- The subsystem’s base function was to realize a part of the use case Make Telephone Call Using Protocol X. That use case was realized by many subsystems, but the protocol-specific part was allocated to the subsystem I studied. Interestingly, just 40 percent of the code in the subsystem was there to realize the base function.
- The rest of the code realized small parts of 23 other use cases; for example, code to block a telephone line using the protocol, to supervise the alarm level on a group of telephone lines using the protocol, to measure the traffic over these lines, to restart the lines in case a software error occurred, and to support the distribution of the subsystem over several computational nodes.
- About 80 percent of this code was generated with templates. The component designer (programmer) knew only that these templates had to be used, but did not have an in-depth understanding of their purpose—not very creative work, to say the least.
- We expected that the number of new use cases that would be part of the subsystem would continue to grow in the future.
The report was received with interest, but with a degree of complacency: “This is the nature of software—use cases cross components—and there’s nothing to be done about it.” I disagreed.
Going through the 23 other use-case parts in the case study, I made some notes:
- Twelve use-case parts were noninvasive: They were very simple additions to the subsystem that could be added without changing the behavior of any other part; they just needed read access to the objects shared with other parts.
- Eight use-case parts were extensions to the base use case but did not change it: These parts required access to the execution sequence in the base use-case part (the telephone call use case). When the base use case was executed and passed a specific point in its code, these use-case parts needed to be invoked. The execution would always return to the point of invocation.
- Three use-case parts had a major impact on the base use case: These parts needed write access to the shared objects, and they would also change the execution sequence of the base use-case part.
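With hindsight, these three categories map neatly onto the advice kinds of modern aspect-oriented programming. Here is a minimal sketch in Java with annotation-style AspectJ (all class and method names are hypothetical, chosen only to echo the telecom example):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Hypothetical base component realizing part of Make Telephone Call.
class OutgoingCall {
    int lineId;
    void connect() { /* protocol-specific call setup */ }
}

// Category 1: noninvasive -- reads shared objects, changes no behavior.
@Aspect
class TrafficMeasurement {
    @Before("execution(void OutgoingCall.connect()) && this(call)")
    public void sample(OutgoingCall call) {
        System.out.println("sampling line " + call.lineId);  // read-only access
    }
}

// Category 2: extends the base sequence; control always returns to the base.
@Aspect
class AlarmSupervision {
    @Before("execution(void OutgoingCall.connect())")
    public void checkAlarmLevel() { /* runs at the point, then base continues */ }
}

// Category 3: major impact -- may write shared state and change the sequence.
@Aspect
class LineBlocking {
    @Around("execution(void OutgoingCall.connect())")
    public void maybeBlock(ProceedingJoinPoint pjp) throws Throwable {
        if (lineIsBlocked()) { return; }  // divert: skip the base behavior
        pjp.proceed();                    // otherwise run the base use-case part
    }
    private boolean lineIsBlocked() { return false; }
}
```

Note how only the third category can rewrite the base’s state and flow; the first two leave it untouched.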
The solution seemed obvious: If we could keep use cases separate, even while they cross several components, and maintain that separation through all lifecycle activities, from requirements via analysis, design and implementation to testing, and, yes, also at runtime, we’d get a system that was dramatically simpler to understand, change and maintain.
The Basic Idea
I was looking for a new kind of modularity to live alongside the component modularity. These new kinds of modules would cross components, and would be composed with other modules of the same kind to provide the system’s complete functional behavior. Composition would occur on all levels, both inside a component and over all components as a whole. I was asking for functional modularity, but that was before use cases—today I’d call it use-case modularity. To achieve this dream, I needed two mechanisms:
- A use-case separation mechanism. Use cases are designed to slice a system into separate usage-related parts. This works very well for use cases that are peers: None of the peers is more basic, mandatory or optional than the others. An example is a telephone call, which is a basic use case in a telecom system. Its peer use cases—different usage-related parts—include local calls, and calls to and from another area or another country.
However, some use cases depend on other, more basic, use cases to work. To separate these, we needed an extension mechanism, which would allow a complex system to be developed (analysis, design, implementation, integration and test) by starting with a base use case and then successively extending the base with more behavior—these are called nonintrusive extensions. The goal was to get easy-to-understand design and code by structuring them from a base and letting the system grow without cluttering the base with statements that had nothing to do with the base, even if the statements were important for the additional behavior.
This extension mechanism would, when applied to use cases, give us extension use cases. On top of a base including use cases, we’d add extension use cases, and when composing the two, we’d get a new base with new extension use cases. In this manner, we could keep most use cases separate all the way down to code and even to executables.
- A use-case composition mechanism. For the system to work, we need to compose or integrate the slices (that is, the separated use cases) into a consistent whole to get executable code. We had several options: the weaving together could occur, for example, at precompile time, at compile time or at execution time. Composing extension use cases with their base, given the extension mechanisms that I worked with at that time, was relatively straightforward. However, we also needed to compose peer use cases that were not separated through extensions. Here we had to compose use cases with overlapping behavior; for instance, two peer use cases may have two operations that are similar, but not identical. This presents a more complex problem that is also more general, since composing extension use cases is just a special case of composing peer use cases.
Of the two mechanisms, I prioritized the ability to separate use cases through extension mechanisms, since that would allow us to keep use cases separate down to the level of a component’s code. Integrating the separate use cases could be done by a component developer. The composition mechanism could be introduced very simply by adding a precompiler to the development environment. Furthermore, in my experience, even this very small technological change had a big payback: Many new features required only simple design extensions, and these kinds of extensions could be tested in a dramatically simpler way—we’d get more bang for the buck.
Thus, my work became focused on extensions for both the use-case separation mechanism and the use-case composition mechanism, and I’d leave the much more complex work on composing use-case peers for the future.
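For the simpler end of the composition problem, where several independently written extensions intervene at the same base point, today’s AOP tools offer a direct counterpart. Below is a minimal sketch in annotation-style AspectJ (hypothetical names throughout), in which an explicit precedence declaration plays the role of the composition rule:

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.DeclarePrecedence;

class Switchboard {
    void routeCall() { /* base behavior */ }
}

// Two extensions, written separately, intervening at the same base point.
@Aspect
class Authorization {
    @Before("execution(void Switchboard.routeCall())")
    public void check() { /* must intervene first */ }
}

@Aspect
class Metering {
    @Before("execution(void Switchboard.routeCall())")
    public void meter() { /* intervenes after authorization */ }
}

// The composition rule: an explicit ordering of the interventions.
@Aspect
@DeclarePrecedence("Authorization, Metering")
class CompositionOrder {}
```

Weaving here can happen at compile or load time; the source of Switchboard is never edited.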
Original Extensions
To explain this concept, I used the following example. The italicized text is—apart from a few modifications—a direct quotation from my 1979 paper, “Use Case Modularity.” Here, I’ve replaced the term function with use case, and reference point is now extension point. Simple explanations are bracketed, and irrelevant text has been replaced by ellipses.
Our … language constructs must be supplemented with a possibility to explicitly change the “flow-of-control.” We will illustrate this with an example; first, how we are doing this today [that is, in 1979] and then a possible further development: Example: Assume that we have two use cases, the Call Handling and the Traffic Recording. [A Call Handling use-case instance is invoked when a calling subscriber takes the phone off the hook: an off-hook signal is sent from the phone to the use-case instance. The use-case instance checks whether traffic recording is requested; if this is the case, a counter is stepped. This counter is stepped if a call is ongoing, and it will subsequently be stepped down when the call is terminated. The next actions are “connect digit receiver,” “send dial tone” to the calling subscriber, and for this example, we don’t need to go further. The other use case is Traffic Recording, the goal of which is to measure the average traffic from subscribers during, say, a 15-minute period. To do that, it will have two flows, only one of which will be shown in the diagram, “Mixed Use Cases.” First, every 10 milliseconds it will visit the set of call counters in the system. It will count the numbers that are stepped and divide that number with the total number of counters—the resulting number is a snapshot of the traffic during a 10-millisecond period. This number is recorded, and so are all similar numbers during a 15-minute period. The other flow will calculate the average traffic over the period in question.] You could consider the existence of the former to be independent of the latter, but not the opposite. However, to construct the Call Handling [use case] you must also have access to knowledge about the Traffic Recording use case. [The result could look like the very simplified image in “Mixed Use Cases.”] In the Call Handling use case, we were forced to include use case parts that are related to traffic recording. This is unfortunate; the use cases should be kept apart. By using a simple technique … this can be avoided. [See “Separation of Concerns.”] Thus, we have introduced a possibility to unambiguously refer to another use-case description and to change the flow-of-control of a use case. Call Handling and Traffic Recording can be described with use-case modularity.
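Rendered with today’s tools, the separation the quotation calls for falls out naturally. The following is a minimal sketch in annotation-style AspectJ (class and method names are hypothetical): Traffic Recording steps its counter when a call starts and steps it down at termination, while Call Handling remains oblivious to it:

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// The base use case, written with no knowledge of Traffic Recording.
class CallHandling {
    void offHook()   { /* connect digit receiver, send dial tone, ... */ }
    void terminate() { /* release the connection */ }
}

// The Traffic Recording use case, kept entirely apart from Call Handling.
@Aspect
class TrafficRecording {
    static final AtomicInteger ongoingCalls = new AtomicInteger();

    @Before("execution(void CallHandling.offHook())")
    public void stepCounter() { ongoingCalls.incrementAndGet(); }

    @After("execution(void CallHandling.terminate())")
    public void stepDownCounter() { ongoingCalls.decrementAndGet(); }

    // A second flow (not shown) would sample ongoingCalls every 10 ms
    // and average the snapshots over a 15-minute period.
}
```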
From the same paper:
Before the execution of a statement in a use-case description—which has been compiled into target code—we assume that the micro program (simultaneously) checks whether another use case in an outer layer has a reference to the current instruction address. If so, the execution of the current use-case description is interrupted and the sequence that is inserted by the referring use-case description is executed.
The micro-programmable implementation of this concept resulted in a patent application (it was not approved, for reasons I will explain in a moment). Since a large class of extensions could be safely introduced without intruding on the base, regression testing would, with proper tooling, not be needed for this class. This was expected to result in huge savings in test effort and expense.
To achieve this mechanism, I introduced a few simple constructs:
- An extension point to allow us “to unambiguously refer to another use-case description” by using before or after declarations in that description. The term description means a diagram or piece of code.
- Two statements, insert at <extension point> and continue at <extension point>, let us extend a diagram or piece of code without explicitly stating this in the base.
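As a rough modern analogue (this is not the 1979 syntax), insert at corresponds to advice that runs at a declared point and falls through, while continue at corresponds to diverting the flow to another point in the base. A sketch in annotation-style AspectJ, with all names hypothetical:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

class Subscriber {
    void dialTone() { /* send dial tone */ }
    void busyTone() { /* send busy tone */ }
}

@Aspect
class CongestionControl {
    // Names a point in the base description: the "extension point".
    @Pointcut("execution(void Subscriber.dialTone()) && this(s)")
    void beforeDialTone(Subscriber s) {}

    // "insert at": statements run at the extension point, then fall through.
    // "continue at": the flow is diverted to another point in the base.
    @Around("beforeDialTone(s)")
    public void insertOrContinueAt(ProceedingJoinPoint pjp, Subscriber s)
            throws Throwable {
        if (congested()) {
            s.busyTone();   // continue at a different point in the base
        } else {
            pjp.proceed();  // return control to the point of invocation
        }
    }
    private boolean congested() { return false; }
}
```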
In “Use Case Modularity,” I wrote: “The technology used to accomplish this is surprisingly simple in principle: Editing in runtime.” I recognized that the proposal was just the beginning of a new technique that, once adopted, would evolve on its own.
A few years later, in 1986 (“Language Support for Changeable Large Real Time Systems,” Proceedings of OOPSLA ’86, September 1986), I generalized the notion of extensions and introduced the neologism existion for an existing set of objects (a base). Below is a direct quotation from this presentation (with some insignificant modifications—the term probe is replaced with the term extension point).
There are only two kinds of relations between an existion and an extension:
The first case means that the extension can use existing actions on an object instance, provided these actions do not change the state of the object instance. The second case means that while an existion is executed atomically, the extension may ‘intervene’ at specified points. When the extension is executed, the control will be returned to the existion, which now continues its atomic action. More than one extension may intervene in the execution of an existion. An extension point specifies where the intervention is required in the execution of an existion. We must provide such constructs so that the extension can be described without changing the existion. The idea is to provide an extension … with a list of extension points [see “Functional Extensions”]. An extension point specifies an insertion point ... During interpretation of a transition path, … an object instance in the existion allows the desired statements of the extension … to intervene. An extension may itself be treated as an existion and be intervened by another extension. Since functional extensions do not change the behavior of existing services, these changes can be introduced in a single step. Linguistically, an extension can at first glance be viewed as a new class inheriting its existion. [The existion is oblivious to the extension.] … However, class inheritance is not the phenomenon desired here. Instead, an extension must exist together with its existion; it always requires its existion to be installed and it will only be executed when its existion is executed.
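The closing point, that an extension may itself be treated as an existion, also has a direct counterpart today: advice can apply to an aspect’s own methods just as it applies to the base. A minimal sketch, with hypothetical names:

```java
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

class Existion {
    void transition() { /* atomic action of the base */ }
}

// An extension intervening in the existion at a specified point;
// control returns to the existion afterward.
@Aspect
class Extension {
    @Before("execution(void Existion.transition())")
    public void intervene() { audit(); }

    void audit() { /* the extension's own behavior */ }
}

// The extension treated in turn as an existion, intervened by another.
@Aspect
class FurtherExtension {
    @Before("execution(void Extension.audit())")
    public void interveneInExtension() { /* ... */ }
}
```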
The Grand Vision
[Figure: Adding Use-Case Modules on Top of the Component Modules. When patching is done right, use cases are natural ways of understanding complex systems.]
Unfortunately, this concept was too similar to patching techniques—and I always had to apologize for this similarity. The aforementioned patent application was not approved because there was already a patent for patching, and my proposal would have infringed on it.
In the OOSE approach (Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, 1992), we continued to support extensions in requirements and analysis, and we showed how they could be implemented in traditional object-oriented programming languages. In my book Software Reuse: Architecture, Process and Organization for Business Success (Addison-Wesley, 1997), I elaborated on variation points as a generalization of extension points. Many of these ideas have been carried over to the Reusable Asset Specification.
The first serious attempt to implement extensions came with the development of a new generation of switches at Ericsson in the early ’90s. Extensions were taken into a new development environment called Delos, which supported extensions all the way down to code.
Waiting for AOP
Was this a groundbreaking new idea? Obviously not; it relied on patching techniques that had been known for decades. What was new, though, was the realization that “patching done right” is the most natural way to understand complex systems. However, to get where we wanted, we’d have to wait for “the missing link”—the availability of aspect-oriented programming. It was 25 years before Gregor Kiczales, Karl Lieberherr, Harold Ossher, Bill Harrison, Peri Tarr and many more dedicated researchers gave us that missing link. Next month, I’ll describe how to complement use case–driven development with aspect-oriented programming (AOP), and how to get full lifecycle support for aspect orientation.
Ivar Jacobson is the creator of the Objectory method (which evolved into the RUP) and, with Grady Booch and James Rumbaugh, a founder of UML. While at Ericsson in Sweden, his large-scale reuse efforts culminated in a component-based approach founded on a UML-like language. His books include Object-Oriented Software Engineering: A Use Case Driven Approach (with Magnus Christerson, Patrik Jonsson and Gunnar Overgaard, Addison-Wesley, 1992), The Object Advantage: Business Process Reengineering with Object Technology (with Maria Ericsson and Agneta Jacobson, Addison-Wesley, 1995) and Software Reuse: Architecture, Process and Organization for Business Success (with Martin Griss and Patrik Jonsson, Addison-Wesley, 1997). He is best known for his invention of the use case and use case–driven development.