Logic and Data Together
Another principle, a corollary of local consequences, is keeping logic and data together. Put logic and the data it operates on near each other: in the same method if possible, or the same object, or at least the same package. When a change comes, the logic and data are likely to have to change at the same time. If they are together, then the consequences of changing them will remain local.
It's not always obvious at first where logic or data should go to satisfy this principle. I may be writing code in A and realize I need data from B. It's only after I have the code working that I notice that it is too far from the data. Then I need to choose what to do: move the code to the data, move the data to the code, put the code and data together in a helper object, or realize I can't at the moment think of how to bring them together in a way that communicates effectively.
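For a concrete sketch of moving code to its data, suppose total-price logic started out in a separate reporting class, far from the line items it reads. The Order and LineItem names here are hypothetical, invented for illustration:

    import java.util.ArrayList;
    import java.util.List;

    class LineItem {
        private final int price, quantity;
        LineItem(int price, int quantity) {
            this.price = price;
            this.quantity = quantity;
        }
        int getPrice() { return price; }
        int getQuantity() { return quantity; }
    }

    class Order {
        private final List<LineItem> items = new ArrayList<LineItem>();
        void add(LineItem item) { items.add(item); }

        // total() was moved here from a separate reporting class. Now a
        // change to how line items are stored has only local consequences.
        int total() {
            int sum = 0;
            for (LineItem item : items)
                sum += item.getPrice() * item.getQuantity();
            return sum;
        }
    }

Moving the code to the data was the right choice in this sketch; in other situations the data moves instead, or the two meet in a helper object.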
Symmetry
Another principle I use all the time is symmetry. Symmetries abound in programs. An add() method is accompanied by a remove() method. A group of methods all take the same parameters. All the fields in an object have the same lifetime. Identifying and clearly expressing symmetry makes code easier to read. Once readers understand one half of the symmetry, they can quickly understand the other half.
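As a small sketch of such a symmetric pair, consider a hypothetical Registry class (the name and methods are mine, purely for illustration):

    import java.util.HashSet;
    import java.util.Set;

    class Registry {
        private final Set<String> names = new HashSet<String>();

        // add() and remove() mirror each other: same parameter, opposite
        // effect. A reader who understands one immediately understands
        // the other.
        void add(String name) { names.add(name); }
        void remove(String name) { names.remove(name); }
    }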
Symmetry is often discussed in spatial terms: bilateral, rotational, and so on. Symmetry in programs is seldom graphical; it is conceptual. In code, symmetry means that the same idea is expressed the same way everywhere it appears.
Here's an example of code that lacks symmetry:
void process() {
    input();
    count++;
    output();
}
The second statement is more concrete than the two messages surrounding it. I would rewrite this on the basis of symmetry, resulting in:
void process() {
    input();
    incrementCount();
    output();
}
This method still violates symmetry, though. The input() and output() operations are named after intentions; incrementCount() is named after an implementation. Looking for symmetries, I think about why I am incrementing the count, perhaps resulting in:
void process() {
    input();
    tally();
    output();
}
Often, finding and expressing symmetry is a preliminary step to removing duplication. If a similar thought exists in several places in the code, making them symmetrical to each other is a good first step towards unifying them.
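A hypothetical before-and-after sketch (the printing methods are mine) of how symmetry exposes duplication:

    // Before: the same thought, expressed two different ways.
    void printHeader() {
        System.out.println("== Header ==");
    }
    void printFooter() {
        String label = "Footer";
        System.out.println("== " + label + " ==");
    }

    // After: written symmetrically, the two methods make the shared
    // idea obvious, and it can be unified in one place.
    void printHeader() { printBanner("Header"); }
    void printFooter() { printBanner("Footer"); }
    void printBanner(String title) {
        System.out.println("== " + title + " ==");
    }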
Declarative Expression
Another principle behind the implementation patterns is to express as much of my intention as possible declaratively. Imperative programming is powerful and flexible, but reading it requires following the thread of execution: I must build a model in my head of the state of the program and the flow of control and data. For the parts of a program that are more like simple facts, with no sequence or conditionals, it is easier to read code that is simply declarative.
For example, in older versions of JUnit, classes could have a static suite() method that returned a set of tests to run.
public static junit.framework.Test suite() {
    Test result= new TestSuite();
    ...complicated stuff...
    return result;
}
Now comes the simple, common question: what tests are going to be run? In most cases, the suite() method just aggregates the tests in a bunch of classes. However, because the suite() method is fully general, I have to go read and understand the method if I want to be sure. JUnit 4, on the other hand, uses the principle of declarative expression to solve the same problem. Instead of a method returning a suite of tests, there is a special test runner that runs the tests in a set of classes (the common case):
@RunWith(Suite.class)
@TestClasses({
    SimpleTest.class,
    ComplicatedTest.class
})
class AllTests {
}
If I know that tests are being aggregated this way, I only need to look at the TestClasses annotation to see which tests will be run. Because the expression of the suite is declarative, I don't need to suspect any tricky exceptions. This solution gives up the power and generality of the original suite() method, but the declarative style makes the code easier to read. (The RunWith annotation provides even more flexibility for running tests than the suite() method did, but that's a story for a different book.)
Rate of Change
A final principle is to put logic or data that changes at the same rate together and to separate logic or data that changes at different rates. These rates of change are a form of temporal symmetry. Sometimes the rate-of-change principle applies to changes a programmer makes. For example, if I am writing tax software, I will separate the code that makes general tax calculations from the code that is particular to a given year. The two kinds of code change at different rates. When I make changes the following year, I would like to be sure that the code for the preceding years still works. Separating the two gives me more confidence in the local consequences of my changes.
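Here is one hypothetical shape such a separation could take; the TaxRules interface and the bracket figures are invented for illustration, not real tax law:

    // General calculation logic: changes rarely, if ever.
    class TaxCalculator {
        private final TaxRules rules;
        TaxCalculator(TaxRules rules) { this.rules = rules; }
        long taxOn(long income) {
            return Math.round(income * rules.rateFor(income));
        }
    }

    // Year-specific figures: change every year. Next year gets a new
    // implementation; the code for preceding years is left untouched.
    interface TaxRules {
        double rateFor(long income);
    }

    class TaxRules2006 implements TaxRules {
        public double rateFor(long income) {
            return income > 100000 ? 0.35 : 0.25; // invented brackets
        }
    }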
The rate-of-change principle also applies to data. All the fields in a single object should change at roughly the same rate. For example, fields that are modified only during the activation of a single method should be local variables instead. Two fields that change together but out of sync with their neighboring fields probably belong in a helper object. If a financial instrument's value and currency can change together, then those two fields would probably be better expressed as a helper Money object:
setAmount(int value, String currency) {
    this.value= value;
    this.currency= currency;
}
becomes:
setAmount(int value, String currency) {
    this.value= new Money(value, currency);
}
and then later:
setAmount(Money value) {
    this.value= value;
}
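A minimal sketch of the Money helper these versions assume; this particular class is my illustration, not a library class:

    class Money {
        // value and currency change together, so they travel together.
        private final int value;
        private final String currency;

        Money(int value, String currency) {
            this.value = value;
            this.currency = currency;
        }
        int getValue() { return value; }
        String getCurrency() { return currency; }
    }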
The rate-of-change principle is an application of symmetry, but a temporal symmetry. In the example above, the two original fields, value and currency, are symmetrical: they change at the same time. However, they are not symmetrical with the other fields in the object. Expressing the symmetry by putting them in their own object communicates their relationship to readers and is likely to set up further opportunities to reduce duplication and to localize consequences later.