Going Functional
Throughout CScout's development, after some initial struggling to make my implementation of the macro expansion specification behave correctly, my implementation of the C preprocessor remained stable. Over time, I have accumulated about 50 test programs covering both the preprocessor's basic functionality and obscure problems I have encountered and fixed over the years. These test cases run as part of the regression tests I perform before every release.
In the process of applying CScout to the Linux kernel source code, I encountered a bug that I simply couldn't fix within the framework of the code I had written. A simplified version of the Linux code was:
#define A B
#define B C
#define X(val) Y(val)
#define C(a) D((a))
X((A(1)) | (A(2)));
This should expand into:
Y((D((1))) | (D((2))));
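The sequence is treacherous because A and B are object-like macros whose expansion ends in the name C, which is also defined as a function-like macro; only after A has fully expanded does the preprocessor see C followed by an opening bracket and treat it as an invocation of C(a). Roughly, the expansion proceeds as follows:

X((A(1)) | (A(2)))       A expands to B, and B to C, inside X's argument
X((C(1)) | (C(2)))       C, now followed by an open bracket, expands as C(a)
X((D((1))) | (D((2))))   the fully expanded argument is substituted into X's body
Y((D((1))) | (D((2))));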
However, every change I made to my code to fix this problem resulted in the failure of some other regression test.
At that point I decided to start from scratch, looking for a model implementation of the macro expansion algorithm that would help me implement mine correctly. I thus found that, back in the 1980s, when the members of the C Standardization Committee wrestled with defining macro expansion, they decided to come up with an algorithm that behaved appropriately and then translated that algorithm into standardese English. Dave Prosser, a member of the committee, then drafted a pseudocode implementation in a functional style. The idea behind the algorithm was to expand as many macros as possible, as long as there's no danger of falling into an infinite recursion trap. I was unable to find the complete algorithm on the Web, but Dave kindly e-mailed me the original document source code (written using the UNIX troff macros), which I've now made available as a PDF document; see www.spinellis.gr/blog/20060626/.
The algorithm is written in a pure functional style, using lists (formed by concatenating their elements with the operator "•") and recursion instead of iteration. It doesn't use any destructive assignments to variables; all variables are simply shorthand for other values. This style makes it very easy to reason about the code (a convenience for the Standardization Committee), but, as I painfully discovered later on, can lead you to a performance tar pit.
Prosser's algorithm works on preprocessing tokens derived from processing a source-code file and its macros. It uses the notion of a hide set associated with each token to decide whether to expand the token or not. Initially, each token starts with an empty hide set, but during macro expansion the tokens accrue in their hide sets the macros that were used during expansion. Here is the algorithm excerpt for expanding function-like macros:
expand(TS)
{
    ...
    if TS is T^HS • ( • TS' and T is a "()'d macro" then
        check TS' is actuals • )^HS' • TS'' and
            actuals are "correct for T"
        return expand(subst(ts(T), fp(T), actuals,
            (HS ∩ HS') ∪ {T}, {}) • TS'');
The above sequence means that if the token sequence TS to be expanded consists of a token T carrying a hide set HS (written T^HS) followed by an open bracket and a token sequence TS', then the algorithm checks that TS' contains the macro's actual parameters followed by a closing bracket carrying a hide set HS' and a trailing sequence TS''. At that point, expand calls an auxiliary function subst to substitute the macro's formal arguments with the actual arguments, and then recursively expands the result. The subst function also applies a new hide set to the result, formed by the intersection of the hide sets HS and HS' with the addition of the macro T itself. Intersecting the two hide sets lets the algorithm perform the maximum number of replacements without falling into an infinite loop.
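To make the hide-set bookkeeping concrete, here is a minimal C++ sketch of a token that carries its hide set; the class and method names are illustrative rather than CScout's actual declarations, although the get_hideset accessor matches the code excerpt that appears later. For simplicity this sketch stores macro names as strings.

#include <set>
#include <string>

typedef std::set<std::string> HideSet;

// A preprocessing token: its text, plus the names of the
// macros whose expansion produced it.
class Token {
    std::string val;
    HideSet hs;
public:
    Token(const std::string &v) : val(v) {}
    const std::string &get_val() const { return val; }
    const HideSet &get_hideset() const { return hs; }
    void hideset_insert(const std::string &name) { hs.insert(name); }
    // A macro name found in a token's own hide set is never
    // expanded again; this is what prevents infinite recursion.
    bool hidden(const std::string &macro_name) const {
        return hs.find(macro_name) != hs.end();
    }
};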
The subst function takes as its arguments an input sequence consisting of a macro's body, the macro's formal and actual parameters, the hide set to apply to the result, and the output sequence generated (empty at first). It replaces the formal parameters found in the macro's body with the actual parameters supplied by its caller, while also handling the complications of the stringizing and token concatenation operators (# and ##). Without the code handling the two operators, its body is as follows:
subst(IS, FP, AP, HS, OS)
{
    if IS is {} then
        return hsadd(HS, OS);
    else if IS is T • IS' and T is FP[i] then
        return subst(IS', FP, AP, HS, OS • expand(select(i, AP)));
    note IS must be T^HS' • IS'
    return subst(IS', FP, AP, HS, OS • T^HS');
}
This means that when the input sequence IS is empty, the result is the application of the hide set HS to the output sequence OS that has been built up. Otherwise, the head token T is taken from the input sequence, and subst is called recursively with the input sequence's tail IS' and a new output sequence. If T is a formal parameter FP[i], the expansion of the corresponding actual parameter is appended to the output sequence; otherwise, T itself is appended.
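As an indication of how directly this pseudocode maps onto C++, here is a sketch of the subst recursion, reusing the Token and HideSet sketched earlier; the helpers expand, select, formal_index, and hsadd mirror the pseudocode's vocabulary and are my assumptions, not CScout's actual code.

#include <list>
#include <vector>

typedef std::list<Token> TokenSequence;

// expand is the mutually recursive function excerpted earlier;
// its full definition is beyond this sketch.
TokenSequence expand(TokenSequence ts);

// select(i, AP) picks the i-th actual parameter.
const TokenSequence &select(int i, const std::vector<TokenSequence> &ap)
{
    return ap[i];
}

// Return the index i for which T is FP[i], or -1 if T is no formal.
int formal_index(const Token &t, const std::vector<Token> &fp)
{
    for (size_t i = 0; i < fp.size(); i++)
        if (fp[i].get_val() == t.get_val())
            return (int)i;
    return -1;
}

// hsadd applies the hide set HS to every token of OS.
TokenSequence hsadd(const HideSet &hs, TokenSequence os)
{
    for (TokenSequence::iterator t = os.begin(); t != os.end(); ++t)
        for (HideSet::const_iterator h = hs.begin(); h != hs.end(); ++h)
            t->hideset_insert(*h);
    return os;
}

TokenSequence subst(TokenSequence is, const std::vector<Token> &fp,
    const std::vector<TokenSequence> &ap, const HideSet &hs,
    TokenSequence os)
{
    if (is.empty())                  // IS is {}
        return hsadd(hs, os);
    Token t = is.front();            // IS is T • IS'
    is.pop_front();
    int i = formal_index(t, fp);
    if (i != -1) {                   // T is FP[i]
        TokenSequence e = expand(select(i, ap));
        os.splice(os.end(), e);      // OS • expand(select(i, AP))
    } else
        os.push_back(t);             // OS • T
    return subst(is, fp, ap, hs, os);
}

Passing the sequences by value at each recursive call faithfully mirrors the algorithm's copy-everything functional style; it is also precisely where the performance tar pit mentioned earlier lies in wait.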
Having been bitten by many subtle problems in my first implementation of macro expansion, I decided to implement Prosser's algorithm in exactly the form it was written. I could then run my regression tests on it and be sure that I was starting from a solid base. I decided to tackle efficiency issues later. For representing the data structures in Prosser's algorithm, I used the closest equivalents available in the C++ Standard Library: list for the token sequences, and set for the hide sets. I could thus use Standard Library methods like front() and push_back() to manipulate the lists, and algorithms like set_intersection to process the hide sets. For example, this is how I form the hide set for calling the subst function:
HideSet hs;
set_intersection(
    head.get_hideset().begin(), head.get_hideset().end(),
    close.get_hideset().begin(), close.get_hideset().end(),
    inserter(hs, hs.begin()));
hs.insert(macro.get_name_token());
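Because a set provides no push_back, set_intersection writes its result through an inserter adapter; the final insert call then adds the macro's own name token, completing the (HS ∩ HS') ∪ {T} computation of the pseudocode.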
After I understood the algorithm, implementing it proved to be a straightforward task. In fact, the first implementation of the new macro-processing class was 112 lines (17 percent) shorter than its predecessor. I always find lower line counts a good sign: less code to comprehend, fewer chances for bugs. Indeed, two days, 10 source file revisions, and 74 additional lines later, my implementation passed all my regression tests, including the troublesome Linux code. By comparison, the previous implementation had taken three years and 32 file revisions to reach the same state.