Gary is a software-design engineer with Credence Systems. He can be contacted at [email protected].
Not long ago I joined a team that had largely completed a new integrated circuit tester that still lacked something generally viewed as essential in the automatic test equipment (ATE) world -- a good pattern language. This situation presented a unique opportunity to design and develop a language specification and compiler from scratch. Through the good graces of management, I had the luxury of devoting the time needed to create an engineering spec that clearly stated the functional requirements and, importantly, defined the exact syntax of the language in EBNF (short for "Extended Backus-Naur Form," a precise yet understandable way to describe a language; see Advanced Compiler Design and Implementation, by Steve Muchnick, Morgan Kaufmann, 1997). This took some time, but I would advise anyone trying to meet a schedule driven by marketing that it is the only way to survive.
The pattern compiler takes as input ASCII-text files written in the pattern language and produces as output binary files that conform to a spec for our machine. It compiles both linear patterns for logic devices (see Listing One) and algorithmic patterns for memory devices (Listing Two). The language spec is 60 pages long, much of it EBNF. The compiler consists of a 5000-line ANTLR (Another Tool for Language Recognition) grammar and about 10,000 lines of Java code (plus the Java generated from the grammar). The compiler's most interesting attributes, which I will detail in this article, are the time-to-market advantages imparted by the use of ANTLR (for more information, see http://www.antlr.org/), an excellent compiler construction tool, and by the use of Java.
Why ANTLR?
Many DDJ readers are familiar with the LEX and YACC compiler construction tools. I sometimes refer to ANTLR as "LEX and YACC for the third millennium." Written entirely in Java, it generates either Java or C++, combines the lexical analyzer and parser specifications in one file, optionally generates abstract syntax trees (ASTs) and tree-walking classes, and allows very fine-grained control of the parse through predicates. In addition to being a great compiler construction tool, ANTLR is distributed in source-code form, and it's free. Moreover, no legal rights are reserved and the documentation clearly states that an "individual or company may do whatever they wish with source distributed with ANTLR or the code generated by ANTLR, including the incorporation of ANTLR, or its output, into commercial software." That seems to eliminate the licensing and legal obstacles. And, if you require source because your native compiler doesn't compile Java bytecode, it also removes a big technical obstacle. In this article, I'll show you how ANTLR works by leading you through some examples from my compiler. Since there's more to it than I can convey here, I recommend you download and read the online documentation from the http://www.antlr.org/ web site. If you're interested in trying it out, there are some examples included in the source distribution. ANTLR works equally well on all platforms that support a Java VM, and for the adventurous, there's the C++ code generator.
My experience with compiler construction tools has taught me that the code they generate can be a lot slower -- especially for large input files -- than what you can achieve by handcrafting a compiler front end. However, a brand-new, complex and dynamic language like ours mandates the use of one. Fast time-to-market trumps performance, at least initially, and ANTLR helps achieve it by letting you express the syntax of your language in an EBNF-like form (referred to as a "grammar") and using it to generate lexical analyzer and parser classes. If you've defined your language in EBNF and have a willingness to master the ANTLR syntax, you can create and maintain a compiler very efficiently. You may also find some reuse opportunities.
Elements of an ANTLR Grammar
One of the first things you declare in an ANTLR grammar is the definition of a parser class. Thereafter, you're likely to define a start rule, possibly like Example 1. It creates a method in the parser class, compilationUnit, which expects zero or more instances of importDirective or languageElement -- rules defined elsewhere in the grammar. Those rules specify their own tokens and semantic actions, either directly or by invoking other rules. Semantic actions in ANTLR are written as Java code enclosed in curly brackets {} and are triggered by subrules or whole rules.
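Example 1 is not reproduced here, but a start rule of this kind might look something like the following sketch (the parser class name is just a placeholder, not the article's actual grammar):

class KPLParser extends Parser;

// start rule: a compilation unit is any number of imports or language elements
compilationUnit
    :   ( importDirective | languageElement )*
    ;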
You must create a class that defines a main() and start the parsing yourself; the class might be similar to Example 2. Sometimes it's useful to instantiate another parser object and start it at a different rule. For instance, the pattern compiler by design recognizes two kinds of imported files, distinguishing them by their filename extensions. One requires the usual start rule, but the other uses a different start rule that is optimized to recognize only two language elements. The rule that makes this choice, importDirective, is shown (minus exception handling) in Example 3. It always recognizes the keyword import and a fileName, a rule that handles path complexities and returns a String. It then instantiates lexer and parser objects that handle the imported file, starting at the appropriate rule.
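Example 2 boils down to something like the following sketch for an ANTLR-generated lexer/parser pair (Kpc, KPLLexer, and KPLParser are placeholder names, not the article's actual classes):

import java.io.*;

public class Kpc {
    public static void main(String[] args) {
        try {
            // the lexer reads the pattern source; the parser consumes its token stream
            DataInputStream in =
                new DataInputStream(new FileInputStream(args[0]));
            KPLLexer lexer = new KPLLexer(in);
            KPLParser parser = new KPLParser(lexer);
            parser.compilationUnit();        // kick off the parse at the start rule
        } catch (Exception e) {
            System.err.println("compile failed: " + e);
        }
    }
}

The importDirective rule of Example 3 does much the same thing from inside a semantic action: it constructs a second lexer and parser over the imported file and starts them at whichever rule fits the file's extension.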
CycleNames, a simple element of the pattern language in Listing Three, is a good example of how the pattern-language ANTLR grammar is designed. Its rule (Example 4) recognizes the keyword CYCLE_NAMES followed by the tokens ASSIGN and LCURLY, which are defined in the lexer. This part of the rule is associated with a semantic action (Java inside curly brackets) that instantiates a CycleNames object and passes it by reference to another rule (actually a method in the parser class), which recognizes and stores its attributes. If the recognition phase of the rule completes without exception, the semantic action at the end of the rule registers the CycleNames object with PBIBuilder.
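Without Example 4 in hand, the shape of the rule is roughly this (a sketch; the helper rule and the exact registration call are placeholders):

cycleNames
    :   "CYCLE_NAMES" ASSIGN LCURLY
        { CycleNames cn = new CycleNames(); }     // instantiate the language element
        cycleNameList[cn]                         // placeholder rule: stores the names in cn
        RCURLY SEMI
        { PBIBuilder.registerCycleNames(cn); }    // placeholder call: register with the mediator
    ;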
Compiler Design
At this point, a few words about the design of the pattern compiler are in order. It's a simple compiler, consisting of three phases: lexical analysis/parsing, semantic analysis, and code generation. It generally adheres to the tenet that strict partitioning between phases is good for maintainability and portability (see Modern Compiler Implementation in Java: Basic Techniques, by Andrew W. Appel, Cambridge University Press, 1997). Occasionally, though, I cross the line to avoid computationally expensive iterations or to do incremental semantic analysis.
Figure 1 shows the compiler classes and their relationships using the Unified Modeling Language (UML). Each language element found in a pattern file is modeled as a class, and three classes control the three phases of the compiler. PBIBuilder, modeled after the Mediator pattern (see Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma et al., Addison-Wesley, 1995), is used by each language element to register itself during the parsing phase. If no exceptions are thrown during the parse, a SemanticAnalyzer object visits each language-element object, invoking its analyze() method (a class method). Code generation follows a similar course: if no exceptions are thrown during semantic analysis, a PBIWriter object visits each registered language-element object, invoking its generateCode() method. The Visitor pattern implemented in SemanticAnalyzer and PBIWriter is facilitated by the fact that these classes extend PBIBuilder and share its registries.
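A schematic of that flow, with a simplified common interface and placeholder method names standing in for the real class hierarchy, might look like this:

import java.io.DataOutputStream;
import java.util.Vector;

interface LanguageElement {                       // stand-in for the per-element classes
    void analyze() throws Exception;              // semantic analysis
    void generateCode(DataOutputStream out) throws Exception;   // code generation
}

class PBIBuilder {
    protected static Vector _registry = new Vector();

    static void register(LanguageElement e) {     // called from parser semantic actions
        _registry.addElement(e);
    }
}

class SemanticAnalyzer extends PBIBuilder {       // shares the registry it inherits
    void run() throws Exception {
        for (int i = 0; i < _registry.size(); i++)
            ((LanguageElement) _registry.elementAt(i)).analyze();
    }
}

class PBIWriter extends PBIBuilder {
    void run(DataOutputStream out) throws Exception {
        for (int i = 0; i < _registry.size(); i++)
            ((LanguageElement) _registry.elementAt(i)).generateCode(out);
    }
}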
Why Java?
I chose Java because ANTLR generates Java. Soon after I began using it, I discovered that Java is fun. It's simpler than C++, safer than C, and there seems to be general agreement that it improves programmer productivity. I tell people that I achieve a 25 percent productivity edge the minute I choose Java. No pointers. Objects and arrays are always passed by reference; primitive types are always passed by value. Memory management is largely transparent to the programmer. Array bounds checking is automatic. The C++ split between a class's interface (the header file) and its implementation is gone. Good-bye operator overloading and multiple inheritance. The core Java packages (java.lang, java.util, and java.io, for instance) are powerful, simple to use, and not bolted on as a separate standard library -- they're part of the language by design, as are exceptions, which are very useful for creating context-sensitive error messages.
Other features I like include class variables, class methods, and member classes. Java uses the static keyword to declare variables and methods common to all instances of the containing class. When multiple instances of a class need to share common data or methods -- not an uncommon situation -- this model fits. Each instance still has its own methods and instance variables, but those declared static are shared. I find class variables and member classes useful for modeling complex data that has a shared component and a per-instance component; see Listing Four. The class variable _ctrList is populated with dynamically allocated instances of DynamicSource, a member class. Each instance of DynamicSource is internally associated with an instance of the containing class, SourceSelect, and all of them are visible to the class method SourceSelect.generateCode().
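Stripped to a skeleton, the idiom Listing Four follows looks something like this (illustrative names only):

import java.util.Vector;

public class Container {
    private static Vector _shared = new Vector();   // class variable: one copy for all instances

    private String _name;                           // instance variable: one copy per object

    public Container(String name) {
        _name = name;
        _shared.addElement(new Member());           // each instance contributes to the shared list
    }

    public class Member {                           // member class, bound to its enclosing instance
        String owner() { return _name; }            // can read the outer instance's fields
    }

    public static int registered() {                // class method: sees only the shared data
        return _shared.size();
    }
}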
Native Compiler
Java is normally compiled into a portable bytecode format that is interpreted at run time by a virtual machine (VM). My marketing director viewed the virtual machine as a liability: portability was not a priority, and an external software dependency (the VM) was not acceptable. I solved this problem with Symantec's Visual Café for Java Version 3.0 (http://cafe.symantec.com/), an IDE whose Java compiler generates native Windows executables. In addition to removing the VM dependency, it largely resolves the performance issues often associated with Java. A native executable that looks and runs like any other is tough to argue with, even if you're a hard-core Microsoft Visual C++ devotee. Of course, the fact that it runs as fast as other compiled code is no license for poor programming practices. Memory conservation, careful use of objects, and favoring primitive types still matter.
Systems Programming in Java
You may be wondering how good Java is at performing the functions of an embedded-system compiler -- things like binary I/O, bitwise logical operations, and bit shifting. Example 5 shows how I use java.io.DataOutputStream to do binary output. Since, according to the JDK 1.1.5 online documentation, DataOutputStream "lets an application write primitive Java data types to an output stream in a portable way," I'm able to do things like
out.writeInt(LittleEndian.toInt(_addr));
Why LittleEndian.toInt()? It turns out Java's "portable way" is big-endian, and my target requires little-endian format, so I've created a class that transforms integral types. One of its methods is shown in Example 6. As this code suggests, bitwise logical operators in Java are basically identical to their C counterparts. However, since Java has no unsigned integral types, you have to take care to use the unsigned right-shift operator, >>>, to fill the high bits of the shifted value with zeros.
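Example 6 isn't reprinted here, but a byte-swapping method of this kind can be written along these lines (a sketch of the idea rather than the article's exact code):

public final class LittleEndian {
    // Reverse the byte order of a 32-bit int so that DataOutputStream's
    // big-endian writeInt() produces little-endian output.
    public static int toInt(int v) {
        return ((v >>> 24) & 0x000000ff)    // byte 3 -> byte 0 (>>> zero-fills the high bits)
             | ((v >>>  8) & 0x0000ff00)    // byte 2 -> byte 1
             | ((v <<   8) & 0x00ff0000)    // byte 1 -> byte 2
             | ((v <<  24) & 0xff000000);   // byte 0 -> byte 3
    }
}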
Controlling a Parse using Predicates
Predicates are one of the enabling features of ANTLR. Example 7 illustrates how the pattern rule (Listing Five) distinguishes logic test patterns like _pvm_ (Listing One) from memory test patterns like _ckbDiag_ (Listing Two). Their syntax is quite similar up to a point; then it becomes radically different. However, the PG_STATIC blocks that precede them establish an attribute that can be tested with a semantic predicate to determine what comes next. If the condition (PBIBuilder.getPMode() == 1) is true, the parser knows to expect a logic pattern (lvmVector); if not, a memory test pattern (pgInstruction) must follow.
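The relevant alternatives appear near the end of the pattern rule in Listing Five:

(
    {PBIBuilder.getPMode(PG) == 1}? ( lvmVector[pat] )*
    {pat.analyzeLVMFormats();}
|
    ( pgInstruction[pat] )*
)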
Reusing the Compiler
The compiler I've described, and the language it recognizes, are specifically designed to drive a large, custom application-specific integrated circuit (ASIC). Because it has taken about a year to create them, and because another custom ASIC for the next tester is in the works, I've proposed that the language compiler and the next-generation ASIC evolve concurrently. Chip-design verification commonly uses language-driven simulation, and simulation languages, which in my experience tend to be Awk scripts, fall short of what is suitable for use in a product. Why not create a simulation-enabled code generator, modify the ANTLR grammar, and use the compiler to drive the simulation? Doing so:
- Leverages the effort spent creating the language and the compiler.
- Makes it possible to exercise the design with patterns created to test real integrated circuits, not just to verify a design.
- Evolves the next product's hardware and software deliverables concurrently.
Seems reasonable. ANTLR, Java, and a modular, object-oriented design make it relatively straightforward.
Conclusion
When I joined the team, the first milestone was a beta compiler in six months. Some in the group had reason to doubt it would happen, given the complexity of the underlying hardware. It did happen, and in the six months since, many additions have satisfied both marketing and our customers. Management is happy, I believe, and they like the idea of using the compiler for the next-generation product. I credit ANTLR, Java, Visual Café, and a good engineering team for our success.
DDJ
Listing One
//
// pvm.kpl - logic test pattern
//
import multiKTL.h
PG_STATIC {
PG(0);
pmode(Normal);
stops(SyncFail);
stopDelay(onNextPlusConditional);
transferDelay(onNextPlusConditional);
};
PG_STATIC {
PG(1);
pmode(PVM);
stopDelay(afterThree);
};
//
// OUTSS for driving vectors to the device
//
SOURCESELECT _outSS {
PG(0);
DYNAMIC {
A1[PG1LVMD, PG1LVMD];
A2[PG1LVMD, PG1LVMD];
A3[PG1LVMD, PG1LVMD];
A4[PG1LVMD, PG1LVMD];
DB0[PG1LVMD, PG1LVMD];
}
};
//
// STATIC sources to enable SRAM addressing in PVM PG
//
SOURCESELECT _pvm1 {
PG(1);
STATIC {
ALL[LVMFIFO];
}
};
VectorChar {
//
// char f1 f2 data
// ---- ---- ---- ----
Z = G2Z, G2Z (0);
0 = G2D, G2D (0);
1 = G2D, G2D (1);
L = DC, ED (0);
H = DC, ED (1);
X = DC, DC (0);
x = DC, DC (1);
z = G2Z, G2Z (1);
S = STAY, STAY (0);
I = G2D_, G2D_ (1);
i = G2D_, G2D_ (0);
};
PG_VCD {
//
// dpin vector
// name column
// ---- ------
A4 = 3;
A2 = 2;
A1 = 0;
A3 = 1;
DB0 = 4;
};
PG_PATTERN _main_ {
PG(0);
(pvmen1, cycleCounter) stop;
};
PG_PATTERN _pvm_ {
PG(1);
(tset 0) "01 LHX";
(tset 1) "X0 1LH";
(tset 2) "HX 01L";
(tset 0) "LH X01";
(tset 1) "1L HX0";
(tset 1) repeat 10 "XX XXX";
(tset 0) "1X XXX";
(tset 1) "X1 XXX";
(tset 2) "XX 1XX";
(tset 0) "XX X1X";
(tset 1) "XX XX1";
(tset 1) repeat 10 "XX XXX";
(tset 0) "0X XXX";
(tset 1) "X0 XXX";
(tset 2) "XX 0XX";
(tset 0) "XX X0X";
(tset 1) "XX XX0";
(tset 1) repeat 10 "XX XXX";
(tset 0) "IX XXX";
(tset 1) "XI XXX";
(tset 2) "XX IXX";
(tset 0) "XX XIX";
(tset 1) "XX XXI";
(tset 1) repeat 10 "XX XXX";
(tset 0) "SX XXX";
(tset 1) "XS XXX";
(tset 2) "XX SXX";
(tset 0) "XX XSX";
(tset 1) "XX XXS";
(tset 1) repeat 10 "XX XXX";
(tset 0) "zX XXX";
(tset 1) "Xz XXX";
(tset 2) "XX zXX";
(tset 0) "XX XzX";
(tset 1) "XX XXz";
(tset 1) repeat 10 "XX XXX";
};
Listing Two
//
// Checkerboard and diagonal pattern for flash memory device
//
import KTLShared.kpl // KTL shared attributes
//
// enable synchronous fail on PG0
//
PG_STATIC {
PG(0);
interrupts(PGstop);
aborts(PG1);
stops(SyncFail);
stopDelay(onNextPlusConditional 1);
transferDelay(onNextPlusConditional 1);
};
//
// DG set for solid ones
//
DG_SET solid1_ {
DG(1);
OutputSource = ALU;
ALUFunc = 44; // F = ALL HIGHS
Ainput = srcA;
Binput = srcA;
STATIC {
coinA = [ x7,x5,x4,x3,x2,x1,x0 ];
coinB = [ y6,y5,y4,y3,y2,y1,y0 ];
}
};
//
// DG set for checkerboard
//
DG_SET checkerboard_ {
DG(1);
OutputSource = ALU;
ALUFunc = 38; // F = (A ^ B)
Ainput = srcA;
Binput = srcB;
STATIC {
srcA = [ lo,lo,lo,lo,x0,x0,x0,x0,x0,x0,x0,lo,lo,lo,lo,lo ];
srcB = [ y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0,y0 ];
}
};
//
// DG set for diagonal zeroes
//
DG_SET diagZero_ {
DG(2);
OutputSource = ALU;
ALUFunc = 3; // F = 0
Ainput = coin;
Binput = coin;
STATIC {
coinA = [ x6,x5,x4,x3,x2,x1,x0 ];
coinB = [ y6,y5,y4,y3,y2,y1,y0 ];
}
};
//
// source select set for DG0
//
SOURCESELECT sources_ {
PG(0);
STATIC {
A0[ADDR0,ADDR0];
A1[ADDR0,ADDR1];
A2[ADDR0,ADDR1];
A3[ADDR0,ADDR1];
A4[ADDR0,ADDR1];
A5[ADDR0,ADDR1];
A6[ADDR0,ADDR1];
A7[ADDR0,ADDR1];
A8[ADDR0,ADDR1];
A9[ADDR0,ADDR1];
A10[ADDR0,ADDR1];
A11[ADDR0,ADDR1];
A12[ADDR0,ADDR1];
A13[ADDR0,ADDR1];
A14[ADDR0,ADDR1];
A15[ADDR0,ADDR1];
A16[ADDR0,ADDR1];
A17[ADDR0,ADDR1];
A18[ADDR0,ADDR1];
DB0[DATA0,DATA1];
DB1[DATA0,DATA1];
DB2[DATA0,DATA1];
DB3[DATA0,DATA1];
DB4[DATA0,DATA1];
DB5[DATA0,DATA1];
DB6[DATA0,DATA1];
DB7[DATA0,DATA1];
DB8[DATA0,DATA1];
DB9[DATA0,DATA1];
DB10[DATA0,DATA1];
DB11[DATA0,DATA1];
DB12[DATA0,DATA1];
DB13[DATA0,DATA1];
DB14[DATA0,DATA1];
DB15[DATA0,DATA1];
CE_0[LVMFIFO,LVMFIFO];
OE_[LVMFIFO,LVMFIFO];
WE_[LVMFIFO,LVMFIFO];
}
};
//
// Pattern
//
// (1) write background ones
// (2) write diagonal zeros
// (3) read whole device
// (4) write checkerboard
// (5) read checkerboard
//
// - drive address and data with cga()
// - device has seven y-address pins, 12 x-address pins and
// - sixteen data pins
//
PG_PATTERN _ckbDiag_ {
//
// initialization block
//
INIT: ( cga(x,y)=0,
// mask applied when incrementing
cga_mask(x)=0xf000,
cga_mask(y)=0xff80,
// compare applied to test conditional
cga_cmp(x)=0x0fff,
cga_cmp(y)=0x007f
);
default: ( driveDG=cga,
driveAG=cga,
source=sources_
);
//
// write background ones
//
do {
write_data1(data=0x9, ++cga(x), link cga(y,x));
} while (cga(x,y) != cga_cmp(x,y));
//
// write diagonal zeros
//
do {
write_diagZero(DG_SET(2)=diagZero_ , ++cga(x,y));
} while (cga(x,y) != cga_cmp(x,y));
//
// read whole device
//
do {
read_all(DG_SET(2)=diagZero_ , ++cga(x), link cga(y,x));
} while (cga(x,y) != cga_cmp(x,y));
//
// write checkerboard
//
do {
write_chkBrd(DG_SET=checkerboard_, ++cga(x),
link cga(y,x) );
} while (cga(x,y) != cga_cmp(x,y));
//
// read checkerboard
//
do {
read_chkBrd(DG_SET=checkerboard_, ++cga(x), link cga(y,x));
} while (cga(x,y) != cga_cmp(x,y));
//
// stop, use cycle sets defined in "stop" from KTL
//
noop() stop;
};
Listing Three
//
// multiKTL.h - shared attributes for a multi-site device test
//
// cycle names
CYCLE_NAMES = {
readD,
writeD,
noop
};
// socket table
SOCKET single = {
//Device Pin Pin Definitions Assignment
// pin names normal optional nor. opt. channel
//------ ------ --------- --------- ---- ---- -------
DP1 = NC_1, PWR_PIN, DPS, VCC1;
DP2 = A1, ADDR_PIN, ADDR_PIN, X1, X1, 0,10,20,30,40;
DP3 = A2, ADDR_PIN, ADDR_PIN, X2, X2, 1,11,21,31,41;
DP4 = A3, ADDR_PIN, ADDR_PIN, X0, X0, 2,12,22,32,42;
DP5 = A4, ADDR_PIN, ADDR_PIN, Y1, Y1, 3,13,23,33,43;
DP6 = GND_1, GND_PIN;
DP7 = CE_, INPUT_PIN, INPUT_PIN, NA, NA, 4,14,24,34,44;
DP8 = DB0, IO_PIN, IO_PIN, D0, D0, 5,15,25,35,45;
DP9 = RESET_, INPUT_PIN, INPUT_PIN, NA, NA, 6,16,26,36,46;
};
Listing Four
/**
* SourceSelect class
*
* @author Gary L. Schaps
*/
import java.util.*;
import java.io.*;
import java.lang.Exception;
public class SourceSelect {
/**
* code generator
*
* @param out is the output stream
*
* @exception java.io.IOException is thrown for I/O problems
*/
protected void generateCode(DataOutputStream out)
throws IOException{
int i, j, Llen, k, idx, port, sel; // iterators
int _len=0, _nchan=0, _start=0, _end=0;
//
// SSETMUXSELP(P1/P2)
//
if (_SSELECT_BLOCK[_PG] == false){
//
// Port 2 is replaced by P2MXSEL in LVM mode (PMODE1)
//
if (PBIBuilder.getPMode(_PG) == 1) {
_len=1;
_nchan=25;
_staticPBI[_PG]._muxVPlist[0][0] = _nchan;
_start=6;
}else{
_len=2;
_nchan= NCHAN;
_start=0;
}
_end = _nchan + _start; // end of static source mux range
//
// ... create the static source select block, and ...
//
out.writeInt(LittleEndian.toInt(27)); // IDR_SSELECT
out.writeInt(LittleEndian.toInt(1<<_PG)); // PG
out.writeInt(LittleEndian.toInt(27)); // IDR_SSELECT
out.writeInt(LittleEndian.toInt(_len)); // len
//
// SSELECT_SETs
//
for (idx=0; idx<_len; idx++) {
out.writeInt(LittleEndian.toInt(idx+1)); // SSPORT
out.writeInt(LittleEndian.toInt( // vplist
_staticPBI[_PG]._muxVPlist[idx][0]));
for (i = _start+1; i < (_end + 1); i++){
out.writeInt(LittleEndian.toInt(
_staticPBI[_PG]._muxVPlist[idx][i]));
}
for (i = _start; i < _end; i++){ // msblist
out.writeInt(LittleEndian.toInt(
_staticPBI[_PG]._msblist[idx][i]));
}
for (i = _start; i < _end; i++){ // ctrlist
out.writeInt(LittleEndian.toInt(
_staticPBI[_PG]._muxCtrlist[idx][i]));
}
}
_SSELECT_BLOCK[_PG] = true;
out.flush();
}
//
// dynamic sources ...
//
if (_dynamicSources == null ) { return; }
//
// MAP RAM
//
if (_SSMAPRAM[_PG] == false) {
for (port=1; port<=2; port++) {
out.writeInt(LittleEndian.toInt(29)); // IDR_SSMAPRAM
out.writeInt(LittleEndian.toInt(1<<_PG)); // PG
out.writeInt(LittleEndian.toInt(29)); // IDR_SSMAPRAM
out.writeInt(LittleEndian.toInt(64)); // len
for (i = 0; i < 64; i++){ // SSMAPRAM SETS
idx = i%16;
if (_mapRam[_PG][idx] != null) { // name
Llen = (_mapRam[_PG][idx].length() - 1);
for (j=0; j <= Llen && j < 32; j++){
out.writeByte(_mapRam[_PG][idx].charAt(j));
}
if (Llen < 32){
for (k=(30-Llen); k>=0; k--){
out.writeByte('\0');
}
}
}else{
for (k=31; k>=0; k--){
out.writeByte('\0');
}
}
out.writeInt(LittleEndian.toInt(port));// port
out.writeInt(LittleEndian.toInt(i)); // address
out.writeInt(LittleEndian.toInt(idx)); // ssmapram
}
}
_SSMAPRAM[_PG] = true;
out.flush();
}
//
// SOURCE SELECT RAM
//
if (_SSRAM[_PG] == false){
k=0;
for (port=1; port<=2; port++){
for (sel=1; sel<=2; sel++){
out.writeInt(LittleEndian.toInt(28));
out.writeInt(LittleEndian.toInt(1<<_PG));
out.writeInt(LittleEndian.toInt(28));
out.writeInt(LittleEndian.toInt(16));
for (i = 0; i < 16; i++){
out.writeInt(LittleEndian.toInt(port));
out.writeInt(LittleEndian.toInt(sel));
out.writeInt(LittleEndian.toInt(i));
out.writeInt(LittleEndian.toInt(NCHAN));
for (j=0; j<NCHAN; j++){
out.writeInt(LittleEndian.toInt(j));
}
for (j=0; j<NCHAN; j++){
out.writeInt(LittleEndian.toInt(
_ctrList[_PG][i]._muxCtrList[k][j]));
}
}
k+=1;
}
}
_SSRAM[_PG] = true;
out.flush();
}
} // generateCode()
/*
* setUnregisterable()
*/
protected void setUnregisterable() {
_unregisterable = true;
}
/**
* isUnregisterable()
*/
protected boolean isUnregisterable() {
return _unregisterable;
}
/**
* instance variables
*/
private String _name;
private int _PG;
private boolean _unregisterable;
private Hashtable _dynamicSources;
private int _mapRamAddr;
/**
* DynamicSource - dynamic source member class
*/
public class DynamicSource {
int _muxCtrList[/* 4 */][/* NCHAN */] = {
/* mainNormal ==> P1 SETRAMA */
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0},
/* mainECE ==> P1 SETRAMB */
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0},
/* altNormal ==> P2 SETRAMA */
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0},
/* altECE ==> P2 SETRAMB */
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0}};
} // DynamicSource
/**
* StaticSource - static source member class
*/
public class StaticSource {
int _muxVPlist[/* 2 */][/* NCHAN + 1 */] = {
{48, 0, 1, 2, 3, 4, 5, 6, 7, 8,
9,10,11,12,13,14,15,16,17,18,
19,20,21,22,23,24,25,26,27,28,
29,30,31,32,33,34,35,36,37,38,
39,40,41,42,43,44,45,46,47,48},
{48, 0, 1, 2, 3, 4, 5, 6, 7, 8,
9,10,11,12,13,14,15,16,17,18,
19,20,21,22,23,24,25,26,27,28,
29,30,31,32,33,34,35,36,37,38,
39,40,41,42,43,44,45,46,47,48}};
int _msblist[/* 2 */][/* NCHAN */] = {
{1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1},
{1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1}};
int _muxCtrlist[/* 2 */][/* NCHAN */] = {
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0}};
} // StaticSource
/**
* class variables
*/
private static Hashtable[] _staticSources = {
null, null, null};
private static StaticSource _staticPBI[] = {
null, null, null};
private static boolean[] _staticSourcesAnalyzed = {
false, false, false};
private static boolean[] _SSELECT_BLOCK = {
false, false, false};
private static boolean[] _SSMAPRAM = {
false, false, false};
private static boolean[] _SSRAM = {false, false, false};
private static String[][] _mapRam = {
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null},
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null},
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null}
};
private static boolean[] _mapRamRegistered = {
false, false, false};
private static DynamicSource[][] _ctrList = {
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null},
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null},
{null, null, null, null,
null, null, null, null,
null, null, null, null,
null, null, null, null}
};
private static boolean[] _ctrListInitialized = {
false, false, false};
} // EOF
Listing Five
//
// pattern rule
//
pattern
// init-actions
{int PG = 0, pgs=1; Pattern pat = null;
int startLine=LT(1).getLine();}
// rule
: "PG_PATTERN" id:IDENT LCURLY
// more rule
(
("PG" LPAREN pgs=pgSpec RPAREN SEMI)
{
switch(pgs) {
case 1:
case 2:
case 4:
PG = pgs/2;
break;
default:
PBIBuilder.foundBadCode();
System.err.println(
"Error: line(" + (startLine+1) +
"), PG spec must be PG(0 | 1 | 2)");
break;
}
}
|
/* nothing */
)
{pat = new Pattern(PG, id.getText(), false);}
// more rule
( initBlock[pat] | /* nothing */ )
( defaultBlock[pat, PG] | /* nothing */ )
(
{PBIBuilder.getPMode(PG) == 1}? ( lvmVector[pat] )*
{pat.analyzeLVMFormats();}
|
( pgInstruction[pat] )*
)
// end rule
RCURLY SEMI
// action
{PBIBuilder.registerPattern(pat);}
;
Copyright © 1999, Dr. Dobb's Journal