More on Discrete-event Simulation
Dear DDJ,
In Peter Varhol's "Extending a Visual Language for Simulation" (DDJ, June 1993), the code for the factorial function should be:
if X < 0 then fact := 0 else if X = 0 then fact := 1
The code for procedure Poisson contains variables that are calculated but never used, such as testY_int. X[0] appears to be positive, based on the comments and its use in calculating testX_int, yet the logarithm Ln(-X[0]) is calculated.
This simulation appears to be a continuous-time simulation using VisSim, in which the Poisson distribution is used to determine whether an arrival has occurred in a given time interval. A true discrete-event simulation proceeds from event to event and would calculate the time interval to the next arrival (using the exponential distribution for a Poisson process). Such a simulation would generally be far faster.
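For readers who want to see the difference concretely, here is a minimal sketch of the next-event approach for a single Poisson arrival stream; the names, rate, and run length are invented for illustration and are not taken from either program under discussion.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Draw an exponentially distributed inter-arrival time for a Poisson
   process with rate lambda (arrivals per unit time), by inverting the
   exponential CDF. */
double next_interval(double lambda)
{
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);   /* uniform in (0,1) */
    return -log(u) / lambda;
}

int main(void)
{
    double lambda = 0.5;    /* illustrative rate: one arrival per 2 time units */
    double t = 0.0;         /* simulation clock */
    double t_end = 100.0;   /* length of the run in simulated time */
    long arrivals = 0;

    while (t < t_end) {
        t += next_interval(lambda);   /* jump straight to the next arrival */
        if (t < t_end)
            arrivals++;               /* process the arrival event here */
    }
    printf("%ld arrivals in %.1f simulated time units\n", arrivals, t_end);
    return 0;
}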
Louise Baker
Albuquerque, New Mexico
Peter replies: I've since fixed the code for both the Poisson and exponential distributions. Now it also lets the user enter a random seed, or lets the system generate one. You've put your finger on one of the problems of writing discrete-event libraries for a continuous-simulation engine. Each VisSim clock tick represents a fixed amount of time, so 100 clock ticks would be equivalent to 100 seconds in the simulation run. I even started writing it the way you suggested, but came to the same conclusion you did--it would be too slow for all but the shortest simulations.
Instead, I did the opposite. In my code, each clock tick represents a customer-driven event--an enqueue, a dequeue, or a service complete. To make a long story short, the probability distributions determine the amount of time that has passed between events. Therefore, a fixed-length VisSim clock tick actually represents a variable amount of time, depending on values generated by the distributions.
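In outline, the idea looks like the following sketch; this is only an illustration of the mapping, not the actual library or VisSim block code, and the event names and distribution parameters are placeholders.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Placeholder event types standing in for the customer-driven events. */
enum { EV_ENQUEUE, EV_SERVICE_DONE };

/* Exponential variate with the given mean. */
static double expo(double mean)
{
    double u = (rand() + 1.0) / (RAND_MAX + 2.0);
    return -mean * log(u);
}

int main(void)
{
    double sim_time = 0.0;   /* simulated seconds elapsed */
    long tick;

    for (tick = 0; tick < 1000; tick++) {   /* one customer event per fixed tick */
        int event = rand() % 2;             /* placeholder event selection */
        if (event == EV_ENQUEUE)
            sim_time += expo(2.0);          /* inter-arrival time */
        else
            sim_time += expo(1.5);          /* service time */
    }
    printf("1000 ticks covered %.1f simulated seconds\n", sim_time);
    return 0;
}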
Performance depends a lot on the system and the complexity of the simulation. If VisSim does plots, for example, the simulation can take much longer than otherwise. Interestingly, if there are no outputs, VisSim does nothing. On reasonably straightforward one- or two-queue simulations with one or two plots, I can run 1000 transactions in about 10 seconds on a 486/33, which isn't too shabby. It isn't as good as some of the procedural simulation languages, but they don't have the ability to view the simulation, or to change parameters while the simulation is running.
Huffman Compression
Dear DDJ,
The code in Example 1 is extracted from a program I built around Al Stevens's Huffman encoding routines, which appeared in the October 1992 issue. As Tom Swan pointed out in his "Algorithm Alley" column (DDJ, July 1993), Huffman compression routines are slow. My version of compress() is faster than the original because it is not recursive and because the Huffman tree is traversed a maximum of 256 times rather than once for each character in the input file. Maintaining a large file-output buffer in memory boosts performance even more, because, in addition to the obvious advantage of writing often to memory and seldom to disk, you can then use the lowest-level file-output routines.
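Example 1 calls an outbit() routine whose definition is not shown; one possible shape for a buffered, low-level version along those lines is sketched below. The buffer size, file descriptor, and flush details are illustrative assumptions, not the exact code behind Example 1.

#include <unistd.h>     /* low-level write(); <io.h> on DOS compilers */

#define OUTBUF_SIZE (64 * 1024)   /* large in-memory output buffer */

static unsigned char outbuf[OUTBUF_SIZE];
static int out_pos;               /* index of the byte being filled */
static int bit_pos;               /* next free bit within that byte */
static int out_fd = 1;            /* assumed output file descriptor */

/* Append one bit to the output stream, flushing the buffer to disk only
   when it fills.  Error checking and the final flush of a partly filled
   buffer at end of file are omitted for brevity. */
static void outbit(int bit)
{
    if (bit)
        outbuf[out_pos] |= (unsigned char)(1 << bit_pos);
    if (++bit_pos == 8) {                 /* current byte is full */
        bit_pos = 0;
        if (++out_pos == OUTBUF_SIZE) {   /* buffer is full: one big write */
            write(out_fd, outbuf, OUTBUF_SIZE);
            out_pos = 0;
        }
        outbuf[out_pos] = 0;              /* start the next byte cleared */
    }
}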
Richard Zigler
Marion, Michigan
Fuzzy Logic By Any Other Name
Dear DDJ,
Though useful, "fuzzy logic," as discussed by Michael Swaine in "Programming Paradigms" (DDJ, July 1993), is neither non-Western nor fuzzy. "Gradient logic" might be a better name for it. Gradients and continuums have been familiar to Western thinkers for a very long time. After all, who discovered calculus?
It's just that traditionally, if a system has had more than two truth values, we haven't called it logic--we've called it arithmetic, or mathematical modeling. Lotfi Zadeh's accomplishment was to link logic and arithmetic in a handy way; he did not invent a new form of thought.
Anyhow, there have long been many extensions of Aristotelian logic, such as Boolean algebra, modal logic, deontic logic, conditional logic, and defeasible (default) logic. The last of these does much the same job as fuzzy logic, except that it deals with uncertainty in the inference rules rather than in the truth values.
As for the idea that "non-Western" thinking transcends ordinary logic, it's very easy to think that anything transcends ordinary logic if you don't understand it very well.
Michael A. Covington
Athens, Georgia
It's No Secret
Dear DDJ,
The December 1993 "Editorial" by Jon Erickson, "Cryptography Fires Up the Feds," comes closer to the truth than he might imagine. As a researcher in cryptography and an inventor of a patented public-key cryptographic system, I have had my own visits from the NSA. I have read all of the technical and news articles regarding U.S. policy and, until recently, have been perplexed by the rationale behind it.
Conversations with NSA personnel and careful reading of the events of the last several years, culminating in the Clipper/Skipjack initiatives of this year, lead to the only logical conclusion regarding government policy on encryption technology: U.S. government attempts to control encryption technology are directed not at foreign governments, but at United States citizens.
This conclusion is a simple deduction from the facts:
1. Since encryption algorithms (including RSA and DES) cannot feasibly be contained within our borders, restricting products that contain them is futile if the goal is to keep them from being used by foreign governments.
2. Powerful encryption technology has been developed abroad, including some recent work by an erstwhile enemy.
3. No foreign government will purchase equipment that contains encryption technology open to U.S. intelligence agencies.
These facts must be obvious even to those bureaucrats in Washington attempting to dictate policy on the exchange of information. Logically, therefore, the government's attempts to control encryption technology are directed at its own citizens. Limiting export of products including encryption technology inhibits domestic development of the technology, as do acts such as the Clipper/Skipjack initiative. Without the restrictions the government is pursuing, in several years we could buy a reasonably priced telephone that allows us to communicate securely--free from possible government eavesdropping. With government restrictions, only gangsters and drug dealers will use secure communications devices purchased abroad. We need to insist to our legislators and policy makers that we don't wish to purchase a false sense of security at the expense of our Constitutional liberties.
Walter M. Anderson
Bedford, Massachusetts
Date Redux
Dear DDJ,
In the July 1993 "Letters," Karl Hoppe gives an algorithm for determining the date of Easter, but he doesn't mention that this algorithm works only for the Gregorian calendar. (Readers probably also noticed a typographical error in the listing: there should be a variable, J, for the quotient of C/4, and in the next step "21" should read "2J".)
For the Julian calendar (until 1583), the algorithm can be changed to the following (based on the paper by Chr. Zeller, Acta Math. 9, 1894), using the same A, B, and C as before:
Step              Remainder
---------------------------
(19A+15)/30           D
(D+C+C/4-B)/7         E
Easter is then D+7-E days after March 21. The same algorithm works for years after 1582 if 19A+15 is replaced by 19A+15+B-B/4-B/3 and D+C+C/4-B is replaced by D+C+C/4+B/4+2-2B. Hoppe's algorithm has the advantage of giving the month and day of Easter directly, while Zeller's algorithm produces only the offset of Easter from March 21. On the other hand, Zeller's algorithm is fully explained in his paper.
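For concreteness, a small routine implementing the two steps above might look like the sketch below. Since Hoppe's definitions of A, B, and C are not reprinted here, the sketch assumes A = year mod 19, B = year/100, and C = year mod 100; those assumptions should be checked against his original listing.

#include <stdio.h>

/* Offset of Julian-calendar Easter from March 21, using the two steps
   above.  ASSUMED definitions (not reprinted from Hoppe's listing):
   A = year mod 19, B = year/100, C = year mod 100. */
static int easter_offset_julian(int year)
{
    int A = year % 19;
    int B = year / 100;
    int C = year % 100;
    int D = (19 * A + 15) % 30;
    int E = ((D + C + C / 4 - B) % 7 + 7) % 7;  /* keep the remainder non-negative */
    return D + 7 - E;                           /* days after March 21 */
}

int main(void)
{
    int offset = easter_offset_julian(1500);    /* a pre-1583 example year */
    if (offset <= 10)
        printf("Easter 1500 (Julian calendar): March %d\n", 21 + offset);
    else
        printf("Easter 1500 (Julian calendar): April %d\n", offset - 10);
    return 0;
}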
Hoppe's letter inspired me to read Peter Meyer's "Julian and Gregorian Calendars" (DDJ, March 1993). This is an interesting article and the routines seem to work, but I wish Peter had indicated the advantage of Gregorian-day numbers over the commonly used Julian-day numbers. He does note that the
Julian-day number for any date is simply the Gregorian-day number plus the Julian-day number of October 15, 1582. (For this to be true, interpreting "Julian-day number" in the astronomical sense, you must use the gdn for the Gregorian calendar on or after October 15, 1582, and make dates from October 5, 1582 through October 14, 1582, invalid.)
Peter comments that, "no function or program can be relied upon unless it is tested thoroughly," and he includes a program, DATETEST, which presumably does this. But this routine shows only that date_to_gdn() is the inverse of gdn_to_date(); this is important information, and if it is false, the functions are clearly wrong. If it is true, however, the functions are not necessarily correct. For example, modify date_to_gdn() by replacing the line 'dt->gdn = gdn' with 'dt->gdn = gdn/2' and modify gdn_to_date() by replacing 'gdn = dt->gdn' with 'gdn = 2*(dt->gdn)'. These functions are obviously different from the ones given in Meyer's code and do not give the correct values. Nevertheless, DATETEST 0 1 applied to these functions "reveals no bugs," because the modified date_to_gdn() is the inverse of the modified gdn_to_date().
The only actual check of date_to_gdn() would be to show that, for any given date, the function sets date.gdn to the number of days before or after October 15, 1582. This requires either a way of determining that number which is known to be correct and is independent of date_to_gdn(), or a convincing theoretical proof that the routine does what it is supposed to do. The numerous "magic numbers" in Meyer's code make the latter alternative difficult for anyone who does not know the meanings of these numbers and the significance of the operations using them. The claimed range of years, -37,390 to 40,555, makes the first approach more difficult, since there probably doesn't exist a program which is known to give correct results for all years in this range.
I have used Peter's code to produce the Julian-day number and the date for all dates from January 1, -4712 through January 1, 4000, and checked the results against the values given by a program I wrote (directly from the definition of Julian Day, with no magic numbers needed). This check produced no errors, and it used many, but not all, of Peter's date_to_gdn() and gdn_to_date() functions. Assuming that my code is correct, this verifies much of Peter's code for the range tested.
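A direct day count of that kind can be sketched as follows (this is an illustration of the idea, not the actual checking program): it simply steps through the Julian calendar one day at a time from January 1, -4712, so no magic numbers are involved, and its running count can be compared with the output of a formula-based converter.

#include <stdio.h>

static const int mlen[13] = { 0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

/* Julian calendar, astronomical year numbering (year 0 exists):
   every year divisible by 4 is a leap year. */
static int julian_leap(long year)
{
    return ((year % 4) + 4) % 4 == 0;
}

/* Brute-force day count from January 1, -4712 (day number 0), stepping one
   calendar day at a time.  The target must be a valid Julian-calendar date
   on or after the starting date. */
static long brute_force_daynum(long y, int m, int d)
{
    long year = -4712, count = 0;
    int month = 1, day = 1;

    while (year != y || month != m || day != d) {
        int dim = mlen[month];
        if (month == 2 && julian_leap(year))
            dim = 29;
        if (++day > dim) {          /* roll over into the next month/year */
            day = 1;
            if (++month > 12) {
                month = 1;
                year++;
            }
        }
        count++;
    }
    return count;
}

int main(void)
{
    /* January 1 of year 1 in the Julian calendar is Julian-day number 1721424. */
    printf("day number of 1 Jan 1 (Julian) = %ld\n", brute_force_daynum(1, 1, 1));
    return 0;
}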
Peter's code is useful for converting Gregorian dates to Julian dates and back; no other program I know of can do so. I believe it works as claimed, but I'd like to know for sure that it works throughout the large range he gives.
B.J. Ball
Austin, Texas
Example 1: Zigler's compression routine.
typedef struct {
    int   cnt;      /* count of nodes to root */
    DWORD path;     /* bit-encoded path to root */
} HPATH;            /* path to root of Huffman tree */

static int pascal compress(void)
{
    register int c,     /* chars from input file */
                 h;     /* follows path through tree */
    int ncnt,           /* count of nodes to root */
        child;          /* child node of current node */
    HPATH *php;         /* pointer into the HPATH array hp[] */
    DWORD acc = 0L;     /* accumulator for code bits */

    /* Pass 1: for each character that occurs in the input, walk the
       Huffman tree ht[] (built by the host program) from leaf to root
       and record the path as a bit string plus its length. */
    for (c = 0; c < 256; c++) {
        if (ht[c].cnt > 0) {
            php = hp + c;
            h = c;
            ncnt = 0;
            acc = 0L;
            do {
                ncnt++;
                acc <<= 1;
                child = h;
                h = ht[h].parent;
                if (child == ht[h].left)
                    acc |= 1;
            } while (ht[h].parent != -1);
            php->cnt = ncnt;
            php->path = acc;
        }
    }

    /* Pass 2: emit the precomputed path for each character of the
       input file fi, one bit at a time. */
    while ((c = getc(fi)) != EOF) {
        php = hp + c;
        ncnt = php->cnt;
        acc = php->path;
        while (ncnt--) {
            outbit((int)acc & 1);
            acc >>= 1;
        }
    }
    return 0;
}
Copyright © 1994, Dr. Dobb's Journal