Today’s business applications rarely live in isolation, to paraphrase Martin Fowler’s Patterns of Enterprise Application Architecture (Addison-Wesley, 2003). The customer care system gets the date of the customer’s last payment from the accounting system while the order processing system talks to the shipping company to estimate shipping charges. Traditionally, communication between systems has been the domain of batch processing or nightly file transfers.
The trend toward real-time responsiveness and computing mobility is enticing, but one thing holding it back is communication overhead: Sending data across networks can be orders of magnitude slower than a local method invocation. One possible answer to these latency problems is middleware that provides asynchronous messaging, and one such solution is reaching maturity: the Java Message Service.
Figure: Better Than a Bottle. Sending a JMS message to a destination.
Five years have passed since the introduction of the Java Message Service (JMS), a unified API for accessing asynchronous messaging services from a Java application. The API has gradually found its way into the J2EE specification, evolving into JMS 1.1 in the current J2EE 1.4 spec. Asynchronous messaging lets two or more applications send data to each other without having to wait for receipt confirmation. The infrastructure guarantees message delivery, even if the receiving application isn’t currently running or the network connection is interrupted. It sounds simple enough, but asynchronous messaging demands a new way of thinking about system architecture. First, let’s look at a trivial example.
“Hello, Asynchronous World!”
Applications send data via JMS by encapsulating it inside a message. The JMS API provides a set of different message types, including a text message that contains a character string (such as an XML document) and a message that contains a (serializable) Java object. The Session interface provides convenience factory methods, such as createTextMessage, that make creating a message object pretty much a one-liner.
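For instance, here’s a minimal sketch, assuming a JMS Session named session (we’ll create one in a moment) and a purely hypothetical serializable OrderEvent class:

// Sketch only: "session" is a javax.jms.Session created as shown below;
// OrderEvent is a hypothetical serializable class used for illustration.
TextMessage textMessage = session.createTextMessage("<order id='42'/>");        // e.g., an XML document
ObjectMessage objectMessage = session.createObjectMessage(new OrderEvent(42));  // a serializable Java object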
Once we have a message object, we need to send it somewhere. We do this by specifying a destination. As we’ll see shortly, a destination is a type of data channel agreed upon by both the sender of the message and the recipient. JMS provides two types of destinations: topics and queues. For now, let’s keep things simple and assume that we send the message to a queue, which implies that the message will be received by a single recipient. Before we can send the message, though, we first need to establish a connection to the messaging system:
Context jndiContext = new InitialContext();
ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("QueueConnectionFactory");
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
connection.start();

Now we're ready to send a message via JMS:
Queue queue = (Queue) jndiContext.lookup("jms/Queue");
MessageProducer producer = session.createProducer(queue);
TextMessage requestMessage = session.createTextMessage("Hello, Asynchronous World!");
producer.send(requestMessage);
Figure: Pipes and Filters. Loose coupling allows the insertion of additional steps into the chain of pipes (destinations) and filters (components).
As this example shows, JMS makes sending messages quite easy (admittedly, I omitted the necessary exception handling and hard-coded all literal values): We obtain a reference to a message queue from JNDI, create a producer for this queue and send a message object on its way.
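Receiving is just as brief. The following is a minimal sketch of the consuming side, reusing the same hard-coded JNDI names and the same connection setup (and omitting the same error handling); receive() blocks until a message arrives on the queue:

// Receiving side: a sketch assuming a connection and session created
// exactly as in the sending example (exception handling again omitted).
Queue queue = (Queue) jndiContext.lookup("jms/Queue");
MessageConsumer consumer = session.createConsumer(queue);
connection.start();                                        // start delivery of incoming messages

TextMessage message = (TextMessage) consumer.receive();    // blocks until a message arrives
System.out.println("Received: " + message.getText());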
What’s Your Destination?
Our simple JMS example highlights one important property of asynchronous messaging: The source application does not send the message directly to another application—instead, it sends the message to a destination. What happens to the message after that is fairly irrelevant to the source application (some systems use the term “fire-and-forget”). The sending application may not even know which application consumes the message from the destination. Therefore, we call this mode of interaction “loosely coupled”—the sender and the receiver of the message depend on each other only for agreement on a common message format and a specific destination. This style of interaction works particularly well if the message represents a business event such as “New Order” or “Inventory Low”; the application that publishes this message needn’t be aware of the way this message is processed.
Loose coupling offers quite a bit of flexibility in setting up communication between systems. For example, if the two communicating applications don’t agree on a common message format, it’s easy to interject a transformation component to translate one message format into the other. This process is completely transparent to the sender application—the recipient application simply consumes messages from the destination to which the transformation component publishes. Because the resulting architecture consists of an alternation between destinations and components, it’s often referred to as a pipes-and-filters architectural style, labeling the destinations as pipes and the components as filters. Frank Buschmann’s Pattern-Oriented Software Architecture (Wiley, 1996) explains how this concept is used, among other things, to chain applications together in Unix/Linux shell scripting. Messaging employs the same concept, but distributed across multiple computers and applications.
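To make this concrete, here is a sketch of such a transformation filter: it consumes messages from one queue and republishes the translated result to the next one, and the surrounding applications never know it is there. The queue names and the translate() helper are made up for illustration; connection and session are set up as in the earlier example.

// A transformation "filter" sitting between two "pipes" (queues).
// Queue names and translate() are hypothetical; setup as shown earlier.
Queue inQueue = (Queue) jndiContext.lookup("jms/OrderQueue");
Queue outQueue = (Queue) jndiContext.lookup("jms/TranslatedOrderQueue");
MessageConsumer consumer = session.createConsumer(inQueue);
MessageProducer producer = session.createProducer(outQueue);
connection.start();

while (true) {
    TextMessage incoming = (TextMessage) consumer.receive();   // wait for the next message
    String translated = translate(incoming.getText());         // hypothetical format translation
    producer.send(session.createTextMessage(translated));      // pass it on to the next pipe
}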
Because of messaging’s loosely coupled nature, messages rarely flow directly from one application to another. Components such as message brokers are commonly interjected into the message stream. These brokers add another level of indirection between the applications by performing message format translation, routing, logging and other functions. As a result, when designing messaging architectures, it’s often more interesting to examine what sits between the applications than to elaborate on the JMS API features for message creation or consumption (we can get those from the online documentation).
Time Travel Is Possible
The concept of asynchronous, loosely coupled interaction is powerful. The sending application doesn’t have to worry about which application consumes the message, nor must it wait until the message is consumed. “Fire-and-forget” certainly seems convenient. Most experienced architects have learned, however, that there are no free lunches. I sometimes explain asynchronous messaging to developers by saying that it “makes things that used to be hard easy … but many things that used to be easy are now hard.” The convenience of sending guaranteed messages literally around the world to applications running on different platforms or technologies is a huge benefit. What’s the catch? The messaging system can guarantee that the message will be delivered, but it does not guarantee when it will be delivered. Delays may occur because of network interruptions or because the receiving application is simply not available. Therefore, this type of delivery is better termed “guaranteed … ultimately.”
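In JMS terms, the “guaranteed” half of that bargain is requested by marking messages persistent, either on the producer or per send; the provider then stores each message until it can be delivered, however long that takes. A sketch, reusing the producer and session from the earlier example:

// Requesting guaranteed ("persistent") delivery: the provider stores the
// message until it is delivered, however long that takes. Sketch only;
// producer and session come from the earlier example.
producer.setDeliveryMode(DeliveryMode.PERSISTENT);
producer.send(session.createTextMessage("New Order"));

// Delivery mode, priority and time-to-live can also be set per send:
producer.send(session.createTextMessage("Inventory Low"),
              DeliveryMode.PERSISTENT, Message.DEFAULT_PRIORITY, 0);   // 0 = never expires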
Figure: The Road Less Traveled. Messages can get out of sequence if they take different paths.
Worse yet, if an application sends two messages, the messaging system doesn’t guarantee that they’ll arrive in the same order. While a specific destination usually maintains the internal sequence of messages, individual messages may be routed through different paths and intermediate steps before arriving at their final destination. Some of these intermediate steps may take longer than others, resulting in out-of-order message delivery. For example, let’s assume that some messages on a queue must undergo additional transformation, while others are passed straight to the next queue. If the transformation is slow, the transformed messages will arrive later than the other messages.
Out-of-sequence delivery can be problematic if messages depend on each other or if an application is expecting responses to a specific request: An application can’t always simply “fire and forget.” In many cases, an application needs to receive a response to a request message it sent. Because destinations are unidirectional, the application receives responses on a different destination than the one on which it publishes requests. This means that the responses can arrive in a different order than the requests were sent. As a result, the application must explicitly correlate incoming response messages to the requests.
Figure: New Order. An application must be prepared to correlate request and response messages.
The application could avoid having to deal with correlation by not sending another message until the response to the previous message arrives. This requires the application to block between each request and response message, thus eliminating many of the benefits of asynchronicity. Having the application make a single request at a time can be particularly detrimental, because making a request to a remote system is much slower than making a local method call.
Synchronous method calls never made you worry about these issues: The call stack ensured that after a method finished, processing automatically resumed with the calling method. In the asynchronous world, correlation is such a common need that the JMS API includes the methods getJMSCorrelationID and setJMSCorrelationID in the Message interface.
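A sketch of how the two sides might cooperate (the queue name and message bodies are made up for illustration, and setup follows the earlier snippets): the requestor tells the replier where to respond via JMSReplyTo, the replier copies the request’s message ID into the reply’s correlation ID, and the requestor matches incoming replies against its outstanding requests.

// Requestor side: send the request and note where the reply should go.
Queue replyQueue = (Queue) jndiContext.lookup("jms/ReplyQueue");   // hypothetical queue name
TextMessage request = session.createTextMessage("estimate shipping for order 42");
request.setJMSReplyTo(replyQueue);
producer.send(request);
String pendingRequestId = request.getJMSMessageID();   // assigned by the provider during send()

// Replier side: copy the request's message ID into the reply's correlation ID.
TextMessage reply = session.createTextMessage("shipping estimate: ...");
reply.setJMSCorrelationID(request.getJMSMessageID());
session.createProducer(request.getJMSReplyTo()).send(reply);

// Requestor side again: match the incoming reply to the outstanding request.
if (pendingRequestId.equals(reply.getJMSCorrelationID())) {
    // this reply answers the request sent above
}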
The Message-Oriented Mind-Meld
The Java Message Service API offers a glimpse into the world of asynchronous messaging. As is so often the case, however, getting to know the classes and methods defined in the API is only a tiny step toward creating successful solutions. Designing, developing and testing message-based solutions requires rethinking some basic assumptions. In many ways, it’s comparable to the switch from procedural to object-oriented programming. OO design introduced a relatively small number of new concepts, such as classes, inheritance and polymorphism, but it took a long time and many books to get our arms around the patterns and best practices that describe good solutions using these concepts. Developers creating asynchronous messaging solutions find themselves in similar need of guidance for this new programming model. One good source may be design patterns, which document common solutions to recurring problems. A collection of design patterns on asynchronous messaging is available at www.enterpriseintegrationpatterns.com.
Out-of-This-World Debugging
A “simple” messaging example shows how asynchronicity can trip you up.

In his recent article, “Errant Architectures” (Software Development, April 2003), Martin Fowler reminded us of the dangers of distributed methods. Asynchronous messaging solutions definitely have the pedigree to qualify as an architect’s dream: distributed, loosely coupled, cross-platform, reliable and asynchronous. Unfortunately, some of these very qualities can turn a messaging solution into a developer’s, debugger’s or even author’s nightmare. Let me explain.

Bobby Woolf and I were working on a simple messaging example for our upcoming book on enterprise integration patterns, and wanted to show just about the most trivial application using the JMS API. We decided to create a Requestor class that sends a message via a queue destination to a Replier, which sends a message back to the Requestor via another queue (see picture), since queues are unidirectional. So we hacked together two classes, used the code from the “Hello, Asynchronous World” example and started running some tests. We ran each class in its own console to verify cross-process communication.

After correcting a few coding errors, we ran into a very interesting problem. We started the Replier first because it basically acts as a server, waiting for incoming requests to process. Naturally, on start-up, the Replier reported that there were no messages to process yet (see right console window). When we started the Requestor, it reported sending a new message and receiving a response right away. Huh? The Replier hadn’t even reported receiving a message yet! Is time travel really possible?

Because our design didn’t include a flux capacitor, we concluded that this problem was more likely a symptom of our oversight than a wormhole in the intergalactic fabric. After adding more println statements and reviewing our code, we pretty much eliminated the possibility of a coding error in either of the two classes. Running out of ideas, we decided not to start the Replier at all and see what the Requestor would do. To our surprise, the Requestor still received an immediate response message from the Reply Queue! Now this bordered on black magic—or did it?

Then the proverbial light bulb blinked on: An “extra” message was stuck on the Reply Queue. Every time we ran the Requestor, it read an “old” message from the Reply Queue. Once the Replier received the new request (usually after a few seconds), it placed a new reply message into the Reply Queue. Since queues guarantee delivery even if the target application isn’t running, the queue stored this message until the next time the Requestor started up, so we continually had an extra message stuck in the Reply Queue, causing requests and replies to be out of sync. How did the first reply message get stuck? Most likely, the Requestor aborted after making a request and never got around to reading the corresponding response. Because JMS queues guarantee delivery of messages, this problem persisted across multiple executions of the test applications.

As always, life teaches the best lessons, and this humbling experience taught us a few. —G. Hohpe
JMS and Web Services
Messaging design considerations are relevant to Web services.

These days, it’s hard to talk about interaction between applications without mentioning Web services. The first wave of hype seemed to focus on kits that made it easy to generate or receive SOAP requests from within an application. Soon thereafter, many developers and architects realized that making or receiving a SOAP request isn’t the hard part—routing the request to the correct recipient and ensuring its reliable transport are far more problematic. Recently, Web services intermediaries entered the limelight, underscored by a migration to a document-based, asynchronous model (manifested in three competing proposals for reliable Web services: ebMS, WS-Reliability and WS-ReliableMessaging). As a result, many of the considerations we apply to designing JMS messaging solutions are quite relevant to the world of Web services. In fact, a number of frameworks, such as Apache AXIS, now support the routing of SOAP messages over a JMS-compliant transport. —G. Hohpe
Messaging: A Brief History
Asynchronous messaging solutions have been around for quite some time.

IBM’s MQSeries (the grandfather of all message-based middleware) and Microsoft’s MSMQ give away their focus on message queues right in the name. On the other hand, Teknekron’s RendezVous (outside of the financial world, better known as TIBCO RendezVous) is based on a broadcast, publish-subscribe paradigm, comparable to JMS Topics. The late 1990s brought the enterprise application integration (EAI) boom, along with a flurry of new vendors and product features. Soon, MQSeries and MSMQ offered a Pub-Sub Toolkit, while TIBCO added a Distributed Queue. Unfortunately, the EAI sector’s growth resulted in an equally large number of APIs and terminologies. Sun Microsystems stepped in, bringing all vendors under one roof with the JMS specification. JMS is geared toward both publish-subscribe (via topics) and queue-based messaging and is now supported by virtually all EAI vendors.

But messaging is increasingly becoming an integral part of the software development platform. For example, Microsoft included MSMQ message queuing services in the Windows operating system and the .NET Framework. Likewise, Sun made JMS part of the J2EE specification, allowing different vendors to provide implementations as long as the API conforms to the JMS spec. As a result, access to messaging services has now become almost ubiquitous. —G. Hohpe