Showing posts with label ESB. Show all posts

Monday

Message Auditing in SOA

Any serious service-oriented system implementation can comprise a number of services assembled into compositions. This is in fact a strategy for overcoming, or at least partially solving, the problem of the web services "sea" discussed in an earlier post. When you have a maze of services being called from many different clients, a formidable amount of information can easily end up being passed around in the form of messages. If the principles of service-oriented design are followed, these messages will be in a standardized format, leading to a variety of possibilities as to how mechanisms can be positioned to filter and persist the messages, and to extract business intelligence from them in an arbitrary manner. The impact of standards-based messaging on auditing is as deep and as sweeping as its impact on the field of integration in general.

A Case for Auditing Messages in Service-Oriented Systems

Not much attention has been given to auditing messages in service-oriented eco-systems. Most modern infrastructure and service bus platforms provide simple message auditing mechanisms, for example by writing messages to a log file or to a proprietary database. Typically, these mechanisms are not optimized for performance and are recommended for diagnostic purposes only.

This traditional ignorance towards auditing is most likely due to the fact that auditing is usually not seen as a primary business need. Also, the requirements for auditing can be so daunting and unpredictable that they can be very difficult to comprehend and address in a consistent manner, especially across heterogeneous environments.

With the advent of service-orientation, service design principles, standardized message structures and commercial infrastructures that directly enable service-oriented computing, it is becoming significantly easier to build a system that is generic enough to plug into a variety of platforms and also specific enough to capture only certain messages on an "as needed" basis. As a result, we can now build message auditing systems without having to creep inside of and customize the service logic.

This effectively enables us to:

  1. respond to arbitrary queries pertaining to compliance requirements
  2. gather business intelligence from arbitrary perspectives by running queries against persisted messages
  3. set up observation systems to raise alarms when certain events happen
  4. extract diagnostic information about systems to better optimize their resources
  5. create a dashboard to observe the overall state of systems in real-time

Let's now explore these requirements in more detail and then establish a high-level design and implementation.

Typical Requirements

Any reasonable service-oriented implementation can have multiple services (and even multiple versions of services) along with multiple XML schemas. The services may be implemented on more than one platform and may be called by diverse clients. The volume and size of messages are not predictable: because well-designed services are typically interoperable, reusable, and composable, we cannot predict how and when new usages of existing services will emerge.

This leads to the following requirements that a message auditing system may need to address:

1. Flexibility to Handle Messages that may have Different Structures
The auditing system cannot assume any specific structure or schema for the messages. A request or response message at one service may look very different from the request or response at another service. Even different versions of the same service may have different message structures.

2. Ability to Specify Criteria to Filter Messages so that only Specific Messages are Audited
The message auditing system must support message filtering to specify which messages really need to be audited; it is easy to imagine that not all messages do. The ability to filter messages reduces the load on the auditing system and creates a meaningful message warehouse, which can be easily managed and processed to gather on-going business intelligence. The challenge in this requirement is that since the messages can have any structure, the criteria for filtering can also vary.

3. Ability to Map Messages to the Service Instances from which they Emanated
The message auditing system must be able to map an audited message to the instance of the service from which it originated, so that, in case more details have to be found, the log files on the server can be referenced. In the absence of this type of mapping logic, it may be difficult to gain the needed amount of insight into how messages were previously processed.

4. Ability to Support Multiple Destinations where Messages need to be Stored
The message auditing system must be able to support multiple destinations for the audited messages. Typical examples of where a message may be sent are: the database, e-mail, files and folders, queues, and other services. The database is commonly considered the single most important destination, since it is permanent and can be used to generate business and reporting intelligence. However, it is important to support other destinations for real-time reporting, events, and help with diagnosing problems.

5. Ability to Scale to Accommodate Increasing Load
As mentioned previously, because it is impossible to predict how a service eco-system will grow and evolve, it is also not possible to accurately estimate the future loads placed upon the message auditing system. Therefore, the system needs to be inherently and seamlessly scalable, so that any required infrastructure upgrades can be made without affecting existing services.

6. Must Not be Intrusive to Service Logic
An auditing system should not impose significant change upon existing services. Services must be designed so that their autonomy is maximized while their coupling to the infrastructure is minimized. When incorporating an auditing system (be it accessed directly by service logic or made available via separate utility services), care must be taken so that it does not introduce negative service coupling requirements.

7. Must be Autonomous
Message auditing systems should not be dependent on services. They may depend on the standardized schemas being used to define service contract types (following the service design principle of Standardized Service Contract), but they should not have any direct dependencies on business services, their message structures, or service logic. This enables auditing services (or systems) to exist as independent parts of the enterprise with their own life cycles.

8. Must Act as an Optional Feature
Not all services need auditing, and those that do may not need auditing all the time. Message auditing logic must therefore be designed to be as "pluggable" as possible, so that it can be easily enabled and disabled without affecting services and their supporting infrastructure.

9. Must be Flexible and Extensible
The auditing requirements can differ so much from one organization to another, or even from one service inventory to another in the same company, that the auditing system should not be built to address only a predefined set of auditing requirements. Instead, the system must act as a platform that can be customized or extended to accommodate evolving requirements.

10. Ability to Define Arbitrary Queries for Reporting or Event Generation Purposes
Ad-hoc reporting is often ignored in traditional auditing systems. To support this requirement, messages must be stored in a manner that makes it possible to query them to generate both reports and events. Keeping in mind that each service operation has its own message structure, it must be possible to specify queries in an arbitrary manner. This is also because the requirements for reports and events cannot be predicted; auditing a system may itself lead to the need for ad-hoc queries.

11. Ability to Generate Automated Reports
It has become increasingly common for systems to have to comply with regulation-based auditing requirements. With the advent of the Sarbanes-Oxley Act in particular, companies are required to provide high-quality reporting capabilities in order to respond to auditing-based compliance queries.

Solution

There are several SOA infrastructure platforms that simply have insufficient auditing capabilities. Building your solution on such a foundation may force you to adopt a third-party auditing solution later. However, auditing requirements can become so varied that it may simply not be feasible to build or use one system that addresses them all. Therefore, you need to place reasonable assumptions and constraints on what a given auditing system can and cannot do.

For example:

  • Service instances must be identifiable by some unique ID that can be stored as metadata along with the messages.
  • The auditing system should not be responsible for decrypting messages.
  • The auditing system may be allowed to add its own information as headers to the messages.
  • It must be possible to identify from the message the version of the service, the operation, and the schemas being used.


To limit the scope of a message auditing system, the following points must also be considered:

  • Auditing logic should not be used for routing purposes. The message processing chain should remain part of the service logic.
  • Sometimes, the word "auditing" is used to describe a business requirement for applications. For example, in a customer management application, the administrator must be able to see the history of how customer information was modified and by whom. This should not be confused with "message auditing", which is logic that is considered to be part of an agnostic system (a system that is not bound to a specific business context).
  • Auditing systems may alter messages by adding their own header information. This should not be considered as a "breakage" in the message integrity because these headers are generally used for auditing purposes only.
  • The message store should not be used for non-repudiation. This again is due to the fact that headers may be added to the message for auditing.


Architecture and Design

There can be many different ways to build an auditing system. Here, we will explore one possible approach based on the context and requirements discussed so far.

Overall the solution can be broken down into the following parts:

  1. Service Instance Manager
  2. Message Interceptor
  3. Filter Manager
  4. Filter-Service Instance Mapper
  5. Destination Manager
  6. Filter-Destination Mapper
  7. Report Manager
  8. Report Scheduler
1 Service Instance Manager

Message auditing must be done in a way that makes it possible to track from which instance of which service a message was captured. It is therefore important to have a mechanism in place that consistently and uniquely identifies each service instance. It may also be a good idea to implement a self-registration process, so that when an instance of a service is started, it can register itself in a database.
2 Message Interceptor

The capturing mechanism must be pluggable so that it can be easily enabled or disabled. It has to be non-intrusive to service logic and, to enable high levels of scalability, it should be designed to simply capture messages and send them to a queue.

This module is also responsible for gathering metadata about the message (e.g. the service instance identifier to map the messages to the service log files). All of this metadata must be either added to the messages via headers or it may be captured as separate fields.

The module must be able to capture request and response messages together and send them as pairs for auditing. This ensures that request and response message information will be bundled for future processing and reporting purposes.

This module should not apply filters to the messages since that may introduce time-consuming logic that could adversely affect the runtime service performance. Typically, a message capturing mechanism can be developed by implementing message handlers or interceptors.
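
As a rough sketch of this capture-and-enqueue behaviour, the following illustrative Java classes bundle the request and response as a pair together with the service instance identity and metadata. All names are hypothetical, and a BlockingQueue stands in for the real message queue:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;

// Hypothetical audit packet: request and response travel as a pair,
// together with the service instance id and any extra metadata.
record AuditPacket(String serviceInstanceId, String request, String response,
                   Map<String, String> metadata) {}

// Minimal interceptor sketch: it only captures and enqueues. No filtering
// happens here, so the impact on the service's runtime path stays small.
class AuditInterceptor {
    private final String serviceInstanceId;
    private final BlockingQueue<AuditPacket> queue;
    private String pendingRequest;   // held locally until the response arrives

    AuditInterceptor(String serviceInstanceId, BlockingQueue<AuditPacket> queue) {
        this.serviceInstanceId = serviceInstanceId;
        this.queue = queue;
    }

    void onRequest(String requestXml) {
        this.pendingRequest = requestXml;   // capture the request and hold it
    }

    void onResponse(String responseXml, Map<String, String> metadata) {
        // combine request, response, metadata and identity, then submit to the queue
        queue.offer(new AuditPacket(serviceInstanceId, pendingRequest,
                                    responseXml, metadata));
        pendingRequest = null;
    }
}
```

In a real deployment the queue submission would go to JMS (or similar) so that the heavy processing happens asynchronously, off the service's request path.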

3 Filter Manager

A filter is a simple function that takes the request and response messages with their associated metadata and determines whether a given set of messages needs to be audited. Typically, a filter looks into the message content and applies a rule to find out whether certain conditions resolve to true.

Some common types of filters that can be implemented and readily used are:

• check for the presence or absence of a certain string in the message

• apply a given XPath query to the message that results in a Boolean response

• apply a given XQuery query to the message that results in a Boolean response

• apply interfaces to define arbitrary criteria based on the message content

When defining a filter, it must be indicated whether it applies to the request or the response. The definitions of these filters may be created and stored in a database or configuration file.
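
For example, an XPath-based filter could be sketched as follows using the standard javax.xml.xpath API. The XPathFilter class and its matches contract are illustrative, not taken from any specific product:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathExpression;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Sketch of an XPath filter: the expression must evaluate to a boolean.
// The filter definition records whether it targets the request or the response.
class XPathFilter {
    private final XPathExpression expr;
    private final boolean applyToRequest;

    XPathFilter(String xpath, boolean applyToRequest) throws Exception {
        XPath xp = XPathFactory.newInstance().newXPath();
        this.expr = xp.compile(xpath);
        this.applyToRequest = applyToRequest;
    }

    boolean matches(String requestXml, String responseXml) throws Exception {
        String target = applyToRequest ? requestXml : responseXml;
        // Evaluate the compiled expression against the chosen message
        return (Boolean) expr.evaluate(new InputSource(new StringReader(target)),
                                       XPathConstants.BOOLEAN);
    }
}
```

A string-match or XQuery filter would follow the same shape; only the evaluation logic changes.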

4 Filter-Service Instance Mapper

A mechanism to map filters to service instances is needed to determine which filters to apply for a given set of messages and their metadata. This mapping can also be kept in a database or a configuration file. Note, however, that a database may be more suitable due to the need to sometimes establish many-to-many relationships.
5 Destination Manager

The destination for filtered messages represents the location they need to be sent to for storage. Possible destinations include the database, a queue, e-mail, files and folders, URLs, other services, or any implementation of a simple interface that can accept the request, response, message metadata, and filter, and is able to send them to whatever destination they need to go. All destinations may be stored in a configuration file or a database.
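
A minimal destination contract might be sketched as below. The interface name and the in-memory implementation are hypothetical; real implementations would write to a database, send e-mail, drop files into a folder, or forward to a queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative destination contract: anything that can accept the audited
// packet can act as a destination (database, mail, folder, queue, ...).
interface AuditDestination {
    void send(String request, String response,
              Map<String, String> metadata, String filterName);
}

// A trivial in-memory destination, handy for testing the pipeline.
class InMemoryDestination implements AuditDestination {
    final List<String> stored = new ArrayList<>();

    public void send(String request, String response,
                     Map<String, String> metadata, String filterName) {
        // Record which filter matched along with the message pair
        stored.add(filterName + ":" + request + "|" + response);
    }
}
```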
6 Filter-Destination Mapper

For each filter it must be specified where the matching messages are to be sent. This can be achieved by specifying a list of destinations for each registered filter.
7 Report Manager

The database is a mandatory destination if it is important to run reports on audited messages. Defining a report at a basic level requires setting up a query that selects only the relevant messages, plus some way to render those messages. A simple yet powerful solution is to store the messages in XML-typed columns and to specify the queries in terms of XPath or XQuery. Since the messages may have an arbitrary structure, rendering formats can conveniently be specified with technologies like XSLT.
8 Report Scheduler

This is a regular scheduling mechanism that can run reports based on the provided schedule and report definition.
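
A bare-bones scheduler along these lines can be sketched with the JDK's ScheduledExecutorService; in a real system the schedule and report definition would be loaded from the database rather than passed in directly:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal scheduler sketch: runs a report job on a fixed interval.
// Names are illustrative; a production scheduler would add error handling
// and persistence of schedules.
class ReportScheduler {
    private final ScheduledExecutorService pool =
            Executors.newSingleThreadScheduledExecutor();

    void schedule(Runnable reportJob, long periodMillis) {
        // Run immediately, then repeat at the given period
        pool.scheduleAtFixedRate(reportJob, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() {
        pool.shutdown();
    }
}
```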

Putting It All Together

Figure 1 identifies some of the interfaces that will enable the creation of the previously listed solution parts. For the sake of brevity, only important interfaces are listed.


Figure 1


Once all these components are in place, we can study how the system will work at runtime and how the original requirements can be met. At service start-up, the service instance must register itself with a central registry as displayed in Figure 2.


Figure 2: At service startup, the service instance object is created and registered

The following steps demonstrate how the auditing system would work at runtime:

  1. A service client sends a message to a service instance.
  2. The auditing interceptor captures the service instance identity, request, and its metadata, and stores this information locally.
  3. The service logic processes the request and constructs the response.
  4. The auditing interceptor captures the response.
  5. The auditing interceptor combines the request, response, metadata, and service instance identity, and sends this combined packet to a queue.
  6. The response is returned to the client.

These steps are illustrated in Figure 3.


Figure 3: This figure shows how the audit interceptor captures the request and response and simply submits them to a queue. This makes the auditing system less intrusive and more scalable. Note that filters are not applied at this point, to reduce the additional processing.


It is evident that the auditing interceptor acts in a non-intrusive and scalable manner: it does not depend on message structures or individual service instances.

Let's now see how the messages can be processed by the queue listeners:

1. The queue listener picks up the message packet that contains the request, response, service instance identifier, and other message metadata.

2. The listener finds the mapped filters for the given service instance.

3. The filters are applied to the messages and, as a result, each filter indicates whether the message was a match or not. It is possible that many filters are provided for a given instance of the service and more than one may indicate a match.

4. If no filter is specified or no filter indicates that the message must be audited, the message is simply discarded.

5. For every filter that indicates that the given set of messages must be audited, it is determined what the destinations for that filter are.

6. The message packet is then sent to every destination mapped for the given filter.


These steps are further illustrated in Figure 4.


Figure 4: This diagram shows how the messages are actually audited by the queue listener. First, for the given service instance, the registered filters are looked up; then the filters are applied to the messages to check whether they need to be audited. If so, the destinations are looked up for the given filter and the messages are sent to all destinations associated with each filter.

The sequence of processing steps shown in Figure 4 represents a flexible processing model that allows for customizable behaviour for each service instance and filter. It is extensible since any new filter may be added to the service instance and any new destination may be provided for each filter.
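
The listener's filter-and-fan-out loop described above can be sketched as follows. Filters and destinations are modelled here with standard functional interfaces, and all names are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;
import java.util.function.Consumer;

// Illustrative sketch of the queue listener's dispatch logic.
// Filters are predicates over (request, response); destinations are
// simple consumers of the combined packet.
class AuditDispatcher {
    private final Map<String, BiPredicate<String, String>> filters;   // filter name -> rule
    private final Map<String, List<Consumer<String>>> destinations;   // filter name -> sinks

    AuditDispatcher(Map<String, BiPredicate<String, String>> filters,
                    Map<String, List<Consumer<String>>> destinations) {
        this.filters = filters;
        this.destinations = destinations;
    }

    // Returns how many destination deliveries were made; 0 means the packet
    // matched no filter and is simply discarded.
    int dispatch(String request, String response) {
        int deliveries = 0;
        for (Map.Entry<String, BiPredicate<String, String>> e : filters.entrySet()) {
            if (e.getValue().test(request, response)) {               // apply the filter
                for (Consumer<String> dest :
                         destinations.getOrDefault(e.getKey(), List.of())) {
                    dest.accept(request + "|" + response);            // fan out to destinations
                    deliveries++;
                }
            }
        }
        return deliveries;
    }
}
```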

Once the reports have been defined using XPath or XQuery, the queries can be run against the messages. Since the message structure can be arbitrary, it is important to save them in XML type columns.

Let's now consider some sample cases and see how the auditing system can support these requirements:

1. You want to track all activities of a user

It is a common situation where you may want to track a user's activity (e.g. for the purpose of diagnosing a problem). Assuming that each message would have a user ID embedded in it, you can create an XPath filter and add that to all service instances. If you want to quickly diagnose the problem, you could specify your e-mail address as the destination. Once this filter is enabled, the messages will start coming to your inbox. After you have collected relevant messages, you can disable the filter.
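
Assuming the user ID is embedded in the message body in an element such as userId (a hypothetical schema; adjust the expression to your own message structures), the filter for this case can amount to a single XPath expression:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

class UserActivityFilter {
    // Matches any message that mentions the given user anywhere in its body.
    // The //userId element name is an illustrative assumption.
    static boolean mentionsUser(String messageXml, String userId) throws Exception {
        XPath xp = XPathFactory.newInstance().newXPath();
        String expr = "//userId = '" + userId + "'";
        return (Boolean) xp.evaluate(expr,
                new InputSource(new StringReader(messageXml)),
                XPathConstants.BOOLEAN);
    }
}
```

Registering this expression as a filter across all service instances, with your inbox as the destination, gives exactly the temporary diagnostic tap described above.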

2. You want to find out how a lookup service is being used.

You might have a service that lets users look up service providers in a given area. You may want to find out how it is being used in order to improve the user interface. You can capture all messages for this service and query them with XQuery or XPath to count how many times a zip, state, or city was provided, what mile radius users usually select, and so on. Based on this, the user interface may pre-select some values.

Care must be taken to ensure that interaction with the queue is efficient (for example, it may be executed in a separate thread). If the messages hold sensitive data, the message database must be designed and managed so that the data is not compromised. Since the message auditing will simply store the messages without associating them with entitlements, it may not be possible to create reports based on access control. Messages with attachments pose another challenge, since the attachments themselves may also need to be stored and queried.

Conclusion

With the advent of XML-based technologies, the evolution of service design principles, service design patterns, and industry standards, message auditing has become a viable commodity within the modern SOA eco-system. Adding auditing logic to your systems opens a floodgate of opportunities for extracting business intelligence from messages.

The beauty of message auditing with XML is that it does not bind you to a specific data structure and therefore allows for the extraction of information in an arbitrary manner. This can be the best defence against the open-ended nature of service-oriented system auditing, which can span from performance data to compliance requirements, from user behaviour to business intelligence, and many more usages.

Tuesday

SOA and the real Service Integration issue

A real problem when undertaking a SOA project, whether starting from scratch or extending an already existing one, is the Web Services (WS) hell hiding around the corner. When beginning the SOA approach, especially with limited time to deliver and/or poor business analysis results, defining and architecting a solution that fits your client (both in the kick-off architecture and in the way of working, i.e. good adaptation to the new environment) can easily produce a sea of WS. On the other hand, when extending or taking over a project in an already existing SOA environment developed by someone else, there is a small daemon hiding in the corner that constantly tempts you to build more and more new WS in order to build your components.

In any case, in SOA you always face the problem of an exponentially growing WS pool. To avoid it, you have to put effort into standardizing the core architecture elements (bear in mind that in an existing environment, developed by someone else, you first have to understand and comply with the approach already taken) and, furthermore, into establishing an optimized integration approach for the environment's existing assets.

With this in mind, you either have to build on some existing groundwork, or … re-invent the wheel.

Re-inventing the wheel means being prepared to spend much more effort on business analysis and re-engineering, and furthermore to develop all the necessary tools and bridges, both for the interoperability elements of the infrastructure and for the semantic and conceptual transformations needed to implement the required physical (system) bridges.

Or using some existing assets…

Java Business Integration (JBI) is an effort focused on standardizing the core architecture elements of integration architectures. It is a specification developed under the Java Community Process (JCP) for an approach to implementing a Service Oriented Architecture (SOA). The JCP reference is JSR-208. JBI extends Java EE and Java SE with business integration service provider interfaces (SPIs). It enables the creation of a Java business integration environment for the creation of composite applications. It defines a standard runtime architecture for assembling integration components to enable a SOA in an enterprise information system.

Following the JBI road, you might gain another ally for the SOA stacks you are going to need.

Open ESB is an open source integration platform based on JBI technology. It implements an Enterprise Service Bus (ESB) using JBI as the foundation. This allows easy integration of Web Services to create loosely coupled, enterprise-level integration solutions.

Open ESB Architecture

Because Open ESB is built on top of the JBI specification, it makes sense that, before diving into the architecture of Open ESB, we take a look at the JBI architecture, which is illustrated in Figure 1.

As the figure shows, JBI adopts a pluggable architecture. At the heart of a JBI runtime is a messaging infrastructure called the Normalized Message Router (NMR), to which a set of JBI components is connected. JBI components are the architectural building blocks of a JBI instance and are plugged into the NMR to interact with each other. The interaction among components carries out the functional logic of the JBI instance.

Normalized Message Router

The primary function of the NMR is to route normalized messages from one component to another. It enables and mediates the inter-component communication. When a component needs to interact with another component, the component does so by generating a normalized message and sending it to the NMR. The NMR will route the message to the destined component based on some routing rules. After the destined component gets and processes the message, it will generate a response message if required, and will send the response message to the NMR. It is the NMR's responsibility to deliver the response message back to the original component.

The NMR uses a WSDL-based messaging model to mediate the message exchanges between components. From the NMR's point-of-view, all JBI components are service providers and/or service consumers. The WSDL-based model defines operations as a message exchange between a service provider and a service consumer. A component can be a service provider, a service consumer, or both. Service consumers identify needed services by a WSDL service name rather than end-point address. This provides the necessary level of abstraction and decouples the consumer from the provider, allowing the NMR to select the appropriate service provider transparently to the consumer.
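
The decoupling effect of name-based routing can be illustrated with a toy router. This is a deliberate simplification for illustration, not the actual JBI API: consumers address a service by name, and the router, like the NMR, selects the provider transparently:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy illustration of name-based routing. Providers are modelled as
// functions from a normalized request to a normalized response.
class NameBasedRouter {
    private final Map<String, Function<String, String>> providers = new HashMap<>();

    void register(String serviceName, Function<String, String> provider) {
        providers.put(serviceName, provider);
    }

    // The consumer never sees the provider's endpoint, only the service name
    String invoke(String serviceName, String normalizedMessage) {
        Function<String, String> p = providers.get(serviceName);
        if (p == null) {
            throw new IllegalArgumentException("no provider for " + serviceName);
        }
        return p.apply(normalizedMessage);   // request-response exchange
    }
}
```

Because the consumer depends only on the service name, the provider behind it can be replaced without touching the consumer, which is the abstraction the NMR provides.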

An instance of an end-to-end interaction between a service consumer and a service provider is referred to as a service invocation. JBI mandates four types of interactions: One-Way, Reliable One-Way, Request-Response, and Request Optional-Response. These interactions map to the message exchange patterns (MEPs) defined by WSDL 2.0 Predefined Extensions.

The NMR also supports various levels of quality of service for message delivery, depending on application needs and the nature of the messages being delivered.

Component - Service Engine and Binding Component

A JBI component is a collection of software artifacts that provide or consume Web Services. As mentioned previously, JBI components are plugged into the NMR to interact with other components. JBI defines two types of components: Service Engines (SEs) and Binding Components (BCs).

A Service Engine is a component that provides or consumes services locally within the JBI environment. Service Engines are business logic drivers of the JBI system. An XSLT Service Engine, for example, can provide data transformation services, while a BPEL Service Engine can execute a BPEL process to orchestrate services, or enable execution of long-lived business processes. A Service Engine can be a service provider, a service consumer, or both.

A Binding Component is used to send and receive messages via particular protocols and transports to systems that are external to the JBI environment. They serve to isolate the JBI environment from the particular protocol by providing normalization and de-normalization from and to the protocol-specific format, allowing the JBI environment to deal only with normalized messages.

The distinction between these two types of components is mainly functional. In fact, JBI uses only a flag to distinguish them; the programming model and APIs of the two types are otherwise identical. By convention, however, Service Engines and Binding Components implement different functionality in JBI.

Service Unit and Service Assembly

The JBI runtime hosts components and so acts as a container for them. Components in turn act as containers for Service Units (SUs). A Service Unit is a collection of component-specific configuration artifacts to be installed on an SE or BC. One can also think of a Service Unit as a single deployment package destined for a single component. The content of a Service Unit is completely opaque to JBI, but transparent to the component it is deployed to. An SU contains a single JBI-defined descriptor file that defines the static services provided and consumed by the Service Unit.
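
As an illustration, a Service Unit descriptor (jbi.xml) declaring a single provided service might look roughly like the sketch below. The service and interface names are hypothetical, and the exact format is defined by the JSR-208 descriptor schema:

```xml
<jbi version="1.0" xmlns="http://java.sun.com/xml/ns/jbi"
     xmlns:svc="http://example.org/booking">
  <!-- Static declaration of the services this Service Unit provides -->
  <services binding-component="false">
    <provides interface-name="svc:BookingPortType"
              service-name="svc:BookingService"
              endpoint-name="bookingEndpoint"/>
  </services>
</jbi>
```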

Service Units are often grouped into an aggregated deployment file called a Service Assembly (SA). A Service Assembly includes a composite service deployment descriptor, detailing to which component each SU contained in the SA is to be deployed. A service assembly represents a composite service.

Lifecycle Management

JBI also defines a JMX-based infrastructure for lifecycle management, environmental inspection, administration, and reconfiguration to ensure a predictable environment for reliable operations.

Components interface with JBI via two mechanisms: service provider interfaces (SPIs) and application program interfaces (APIs). SPIs are interfaces implemented by the binding or engine; APIs are interfaces exposed to bindings or engines by the framework. The contracts between framework and component define the obligations of both to achieve particular functional goals within the JBI environment.

Unless you're doing component development, it's unlikely that you need to work with these SPIs and APIs directly.

Open ESB

Once we've understood the architecture of JBI, Open ESB becomes really simple. Open ESB is an implementation of the JBI specification. It extends the JBI specification by creating an ESB from multiple JBI instances. The instances are linked by a proxy-binding based on Java Message Service (JMS). This lets components in separate JBI instances interoperate in the same fashion as local ones (see Figure 2).

ESB administration is done by the Centralized Administration Server (CAS), a bus member that lets the administrator control the system directly.

Open ESB includes a variety of JBI components, such as the HTTP SOAP Binding Component, the Java EE Service Engine, and the BPEL Service Engine.


A Sample Service Integration Scenario

In this section, we'll examine a simple use case and illustrate a possible service integration solution using Open ESB.

Problem Description

ABC Movie Theatres is a fast-growing movie theatre chain. To better serve its customers, the management has decided to put in place a new ticket booking service, the Booking Service, which will be responsible for handling most of the ticket purchasing requests generated from various systems such as Web-based applications, ticket vending machines and points of sale (POS) at box offices.

The business logic of handling a booking request is somewhat complicated, but in a nutshell it involves the following steps:

  1. When a booking request is received, the Booking Service will first try to process the ticket information. This could include checking ticket availability; holding the tickets for the customer if tickets are indeed available, or providing alternatives otherwise; and applying any applicable promotions and calculating the total dollar amount.
  2. The Booking Service will then charge the customer using the payment information included in the booking request.
  3. Finally, the Booking Service will send a confirmation to the customer using the contact information included in the request.
High-Level Solution Description

Considering that a significant amount of the logic needed by the new Booking Service has already been implemented in different applications over time, and that these applications have proved stable, it makes a lot of sense to leverage these existing IT assets for cost efficiency. So the architect team at ABC comes up with a solution of service-enabling and consolidating the existing logic, bringing the services into the ESB and exposing them as new composite services made available via different protocols.

To do this, they have to identify candidates for service-enabling. Currently the company has a billing system developed and used by the finance department, and it has been working perfectly for years. This system runs over the HTTP protocol. Another system is the notification system developed by the customer relationship department. This system listens to a JMS message queue and, when it gets a new message, sends out notifications to the customer by the means specified in the message, e.g., an e-mail or a voice mail message. These two systems become the ideal candidates for billing customers and sending confirmations.

The company also has a system for processing ticket orders. This system, however, is severely outdated and cannot scale to handle the ever-increasing transaction volume resulting from recent acquisitions. The architect team has decided it's time to write a replacement application; it has also decided that the replacement, the Ticket System, will be built using EJB technologies due to the transactional nature of the application.

The architect team also decided that the new Booking Service will be made available via the SOAP-over-HTTP protocol as well as the File protocol to support different client systems.

Solution Details

When it comes to Open ESB development, it's all about creating JBI Service Units and packaging them into a Composite Application (or Service Assembly, in JBI terms). Figure 3 illustrates the Service Units for this solution and the interactions among those units. As mentioned earlier, Service Units are deployed to their corresponding JBI components. For simplicity's sake, we'll use the terms Service Unit and Component interchangeably where it doesn't cause confusion in the particular context.

At a high level, two BCs, the Booking Service SOAP BC and the Booking Service File BC, are created to enable the Booking Service and expose it to the outside world via the SOAP and File protocol respectively, allowing different kinds of clients to consume the service. A client application can invoke the Booking Service via either of these protocols. Invocation requests generated by client applications are then routed to the Booking Process, a BPEL Service Engine. The Booking Process orchestrates the Ticket Service, the Billing Service, and the Notification Service to fulfil the request.

File and SOAP BC - The Booking Service

In Open ESB, a File BC is a JBI binding component that binds file systems to WS-I Web Services. A File BC scans a pre-configured file location for new files. If a new file is found, the component generates a Web Service call using the content of the file as the payload of the Web Service input message. Response messages are also written to the file system by the component. Many of a File BC's properties, such as the file location, the file name, and the interval at which the component scans the specified location for new files, can be configured.
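A rough plain-Java approximation of this polling behaviour, with the Web Service call stood in by a function. All names and details are illustrative, not Open ESB's actual implementation:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Function;

// Rough approximation of what a File BC does: poll a directory, use each
// new file's content as the input payload of a service call, and write the
// response back to the file system. Names and details are illustrative.
class FilePollerSketch {
    private final Path inbox;
    private final Path outbox;
    private final Function<String, String> service; // stands in for the WS call

    FilePollerSketch(Path inbox, Path outbox, Function<String, String> service) {
        this.inbox = inbox;
        this.outbox = outbox;
        this.service = service;
    }

    // One polling pass; a real BC would run this on a configured interval.
    int pollOnce() throws IOException {
        int processed = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(inbox, "*.xml")) {
            for (Path file : files) {
                String payload = Files.readString(file);
                String response = service.apply(payload);               // "invoke" the service
                Files.writeString(outbox.resolve(file.getFileName()), response);
                Files.delete(file);                                     // don't process twice
                processed++;
            }
        }
        return processed;
    }
}
```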

A SOAP BC works the same way as the File BC, only instead of scanning a file system directory, a SOAP BC accepts WS-I SOAP messages over the HTTP protocol.

A File BC and a SOAP BC are created in this solution to enable the Booking Service and expose it to external systems via these protocols. External systems that wish to consume the service do so either by sending a WS-I-compliant message or by dropping the message in the specified file location.

External requests received by the previous two BCs will be routed, by the NMR, to the Booking Process, a BPEL Service Engine that executes a BPEL business process to orchestrate services.

BPEL Service Engine - The Booking Process

A BPEL Service Engine is a JBI runtime component that provides services for executing WS-BPEL-compliant business processes. The contract between a business process and partner services is described in WSDL.

A BPEL SE can save business process data to a persistent store if configured to do so. This is required for recovering from system failure and running long-lived processes. A BPEL SE can be deployed to a clustered environment to achieve high scalability. The service engine's clustering algorithm automatically distributes processing across multiple engines. When the business process is configured for clustering, the BPEL Service Engine's failover capabilities ensure throughput of running business process instances. When business process instances encounter an engine failure, any suspended instances are picked up by all available BPEL Service Engines.

The Booking Process in this solution is a BPEL SE and is at the heart of this solution. It does some simple message transformation and, most importantly, invokes the Ticket Service, Billing Service and the Notification Service. Figure 4 shows a simplified version of the Booking Service process.

Java EE Service Engine - The Ticket Service

A Java EE Service Engine brings Java EE components into the Open ESB runtime as Web Services. It acts as a bridge between a Java EE application server and a JBI environment for Web Service providers and consumers deployed in the application server. Java EE Web components or EJB components that are packaged and deployed as Web Services on a Java EE container can be transparently exposed as service providers in the JBI environment.

In this solution, the Ticket System is implemented using EJB Session Beans and wrapped as a JAX-WS Web Service. The service is then brought into the Open ESB runtime by the Ticket Service SE.
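A minimal sketch of what such a wrapped service contract might look like, kept as plain Java so it stays self-contained; all names are hypothetical:

```java
// Hypothetical sketch of the Ticket Service contract, kept as plain Java so
// it stays self-contained. In the actual solution this would be an EJB
// session bean (@Stateless) exposed via JAX-WS (@WebService, @WebMethod).
class TicketServiceSketch {

    // In the real service this would be a @WebMethod on the session bean.
    boolean holdSeats(String showId, int count) {
        // the transactional seat-hold logic would live here
        return count > 0 && count <= available(showId);
    }

    int available(String showId) { return 10; } // stub inventory lookup
}
```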

HTTP and JMS BC - The Billing Service & the Notification Service

Let's face it - all BCs work the same way. This is the beauty of Open ESB architecture. We've looked at two BCs, the File and the SOAP BC. Similarly an HTTP BC binds the HTTP protocol to the Web Service, and a JMS BC binds the JMS protocol.

In this solution, the HTTP BC and the JMS BC are used to bring the Billing Service and the Notification Service into the Open ESB runtime respectively - as stated previously, the Billing Service runs over the HTTP protocol and the Notification Service over the JMS.

It's important to be aware that, although these components are shown connected directly in the figure, they never communicate directly with each other. Instead, components send messages to and receive messages from the NMR. The NMR is responsible for transforming messages and routing them to the appropriate destinations.

GlassFish & NetBeans - Development & Deployment

Open ESB runs on any OSGi R4-compliant runtime. GlassFish has a built-in JBI runtime and is bundled with NetBeans for easy development. NetBeans provides a comprehensive GUI development environment. The java.net community is working on a new project, GlassFish ESB, aimed at creating a community-driven ESB for the GlassFish Enterprise Server platform.

Developing the process in NetBeans involves creating the needed JBI modules and including them in a composite application. Figure 5 shows a screenshot of creating the composite application in the NetBeans IDE.

Conclusion
Open ESB provides a robust and flexible platform for building service-oriented integration solutions. Its component-based architecture allows maximum extensibility and interoperability. It's based on industry standards and is easy to use. It seamlessly integrates with other Java enterprise technologies.

References
Open JBI Components

The overall goal of Project Open JBI Components is to foster community-based development of JBI components that conform to the Java Business Integration specification (JSR208). You can join this project as a JBI component developer or as part of an existing JBI component development team.

About JBI Components

The JSR208 specification provides for three installable JBI components: Service Engines, Bindings, and Shared Libraries. JBI components operate within a JBI container, which is defined by the JSR208 specification. Two popular implementations of JBI containers are Project Open ESB and ServiceMix, an alternative approach which has been mentioned in previous posts.

Friday

ESBs' emerging roles

Two distinct use cases are emerging for Enterprise Service Buses:

  • First, many organizations are using ESBs internally as the enabling infrastructure for their Service-Oriented Architecture.
  • Second, the ESB is placed on the edge of the network in a role more traditionally occupied by B2B integration solutions.


The second use case is being adopted by organizations providing Web service interfaces to their business, and in particular by pure-play Software as a Service (SaaS) providers who realise they need to take a proactive approach to their customers' integration issues. This may even involve hosting some portions of the integration solution on behalf of the customers. An ESB deployed in a SaaS environment requires additional features over and above those of an ESB used internally. For example:

  • Multi-tenancy support is fundamental to the SaaS model, but good support for it is not available in many ESBs. Hosting integrations on behalf of customers requires that data generated as messages flow through the ESB is segmented on a per-customer basis. This applies to a whole variety of information stored in log files, activity reporting and data stored in databases. The segmentation of data in databases also applies to languages such as WS-BPEL, which are heavily dependent on the persistence of data. The SaaS provider may need to make this data available to its customers in order to provide the same level of visibility into their integration solutions as it does for its core services.
  • Scalability and Performance requirements for SaaS are typically more demanding than within the enterprise environment. The ability to deploy within a clustered environment is now a must.
  • Productivity tools are vital. It must be possible to create integration solutions quickly for customers. Within the enterprise, an integration project might have been viewed as a one-off; for a SaaS provider, such projects are now a normal part of giving each new customer access to its systems.
  • The software license model for the ESB needs to be consistent with the on-demand or per user billing model for the SaaS provider.
  • The ability to deploy client-side instances of the ESB is also important, as the SaaS provider may not want to host all the integrations. For this scenario, ease of installation and maintenance is vital, as is the ability to work within a wide variety of IT environments.
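The multi-tenancy point above can be pictured with a small sketch of per-tenant segmentation of audit data. This is purely illustrative; a real ESB would apply it in its logging and persistence layers:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of per-tenant segmentation of message data, as the
// multi-tenancy bullet above describes. All names here are illustrative.
class TenantAuditLog {
    private final Map<String, List<String>> byTenant = new HashMap<>();

    // Every message flowing through the bus is recorded under its tenant.
    void record(String tenantId, String message) {
        byTenant.computeIfAbsent(tenantId, t -> new ArrayList<>()).add(message);
    }

    // A tenant can only ever see its own slice of the data.
    List<String> forTenant(String tenantId) {
        return Collections.unmodifiableList(
                byTenant.getOrDefault(tenantId, List.of()));
    }
}
```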

Wednesday

Java Spring Integration Framework … or more on Camel

In a previous post I described the Apache Camel framework, and also a revelation of sorts, a change of view on the integration issue. I'm glad to see that in Java Spring Based Integration Framework another use of the framework came up. Nonetheless, there is a correction to be made. In that posting, it is stated that:

Apache Camel is a Spring based Integration Framework which implements the Enterprise Integration Patterns with powerful Bean Integration.

Correct, but not exactly... Camel is not a Spring-based integration framework. It can be used with Spring, but not only with Spring. As I have stated,

Camel is a Java API that allows you to do message routing very easily. It implements many of the patterns found in Enterprise Integration Patterns. It doesn't require a container and can be run in any Java-based environment. Camel has a whole bunch of components (Bruce is showing a 6 x 10 grid with a component name in each grid. In other words, there's 60 components that Camel can use. Examples include: ActiveMQ, SQL, Velocity, File and iBATIS).

and the blog continues nicely…

Camel lets you create the Enterprise Integration Patterns to implement routing and mediation rules in either a Java based Domain Specific Language (or Fluent API), via Spring based Xml Configuration files or via the Scala DSL. This means you get smart completion of routing rules in your IDE whether in your Java, Scala or XML editor.

Apache Camel uses URIs so that it can easily work directly with any kind of Transport or messaging model such as HTTP, ActiveMQ, JMS, JBI, SCA, MINA or CXF Bus API together with working with pluggable Data Format options. Apache Camel is a small library which has minimal dependencies for easy embedding in any Java application.

Apache Camel can be used as a routing and mediation engine for the following projects:

  • Apache ActiveMQ which is the most popular and powerful open source message broker
  • Apache CXF which is a smart web services suite (JAX-WS)
  • Apache MINA a networking framework
  • Apache ServiceMix which is the most popular and powerful distributed open source ESB and JBI container

Friday

SOA and the Integration Patterns

SOA is simply a way to think when designing systems. Service-oriented integration is a way to leverage investments in existing IT systems using the principles of SOA. As I have mentioned before, Apache ServiceMix is an enterprise service bus (ESB) that provides a platform for system integration utilizing reusable components in a service-oriented manner. I have also covered ServiceMix 4.0, the next generation of the ServiceMix ESB.
Among the issues in the SOA case is the integration of infrastructure 'places', new ones and, more importantly, existing ones. Bruce Snyder gave the session 'Taking Apache Camel for a Ride', where he looked into the Enterprise Integration Patterns.

“The revered Enterprise Integration Patterns (EIP) book is indispensable for handling messaging-based integration, but utilizing these patterns in your own code can be tedious, especially if you have to write the code from scratch every time. Wouldn't it be nice if you had a simple API for these patterns that makes this easier? Enter Apache Camel, a message routing and mediation engine that provides a POJO-based implementation of the EIP patterns and a wonderfully simple Domain Specific Language (DSL) for expressing message routes.”

From this point of view, a question – revelation came up:

Camel is a Java API that allows you to do message routing very easily. It implements many of the patterns found in Enterprise Integration Patterns. It doesn't require a container and can be run in any Java-based environment. Camel has a whole bunch of components (Bruce is showing a 6 x 10 grid with a component name in each grid. In other words, there's 60 components that Camel can use. Examples include: ActiveMQ, SQL, Velocity, File and iBATIS).

The revelation

Chris Richardson asks "What's left inside of ServiceMix?" Why use ServiceMix if you have Camel? ServiceMix is a container that can run standalone or inside an app server. You can run distributed ServiceMix as a federated ESB. Camel is much smaller and more lightweight and is really just a Java API. ServiceMix 4 changed from a JBI-based architecture to OSGi (based on Apache Felix). They also expect you to create your routes for ServiceMix 4 with Camel instead of XML. To process messages, you can use many different languages: BeanShell, JavaScript, Groovy, Python, PHP, Ruby, JSP EL, OGNL, SQL, XPath and XQuery.

Camel has a CamelContext that's similar to Spring's ApplicationContext. You can initialize it in Java and add your routes to it:

CamelContext context = new DefaultCamelContext();
context.addRoutes(new MyRouterBuilder());
context.start();

Or you can initialize it using XML:

    <camelContext xmlns="http://activemq.apache.org/camel/schema/spring">
        <package>com.acme.routes</package>
    </camelContext>

Camel's RouteBuilder contains a fluent API that allows you to define to/from and other criteria. At this point, Bruce is showing a number of examples using the Java API. He's showing a Content Based Router, a Message Filter, a Splitter, an Aggregator, a Message Translator, a Resequencer, a Throttler and a Delayer.
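To make the routing pattern concrete, here is a toy content-based router in plain Java. This is not Camel's API, just an illustration of the EIP that the choice()/when()/otherwise() DSL expresses:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy content-based router: the EIP that Camel's choice()/when()/otherwise()
// DSL expresses. An illustration of the pattern only, not Camel's API.
class ContentBasedRouterSketch {
    private final List<Map.Entry<Predicate<String>, String>> routes = new ArrayList<>();
    private String otherwise = "dead-letter";

    ContentBasedRouterSketch when(Predicate<String> test, String destination) {
        routes.add(Map.entry(test, destination));
        return this; // fluent, like the Camel DSL
    }

    ContentBasedRouterSketch otherwise(String destination) {
        this.otherwise = destination;
        return this;
    }

    // Route a message to the first destination whose predicate matches.
    String route(String message) {
        for (Map.Entry<Predicate<String>, String> r : routes) {
            if (r.getKey().test(message)) return r.getValue();
        }
        return otherwise;
    }
}
```

A Camel-style route then reads as `new ContentBasedRouterSketch().when(m -> m.contains("gold"), "queue:gold").otherwise("queue:standard")`.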

Bruce spent the last 10 minutes doing a demo using Eclipse, m2eclipse, the camel-maven-plugin and ActiveMQ. It's funny to see a command-line guy like Bruce say he can't live w/o m2eclipse. I guess Maven's XML isn't so great after all.

Camel is built on top of Spring and has good integration. Apparently, the Camel developers tried to get it added to Spring, but the SpringSource guys didn't want it. Coincidentally, Spring Integration was released about a year later.

Camel also allows you to use "beans" and bind them to Camel Endpoints with annotations. For example:

public class Foo {
 
    @MessageDriven (uri="activemq:cheese")
    public void onCheese(String name) {
        ...
    }
}

Other annotations include @XPath, @Header and @EndpointInject.

Camel can also be used for BAM (Business Activity Monitoring). Rather than using RouteBuilder, you can use ActivityBuilder to listen for activities and create event notifications.


Thursday

More on OSGi...

I was introduced to OSGi quite by coincidence. While searching for standardized approaches and techniques for an architecture, I started looking into it. I was quite surprised by the ease with which it provided solutions to some of the problems that we face in web application development.

One of the concepts of OSGi that really intrigued me was how it allows bundles to export services that can be consumed by other bundles without those bundles knowing anything about the exporting bundle.

OSGi takes care of this by introducing a Service Registry, where the exporting bundle registers the interfaces that it wants to expose, and any other bundle that wants to use those interfaces can simply look them up in the registry to use the implementation.
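The register-then-look-up idea can be sketched with a toy registry. This is plain Java and purely illustrative; the real API lives on org.osgi.framework.BundleContext:

```java
import java.util.HashMap;
import java.util.Map;

// Toy version of the service-registry idea described above: an exporting
// bundle registers an implementation under an interface name, and a
// consuming bundle looks it up without knowing who provided it. The real
// API lives on org.osgi.framework.BundleContext.
class ServiceRegistrySketch {
    private final Map<String, Object> services = new HashMap<>();

    // Exporting bundle: akin to context.registerService(name, impl, null)
    void registerService(String interfaceName, Object implementation) {
        services.put(interfaceName, implementation);
    }

    // Consuming bundle: akin to getServiceReference(...) + getService(...)
    @SuppressWarnings("unchecked")
    <T> T getService(Class<T> type) {
        return (T) services.get(type.getName());
    }
}
```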

The other concept of OSGi I found interesting was how it uses version management to allow different versions of the same Java class to be used within a project.

The OSGi notion is crucial in web-oriented solutions, and especially in SOA, ESB and Java-related projects. As reported, OSGi is beginning to be adopted by various tools. The ServiceMix 4 case also applies to FUSE ESB 4, which is an enterprise version of Apache ServiceMix 4. ServiceMix 4 supports OSGi but does not fully support JBI, and therefore the FUSE team recommends that developers who have been using FUSE ESB 3 or JBI continue to use FUSE ESB 3. Users of OSGi should use FUSE ESB 4. The 4.1 release of both ServiceMix and FUSE ESB will fully support both JBI and OSGi.

FUSE ESB 4 continues to provide support for widely adopted integration standards like JBI 1.0 and JMS while also ensuring support for the latest emerging standards like OSGi and JBI 2.0. The new FUSE ESB 4 provides a single platform that makes it easy for developers to implement the integration patterns they need with the programming model of their choice.

FUSE ESB 4 includes the following features not included in FUSE ESB 3:

  • Normalized message router – a standard way for components to plug in and talk to the ESB, now supports multiple programming models in addition to JBI
  • OSGi framework – a faster and standard way to create, deploy, and easily provision integration components as modules
  • JBI 1.0 and 2.0 compatibility – support for the latest version of the emerging JBI 2.0 standard and backwards compatibility with JBI 1.0 so components developed for FUSE/ServiceMix 3.x can be seamlessly deployed onto FUSE ESB 4
  • Native Spring support – enables Spring users to quickly create components using Spring XML
  • FUSE Integration Designer – graphical user interface to integrate systems using Enterprise Integration Patterns (EIPs)

Actually, the Integration Designer is the main difference from standalone ServiceMix 4. And the approach provides a hint at using open source in a more professional manner.



The Agility issue

A comment on a previous post was the inspiration for this subject. Actually, the issue described has puzzled me in the past. The subject is agility: agile in the wide sense of the term, not bound to the now-familiar notion of agile development, project management and so on.

In almost all projects related to software development, my main concern has been the ability to be as agile as possible, not only from the technological perspective but also from the managerial and marketing angles that any complete piece of work needs to cover. Meaning that, apart from the freedom to use your favourite software tools and kits (an ability which usually comes later, in a second phase), you may have to establish a prospective relationship with customers and co-operators, and sell thyself ;) . Of course, all this applies when you are not part of a big company or organisation that either has an established name and image or has the proper people for doing that.
The above approach (correct or not) was, and is, always nagging me when trying to propose work to be done, especially in past years, when open source wasn't so accepted by clients. Agility, but moreover stability and reliability, were (and are) the main issues that make clients sceptical about considering open source. Of course, budgeting and genuine agility were the real allies. And all the above applies to small-to-middle clients (which are the majority in my cases).

For the cases where you want to use or build SOA applications, you have to start by considering the parts which will make your life easier.

(a)ESB-ing


The Enterprise Service Bus (ESB), which can be defined as middleware that brings together both integration technologies and runtime services to make business services widely available for reuse, offers the best solution for meeting today's enterprise application integration challenges by providing a software infrastructure that enables SOA. However, there are currently a number of different vendors that provide ESB solutions, some of which focus purely on SOAP/HTTP and others who provide multi-protocol capabilities. Because these vendors span the horizon from big enterprise generalists (app servers), to mid-tier enterprise integration providers, all the way to smaller, ESB/integration-specific providers, there doesn't seem to be an established consensus regarding the key requirements for an ESB.

As application architects, we have often thought about what requirements would define an ESB designed specifically to cater to the needs of an agile, enterprise integration model.

... main characteristics of an Agile ESB

The main criteria we were looking for in our ESB are as follows:

1. Standards based

While standards-based support is marketed by many ESB vendors, the support is provided externally, requiring developers to work with proprietary APIs when directly interacting with internal APIs. A good example is ServiceMix, which was designed with the requirement to eliminate product API lock-in by being built from the ground up to support the Java Business Integration specification (JSR 208). Our agile ESB needs to treat JBI as a first-class citizen, but also support POJO deployment for ease of use and testing.

2. Flexible

Another characteristic of an agile ESB is the flexibility with which it can be deployed within an enterprise application integration framework: standalone, embedded in an application component, or as part of the services supported by an application server. This allows for component re-use throughout the enterprise. For example, the binding for a real-time data feed might be aggregated as a web service running within an application server, or streamed directly into a fat client on a trader's desk. An agile ESB should be able to run both types of configurations seamlessly.

To provide rapid prototyping, an agile ESB should support both scripting languages and embedded rule engines, allowing business processes to be modeled and deployed quickly.

3. Reliable

Our ESB needs to handle network outages and system failures and to be able to reroute message flows and requests to circumvent failures.

4. Breadth of Connectivity

An agile ESB must support both two-way reliable Web services and Message-Oriented Middleware, and needs to co-operate seamlessly with EIS and custom components, such as batch files.


With the above in mind, the steps described in the getting-started post will give you, within the limits the requirements and constraints define, tooling freedom and open source approaches. For example, it would be nice for an ESB to be vendor-independent and open source, in order to promote user control of the source code and direction. An added benefit is not only the zero purchase cost, but that the total cost of ownership is reduced where users actively contribute to and maintain our ESB, making a possible quick win more realisable.

Furthermore, the constraints and the business needs will provide the guidance for using other associated tools to develop and implement the necessary parts for web services, POJOs, etc. The tooling now goes one step further, to what can fit in the scheme proposed by the solution. And this is the reason that in the post comment I mentioned the separation of 'concerns', meaning an agile separation in order to extend the scope of freedom in the tooling agenda available.

Tuesday

BPEL flows and correlation

Steve Smith has posted a discussion on the basics of BPEL correlations. In this post, with his help, I present the challenge I faced when trying to apply correlations to my business process, which unfortunately for me does not align with any vendor tutorial I have seen. Imagine that :)

The portion of my business process which requires correlation involves calling out to a notification web service and supplying an unbounded set of people whom I want to notify. This notification web service then responds with a single notification identifier (notification ID) representing the notification sent. At some future point in time, the people notified will respond (or the notification system will let me know there was no response). These responses can be received in any order at any time, and are received via a web service call into my ESB. The payload of this web service call includes the notification ID and the response of the person. After receiving the response, my business process updates the person's information with the response and continues on. This is exactly what correlations are made for. Perfect!

Challenge 1: If I define correlation initialization on the notification web service invocation activity, this will persist 1 instance of the business process. But I have multiple responses coming back. After the first response is received, I have no business process instances waiting.

Solution 1a: I'll just loop around the web service invocation for each person in my set of people and make individual calls out to the notification web service. This will create multiple notification IDs and therefore persist multiple business process instances. So as responses come back, each one will correspond to its own instance. The business process would look something like this:

Problems: First, the business process will always run to completion as the response web service implementation gets invoked after the while loop (i.e. there is no receive task for the process to wait on). So, if we replace the web service implementation with a JMS receive task we can get over that hurdle. All we have to do is create another business process that implements a web service, and within the implementation it uses a JMS send activity to send the notification response to the corresponding JMS queue that this process is monitoring. The new business processes look like this:



Now the problem is that we really only have one business process instance being created. Although it seems like multiple business process instances are being persisted, each one really maps to the same instance. So, once the 1st response is received, the persisted instance is gone and other responses are left hanging.

Solution 1b: So, we remove the loop here and will need to create another business process (we are now up to 3 business processes) that sends messages with a single person to the inbound JMS queue. This will cause a new business process instance to be created for each person, so each response can now be correlated to one of these instances.

Challenge 2: How do we define the correlation set so that it actually works? Our correlation aliases were initially defined as the notification ID returning from the notification web service invocation and the JMS receive message text. Since the message text value (an XML message) does not match the notification ID (an int data type), the correlation is not working.

Solution: This was a difficult one. I went through many iterations with problems....err challenges encountered with each one. Here they are:

(1) Add an unmarshal task so that we can correlate on the resulting notification ID value. This will require using correlation on the Unmarshal activity. Since the JMS receive activity, which occurs prior to the unmarshal, causes the BPEL engine to retrieve an instance from persistence, and no correlation set is defined for this task, the engine grabs any instance. So essentially there is no real correlation going on.

(2) Try using the JMS message header correlation ID. This involved mapping the response notification ID to the correlation ID of the JMS message header prior to placing this on the response queue (in our second business process). The issue here was the correlation ID message property is defined as a string, while the notification ID is an int. So CAPS will not let you define an appropriate correlation key/correlation set that will work. The data types of each alias need to be the same, which makes sense. Also, since my business process has 2 JMS receive tasks, the BPEL engine gets confused when trying to create the instance identifier using the correlation ID of the JMS message header.

(3) Call for help. I discussed this problem with someone I know from Sun and he was able to provide a pre-release of some documentation that provided a good amount of insight into how to deal with correlations. The result of this new found knowledge was the creation of one more business process (for a total of 4) that actually had 2 JMS receive tasks. I could then create a correlation set based on the JMS message header correlationID and use correlations on each of the JMS receive tasks. The following images show 3 of the 4 business processes (the one not shown is simply a loop that places messages on the inbound JMS queue for the notification web service invocation process).

Notification Web Service Invocation
Notification Response Web Service Implementation

Combined Business Process Using Correlation


First, a message is placed on the inbound queue of the notification web service invocation process. This calls the external web service and uses the response to place a message on the correlation.queue JMS queue. Also, the notification ID that is the response from the web service invocation is set to the JMS message header correlationID property.

Recall that the Combined Business Process has a correlation set defined to be the correlationID message header property. The 2 JMS receive activities are set to use correlations. The first receive task initializes it and the second uses it.

So, once the first message is received, an instance of the Combined Business Process is created and then persisted (since it is now waiting on the second JMS receive). Now, at some future point in time, a person responds to the notification (or the service responds with "no response"). The response includes the notification ID and the response. The Notification Response Web Service Implementation maps the notification ID to the correlationID message property and places the response on the response.queue.

The second JMS receive task on the Combined Business Process listens on this queue. This causes the BPEL engine to retrieve the business process instance that corresponds to the notification ID contained in the person's response. The process instance is retrieved (along with its state), and the business process continues to completion. This implementation of correlation finally worked.
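The mechanism the engine applies here can be pictured with a toy model: waiting instances are keyed by correlation ID, and an incoming response resumes exactly the matching one. This is illustrative only, not a real BPEL engine:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the correlation mechanism described above: persisted process
// instances wait keyed by correlation ID (here, the notification ID), and
// an incoming response resumes exactly the matching instance.
class CorrelationSketch {
    private final Map<String, String> waiting = new HashMap<>(); // correlationId -> instance state

    // First JMS receive: initializes the correlation set and persists the instance.
    void initialize(String correlationId, String instanceState) {
        waiting.put(correlationId, instanceState);
    }

    // Second JMS receive: uses the correlation set to retrieve the right instance.
    String correlate(String correlationId, String response) {
        String state = waiting.remove(correlationId);
        if (state == null) return null;      // no instance waiting for this ID
        return state + " + " + response;     // the instance resumes with the response
    }
}
```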

ServiceMix 4 into the play

So, what's new in SMX4? This is the first question I asked when I heard that a new major release of ServiceMix is on the horizon. Like most people, I pulled down the latest binaries of SMX4 and started going through the documentation. However, first before jumping into SMX4 lets get a quick picture of what the ServiceMix 3.x (SMX3) technology stack looks like so we can get a point of reference.

In SMX3 the JBI 1.0 container is a runtime environment for a Normalized Message Router (NMR) and JBI components (e.g. Service Engines and Binding Components), leveraging Apache ActiveMQ as the message broker. Typically an ESB offers at least message routing (e.g. ServiceMix EIP & Apache Camel), transformation (e.g. XSLT) and connectivity (e.g. File, JMS, HTTP, etc.), as does SMX3. Apache CXF is used to provide SOAP support and XML marshaling. From an operations perspective, SMX3 provides JMX-based management for running components and other internals, and can run standalone or embedded in an application server or web container (e.g. JBoss, Tomcat, etc.). There is a lot more that SMX3 has to offer, but for this post that's a good picture. One last note on SMX3: version 3.2 and higher requires Java 5.

In SMX4 the technology stack really hasn't changed much. The key technologies are still there: Apache ActiveMQ, Apache CXF, JMX, etc. Apache Camel is now the preferred technology for routing; however, ServiceMix EIP appears to still be supported. Ok, so what has changed?

Looking at the architecture, the most noticeable change is the separation of the JBI container into a ServiceMix Kernel and a ServiceMix NMR. The ServiceMix Kernel is a lightweight OSGi-based environment deployed on Apache Felix (an OSGi R4 Services Platform implementation). The kernel and NMR are their own projects at Apache and can be downloaded and built separately. However, not to worry: if you download the FUSE 4 preview (a certified release of SMX4) from IONA, you get the kernel and NMR bundled along with the other pieces needed to run the ESB.

Why the change to OSGi?

James Strachan (ServiceMix co-founder and committer) pointed out to Rod Biresch that one of JBI's weakest areas is its cumbersome classloader model, and that this needed to be addressed.

OSGi solves this problem with advanced bundling and classloading mechanisms. Ok, that sounds good, but what is OSGi? In a nutshell, it is an environment where bundles (modules) communicate through well-defined services while hiding their internals from other bundles. OSGi also specifies how components are installed and managed: components can be started, stopped, updated and uninstalled dynamically without bringing down the entire server, which is very beneficial for high-availability systems (like an ESB). OSGi bundles are versioned, and only bundles that can collaborate are wired together in the same class space. This solves the problem of library dependencies, a.k.a. JAR hell :) There are many more benefits to OSGi, and the OSGi Alliance is a great place to start learning. With this change, ServiceMix becomes an OSGi- and JBI-based ESB.
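To make the versioning and class-space wiring concrete, here is what a bundle manifest might look like. The headers are standard OSGi R4; the bundle and package names are made up for illustration:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.audit
Bundle-Version: 1.2.0
Export-Package: com.example.audit.api;version="1.2.0"
Import-Package: org.apache.servicemix.nmr.api;version="[1.0,2.0)"
```

The framework will only wire this bundle to a provider of `org.apache.servicemix.nmr.api` whose exported version falls in the [1.0, 2.0) range, so two bundles depending on incompatible library versions can coexist in the same container without stepping on each other.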

What changed in the NMR?

The obvious change is that the NMR is now its own Apache project, ServiceMix NMR. So theoretically you could run different versions of the ServiceMix Kernel and NMR. I don't have a good use case for that, although the separation does provide flexibility. The NMR is now a set of OSGi bundles that can be deployed on any OSGi runtime, but it is mainly built on top of the ServiceMix Kernel. The NMR project is also the JBI 1.0 implementation, with plans to support JBI 2.0 in the future. There are a few things missing from the ServiceMix NMR 1.0-m1 release (from the release notes):
  • no support for JMX deployment and ant tasks
  • no support for Service Assemblies Connections
  • no support for transactions (a transaction manager and a naming context can be injected into components if they are available as OSGi services, but no transaction processing, i.e. suspend/resume, will be performed, as would be required for real support)...more on this a little later.
So depending on what you're doing, these missing capabilities may or may not hamper early adoption.

A few other interesting changes are worth mentioning. First, SMX4 is now a standalone-only runtime; previous versions supported embedding in application servers, web containers and applications. Second, flows are no longer used for message exchanges. This confused me a little; however, James clarified things. Flows are NMR flow types, the mechanism by which the NMR sends messages from one Binding Component or Service Engine to another. SMX3 has four flow types that ended up causing confusion and adding complexity for developers. This was simplified in SMX4, with a tendency toward explicit routes that exploit Camel's routing capabilities.
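As an illustration of the explicit routing style, here is what a route might look like in Camel's Spring XML DSL. The queue and stylesheet names are invented for the example, and the schema namespace varies across Camel versions:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- an explicit, visible route instead of an implicit NMR flow -->
    <from uri="jms:queue:notification.request"/>
    <to uri="xslt:transform-notification.xsl"/>
    <to uri="jms:queue:notification.response"/>
  </route>
</camelContext>
```

Because the route is spelled out, there is no guessing about which flow type the NMR will pick to move an exchange between components; the path the message takes is exactly what you wrote.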

One last change to note: SMX4 no longer performs transaction-specific processing; it will, however, pass transaction information along as it would any other message. Guillaume Nodet (Apache Project Management Committee Chair and active committer) gave some insight into the changes in transaction support, and it turns out the release notes are a little misleading. Guillaume explained that previously the NMR was considered a transactional resource, meaning the sending of a JBI exchange through the NMR could be part of the transaction (depending on whether it was sent synchronously or not). This is no longer the case with SMX4. Transactions are now conveyed along with the exchanges, whether they are sent synchronously or asynchronously. As a result, the NMR does not need to be aware of transactions, because they are now controlled by the individual components instead. The benefits of this change include:
  • Improved scalability of ServiceMix when using transactions, because all JBI exchanges are now sent asynchronously.
  • Improved interoperability, as other JBI containers tend to follow the same pattern when/if they support transactions.
Hopefully that gives you a pretty good sense of what is new in SMX4.