Tuesday

Building with Mule - Open ESB

Ok, I know that we're all up to our ears in technical acronyms, terms, buzzwords, etc. that all seem to carry some marketing spin in the hope that they will eventually work their way into industry acceptance. So it goes here too. The case at hand was an integration project: integration between a warehouse system, a CRM, and a client's custom front end for Order Management (OM) supporting the CRM. The story goes much like Rob's. After a few requirements gathering sessions with the client, we discovered that the project had a number of integration points with new and existing systems...the age-old integration challenge.
The existing systems ranged from a warehouse management system to an ASP-based controller that interfaced with the front ends, and a C-based controller for physical hardware devices (e.g. switches, barcode readers, etc.). I was involved early on in the initial discovery sessions and was quick to point out that all the integration points we were faced with (files, database tables, TCP messages, etc.) were very similar to the challenges faced while implementing an SOA, only on a much smaller scale. So, why not take advantage of the benefits an ESB provides (i.e. protocol translation, message transformation and routing) to speed development and deliver a maintainable solution? Common to both my case and Rob's, the next thought was the same: "Hence the “SOApp” concept was born. The term “SOApp” (Service Oriented Application) started as a play on the SOA acronym about 3 months ago, when I was considering writing up a post (this post) to discuss a unique way of using ESB technology."

What follows is the story common to both cases, copied from Rob's entry to illustrate the TCP messages and his Telnet approach, as an enhancement to this story-telling:

Ok, so we had a conceptual architecture centered around an ESB. The team was in agreement that we would benefit from NOT having to write custom integration code and could simply focus on developing business logic in POJOs. Mule (v. 1.4.1) was an easy choice for the ESB because we had plenty of experience using it in our SOA Lab. There were many interesting aspects of the project; however, this post focuses on how we dealt with the various integration challenges. Let's take a closer look. The diagram below illustrates what the architecture looks like, with a focus on the various system interfaces.



Warehouse Management System (WMS) – File-based Interface
The WMS generated flat files in a proprietary format on some random schedule. This data was essential to the overall operation of the entire system, so the flat files needed to be processed as soon as they appeared in a directory. A Mule file connector was configured to poll the directory, pick up a file, pass the data on to a POJO for processing and eventually update the PostgreSQL database via Hibernate. Nothing fancy here, just basic file processing.
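To make this concrete, here is a rough sketch of what such a file-handling POJO might look like. It is only illustrative: the class names, the pipe-delimited record layout and the InventoryRecord entity are my own placeholders, not the proprietary WMS format.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    // Hypothetical POJO sitting behind the Mule file inbound endpoint. Mule's file
    // connector polls the directory and (in this sketch) hands the file contents
    // over as a String payload. InventoryRecord is a hypothetical Hibernate-mapped
    // entity, not shown here.
    public class WarehouseFileService {

        private final SessionFactory sessionFactory;

        public WarehouseFileService(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public void process(String fileContents) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                // Assumed flat-file layout: one pipe-delimited record per line.
                for (String line : fileContents.split("\n")) {
                    String[] fields = line.split("\\|");
                    if (fields.length < 2) {
                        continue; // skip blank or malformed lines
                    }
                    session.saveOrUpdate(new InventoryRecord(fields[0].trim(), Integer.parseInt(fields[1].trim())));
                }
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }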

PC Controller – TCP Sockets Interface
This controller serves as a central integration point for various switches, sensors, etc. A basic use case: when a sensor was triggered, the controller would send a proprietary, length-delimited text message to a particular POJO handling that transaction in Mule. The POJO would typically query the PostgreSQL database (whilst in my case Oracle) and return the resulting data back to the controller. There was a non-functional requirement for the entire transaction to take less than 200 ms. At first, we used a Mule TCP connector to communicate (both inbound and outbound) with the controller; however, due to some limitations on the controller side (and time constraints) we had to switch the outbound communications to custom socket code. This was a case where we only benefited from Mule's connectors in one direction of communication; however, since the socket code was easy enough to write, it didn't take long to develop a workaround. Oh, and the transaction performance ended up being less than 100 ms on average.
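For the curious, the outbound workaround really was just plain socket code. A minimal sketch, assuming a 4-digit ASCII length prefix as the framing convention; the controller's real host, port and proprietary protocol are not shown in this post.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    // Minimal outbound sender used instead of the Mule TCP connector for replies
    // back to the PC controller. The length-prefix framing here is an assumption,
    // not the controller's actual protocol.
    public class ControllerSocketSender {

        private final String host;
        private final int port;

        public ControllerSocketSender(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public void send(String message) throws IOException {
            Socket socket = new Socket(host, port);
            try {
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                // Prefix the payload with its length so the controller knows where the message ends.
                out.writeBytes(String.format("%04d", message.length()));
                out.writeBytes(message);
                out.flush();
            } finally {
                socket.close();
            }
        }
    }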

Pick-To-Light Controller – JDBC Connector Interface
This was a Java-based controller that interfaced with physical devices via the PostgreSQL database. Though this is not the “ideal” method of interfacing, it was a constraint we had to deal with. What was interesting about this interface is that we configured a Mule JDBC connector to process incoming requests whenever information appeared in the database tables. The POJOs that processed the data then used our Hibernate persistence layer to respond back to the controller via database tables.
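Roughly speaking, the POJO behind the JDBC endpoint received each polled row and answered by writing to a response table that the controller watched. A hypothetical sketch, assuming the row arrives as a map of column names to values and that PickResponse is a Hibernate-mapped entity (both placeholders of mine):

    import java.util.Map;

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    // Hypothetical service behind the Mule JDBC inbound endpoint. The column names,
    // the PickResponse entity and the "OK" acknowledgement are all made up for
    // illustration; the real tables belonged to the pick-to-light controller.
    public class PickToLightService {

        private final SessionFactory sessionFactory;

        public PickToLightService(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public void onRequest(Map<String, Object> row) {
            String requestId = String.valueOf(row.get("request_id"));
            String location = String.valueOf(row.get("location"));

            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                // Real business logic would decide which lights to drive for this location;
                // here we simply write an acknowledgement row back for the controller to pick up.
                session.save(new PickResponse(requestId, location, "OK"));
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }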

Java Swing Clients – MuleClient / TCP Interface
There were two Java Swing clients that we developed for administrators and end users to interface with the system. These Swing clients used the MuleClient to simplify the TCP communications with Mule. Each POJO was considered a “service” with a unique endpoint deployed in Mule. What is interesting is that the Swing client only had to specify a single URI (e.g. tcp://localhost:) along with a payload and let the MuleClient handle the rest. There was no need to identify a specific endpoint in the URI, which made this design very flexible and maintainable. Here we leveraged the content-based routing capabilities of Mule by configuring various FilteringOutboundRouters to call internal POJOs for processing. The outbound routers inspected the payload type and routed accordingly based on configuration.
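From the Swing side, the call is essentially a one-liner once the payload object is built. A sketch against the Mule 1.x client API as I remember it; the port is a placeholder, since the actual endpoint is not shown above.

    import org.mule.extras.client.MuleClient;
    import org.mule.umo.UMOMessage;

    // Sketch of how a Swing client dispatched a request through the MuleClient.
    // Because routing is driven by the payload type on the server side, the client
    // only ever needs this one TCP endpoint.
    public class SwingClientGateway {

        // Placeholder endpoint; the real port is not given in the post.
        private static final String MULE_ENDPOINT = "tcp://localhost:12345";

        public Object sendRequest(Object payload) throws Exception {
            MuleClient client = new MuleClient();
            // Synchronous send: Mule's filtering outbound routers inspect the payload
            // type and dispatch it to the matching POJO service.
            UMOMessage reply = client.send(MULE_ENDPOINT, payload, null);
            return reply.getPayload();
        }
    }

The nice part of this design is that adding a new operation meant adding a new payload class and a new router/POJO pair in the Mule configuration; the client-side plumbing never changed.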

CLI (Command Line Interface) Java Application – Telnet & MuleClient / TCP Interface
Well, I saved the most interesting interface for last, IMHO :) One of the requirements was to have a handheld Windows CE device, capable of scanning barcodes, perform CRUD operations on the PostgreSQL database over a wireless 802.11b network. We had part of this problem solved by using the MuleClient to communicate with our POJOs deployed in Mule. Ok, so how do we run the MuleClient from a Windows CE device? Well, John Shepard (Chariot Architect) came up with a clever design using Telnet and a basic Java application to solve the problem. This was a great example of leveraging proven technology along with a creative way of configuring Unix accounts: the end user accounts were configured to start the CLI Java application after a successful login to the Unix machine, instead of a bash shell. This worked very well and performed beautifully. There was no need to install and manage client applications or worry about the performance of the handheld device; the built-in Telnet capability of the Windows CE device was all that was needed on the client. This was very cool to see in action, especially knowing that only a basic Java application was behind it.
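To give a feel for the client side of this: since the handheld's Telnet session becomes the application's stdin/stdout, the CLI boils down to a read-eval loop over scanned barcodes. A simplified, hypothetical sketch (the prompt text, quit command and endpoint are mine, and the real application obviously did more than echo a reply):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.mule.extras.client.MuleClient;
    import org.mule.umo.UMOMessage;

    // Hypothetical sketch of the CLI launched in place of the user's login shell.
    // The Telnet session on the Windows CE handheld becomes this program's
    // stdin/stdout, so scanned barcodes arrive as plain lines of text.
    public class HandheldCli {

        public static void main(String[] args) throws Exception {
            MuleClient client = new MuleClient();
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

            System.out.println("Scan a barcode (or type 'quit'):");
            String line;
            while ((line = in.readLine()) != null) {
                String barcode = line.trim();
                if (barcode.length() == 0) {
                    continue;
                }
                if ("quit".equalsIgnoreCase(barcode)) {
                    break;
                }
                // Placeholder endpoint; the real URI/port is not given in the post.
                UMOMessage reply = client.send("tcp://localhost:12345", barcode, null);
                System.out.println(String.valueOf(reply.getPayload()));
            }
        }
    }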

The purpose of this post was to give you an overview of how to use an ESB in new and creative ways aside from its original intent. In the future, I may expand on these interfaces in more detail. There were a number of other interesting aspects to the project, like using annotations and XDoclet to automatically generate the Mule configuration, that are good material for future posts. Wrapping up: we did take on some risk in using Mule in this capacity, but luckily we ran into no major problems. Besides, we always had plain Java to fall back on if Mule came up short anywhere. Could this be the beginning of a new movement in application development...the “SOApp”? Maybe, though I can already hear the snickering of my colleagues as they read the end of this post :)

------------------------------
a good approach for a first, hands-dirty go at SOA enablement ;)

8 comments:

Unknown said...

Hi, nice and illustrative reading :)

I have posted the link for this entry to the Mule user list, feel free to follow-up here: http://www.nabble.com/An-interesting-post-on-SOApp-using-Mule-td19974076.html

Andrew

GL Blog said...

Speaking of SOA applications, this is very similar to what the SOAFaces project is doing. It enables SOA (and Mule ESB) powered apps to be built directly from AJAX applications (GWT).

http://code.google.com/p/soafaces/

Dimosthenes said...

Hi, thanks for your posts. I wasn't aware of soafaces. Bear in mind, though, that almost 2 years ago GWT was (nearly) at its first steps. GWT caught my attention from the start and was among my play-around tools for a while, but back in 2006-7 it wasn't a... production selection. Moreover, for this specific project the separation of tools was forced by the nature of the project, and the use of Ajax wasn't a first priority (interoperability and bridges between systems, etc., were the must-haves).

Actually, this is a general issue with open source tools. The client must be able to see the trust behind the selections and, moreover, the value added to his investment. This is something the developer or technical provider-consultant can use to his benefit in order to use his favourite tools freely. We mustn't be cryptic or hide behind our words just to keep the freedom of using whatever we like.

Anonymous said...

Hi Dimosthenes,

I have a question about the CLI/Telnet GUI you are talking about. I'm working on a similar problem, and currently we are running a Java GUI on the handheld reader. The problem we face here is that the JVM is totally outdated and we are getting more and more problems from this constraint. So what I would be interested in is how this feature was accepted by the customers, and how the GUI looked on the handhelds. Was it a simple command line GUI, or did you do some fancy stuff with telnet?

Thanks, Mike

Dimosthenes said...

Hi Mike, actually you're referring to one of the most interesting parts of the solution. It was hidden under the interoperability umbrella.
Part of the requirements was to have a handheld Windows CE device, capable of scanning barcodes, perform CRUD operations on the PostgreSQL database over a wireless 802.11b network. We had part of this problem solved by using the MuleClient to communicate with our POJOs deployed in Mule. In my version of the solution, I used the JDBC connector, as I mentioned, to manage the database repository of existing/available parts, or of the parts' attributes. In the second case, the database held a reference to all existing product attributes regardless of availability; in the first, the Mule POJO simply counted resource availability after actions on products (from the barcode). So the CLI was simple, actually a JDBC service with no human intervention. This service fed our portal, which in turn fed the appropriate app for the handheld reader.

As described in the post, the Swing clients used the MuleClient to simplify the TCP communications with Mule. Each POJO was considered a “service” with a unique endpoint deployed in Mule. The interesting part is that the Swing client only had to specify a single URI (e.g. tcp://localhost:) along with a payload and let the MuleClient handle the rest. There was no need to identify a specific endpoint in the URI, which made the design very flexible and maintainable. We leveraged the content-based routing capabilities of Mule by configuring various FilteringOutboundRouters to call internal POJOs for processing; the outbound routers inspected the payload type and routed accordingly based on configuration.

I don't know if that helped you, but it's the best I can give you from the inputs you provided :) If not, please define your issue better, with versions on the client side (e.g. did Mule work there? which handheld details do you need, and from where do you have to grab them?)

Anonymous said...

The ESB landscape has changed since 2008; I have posted updated evaluation criteria:

http://blog.cobia.net/cobiacomm/2012/08/01/esb-comparison/

Anonymous said...
This comment has been removed by the author.
Anonymous said...

You can use the WSO2 open source ESB to achieve many complex integration scenarios. Read http://wso2.com/whitepapers/enterprise-integration-patterns-with-wso2-esb/ for some of the major Enterprise Integration Patterns.