Showing posts with label Open Source. Show all posts

Thursday

The Class Casting Exception in OSGi Environments

In previous posts, I dealt with the OSGi approach to building services, either in general or using iPOJO. These work fine if you are willing to implement a set of service bundles or a project from scratch, or to develop a solution that is separate and independent from existing IT assets. Unfortunately, this is rarely the case. Usually you either need to consume objects from existing applications or, the other way around, to consume objects created in the OSGi environment. Real life demands both. The first case is normally easy to deal with, and is in fact the direction you should take. Still, keeping in mind the SOA enablement that your project, solution, architecture, or code should comply with, there is an issue on the horizon.

Demystifying the Issue one level down

Consider the common condition: You have a non-OSGi Java Application in which an OSGi-based application is embedded. You want to consume objects created in the OSGi-based application (an OSGi Service for example).

The above setup presents a problem: when you try to consume (invoke methods on, assign to local variables, etc.) objects created inside the OSGi environment from the non-OSGi application, you get a ClassCastException.

The reason for this exception is that every class loaded inside the OSGi environment is loaded by a special class loader provided by the OSGi framework, which is different from the class loader used by the non-OSGi application. So even if the class in the non-OSGi application is exactly the same as the one in the OSGi-based application, the objects will still be treated as being of different types, since their classes were loaded by different class loaders.
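To see this effect in isolation, outside OSGi, here is a minimal, self-contained sketch (plain JDK code, not DynamicJava or OSGi API) in which two throwaway class loaders each define their own copy of the same class from its bytecode. The class javax.xml.XMLConstants is an arbitrary choice, used only because its bytecode is easy to read as a resource:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// A class loader that defines its own copy of one target class from the
// class's bytecode instead of delegating to its parent.
public class SeparateLoaderDemo extends ClassLoader {
    private final String target;

    public SeparateLoaderDemo(String target) {
        this.target = target;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (!name.equals(target)) {
            return super.loadClass(name, resolve); // everything else: normal delegation
        }
        try (InputStream in = getSystemResourceAsStream(name.replace('.', '/') + ".class")) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            byte[] bytes = out.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (Exception e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    // Load the same class once per loader and report whether the two
    // resulting Class objects are treated as unrelated types.
    public static boolean loadedTypesDiffer(String className) {
        try {
            Class<?> c1 = new SeparateLoaderDemo(className).loadClass(className);
            Class<?> c2 = new SeparateLoaderDemo(className).loadClass(className);
            return c1 != c2 && !c1.isAssignableFrom(c2);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(loadedTypesDiffer("javax.xml.XMLConstants"));
    }
}
```

Even though both copies come from identical bytecode, the JVM keys a type on (class name, defining loader), so the two Class objects are unrelated and casts between them fail.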

Getting around

If you only want to perform one or two method invocations on the object, without passing parameters of conflicting types (classes loaded by different class loaders), just invoke the methods using reflection. If you have a more complex case, use utilities that let you bridge the classes. Here comes another key resource for OSGi work: DynamicJava. You can overcome the exception above with the help of its API-Bridge, which provides exactly these capabilities; in fact, this API was created specifically to address this problem.
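The simple reflection route looks like the sketch below; Greeter and callSayHi are made-up names standing in for an object whose concrete type we cannot cast to:

```java
import java.lang.reflect.Method;

public class ReflectiveCall {

    // Stand-in for an object created in the OSGi environment; we pretend we
    // cannot cast it and can only call it by method name. Greeter is a made-up name.
    public static class Greeter {
        public String sayHi() {
            return "Hi There!";
        }
    }

    // Invoke the no-argument sayHi() without ever casting the target.
    public static String callSayHi(Object target) {
        try {
            Method m = target.getClass().getMethod("sayHi");
            return (String) m.invoke(target);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Object opaque = new Greeter();
        System.out.println(callSayHi(opaque)); // Hi There!
    }
}
```

The lookup is by method name, so no cast to the conflicting type is ever needed; this stops being pleasant as soon as you need many calls or parameters of bridged types, which is where API-Bridge comes in.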

Bridge objects act as if their type belongs to the target class loader, so when the non-OSGi application works with the bridged object, it appears to have been loaded from the same class loader. Let's see an example that clarifies how API-Bridge works.

API-Bridge

Below is a code snippet that loads the same class using two different class loaders and then bridges them using the API-Bridge provided by DynamicJava resources.

public void testApiBridge() throws Exception {
    // Class A, which implements interface AInterface, is loaded by two
    // different class loaders, so the two copies are treated as different types.
    Class<?> aClass1 = getClassLoader1().loadClass("org.test.A");
    Class<?> aClass2 = getClassLoader2().loadClass("org.test.A");
    Class<?> aInterface1 = getClassLoader1().loadClass("org.test.AInterface");
    Class<?> aInterface2 = getClassLoader2().loadClass("org.test.AInterface");

    // Interfaces loaded by class loader 1 are not
    // assignable from interfaces loaded by class loader 2.
    assertFalse(aInterface1.isAssignableFrom(aInterface2));

    Object a2 = aClass2.newInstance();

    // The object is not castable since it's an instance of a class
    // that was loaded by a different class loader.
    assertFalse(aInterface1.isInstance(a2));

    // We create an API Bridge that bridges classes (their fields,
    // method parameters, etc.) of the "org.test" package to
    // class loader 1.
    ApiBridge apiBridge = ApiBridge.getApiBridge(getClassLoader1(), "org.test");

    Object bridgedA2 = apiBridge.bridge(a2);
    // The bridged a2 object is now castable.
    assertTrue(aInterface1.isInstance(bridgedA2));
}






Although the example may not look very clear at first glance, note that only two lines of code were needed to bridge the API: we bridged an object and verified that it is now castable to the AInterface loaded by a different class loader.



While API-Bridge supports classes when they are used as implementations, superclasses, and method parameters, casting is only supported for interfaces.
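That interface-only restriction is natural if you picture a bridge as a java.lang.reflect.Proxy that implements the consumer's interface and forwards calls by name. The sketch below is not DynamicJava's actual implementation, just an illustration of the idea; Greeting and ForeignGreeter are invented names:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyBridgeDemo {

    // The interface the consumer compiled against.
    public interface Greeting {
        String sayHi();
    }

    // A "foreign" object that structurally matches Greeting but does not
    // implement it -- mimicking an object whose interface came from another loader.
    public static class ForeignGreeter {
        public String sayHi() {
            return "Hi There!";
        }
    }

    // Wrap the foreign object in a proxy that implements our Greeting interface
    // and forwards every call by method name and parameter types via reflection.
    public static Greeting bridge(Object foreign) {
        InvocationHandler handler = (proxy, method, args) -> {
            Method target = foreign.getClass()
                    .getMethod(method.getName(), method.getParameterTypes());
            return target.invoke(foreign, args);
        };
        return (Greeting) Proxy.newProxyInstance(
                Greeting.class.getClassLoader(),
                new Class<?>[] { Greeting.class },
                handler);
    }

    public static void main(String[] args) {
        Greeting greeting = bridge(new ForeignGreeter());
        System.out.println(greeting.sayHi()); // Hi There!
    }
}
```

Since Proxy can only generate classes that implement interfaces, a proxy-based bridge can make an object castable to an interface, but not to a concrete class, which matches the restriction above.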



Real World


A real-world example of this problem is a Java Web Application running in an OSGi-oblivious servlet container (Tomcat, for example) with an OSGi-based application embedded inside it. The OSGi-based application registers an OSGi Service that implements the javax.servlet.Servlet interface, and the Web Application has a delegator servlet that delegates requests to the registered OSGi Service. The problem is that the Web Application can't consume the Servlet object, since it is loaded by a class loader of the OSGi environment, which causes a ClassCastException to be thrown. To overcome this, I used the API-Bridge to bridge the Servlet API. Below is the DelegatorServlet class, which delegates requests to the OSGi Service and uses the API-Bridge to bridge the Servlet objects:





public class DelegatorServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        BundleContext bundleContext = getBundleContext();
        if (bundleContext != null) {
            ServiceReference servletServiceRef = bundleContext.getServiceReference(
                    Servlet.class.getName());
            if (servletServiceRef != null) {
                // We can't write a line like:
                //     Servlet servletObject =
                //         (Servlet) bundleContext.getService(servletServiceRef);
                // because a ClassCastException would be thrown: the Servlet
                // class in this code is loaded by the Tomcat class loader,
                // while the retrieved Servlet is loaded by the OSGi class loader.
                Object servletObject = bundleContext.getService(servletServiceRef);

                ApiBridge apiBridge = ApiBridge.getApiBridge(
                        Thread.currentThread().getContextClassLoader(),
                        "javax.servlet", "javax.servlet.http");
                Servlet servlet = (Servlet) apiBridge.bridge(servletObject);
                // Note that method parameters will be bridged too by API Bridge,
                // so they can be passed into the OSGi environment.
                servlet.service(request, response);
            } else {
                printHtml("Error: A Servlet OSGi Service was not found.",
                        response);
            }
        } else {
            printHtml("Error: Bundle Context was not found in the ServletContext.",
                    response);
        }
    }

    protected BundleContext getBundleContext() {
        // Here we retrieve the Bundle Context of the OSGi-based application
    }

    // ... other supporting code

}


Friday

Alternative on Building OSGi service: build using iPOJO

Continuing from a previous post on creating OSGi services, this post makes things more adaptable. The key to that adaptability is iPOJO, which stands for injected POJO. It is a component model from Apache built on top of OSGi. It is very flexible, extensible, and easy to adapt, and it is an attempt to remove the overhead of handling OSGi services by hand. So, as before, let's build a simple service using iPOJO and consume it.

OSGi is about modularizing your application. In this demo, we will have three projects (or three bundles) that will be deployed on Apache Felix. The needed ingredients are:

  1. Bnd tool from aQute

Problem Statement

The problem is very simple: Our service is a simple OSGi service that will display the string “Hi There!”.

Getting Started

Like any OSGi service, our service is represented as an interface:

package hi.service;

public interface HiService {
    public String sayHi();
}


This will be our first project, or first bundle. The bundle contains just the interface. Note that you will have to export the hi.service package when building the bundle. To make this task easy, use the bnd tool to create the jar; I used an Ant script to compile and package the projects. Once the bundle is ready, you will use it as a dependency library for compiling the other projects.
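For illustration, a bnd descriptor for this interface bundle might look like the fragment below. The file name and version are assumptions; only the Export-Package line is essential here:

```
# hi.service.bnd -- illustrative; only Export-Package is essential
Bundle-SymbolicName: hi.service
Bundle-Version: 1.0.0
Export-Package: hi.service
```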



 



Service Implementation





The next step is to implement our service. We will create a new project with the previous project's jar file as a dependency. The service implementation is also a POJO, and there is no mention of an OSGi service in the code. Let's have a look at our implementation:



package hi.component;

import hi.service.HiService;

public class HiComponent implements HiService {
    String message = "Hi There!";

    public String sayHi() {
        return message;
    }
}


Now we declare that we have an iPOJO component. This is done through an XML file placed alongside the project. The XML states that we have a component of the class hi.component.HiComponent. The example below is a very simple one; you can have callbacks, property settings, and so on in this XML using different elements.



<ipojo>
  <component classname="hi.component.HiComponent">
    <provides/>
  </component>
  <instance component="hi.component.HiComponent"/>
</ipojo>


We will compile this project into another bundle that will be deployed in the runtime.



 



Using the Service



This is the final project, which will consume the service we created above. The client can be a POJO or an Activator class. As with the service implementation code, we do not code against a service; instead we code as if the implementation were directly available to us, and the iPOJO framework takes care of the rest. Here is our "Hi There!" client:



package hi.client;

import hi.service.HiService;

public class HiClient {

    private HiService m_hi;

    public HiClient() {
        super();
        System.out.println("Hi There Client constructor...");
    }

    public void start() {
        System.out.println("Starting client...");
        System.out.println("Service: " + m_hi.sayHi());
    }

    public void stop() {
        System.out.println("Stopping client...");
    }
}


I have added some System.out calls so we can see how our client works. Just like the service project, we have an XML file that defines the required service, the callback methods, and so on. Have a look at the XML:



<ipojo>
  <component classname="hi.client.HiClient">
    <requires field="m_hi"/>
    <callback transition="validate" method="start"/>
    <callback transition="invalidate" method="stop"/>
  </component>
  <instance component="hi.client.HiClient"/>
</ipojo>


The XML specifies that HiClient requires m_hi in order to execute; m_hi is an instance of our service. So as long as the service is not available, the HiClient component does not get executed. The callback elements specify which methods to execute when the state of the component changes.

Once this project is compiled and packaged, we are ready to deploy our example into a runtime and see it working. When you have downloaded Felix and the iPOJO framework, you have to configure Felix to load the bundles when it starts.


Felix configuration is placed in config.properties under the conf folder. You will have to modify the entry for the felix.auto.start.1 variable. Here is how it looked after I modified it:



felix.auto.start.1= \
file:bundle/org.apache.felix.shell-1.0.1.jar \
file:bundle/org.apache.felix.shell.tui-1.0.1.jar \
file:bundle/org.apache.felix.bundlerepository-1.0.3.jar \
file:bundle/org.apache.felix.ipojo-1.0.0.jar \
file:bundle/hi.service.jar \
file:bundle/hi.component.jar \
file:bundle/hi.client.jar


I have put all my jars in the felix/bundle folder. You may place them in a different location, but you must specify the correct paths above.

We are ready now. Run your felix runtime to see the results!

The OSGi and the Services issue

A few months ago, I introduced the Open Service Gateway initiative in a post, trying to lay solid ground for the series of ServiceMix-related posts and to introduce a simple use of open source tools in SOA-related projects. Since then, we have seen wide adoption of the OSGi methodology and approach by developers and vendors (both legacy and open source). This makes it reasonable to consider OSGi a technology that will transform Java development, especially on the enterprise side.

To establish a useful starting point with OSGi, I am giving a quick example of how to build a simple service and consume it. This is quite useful in SOA development, as a step in the development tasks of the reference architecture that one should follow in an SOA-enabled solution, especially when a small team's objective is cheap (in cost and time) development of services.

You may start and work with Equinox and the Eclipse IDE. You may also use another OSGi runtime such as Apache Felix or Knopflerfish; Knopflerfish even gives you a good GUI to work with. I will not be explaining the fundamentals of OSGi technology. The goal here is to stay a little more simple and independent, so that you can work with any other IDE, approach, or tools you might prefer. So here we are concerned with a simple application (bundle). Let's start.

What's OSGi service?

In very general terms, in line with SOA reuse, a service is a... repeatable task. In business terms, any repeatable task in your business process is a service. Similarly, in an application you have generic (or even specific) tasks that are used repeatedly and can therefore be represented as services. Representing and using these tasks as services is what SOA is all about! But that is at the enterprise level. An OSGi service is the same concept applied at the JVM level.
In OSGi, a service is a plain Java object that is published to a registry. A consumer can consume a registered service through a lookup, and a service can be registered and unregistered at any point in time. Services are built using an interface-based programming model: to implement or build a service, you basically provide an implementation of an interface; to consume one, you only need the interface for the lookup, with no need to know about the implementation. The service registry is the "middle man" that helps producers and consumers get in touch with each other.

Building the StartupOSGISRV service

The first step is to create the interface, or "front end," of the service. For our service, we will have a simple interface named IStartupOSGISRV:

package org.my.service.startupservice;

public interface IStartupOSGISRV {
    public String sayHi();
}


And here is our service implementation.



package org.my.service.startupservice;

public class StartupOSGISRV implements IStartupOSGISRV {
    public String sayHi() {
        return "Hi There!";
    }
}


That's it! Our service is ready for use. But we need to inform consumers that the service is ready to serve. For this, we have to register our service with the OSGi service registry.

The OSGi framework provides standard APIs to register and unregister services with the registry. We will use the registerService method as shown below:



serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);


I am sure that for beginners this is not enough, so let's explain things a little further. To register our new service, we will build a simple bundle that calls the registerService method.



package org.my.service.startupservice;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {

    private ServiceRegistration serviceRegistration;
    private IStartupOSGISRV startupservice;

    public void start(BundleContext context) throws Exception {
        System.out.println("Starting StartupServiceBundle..");
        startupservice = new StartupOSGISRV();
        serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);
    }

    public void stop(BundleContext context) throws Exception {
        serviceRegistration.unregister();
    }

}


Our Activator class implements BundleActivator; basically, it is a simple OSGi bundle with start and stop methods. We register our service when the bundle starts up and unregister it when the bundle is stopped or uninstalled from the framework.

Now let's have a closer look at the start method. We create an instance of our service and then call the registerService method. The first argument is the service name, obtained using IStartupOSGISRV.class.getName(); it is a best practice to use this method instead of specifying the name as a string ("org.my.service.startupservice.IStartupOSGISRV"). The second argument is the instance of the service itself. The final argument is a Dictionary in which developers can pass additional properties of the service.
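As a sketch, instead of passing null you could supply a Dictionary of properties. The keys and values below are made-up examples, and actually registering still requires a live BundleContext:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class ServiceProps {
    // Build the optional third argument of registerService.
    // Both keys and values here are illustrative, not OSGi-mandated.
    public static Dictionary<String, Object> buildProps() {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("vendor", "org.my");
        props.put("service.description", "Startup demo service");
        return props;
    }
}
```

The registration call would then become context.registerService(IStartupOSGISRV.class.getName(), startupservice, ServiceProps.buildProps()), and consumers could filter on these properties during lookup.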


To unregister the service, you simply call the unregister method when the bundle stops. So now we have a running service on our OSGi runtime; next we must consume it.



 



Consuming a service



To consume a service, we first obtain a ServiceReference object from the BundleContext by calling the getServiceReference method, which takes the class name as an argument. Once we have the ServiceReference object, we use the getService method to finally get the service. We have to typecast the object returned by getService before using it.



startupserviceRef = context.getServiceReference(IStartupOSGISRV.class.getName());
IStartupOSGISRV serviceObjectStartupService = (IStartupOSGISRV) context.getService(startupserviceRef);
System.out.println("Service says: " + serviceObjectStartupService.sayHi());


Implementing the service and the consumer in the same package is easy, because the interface is available. When your service and consumer bundles are separate, there are some important points to note. OSGi lets bundles specify which packages are exported or imported; with this facility you can expose your service interface while hiding its implementation from the public. The configuration details are specified in the MANIFEST file. Have a look at our StartupService's MANIFEST file:



Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: StartupService Plug-in
Bundle-SymbolicName: StartupService
Bundle-Version: 1.0.0
Bundle-Activator: org.my.service.startupservice.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"
Export-Package: org.my.service.startupservice;uses:="org.osgi.framework"


Notice that we have exported org.my.service.startupservice package. Similarly, we import this package in our consuming bundle.

And to add a note: the code used here for consuming the service is not the best way to do it in production. It was kept very simple and easy to understand, leaving out exception handling, null-pointer checks, and ServiceListeners.

Tuesday

SOA and the real Service Integration issue

A real problem in a SOA project, whether at its beginning or when extending an already existing one, is the Web Services (WS) hell hiding in the corner. When beginning the SOA approach, especially with limited time to deliver and/or poor business analysis results, defining and architecting a solution that is compliant with your client (both in the kick-off architecture and in the way of working, i.e., good adaptation to the new environment) is likely to produce a sea of WS. On the other hand, when extending or taking over a project in an already existing SOA environment developed by someone else, there is a small daemon hiding in the corner that keeps pushing you to build more and more new WS in order to build your components.

In either case, in SOA you always face the problem of exponential growth of the WS pool. To avoid it, you have to put effort into standardizing the core architecture elements (bear in mind that in an existing environment developed by someone else, you have to understand and comply with the approach already taken) and, furthermore, into establishing an optimized integration approach for the environment's existing assets.

Having these in mind, you either have to build on some existing groundwork, or... re-invent the wheel.

Re-inventing the wheel means you must be prepared to spend much more effort on business analysis and re-engineering, and furthermore to develop all the necessary tools and bridges, both for the interoperability elements of the infrastructure and for the semantic and conceptual transformations needed to implement the physical (system) bridges.

Or using some existing assets…

Java Business Integration (JBI) is an effort focused on standardizing the core architecture elements of integration architectures. It is a specification developed under the Java Community Process (JCP) for an approach to implementing a Service Oriented Architecture (SOA). The JCP reference is JSR-208. JBI extends Java EE and Java SE with business integration service provider interfaces (SPIs). It enables the creation of a Java business integration environment for the creation of composite applications. It defines a standard runtime architecture for assembling integration components to enable a SOA in an enterprise information system.

Following the JBI road, you may gain another ally for the SOA stack you are going to need.

Open ESB is an open source integration platform based on JBI technology. It implements an Enterprise Service Bus (ESB) using JBI as the foundation, which allows easy integration of Web Services to create loosely coupled, enterprise-level integration solutions.

Open ESB Architecture

Because Open ESB is built on top of the JBI specification, it makes sense that, before diving into the architecture of Open ESB, we take a look at the JBI architecture, which is illustrated in Figure 1.

As the figure shows, JBI adopts a pluggable architecture. At the heart of a JBI runtime is a messaging infrastructure called Normalized Message Router (NMR) that is connected to a bunch of JBI components. JBI components are architectural building blocks of a JBI instance and are plugged into the NMR to interact with each other. The interaction among components carries out the functional logics of the JBI instance.

Normalized Message Router

The primary function of the NMR is to route normalized messages from one component to another. It enables and mediates the inter-component communication. When a component needs to interact with another component, the component does so by generating a normalized message and sending it to the NMR. The NMR will route the message to the destined component based on some routing rules. After the destined component gets and processes the message, it will generate a response message if required, and will send the response message to the NMR. It is the NMR's responsibility to deliver the response message back to the original component.

The NMR uses a WSDL-based messaging model to mediate the message exchanges between components. From the NMR's point-of-view, all JBI components are service providers and/or service consumers. The WSDL-based model defines operations as a message exchange between a service provider and a service consumer. A component can be a service provider, a service consumer, or both. Service consumers identify needed services by a WSDL service name rather than end-point address. This provides the necessary level of abstraction and decouples the consumer from the provider, allowing the NMR to select the appropriate service provider transparently to the consumer.
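To make the provider/consumer decoupling concrete, here is a toy sketch of name-based routing in plain Java. It is nothing like the real NMR (no normalized messages, no WSDL, no MEPs); ToyRouter and its methods are invented names, and the point is only that consumers address a service name while the router selects the provider:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy name-based router: providers register under a service name and consumers
// invoke by name, so the router (not the consumer) selects the provider.
public class ToyRouter {
    private final Map<String, Function<String, String>> providers = new HashMap<>();

    public void register(String serviceName, Function<String, String> provider) {
        providers.put(serviceName, provider);
    }

    // A Request-Response style exchange: route the message, return the provider's reply.
    public String invoke(String serviceName, String message) {
        Function<String, String> provider = providers.get(serviceName);
        if (provider == null) {
            throw new IllegalArgumentException("No provider for " + serviceName);
        }
        return provider.apply(message);
    }

    public static void main(String[] args) {
        ToyRouter router = new ToyRouter();
        router.register("EchoService", msg -> "echo:" + msg);
        System.out.println(router.invoke("EchoService", "hello")); // echo:hello
    }
}
```

Because the consumer only ever names "EchoService", the router could swap the registered provider for another implementation without the consumer noticing, which is exactly the transparency the NMR provides at a much richer level.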

An instance of an end-to-end interaction between a service consumer and a service provider is referred to as a service invocation. JBI mandates four types of interactions: One-Way, Reliable One-Way, Request-Response, and Request Optional-Response. These interactions map to the message exchange patterns (MEPs) defined by WSDL 2.0 Predefined Extensions.

The NMR also supports various levels of quality of service for message delivery, depending on application needs and the nature of the messages being delivered.

Components: Service Engine and Binding Component

A JBI component is a collection of software artifacts that provide or consume Web Services. As mentioned previously, JBI components are plugged into the NMR to interact with other components. JBI defines two types of components: Service Engines (SEs) and Binding Components (BCs).

A Service Engine is a component that provides or consumes services locally within the JBI environment. Service Engines are business logic drivers of the JBI system. An XSLT Service Engine, for example, can provide data transformation services, while a BPEL Service Engine can execute a BPEL process to orchestrate services, or enable execution of long-lived business processes. A Service Engine can be a service provider, a service consumer, or both.

A Binding Component is used to send and receive messages, via particular protocols and transports, to systems external to the JBI environment. Binding Components isolate the JBI environment from particular protocols by normalizing and de-normalizing messages from and to the protocol-specific format, allowing the JBI environment to deal only with normalized messages.

The distinction between these two types of components is more functional than technical; in fact, JBI uses only a flag to distinguish them, and the programming model and APIs of the two types are otherwise identical. By convention, however, Service Engines and Binding Components implement different kinds of functionality in JBI.

Service Unit and Service Assembly

The JBI runtime hosts components and so acts as a container for them. Components in turn act as containers for Service Units (SUs). A Service Unit is a collection of component-specific configuration artifacts to be installed on an SE or BC; one can also think of a Service Unit as a single deployment package destined for a single component. The contents of a Service Unit are completely opaque to JBI, but transparent to the component it is deployed to. An SU contains a single JBI-defined descriptor file that defines the static services provided and consumed by the Service Unit.

Service Units are often grouped into an aggregated deployment file called a Service Assembly (SA). A Service Assembly includes a composite service deployment descriptor, detailing to which component each SU contained in the SA is to be deployed. A service assembly represents a composite service.

Lifecycle Management

JBI also defines a JMX-based infrastructure for lifecycle management, environmental inspection, administration, and reconfiguration to ensure a predictable environment for reliable operations.

Components interact with JBI via two mechanisms: service provider interfaces (SPIs) and application programming interfaces (APIs). SPIs are interfaces implemented by the binding or engine; APIs are interfaces exposed to bindings or engines by the framework. The contracts between framework and component define the obligations of both sides in achieving particular functional goals within the JBI environment.

Unless you're doing component development, it's unlikely that you need to work with these SPIs and APIs directly.

Open ESB

Once we've understood the architecture of JBI, Open ESB becomes really simple. Open ESB is an implementation of the JBI specification. It extends the JBI specification by creating an ESB from multiple JBI instances. The instances are linked by a proxy-binding based on Java Message Service (JMS). This lets components in separate JBI instances interoperate in the same fashion as local ones (see Figure 2).

ESB administration is done by the Centralized Administration Server (CAS), a bus member that lets the administrator control the system directly.

Open ESB includes a variety of JBI components, such as the HTTP SOAP Binding Component, the Java EE Service Engine, and the BPEL Service Engine.

 

A Sample Service Integration Scenario

In this section, we'll examine a simple use case and illustrate a possible service integration solution using Open ESB.

Problem Description

ABC Movie Theatres is a fast-growing movie theatre chain. To better serve its customers, the management has decided to put in place a new ticket booking service, the Booking Service, which will be responsible for handling most of the ticket purchasing requests generated from various systems such as Web-based applications, ticket vending machines and points of sale (POS) at box offices.

The business logic of handling a booking request is somewhat complicated, but in a nutshell it involves the following steps:

  1. When a booking request is received, the Booking Service will first process the ticket information. This may include checking ticket availability; holding the tickets for the customer if they are available, or offering alternatives otherwise; applying any applicable promotions; and calculating the total dollar amount.
  2. The Booking Service will then charge the customer using the payment information included in the booking request.
  3. Finally, the Booking Service will send a confirmation to the customer using the contact information included in the request.
High-Level Solution Description

Considering that a significant amount of the logic needed by the new Booking Service has already been implemented in different applications over time, and that these applications have proved stable, it makes a lot of sense to leverage these existing IT assets for cost efficiency. So the architect team at ABC came up with a solution of service-enabling and consolidating the existing logic, bringing the services onto the ESB and exposing them as new composite services made available via different protocols.

To do this, they have to identify candidates for service-enabling. The company currently has a billing system, developed and used by the finance department, that has been working perfectly for years; it runs over the HTTP protocol. Another system is the notification system developed by the customer relationship department. It listens to a JMS message queue and, when it gets a new message, sends out a notification to the customer by the means specified in the message, e.g., an e-mail or a voice mail message. These two systems are the ideal candidates for billing customers and sending confirmations.

The company also has a system for processing ticket orders. This system, however, is seriously outdated and impossible to scale up to handle the ever-increasing transaction volume resulting from recent acquisitions. The architect team has decided it is time to write a replacement application, the Ticket System, which will be built using EJB technologies due to the transactional nature of the application.

The architect team also decided that the new Booking Service will be made available via the SOAP-over-HTTP protocol as well as the File protocol to support different client systems.

Solution Details

When it comes to Open ESB development, it's all about creating JBI Service Units and packaging them in a Composite Application (or Service Assembly, in JBI terms). Figure 3 illustrates the Service Units for this solution and the interactions among those units. As mentioned earlier, Service Units are deployed to their corresponding JBI components. For simplicity's sake, we'll use the terms Service Unit and Component interchangeably where it doesn't cause confusion in the particular context.

At a high level, two BCs, the Booking Service SOAP BC and the Booking Service File BC, are created to enable the Booking Service and expose it to the outside world via the SOAP and File protocol respectively, allowing different kinds of clients to consume the service. A client application can invoke the Booking Service via either of these protocols. Invocation requests generated by client applications are then routed to the Booking Process, a BPEL Service Engine. The Booking Process orchestrates the Ticket Service, the Billing Service, and the Notification Service to fulfil the request.

File and SOAP BC - The Booking Service

In Open ESB, a File BC is a JBI binding component that binds file systems to WS-I Web Services. A File BC scans a pre-configured file location for new files. If a new file is found, the component generates a Web Service call using the content of the file as the payload of the Web Service input message. Response messages are also written to the file system by the component. Many of the properties of a File BC, such as the file location, the file name, and the time interval at which the component scans the specified location for new files, can be configured.
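The polling behaviour described above can be sketched with nothing but the JDK. This is not Open ESB code; the directory layout, the `*.xml` glob, and the callback are all assumptions made for illustration.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.HashSet;
import java.util.Set;
import java.util.function.BiConsumer;

// Minimal sketch of a File BC-style poller: scan a configured directory and
// hand each newly arrived file's content to a handler (which a real binding
// component would turn into a Web Service input message).
class FilePoller {
    private final Path inbox;
    private final Set<Path> seen = new HashSet<>();

    FilePoller(Path inbox) { this.inbox = inbox; }

    /** One scan pass; invokes the handler once per newly arrived *.xml file. */
    void scanOnce(BiConsumer<Path, String> handler) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(inbox, "*.xml")) {
            for (Path file : files) {
                if (seen.add(file)) {                       // skip files seen earlier
                    handler.accept(file, Files.readString(file));
                }
            }
        }
    }
}
```

A real File BC would run such a pass on a timer (the configurable time interval) and also write response messages back to the file system.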

A SOAP BC works the same way as the File BC, only instead of scanning a file system directory, a SOAP BC accepts WS-I SOAP messages over the HTTP protocol.

A File BC and a SOAP BC are created in this solution to enable the Booking Service and expose it to external systems via these protocols. External systems that wish to consume the service do so by either sending a WS-I-compliant message, or dropping the message in the specific file location.

External requests received by the previous two BCs will be routed, by the NMR, to the Booking Process, a BPEL Service Engine that executes a BPEL business process to orchestrate services.

BPEL Service Engine - The Booking Process

A BPEL Service Engine is a JBI runtime component that provides services for executing WS-BPEL-compliant business processes. The contract between a business process and partner services is described in WSDL.

A BPEL SE can save business process data to a persistent store if configured to do so. This is required for recovering from system failure and running long-lived processes. A BPEL SE can be deployed to a clustered environment to achieve high scalability. The service engine's clustering algorithm automatically distributes processing across multiple engines. When the business process is configured for clustering, the BPEL Service Engine's failover capabilities ensure throughput of running business process instances. When business process instances encounter an engine failure, any suspended instances are picked up by all available BPEL Service Engines.

The Booking Process in this solution is a BPEL SE and is at the heart of this solution. It does some simple message transformation and, most importantly, invokes the Ticket Service, Billing Service and the Notification Service. Figure 4 shows a simplified version of the Booking Service process.
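The orchestration sketched in Figure 4 can be expressed as a WS-BPEL skeleton along the following lines. This is illustrative only: the process name, operations, variables, and partner links are invented, and the partnerLink and variable declarations are omitted.

```xml
<!-- Illustrative skeleton; names are invented and declarations omitted. -->
<process name="BookingProcess"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <receive partnerLink="BookingClient" operation="book"
             variable="bookingRequest" createInstance="yes"/>
    <invoke partnerLink="TicketService" operation="holdTickets"
            inputVariable="bookingRequest" outputVariable="ticketHold"/>
    <invoke partnerLink="BillingService" operation="charge"
            inputVariable="ticketHold" outputVariable="paymentResult"/>
    <invoke partnerLink="NotificationService" operation="sendConfirmation"
            inputVariable="bookingRequest"/>
    <reply partnerLink="BookingClient" operation="book"
           variable="bookingResponse"/>
  </sequence>
</process>
```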

Java EE Service Engine - The Ticket Service

A Java EE Service Engine brings Java EE components into the Open ESB runtime as Web Services. A Java EE Service Engine acts as a bridge between a Java EE application server and a JBI environment for Web Service providers and Web Service consumers deployed in the application server. Java EE Web components or EJB components that are packaged and deployed as Web Services on a Java EE container can be transparently exposed as service providers in the JBI environment.

In this solution, the Ticket System is implemented using EJB Session Beans and wrapped as a JAX-WS Web Service. The service is then brought into the Open ESB runtime by the Ticket Service SE.

HTTP and JMS BC - The Billing Service & the Notification Service

Let's face it - all BCs work the same way. This is the beauty of Open ESB architecture. We've looked at two BCs, the File and the SOAP BC. Similarly an HTTP BC binds the HTTP protocol to the Web Service, and a JMS BC binds the JMS protocol.

In this solution, the HTTP BC and the JMS BC are used to bring the Billing Service and the Notification Service into the Open ESB runtime respectively - as stated previously, the Billing Service runs over the HTTP protocol and the Notification Service over the JMS.

It's important to be aware that, although these components are shown connected directly in the figure, they never communicate directly with each other. Instead, components send messages to and receive messages from the NMR. The NMR is responsible for transforming messages and routing them to the appropriate destinations.

GlassFish & NetBeans - Development & Deployment

Open ESB runs on any OSGi R4-compliant runtime. GlassFish has a built-in JBI runtime and is bundled with NetBeans for easy development. NetBeans provides a comprehensive GUI development environment. The java.net community is working on a new project, GlassFish ESB, aimed at creating a community-driven ESB for the GlassFish Enterprise Server platform.

Developing the process in NetBeans involves creating the needed JBI modules and including them in a composite application. Figure 5 shows a screenshot of creating the composite application in the NetBeans IDE.

Conclusion
Open ESB provides a robust and flexible platform for building service-oriented integration solutions. Its component-based architecture allows maximum extensibility and interoperability. It's based on industry standards and is easy to use. It seamlessly integrates with other Java enterprise technologies.

References
Open JBI Components

The overall goal of Project Open JBI Components is to foster community-based development of JBI components that conform to the Java Business Integration specification (JSR208). You can join this project as a JBI component developer or as part of an existing JBI component development team.

About JBI Components

The JSR208 specification provides for three installable JBI components: Service Engines, Bindings, and Shared Libraries. JBI components operate within a JBI container, which is defined by the JSR208 specification. Two popular implementations of JBI containers are Project Open ESB and ServiceMix, an alternative approach which has been mentioned in previous posts.

SOA Open Source stacks

 

Dave Linthicum wrote a post called Open Source SOA provides some major advantages. In his post Dave stated:

When it comes to SOA, I think open source provides two major advantages:

  • First, it’s typically much less expensive than the tools and the technology that are proprietary.
  • Second, they are typically much more simplistic and easier to understand and use.

To the second point, simplicity. The open source SOA vendors seem to take a much more rudimentary approach to SOA, and their tools seem to be much easier to understand and, in some cases, use. While some people want complex, powerful tools, the reality is that most SOAs don’t need them. If you’re honest with the requirements of the project, you’ll see that good enough is, well, good enough.

Great points. I would also add another clear advantage, which I learned the hard way. On a previous enterprise-wide SOA initiative, I drank the Kool-Aid that the vendor stack was an integrated stack, simpler to deploy and manage than a stack mixing multiple vendors. What I found out is that the mega vendors (IBM, Oracle, etc.) have bought so many pure-play tools (rules engines, BPM tools, data services and MDM tools, governance tools, etc.) that the smooth integration ends when the PowerPoint decks are closed. In reality, the mega-vendor stacks are a hodgepodge of rushed acquisition and integration efforts. The important thing is that the underlying architectures of the tools within the stack are completely different, and there are very few people (if any) within the organization who understand the complete stack. In fact, we were dealing with two very different organizations for support, and they were not in sync. Eventually the entire company was consumed by another mega vendor (you can probably guess which acquisition this was) and the whole product roadmap was turned upside down.

Now let’s look at some of the well-established open source stack vendors like WSO2, MuleSource, and Red Hat. These vendors do not suffer from acquisition madness and chaos. In fact, they are all built on a consistent architecture and do offer smooth integration between the various layers of the stack. Do they have all of the features of the commercial products? No. Do they have enough features for most SOA initiatives? Definitely. Mike Kavis wrote a post on CIO.com called Tight Budgets? Try open source SOA. Here is a quick summary of the advantages he discussed:

  1. Try before you buy
  2. Lower cost of entry
  3. Cost effective support
  4. Core competency
  5. For the people by the people

So what open source options do I have, you might ask? The following picture shows the open source tools that some people prefer for their new SOA initiatives. They are using a combination of WSO2, Intalio, Drools, Liferay, and PushToTest.

This is just one example of many. You can mix and match tools from different open source communities or you could standardize on one community. Here is an example of Red Hat’s JBoss SOA stack.

And MuleSource has a well known suite of tools as well.

Many organizations are still not very comfortable with open source for mission-critical initiatives, but many of the open source myths have been debunked in the past (here, here, and here).

If there ever was a time to embrace open source, the time is now in this harsh economy. As commercial SOA vendors continue to get gobbled up by the mega vendors, it is time to seriously consider alternatives.

The above schematics and the starting point (context) for writing this post come from Mike Kavis. I would also like to thank him for letting me use his comments and graphs (although they are not all his; please refer to the Mule, JBoss, and WSO2 sites for these and extended info on these tools).

Adapting with the SOA – Mashing up, the case of Enterprise Mashups

 

The Dilemma

When defining a SOA scheme, or developing web services (of course in a SOA-compliant, or SOA-enabled, way), you try to sell or develop a reusable asset, both for yourself (for easier future development) and for your clients (in order to deliver them a dynamic, adaptable, and easy-to-extend asset). Nowadays, a lot is said about enterprise mashups. Actually, when I was working on web development projects 4-5 years ago, the idea of mashups was very helpful and often the focus of my work. Think of Microsoft SharePoint's logic, where, just by dropping in web parts, you were able to build in a 'Lego' way a portal, an intranet, or whatever web presence. Knowing the tool's abilities and the business/project requirements, the whole job was straightforward. The same holds, for example, for the Plone CMS, where just by reusing existing parts of it (which were tested and user friendly) you were able to deliver a very user-friendly CMS, making your clients very happy that they could manage some web aspects and deployments from their desk instead of paying a web developer.

In the SOA era, this "Lego" approach hides behind the promise of reuse. Companies, developers, software architects, and business people are now familiar with the bottlenecks of this notion, and the managers of SOA 'environments' arrived at the notion of SaaS. In general, Software as a Service (SaaS, typically pronounced 'sass') is a model of software deployment where an application is hosted as a service provided to customers across the Internet. By eliminating the need to install and run the application on the customer's own computer, SaaS alleviates the customer's burden of software maintenance, ongoing operation, and support. Conversely, customers relinquish control over software versions or changing requirements; moreover, the cost of using the service becomes a continuous expense rather than a single expense at the time of purchase. Using SaaS can also conceivably reduce the up-front expense of software purchases through less costly, on-demand pricing. From the software vendor's standpoint, SaaS has the attraction of providing stronger protection of its intellectual property and establishing an ongoing revenue stream. The SaaS software vendor may host the application on its own web server, or this function may be handled by a third-party application service provider (ASP). This way, end users may reduce their investment in server hardware too.

the…Philosophy

As a term, SaaS is generally associated with business software and is typically thought of as a low-cost way for businesses to obtain the same benefits of commercially licensed, internally operated software without the associated complexity and high initial cost. Consumer-oriented web-native software is generally known as Web 2.0 and not as SaaS. Many types of software are well suited to the SaaS model, where customers may have little interest or capability in software deployment, but do have substantial computing needs. Application areas such as Customer relationship management (CRM), video conferencing, human resources, IT service management, accounting, IT security, web analytics, web content management and e-mail are some of the initial markets showing SaaS success. The distinction between SaaS and earlier applications delivered over the Internet is that SaaS solutions were developed specifically to leverage web technologies such as the browser, thereby making them web-native. The data design and architecture of SaaS applications are specifically built with a 'multi-tenant' backend, thus enabling multiple customers or users to access a shared data model. This further differentiates SaaS from client/server or 'ASP' (Application Service Provider) solutions in that SaaS providers are leveraging enormous economies of scale in the deployment, management, support and through the Software Development Lifecycle.

Key characteristics

The key characteristics of SaaS software, in general, include:

  • network-based access to, and management of, commercially available software
  • activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web
  • application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics
  • centralized feature updating, which obviates the need for downloadable patches and upgrades.
  • SaaS is often used in a larger network of communicating software - either as part of a mashup or as a plugin to a platform as a service. Service oriented architecture is naturally more complex than traditional models of software deployment.

SaaS applications are generally priced on a per-user basis, sometimes with a relatively small minimum number of users and often with additional fees for extra bandwidth and storage. SaaS revenue streams to the vendor are therefore lower initially than traditional software license fees, but are also recurring, and therefore viewed as more predictable, much like maintenance fees for licensed software.
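To see why the recurring model is cheaper up front yet comparable over time, here is a back-of-the-envelope comparison with entirely made-up numbers (the prices and maintenance rate below are illustrative assumptions, not market data):

```java
// Illustrative only: perpetual license with annual maintenance versus a
// per-user monthly subscription. All figures are invented for the example.
class PricingSketch {
    /** Up-front license plus 20%-of-license annual maintenance. */
    static double licenseCost(int years) {
        double upfront = 100_000;
        double maintenanceRate = 0.20;
        return upfront + upfront * maintenanceRate * years;
    }

    /** Per-user, per-month subscription, recurring for the whole period. */
    static double saasCost(int users, double perUserMonthly, int years) {
        return users * perUserMonthly * 12 * years;
    }
}
```

With 50 users at a hypothetical $40/user/month, year one costs $24,000 versus $120,000 for the licensed product, while the subscription keeps recurring in later years, which is exactly the predictable-revenue point above.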

Ok. This is how the thing is defined. This is the encyclopaedia's bird's-eye view. Now let's apply it to a real-world case. Of the above key characteristics, the 3rd and the 5th are of great importance. The business analysis should give some hints, but there is more to it than that. The new startup that I am using here as an example is building a SaaS solution that will be consumed by various types of customers and partners. These customers and partners may want to consume our data services as an RSS feed, a gadget, an SMS message, a web page, within a portal or portlet, or in a number of different ways. I do not want to spend the rest of my life developing new output mediums for our services. Instead, I would rather spend my time adding new business services to enhance our product and service offerings, hence contributing to the bottom line.

Enterprise Mashups to the rescue

Enterprise mashups will allow me to offer my partners and customers the ultimate flexibility to access our products and services in ways that are convenient for them without having to wait on my IT shop to decide if (a) we think the request is important enough in our priority list, (b) if we have the time and resources to work on it, and (c) how much we will charge them. On the IT side of the house, with an enterprise mashup strategy in place we can be assured that whatever mashups our customers and partners create, they will be subject to the same security and governance as the services we have developed. The diagram below shows a logical view of how our SOA will be designed.

As you can see, we have clearly abstracted the various layers within the architecture and they all inherit our overall security policies. SOA governance is applied to this architectural approach to enforce our standards and design principles. Overall IT governance provides oversight over the entire enterprise which includes legacy systems (we don’t mention any legacy yet), third party software, etc.

Now let’s add the enterprise mashup layer. We want to hide the complexity of our architecture from the end user and expose data services to them to consume. At the same time we want these mashups to be equally secure as the services we write and adhere to the same governing principles. Enterprise mashup products provide tools to make managing this layer easy and efficient. The diagram below shows the enterprise mashup layer inserted into the architecture as a layer on top of SOA.

 

Enterprise Mashups in simple terms

 Deepak Alur (JackBe’s VP of Engineering) discussed how enterprises have been focusing more on infrastructure and technology and not on the consumers of data. As he coined it, many shops are “developing horizontally and not addressing the needs of the users”. He talked about how users were doing their own brute force mashups by cutting and pasting data from various places into Excel. This creates various issues within the enterprise due to lack of data integrity, security, and governance. It is ironic how corporations spend huge amounts of money on accounting software and ERP systems, yet they still run the business out of user created Excel spreadsheets! The concept of enterprise mashups addresses this by shifting the focus back to the user consumption of data. Here are some of the requirements for mashups that Deepak pointed out:

  • User driven & user focused
  • Both visual & non-visual
  • Client & server side (although most are server)
  • Plug-n-Play
  • Dynamic, Adhoc, Situational
  • Secure & Governed
  • Sharable & Customizable
  • Near zero cost to the consumer

Jackbe’s enterprise mashup tool is called Presto. Presto is an Enterprise Mashup Platform that allows consumers to create “mashlets” or virtual services. IT’s role is to provide the security and governance for each data service that will be exposed for consumer use.

Presto Wires is a user friendly tool to allow users to create their own mashups by joining, filtering, and merging various data services (as shown in the picture below).

In this example the user is combining multiple data points from many different organizations in an automated fashion. They could then present this data to multiple different user interfaces and devices. All without waiting on IT.

How this solves my Dilemma


Back to my dilemma. By leveraging a tool like JackBe's Presto or WSO2's Mashup Server, I can now present various data services in a secured and governed fashion to my customers and partners without being concerned about how they want to consume them. Whether they want the mashup on their own intranet, as a desktop gadget, as an application on Facebook, or whatever they dream of, all I need to be concerned with is the SLA of my data services. This also makes my product offering more competitive than my competitors', whose proprietary user interfaces do not provide the flexibility and customization that customers desire.

As mentioned in the title, this is adapting with the SOA. For those organizations disciplined enough to implement SOA and follow the best practices of design and governance, the reward can be a simple addition of an Enterprise Mashup Platform on top of the SOA stack. This is the ultimate flexibility and agility that SOA promises.

Wednesday

Automatically generate Java Web service clients with Axis2, XFire, CXF, and Java 6, including WSDL compatibility checks

Client-side WSDL processing with Groovy and Gant

Like it or not, service-oriented architecture (SOA) is a hot topic, and SOAP-based Web services have emerged as the most common implementation of SOA. But, as with all newcomers, "SOA reality brings SOA problems." You can mitigate these problems by creating useful Web service clients, and also by thoroughly testing your Web services on both the server side and the client side. WSDL files play a central role in both of these activities, so in this post I will extend the client auto-generation approach and the dynamic client for a web service, going down an alternative (and often quicker) path, using an extensible toolset that facilitates client-side WSDL processing with Gant and Groovy.

The real life needs-requirements emerging …

The real goal for SOA-wise Web services, as real life (and, furthermore, mature SOA standards) demands, is interoperability. Although the sensitivity and depth of the term interoperability should be well defined in each project, there should always be a cross-platform Web service testing team responsible for testing:

  1. functional aspects as well as the
  2. performance,
  3. load, and
  4. robustness

of Web services. In the ‘open sea’ of open source and legacy-related applications, with a mixture of standards and a batch of tools available for various jobs and tasks, I realized the need for a….

  • small,
  • easy-to-use,
  • command-line-based solution

for WSDL processing. I wanted the toolset to help testers and developers check and validate WSDL 1.1 files coming from different sources for compatibility with various Web service frameworks, as well as generating test stubs in Java to make actual calls. For the Java platform, that meant using Java 6 wsimport, Axis2, XFire, and CXF.

Searching for solution…

I started client-side test development with XFire, but then switched to Axis2 because of changing customer requirements in our agile project. (Axis2 was considered more popular and widespread than XFire.) I have also used ksoap2, a lightweight Web service framework aimed especially at Java ME developers. We didn't expand the toolset to use ksoap2 because it has no WSDL-to-Java code generator.

Besides being controllable via simple commands, the toolset had to be able to integrate at least the WSDL checker into an automated build environment like Ant. One solution would have been to develop everything as a set of Ant targets. But executing everything with Ant is cumbersome when tasks become more complex, and you need control structures like if-then-else or for loops.

Even using ant-contrib binds you to XML-structures that are not easy to read, although you will have more functionality available. Anyhow, in the end you might need to implement some jobs as Ant tasks.

solution’s profile, overview:

All of this is possible, of course, but I was looking for a more elegant solution. Finally, I decided to use Groovy and a smart combination of Groovy plus Ant, called Gant. The components I have developed for the resulting Toolset can be divided into two groups:

  • The Gant part is responsible for providing some "targets" for the tester's everyday work, including the WSDL-checker and a Java parser/modifier component.
  • The WSDL-checker part is implemented with Groovy, but callable inside an Ant environment (via Groovy's Ant task) as part of the daily build process

That is an overview of the programming and scripting languages I used to build the Groovy and Gant Toolset. Now let's consider the technologies in detail.

Overview of Ant, Groovy, and Gant

Apache Ant is a software tool for automating software build processes. It is similar to make but is written in the Java language, requires the Java platform, and is best suited to building Java projects. Ant is based on an XML description of targets and their dependencies. Targets include tasks that every project needs, like clean, compile, javadoc, and jar. Ant is the de-facto standard build tool for Java, although Maven is making inroads.
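The target-and-dependency model described above can be illustrated with a minimal build.xml (a sketch, not the toolset's actual build file):

```xml
<!-- Minimal illustrative build.xml: targets and their dependencies. -->
<project name="example" default="jar">
  <target name="clean">
    <delete dir="build"/>
  </target>
  <target name="compile" depends="clean">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <target name="jar" depends="compile">
    <jar destfile="build/example.jar" basedir="build/classes"/>
  </target>
</project>
```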

Groovy is an object-oriented programming and scripting language for the Java platform, with features like those of Perl, Ruby, and Python. The nice thing is that Groovy sources are dynamically compiled to Java bytecode that works seamlessly with your own Java code or third-party libraries. By means of the Groovy compiler, you can also produce bytecode for other Java projects. It is fair to say that I am biased towards Groovy compared to other scripting languages such as Perl or Ruby. While other people's preferences and experiences may be different from mine, the integration between Groovy and Java code is thorough and smooth. It was also easy, coming from Java, to get familiar with the Groovy syntax. What made Groovy especially interesting for solving my problems was its integration with Ant, via AntBuilder.

Gant (shorthand for Groovy plus Ant) is a build tool and Groovy module. It is used for scripting Ant tasks using Groovy instead of XML to specify the build logic. A Gant build specification is just a Groovy script and so -- to quote Gant author Russel Winder -- can deliver "all the power of Groovy to bear directly, something not possible with Ant scripts." While it might be considered a competitor to Ant, Gant relies on Ant tasks to actually do things. Really it is an alternative way of doing builds using Ant, but using a programming language, instead of XML, to specify the build rules. Consider using Gant if your Ant XML file is becoming too complex, or if you need the features and control structures of a scripting language that cannot be easily expressed using the Ant syntax.
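As a rough sketch of what that buys you (Gant's syntax has varied between versions, so treat the names and structure here as illustrative rather than exact), a Gant target can use ordinary Groovy loops and conditionals where plain Ant would need ant-contrib:

```groovy
// Hedged sketch of a Gant target: plain Groovy control structures driving
// Ant tasks. The directory layout and target name are invented.
target(check: 'Check each WSDL file against the enabled code generators') {
    def wsdlFiles = new File('wsdl').listFiles().findAll { it.name.endsWith('.wsdl') }
    wsdlFiles.each { wsdl ->
        if (wsdl.length() == 0) {
            ant.echo(message: "Skipping empty file ${wsdl.name}")
        } else {
            ant.echo(message: "Checking ${wsdl.name}")
        }
    }
}
```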

What you see is what you get -- toolset contents and prerequisites

All the code for the Groovy and Gant Toolset has been tested on Windows XP and Windows 2003 Server with Java 5 and Java 6, using Eclipse 3.2.2. To play with the toolset in its entirety, you will need to install Java 6, Axis2-1.3, XFire-1.2.6, CXF-2.0.2, Groovy-1.0, and Gant-0.3.1. But you can configure the toolset using a normal property file and exclude WSDL checks for frameworks you are not interested in, including Java 6's wsimport.

Installing the frameworks is simple: just extract their corresponding archive files. I assume you have either Java 5 or Java 6 running on your machine. You can use the Java 6 wsimport WSDL checker even with Java 5; you just need Java 6's rt.jar and tools.jar as an add-on in a directory of your choice. Installing Groovy and Gant usually takes a matter of minutes. If you follow my advice of placing Groovy add-ons -- including the gant-0.3.1.jar file -- in your <user.home>/.groovy/lib directory you do not even have to touch your original Groovy installation.

After the installation steps, you have to keep in mind the following artifacts that have been used or modified accordingly for my environment:

  • build.gant is the Gant script that contains Gant targets with Groovy syntax.
  • build.properties is needed to customize the Gant script and the Groovy WSDL checker.
  • A set of jar files (including the Gant module) to be placed in your <user.home>/.groovy/lib directory to enrich Groovy for the toolset.
  • An Eclipse Java project called JavaParser that is used to scan the generated Axis2 stub in order to modify it for use with HTTPS, chunking, etc. Two SSLProtocolFactory classes are included with this distribution for use with Axis2 and Java 6 (or ksoap2). They will be helpful when it comes to cipher suite handling.
  • An Eclipse Groovy project called WSDL-Checker that contains Groovy classes that check WSDL files by calling Web service framework code generators and analyzing the output files. WSDL-Checker also validates WSDL files using the CXF validator tool (if you have CXF installed and enabled). This project was created using the Eclipse-Groovy-Plugin.
  • A small Ant script that demonstrates how to call the Groovy WSDL checker from Ant as well as how to handle the checker's response.
  • A directory with sample WSDL files.
  • An Eclipse project to call a public Web service (GlobalWeather) provided as a JUnit test; it takes a generated Axis2 stub that has been modified by JavaParser (one JUnit test with Axis2 data binding "adb" and one with "xmlbeans").
  • An Eclipse workspace preferences file (Groovy_Gant.epf) to assist in setting all the needed Eclipse classpath variables and user libraries.
  • Two Groovy scripts that call public Web services by means of the Groovy SOAP extension -- just so you can see what Groovy has to offer in this area

Once you've set up your environment you'll be ready to begin familiarizing yourself with the Groovy and Gant Toolset. For those who need it, here is a quick primer on the client side of Web services, which you will need to understand in order to follow the discussion in the remainder of the article.

The client side of Web services

A WSDL file incorporates all the information that is needed to create a Web service client. In order to create the client, a Web service framework's code generator reads the WSDL file. Based on the definitions found in the WSDL file it creates a Java stub or proxy class that mimics the interface of the Web service. Depending on the switches, this stub code can become very large and include all the referenced data types as inner classes. A better option is to let the generator create a couple of classes in different packages that will be the basis for javadoc generation later on. All the resulting classes are Java source code that you can study. Nevertheless, it is generated code, and as such it has its own "look."

For functional and performance testing it might be necessary to modify the client stub. This is especially true when it comes to using HTTPS instead of HTTP, controlling the client-side debug level via a parameter (instead of an XML config file), or dealing with HTTP chunking. (HTTP 1.1 supports chunked encoding, which allows HTTP messages to be broken up into several parts. Chunking is most often used by the server for responses, but clients can also chunk large requests.)

Particularly during testing, you may decide to just say "OK" to all self-signed SSL certificates. Therefore, when using HTTPS, you may need a specific SSL socket factory that accepts all certificates in test mode. For performance tests, you might even want to control the number of opened connections, and you will want to reuse the same connection for different calls coming from the same client. This article is accompanied by a Java 5 parser and an Axis2 client stub modifier to deal with such scenarios.
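The "accept all self-signed certificates in test mode" idea boils down to installing a permissive trust manager. The following is a generic JSSE sketch, not the SSLProtocolFactory classes shipped with the toolset, and it must never be used outside a test environment:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// TEST-ONLY: builds an SSLSocketFactory that accepts any server certificate,
// including self-signed ones. Do not use in production.
class TrustAllSocketFactory {
    static SSLSocketFactory create() throws GeneralSecurityException {
        TrustManager acceptAll = new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] chain, String authType) {}
            public void checkServerTrusted(X509Certificate[] chain, String authType) {}
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        // No key managers; the permissive trust manager handles all certificates.
        ctx.init(null, new TrustManager[] { acceptAll }, new SecureRandom());
        return ctx.getSocketFactory();
    }
}
```

A stub modifier can then inject a call to such a factory wherever the generated client opens an HTTPS connection.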

Because you will get the parser/modifier as source code, you can customize it to "inject" some code for features not covered by my solution. What's more, you can use the whole code as a blueprint to write your own modifier for frameworks other than Axis2.

Gant targets for WSDL processing

A Gant script -- build.gant -- is the basis for all the features of the Groovy and Gant Toolset. Every task a user can perform is expressed as a Gant target. Using the directory that contains build.gant as your working dir, you can get an overview of all the supported targets, together with a description, by typing gant or gant -T in your command shell. The gant command refers to the "default" target you can implement; the gant -T feature (-T stands for "table of contents") comes for free as part of the Gant implementation.

Listing 1. Calling Gant from the command line
>gant

        USAGE:
        gant available (checks your build.properties settings for available frameworks)
        gant wsdls (prints all wsdl files with their target endpoints, together with wsdl file 'shortnames' to be used by other targets regex)
        gant [-D "wsdl=<wsdl_shortname_regex>"] javagen (generate Axis2 based Java code from wsdl, compile, provide javadoc, and generate necessary jar/zip files)
        gant [-D "wsdl=<wsdl_shortname_regex>"] check (check one or more wsdl files for compatibility with installed code generators & validator)
        gant [-D "wsdl=<wsdl_shortname_regex>"] validate (validate one or more wsdl files using the CXF validator tool (if CXF is installed))
        gant [-D collect] alljars (generates a directory with all Axis2 client jars, src/javadoc-ZIPs, and xsb resource files if xmlbeans is used)
        gant -D "wsdl=<wsdl_shortname_regex>" [-D "replace=<old>,<new>"] ns2p (prints a namespace-to-package mapping for a wsdl file)

        Produced listings in your 'results' directory:
                output-<tool>.txt & error-<tool>.txt with infos/errors/warnings for the code generation

The following commands are available as features of the Groovy and Gant Toolset:




  • gant available examines your build.properties file for available and enabled frameworks


  • gant wsdls examines the wsdl directory and produces a sorted list of all available WSDL files together with a "short name" for every service to be used by other commands, as well as the target endpoint(s) for the service. (This command delivers an output, as shown in Listing 2.)


  • gant [-D "wsdl=<regex>"] check uses the code generators configured by build.properties to check all WSDL files, or a regular-expression matching subset of all WSDL files, and produces a summary of error/warning messages.


  • gant [-D "wsdl=<regex>"] validate is the same as check, but instead of the code generators, only the CXF validator tool is used (if CXF is installed). It can check the WSDL file's conformance to the WSDL and SOAP schemas, as well as some WS-I Basic Profile recommendations.


  • gant [-D "wsdl=<regex>"] javagen generates Axis2 source code for all WSDL files, or a regular-expression-matched subset thereof. It then modifies the client stub, compiles all the code, generates javadoc information, and builds a jar file with the compiled bytecode.


  • gant [-D collect] alljars generates a directory containing the bytecode jars and the src and javadoc ZIP archives for all WSDL files. If the collect option is specified, it will not call javagen for every WSDL file, but will only copy jars and ZIPs to the directory defined in build.properties.


  • gant -D "wsdl=<regex>" [-D "replace=<old>,<new>"] ns2p examines a WSDL file and prints a namespace-to-package mapping, applying the replacement you provide with the optional "replace" parameter. This feature is useful if, for test purposes, you need access to application APIs that have no SOAP interface but only a business delegate (known from the Core J2EE Patterns). Unfortunately, these delegates will require other delegate jar files that contain classes with exactly the same package names as your WSDL-based, Axis2-generated code. Therefore, I improved the Axis2 code generator's package creation to help you generate "unique" package names for your SOAP parts that will not conflict with business delegate classes in your Eclipse project's classpath. In other words, you can mix business delegate service calls and SOAP-based service calls in the same Eclipse project. A sample output would look like this:



       >gant -D "wsdl=Logon" -D "replace=myProject,myProject_soap" ns2p
           ---> Provided regex matched 'Logon'
           using the following replacement: myProject=myProject_soap
           checking G:/JavaWorld/Groovy_Gant/wsdl/security/webservice/svc-logon/wsdl/Logon.wsdl
           http\://myProject.myCompany.com/runtime=com.myCompany.myProject_soap.runtime
           http\://myProject.myCompany.com/logon=com.myCompany.myProject_soap.logon
           http\://myProject.myCompany.com/remoting/soap/types/headerelements=com.myCompany.myProject_soap.remoting.soap.types.headerelements
           http\://myProject.myCompany.com/logon/generated/interf=com.myCompany.myProject_soap.logon.generated.interf

    Copy this output into a file named "ns2p.properties" and save it in the directory where "Logon.wsdl" is located. The javagen command will then take this mapping into account when generating your Axis2 client stub for the "Logon" service.





All Gant targets are available as commands for the user and can be combined like so:



gant -D "wsdl=GlobalW.+" -D collect javagen alljars


Listing 2. Sample output from the 'gant wsdls' command




G:\JavaWorld\Groovy_Gant>gant wsdls
list all wsdls

AWSECommerceService [amazon/webservice/AWSECommerceService/wsdl/AWSECommerceService.wsdl]
        [<soap:address location="http://soap.amazon.com/onca/soap?Service=AWSECommerceService"/>]

AddNumbers [handlers/webservice/svc-addNumber/wsdl/AddNumbers.wsdl]
        [<soap:address location="http://localhost:9000/handlers/AddNumbersService/AddNumbersPort" />]

Callback [callback/webservice/svc-callback/wsdl/Callback.wsdl]
        [<soap:address location="http://localhost:9005/CallbackContext/CallbackPort"/>
        <soap:address location="http://localhost:9000/SoapContext/SoapPort"/>]

CurrencyConverter [public-services/webservice/svc-lookup/wsdl/CurrencyConverter.wsdl]
        [<soap:address location="http://glkev.webs.innerhost.com/glkev_ws/Currencyws.asmx" />]

GlobalWeather [public-services/webservice/svc-lookup/wsdl/GlobalWeather.wsdl]
        [<soap:address location="http://www.webservicex.net/globalweather.asmx" />]

HelloWorld [hello-world/webservice/svc-hello-world/HelloWorld.wsdl]
        [<soap:address location="https://localhost:9001/SoapContext/SoapPort"/>]

JMSGreeterService [jms-greeter/webservice/svc-greeter/wsdl/JMSGreeterService.wsdl]
        []

Logon [security/webservice/svc-logon/wsdl/Logon.wsdl]
        [<soap:address location="http://localhost:4708/com/myCompany/myProject/logon/generated/interf/Logon"/>]

YellowPages [public-services/webservice/svc-lookup/wsdl/YellowPages.wsdl]
        [<soap:address location="http://ws.soatrader.com/delimiterbob.com/0.1/YellowPages"/>]

G:\JavaWorld\Groovy_Gant>

Gant configuration using build.properties



As with an Ant build, a Gant run is configured by a property file called build.properties, which resides in the same directory as build.gant, as shown in Listing 3.



Listing 3. Gant configuration using build.properties




# Property file to customize the Gant script for WSDL processing.

# ------------------------------------------------------------------------------
# Tell Gant which WSDL checker you have installed
# ------------------------------------------------------------------------------
axis.available=yes
xfire.available=yes
cxf.available=yes
wsimport.available=yes

# ------------------------------------------------------------------------------
# Axis2 information
# ------------------------------------------------------------------------------
axis2.install.dir=./ThirdPartyTools/axis2-1.3
axis2.lib.dir=${axis2.install.dir}/lib
axis2.version=1.3

# ------------------------------------------------------------------------------
# XFire information
# ------------------------------------------------------------------------------
xfire.install.dir=./ThirdPartyTools/xfire-1.2.6
xfire.lib.dir=${xfire.install.dir}/lib
xfire.version=1.2.6
ant.jar=./ThirdPartyTools/groovy-1.0/lib/ant-1.6.5.jar

# ------------------------------------------------------------------------------
# CXF information
# ------------------------------------------------------------------------------
cxf.install.dir=./ThirdPartyTools/apache-cxf-2.0.2-incubator
cxf.lib.dir=${cxf.install.dir}/lib
cxf.version=2.0.2

# ------------------------------------------------------------------------------
# Java6 information (only necessary if you are running Java5)
# ------------------------------------------------------------------------------
java6.lib.dir=./ThirdPartyTools/Java6/lib

# ------------------------------------------------------------------------------
# Java parser information
# ------------------------------------------------------------------------------
java-parser.install.dir=./JavaParser

# ------------------------------------------------------------------------------
# Code generation information
# ------------------------------------------------------------------------------
wsdl.root.dir=./wsdl
stubs.package.prefix=com.mycompany.myproject_axis
# if 'namespace-to-package replacement' is enabled,
# we try to provide a suitable mapping for Axis2
# but - consider as an alternative - using a 'ns2p.properties' file per wsdl file
# with the 'gant ns2p' command output as a basis
ns2p.replace=no
ns2p.namespace=myproject.mycompany.com
ns2p.package=com.mycompany.myproject_axis

# Axis2 data binding= adb|xmlbeans
axis.data.binding=adb
output.axis.file=./results/output-axis.txt
error.axis.file=./results/error-axis.txt
output.axis.codeGenerator=./GeneratedCode_axis

output.xfire.file=./results/output-xfire.txt
error.xfire.file=./results/error-xfire.txt
output.xfire.codeGenerator=./GeneratedCode_xfire

output.cxf.file=./results/output-cxf.txt
error.cxf.file=./results/error-cxf.txt
output.cxf.codeGenerator=./GeneratedCode_cxf

output.wsimport.file=./results/output-wsimport.txt
error.wsimport.file=./results/error-wsimport.txt
output.wsimport.codeGenerator=./GeneratedCode_wsimport

output.validator.file=./results/output-validator.txt
error.validator.file=./results/error-validator.txt

generated.code.dir=./GeneratedCode
client.jar.dir=./ClientJarFiles

# ------------------------------------------------------------------------------
# Javadoc information
# ------------------------------------------------------------------------------
javadoc.enabled=yes
javadoc.packageNames=com.*,tools.*,org.*,net.*
project.copyright=Copyright © 2007 - Mycompany.com

# ------------------------------------------------------------------------------
# Optional proxy information
# ------------------------------------------------------------------------------
proxy.enabled=no
proxy.host=myProxyHost
proxy.port=81

Expressing dependencies



With Gant you can express dependencies as you would with Ant, and you can call all the Ant tasks provided by Groovy's ant-1.6.5.jar (found in the <GROOVY_HOME>/lib directory). You do not have to create a Groovy AntBuilder for this purpose, because Gant does it for you.



Looking at the build.gant script excerpts in Listing 4, you can see that a Gant script is really a Groovy script.



Listing 4. Gant script (excerpt from build.gant)




import org.apache.commons.lang.StringUtils

    ...
    boolean wsdlRegexProvided = false
    boolean wsdlRegexOK = false
    def wsdlFilesMatchingRegexList = []

    def readProperties() {
        Ant.property(file: 'build.properties')
        def props = Ant.project.properties
        return props
    }
    def antProperty = readProperties()

    long startTime = System.currentTimeMillis()

    // Classpath for Java code generation tool
    def java2wsdl_classpath = Ant.path {
        fileset(dir: antProperty.'axis2.lib.dir') {
            include(name: '*.jar')
        }
    }

    def generateJavaCode = { wsdlFile, javaSrcDir ->
        def ns2pValues = ''
        if (antProperty.'ns2p.replace'.equals("yes")) {
            //TODO: consider replacing regex by XmlSlurper
            def pattern  = /^.*(?:targetNamespace=")([^"]+)"/
            def matcher = pattern.matcher('')
            def ns2pMap = [:]
            new File(wsdlFile).eachLine {
                matcher.reset(it)
                while (matcher.find()) {
                    String namespace = matcher.group(1)
                    if (!ns2pMap.containsKey(namespace)) {
                        ns2pMap.put(namespace, (namespace-'http://').replace('/', '.').replaceAll(antProperty.'ns2p.namespace', antProperty.'ns2p.package'))
                    }
                }
            }
            Iterator iter = ns2pMap.keySet().iterator()
            StringBuilder sbuf = new StringBuilder(256)
            while (iter.hasNext()) {
                String key = iter.next()
                String value = ns2pMap.get(key)
                sbuf.append(key)
                sbuf.append('=')
                sbuf.append(value)
                sbuf.append(',')
            }
            if (sbuf.length() > 0) {
                ns2pValues = sbuf.replace(sbuf.length()-1, sbuf.length(), '').toString()
            }
            else {
                println 'No namespace-to-package replacement possible: nothing matched'
            }
        }

        def wsdlFileDir = StringUtils.substringBeforeLast(wsdlFile, '/')
        def stubPackageSuffix = (wsdlFile - wsdlFileDir - '/' - '.wsdl').toLowerCase() + '.soap.stubs'
        def outputDir = javaSrcDir-'/src' // Axis2 will generate a /src dir for us
        def outFile = new File("${antProperty.'error.axis.file'}")
        outFile << wsdlFile + NL
        Ant.java(classname: 'org.apache.axis2.wsdl.WSDL2Java',
            classpath: java2wsdl_classpath,
            fork: true,
            output: "${antProperty.'output.axis.file'}",
            error: "${antProperty.'error.axis.file'}",
            append: "yes",
            resultproperty: "taskResult_$wsdlFile") {
                if (antProperty.'proxy.enabled'.equals("yes")) {
                    println "Using proxy ${antProperty.'proxy.host'}:${antProperty.'proxy.port'}"
                    jvmarg (value: "-Dhttp.proxyHost=${antProperty.'proxy.host'}")
                    jvmarg (value: "-Dhttp.proxyPort=${antProperty.'proxy.port'}")
                }
                arg (value: '-uri')
                arg (value: wsdlFile)
                arg (value: '-d')
                arg (value: "${antProperty.'axis.data.binding'}")
                arg (value: '-o')
                arg (value: outputDir)
                arg (value: '-p')
                arg (value: "${antProperty.'stubs.package.prefix'}.$stubPackageSuffix")
                arg (value: '-u')
                arg (value: '-s')
                if (new File("$wsdlFileDir/ns2p.properties").exists()) {
                    println 'using provided namespace-to-package mapping per wsdl file'
                    arg (value: '-ns2p')
                    arg (value: "$wsdlFileDir/ns2p.properties")
                }
                else if (antProperty.'ns2p.replace'.equals("yes")) {
                    println 'using provided namespace-to-package mapping for ALL wsdl files that should be processed'
                    arg (value: '-ns2p')
                    arg (value: ns2pValues)
                }
                arg (value: '-t')
        }
        print "$wsdlFile "
        if (Ant.project.properties."taskResult_$wsdlFile" != '0') {
            println '... ERROR'
        }
        else {
            println '... OK'
        }
    }

Gant specialities


You'll note that Gant targets are Groovy closures. Inside closures you can define new closures and assign them to variables, but you are not allowed to declare a method. So, the following is allowed:



target ( ) {
def Y = { }
}


But this isn't:



target ( ) {
def Y ( ) { }
}


Gant has two other characteristic features. First, every target has a name and a description, for instance:



target (alljars: 'generate a directory with all client jar/src/doc/res archives') {
    depends(javagen)
    println alljars_description
    ...
}


To access this description string, you can use the variable "<target-name>_description", which is created by Gant for your convenience.



Second, if you want to hand a WSDL regular-expression "property" specified on the command line over to the build.gant script, you can use the -D "wsdl=<regex>" idiom. The syntax is similar to the Java VM's command-line property setting, java -Dproperty=value, but on Windows you really need the space after "-D", because Gant uses the Apache Commons CLI library. You can access this property in your Gant script by using the following try/catch trick (demonstrated for the property wsdl), which is evaluated in the init target:





target (init: 'check wsdl regex match') {
        try {
            // referencing 'wsdl' in this try block checks for the existence of this variable
            // if you call Gant with '>gant -D "wsdl=<wsdl_target>"' then the variable 'wsdl'
            // will be created by Gant for you, otherwise the catch block will be executed
            def wsdlRootDir = new File(antProperty.'wsdl.root.dir')
            def wsdlList = []
            boolean atLeastOneMatchingWsdlFileFound = false
            wsdlRootDir.eachFileRecurse{
                if (it.isFile() && it.name.endsWith('.wsdl')) {
                    def wsdlFile = it.canonicalPath.replace('\\', '/')
                    def startIndex = wsdlFile.indexOf('/wsdl') + 1
                    def endIndex = wsdlFile.lastIndexOf('.')
                    def shortname = StringUtils.substringAfterLast(wsdlFile, '/')-'.wsdl'
                    if (shortname == wsdl) {
                        println "---> Provided regex matched '$shortname'"
                        wsdlFilesMatchingRegexList << wsdlFile
                        atLeastOneMatchingWsdlFileFound = true
                        return
                    }
                }
            }
            wsdlRegexProvided = true
            if (atLeastOneMatchingWsdlFileFound) {
                wsdlRegexOK = true
            }
            else {
                println "\tWarning: No wsdl file found that matches regex pattern '$wsdl'!"
                wsdlRegexOK = false
            }
        }
        catch (Exception e) {
            // no regex evaluation necessary
        }
    }


In build.gant, Groovy's syntax is used to handle foreach loops, proxy usage, and regular expressions that narrow the WSDL processing to a few files only. The WSDL checker and validator, however, are a set of pure Groovy files that have the facade WsdlChecker.main() as a central entry point. Although Groovy comes with nice GStrings and a powerful "GDK" (Groovy Development Kit), it appears that Jakarta Commons Lang's StringUtils is still a valuable add-on.
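To see what StringUtils contributes here, these plain-Java equivalents of the two calls used in build.gant (substringBeforeLast and substringAfterLast) show the boilerplate the library saves; the helper class below is illustrative, approximating the commons-lang behavior:

```java
public class PathUtils {

    // Roughly equivalent to StringUtils.substringBeforeLast(s, sep):
    // everything before the last occurrence of sep, or s itself if sep is absent
    static String substringBeforeLast(String s, String sep) {
        int i = s.lastIndexOf(sep);
        return i < 0 ? s : s.substring(0, i);
    }

    // Roughly equivalent to StringUtils.substringAfterLast(s, sep):
    // everything after the last occurrence of sep, or "" if sep is absent
    static String substringAfterLast(String s, String sep) {
        int i = s.lastIndexOf(sep);
        return i < 0 ? "" : s.substring(i + sep.length());
    }

    public static void main(String[] args) {
        String wsdl = "security/webservice/svc-logon/wsdl/Logon.wsdl";
        System.out.println(substringBeforeLast(wsdl, "/")); // the wsdl file's directory
        System.out.println(substringAfterLast(wsdl, "/"));  // the file name itself
    }
}
```

With StringUtils on the classpath the Gant script gets both one-liners, plus null-safety, for free.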



Checking WSDL files -- a Groovy task



As I mentioned in the introduction, the Groovy and Gant Toolset should help testers and developers check WSDL files coming from different sources for compatibility with a variable set of Web service frameworks, including WSDL validation. Testing for compatibility, in this context, means: call every Web service framework's WSDL-to-Java code generator and check the resulting files for exceptions, errors, and warnings. If CXF is installed and enabled in the build.properties file, then you can run CXF's validator tool, too. All these tasks are performed by pure-Groovy classes. The design is simple: every framework's code generator and its output handling is mapped to a Groovy class. Each class provides two basic methods that are called by the WsdlChecker "controller" class:




  • def checkWsdl(wsdlURI) calls the corresponding code generator for a WSDL file.


  • boolean findErrorsOrWarnings() tells the "controller" if errors or warnings are found in the generator output files. If so, it prints them to the console.



In Java you would define an interface that your concrete (code generator strategy) classes would implement (or you could use an abstract class). In Groovy, however, you don't need to declare explicit interfaces -- though you could. Instead, you create a list of "checker" objects (depending on the Web service frameworks installed and enabled in build.properties), making use of Groovy closures, as shown in Listing 5.
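For comparison, the explicit-interface version in Java might look like the sketch below. The method names mirror the two Groovy methods described above, but the class names and the controller loop are illustrative, not the toolset's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java rendering of the Groovy "checker" strategy:
// one implementation per Web service framework, driven by a controller.
interface WsdlCheckerStrategy {
    void checkWsdl(String wsdlUri);   // run the framework's code generator
    boolean findErrorsOrWarnings();   // scan its output files afterwards
}

public class CheckerController {

    private final List<WsdlCheckerStrategy> checkers = new ArrayList<>();

    void register(WsdlCheckerStrategy checker) { checkers.add(checker); }

    // Returns true if any checker reported errors or warnings
    boolean runAll(List<String> wsdlUris) {
        for (String uri : wsdlUris) {
            for (WsdlCheckerStrategy c : checkers) c.checkWsdl(uri);
        }
        boolean errors = false;
        for (WsdlCheckerStrategy c : checkers) errors |= c.findErrorsOrWarnings();
        return errors;
    }

    public static void main(String[] args) {
        CheckerController controller = new CheckerController();
        controller.register(new WsdlCheckerStrategy() {
            public void checkWsdl(String uri) { System.out.println(uri + "... dummy check done."); }
            public boolean findErrorsOrWarnings() { return false; }
        });
        System.out.println("errors=" + controller.runAll(List.of("Logon.wsdl")));
    }
}
```

The Groovy version simply skips the interface declaration: any object with matching method signatures can join the checker list (duck typing).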



Listing 5. Groovy closures in the WsdlChecker class (excerpt)




def wsdlcheck() {
   def sortedList = wsdlList.sort()
   for (wsdlLongName in sortedList) {
      def wsdlURI = wsdlLongName.suffix
      availableCheckers.each{ it.checkWsdl(wsdlURI) }
   }
   // print individual results and report overall result to Ant (errors only)
   new File('wsdl_errors.txt').write(Boolean.toString(findWsdlErrors()))
}

boolean findWsdlErrors() {
   println()
   boolean errorsFoundInAnyChecker = false
   availableCheckers.each{
      errorsFoundInAnyChecker = errorsFoundInAnyChecker | it.findErrorsOrWarnings()
   }
   return errorsFoundInAnyChecker
}



A typical output in your shell would look like this:





Logon.wsdl... Axis2 check done.
Logon.wsdl... XFire check done.
Logon.wsdl... Java 6 wsimport check done.
Logon.wsdl... CXF check done.
Logon.wsdl... CXF validator check done.

No Axis2 errors found

No XFire errors found

Java6 wsimport warnings found in:
    security/webservice/svc-logon/wsdl/Logon.wsdl:
    src-resolve: Cannot resolve the name 'impl_1:RuntimeException2' to a(n) 'type definition' component.

No Java6 wsimport errors found

No CXF errors found

CXF Validation passed


Using the Groovy-Eclipse-Plugin, you can start this WsdlChecker.groovy program just like a "normal" Java program. The plugin will create a separate Groovy entry in your Eclipse environment's Run settings.



Running the WSDL checker from inside Ant



It is also possible to run the WsdlChecker as part of an Ant target. To see how you can incorporate the Groovy classes in an Ant script, look at the following Ant build.xml, which is part of the Groovy-WSDL-Checker Eclipse project:



Listing 6. Calling the Groovy WSDL checker from inside Ant




<project name="wsdl2java check" default="check-wsdls" basedir=".">
    <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
    <taskdef name="groovy"
        classname="org.codehaus.groovy.ant.Groovy">
        <classpath location="lib/groovy-all-1.0.jar" />
    </taskdef>

    <target name="check-wsdls">
        <groovy src="src/tools/webservices/wsdl/checker/WsdlChecker.groovy">
          <classpath>
            <pathelement location="bin-groovy"/>
            <pathelement location="lib/commons-lang-2.3.jar"/>
          </classpath>
        </groovy>

        <loadfile property="wsdlErrorsFound"
                  srcfile="wsdl_errors.txt"></loadfile>
        <if>
            <istrue value="${wsdlErrorsFound}" />
            <then>
                <echo message="we found errors in wsdl files" />
            </then>
            <else>
                <echo message="NO errors found in wsdl files" />
            </else>
        </if>
    </target>

</project>


The Groovy checker is controlled by the same build.properties file as build.gant. If you want to work with the Groovy classes in the Eclipse project directly (instead of calling the checker via build.gant), you can use a regex to control which WSDL files to check or validate, together with the checker's mode parameter (which tells the tool whether to check for compatibility using the installed code generators or to validate with the CXF validator tool). Please note that I could not set any values in Ant's property hashtable that would be available to Ant afterwards, because in Groovy you are working with a copy of the table. Therefore, I chose to deliver the results via Ant's loadfile task. Working with Ant's limited condition-handling capabilities just isn't my cup of tea, so I decided to use ant-contrib with its if-then-else feature to demonstrate the checker's result evaluation in Listing 6.



Code generation and modification -- Gant and Java play together



So far, you have only seen how to generate Java source code from WSDL files and check the generated output using different Web service frameworks. I'll conclude my introduction to the Groovy and Gant Toolset with a more concrete example based on the generated code of Axis2. The corresponding Gant target in the Groovy and Gant Toolset is called javagen.



To make working with the client stub more comfortable, I'll show you how to do much more than generate code with Axis2's wsdl2java tool. You will first generate source code, but then you will modify it with a Java parser/modifier. After that you will compile it using Sun's javac (so your JAVA_HOME environment variable should point to the JDK rather than the JRE root directory), generate javadoc information, and finally produce a jar file containing the compiled code. If you're using xmlbeans, you will also get an xsb resources jar file to include in your project's classpath.



This jar file -- together with the parser/modifier bytecode delivered as japa.jar -- is the basis for client-side unit or performance tests. Code modification allows you to provide the user with a handy client stub factory that incorporates, and encapsulates, Log4J debug-level setting, HTTP/HTTPS handling (including chunking and adaption to proper cipher suite exchange), and connection control (for the underlying Jakarta httpclient).



About JavaCC



It took me some time to find a suitable Java parser. I wanted one that would be capable of handling Java 5 syntax, that was free, and that would let me plug in my source code modification feature. I chose JavaCC. You can also get the parser source, together with my modifications/add-ons, from the Resources section: see JavaParser Eclipse project.



I extended two of the original files: JavaParser.java and DumpVisitor.java. The first provides the entry point for Groovy as a main method and calls the DumpVisitor for the Axis2-generated client stub. As you may guess, the latter is based on the Gang of Four Visitor pattern. (You may have to look closely to spot the changes I made to achieve my goals, however. Anyone who has used the Visitor pattern knows that it is not easily digested!)



Additionally, I provide a package, tools.webservices.wsdl.stubutil, that incorporates all the code that is required by the modified client stub to compile and work properly. You can generate all of this as a jar file; just use the japa.jardesc file, which is part of the JavaParser Eclipse project for your (and my) convenience.



For those who want to count on Java 6 Web service support based on java.net.URL, I have included a special HttpSSLSocketFactory.java class as part of the Eclipse JavaParser project that takes care of the cipher suite handling ("accept all certificates"). You can use an instance of this class to set the default SSLSocketFactory calling, for example, javax.net.ssl.HttpsURLConnection.setDefaultSSLSocketFactory. Socket factories are used when creating sockets for secure HTTPS URL connections. Creating a client stub typically looks like this:



GlobalWeather stub = GlobalWeatherStub.createStub(ProtocolType.http, Level.INFO, Chunking.yes,
"http://www.webservicex.net/globalweather.asmx", "Connection1");



Not-so-stupid bean property settings



My Gant script, build.gant, calls the Axis2 wsdl2java code generator with the flag -s (synchronous calls only) and the hint to use the default Axis2 data binding, ADB (Axis2 Databinding Framework). This produces service-method calls with properties wrapped as Java beans. Imagine you have a lot of properties that are optional for your call, and one of the required properties tells the Web service to ignore the others. With a normal RPC you would set parameters to null (or, with xmlbeans, you could try the WSDL "nillable" attribute), but this is not possible with ADB. During serialization, every object property is tested for not being null. Therefore, you might write many lines of source code doing nothing more than setting a "default" value for every property of your bean before handing it over to your Axis2 stub. To overcome this "stupid work" I have written a small Java utility class, NOB (NullObjectBean), that demonstrates the power of reflection and Jakarta Commons -- the beanutils project in this case -- assuming that performance is not a requirement when you are setting your bean properties. Populating a bean with "default" values is now as easy as calling:



WebConferenceDTO webConferenceDTO = (WebConferenceDTO) NOB.create(new WebConferenceDTO());


As a current limitation, the NOB utility can only provide values for "beans" that have a public default (empty) constructor, that expose public final constant objects, or that have "simple" interface types as properties. So the algorithm would not work, for instance, with the XMLBeans data binding, where interfaces with factory methods are generated. (But it could be extended to do so!) Nevertheless, with this class and the Groovy and Gant toolset in place, nothing stops you from playing around with WSDL and the client-side part of Web services.



Listing 7. Helper class 'NOB' simplifies Axis2 ADB bean handling




package tools.webservices.wsdl.stubutil;

import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Iterator;

import org.apache.commons.beanutils.BeanMap;
import org.apache.commons.beanutils.ConstructorUtils;

/**
 * Provide a special kind of "Null Object Bean" (NOB) where every primitive and non-primitive bean
 * property has a default value not equal to 'null'.
 *
 * Whereas the Null Object pattern would provide neutral do-nothing behavior, we simply want to
 * abstract the handling of Axis2 ADB 'null' parameters away from the client.
 * Because this class is used in the Axis2 environment with ADB as the default data binding, it
 * checks if the bean that is handed over to our factory method implements
 * 'org.apache.axis2.databinding.ADBBean'. You can easily disable this check if you want to use this
 * class in another context, or replace the check if you need such a helper class for, e.g., XMLBeans
 * data binding.
 *
 * @author Klaus-Peter Berg (Klaus P. Berg@web.de)
 */
public class NOB {

    private static int recursionCounter = 0;

    private NOB() {
        // hide constructor; we are a helper class with static methods only
    }

    /**
     * Create a Java bean object that can be used as a "Null Object Bean" in Axis2 parameter calls
     * using ADB data binding.
     * We do our best to process property types that are interfaces or helper classes providing
     * constants, too, but abstract classes are ignored. So, every effort is made to create a useful
     * "default" Null Object bean, but I cannot guarantee that Axis2.serialize() will really be able
     * to handle it properly ;-)
     *
     * @param emptyBean
     *            the bean that should be populated with non-null property values as a default
     * @return the "populated Null Object Bean" to be cast to the bean type you actually need
     */
    @SuppressWarnings("unchecked")
    public static Object create(final Object emptyBean) {
        recursionCounter++;
        final boolean firstCall = recursionCounter == 1;
        if (firstCall
                && !org.apache.axis2.databinding.ADBBean.class.isAssignableFrom(emptyBean
                        .getClass())) {
            throw new IllegalArgumentException(
                    "'emptyBean' argument must implement 'org.apache.axis2.databinding.ADBBean'");
        }
        final BeanMap beanMap = new BeanMap(emptyBean);
        final Iterator<String> keyIterator = beanMap.keyIterator();
        while (keyIterator.hasNext()) {
            final String propertyName = keyIterator.next();
            if (beanMap.get(propertyName) == null) {
                final Class propertyType = beanMap.getType(propertyName);
                try {
                    if (propertyType.isArray()) {
                        final Class<?> componentType = propertyType.getComponentType();
                        final Object propertyArray = Array.newInstance(componentType, 1);
                        beanMap.put(propertyName, propertyArray);
                    } else {
                        final Object propertyValue = ConstructorUtils.invokeConstructor(
                                propertyType, new Object[0]);
                        beanMap.put(propertyName, create(propertyValue));
                    }
                } catch (final NoSuchMethodException e) {
                    if (propertyType.isInterface()) {
                        processInterfaceType(beanMap, propertyName, propertyType);
                    } else {
                        processHelperClassType(beanMap, propertyName, propertyType);
                    }
                } catch (final IllegalAccessException e) {
                    throw new RuntimeException(e);
                } catch (final InvocationTargetException e) {
                    throw new RuntimeException(e);
                } catch (final InstantiationException e) {
                    processInterfaceType(beanMap, propertyName, propertyType);
                }
            }
        }
        recursionCounter--;
        return beanMap.getBean();
    }

    private static void processHelperClassType(final BeanMap beanMap, final String propertyName,
            final Class propertyType) {
        // Class.newInstance() will throw an InstantiationException
        // if an attempt is made to create a new instance of the
        // class and the zero-argument constructor is not visible.
        // Therefore, we look for public final constants of the type we need,
        // because we assume that we process a helper class with constants (only)...
        final Field[] fields = propertyType.getDeclaredFields();
        for (final Field field : fields) {
            final int mod = field.getModifiers();
            final boolean acceptField = Modifier.isPublic(mod) && Modifier.isStatic(mod)
                    && Modifier.isFinal(mod) && field.getType().equals(propertyType);
            if (acceptField) {
                try {
                    beanMap.put(propertyName, field.get(null));
                    break; // we take the first constant that satisfies our needs
                } catch (final Exception e1) {
                    throw new RuntimeException(e1);
                }
            }
        }
    }

    private static void processInterfaceType(final BeanMap beanMap, final String propertyName,
            final Class propertyType) {
        if (propertyType.isInterface()) {
            final Object interfaceToImplement = java.lang.reflect.Proxy.newProxyInstance(
                    Thread.currentThread().getContextClassLoader(),
                    new Class[] { propertyType },
                    new java.lang.reflect.InvocationHandler() {
                        public Object invoke(@SuppressWarnings("unused") Object proxy,
                                @SuppressWarnings("unused") Method method,
                                @SuppressWarnings("unused") Object[] args) throws Throwable {
                            return new Object();
                        }
                    });
            beanMap.put(propertyName, interfaceToImplement);
        } else if (Modifier.isAbstract(propertyType.getModifiers())) {
            // ignore abstract classes: we cannot create an instance of them ;-)
        }
    }
}