Monday

Message Auditing in SOA

Any serious service-oriented system implementation comprises a number of services assembled into compositions. This is, in fact, a strategy for overcoming, or at least partially solving, the web services "sea" problem discussed in an earlier post. When a maze of services is called from many different clients, a formidable amount of information can end up being passed around in the form of messages. If the principles of service-oriented design are followed, these messages will be in a standardized format, leading to a variety of possibilities as to how mechanisms can be positioned to filter and persist the messages and to extract business intelligence from them in an arbitrary manner. The impact of standards-based messaging on auditing is as deep and as sweeping as its impact on the field of integration in general.

A Case for Auditing Messages in Service-Oriented Systems

Not much attention has been given to auditing messages in service-oriented ecosystems. Most modern infrastructure and service bus platforms provide simple message auditing mechanisms, for example by writing messages to a log file or to a proprietary database. Typically, these mechanisms are not optimized for performance and are recommended for diagnostic purposes only.

This traditional neglect of auditing is most likely due to the fact that auditing is usually not seen as a primary business need. Also, the requirements for auditing can be so daunting and unpredictable that they can be very difficult to comprehend and address in a consistent manner, especially across heterogeneous environments.

With the advent of service-orientation, service design principles, standardized message structures, and commercial infrastructures that directly enable service-oriented computing, it is becoming significantly easier to build a system that is generic enough to plug into a variety of platforms yet specific enough to capture only certain messages on an "as needed" basis. As a result, we can now build message auditing systems without having to creep inside of and customize the service logic.

This effectively enables us to:

  1. respond to arbitrary queries pertaining to compliance requirements
  2. gather business intelligence from arbitrary perspectives by running queries against persisted messages
  3. set up observation systems to raise alarms when certain events happen
  4. extract diagnostic information about systems to better optimize their resources
  5. create a dashboard to observe the overall state of systems in real-time

Let's now explore these requirements in more detail and then establish a high-level design and implementation.

Typical Requirements

Any reasonable service-oriented implementation can have multiple services (and even multiple versions of services) along with multiple XML schemas. The services may be implemented on more than one platform and may be called by diverse clients. The volume and size of messages are not predictable – because well-designed services are typically interoperable, reusable and composable, we cannot predict how and when new usages of existing services will emerge.

This leads to the following requirements that a message auditing system may need to address:

1. Flexibility to Handle Messages that may have Different Structures

The auditing system cannot assume any specific structure or schema for the messages. A request or response message at one service may look very different from the request or response at another service. Even different versions of the same service may have different message structures.

2. Ability to Specify Criteria to Filter Messages so that only Specific Messages are Audited

The message auditing system must support message filtering to specify which messages really need to be audited; not all messages do. An ability to filter the messages reduces the load on the auditing system and creates a meaningful message warehouse, which can be easily managed and processed to gather on-going business intelligence. The challenge in this requirement is that since the messages can be of any structure, the criteria for filtering can also vary.

3. Ability to Map Messages to the Service Instances from which they Emanated

The message auditing system must be able to support the mapping of an audited message to the instance of the service from which it originated, so that, in case more details have to be found, the log files on the server may be referenced. In the absence of this type of mapping logic, it may be difficult to gain the needed amount of insight into how messages were previously processed.

4. Ability to Support Multiple Destinations where Messages need to be Stored

The message auditing system must be able to support multiple destinations for the audited messages. Typical examples of where a message may be sent are: the database, e-mail, files and folders, queues, and other services. The database is commonly considered the single most important destination, since it is permanent and can be used to generate business and reporting intelligence. However, it is important to support other destinations for real-time reporting, for events, and for help with diagnosing problems.

5. Ability to Scale to Accommodate Increasing Load

As mentioned previously, because it is impossible to predict how a service ecosystem will grow and evolve, it is also not possible to accurately estimate future loads placed upon the message auditing system. Therefore, the system needs to be inherently and seamlessly scalable so that any required infrastructure upgrades can be added without affecting existing services.

6. Must Not be Intrusive to Service Logic

An auditing system should not impose significant change upon existing services. Services must be designed so that their autonomy is maximized while their coupling to the infrastructure is minimized. When incorporating an auditing system (be it accessed directly by service logic or made available via separate utility services), care needs to be taken so that it does not introduce negative service coupling requirements.

7. Must be Autonomous

Message auditing systems should not be dependent on services. They may depend on the standardized schemas being used to define service contract types (following the service design principle of Standardized Service Contract), but they should not have any direct dependencies on business services, their message structures, or service logic. This enables auditing services (or systems) to exist as independent parts of the enterprise with their own life cycles.

8. Must Act as an Optional Feature

Not all services need auditing, and those that do may not need auditing all the time. Message auditing logic must therefore be designed to be as "pluggable" as possible, so that it can be easily enabled and disabled without affecting services and their supporting infrastructure.

9. Must be Flexible and Extensible

The auditing requirements can be so different from one organization to another, or even from one service inventory to another in the same company, that the auditing system should not be built to address a predefined set of auditing requirements. Instead, the system must act as a platform that can be customized or extended to accommodate evolving requirements.

10. Ability to Define Arbitrary Queries for Reporting or Event Generation Purposes

Ad-hoc reporting is often ignored in traditional auditing systems. In order to support this requirement, messages must be stored in a manner that makes it possible to query them to generate both reports and events. Keeping in mind that each service operation has its own message structure, it must be possible to specify the query in an arbitrary manner. This is also because the requirements for reports and events cannot be predicted in advance; auditing a system tends to create the need for ad-hoc queries.

11. Ability to Generate Automated Reports

It has become increasingly common for systems to have to comply with regulation-based auditing requirements. With the advent of the Sarbanes-Oxley Act in particular, companies are required to provide high-quality reporting capabilities in order to respond to auditing-based compliance queries.

Solution

Several SOA infrastructure platforms simply have insufficient auditing capabilities. Building your solutions on such a foundation may force you to adopt a third-party auditing solution later. However, auditing requirements can become so varied that it simply may not be feasible to build or use one system that can address them all. Therefore, you need to place reasonable assumptions and constraints on what a given auditing system can and cannot do.

For example:

  • Service instances must be identifiable by some unique ID that can be stored as metadata along with the messages.
  • The auditing system should not be responsible for decrypting messages.
  • The auditing system may be allowed to add its own information as headers to the messages.
  • It must be possible to identify from the message the version of the service, the operation, and the schemas being used.


To limit the scope of a message auditing system, the following points must also be considered:

  • Auditing logic should not be used for routing purposes. The message processing chain should remain part of the service logic.
  • Sometimes, the word "auditing" is used to describe a business requirement for applications. For example, in a customer management application, the administrator must be able to see the history of how customer information was modified and by whom. This should not be confused with "message auditing", which is logic that is considered to be part of an agnostic system (a system that is not bound to a specific business context).
  • Auditing systems may alter messages by adding their own header information. This should not be considered as a "breakage" in the message integrity because these headers are generally used for auditing purposes only.
  • The message store should not be used for non-repudiation. This, again, is because headers may be added to the message for auditing.


Architecture and Design

There can be many different ways to build an auditing system. Here, we will explore one possible approach based on the context and requirements discussed so far.

Overall the solution can be broken down into the following parts:

  1. Service Instance Manager
  2. Message Interceptor
  3. Filter Manager
  4. Filter-Service Instance Mapper
  5. Destination Manager
  6. Filter-Destination Mapper
  7. Report Manager
  8. Report Scheduler
1. Service Instance Manager

Message auditing must be done in such a way that it is possible to track from which instance of which service a message was captured. So, it is important to have a mechanism in place that consistently and uniquely identifies each service instance. It may also be a good idea to implement a self-registration process so that when an instance of a service is started, it can register itself in a database.
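To make this concrete, here is a minimal self-registration sketch in Java. It is only an illustration: the JDBC URL and the AUDIT_SERVICE_INSTANCE table are assumptions, not features of any particular platform.

    // Hypothetical sketch: a service instance registers itself at start-up.
    // The instance ID can later be stored with audited messages so they can
    // be traced back to the right server log files.
    import java.net.InetAddress;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.UUID;

    public class ServiceInstanceRegistrar {

        public String register(String serviceName, String serviceVersion) throws Exception {
            String instanceId = UUID.randomUUID().toString();
            String host = InetAddress.getLocalHost().getHostName();
            // AUDIT_SERVICE_INSTANCE is an assumed registry table.
            try (Connection con = DriverManager.getConnection("jdbc:yourdb://audit-registry");
                 PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO AUDIT_SERVICE_INSTANCE " +
                     "(INSTANCE_ID, SERVICE_NAME, SERVICE_VERSION, HOST, STARTED_AT) " +
                     "VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)")) {
                ps.setString(1, instanceId);
                ps.setString(2, serviceName);
                ps.setString(3, serviceVersion);
                ps.setString(4, host);
                ps.executeUpdate();
            }
            return instanceId; // kept by the interceptor as message metadata
        }
    }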
2. Message Interceptor

The capturing mechanism must be pluggable so that it can be easily enabled or disabled. It has to be non-intrusive to service logic and, to enable high levels of scalability, it should be designed to simply capture messages and send them to a queue.

This module is also responsible for gathering metadata about the message (e.g. the service instance identifier used to map the messages to the service log files). All of this metadata must either be added to the messages via headers or be captured as separate fields.

The module must be able to capture request and response messages together and send them as pairs for auditing. This ensures that request and response message information will be bundled for future processing and reporting purposes.

This module should not apply filters to the messages since that may introduce time-consuming logic that could adversely affect the runtime service performance. Typically, a message capturing mechanism can be developed by implementing message handlers or interceptors.
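On a JAX-WS platform, for example, such a capturing mechanism could be sketched as a SOAP protocol handler along the following lines. This is only a sketch: AuditQueue and ServiceInstanceHolder are assumed helpers standing in for a JMS sender and for the instance registration described earlier.

    // Sketch of a non-intrusive capture mechanism as a JAX-WS SOAP handler.
    // It stashes the inbound request, waits for the outbound response, then
    // submits the pair to a queue; no filtering happens here, keeping the
    // runtime overhead low.
    import java.io.ByteArrayOutputStream;
    import java.util.Collections;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    public class AuditInterceptor implements SOAPHandler<SOAPMessageContext> {

        private static final String REQUEST_KEY = "audit.request";

        public boolean handleMessage(SOAPMessageContext ctx) {
            try {
                boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (!outbound) {
                    // Inbound request: capture it and stash it for later pairing.
                    ctx.put(REQUEST_KEY, serialize(ctx));
                    ctx.setScope(REQUEST_KEY, MessageContext.Scope.APPLICATION);
                } else {
                    // Outbound response: pair it with the stored request and enqueue.
                    String request = (String) ctx.get(REQUEST_KEY);
                    String response = serialize(ctx);
                    // AuditQueue is an assumed helper wrapping a JMS queue sender.
                    AuditQueue.submit(ServiceInstanceHolder.currentId(), request, response);
                }
            } catch (Exception e) {
                // Auditing must never break the service call itself.
            }
            return true;
        }

        private String serialize(SOAPMessageContext ctx) throws Exception {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ctx.getMessage().writeTo(out);
            return out.toString("UTF-8");
        }

        public boolean handleFault(SOAPMessageContext ctx) { return true; }
        public void close(MessageContext ctx) { }
        public Set<QName> getHeaders() { return Collections.emptySet(); }
    }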

3. Filter Manager

A filter is a simple function that takes the request and response messages and associated metadata and determines whether a given set of messages needs to be audited. Typically, a filter looks into the message content and applies a rule to find out whether certain conditions resolve to true.

Some common types of filters that can be implemented and readily used are:

  • check whether a certain string is present in or absent from the message
  • apply a given XPath query to the message that results in a Boolean response
  • apply a given XQuery query to the message that results in a Boolean response
  • implement an interface to define arbitrary criteria based on the message content

When defining a filter, it must be indicated whether to apply it to the request or the response. The definitions of these filters may be created and stored in a database or configuration file.
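A minimal sketch of such a filter contract, together with an XPath-based implementation, might look as follows. MessageMetadata is an assumed holder for the instance identifier and other captured fields.

    // Sketch of a pluggable filter contract plus one XPath-based implementation.
    import java.io.StringReader;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;

    public interface AuditFilter {
        // Returns true if the request/response pair should be audited.
        boolean matches(String request, String response, MessageMetadata metadata);
    }

    class XPathFilter implements AuditFilter {
        private final String expression;      // must evaluate to a boolean
        private final boolean applyToRequest; // apply to request or response?

        XPathFilter(String expression, boolean applyToRequest) {
            this.expression = expression;
            this.applyToRequest = applyToRequest;
        }

        public boolean matches(String request, String response, MessageMetadata metadata) {
            try {
                XPath xpath = XPathFactory.newInstance().newXPath();
                String target = applyToRequest ? request : response;
                return (Boolean) xpath.evaluate(expression,
                        new InputSource(new StringReader(target)), XPathConstants.BOOLEAN);
            } catch (Exception e) {
                return false; // a broken filter should not block processing
            }
        }
    }

A filter instance such as new XPathFilter("//userId = 'jsmith'", true) could then be registered in the database or configuration file.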

4. Filter-Service Instance Mapper

A mechanism to map the filters to service instances is needed to determine which filters to apply for a given set of messages and their metadata. This can also be done in a database or a configuration file. Note, however, that a database may be more suitable due to the need to sometimes establish many-to-many relationships.
5. Destination Manager

The destination for filtered messages represents the location they need to be sent to for storage. Possible destinations include the database, a queue, e-mail, files and folders, URLs, other services, or any implementation of a simple interface that accepts the request, response, message metadata, and filter, and is able to send them wherever they need to go. All destinations may be stored in a configuration file or a database.
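The corresponding destination contract can be kept equally small. The sketch below, including the deliberately naive file-based implementation and the metadata accessor it assumes, is purely illustrative.

    // Sketch of a destination contract; concrete implementations could wrap
    // a database, a JMS queue, e-mail, the file system, or another service.
    public interface AuditDestination {
        void send(String request, String response,
                  MessageMetadata metadata, String matchedFilterId) throws Exception;
    }

    class FileDestination implements AuditDestination {
        private final java.nio.file.Path folder;

        FileDestination(java.nio.file.Path folder) { this.folder = folder; }

        public void send(String request, String response,
                         MessageMetadata metadata, String matchedFilterId) throws Exception {
            // getMessageId() is an assumed accessor; the <audit> wrapper is a
            // naive pairing format used only for illustration.
            String name = metadata.getMessageId() + "-" + matchedFilterId + ".xml";
            String packet = "<audit>" + request + response + "</audit>";
            java.nio.file.Files.write(folder.resolve(name), packet.getBytes("UTF-8"));
        }
    }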
6. Filter-Destination Mapper

For each filter, it needs to be specified where the matching messages must be sent. This can be achieved by specifying a list of destinations for each registered filter.
7. Report Manager

The database is a mandatory destination if it is important to run reports on audited messages. Defining a report at a basic level requires setting up a query that selects only the relevant messages, along with some way to render those messages. A simple yet powerful solution is to store the messages in XML-type columns and to specify queries in terms of XPath or XQuery. As mentioned previously, the messages may be based on an arbitrary structure, and rendering formats can therefore be specified with technologies like XSLT.
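Assuming a database with SQL/XML support (the exact syntax varies considerably between vendors), a report query might be sketched roughly like this. The AUDIT_MESSAGE table and PAYLOAD column are hypothetical names.

    // Rough sketch: select audited messages whose XML payload satisfies an
    // XPath predicate. The predicate is concatenated into the SQL purely for
    // brevity; real code would validate it first.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReportRunner {

        public void run(Connection con, String xpathPredicate) throws Exception {
            String sql =
                "SELECT INSTANCE_ID, CAPTURED_AT, PAYLOAD FROM AUDIT_MESSAGE " +
                "WHERE XMLEXISTS('" + xpathPredicate + "' PASSING PAYLOAD)";
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    render(rs.getString("PAYLOAD"));
                }
            }
        }

        private void render(String payloadXml) {
            // In a real system this would feed an XSLT transformation;
            // here we simply print the raw message.
            System.out.println(payloadXml);
        }
    }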
8. Report Scheduler

This is a regular scheduling mechanism that runs reports based on the provided schedule and report definition.

Putting It All Together

Figure 1 identifies some of the interfaces that will enable the creation of the previously listed solution parts. For the sake of brevity, only important interfaces are listed.


Figure 1


Once all these components are in place, we can study how the system will work at runtime and how the original requirements can be met. At service start-up, each service instance must register itself with a central registry, as displayed in Figure 2.


Figure 2: At service startup, the service instance object is created and registered

The following steps demonstrate how the auditing system would work at runtime:

  1. A service client sends a message to a service instance.
  2. The auditing interceptor captures the service instance identity, request, and its metadata, and stores this information locally.
  3. The service logic processes the request and constructs the response.
  4. The auditing interceptor captures the response.
  5. The auditing interceptor combines the request, response, metadata, and service instance identity, and sends this combined packet to a queue.
  6. The response is returned to the client.

These steps are illustrated in Figure 3.


Figure 3: This figure shows how the audit interceptor captures the request and response and simply submits them to a queue. This makes the auditing system less intrusive and more scalable. Note that filters are not applied at this point, to reduce the additional processing.


It is evident that the auditing interceptor acts in a non-intrusive and scalable manner. It does not depend on message structures or individual service instances.

Let's now see how the messages can be processed by the queue listeners:

1. The queue listener picks up the message packet that holds the request, response, service instance identifier, and other message metadata.

2. The listener finds the mapped filters for the given service instance.

3. The filters are applied to the messages and, as a result, each filter indicates whether the message was a match or not. It is possible that many filters are provided for a given instance of the service and more than one may indicate a match.

4. If no filter is specified or no filter indicates that the message must be audited, the message is simply discarded.

5. For every filter that indicates that the given set of messages must be audited, it is determined what the destinations for that filter are.

6. The message packet is then sent to every destination mapped for the given filter.


These steps are further illustrated in Figure 4.


Figure 4: This diagram shows how the messages are actually audited by the queue listener. First, for a given service instance, the registered filters are looked up; then the filters are applied to the messages to check whether they need to be audited. If so, the destinations are looked up for the given filter and the messages are sent to all destinations associated with each filter.

The sequence of processing steps shown in Figure 4 represents a flexible processing model that allows customizable behaviour for each service instance and filter. It is also extensible, since new filters may be added to a service instance and new destinations may be provided for each filter.
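Pulling the earlier sketches together, the listener's core loop might look like the following. AuditPacket, FilterRegistry, and DestinationRegistry are assumed helpers backed by the queue and by the mapping tables or configuration files discussed above.

    // Sketch of the queue listener logic from Figure 4: look up the filters
    // registered for the originating service instance, apply them, and fan
    // the packet out to the destinations mapped to each matching filter.
    public class AuditQueueListener {

        public void onMessage(AuditPacket packet) {
            for (AuditFilter filter : FilterRegistry.filtersFor(packet.getInstanceId())) {
                if (filter.matches(packet.getRequest(), packet.getResponse(),
                                   packet.getMetadata())) {
                    for (AuditDestination dest : DestinationRegistry.destinationsFor(filter)) {
                        try {
                            dest.send(packet.getRequest(), packet.getResponse(),
                                      packet.getMetadata(), FilterRegistry.idOf(filter));
                        } catch (Exception e) {
                            // A failing destination should not stop the others.
                        }
                    }
                }
            }
            // If no filter matched, the packet is simply discarded.
        }
    }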

Once reports have been defined using XPath or XQuery, the queries can be run against the messages. Since the message structure can be arbitrary, it is important to save the messages in XML-type columns.

Let's now consider some sample cases and see how the auditing system can support these requirements:

1. You want to track all activities of a user

It is common to want to track a user's activity (e.g. for the purpose of diagnosing a problem). Assuming that each message has a user ID embedded in it, you can create an XPath filter and add it to all service instances. If you want to quickly diagnose the problem, you could specify your e-mail address as the destination. Once this filter is enabled, the messages will start arriving in your inbox. After you have collected the relevant messages, you can disable the filter.

2. You want to find how a lookup service is being used.

You might have a service that lets users look up service providers in a given area. You may want to find out how it is being used in order to improve the user interface. You can capture all messages for this service and query them using XQuery or XPath to count how many times a zip code, state, or city was provided, what mile radius users usually select, and so on. Based on this, the user interface may pre-select some values.
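As an illustration, counting zip-code searches over a batch of stored request messages could be done with standard Java XPath. The <zip> element name is an assumption about the lookup service's schema.

    // Sketch: count how many stored requests supplied a non-empty zip code.
    import java.io.StringReader;
    import java.util.List;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;

    public class LookupUsageStats {

        public int countZipSearches(List<String> requests) throws Exception {
            XPath xpath = XPathFactory.newInstance().newXPath();
            int hits = 0;
            for (String request : requests) {
                Double n = (Double) xpath.evaluate(
                        "count(//zip[normalize-space(.) != ''])",
                        new InputSource(new StringReader(request)),
                        XPathConstants.NUMBER);
                if (n > 0) hits++;
            }
            return hits;
        }
    }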

Care must be taken to ensure that interaction with the queue is efficient (for example, it may be executed in a separate thread). If the messages hold sensitive data, the message database must be designed and managed so that the data is not compromised. Since message auditing simply stores the messages without any association with entitlements, it may not be possible to create reports governed by access control. Messages with attachments pose another challenge, since the attachments themselves may also need to be stored and queried.

Conclusion

With the advent of XML-based technologies and the evolution of service design principles, service design patterns, and industry standards, message auditing has become a viable commodity within the modern SOA ecosystem. Adding auditing logic to your systems opens a floodgate of opportunities for extracting business intelligence from messages.

The beauty of message auditing with XML is that it does not bind you to a specific data structure and therefore allows for the extraction of information in an arbitrary manner. This can be the best defence against the unpredictable nature of service-oriented system auditing, which can span from performance data to compliance requirements, and from user behaviour to business intelligence, among many other usages.

BPEL Environments and Complexity Management

Service-oriented architecture (SOA) has become a mainstream technology for integrating disparate systems and applications. For building composite applications, Business Process Execution Language (BPEL) has emerged as the standard for business process flow orchestration and application integration within organizations. Many IT organizations are deploying composite applications that use BPEL to automate critical business processes. If your application needs to be available 24/7, you need a reliable BPEL infrastructure. In this article, we first present a typical BPEL engine architecture, then outline its manageable entities and the typical management functions you need to worry about as an administrator. Finally, we conclude with approaches and best practices for managing your complete BPEL infrastructure.

To effectively manage BPEL infrastructure, administrators need to manage all the components that make up the BPEL ecosystem. Before diving into management functions you need to concern yourself with as an administrator, let's look briefly at the BPEL ecosystem.

The BPEL Ecosystem

By BPEL ecosystem, we mean the system components and resources used for executing the BPEL process and those that ensure guaranteed availability of the BPEL process. Although some of those systems may not be directly under your control, you need to take some action to monitor their availability. Unequivocally, if one or more of these components go down, it will put the business process execution at risk. Figure 1 depicts the architecture of a typical BPEL engine.

A BPEL process typically depends on one or more partner links. A partner link might be a service owned by the organization, such as an EJB running on an application server or a database stored procedure, over which you have immediate control. It might also be a Web service provided by a partner or vendor over which you do not have any control.

The BPEL processes are run in a runtime environment called a BPEL server or engine. The availability and service level associated with BPEL processes are completely dependent on the BPEL engine. BPEL engines can run in an application server environment. You can have a single instance or a clustered application server environment if your composite application requires high availability. A BPEL server may depend on some of the resources, such as data sources or messaging services, made available by the application server environment.

BPEL processes are long-running, and hence they need a persistent store for their state. Typically the persistence store is a relational database management system. This persistence store is called a dehydration store.

All of these software components run on several nodes, or hosts, which together constitute a critical piece of your BPEL ecosystem.

To summarize, a BPEL ecosystem comprises the following:

  • BPEL processes and partner links
  • BPEL engine
  • Dehydration store
  • Gateway to BPEL engine
  • The application server on which the BPEL server is running
  • Hosts on which the BPEL server, the dehydration store, and the gateway are running

The health of a BPEL process that you manage is a function of every entity in the BPEL ecosystem, and you need to concern yourself with managing and monitoring all of these entities.

Let's dive into the management functions for each component and the approaches you should follow to manage them, starting with the management aspects of BPEL processes.

BPEL Runtime Governance and Suitcase Deployment

As an administrator, you are responsible for deploying BPEL processes to a single server or multiple servers. Think about transitioning a BPEL process from test to production or deploying the process to additional BPEL engines to add scalability and availability to your system.

Typically BPEL processes are packaged in an archive known as a BPEL suitcase, which contains a .bpel file, WSDL for partner links, and classes and schemas required by the BPEL process. If you are using an SCA-compliant composite application, it may contain a BPEL module. As an administrator, you may be deploying the BPEL suitcase to more than one BPEL engine. You may want to deploy BPEL processes during off-peak times, so you want to automate the deployment procedure.

To automate deployment of BPEL processes, you can build scripts by using tools provided by your BPEL server vendors. If you're using a management tool, it may provide support for uploading the BPEL suitcases to the software library and then enable you to deploy them as needed. Figure 2 outlines the deployment of BPEL suitcases from a software library.

If your BPEL processes depend on resources such as adapters, it makes sense to automate their deployment and configuration as well, to avoid manual errors.

Manage Versioning of BPEL Processes

In a dynamic environment, BPEL processes may change depending on business needs. You need to implement a method of storing the different versions of BPEL processes that have previously been deployed to the BPEL engine. This will help you track changes between versions and backtrack any changes, should you run into issues. You may need to compare different versions of processes within one BPEL server or across BPEL servers. Management tools may provide the ability to store, retrieve, and compare versions of a process and dependent artifacts such as WSDL and XML Schema, which helps in the quick resolution of issues. The software library will allow you to roll back to a previous version of the BPEL process, should the need arise.

Monitor BPEL Process and Partner Links

Monitoring the health of BPEL processes is critical to meeting the service-level agreements for your processes. You can go a long way toward ensuring availability and meeting service-level requirements by monitoring the performance and functioning of the partner links your BPEL processes depend on and identifying problems proactively. You can develop SOAP tests to verify the availability and performance of partner links, using either a SOAP testing tool provided by a vendor or an open source Web service testing tool such as soapUI. The correct approach is to automate these tests so that they are performed at regular intervals. You want to be alerted when a particular partner link has an unscheduled service shutdown or a longer-than-expected response time. If you're using a management tool to manage your BPEL infrastructure, check whether it provides such a capability.

SOAP Test Example

The Elevation Query Web Service returns the elevation in feet or meters for a specific latitude/longitude (WGS 1984) point from the USGS Seamless Elevation datasets hosted at http://eros.usgs.gov.

WSDL

Operation: getElevation
Input Parameter X: 45.890610
Input Parameter Y: 7.157390
Other parameters: left empty
Output Parameter: the elevation of the latitude/longitude point provided

You can monitor this Web service by executing a SOAP test at regular intervals. These automated SOAP tests monitor the availability and perceived performance of the Web service. Figure 3 depicts automated testing of Web services.
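A home-grown probe along these lines could be scheduled to run at regular intervals, for example with a ScheduledExecutorService. Note that the endpoint URL, namespace, and parameter element names below are placeholders rather than the real USGS values.

    // Sketch of an availability/response-time probe for a Web service
    // operation such as getElevation, using JAX-WS Dispatch.
    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class PartnerLinkProbe {

        public long probeMillis() throws Exception {
            QName serviceName = new QName("http://example.org/elevation", "ElevationService");
            QName portName = new QName("http://example.org/elevation", "ElevationPort");
            Service service = Service.create(serviceName);
            service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                            "http://example.org/elevation/endpoint");
            Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

            String request =
                "<getElevation xmlns='http://example.org/elevation'>" +
                "<X_Value>45.890610</X_Value><Y_Value>7.157390</Y_Value>" +
                "</getElevation>";
            long start = System.nanoTime();
            dispatch.invoke(new StreamSource(new StringReader(request)));
            return (System.nanoTime() - start) / 1000000; // elapsed milliseconds
        }
    }

If the call throws an exception or the elapsed time exceeds a threshold, the scheduler can raise an alert.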

Monitoring BPEL Processes from the End-User Perspective

Your BPEL process may be accessed from different locations, and you want to monitor its availability and performance from those locations. Say you're part of a global organization and users in North America, Europe, and Asia access your BPEL processes. Although it's challenging to monitor the performance of BPEL processes from the end-user perspective, you can create an agent at customer locations to test performance from their end. However, manually performing these types of tests for a BPEL process that is accessed from multiple customer sites could be challenging. Hence, you can use management tools to monitor the performance of your BPEL applications from the end-user perspective, as depicted in Figure 3.

Error/Fault Monitoring

A BPEL process may raise errors and faults for several reasons, including data exceptions or the non-availability of partner links, although you should limit the number of faults in BPEL processes by proactively monitoring the processes, as described earlier. You need to drill down into the processes that have raised faults, diagnose the cause of the faults, and make sure they do not happen again. You may want to monitor the processes that failed and diagnose issues by using tools provided by your BPEL vendor, taking corrective action as appropriate. It also helps to implement some methodology for monitoring the SOAP messages sent to and received by the BPEL process and the partner links.

Gateway to BPEL Engines

BPEL process instances are created when invoked by a client, and the traffic is routed through Web servers and load balancers. You want to be alerted when these Web servers or load balancers are unavailable or are not meeting expected performance goals.

Managing the BPEL Engine

The BPEL engine is by far the most important part of the BPEL ecosystem. You have to monitor the availability and performance of BPEL engines and need to be alerted when a BPEL engine is not performing as expected. You may have multiple instances of clustered BPEL engines. Hence, you need to know whether any one of the engines is not performing and whether the load balancer is working properly. More important, you want to find out proactively when all the engines are not performing, because that would lead to a total disruption of service.

Configuration Management and Versioning

You may change the configuration of your BPEL engines from time to time. You may want to track any interim changes and to backtrack changes to a previous version. If you follow ITIL practices, this should sound familiar. You can store the configuration of a BPEL engine in a configuration management database (CMDB) or version control system. It can make your life easier to have your management tool provide integration with the CMDB in which you store your configuration.

Keeping a gold image of your BPEL ecosystem configuration and using it to compare with your current configuration will help in identifying issues attributable to configuration changes. And keeping track of components added to and/or removed from your BPEL system will also help in analyzing performance issues - for example, performance was bad during the first week of January but improved afterward, because there was one extra BPEL engine in the cluster after the first week of January. By comparing the current configuration with the gold image, you can monitor changes in some parameters that can have a big impact on BPEL instance throughput, such as dispatcher thread min/max parameters.

BPEL Engine Monitoring

The health of the BPEL engine is critical for your applications. Besides the status of the BPEL engine, there are some statistics you should focus on. They include system statistics, such as memory and CPU consumed, and business metrics, such as open and closed instances, synchronous and asynchronous process latency, and load factors. In the event of abnormal behavior, such as high process latency or load factors, you have to proactively resolve the issues before they affect the BPEL processes deployed in the engine.
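Many engines expose such metrics over JMX, so a monitoring job might poll them roughly as shown below. The JMX URL, ObjectName, attribute name, and threshold are all assumptions; consult your BPEL vendor's documentation for the MBeans it actually exposes.

    // Sketch: poll an engine metric over JMX and alert on a threshold.
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class EngineMonitor {

        public void checkLatency() throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://bpel-host:9999/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                ObjectName engine = new ObjectName("example:type=BPELEngine");
                Number latency = (Number) mbsc.getAttribute(engine, "AvgProcessLatencyMs");
                if (latency.doubleValue() > 2000) {
                    // Raise an alert (e-mail, pager, management console, ...).
                    System.err.println("BPEL engine latency high: " + latency + " ms");
                }
            }
        }
    }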

As an administrator for your BPEL engine, you have to perform some of the following operations:

  • Archive information about completed BPEL processes
  • Remove from the database all XML messages that have been successfully delivered and resolved
  • Purge stale instances
  • Re-execute failed processes

Ideally, these mundane tasks should be automated in a production environment. You can build some automated scripts to perform these operations and schedule them to be performed at regular intervals. Many management tools help you with automation by providing a graphical user interface.

Dehydration Store

The BPEL engine stores process data in the dehydration store. This is a critical piece of the puzzle. As we discussed earlier, the dehydration store is typically a relational database. If you want high availability for your dehydration store, you will probably use a clustered database. Your database administrators should make sure that the database is available and performing properly.

Monitoring Adapters

Your BPEL processes may depend on adapters that access resources such as databases, messaging services, and EIS that may reside locally or remotely. The status and performance of adapters may have a heavy impact on the response time of your BPEL engine and processes.

Managing the Application Server

A typical BPEL engine runs in an application server environment. For example, Oracle BPEL Process Manager can be deployed on a J2EE-compliant application server such as Oracle Application Server, Oracle WebLogic, IBM WebSphere, or JBoss Application Server. The BPEL engine may depend on several resources and services provided by the application server, such as JDBC DataSource, JMS providers, JCA connectors, and shared libraries. The health of the application server, along with the performance of these resources, may directly influence the performance of your BPEL engine. Each application server provides several health indicator metrics. You should automate a mechanism that would proactively issue an alert before anything goes wrong with your application server.

Managing the Hosts/Nodes

Your BPEL infrastructure may be running on several nodes or machines. These server nodes may run into several resource issues during runtime. For example, they may run out of disk space due to excessive logging or run out of memory or CPU due to a spinning process. You need to proactively monitor these metrics and fix any issues before they become a problem for your infrastructure.

Bringing It All Together

It's now clear that the BPEL infrastructure includes a lot of entities, and monitoring them as independent entities can be a challenge.

A typical SOA environment will have hundreds of artifacts like Web services, BPEL processes, EJBs, adapters, and other resources. Monitoring a flat list of these artifacts is not productive. It is critical to determine the dependencies between these components and the impact of any changes you make. Distributed ownership of these components also requires that service-level agreements be established between providers and consumers of these components. These issues are addressed by a runtime governance offering. So it's important to employ a management tool that provides:

  • Translation of design-time discovery into production to help manage all the components and their dependencies in context
  • SLA capabilities to capture and measure the service-level compliance of your SOA application and SOA components

Think about a complicated BPEL process that uses several heterogeneous systems and applications. Say you're using Oracle's BPEL engine on an Oracle WebLogic server running on a Linux box. Your business processes involve partner links that depend on adapters that connect to IBM MQSeries applications and applications from Oracle's PeopleSoft Human Resources product family that run on IBM WebSphere. Your BPEL engine uses an Oracle Real Application Clusters database as the dehydration store. You want to monitor all these products together from a single management console. There are several management products on the market that may meet your needs. Figure 4 shows how you can manage the Oracle BPEL engine running on an Oracle WebLogic server from Oracle Enterprise Manager Grid Control 10g.

An integrated management solution lets IT administrators manage complexity within a BPEL and SOA environment. Without this management solution, IT will see an increase in incidents, lower service levels, and increased end-user frustration. Moreover, as IT scales out with a new BPEL infrastructure, new personnel will be required to manage this complex distributed architecture. In addition, a move from legacy to SOA architecture can increase costs without a management strategy and toolset. You should consider investing in the right management product that makes sense for your infrastructure.

Another compelling reason to consider a management tool is alignment between IT and your business. As an IT administrator, you need to monitor and report issues in terms of business processes, and your monitoring statistics should be aligned with those processes. To accomplish this, you need a view of business process flows that is synched up with the actual flows in the BPEL server. BPEL management tools provide such a view and report monitoring statistics for each business process.

Best Practices

Here are some of the best practices you can follow to ensure the availability of your BPEL infrastructure:

  • Establish service-level objectives for your BPEL processes and partner links.
  • Keep a library of BPEL suitcases in your software library. It will help in rebuilding a system in case of server failure.
  • Automate routine operations such as purging old process instances.
  • Monitor the performance of partner links.
  • Monitor the whole BPEL ecosystem, not just the BPEL engine.
  • Keep track of BPEL ecosystem membership/topology changes.
  • Keep a gold image of your configuration when everything is stable and keep updating it after every configuration change. This will help you find the cause of any possible problem due to configuration changes.
  • Monitor BPEL server-specific J2EE artifacts such as the JMS queues and data sources used by the BPEL server in addition to J2EE constructs used by BPEL processes.
  • Make sure that you select a management solution that can maximize your productivity and help you deliver maximum service through automation.

Conclusion

With SOA, managing the complete infrastructure - involving lots of heterogeneous components and products - has become challenging. For IT administrators, managing the service-level agreements for composite applications can be a monumental task. Using the right tools and methodology to make sure your BPEL infrastructure is highly available can make your job much easier.

Wednesday

SOA’s ‘nature’ and integration

An interesting post from Joe McKendrick gathers the views of several consultants concerning SOA and integration. The issue amounts to a Shakespearean ‘to be or not to be’, with ‘be’ standing for integration.

“SOA is integration. SOA is not integration. Got that?

There’s a renewed debate raging about the relationship between the practices of service oriented architecture and integration.

SOA is more than EAI 2009, but where do we start? […]”

As Gartner analyst Yefim Natis states, few organizations have one comprehensive SOA; most organizations have multiple islands of SOA that will eventually need to be brought together. Anne Thomas Manes pointed out that “Many organizations mistakenly perceive SOA as an integration strategy. But it is not. SOA is about architecture. To achieve SOA, you must re-architect your systems.” But Loraine takes more of a pro-SOA-is-integration stance, noting that “most companies aren’t getting into SOA for a complete rebuild. Most companies deploy SOA because it’s so darn helpful with simplifying integration:”

“…companies don’t want Extreme Makeover. They’re looking for a slight update, something that ties the room together, as interior designers like to say. … And here’s another hard truth: Although David Linthicum and others believe that agility is the ROI for SOA, many companies are realizing SOA ROI through integration.”

Anne Thomas Manes urges practitioners to look beyond integration when planning SOA efforts. “It’s fine to use service oriented middleware to implement integration projects, but then you need to readjust your expectations. Most organizations that I speak with say that the goals of their SOA initiative are to reduce costs and increase agility. Unfortunately, these organizations aren’t likely to achieve these goals if their projects only focus on integration….” SOA should be about more than simply linking app A to app B, and so on. SOA is about innovation. Every SOA-related proposal out there should include the word “innovation” somewhere in the text. And, let’s face it, SOA would be downright boring (make that Boring with a capital B) if it were just another means of app or systems integration, and nothing more.

Nevertheless, integration is still an essential phase in the evolution from siloed, proprietary systems to a fully functioning service oriented architecture. In many cases, it can help provide the initial justification for commencing with an SOA approach. And for many SOA proponents, especially those facing organizational resistance, the key is to just start.

And, remember, SOA may involve integration, but it truly is about innovation.

Actually, the truth, or the need (in the Aristotelian sense) behind doing SOA is that the main goals of SOA initiatives are to reduce costs and increase agility. Getting out of silos and proprietary systems is just the convenient justification for taking an SOA approach in projects and winning the acceptance of CEOs. How many times has overcoming “monolithic systems” been promised in such meetings and proposals? And of course, in the end, how many times did a true SOA solution actually break that “monolithity”? Some will say always, in the context of the project. But true SOA should guide us towards “the standardization of the core architecture elements (…) and furthermore to establish an optimized integration approach for existing assets of the environment”, as stated in the SOA and the real Service Integration issue post. The problems of the real-life environment (IT, infrastructure and financials alike) push towards an analytical solution, meaning solving one problem at a time. This is the main headache in adopting SOA to the full extent you may wish to use it.

The most useful approach to SOA is a change of philosophy. The ideal SOA, residing somewhere in the Platonic world of Ideas as a successful concept, calls not for an analytical approach but for a holistic one. The web services, the integration (imagine how many systems/applications/business approaches/files/tools… might be hiding behind that word), the business needs and the time constraints give the first bird’s-eye view, or the context, of the environment to be SOA-ised. But this is not enough. If it were, then strictly speaking we would be talking about re-engineering, and therefore reinventing the wheel. A solution of that type amounts to “drop everything and put in the new, SOA”. SOA should deal with ALL of these. Depending on the project at hand, a holistic view should be taken in every aspect of the analysis. We should think with duality in mind (this project, the whole enterprise environment) and with a time-agnostic attitude (as-is and to-be in one context). OK, magicians are out of a job nowadays, but nobody said that doing SOA should be easy, in the beginning at least. Once you have a good and approved idea of the total, holistic SOA environment in the context of the enterprise, the compliance rules become straightforward and form the base from which the SOA for the project(s) at hand can start. SOA is about innovation, integration and re-architecture. But above all, it is about looking at the very same topics, issues, environments and enterprises with a new eye.

more to follow, stay tuned, and if you would like help establishing an SOA strategy for your specific case, give me a shout!

Tuesday

SOA and the real Service Integration issue

A real problem in any SOA project, whether starting one or extending an already existing one, is the Web Services (ws) hell hiding around the corner. When beginning the SOA approach, especially with limited time to deliver and/or poor business analysis results, defining and architecting a solution that is compliant with your client (both in the kick-off architecture and in the way of working – i.e. good adaptation to the new environment) may well produce a sea of ws. On the other hand, when extending or taking over a project in an already existing SOA environment developed by someone else, a small daemon hides in the corner and keeps playing with you, tempting you to build more and more new ws in order to build your components.

In any case, in SOA you always face the problem of exponential growth of the ws pool. To avoid it, you have to put effort into standardizing the core architecture elements (bear in mind that in an existing environment developed by someone else you first have to understand and comply with the approach already taken) and, furthermore, into establishing an optimized integration approach for the existing assets of the environment.

With these in mind, you either have to use some existing groundwork to extend, or to… re-invent the wheel.

Re-inventing the wheel means being prepared to put much more effort into business analysis and re-engineering, and furthermore to develop all the necessary tools and bridges, both for the interoperability elements of the infrastructure and for the semantic and conceptual transformations that will take place in order to implement the needed physical (system) bridges.

Or using some existing assets…

Java Business Integration (JBI) is an effort focused on standardizing the core architecture elements of integration architectures. It is a specification developed under the Java Community Process (JCP) for an approach to implementing a Service Oriented Architecture (SOA). The JCP reference is JSR-208. JBI extends Java EE and Java SE with business integration service provider interfaces (SPIs). It enables the creation of a Java business integration environment for the creation of composite applications. It defines a standard runtime architecture for assembling integration components to enable a SOA in an enterprise information system.

Following the JBI road, you might be able to use another ally for the SOA stacks you're gonna need.

Open ESB is an open source integration platform based on JBI technology. It implements an Enterprise Service Bus (ESB) using JBI as the foundation. This allows easy integration of Web Services to create loosely coupled, enterprise-level integration solutions.

Open ESB Architecture

Because Open ESB is built on top of the JBI specification, it makes sense that, before diving into the architecture of Open ESB, we take a look at the JBI architecture, which is illustrated in Figure 1.

As the figure shows, JBI adopts a pluggable architecture. At the heart of a JBI runtime is a messaging infrastructure called the Normalized Message Router (NMR), which is connected to a number of JBI components. JBI components are the architectural building blocks of a JBI instance and are plugged into the NMR to interact with each other. The interaction among components carries out the functional logic of the JBI instance.

Normalized Message Router

The primary function of the NMR is to route normalized messages from one component to another; it enables and mediates inter-component communication. When a component needs to interact with another component, it does so by generating a normalized message and sending it to the NMR. The NMR routes the message to the destined component based on routing rules. After the destined component gets and processes the message, it generates a response message if required and sends it to the NMR. It is then the NMR's responsibility to deliver the response message back to the original component.

The NMR uses a WSDL-based messaging model to mediate the message exchanges between components. From the NMR's point-of-view, all JBI components are service providers and/or service consumers. The WSDL-based model defines operations as a message exchange between a service provider and a service consumer. A component can be a service provider, a service consumer, or both. Service consumers identify needed services by a WSDL service name rather than end-point address. This provides the necessary level of abstraction and decouples the consumer from the provider, allowing the NMR to select the appropriate service provider transparently to the consumer.
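In code, the consumer's side of such an invocation looks roughly like the following under the JSR-208 API; the service QName, operation, and payload are illustrative.

    // Sketch of a JBI service consumer sending an In-Out exchange through
    // the NMR. The consumer names a service, not an endpoint address; the
    // NMR selects a suitable provider.
    import java.io.StringReader;
    import javax.jbi.component.ComponentContext;
    import javax.jbi.messaging.DeliveryChannel;
    import javax.jbi.messaging.ExchangeStatus;
    import javax.jbi.messaging.InOut;
    import javax.jbi.messaging.MessageExchangeFactory;
    import javax.jbi.messaging.NormalizedMessage;
    import javax.xml.namespace.QName;
    import javax.xml.transform.stream.StreamSource;

    public class BookingConsumer {

        public void invoke(ComponentContext context) throws Exception {
            DeliveryChannel channel = context.getDeliveryChannel();
            MessageExchangeFactory factory = channel.createExchangeFactory();

            InOut exchange = factory.createInOutExchange();
            exchange.setService(new QName("http://example.org/booking", "BookingService"));
            exchange.setOperation(new QName("http://example.org/booking", "book"));

            NormalizedMessage in = exchange.createMessage();
            in.setContent(new StreamSource(new StringReader("<bookingRequest/>")));
            exchange.setInMessage(in);

            channel.sendSync(exchange); // blocks until the provider responds

            NormalizedMessage out = exchange.getOutMessage();
            // ... process the response content, then close the exchange.
            exchange.setStatus(ExchangeStatus.DONE);
            channel.send(exchange);
        }
    }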

An instance of an end-to-end interaction between a service consumer and a service provider is referred to as a service invocation. JBI mandates four types of interactions: One-Way, Reliable One-Way, Request-Response, and Request Optional-Response. These interactions map to the message exchange patterns (MEPs) defined by WSDL 2.0 Predefined Extensions.

The NMR also supports various levels of quality of service for message delivery, depending on application needs and the nature of the messages being delivered.

Components - Service Engines and Binding Components

A JBI component is a collection of software artifacts that provide or consume Web Services. As mentioned previously, JBI components are plugged into the NMR to interact with other components. JBI defines two types of components: Service Engines (SEs) and Binding Components (BCs).

A Service Engine is a component that provides or consumes services locally within the JBI environment. Service Engines are business logic drivers of the JBI system. An XSLT Service Engine, for example, can provide data transformation services, while a BPEL Service Engine can execute a BPEL process to orchestrate services, or enable execution of long-lived business processes. A Service Engine can be a service provider, a service consumer, or both.

A Binding Component is used to send and receive messages via particular protocols and transports to and from systems that are external to the JBI environment. Binding Components serve to isolate the JBI environment from particular protocols by providing normalization and de-normalization from and to the protocol-specific format, allowing the JBI environment to deal only with normalized messages.

The distinction between these two types of components is mainly functional. In fact, JBI uses only a flag to distinguish them; the programming model and APIs of the two types are otherwise identical. However, by convention, Service Engines and Binding Components implement different functionality in JBI.

Service Unit and Service Assembly

The JBI runtime hosts components and so acts as a container for components. Components in turn act as containers for Service Units (SUs). A Service Unit is a collection of component-specific configuration artifacts to be installed on an SE or BC. One can also think of a Service Unit as a single deployment package destined for a single component. The contents of a Service Unit are completely opaque to JBI, but transparent to the component to which it is deployed. An SU contains a single JBI-defined descriptor file that defines the static services provided and consumed by the Service Unit.

Service Units are often grouped into an aggregated deployment file called a Service Assembly (SA). A Service Assembly includes a composite service deployment descriptor, detailing to which component each SU contained in the SA is to be deployed. A service assembly represents a composite service.

Lifecycle Management

JBI also defines a JMX-based infrastructure for lifecycle management, environmental inspection, administration, and reconfiguration to ensure a predictable environment for reliable operations.

Components interface with JBI via two mechanisms: service provider interfaces (SPIs) and application program interfaces (APIs). SPIs are interfaces implemented by the binding or engine; APIs are interfaces exposed to bindings or engines by the framework. The contracts between the framework and a component define the obligations of both to achieve particular functional goals within the JBI environment.

Unless you're doing component development, it's unlikely that you need to work with these SPIs and APIs directly.

Open ESB

Once we've understood the architecture of JBI, Open ESB becomes really simple. Open ESB is an implementation of the JBI specification. It extends the specification by creating an ESB from multiple JBI instances, linked by a proxy binding based on the Java Message Service (JMS). This lets components in separate JBI instances interoperate in the same fashion as local ones (see Figure 2).

ESB administration is done by the Centralized Administration Server (CAS), a bus member that lets the administrator control the system directly.

Open ESB includes a variety of JBI components, such as the HTTP SOAP Binding Component, the Java EE Service Engine, and the BPEL Service Engine.

A Sample Service Integration Scenario

In this section, we'll examine a simple use case and illustrate a possible service integration solution using Open ESB.

Problem Description

ABC Movie Theatres is a fast-growing movie theatre chain. To better serve its customers, the management has decided to put in place a new ticket booking service, the Booking Service, which will be responsible for handling most of the ticket purchasing requests generated from various systems such as Web-based applications, ticket vending machines and points of sale (POS) at box offices.

The business logic of handling a booking request is somewhat complicated. In a nutshell, it involves the following steps:

  1. When a booking request is received, the Booking Service will first try to process the ticket information. This could include checking for ticket availability; holding the tickets for the customer if tickets are available, or providing alternatives otherwise; applying any applicable promotions; and calculating the total dollar amount.
  2. Then the Booking Service will charge the customer using the payment information included in the booking request.
  3. Finally, the Booking Service will send a confirmation to the customer using the contact information included in the request.
High-Level Solution Description

Considering that a significant amount of the logic needed by the new booking service has been implemented in different applications over time, and that these applications have proved stable, it makes a lot of sense to leverage these existing IT assets for cost efficiency. So the architect team at ABC comes up with the solution of service-enabling and consolidating the existing logic, bringing the services into the ESB, and exposing them as new composite services that are made available via different protocols.

To do this, they have to identify candidates for service-enabling. The company currently has a billing system, developed and used by the finance department, that has been working perfectly for years. This system runs over the HTTP protocol. Another system is the notification system developed by the customer relationship department. This system listens to a JMS message queue and, when it gets a new message, sends out notifications to the customer by the means specified in the message, e.g., an e-mail or a voice mail message. These two systems become the ideal candidates for billing customers and sending confirmations.

The company also has a system for processing ticket orders. This system, however, is seriously outdated and cannot scale up to handle the ever-increasing transaction volume resulting from recent acquisitions. The architect team has decided it's time to write a replacement application; it has also decided that the replacement application, the Ticket System, will be built using EJB technologies due to the transactional nature of the application.

The architect team also decided that the new Booking Service will be made available via the SOAP-over-HTTP protocol as well as the File protocol to support different client systems.

Solution Details

When it comes to Open ESB development, it's all about creating JBI Service Units and packaging them into a Composite Application (or Service Assembly, in JBI terms). Figure 3 illustrates the Service Units for this solution and the interactions among those units. As mentioned earlier, Service Units are deployed to their corresponding JBI components. For simplicity's sake, we'll use the terms Service Unit and Component interchangeably where it doesn't cause any confusion in the particular context.

At a high level, two BCs, the Booking Service SOAP BC and the Booking Service File BC, are created to enable the Booking Service and expose it to the outside world via the SOAP and File protocol respectively, allowing different kinds of clients to consume the service. A client application can invoke the Booking Service via either of these protocols. Invocation requests generated by client applications are then routed to the Booking Process, a BPEL Service Engine. The Booking Process orchestrates the Ticket Service, the Billing Service, and the Notification Service to fulfil the request.

File and SOAP BC - The Booking Service

In Open ESB, a File BC is a JBI binding component that binds file systems to WS-I Web Services. A File BC scans a pre-configured file location for new files. When a new file is found, the component generates a Web Service call using the content of the file as the payload of the Web Service input message. Response messages are also written to the file system by the component. Many of the properties of a File BC, such as the file location, file name, and the time interval at which the component scans the specified location for new files, can be configured.
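To make that polling behaviour concrete, here is a minimal sketch of the scan-and-dispatch pattern a File BC follows. This is plain Groovy, not Open ESB code (the component does all of this internally), and the directory, interval, and handleRequest closure are hypothetical:

    // Hypothetical stand-in for the BC handing a payload to the JBI runtime.
    def handleRequest = { String payload ->
        println "Dispatching ${payload.size()} chars as a Web Service input message"
    }

    def inbox = new File('/var/booking/inbox')    // the pre-configured file location
    while (true) {
        inbox.eachFileMatch(~/.*\.xml/) { File f ->
            handleRequest(f.text)                 // file content becomes the message payload
            f.delete()                            // avoid re-processing the same file
        }
        Thread.sleep(5000)                        // the configured scan interval, in ms
    }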

A SOAP BC works the same way as the File BC, only instead of scanning a file system directory, a SOAP BC accepts WS-I SOAP messages over the HTTP protocol.

A File BC and a SOAP BC are created in this solution to enable the Booking Service and expose it to external systems via these protocols. External systems that wish to consume the service do so either by sending a WS-I-compliant message or by dropping the message in the specified file location.

External requests received by the previous two BCs will be routed, by the NMR, to the Booking Process, a BPEL Service Engine that executes a BPEL business process to orchestrate services.

BPEL Service Engine - The Booking Process

A BPEL Service Engine is a JBI runtime component that provides services for executing WS-BPEL-compliant business processes. The contract between a business process and partner services is described in WSDL.

A BPEL SE can save business process data to a persistent store if configured to do so. This is required for recovering from system failures and for running long-lived processes. A BPEL SE can be deployed to a clustered environment to achieve high scalability. The service engine's clustering algorithm automatically distributes processing across multiple engines. When the business process is configured for clustering, the BPEL Service Engine's failover capabilities ensure the throughput of running business process instances. When business process instances encounter an engine failure, any suspended instances are picked up by the available BPEL Service Engines.

The Booking Process is a BPEL SE and is at the heart of this solution. It does some simple message transformation and, most importantly, invokes the Ticket Service, the Billing Service and the Notification Service. Figure 4 shows a simplified version of the Booking Service process.

Java EE Service Engine - The Ticket Service

A Java EE Service Engine brings Java EE components into the Open ESB runtime as Web Services. It acts as a bridge between a Java EE application server and a JBI environment for Web Service providers and consumers deployed in the application server. Java EE Web components or EJB components that are packaged and deployed as Web Services on a Java EE container can be transparently exposed as service providers in the JBI environment.

In this solution, the Ticket System is implemented using EJB Session Beans and wrapped as a JAX-WS Web Service. The service is then brought into the Open ESB runtime by the Ticket Service SE.
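As a rough illustration of that wrapping step, such a service could look like the sketch below. All names here are hypothetical, and it is written in Groovy syntax for consistency with the rest of this blog's code; in the real system it would be a plain Java class:

    import javax.ejb.Stateless
    import javax.jws.WebMethod
    import javax.jws.WebService

    // A hypothetical ticket-processing session bean exposed as a JAX-WS Web Service,
    // which the Java EE Service Engine can then surface inside the JBI environment.
    @Stateless
    @WebService(serviceName = 'TicketService')
    class TicketServiceBean {

        @WebMethod
        String reserveTickets(String showId, int seatCount) {
            // transactional reservation logic would live here
            return "HELD:${showId}:${seatCount}"
        }
    }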

HTTP and JMS BC - The Billing Service & the Notification Service

Let's face it - all BCs work the same way. This is the beauty of the Open ESB architecture. We've looked at two BCs, the File BC and the SOAP BC. Similarly, an HTTP BC binds the HTTP protocol to the Web Service, and a JMS BC binds the JMS protocol.

In this solution, the HTTP BC and the JMS BC are used to bring the Billing Service and the Notification Service into the Open ESB runtime respectively - as stated previously, the Billing Service runs over the HTTP protocol and the Notification Service over the JMS.

It's important to be aware that, although these components are shown connected directly in the figure, they never communicate directly with each other. Instead, components send messages to and receive messages from the NMR. The NMR is responsible for transforming messages and routing them to the appropriate destinations.

GlassFish & NetBeans - Development & Deployment

Open ESB runs on any OSGi R4-compliant runtime. GlassFish has a built-in JBI runtime and is bundled with NetBeans for easy development. NetBeans provides a comprehensive GUI development environment. The java.net community is working on a new project, GlassFish ESB, aimed at creating a community-driven ESB for the GlassFish Enterprise Server platform.

Developing the process in NetBeans involves creating the needed JBI modules and including them in a composite application. Figure 5 shows a screenshot of creating the composite application in the NetBeans IDE.

Conclusion

Open ESB provides a robust and flexible platform for building service-oriented integration solutions. Its component-based architecture allows maximum extensibility and interoperability. It's based on industry standards and is easy to use. It seamlessly integrates with other Java enterprise technologies.

References
Open JBI Components

The overall goal of Project Open JBI Components is to foster community-based development of JBI components that conform to the Java Business Integration specification (JSR208). You can join this project as a JBI component developer or as part of an existing JBI component development team.

About JBI Components

The JSR208 specification provides for three installable JBI components: Service Engines, Bindings, and Shared Libraries. JBI components operate within a JBI container, which is defined by the JSR208 specification. Two popular implementations of JBI containers are Project Open ESB and ServiceMix, an alternative approach which has been mentioned in previous posts.

SOA agility continued: Leveraging Data Services to provide agility in the enterprise

 

The promise of SOA is flexibility and agility.  SOA practitioners define agility as the ability to adapt to change at the speed of business.  In today’s global economy, fuelled by collaboration and Internet technologies, businesses change at a much faster rate than ever before.  So how does SOA help companies become agile?  In a previous post I considered agility from the angle of developing and architecting SOA-enabled applications, after facing some real-world bottlenecks and difficulties.

Extending that notion from development to the agility of the enterprise (yours, or more often your customer's), the main goal should be to abstract the data services that the other layers of the architecture consume. The trick here is the 'magic' logical (or conceptual) representation of an entity (in this example, and in many real-world cases, the customer), not its physical implementation. Because software deals comfortably with symbolic transformations, abstracting an entity into a simpler, yet reversible, logical form is where the magic lies. (I am not referring to formal semantics here, although keep in mind that in some SOA platforms of the biggest vendors, semantics is the main focus of the next generation of tools - they call it the heart of orchestration, organizing and identifying the correct services in the SOA web-services 'sea' - which is the next step of my consideration here.)

Firstly, let’s look at a logical view of a typical SOA.

You can see in this diagram how each layer of the architecture has been abstracted and is mutually exclusive, or "loosely coupled", from the other layers of the architecture.  Why is this important?  The answer is simple: ease of change!  Here are some advantages of this approach.

  • Share services, components, rules, etc. across the enterprise (reuse)
  • Isolate changes and reduce dependencies (speed to market)
  • Minimize impact of business changes (speed to market)
  • Easier to maintain (maintainability)

Finally, to give a clear, applicable explanation of the above, here is an example of how this approach helps companies become agile.

Use Case: New customer data from a new client

Let’s say your company provides a service for customers in the retail industry.  You have a website that offers online services for consumers, and your white label solution is tailored to look like it is hosted by the individual retailers.  The problem is that each retailer has its own customer database.  In the past you would have to write a ton of code for each retailer that signed up to use your services.  With SOA, it is now a simple data mapping exercise.  By abstracting data services, the other layers of the architecture use the logical representation of customer, not the physical implementation.  Behind the scenes, the data services layer translates the request for customer data from the logical view to the physical implementation.  So when you bring on new clients, you simply use a tool in the data services layer to map the new client's customer data to the standard logical definition as defined by the architecture.

You can see from this example that all three retailers have an entirely different implementation of their customer database including different naming conventions and even different attributes. In the data services layer, you can map all of these physical implementations to one standard customer definition. You can also see how the business processes all use the logical view of customer. This allows us to add and change customer definitions on the back end without changing code on the front end. If two more retailers were to sign up tomorrow, we can map their definitions to the logical customer view and be done. No Code!!! That is agile!
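Purely for illustration, the per-retailer mapping that the data services layer performs could be sketched in Groovy as below; the retailer field names and the canonical Customer shape are hypothetical:

    // The canonical, logical customer definition used by the upper layers.
    class Customer {
        String fullName
        String email
    }

    // One mapping table per retailer: physical column name -> logical attribute.
    def fieldMappings = [
        retailerA: [cust_nm: 'fullName', email_addr: 'email'],
        retailerB: [customer_name: 'fullName', mail: 'email']
    ]

    // Translate a physical record into the logical view.
    def toLogical = { String retailer, Map record ->
        def customer = new Customer()
        fieldMappings[retailer].each { physical, logical ->
            customer."$logical" = record[physical]
        }
        return customer
    }

    def c = toLogical('retailerB', [customer_name: 'Jane Doe', mail: 'jane@example.com'])
    assert c.fullName == 'Jane Doe'

Signing up a new retailer then amounts to adding one more entry to the mapping table, which is exactly the 'no code' agility described above.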

If you would like help establishing a data services strategy, give me a shout!

The schematics above, and the context (reason) for writing this post, are from Mike Kavis. I would also like to thank him for the definition of the Use Case above and for the use of his diagrams.

SOA Open Source stacks

 

Dave Linthicum wrote a post called Open Source SOA provides some major advantages. In his post Dave stated:

When it comes to SOA, I think open source provides two major advantages:

  • First, it’s typically much less expensive than the tools and the technology that are proprietary.
  • Second, they are typically much more simplistic and easier to understand and use.

To the second point, simplicity. The open source SOA vendors seem to take a much more rudimentary approach to SOA, and their tools seem to be much easier to understand and, in some cases, use. While some people want complex, powerful tools, the reality is that most SOAs don’t need them. If you’re honest with the requirements of the project, you’ll see that good enough is, well, good enough.

Great points. I would also add another clear advantage, which I learned the hard way. On a previous enterprise-wide SOA initiative, I drank the Kool-Aid that the vendor stack was an integrated stack and was simpler to deploy and manage than a stack from a mix of vendors. What I found out is that the mega vendors (IBM, Oracle, etc.) have bought so many pure-play tools (rules engines, BPM tools, data services and MDM tools, governance tools, etc.) that the smooth integration ends when the PowerPoint decks are closed. In reality, the mega vendor stacks are a hodgepodge of rushed acquisition and integration efforts. The important thing is that the underlying architectures of the tools within the stack are completely different, and there are very few people (if any) within the organization who understand the complete stack. In fact, we were dealing with two very different organizations when dealing with support, and they were not in sync. Eventually the entire company was consumed by another mega vendor (you can probably guess which acquisition this was) and the whole product roadmap was turned upside down.

Now let’s look at some of the well-established open source stack vendors like WSO2, MuleSource, and Red Hat. These vendors do not suffer from acquisition madness and chaos. In fact, they are all built on a consistent architecture and do offer smooth integration between the various layers of the stack. Do they have all of the features of the commercial products? No. Do they have enough features for most SOA initiatives? Definitely. Mike Kavis wrote a post on CIO.com called Tight Budgets? Try open source SOA. Here is a quick summary of the advantages he discussed:

  1. Try before you buy
  2. Lower cost of entry
  3. Cost effective support
  4. Core competency
  5. For the people by the people

So what open source options do I have, you might ask? The following picture shows the open source tools that some people prefer for their new SOA initiatives. They are using a combination of WSO2, Intalio, Drools, Liferay, and PushToTest.

This is just one example of many. You can mix and match tools from different open source communities, or you can standardize on one community. Here is an example of Red Hat’s JBoss SOA stack.

And MuleSource has a well known suite of tools as well.

Many organizations are still not very comfortable with open source for mission-critical initiatives. Many of the open source myths, however, have been debunked in the past (here, here, and here).

If there ever was a time to embrace open source, the time is now in this harsh economy. As commercial SOA vendors continue to get gobbled up by the mega vendors, it is time to seriously consider alternatives.

The schematics above, and the starting point (context) for writing this post, are from Mike Kavis. I would also like to thank him for the use of his comments and graphs (although the stack diagrams are not his - please refer to the Mule, JBoss and WSO2 sites for these and for extended info on the tools).

Adapting with the SOA – Mashing up, the case of Enterprise Mashups

 

The Dilemma

When defining a SOA scheme, or developing web services (of course in a SOA-compliant, or SOA-enabled, way), you try to sell or develop a reusable asset, both for yourself (for easier future development) and for your clients (in order to deliver them a dynamic, adaptable and easy-to-extend asset). Nowadays, a lot is being said about enterprise mashups. Actually, when working on web development projects 4-5 years ago, the idea of mashups was very helpful and often the focus of my work. Think of Microsoft’s SharePoint model, where just by dropping in web parts you were able to assemble, in a 'Lego' way, a portal, an intranet, or whatever web presence was required. Knowing the tool’s abilities and the business/project requirements, the whole job was straightforward. The same holds, for example, for the Plone CMS, where just by reusing existing parts of it (which were tested and user friendly) you were able to deliver a very user-friendly CMS, making your clients very happy that they could manage some web aspects and deployments from their desks rather than paying a web developer each time.

In the SOA era, this "Lego" approach hides behind the promise of reuse.  Companies and developers, software architects and business people are now familiar with the bottlenecks of this notion. Out of managing such SOA 'environments' came the notion of SaaS. In general, Software as a Service (SaaS, typically pronounced 'sass') is a model of software deployment where an application is hosted as a service provided to customers across the Internet. By eliminating the need to install and run the application on the customer's own computer, SaaS alleviates the customer's burden of software maintenance, ongoing operation, and support. Conversely, customers relinquish control over software versions or changing requirements; moreover, costs to use the service become a continuous expense, rather than a single expense at time of purchase. Using SaaS can also conceivably reduce the up-front expense of software purchases, through less costly, on-demand pricing. From the software vendor's standpoint, SaaS has the attraction of providing stronger protection of its intellectual property and establishing an ongoing revenue stream. The SaaS software vendor may host the application on its own web server, or this function may be handled by a third-party application service provider (ASP). This way, end users may reduce their investment in server hardware too.

the…Philosophy

As a term, SaaS is generally associated with business software and is typically thought of as a low-cost way for businesses to obtain the same benefits of commercially licensed, internally operated software without the associated complexity and high initial cost. Consumer-oriented web-native software is generally known as Web 2.0 and not as SaaS. Many types of software are well suited to the SaaS model, where customers may have little interest or capability in software deployment, but do have substantial computing needs. Application areas such as Customer relationship management (CRM), video conferencing, human resources, IT service management, accounting, IT security, web analytics, web content management and e-mail are some of the initial markets showing SaaS success. The distinction between SaaS and earlier applications delivered over the Internet is that SaaS solutions were developed specifically to leverage web technologies such as the browser, thereby making them web-native. The data design and architecture of SaaS applications are specifically built with a 'multi-tenant' backend, thus enabling multiple customers or users to access a shared data model. This further differentiates SaaS from client/server or 'ASP' (Application Service Provider) solutions in that SaaS providers are leveraging enormous economies of scale in the deployment, management, support and through the Software Development Lifecycle.

Key characteristics

The key characteristics of SaaS software, in general, include:

  • network-based access to, and management of, commercially available software
  • activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web
  • application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics
  • centralized feature updating, which obviates the need for downloadable patches and upgrades.
  • SaaS is often used in a larger network of communicating software - either as part of a mashup or as a plugin to a platform as a service. Service oriented architecture is naturally more complex than traditional models of software deployment.

SaaS applications are generally priced on a per-user basis, sometimes with a relatively small minimum number of users and often with additional fees for extra bandwidth and storage. SaaS revenue streams to the vendor are therefore lower initially than traditional software license fees, but are also recurring, and therefore viewed as more predictable, much like maintenance fees for licensed software.

OK. That is how the thing is defined; the encyclopaedia's bird's-eye view. Now let's apply it to a real-world case. Of the key characteristics above, the third and the fifth are of particular importance. The business analysis should give some hints, but there is more to it than that. The new startup that I am using as an example here is building a SaaS solution that will be consumed by various types of customers and partners. These customers and partners may want to consume our data services as an RSS feed, gadget, SMS message, or web page, within a portal or portlet, or in a number of different ways. I do not want to spend the rest of my life developing new output mediums for our services. Instead, I would rather spend my time adding new business services to enhance our product and service offerings, hence contributing to the bottom line.

Enterprise Mashups to the rescue

Enterprise mashups will allow me to offer my partners and customers the ultimate flexibility to access our products and services in ways that are convenient for them without having to wait on my IT shop to decide if (a) we think the request is important enough in our priority list, (b) if we have the time and resources to work on it, and (c) how much we will charge them. On the IT side of the house, with an enterprise mashup strategy in place we can be assured that whatever mashups our customers and partners create, they will be subject to the same security and governance as the services we have developed. The diagram below shows a logical view of how our SOA will be designed.

As you can see, we have clearly abstracted the various layers within the architecture and they all inherit our overall security policies. SOA governance is applied to this architectural approach to enforce our standards and design principles. Overall IT governance provides oversight over the entire enterprise which includes legacy systems (we don’t mention any legacy yet), third party software, etc.

Now let’s add the enterprise mashup layer. We want to hide the complexity of our architecture from the end user and expose data services to them to consume. At the same time we want these mashups to be equally secure as the services we write and adhere to the same governing principles. Enterprise mashup products provide tools to make managing this layer easy and efficient. The diagram below shows the enterprise mashup layer inserted into the architecture as a layer on top of SOA.

 

Enterprise Mashups in simple terms

 Deepak Alur (JackBe’s VP of Engineering) discussed how enterprises have been focusing more on infrastructure and technology and not on the consumers of data. As he coined it, many shops are “developing horizontally and not addressing the needs of the users”. He talked about how users were doing their own brute force mashups by cutting and pasting data from various places into Excel. This creates various issues within the enterprise due to lack of data integrity, security, and governance. It is ironic how corporations spend huge amounts of money on accounting software and ERP systems, yet they still run the business out of user created Excel spreadsheets! The concept of enterprise mashups addresses this by shifting the focus back to the user consumption of data. Here are some of the requirements for mashups that Deepak pointed out:

  • User driven & user focused
  • Both visual & non-visual
  • Client & server side (although most are server)
  • Plug-n-Play
  • Dynamic, Adhoc, Situational
  • Secure & Governed
  • Sharable & Customizable
  • Near zero cost to the consumer

Jackbe’s enterprise mashup tool is called Presto. Presto is an Enterprise Mashup Platform that allows consumers to create “mashlets” or virtual services. IT’s role is to provide the security and governance for each data service that will be exposed for consumer use.

Presto Wires is a user-friendly tool that allows users to create their own mashups by joining, filtering, and merging various data services (as shown in the picture below).

In this example the user is combining multiple data points from many different organizations in an automated fashion. They could then present this data to multiple different user interfaces and devices. All without waiting on IT.

How this solves my Dilemma


Back to my dilemma. By leveraging a tool like JackBe's Presto or WSO2's Mashup Server, I can now present various data services in a secured and governed fashion to my customers and partners without being concerned about how they want to consume them. Whether they want the mashup on their own intranet, as a desktop gadget, as an application on Facebook, or whatever else they dream of, all I need to be concerned with is the SLA of my data services. This also makes my product offering more competitive than those of my competitors, whose proprietary user interfaces do not provide the flexibility and customization that customers desire.

As mentioned in the title, this is adapting with your SOA. For those organizations disciplined enough to implement SOA and follow the best practices of design and governance, the reward can be a simple addition of an Enterprise Mashup Platform on top of the SOA stack. This is the ultimate flexibility and agility that SOA promises.

Wednesday

Automatically generate Java Web service clients with Axis2, XFire, CXF, and Java 6, including WSDL compatibility checks

Client-side WSDL processing with Groovy and Gant

Like it or not, service-oriented architecture (SOA) is a hot topic, and SOAP-based Web services have emerged as the most common implementation of SOA. But, as happens with anything new, "SOA reality brings SOA problems." You can mitigate these problems by creating useful Web service clients, and also by thoroughly testing your Web services on both the server side and the client side. WSDL files play a central role in both of these activities, so in this post I will extend the client auto-generation and dynamic-client approach for a web service by going down an alternative (and often quicker) path, using an extensible toolset that facilitates client-side WSDL processing with Gant and Groovy.

The real life needs-requirements emerging …

The real demand on SOA-style Web services, as real life (and, furthermore, mature SOA standards) makes clear, is interoperability. Although the sensitivity and depth of the term interoperability should be well defined per project, in those projects there should always be a cross-platform Web service testing team responsible for testing the:

  1. functional aspects as well as the
  2. performance,
  3. load, and
  4. robustness

of Web services. In the ‘open sea’ of open source and legacy-related applications, with a mixture of standards and a batch of tools available for various jobs and tasks, I realized the need for a…

  • small,
  • easy-to-use,
  • command-line-based solution

for WSDL processing. I wanted the toolset to help testers and developers check and validate WSDL 1.1 files coming from different sources for compatibility with various Web service frameworks, as well as generate test stubs in Java to make actual calls. For the Java platform, that meant using Java 6 wsimport, Axis2, XFire, and CXF.

Searching for solution…

I started client-side test development with XFire, but then switched to Axis2 because of changing customer requirements in our agile project. (Axis2 was considered more popular and widespread than XFire.) I have also used ksoap2 -- a lightweight Web service framework aimed especially at the Java ME developer. We didn't extend the toolset to use ksoap2 because it has no WSDL-to-Java code generator.

Besides being controllable via simple commands, the toolset had to be able to integrate at least the WSDL checker into an automated build environment like Ant. One solution would have been to develop everything as a set of Ant targets. But executing everything with Ant is cumbersome when tasks become more complex, and you need control structures like if-then-else or for loops.

Even using ant-contrib binds you to XML-structures that are not easy to read, although you will have more functionality available. Anyhow, in the end you might need to implement some jobs as Ant tasks.

solution’s profile, overview:

All of this is possible, of course, but I was looking for a more elegant solution. Finally, I decided to use Groovy and a smart combination of Groovy plus Ant, called Gant. The components I have developed for the resulting Toolset can be divided into two groups:

  • The Gant part is responsible for providing some "targets" for the tester's everyday work, including the WSDL-checker and a Java parser/modifier component.
  • The WSDL-checker part is implemented with Groovy, but is callable inside an Ant environment (via Groovy's Ant task) as part of the daily build process.

That is an overview of the programming and scripting languages I used to build the Groovy and Gant Toolset. Now let's consider the technologies in detail.

Overview of Ant, Groovy, and Gant

Apache Ant is a software tool for automating software build processes. It is similar to make but is written in the Java language, requires the Java platform, and is best suited to building Java projects. Ant is based on an XML description of targets and their dependencies. Targets include tasks that every project needs, like clean, compile, javadoc, and jar. Ant is the de-facto standard build tool for Java, although Maven is making inroads.

Groovy is an object-oriented programming and scripting language for the Java platform, with features like those of Perl, Ruby, and Python. The nice thing is that Groovy sources are dynamically compiled to Java bytecode that works seamlessly with your own Java code or third-party libraries. By means of the Groovy compiler, you can also produce bytecode for other Java projects. It is fair to say that I am biased towards Groovy compared to other scripting languages such as Perl or Ruby. While other people's preferences and experiences may differ from mine, the integration between Groovy and Java code is thorough and smooth. It was also easy, coming from Java, to get familiar with the Groovy syntax. What made Groovy especially interesting for solving my problems was its integration with Ant, via AntBuilder.
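That AntBuilder integration is worth a quick illustration. The following is a minimal sketch (with hypothetical paths) of driving ordinary Ant tasks from a plain Groovy script:

    // Drive ordinary Ant tasks from a plain Groovy script via AntBuilder.
    def ant = new AntBuilder()
    ant.mkdir(dir: 'build/classes')
    ant.echo(message: 'compiling...')
    ant.javac(srcdir: 'src', destdir: 'build/classes', fork: true)
    ant.jar(destfile: 'build/app.jar', basedir: 'build/classes')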

Gant (shorthand for Groovy plus Ant) is a build tool and Groovy module. It is used for scripting Ant tasks using Groovy instead of XML to specify the build logic. A Gant build specification is just a Groovy script and so -- to quote Gant author Russel Winder -- can deliver "all the power of Groovy to bear directly, something not possible with Ant scripts." While it might be considered a competitor to Ant, Gant relies on Ant tasks to actually do things. Really it is an alternative way of doing builds using Ant, but using a programming language, instead of XML, to specify the build rules. Consider using Gant if your Ant XML file is becoming too complex, or if you need the features and control structures of a scripting language that cannot be easily expressed using the Ant syntax.
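To see why that matters, here is a minimal sketch of a Gant target (with a hypothetical target name and file layout) that mixes an ordinary Groovy loop and conditional with an Ant task -- exactly the kind of logic that is painful to express in Ant XML:

    // build.gant -- Groovy control flow wrapped around Ant tasks.
    target (cleanResults: 'delete result files, but keep anything ending in .keep') {
        new File('results').eachFile { f ->
            if (!f.name.endsWith('.keep')) {
                Ant.delete(file: f.path)    // a plain Ant task, called from Groovy
            }
        }
    }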

What you see is what you get -- toolset contents and prerequisites

All the code for the Groovy and Gant Toolset has been tested on Windows XP and Windows 2003 Server with Java 5 and Java 6, using Eclipse 3.2.2. In order to play with the toolset in its entirety, you will need to install Java 6, Axis2-1.3, XFire-1.2.6, CXF-2.0.2, Groovy-1.0, and Gant-0.3.1. But you can configure the toolset using a normal property file and exclude WSDL checks for frameworks you are not interested in, including Java 6's wsimport.

Installing the frameworks is simple: just extract their corresponding archive files. I assume you have either Java 5 or Java 6 running on your machine. You can use the Java 6 wsimport WSDL checker even with Java 5; you just need Java 6's rt.jar and tools.jar as an add-on in a directory of your choice. Installing Groovy and Gant usually takes a matter of minutes. If you follow my advice of placing Groovy add-ons -- including the gant-0.3.1.jar file -- in your <user.home>/.groovy/lib directory you do not even have to touch your original Groovy installation.

After the installation steps, keep in mind the following artifacts, which have been used or modified accordingly for my environment:

  • build.gant is the Gant script that contains Gant targets with Groovy syntax.
  • build.properties is needed to customize the Gant script and the Groovy WSDL checker.
  • A set of jar files (including the Gant module) to be placed in your <user.home>/.groovy/lib directory to enrich Groovy for the toolset.
  • An Eclipse Java project called JavaParser that is used to scan the generated Axis2 stub in order to modify it for use with HTTPS, chunking, etc. Two SSLProtocolFactory classes are included with this distribution for use with Axis2 and Java 6 (or ksoap2). They will be helpful when it comes to cipher suite handling.
  • An Eclipse Groovy project called WSDL-Checker that contains Groovy classes that check WSDL files by calling Web service framework code generators and analyzing the output files. WSDL-Checker also validates WSDL files using the CXF validator tool (if you have CXF installed and enabled). This project was created using the Eclipse-Groovy-Plugin.
  • A small Ant script that demonstrates how to call the Groovy WSDL checker from Ant as well as how to handle the checker's response.
  • A directory with sample WSDL files.
  • An Eclipse project to call a public Web service (GlobalWeather) provided as a JUnit test; it takes a generated Axis2 stub that has been modified by JavaParser (one JUnit test with Axis2 data binding "adb" and one with "xmlbeans").
  • An Eclipse workspace preferences file (Groovy_Gant.epf) to assist in setting all the needed Eclipse classpath variables and user libraries.
  • Two Groovy scripts that call public Web services by means of the Groovy SOAP extension -- just so you can see what Groovy has to offer in this area

Once you've set up your environment you'll be ready to begin familiarizing yourself with the Groovy and Gant Toolset. For those who need it, here is a quick primer on the client side of Web services, which you will need to understand in order to follow the discussion in the remainder of the article.

The client side of Web services

A WSDL file incorporates all the information that is needed to create a Web service client. In order to create the client, a Web service framework's code generator reads the WSDL file. Based on the definitions found in the WSDL file it creates a Java stub or proxy class that mimics the interface of the Web service. Depending on the switches, this stub code can become very large and include all the referenced data types as inner classes. A better option is to let the generator create a couple of classes in different packages that will be the basis for javadoc generation later on. All the resulting classes are Java source code that you can study. Nevertheless, it is generated code, and as such it has its own "look."

For functional and performance testing it might be necessary to modify the client stub. This is especially true when it comes to using HTTPS instead of HTTP, controlling the client-side debug level via a parameter (instead of using an XML config file), or dealing with HTTP chunking. (HTTP 1.1 supports chunked encoding, which allows HTTP messages to be broken up into several parts. Chunking is most often used by the server for responses, but clients can also chunk large requests.)

Particularly during testing, you may decide to just say "OK" to all self-signed SSL certificates. Therefore, when using HTTPS, you could need a specific SSL Socket Factory that allows for accepting all certificates in test mode. For performance tests, you might even want to control the number of opened connections, and you will want to re-use the same connection for different calls coming from the same client. This article is accompanied by a Java 5 parser and an Axis2 client stub modifier to deal with such scenarios.
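For that test-only "accept everything" behaviour, the standard JSSE building block is a permissive TrustManager. Here is a minimal sketch in Groovy, using only javax.net.ssl APIs (this is not the SSLProtocolFactory shipped with the toolset, just the underlying idea) -- never use it outside a test environment:

    import javax.net.ssl.HttpsURLConnection
    import javax.net.ssl.SSLContext
    import javax.net.ssl.TrustManager
    import javax.net.ssl.X509TrustManager
    import java.security.cert.X509Certificate

    // A TrustManager that accepts any certificate chain -- test use only!
    def trustAll = [
        checkClientTrusted: { X509Certificate[] chain, String authType -> },
        checkServerTrusted: { X509Certificate[] chain, String authType -> },
        getAcceptedIssuers: { -> new X509Certificate[0] }
    ] as X509TrustManager

    def ctx = SSLContext.getInstance('TLS')
    ctx.init(null, [trustAll] as TrustManager[], null)
    HttpsURLConnection.setDefaultSSLSocketFactory(ctx.socketFactory)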

Because you will get the parser/modifier as source code, you can customize it to "inject" some code for features not covered by my solution. What's more, you can use the whole code as a blueprint to write your own modifier for frameworks other than Axis2.

Gant targets for WSDL processing

A Gant script -- build.gant -- is the basis for all the features of the Groovy and Gant Toolset. Every task a user can perform is expressed as a Gant target. Using the directory that contains build.gant as your working dir, you can get an overview of all the supported targets, together with a description, by typing gant or gant -T in your command shell. The plain gant command refers to the "default" target you can implement; the gant -T feature (T stands for "table of contents") comes for free as part of the Gant implementation.

Listing 1. Calling Gant from the command line
>gant

        USAGE:
        gant available (checks your build.properties settings for available frameworks)
        gant wsdls (prints all wsdl files with their target endpoints, together with wsdl file 'shortnames' to be used by other targets regex)
        gant [-D "wsdl=<wsdl_shortname_regex>"] javagen (generate Axis2 based Java code from wsdl, compile, provide javadoc, and generate necessary jar/zip files)
        gant [-D "wsdl=<wsdl_shortname_regex>"] check (check one or more wsdl files for compatibility with installed code generators & validator)
        gant [-D "wsdl=<wsdl_shortname_regex>"] validate (validate one or more wsdl files using the CXF validator tool (if CXF is installed))
        gant [-D collect] alljars (generates a directory with all Axis2 client jars, src/javadoc-ZIPs, and xsb resource files if xmlbeans is used)
        gant -D "wsdl=<wsdl_shortname_regex>" [-D "replace=<old>,<new>"] ns2p (prints a namespace-to-package mapping for a wsdl file)

        Produced listings in your 'results' directory:
                output-<tool>.txt & error-<tool>.txt with infos/errors/warnings for the code generation

The following commands are available as features of the Groovy and Gant Toolset:

  • gant available examines your build.properties file for available and enabled frameworks.
  • gant wsdls examines the wsdl directory and produces a sorted list of all available WSDL files, together with a "short name" for every service to be used by other commands, as well as the target endpoint(s) for the service. (This command delivers output as shown in Listing 2.)
  • gant [-D "wsdl=<regex>"] check uses the code generators configured by build.properties to check all WSDL files, or a regular-expression-matching subset of them, and produces a summary of error/warning messages.
  • gant [-D "wsdl=<regex>"] validate is the same as check, but instead of the code generators only the CXF validator tool is used (if CXF is installed). It can check the WSDL file's conformance to the WSDL and SOAP schemas as well as look at some WS-I Basic Profile recommendations.
  • gant [-D "wsdl=<regex>"] javagen generates Axis2 source code for all WSDL files, or a regular-expression-matching subset thereof. It then modifies the client stub, compiles all the code, and generates javadoc information and a jar file with the compiled bytecode.
  • gant [-D collect] alljars generates a directory containing the bytecode-jar, src-zip, and javadoc-zip archives for all WSDL files. If the collect option is specified it will not call javagen for every WSDL file, but will only copy jars and ZIPs to the directory defined in build.properties.
  • gant -D "wsdl=<regex>" [-D "replace=<old>,<new>"] ns2p examines a WSDL file and prints a namespace-to-package mapping, using the replacement you provide with the optional "replace" parameter. This feature is useful if -- for test purposes -- you need access to application APIs that have no SOAP interface but only a business delegate (known from the Core J2EE Patterns). Unfortunately, these delegates will require other delegate jar files containing classes with just the same package names as those produced by your WSDL-based Axis2-generated code. Therefore, I improved the Axis2 code generator's package creation in order to assist you in generating "unique" package names for your SOAP parts that will not conflict with business delegate classes in your Eclipse project's classpath. In other words: you can even mix business delegate service calls and SOAP-based service calls in the same Eclipse project. A sample output would look like this:

>gant -D "wsdl=Logon" -D "replace=myProject,myProject_soap" ns2p
    ---> Provided regex matched 'Logon'
    using the following replacement: myProject=myProject_soap
    checking G:/JavaWorld/Groovy_Gant/wsdl/security/webservice/svc-logon/wsdl/Logon.wsdl
    http\://myProject.myCompany.com/runtime=com.myCompany.myProject_soap.runtime
    http\://myProject.myCompany.com/logon=com.myCompany.myProject_soap.logon
    http\://myProject.myCompany.com/remoting/soap/types/headerelements=com.myCompany.myProject_soap.remoting.soap.types.headerelements
    http\://myProject.myCompany.com/logon/generated/interf=com.myCompany.myProject_soap.logon.generated.interf

Copy this output into a file "ns2p.properties" and save it in the same directory where "Logon.wsdl" is located. The javagen command will then take care of this mapping when generating your Axis2 client stub for the "Logon" service.





All Gant targets are available as commands for the user and can be combined like so:

    gant -D "wsdl=GlobalW.+" -D collect javagen alljars


Listing 2. Sample output from the 'gant wsdls' command

G:\JavaWorld\Groovy_Gant>gant wsdls
list all wsdls

AWSECommerceService [amazon/webservice/AWSECommerceService/wsdl/AWSECommerceService.wsdl]
        [<soap:address location="http://soap.amazon.com/onca/soap?Service=AWSECommerceService"/>]

AddNumbers [handlers/webservice/svc-addNumber/wsdl/AddNumbers.wsdl]
        [<soap:address location="http://localhost:9000/handlers/AddNumbersService/AddNumbersPort" />]

Callback [callback/webservice/svc-callback/wsdl/Callback.wsdl]
        [<soap:address location="http://localhost:9005/CallbackContext/CallbackPort"/>
        <soap:address location="http://localhost:9000/SoapContext/SoapPort"/>]

CurrencyConverter [public-services/webservice/svc-lookup/wsdl/CurrencyConverter.wsdl]
        [<soap:address location="http://glkev.webs.innerhost.com/glkev_ws/Currencyws.asmx" />]

GlobalWeather [public-services/webservice/svc-lookup/wsdl/GlobalWeather.wsdl]
        [<soap:address location="http://www.webservicex.net/globalweather.asmx" />]

HelloWorld [hello-world/webservice/svc-hello-world/HelloWorld.wsdl]
        [<soap:address location="https://localhost:9001/SoapContext/SoapPort"/>]

JMSGreeterService [jms-greeter/webservice/svc-greeter/wsdl/JMSGreeterService.wsdl]
        []

Logon [security/webservice/svc-logon/wsdl/Logon.wsdl]
        [<soap:address location="http://localhost:4708/com/myCompany/myProject/logon/generated/interf/Logon"/>]

YellowPages [public-services/webservice/svc-lookup/wsdl/YellowPages.wsdl]
        [<soap:address location="http://ws.soatrader.com/delimiterbob.com/0.1/YellowPages"/>]

G:\JavaWorld\Groovy_Gant>

Gant configuration using build.properties

Like an Ant build, Gant execution is configured by a property file called build.properties, which resides in the same directory as build.gant, as shown in Listing 3.

Listing 3. Gant configuration using build.properties

# Property file to customize the Gant script for WSDL processing.

# ------------------------------------------------------------------------------
# Tell Gant which WSDL checker you have installed
# ------------------------------------------------------------------------------
axis.available=yes
xfire.available=yes
cxf.available=yes
wsimport.available=yes

# ------------------------------------------------------------------------------
# Axis2 information
# ------------------------------------------------------------------------------
axis2.install.dir=./ThirdPartyTools/axis2-1.3
axis2.lib.dir=${axis2.install.dir}/lib
axis2.version=1.3

# ------------------------------------------------------------------------------
# XFire information
# ------------------------------------------------------------------------------
xfire.install.dir=./ThirdPartyTools/xfire-1.2.6
xfire.lib.dir=${xfire.install.dir}/lib
xfire.version=1.2.6
ant.jar=./ThirdPartyTools/groovy-1.0/lib/ant-1.6.5.jar

# ------------------------------------------------------------------------------
# CXF information
# ------------------------------------------------------------------------------
cxf.install.dir=./ThirdPartyTools/apache-cxf-2.0.2-incubator
cxf.lib.dir=${cxf.install.dir}/lib
cxf.version=2.0.2

# ------------------------------------------------------------------------------
# Java6 information (only necessary if you are running Java5)
# ------------------------------------------------------------------------------
java6.lib.dir=./ThirdPartyTools/Java6/lib

# ------------------------------------------------------------------------------
# Java parser information
# ------------------------------------------------------------------------------
java-parser.install.dir=./JavaParser

# ------------------------------------------------------------------------------
# Code generation information
# ------------------------------------------------------------------------------
wsdl.root.dir=./wsdl
stubs.package.prefix=com.mycompany.myproject_axis
# if 'namespace-to-package replacement' is enabled,
# we try to provide a suitable mapping for Axis2
# but - consider as an alternative - using a 'ns2p.properties' file per wsdl file
# with the 'gant ns2p' command output as a basis
ns2p.replace=no
ns2p.namespace=myproject.mycompany.com
ns2p.package=com.mycompany.myproject_axis

# Axis2 data binding= adb|xmlbeans
axis.data.binding=adb
output.axis.file=./results/output-axis.txt
error.axis.file=./results/error-axis.txt
output.axis.codeGenerator=./GeneratedCode_axis

output.xfire.file=./results/output-xfire.txt
error.xfire.file=./results/error-xfire.txt
output.xfire.codeGenerator=./GeneratedCode_xfire

output.cxf.file=./results/output-cxf.txt
error.cxf.file=./results/error-cxf.txt
output.cxf.codeGenerator=./GeneratedCode_cxf

output.wsimport.file=./results/output-wsimport.txt
error.wsimport.file=./results/error-wsimport.txt
output.wsimport.codeGenerator=./GeneratedCode_wsimport

output.validator.file=./results/output-validator.txt
error.validator.file=./results/error-validator.txt

generated.code.dir=./GeneratedCode
client.jar.dir=./ClientJarFiles

# ------------------------------------------------------------------------------
# Javadoc information
# ------------------------------------------------------------------------------
javadoc.enabled=yes
javadoc.packageNames=com.*,tools.*,org.*,net.*
project.copyright=Copyright © 2007 - Mycompany.com

# ------------------------------------------------------------------------------
# Optional proxy information
# ------------------------------------------------------------------------------
proxy.enabled=no
proxy.host=myProxyHost
proxy.port=81

Expressing dependencies

With Gant you can express dependencies as you would with Ant, and you can call all the Ant tasks provided by Groovy's ant-1.6.5.jar (found in the <GROOVY_HOME>/lib directory). You do not have to create a Groovy AntBuilder for this purpose, because Gant does it for you.

Looking at the build.gant script excerpts in Listing 4, you can see that a Gant script is really a Groovy script.



Listing 4. Gant script (excerpt from build.gant)

import org.apache.commons.lang.StringUtils

...
boolean wsdlRegexProvided = false
boolean wsdlRegexOK = false
def wsdlFilesMatchingRegexList = []

def readProperties() {
    Ant.property(file: 'build.properties')
    def props = Ant.project.properties
    return props
}
def antProperty = readProperties()

long startTime = System.currentTimeMillis()

// Classpath for Java code generation tool
def java2wsdl_classpath = Ant.path {
    fileset(dir: antProperty.'axis2.lib.dir') {
        include(name: '*.jar')
    }
}

def generateJavaCode = { wsdlFile, javaSrcDir ->
    def ns2pValues = ''
    if (antProperty.'ns2p.replace'.equals("yes")) {
        //TODO: consider replacing regex by XmlSlurper
        def pattern = ~/^.*(?:targetNamespace=")([^"]+)"/   // '~' compiles the slashy string into a Pattern
        def matcher = pattern.matcher('')
        def ns2pMap = [:]
        new File(wsdlFile).eachLine {
            matcher.reset(it)
            while (matcher.find()) {
                String namespace = matcher.group(1)
                if (!ns2pMap.containsKey(namespace)) {
                    ns2pMap.put(namespace, (namespace-'http://').replace('/', '.').replaceAll(antProperty.'ns2p.namespace', antProperty.'ns2p.package'))
                }
            }
        }
        Iterator iter = ns2pMap.keySet().iterator()
        StringBuilder sbuf = new StringBuilder(256)
        while (iter.hasNext()) {
            String key = iter.next()
            String value = ns2pMap.get(key)
            sbuf.append(key)
            sbuf.append('=')
            sbuf.append(value)
            sbuf.append(',')
        }
        if (sbuf.length() > 0) {
            ns2pValues = sbuf.replace(sbuf.length()-1, sbuf.length(), '').toString()
        }
        else {
            println 'No namespace-to-package replacement possible: nothing matched'
        }
    }

    def wsdlFileDir = StringUtils.substringBeforeLast(wsdlFile, '/')
    def stubPackageSuffix = (wsdlFile - wsdlFileDir - '/' - '.wsdl').toLowerCase() + '.soap.stubs'
    def outputDir = javaSrcDir-'/src' // Axis2 will generate a /src dir for us
    def outFile = new File("${antProperty.'error.axis.file'}")
    outFile << wsdlFile + NL
    Ant.java(classname: 'org.apache.axis2.wsdl.WSDL2Java',
        classpath: java2wsdl_classpath,
        fork: true,
        output: "${antProperty.'output.axis.file'}",
        error: "${antProperty.'error.axis.file'}",
        append: "yes",
        resultproperty: "taskResult_$wsdlFile") {
            if (antProperty.'proxy.enabled'.equals("yes")) {
                println "Using proxy ${antProperty.'proxy.host'}:${antProperty.'proxy.port'}"
                // use the same proxy.host/proxy.port keys that build.properties defines
                jvmarg (value: "-Dhttp.proxyHost=${antProperty.'proxy.host'}")
                jvmarg (value: "-Dhttp.proxyPort=${antProperty.'proxy.port'}")
            }
            arg (value: '-uri')
            arg (value: wsdlFile)
            arg (value: '-d')
            arg (value: "${antProperty.'axis.data.binding'}")
            arg (value: '-o')
            arg (value: outputDir)
            arg (value: '-p')
            arg (value: "${antProperty.'stubs.package.prefix'}.$stubPackageSuffix")
            arg (value: '-u')
            arg (value: '-s')
            if (new File("$wsdlFileDir/ns2p.properties").exists()) {
                println 'using provided namespace-to-package mapping per wsdl file'
                arg (value: '-ns2p')
                arg (value: "$wsdlFileDir/ns2p.properties")
            }
            else if (antProperty.'ns2p.replace'.equals("yes")) {
                println 'using provided namespace-to-package mapping for ALL wsdl files that should be processed'
                arg (value: '-ns2p')
                arg (value: ns2pValues)
            }
            arg (value: '-t')
    }
    print "$wsdlFile "
    if (Ant.project.properties."taskResult_$wsdlFile" != '0') {
        println '... ERROR'
    }
    else {
        println '... OK'
    }
}




Gant specialities

You'll note that Gant targets are Groovy closures. Inside closures you can define new closures and assign them to variables, but you are not allowed to declare a method. So, the following is allowed:


target ( ) {
    def Y = { }
}


But this isn't:



target ( ) {
    def Y ( ) { }
}


Gant has two other characteristic features. First, every target has a name and a description, for instance



target (alljars: 'generate a directory with all client jar/src/doc/res archives') {
    depends(javagen)
    println alljars_description
    ...
}


To access this description string, you can use the variable "<target-name>_description", which is created by Gant for your convenience.



Second, if you want to hand over a WSDL regular expression "property" specified in the command line to the build.gant script, you can use the -D "wsdl=<regex>" idiom. The syntax is similar to that used for Java VM command-line property setting: java -Dproperty=value, but on Windows you really need the space after "-D", because Gant uses the Apache Commons CLI library. You can access this property in your Gant script by using the following "try/catch" trick (demonstrated for the property wsdl) that is evaluated in the init target:





target (init: 'check wsdl regex match') {
    try {
        // referencing 'wsdl' in this try block checks for the existence of this variable:
        // if you call Gant with 'gant -D "wsdl=<regex>"', the variable 'wsdl'
        // will be created by Gant for you, otherwise the catch block will be executed
        def wsdlRootDir = new File(antProperty.'wsdl.root.dir')
        def wsdlList = []
        boolean atLeastOneMatchingWsdlFileFound = false
        wsdlRootDir.eachFileRecurse{
            if (it.isFile() && it.name.endsWith('.wsdl')) {
                def wsdlFile = it.canonicalPath.replace('\\', '/')
                def startIndex = wsdlFile.indexOf('/wsdl') + 1
                def endIndex = wsdlFile.lastIndexOf('.')
                def shortname = StringUtils.substringAfterLast(wsdlFile, '/') - '.wsdl'
                if (shortname ==~ wsdl) {
                    println "---> Provided regex matched '$shortname'"
                    wsdlFilesMatchingRegexList << wsdlFile
                    atLeastOneMatchingWsdlFileFound = true
                    return
                }
            }
        }
        wsdlRegexProvided = true
        if (atLeastOneMatchingWsdlFileFound) {
            wsdlRegexOK = true
        }
        else {
            println "\tWarning: No wsdl file found that matches regex pattern '$wsdl'!"
            wsdlRegexOK = false
        }
    }
    catch (Exception e) {
        // 'wsdl' was not provided on the command line: no regex evaluation necessary
    }
}
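
For example, assuming a WSDL file named Logon.wsdl somewhere below wsdl.root.dir and a target named wsdlcheck (both placeholders for this illustration), a command-line invocation could look like this:

gant -D "wsdl=Logon" wsdlcheck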




In build.gant, Groovy syntax is used to handle foreach loops, proxy usage, and the regular expressions that narrow WSDL processing down to a few files. The WSDL checker and validator, however, are a set of pure Groovy files with the facade WsdlChecker.main() as their central entry point. Although Groovy comes with nice GStrings and a powerful "GDK" (Groovy Development Kit), it appears that Jakarta Commons Lang StringUtils is still a valuable add-on.



Checking WSDL files -- a Groovy task



As I mentioned in the introduction, the Groovy and Gant Toolset should help testers and developers check WSDL files coming from different sources for compatibility with a variable set of Web service frameworks, and validate the WSDL itself. Testing for compatibility, in this context, means calling every Web service framework's WSDL-to-Java code generator and checking the resulting files for exceptions, errors, and warnings. If CXF is installed and enabled in the build.properties file, you can run CXF's validator tool, too. All these tasks are performed by pure-Groovy classes. The design is simple: every framework's code generator and its output handling is mapped to a Groovy class. Each class provides two basic methods that are called by the WsdlChecker "controller" class:




  • def checkWsdl(wsdlURI) calls the corresponding code generator for a WSDL file.


  • boolean findErrorsOrWarnings() tells the "controller" if errors or warnings are found in the generator output files. If so, it prints them to the console.



In Java you would define an interface that your concrete (code generator strategy) classes would implement (you could also use an abstract class). In Groovy, however, you don't need to declare explicit interfaces, though you could. Instead, you create a list of "checker" objects (depending on which Web service frameworks are installed and enabled in build.properties), making use of Groovy closures; a minimal sketch follows, and Listing 5 shows the list in action.
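
Before diving into Listing 5, here is a sketch of how such a checker list might be assembled. It is illustrative only; the property names and console messages are assumptions, not the toolset's actual code:

// each "checker" is just a map of closures -- duck typing replaces the Java interface
def props = new Properties()
new File('build.properties').withInputStream { props.load(it) }

def availableCheckers = []
if (props.getProperty('axis2.enabled') == 'true') {
    availableCheckers << [
        checkWsdl           : { wsdlURI -> println "${wsdlURI}... Axis2 check done." },
        findErrorsOrWarnings: { -> false /* scan the generator output here */ }
    ]
}
// ... one entry per enabled framework (XFire, Java 6 wsimport, CXF, ...)
availableCheckers.each { it.checkWsdl('Logon.wsdl') }

Any object (or map) that responds to checkWsdl(wsdlURI) and findErrorsOrWarnings() can take part in the list; no interface declaration is required.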



Listing 5. Groovy closures in the WsdlChecker class (excerpt)




def wsdlcheck() {
   def sortedList = wsdlList.sort()
   for (wsdlLongName in sortedList) {
      def wsdlURI = wsdlLongName.suffix
      availableCheckers.each{ it.checkWsdl(wsdlURI) }
   }
   // print individual results and report overall result to Ant (errors only)
   new File('wsdl_errors.txt').write(Boolean.toString(findWsdlErrors()))
}

boolean findWsdlErrors() {
   println()
   boolean errorsFoundInAnyChecker = false
   availableCheckers.each{
      errorsFoundInAnyChecker = errorsFoundInAnyChecker |
         it.findErrorsOrWarnings()
   }
   return errorsFoundInAnyChecker
}




A typical output in your shell would look like this:





Logon.wsdl... Axis2 check done.
Logon.wsdl... XFire check done.
Logon.wsdl... Java 6 wsimport check done.
Logon.wsdl... CXF check done.
Logon.wsdl... CXF validator check done.

No Axis2 errors found

No XFire errors found

Java6 wsimport warnings found in:
    security/webservice/svc-logon/wsdl/Logon.wsdl:
    src-resolve: Cannot resolve the name 'impl_1:RuntimeException2' to a(n) 'type definition' component.

No Java6 wsimport errors found

No CXF errors found

CXF Validation passed



Using the Groovy-Eclipse-Plugin, you can start this WsdlChecker.groovy program just like a "normal" Java program. The plugin will create a separate Groovy entry in your Eclipse environment's Run settings.



Running the WSDL checker from inside Ant



It is also possible to run the WsdlChecker as part of an Ant target. To see how you can incorporate the Groovy classes in an Ant script, look at the following Ant build.xml, which is part of the Groovy-WSDL-Checker Eclipse project:



Listing 6. Calling the Groovy WSDL checker from inside Ant




<project name="wsdl2java check" default="check-wsdls" basedir=".">
    <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
    <taskdef name="groovy"
        classname="org.codehaus.groovy.ant.Groovy">
        <classpath location="lib/groovy-all-1.0.jar" />
    </taskdef>

    <target name="check-wsdls">
        <groovy src="src/tools/webservices/wsdl/checker/WsdlChecker.groovy">
            <classpath>
                <pathelement location="bin-groovy"/>
                <pathelement location="lib/commons-lang-2.3.jar"/>
            </classpath>
        </groovy>

        <loadfile property="wsdlErrorsFound"
                  srcfile="wsdl_errors.txt"/>
        <if>
            <istrue value="${wsdlErrorsFound}" />
            <then>
                <echo message="we found errors in wsdl files" />
            </then>
            <else>
                <echo message="NO errors found in wsdl files" />
            </else>
        </if>
    </target>

</project>




The Groovy checker is controlled by the same build.properties file as build.gant. If you want to work with the Groovy classes in the Eclipse project directly (instead of calling the checker via build.gant), build.properties also lets you set the regex that controls which WSDL files to check or validate, as well as the checker's mode parameter, which tells the tool whether to check for compatibility using the installed code generators or to validate with the CXF validator tool. Please note that I could not set any values in Ant's property hashtable that would be available to Ant afterwards, because in Groovy you are working with a copy of that table. Therefore, I chose to deliver the results via Ant's loadfile task. Working with Ant's limited condition handling capabilities just isn't my cup of tea, so I decided to use ant-contrib with its if-then-else feature to demonstrate the checker's result evaluation in Listing 6.



Code generation and modification -- Gant and Java play together



So far, you have only seen how to generate Java source code from WSDL files and check the generated output using different Web service frameworks. I'll conclude my introduction to the Groovy and Gant Toolset with a more concrete example based on the code generated by Axis2. The corresponding Gant target in the Groovy and Gant Toolset is called javagen.



To make working with the client stub more comfortable, I'll show you how to do much more than generate code with Axis2's wsdl2java tool. You will first generate source code, but then you will modify it with a Java parser/modifier. After that you will compile it using Sun's javac (so your JAVA_HOME environment variable should point to the JDK rather than the JRE root directory), generate Javadoc documentation, and finally produce a jar file containing the compiled code. If you're using xmlbeans you will also get an xsb resources jar file to include in your project's classpath.
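
To make that pipeline concrete, here is a condensed, hypothetical sketch of how such a javagen target could chain these steps with Gant's Ant builder. Directory names, the classpath reference, and the parser invocation are assumptions for illustration, not the toolset's actual code:

target (javagen: 'generate, modify, compile and package the Axis2 client stub') {
    // 1. WSDL -> Java: synchronous calls only (-s), ADB data binding (-d adb)
    Ant.java(classname: 'org.apache.axis2.wsdl.WSDL2Java', fork: true, classpathref: 'axis2.path') {
        arg (value: '-s')
        arg (value: '-d');   arg (value: 'adb')
        arg (value: '-uri'); arg (value: wsdlFile)
        arg (value: '-o');   arg (value: 'gen')
    }
    // 2. rewrite the generated stub with the JavaCC-based parser/modifier (japa.jar)
    Ant.java(jar: 'lib/japa.jar', fork: true) {
        arg (value: 'gen/src')
    }
    // 3. compile, document, and package the result
    Ant.javac(srcdir: 'gen/src', destdir: 'gen/classes', source: '1.5')
    Ant.javadoc(sourcepath: 'gen/src', destdir: 'gen/doc')
    Ant.jar(destfile: 'dist/client.jar', basedir: 'gen/classes')
}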



This jar file -- together with the parser/modifier bytecode delivered as japa.jar -- is the basis for client-side unit or performance tests. Code modification allows you to provide the user with a handy client stub factory that incorporates, and encapsulates, Log4J debug-level setting, HTTP/HTTPS handling (including chunking and adaptation to the proper cipher suite exchange), and connection control (for the underlying Jakarta httpclient).



About JavaCC



It took me some time to find a suitable Java parser. I wanted one that would be capable of handling Java 5 syntax, that was free, and that would let me plug in my source code modification feature. I chose JavaCC. You can also get the parser source, together with my modifications/add-ons, from the Resources section; see the JavaParser Eclipse project.



I extended two of the original files: JavaParser.java and DumpVisitor.java. The first one provides the entry point for Groovy as a main method and calls the DumpVisitor for the Axis2-generated client stub. As you may guess, the latter is based on the Gang of Four Visitor pattern. (You may have to look closely in order to spot the changes I made to achieve my goals, however. Anyone who has used the Visitor pattern knows that it is not easily digested!)
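
For readers who have not met the pattern before, the core idea behind a visitor like DumpVisitor boils down to double dispatch. The following is purely illustrative Groovy with invented class names, not the JavaCC-generated code:

// each AST node "accepts" a visitor and hands itself back for type-specific handling
class MethodDeclaration {
    String name
    def accept(visitor) { visitor.visit(this) }
}

class DumpVisitor {
    def visit(MethodDeclaration node) {
        // a real DumpVisitor would pretty-print (and, in my case, modify) the source here
        println "method: ${node.name}"
    }
}

new MethodDeclaration(name: 'createStub').accept(new DumpVisitor())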



Additionally, I provide a package, tools.webservices.wsdl.stubutil, that incorporates all the code that is required by the modified client stub to compile and work properly. You can generate all of this as a jar file; just use the japa.jardesc file, which is part of the JavaParser Eclipse project for your (and my) convenience.



For those who want to count on Java 6 Web service support based on java.net.URL, I have included a special HttpSSLSocketFactory.java class as part of the Eclipse JavaParser project that takes care of the cipher suite handling ("accept all certificates"). You can use an instance of this class to set the default SSLSocketFactory by calling, for example, javax.net.ssl.HttpsURLConnection.setDefaultSSLSocketFactory. Socket factories are used when creating sockets for secure HTTPS URL connections. Creating a client stub typically looks like this:



GlobalWeather stub = GlobalWeatherStub.createStub(ProtocolType.http, Level.INFO, Chunking.yes,
        "http://www.webservicex.net/globalweather.asmx", "Connection1");



Not-so-stupid bean property settings



My Gant script, build.gant, calls the Axis2 wsdl2java code generator with the flag -s (synchronous calls only) and the hint to use Axis2's default data binding, ADB (Axis2 Databinding Framework). This produces service-method calls whose parameters are wrapped as Java beans. Now imagine you have a lot of properties that are optional for your call, and one of the required properties tells the Web service to ignore the others. With a normal RPC call you would set the unused parameters to null (or, with xmlbeans, you can try the WSDL "nillable" attribute), but this is not possible with ADB: during serialization, every object property is tested for being not equal to null. Therefore, you might write many lines of source code that do nothing more than set a "default" value for every property of your bean before handing it over to your Axis2 stub. To overcome this "stupid work" I have written a smart little Java utility class, NOB (NullObjectBean), that demonstrates the power of reflection and Jakarta Commons -- the beanutils project in this case -- assuming that performance is not a requirement when you are setting your bean properties. Populating a bean with "default" values is now as easy as calling:



WebConferenceDTO webConferenceDTO = (WebConferenceDTO) NOB.create(new WebConferenceDTO());


As a current limitation, the NOB utility can only provide values for "beans" whose properties have a public default (empty) constructor, whose property types expose public final constants, or whose properties are of "simple" interface types. So the algorithm would not work, for instance, with the xmlbeans data binding, where interfaces with factory methods are generated. (But it could be extended to do so!) Nevertheless, with this class and the Groovy and Gant Toolset in place, nothing stops you from playing around with WSDL and the client-side part of Web services.



Listing 7. Helper class 'NOB' simplifies Axis2 ADB bean handling




package tools.webservices.wsdl.stubutil;

// imports reconstructed from context: reflection plus Jakarta Commons BeanUtils
// (in older setups, BeanMap may come from Commons Collections instead)
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Iterator;
import org.apache.commons.beanutils.BeanMap;
import org.apache.commons.beanutils.ConstructorUtils;

/**
 * Provide a special kind of "Null Object Bean (NOB)" where every primitive and non-primitive bean
 * property has a default value not equal 'null'.
 *
 * Whereas the Null Object pattern would provide an object with neutral behavior, we simply want
 * to abstract the handling of Axis2 ADB 'null' parameters away from the client.
 * Because this class is used in the Axis2 environment with ADB as the default data binding, it
 * checks if the bean that is handed over to our factory method implements
 * 'org.apache.axis2.databinding.ADBBean'. You can easily disable this check if you want to use this
 * class in another context, or replace the check if you need such a helper class for, e.g., XMLBeans
 * data binding.
 *
 * @author Klaus-Peter Berg (Klaus P. Berg@web.de)
 */
public class NOB {

    // tracks nesting depth of recursive create() calls; not thread-safe
    private static int recursionCounter = 0;

    private NOB() {
        // hide constructor, we are a helper class with static methods only
    }

    /**
     * Create a Java bean object that can be used as a "Null Object Bean" in Axis2 parameter calls
     * using ADB data binding.
     * We do our best to process property types that are interfaces and helper classes that provide
     * constants, too, but abstract classes are ignored. So, every effort is made to create a useful
     * "default" Null Object bean, but I cannot guarantee that Axis2.serialize() will be able to
     * really handle it properly ;-)
     *
     * @param emptyBean
     *            the bean that should be populated with non-null property values as a default
     * @return the "populated Null Object Bean" to be cast to the bean type you actually need
     */
    @SuppressWarnings("unchecked")
    public static Object create(final Object emptyBean) {
        recursionCounter++;
        final boolean firstCall = recursionCounter == 1;
        if (firstCall
                && !org.apache.axis2.databinding.ADBBean.class.isAssignableFrom(emptyBean
                        .getClass())) {
            throw new IllegalArgumentException(
                    "'emptyBean' argument must implement 'org.apache.axis2.databinding.ADBBean'");
        }
        final BeanMap beanMap = new BeanMap(emptyBean);
        final Iterator<String> keyIterator = beanMap.keyIterator();
        while (keyIterator.hasNext()) {
            final String propertyName = keyIterator.next();
            if (beanMap.get(propertyName) == null) {
                final Class propertyType = beanMap.getType(propertyName);
                try {
                    if (propertyType.isArray()) {
                        // arrays get a single-element array of their component type
                        final Class<?> componentType = propertyType.getComponentType();
                        final Object propertyArray = Array.newInstance(componentType, 1);
                        beanMap.put(propertyName, propertyArray);
                    } else {
                        // try the public default constructor and recurse into nested beans
                        final Object propertyValue = ConstructorUtils.invokeConstructor(
                                propertyType, new Object[0]);
                        beanMap.put(propertyName, create(propertyValue));
                    }
                } catch (final NoSuchMethodException e) {
                    if (propertyType.isInterface()) {
                        processInterfaceType(beanMap, propertyName, propertyType);
                    } else {
                        processHelperClassType(beanMap, propertyName, propertyType);
                    }
                } catch (final IllegalAccessException e) {
                    throw new RuntimeException(e);
                } catch (final InvocationTargetException e) {
                    throw new RuntimeException(e);
                } catch (final InstantiationException e) {
                    processInterfaceType(beanMap, propertyName, propertyType);
                }
            }
        }
        recursionCounter--;
        return beanMap.getBean();
    }

    private static void processHelperClassType(final BeanMap beanMap, final String propertyName,
            final Class propertyType) {
        // Class.newInstance() will throw an InstantiationException
        // if an attempt is made to create a new instance of the
        // class and the zero-argument constructor is not visible.
        // Therefore, we look for public final constants of the type we need,
        // because we assume that we process a helper class with constants (only)...
        final Field[] fields = propertyType.getDeclaredFields();
        for (final Field field : fields) {
            final int mod = field.getModifiers();
            final boolean acceptField = Modifier.isPublic(mod) && Modifier.isStatic(mod)
                    && Modifier.isFinal(mod) && field.getType().equals(propertyType);
            if (acceptField) {
                try {
                    beanMap.put(propertyName, field.get(null));
                    break; // we take the first constant that satisfies our needs
                } catch (final Exception e1) {
                    throw new RuntimeException(e1);
                }
            }
        }
    }

    private static void processInterfaceType(final BeanMap beanMap, final String propertyName,
            final Class propertyType) {
        if (propertyType.isInterface()) {
            // create a dynamic proxy whose methods all return a plain Object
            final Object interfaceToImplement = java.lang.reflect.Proxy.newProxyInstance(Thread
                    .currentThread().getContextClassLoader(), new Class[] { propertyType },
                    new java.lang.reflect.InvocationHandler() {
                        public Object invoke(@SuppressWarnings("unused") Object proxy,
                                @SuppressWarnings("unused") Method method,
                                @SuppressWarnings("unused") Object[] args) throws Throwable {
                            return new Object();
                        }
                    });
            beanMap.put(propertyName, interfaceToImplement);
        } else if (Modifier.isAbstract(propertyType.getModifiers())) {
            // ignore abstract class: we cannot create an instance of it ;-)
        }
    }
}