Wednesday

The OSGi - Open Services Gateway initiative

a quick technical description

I have mentioned OSGi before, mainly when referring to the ServiceMix 4 case. Although the main idea was expressed there, I was asked for a slightly deeper (and still short) description.

Overview

First of all it must be stated that OSGi is a specification of a service platform in Java, i.e. OSGi runs on top of a Java Runtime Environment. The core of OSGi defines a component and service model. This component model allows existing components and services to be activated, deactivated, updated and uninstalled, and new components / services to be installed dynamically.

The smallest unit of modularization in OSGi is a bundle. OSGi defines a registry which bundles can use to publish services or to look up services published by other bundles.

OSGi has several implementations, for example Knopflerfish OSGi or Apache Felix. Eclipse Equinox is currently the reference implementation of the OSGi specification.

Eclipse Equinox is the runtime environment on which the Eclipse IDE and Eclipse RCP applications are based. In Eclipse the smallest unit of modularization is a plug-in. The terms plug-in and bundle are interchangeable: an Eclipse plug-in is also an OSGi bundle and vice versa.

The OSGi bundles

OSGi bundles are .jar files with additional meta information. This meta information is stored in the MANIFEST.MF file.

Via this MANIFEST.MF a bundle can declare its dependencies on other bundles and services, and can also restrict the Java classes which should be available to other bundles. This restriction is based on package names and is enforced in OSGi via a special Java class loader. Access to the restricted classes is not possible, not even via reflection.
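As a sketch, a minimal MANIFEST.MF for a hypothetical bundle might look like this (the bundle name and package names are invented for illustration):

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api
Import-Package: org.osgi.framework
```

Only classes in the exported package com.example.greeter.api would be visible to other bundles; everything else in the bundle is hidden by the class loader.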

OSGi services

A bundle can register and use services in OSGi, and OSGi provides a central registry for this purpose. The power, and one of the most promising points, of the OSGi approach is that a service is defined by a Java interface (POJI - plain old Java interface).

Access to the service registry is performed via the class BundleContext. OSGi injects the BundleContext into each bundle during the bundle's startup. A bundle can also register a listener for ServiceEvents on the BundleContext; these are triggered, for example, when a new service is registered or unregistered.
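The registry-and-events idea can be sketched in plain Java. The ToyServiceRegistry below is an invented toy for illustration, not the real OSGi BundleContext API: a service is published under its interface, consumers look it up by interface only, and listeners are notified on registration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A toy service registry, NOT the real OSGi API; it only illustrates the idea
// of publishing a service under its interface and notifying listeners.
class ToyServiceRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();
    private final List<Consumer<Class<?>>> listeners = new ArrayList<>();

    // A "bundle" registers an implementation under its service interface.
    public <T> void register(Class<T> serviceInterface, T implementation) {
        services.put(serviceInterface, implementation);
        // Comparable to an OSGi ServiceEvent.REGISTERED notification.
        listeners.forEach(l -> l.accept(serviceInterface));
    }

    // Another "bundle" looks the service up by interface only; it never sees
    // the providing bundle's implementation classes.
    public <T> T lookup(Class<T> serviceInterface) {
        return serviceInterface.cast(services.get(serviceInterface));
    }

    public void addListener(Consumer<Class<?>> listener) {
        listeners.add(listener);
    }
}

interface Greeter { String greet(String name); }

public class RegistryDemo {
    public static void main(String[] args) {
        ToyServiceRegistry registry = new ToyServiceRegistry();
        registry.addListener(iface ->
                System.out.println("registered: " + iface.getSimpleName()));
        registry.register(Greeter.class, name -> "Hello, " + name);
        System.out.println(registry.lookup(Greeter.class).greet("OSGi"));
    }
}
```

The consumer only depends on the Greeter interface, which is the same decoupling the OSGi registry gives between bundles.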

OSGi dependency management

OSGi is responsible for the dependency management between bundles. These dependencies can be divided into:

  1. Bundle (Package) Dependencies
  2. Service Dependencies

OSGi reads the MANIFEST.MF of a bundle during its installation and ensures that all bundles it depends on are also loaded when the bundle is activated. If the dependencies are not met, the bundle is not loaded. Bundle / package dependencies are based on dependencies between standard Java classes, and if OSGi cannot resolve all of them the result is a ClassNotFoundException.

Since services in OSGi can be started and stopped dynamically, bundles must manage service dependencies themselves. Bundles can use service listeners to be informed when a service is started or stopped.

 

This is the high-level technical background that is useful as an intro before using, installing or adopting the OSGi approach. From this point on, use whichever implementation (Equinox, Felix, Knopflerfish or another) suits you.

Java Spring Integration Framework … or more on Camel

In a previous post I described the Apache Camel framework and also a ... revelation, a change of view on the integration issue. I'm glad to see that in Java Spring Based Integration Framework another usage of the framework came up. Nevertheless, there is a correction to be made. The above posting states that:

Apache Camel is a Spring based Integration Framework which implements the Enterprise Integration Patterns with powerful Bean Integration.

Correct, but not exactly ... Camel is not a Spring-based integration framework. It has been used for Spring integration, but not only for that. As I have stated,

Camel is a Java API that allows you to do message routing very easily. It implements many of the patterns found in Enterprise Integration Patterns. It doesn't require a container and can be run in any Java-based environment. Camel has a whole bunch of components (Bruce is showing a 6 x 10 grid with a component name in each grid. In other words, there's 60 components that Camel can use. Examples include: ActiveMQ, SQL, Velocity, File and iBATIS).

and the blog continues nicely…

Camel lets you create the Enterprise Integration Patterns to implement routing and mediation rules in either a Java based Domain Specific Language (or Fluent API), via Spring based Xml Configuration files or via the Scala DSL. This means you get smart completion of routing rules in your IDE whether in your Java, Scala or XML editor.

Apache Camel uses URIs so that it can easily work directly with any kind of Transport or messaging model such as HTTP, ActiveMQ, JMS, JBI, SCA, MINA or CXF Bus API together with working with pluggable Data Format options. Apache Camel is a small library which has minimal dependencies for easy embedding in any Java application.

Apache Camel can be used as a routing and mediation engine for the following projects:

  • Apache ActiveMQ which is the most popular and powerful open source message broker
  • Apache CXF which is a smart web services suite (JAX-WS)
  • Apache MINA a networking framework
  • Apache ServiceMix which is the most popular and powerful distributed open source ESB and JBI container

Friday

SOA and the Integration Patterns

SOA is simply a way to think when designing systems. Service oriented integration is a way to leverage investments in existing IT systems using the principles of SOA. As I have mentioned before, Apache ServiceMix is an enterprise service bus (ESB) that provides a platform for system integration utilizing reusable components in a service oriented manner. I have also written about ServiceMix 4.0, the next generation of the ServiceMix ESB.
Among the issues of the SOA case is the integration of infrastructure 'places', new ones and, more importantly, existing ones. Bruce Snyder gave the session 'Taking Apache Camel for a Ride', where he looked into the Enterprise Integration Patterns.

The revered Enterprise Integration Patterns (EIP) book is indispensable for handling messaging-based integration, but utilizing these patterns in your own code can be tedious, especially if you have to write the code from scratch every time. Wouldn't it be nice if you had a simple API for these patterns that makes this easier? Enter Apache Camel, a message routing and mediation engine that provides a POJO-based implementation of the EIP patterns and a wonderfully simple Domain Specific Language (DSL) for expressing message routes.

From this point of view, a question – revelation came up:

Camel is a Java API that allows you to do message routing very easily. It implements many of the patterns found in Enterprise Integration Patterns. It doesn't require a container and can be run in any Java-based environment. Camel has a whole bunch of components (Bruce is showing a 6 x 10 grid with a component name in each grid. In other words, there's 60 components that Camel can use. Examples include: ActiveMQ, SQL, Velocity, File and iBATIS).

The revelation

Chris Richardson asks "What's left inside of ServiceMix". Why use ServiceMix if you have Camel? ServiceMix is a container that can run standalone or inside an app server. You can run distributed ServiceMix as a federated ESB. Camel is much smaller and lighter weight and is really just a Java API. ServiceMix 4 changed from a JBI-based architecture to OSGi (based on Apache Felix). They also expect you to create your routes for ServiceMix 4 with Camel instead of XML. To process messages, you can use many different languages: BeanShell, JavaScript, Groovy, Python, PHP, Ruby, JSP EL, OGNL, SQL, XPath and XQuery.

Camel has a CamelContext that's similar to Spring's ApplicationContext. You can initialize it in Java and add your routes to it:

CamelContext context = new DefaultCamelContext();
context.addRoutes(new MyRouterBuilder());
context.start();

Or you can initialize it using XML:

    <camelContext>
        <package>com.acme.routes</package>
    </camelContext>

Camel's RouteBuilder contains a fluent API that allows you to define to/from endpoints and other criteria. At this point, Bruce is showing a number of examples using the Java API. He's showing a Content Based Router, a Message Filter, a Splitter, an Aggregator, a Message Translator, a Resequencer, a Throttler and a Delayer.
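To get a feel for how such a fluent routing API reads, here is a toy content-based router in plain Java. The class and method names are invented for illustration; this is not Camel's actual DSL, just a sketch of the fluent, first-match-wins style of routing rules.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Invented names for illustration only; Camel's real DSL is far richer.
public class ToyRouter {
    private final List<Predicate<String>> guards = new ArrayList<>();
    private final List<Consumer<String>> targets = new ArrayList<>();

    public ToyRouter when(Predicate<String> guard, Consumer<String> target) {
        guards.add(guard);
        targets.add(target);
        return this; // returning 'this' is what makes the API read fluently
    }

    public void route(String message) {
        for (int i = 0; i < guards.size(); i++) {
            if (guards.get(i).test(message)) {
                targets.get(i).accept(message);
                return; // first matching rule wins, like a content-based router
            }
        }
    }

    public static void main(String[] args) {
        new ToyRouter()
            .when(m -> m.startsWith("order:"), m -> System.out.println("to orders queue: " + m))
            .when(m -> true, m -> System.out.println("to dead letter: " + m))
            .route("order:42");
    }
}
```

Each `when(...)` call is one routing rule; chaining them is what gives the "smart completion in your IDE" effect the quote above mentions.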

Bruce spent the last 10 minutes doing a demo using Eclipse, m2eclipse, the camel-maven-plugin and ActiveMQ. It's funny to see a command-line guy like Bruce say he can't live w/o m2eclipse. I guess Maven's XML isn't so great after all.

Camel is built on top of Spring and has good integration. Apparently, the Camel developers tried to get it added to Spring, but the SpringSource guys didn't want it. Coincidentally, Spring Integration was released about a year later.

Camel also allows you to use "beans" and bind them to Camel Endpoints with annotations. For example:

public class Foo {
 
    @MessageDriven (uri="activemq:cheese")
    public void onCheese(String name) {
        ...
    }
}

Other annotations include @XPath, @Header and @EndpointInject.

Camel can also be used for BAM (Business Activity Monitoring). Rather than using RouteBuilder, you can use ActivityBuilder to listen for activities and create event notifications.


Thursday

ws serving... quick and dirties? part 1

From the architecture point of view, either in Web APIs or in the interoperability and connectivity parts of the app, there is the web services part. The architecture should be able to address and solve (or conform to) various constraints of the existing environment of applications, or the external resources that should be used by the services.

I will address various cases under the SOA and tools posts labels over the next days.

For now, the issue is the ability to create a set of web services the quick way. So, after the architecting we have just concluded on the types, number and functionality of the main ws set. There are lots of times (especially in the case of existing infrastructures) when we need:
  1. a fast way to create web services, and
  2. not to have to deal with J2EE containers (packaging/deployment ...).
There were two projects that I was involved in with this need, and in the second one I tried the OpenEJB option.

Apache OpenEJB is an embeddable and lightweight EJB 3.0 implementation that can be used as a standalone server or embedded into Tomcat, JUnit, TestNG, Eclipse, IntelliJ, Maven, Ant, and any IDE or application. OpenEJB is included in Apache Geronimo, IBM WebSphere Application Server CE, and Apple's WebObjects.

In other words, OpenEJB is a way in which you can have everything that is available in the EJB3 container, but 'outside' the container (well, not really, but close to it).

Consider the next step-by-step tutorial for creating a web service.
  • Create a sample project
  • Download openejb-3.0.zip from http://openejb.apache.org/download.html and unpack it
  • Add to the classpath of your sample project every .jar file that's in unpacked/lib
  • Create a META-INF folder inside your src directory and create a file inside that folder called ejb-jar.xml and just put in it this string "<ejb-jar/>" (without quotes)
  • Create now the interface that defines the contract of your web service and annotate it with
@WebService(targetNamespace="http://thecompanyname.com/wsdl")
public interface TestWs { ... }
  • Create the class that implements your interface and annotate it:
@Stateless
@WebService(portName = "TestPort",
    serviceName = "TestWsService",
    targetNamespace = "http://thecompanyname.com/wsdl",
    endpointInterface = "full.qualified.name.of.your.interface.TestWs")
public class TestImpl implements TestWs { ... }
  • To start the web service you need to do this:
public static void main(String[] args) throws Exception {
    // Context and InitialContext come from javax.naming
    Properties properties = new Properties();
    properties.setProperty(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.openejb.client.LocalInitialContextFactory");
    properties.setProperty("openejb.embedded.remotable", "true");
    new InitialContext(properties); // here the magic starts ...
}

That's all. After these steps, you have a stateless bean exposed as a web service at this location: http://localhost:4204/TestImpl?wsdl

You can use a dynamic proxy (or Apache Axis2) to generate a Java client for it. The dynamic proxy goes like this:


// this code can be placed right after the InitialContext call, or
// it can be in another program or in another thread in the same program.
// The program that made the call to InitialContext needs to be alive,
// in order to expose the EJB3 services

URL serviceUrl = new URL("http://localhost:4204/TestImpl?wsdl");
Service testService = Service.create(serviceUrl,
    new QName("http://thecompanyname.com/wsdl", "TestWsService"));

TestWs testWs = testService.getPort(TestWs.class);

// test the methods you've implemented ...
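Under the hood, a JAX-WS client proxy like the one above is built on the JDK's dynamic proxy mechanism: an interface implemented at runtime by an invocation handler. A minimal sketch using only the standard library (the interface and the canned reply are invented for illustration; a real JAX-WS proxy would marshal the call into a SOAP request):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface TestWs { String echo(String in); }

    public static TestWs makeProxy() {
        return (TestWs) Proxy.newProxyInstance(
                TestWs.class.getClassLoader(),
                new Class<?>[] { TestWs.class },
                (Object proxy, Method method, Object[] args) -> {
                    // A real JAX-WS proxy would turn this call into a SOAP
                    // request against the WSDL's endpoint; here we just answer.
                    return "handled " + method.getName() + "(" + args[0] + ")";
                });
    }

    public static void main(String[] args) {
        System.out.println(makeProxy().echo("hi")); // prints: handled echo(hi)
    }
}
```

The caller only ever sees the TestWs interface; whether the implementation is local, generated, or remote is hidden behind the proxy.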



Wondering how it works?


While requesting the initial context, OpenEJB automatically searches your classpath for classes containing EJB3-related annotations and automatically exposes them as it finds them, more or less as happens when you deploy an application as a .ear in any application server, but simpler and with great flexibility due to its embeddable architecture.

The reason (apart from the quickness above) for using OpenEJB is that, after doing the 2nd and 3rd bullets above, you get the following major features of OpenEJB:

Major features

  • Supports EJB 3.0, 2.1, 2.0, 1.1 in all modes; embedded, standalone or otherwise.
  • JAX-WS support
  • JMS support
  • J2EE connector support
  • Can be dropped into Tomcat 5 or 6, adding various JavaEE 5 and EJB 3.0 features to a standard Tomcat install.
  • CMP support is implemented over JPA, allowing you to freely mix CMP and JPA usage.
  • Complete support for Glassfish descriptors, allowing those users to test their applications embedded.
  • Incredibly flexible JNDI name support allows you to specify formats at macro and micro levels and imitate the format of other vendors.
  • Allows for easy testing and debugging in IDEs such as Eclipse, IntelliJ IDEA or NetBeans with no plugins required.
  • Usable in ordinary JUnit or other style test cases without complicated setup or external processes.
  • Validates applications entirely and reports all failures at once, with three selectable levels of detail, avoiding several hours' worth of "fix, recompile, redeploy, fail, repeat" cycles.
and therefore you have some... dirtiness on top of the quickness.
Of course, that was the case (for my case) that needed the EJB compliance (ohhh, the constraints of the architecture analysis).



In part 2 I'll show you the usage of another tool that helped me a lot in another project, just by using xml-xsls but in a very practical (quick & dirty) way...

Friday

agility continued: (b) Web API-ing

continuing from the previous post, after ESB-ing and having the requirements in place, we can see the possible usage of web APIs.

(b) Web API-ing

In general, there are various sides from which to view the same things. On the developer's side of the world, where things should be more binary, APIs are the usual way to make computers talk together and help to build the various layers of an application or of an application's components. They can help in building business layers on top of other business layers, and so on. On the other side of the world, the user's or client's side, where things tend to be less binary, APIs represent a strong two-way strategic tool ("two-way" because releasing an API impacts both the original developers of a web application and third-party developers building a third-party web application).

Let's start scoping with the third-party developer's view, which might be you in the (common) case of using an existing API, let's say an open source API. Using an API means that you are willing to programmatically get data out of a website, or another web application in general. For simplicity, let's stick to the website case (meaning that we may have a web exposition of infos/attributes etc. from a legacy system). For any third-party web developer, using an API is great because it allows them to:

  • back up existing data: say you are not so confident in the future of the website (application) you need to connect to; you can programmatically back up your data stored there through the API
  • leverage existing data and, for example, display it in a more innovative way: you could build a great third-party application relying on the website (application)'s API and provide an alternative way of exposing (front end) data (e.g. showing an interactive map with the members' names updated in real time)
  • catch new users: once you are done building your app, then (at least!) a percentage of the original website's (application's) users will be interested in it and will start using it
  • unload some server resources: some web services do things well and fast, so why bother reinventing the wheel and consuming server resources? For example, you could save storage for your pictures by using Flickr's API (or Amazon S3's), you could create charts using Google's Chart API etc.

If we look at things from the website's (application's) point of view, releasing an API is a strong signal and should be part of a well-thought-out strategy, aimed at:

  • extending reach by bringing in new users from third-party apps using its API (there's a mutually beneficial exchange of users between the original application and the third-party application, each one becoming a new entry point for the other)
  • becoming a local expert: this is a really important point, because simply, releasing a clever API can erase competition from your niche
  • fighting against web scraping: third-party web developers would now have an official and documented way to get data out of an application (of course they can still keep on scraping your web pages but releasing an API greatly simplifies their lives and encourages them to keep to the straight and narrow)

Becoming more practical, you can communicate the above simplified view in your proposal to the client, noting that:

  • APIs can be among a company's greatest assets
    → Customers invest heavily: buying, writing, learning
    → Cost to stop using an API can be prohibitive
    → Successful public APIs capture customers
  • Can also be among a company's greatest liabilities
    → Bad APIs result in an unending stream of support calls
  • Public APIs are forever – one chance to get it right

API design (or what to look for in your candidate APIs)

When releasing an API, developers should follow a comprehensive checklist to ensure third-party developers will start using it smoothly. In particular, it is essential to:

  • offer the widest range of protocols: some developers love REST, so you have to release a RESTful version of your API; other developers can't live without XML-RPC, so open an XML-RPC access to them; others have existing libraries built for SOAP, so release a SOAP API; others love JSON, so build a custom "JSON-RPC" API for them, etc.
  • document all the API's functions for every protocol: yes, this is a huge amount of work and this is where you will spend most of your time; documentation is key and the clearer it is, the more developers will use your API. Check out Flickr's or Facebook's API documentation: they're real jewels for web developers, filled with substantial code snippets helping them get quickly started with the API.
  • version your API: remember that once released, an API is forever. This means you can’t go backwards and change existing functions in your API because this would be a disaster for third-party applications built upon it and, therefore, for your users and your reputation. With an API, you can only go onwards. So it is extremely important to explicitly define versions for your API, for example by inserting version numbers right in the URL
  • make it secure: web APIs have to be secure, both technically and strategically. Technically because accessing the data should happen the way it was meant to be without disrupting existing processes on the server or causing mess in the database. Strategically because releasing a web API is opening a documented door to the very heart of your content. An easy way to enhance security for your API is to release API keys: third-party developers have to request a key (usually a hash-like sequence of digits and letters) and then explicitly show this key in every single call made through the API. Doing so allows you to better monitor traffic and usage of your API.
  • anticipate traffic surge: releasing an API means facing traffic spikes and, if everything goes well, facing a global increase in the load. It is crucial to anticipate this and avoid outages.
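Two of the points above (explicit version numbers in the URL and API keys shown on every call) can be sketched in a few lines of Java. Everything here is hypothetical: the path scheme, the parameter name api_key and the key value are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical gatekeeper: a version prefix in the URL plus an api_key
// that must accompany every call. Names and values are invented.
public class ApiGate {
    private static final Set<String> VALID_KEYS = Set.of("a1b2c3d4");

    // Accepts paths like "/v1/members" and rejects unknown versions or keys.
    static boolean authorize(String path, Map<String, String> params) {
        boolean versionOk = path.startsWith("/v1/"); // explicit, frozen version
        String key = params.get("api_key");
        boolean keyOk = key != null && VALID_KEYS.contains(key);
        return versionOk && keyOk;
    }

    public static void main(String[] args) {
        System.out.println(authorize("/v1/members", Map.of("api_key", "a1b2c3d4"))); // true
        System.out.println(authorize("/members", Map.of("api_key", "a1b2c3d4")));    // false
    }
}
```

Because the version lives in the path, a later /v2/ can change behaviour freely while /v1/ keeps serving existing third-party apps unchanged; the key check gives you a handle for monitoring and throttling per consumer.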

Again, in a practical mood, you can communicate that you are going to provide:

  • Service Provider Interface (SPI)
    → Plug-in interface enabling multiple implementations, e.g. the Java Cryptography Extension (JCE)
  • Write multiple plug-ins before release
    → If you write one, it probably won't support another
    → If you write two, it will support more with difficulty
    → If you write three, it will work fine
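The SPI idea above maps directly onto the JDK's own ServiceLoader mechanism. A hedged sketch (the Codec interface is invented for illustration): providers implement the interface, register themselves in a META-INF/services file, and clients discover them without ever naming a concrete class.

```java
import java.util.ServiceLoader;

// Sketch of an SPI in the JDK's own style: providers would implement this
// interface and be listed in META-INF/services/SpiDemo$Codec on the
// classpath; clients discover them with ServiceLoader.
public class SpiDemo {
    public interface Codec { String name(); }

    public static void main(String[] args) {
        // With no providers on the classpath this loop simply finds nothing,
        // which is exactly the point: the client code does not change when
        // a second or third plug-in is added later.
        for (Codec codec : ServiceLoader.load(Codec.class)) {
            System.out.println("found codec: " + codec.name());
        }
    }
}
```

This is the same pattern JCE uses for cryptographic providers: the SPI is fixed once, and implementations can multiply behind it.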

Protecting web property

Here is a simple schema to better understand what’s going on when releasing an API (as graphed here) :


In this example, website 1 is the usual interface developed to display the data (for example, website 1 is the website (application) that you intend to connect with). Website 2 is a parallel website accessing the website's (application's) data but not developed by the vendor of the website (application). For example, it's an aggregator designed to consolidate profiles from different online communities. For the website (application), releasing a web API is interesting to extend its reach and bring in new users (for example, users of the aggregator service at website 2 who didn't know about the website (application) until they discovered some of its members), but it also means organizing a documented data leak towards third-party apps, hence the need to focus on technical and strategic security.

From the practical point of view, you should

· Minimize Accessibility of Everything
→ Make classes and members as private as possible
→ Public classes should have no public fields (excepting constants)
→ (therefore) maximization of information hiding
→ Allow components/modules to be used, understood, built, tested and debugged independently
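The accessibility rules above can be sketched in a few lines. The class below is an invented example: the only public surface is one constant and two methods, while the representation stays private and can change freely.

```java
// Invented example: the public surface is a constant and two methods;
// the internal representation is hidden and free to change.
public final class Temperature {
    public static final double ABSOLUTE_ZERO_C = -273.15; // constants may be public

    private double celsius; // no other public fields

    public Temperature(double celsius) {
        if (celsius < ABSOLUTE_ZERO_C) {
            throw new IllegalArgumentException("below absolute zero: " + celsius);
        }
        this.celsius = celsius;
    }

    public double celsius() { return celsius; }

    // Callers never learn whether we store celsius or fahrenheit internally,
    // so the stored unit could be swapped without breaking any client.
    public double fahrenheit() { return celsius * 9.0 / 5.0 + 32.0; }
}
```

Because nothing outside the class can touch the field, the class can be understood, tested and debugged on its own, which is the independence the last bullet asks for.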

and of course always having in mind that:

· API (must) Coexist with Platform
→ obey standard naming conventions
→ avoid obsolete parameter and return types
→ mimic patterns in core APIs and language

· API friendly features
→ Generics, varargs, enums, default arguments

· API avoids traps and pitfalls
→ Finalizers, public static final arrays

· Intelligent Exception Design
→ Throw Exceptions to Indicate Exceptional Conditions
don't force the client to use exceptions for control flow:
private byte[] a = new byte[BUF_SIZE];

void processBuffer(ByteBuffer buf) {
    try {
        while (true) {
            buf.get(a);
            processBytes(a, BUF_SIZE);
        }
    } catch (BufferUnderflowException e) {
        int remaining = buf.remaining();
        buf.get(a, 0, remaining);
        processBytes(a, remaining);
    }
}
conversely, don’t fail silently
ThreadGroup.enumerate(Thread[] list)
→ Favor Unchecked Exceptions
Checked – client must take recovery action
Unchecked – programming error
Overuse of checked exceptions causes boilerplate
try {
    Foo f = (Foo) super.clone();
    ...
} catch (CloneNotSupportedException e) {
    // this can't happen, since we're Cloneable
    throw new AssertionError();
}
→ Include Failure-Capture Information in Exceptions
Allows diagnosis and repair or recovery
For unchecked exceptions, the message suffices
For checked exceptions, provide accessors
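The "failure-capture" advice can be sketched with an invented exception class: it records the values that caused the failure and exposes them through accessors, so a caller can diagnose or recover programmatically instead of parsing the message.

```java
// Invented example of failure-capture information: the exception records
// the values that caused the failure and exposes them through accessors.
public class InsufficientFundsException extends RuntimeException {
    private final long requested;
    private final long available;

    public InsufficientFundsException(long requested, long available) {
        super("requested " + requested + " but only " + available + " available");
        this.requested = requested;
        this.available = available;
    }

    public long requested() { return requested; }
    public long available() { return available; }

    // A caller can use this directly, e.g. to prompt for a smaller amount.
    public long shortfall() { return requested - available; }
}
```

For a checked variant the accessors matter even more, since the caller is expected to take recovery action.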

Thursday

More on OSGi...

I was introduced to OSGi quite by coincidence. While searching for standardized approaches and techniques for an architecture I was working on, I started looking into it. I was quite surprised by the ease with which it provided solutions to some of the problems that we face in web application development.

One of the concepts of OSGi that really intrigued me, was how it allows bundles to export services that can be consumed by other bundles without knowing anything about the exporting bundle.

OSGi takes care of this by introducing the Service Registry, where the exporting bundle registers the interfaces that it wants to expose, and any other bundle which wants to use those interfaces can just look them up in the registry to use the implementation.

The other concept of OSGi which I also found interesting was how OSGi uses version management to allow different versions of the same Java class to be used within the project.
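In MANIFEST.MF terms, this versioning is expressed with version attributes and ranges; a sketch with invented package names:

```text
Bundle-Version: 2.1.0
Export-Package: com.example.impl;version="2.1.0"
Import-Package: com.example.api;version="[1.0.0,2.0.0)"
```

One bundle importing the range [1.0.0,2.0.0) and another importing [2.0.0,3.0.0) can each be wired to a different version of the same package inside one VM, which is what makes side-by-side versions possible.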

The OSGi notion is crucial in web-oriented solutions and especially in SOA, ESB and Java-related projects. As reported, OSGi is beginning to be adopted by various tools. The ServiceMix 4 case is also used in the FUSE ESB 4 case, which is an enterprise version of Apache ServiceMix 4. ServiceMix 4 supports OSGi but does not fully support JBI, and therefore the FUSE team recommends that developers who have been using FUSE ESB 3 or JBI continue to use FUSE ESB 3. Users of OSGi should use FUSE ESB 4. The 4.1 release of both ServiceMix and FUSE ESB will fully support both JBI and OSGi.

FUSE ESB 4 continues to provide support for widely adopted integration standards like JBI 1.0 and JMS while also ensuring support for the latest emerging standards like OSGi and JBI 2.0. The new FUSE ESB 4 provides a single platform that makes it easy for developers to implement the integration patterns they need with the programming model of their choice.

FUSE ESB 4 includes the following features not included in FUSE ESB 3:

  • Normalized message router – a standard way for components to plug in and talk to the ESB, now supports multiple programming models in addition to JBI
  • OSGi framework – a faster and standard way to create, deploy, and easily provision integration components as modules
  • JBI 1.0 and 2.0 compatibility – support for the latest version of the emerging JBI 2.0 standard and backwards compatibility with JBI 1.0 so components developed for FUSE/ServiceMix 3.x can be seamlessly deployed onto FUSE ESB 4
  • Native Spring support – enables Spring users to quickly create components using Spring XML
  • FUSE Integration Designer – graphical user interface to integrate systems using Enterprise Integration Patterns (EIPs)

Actually, the Integration Designer is the difference from the standalone ServiceMix 4. And the approach provides a hint on using open source in a more professional manner.



The Agility issue

A comment on a previous post was the inspiration for this subject. Actually, the issue described had been puzzling me in the past. The context is agility. Agile in the wide notion of the term, not bound to the familiar notion of agile development, project management and so on.

In almost all projects related to software development, my main issue was the ability to be as agile as possible. Not only from the technological perspective, but also from the managerial and marketing orientation that any complete work needs to fulfil. Meaning that, apart from the freedom to use your favourite soft tools, kits etc. (an ability which usually comes afterwards, at a second stage), you might have to establish a prospect relation with customers and cooperators, and sell thyself ;) . Of course, all these apply when you are not part of a big company or organisation which either has an established name and image or has the proper persons for doing that.
The above approach (correct or not) was (and is) always nagging me when trying to propose a work to be done. Especially in past years, when open source wasn't so accepted by clients. Agility, but moreover stability and reliability, were (and are) the main issues that make clients sceptical about considering open source. Of course, the budgeting and real agility issues were the real allies. And all the above for the small to middle clients (which are the majority in my cases).

For the cases where you want to use/build SOA applications, you have to start considering the parts which will make your life easier.

(a)ESB-ing


The Enterprise Service Bus (ESB), which can be defined as middleware that brings together both integration technologies and runtime services to make business services widely available for reuse, offers the best solution for meeting today's enterprise application integration challenges by providing a software infrastructure that enables SOA. However, there are currently a number of different vendors that provide ESB solutions, some of which focus purely on SOAP/HTTP and others who provide multi-protocol capabilities. Because these vendors span the horizon from big enterprise generalists (app servers), to mid-tier enterprise integration providers, all the way to smaller, ESB/integration-specific providers, there doesn't seem to be an established consensus regarding the key requirements for an ESB.

As application architects, we have often thought about what requirements would define an ESB designed specifically to cater to the needs of an agile, enterprise integration model.

... main characteristics of an Agile ESB

The main criteria we were looking for in our ESB are as follows:

1. Standards based

While standards-based support is marketed by many ESB vendors, the support is provided externally, requiring developers to work with proprietary APIs when directly interacting with internal APIs. A good example is ServiceMix, which was designed with the requirement to eliminate product API lock-in by being built from the ground up to support the Java Business Integration specification (JSR 208). Our agile ESB needs to use JBI as a first-class citizen, but also support POJO deployment for ease of use and testing.

2. Flexible

Another characteristic of an agile ESB is the flexibility with which it can be deployed within an enterprise application integration framework: standalone, embedded in an application component, or as part of the services supported by an application server. This allows for component re-use throughout the enterprise. For example, the binding for a real-time data feed might be aggregated as a web service running within an application server, or streamed directly into a fat client on a trader's desk. An agile ESB should be able to run both types of configurations seamlessly.

To provide rapid prototyping, an agile ESB should support both scripting languages and embedded rule engines, allowing business processes to be modeled and deployed quickly.

3. Reliable

Our ESB needs to handle network outages and system failures and to be able to reroute message flows and requests to circumvent failures.

4. Breadth of Connectivity

An agile ESB must support both two-way reliable web services and Message-Oriented Middleware, and needs to co-operate seamlessly with EIS and custom components, such as batch files.


Having the above in mind, the stepping described in the getting-started post will give the ability, within the constraints that the requirements define, to have tooling freedom and open source approaches. For example, it will be nice to have an ESB that is vendor independent and open source, in order to promote user control of source code and direction. An added benefit of this is not only the zero purchase cost: the total cost of ownership will also be reduced where users are actively contributing to and maintaining our ESB. Making, therefore, a possible quick win more realisable.

Furthermore, the constraints and the business needs will provide the guidance for using other associated tools in order to develop and implement the necessary parts for web services, POJOs, etc. The tooling now goes one step further, to what can fit in the scheme proposed by the solution. And this is the reason that in the post comment I mentioned a separation of 'concerns', meaning an agile separation in order to extend the scope of freedom in the tooling agenda available.

Wednesday

Artificial intelligence took a step closer to becoming a reality

Artificial intelligence took a step closer to becoming a reality on Sunday as machines edged closer to passing a key test.

As reported at BuilderAU.com.au, the long-lived Turing test is close to being passed by a machine, awakening the AI dream:

This weekend saw artificial conversations entities (ACEs) compete for the 18th annual Loebner Prize, a US$100,000 jackpot for a machine that can fool judges into thinking it's human - a prize that has eluded entrants in the contest's 18-year history.

To clinch the prize, ACEs have to convince around a third of volunteers, a mixture made up mainly of computer scientists and journalists, that they are having a conversation with a person.

Every one of the five top ACEs tested at the University of Reading last week managed to convince at least one person into believing they were talking to a human rather than a machine.

But while one ACE managed to fool a quarter of the people during text chats, none managed to hit the crucial 30 per cent threshold laid down by British mathematician Alan Turing in 1950.

Turing said that a machine could be "attributed with intelligence" if it managed to have a "text-based conversation indistinguishable from a human".

Test organiser professor Kevin Warwick, from the University of Reading's school of systems engineering, said in a statement: "This has been a very exciting day with two of the machines getting very close to passing the Turing test for the first time.

"Although the machines aren't yet good enough to fool all of the people all of the time, they are certainly at the stage of fooling some of the people some of the time.

"Today's results actually show a more complex story than a straight pass or fail by one machine."

The program Elbot, created by Fred Roberts, was named as the best machine in this year's event and was awarded the US$3,000 Loebner Bronze Award.

Speaking to Builder AU sister site silicon.com recently, prize founder Hugh Loebner told silicon.com he believes the Turing Test will eventually be passed.

"Oh I think so - I don't know about the timescale," he said.

-------------

Although the Turing test is not quite a straightforward proof of AI, it nevertheless conforms with the notion of a Platonic approach to science (as stated in the Timaeus, science is trying to establish a reasonable and plausible story-telling of the observable world -- "εικώς μύθος"). From this point of view, the above AI approaches might hold as artificially intelligent even without the envelopes of internal world formation and cognition, or even an internal mind. If it can fool us, then let it be...


Tuesday

Open Source Alliances

According to Gartner, adoption of open-source BI being used as an enterprise-wide standard platform will triple by 2012 [Gartner, “Who’s Who in Open-Source Business Intelligence,” Andreas Bitterer, April 2008.]. To meet the widespread demand of open source BI applications, Pentaho and Spring will further integrate their respective product lines to equip developers, ISVs, and enterprises with a mature, scalable and proven embeddable open source Java reporting engine. The two organizations will also conduct joint marketing, sales and support activities. Pentaho, the established commercial open source alternative for business intelligence, is also a SpringSource Certified Solution Partner.

“The modular, easy-to-use Spring Portfolio has become the backbone for building and deploying a new generation of enterprise applications, including business intelligence and reporting,” said Mitch Ferguson, vice president of business development for SpringSource. “Our partnership with Pentaho delivers tight integration of SpringSource Enterprise with their powerful reporting capabilities, coupled with the SpringSource dm Server as a deployment option. The integration of our technologies and the ongoing partnership between our companies make this a very attractive offering for developers and enterprises alike.”

“Integrated reporting and business intelligence capabilities have become an expected and necessary component of almost any business application,” said Lars Nordwall, senior vice president of business development for Pentaho. “Our partnership with SpringSource makes it even easier for developers to deliver robust, integrated, and supportable applications with the SpringSource Application Platform.”

This was announced by SpringSource.

Testing Web Services

Over the past year I have been working on several projects that have kept me quite busy, so it has been a while since I've posted anything to my blog. But during some recent discussions with co-workers, I realized that a post about how we used SoapUI as a functional testing framework would be of value to service developers and architects, and that's what this post is going to describe. It is not intended to be a discussion of the pros and cons of testing web services with SoapUI versus JUnit versus other testing frameworks. It is simply a discussion of how we used SoapUI's Test Suite capabilities to validate that our service was meeting its requirements and functioning correctly after deployments.

The Service and How We Should Test It
The service that we were developing was one needed for basic authentication and user profile management. The data was stored in an Oracle database and there were requirements supplied from the business regarding data validation rules and business logic. Very standard stuff. Once the WSDL and database designs were completed, we discussed the best way to test our service once implementation began. We started developing test cases with JUnit and subsequently testing our deployments with SoapUI. After a very short period of time (and a better understanding of the capabilities of SoapUI), we realized that we were starting to duplicate our testing logic between the two frameworks. After some more consideration, it became apparent that through the creation of SoapUI Test Cases and Test Suites, we could perform the same level of testing as we could writing JUnit tests. The additional benefit that we got by using SoapUI was that we could validate deployments in various environments very quickly and easily. This allowed us to confidently notify service consumers that deployments were available and ready for consumption.

System Requirements
The requirements for using SoapUI for testing this kind of service are quite simple. First, you need SoapUI (obviously); for our needs, the open source version was more than capable. You need a service to test; our service was deployed to WebLogic, and I have it running locally. If you intend for your Test Suite to clean up the database after your test run completes, you will need the appropriate JDBC driver, placed in SoapUI's bin/ext directory. That's it. At this point, I am assuming you have a SoapUI project set up for your service; if not, it is pretty straightforward to create a project from a WSDL.

Test Suite Overview
Our service comprised about 20 different service operations, some of which write to the database, and all of which read from it. A test case was developed for each service operation, and the test steps within each test case were developed based on the requirements of the operation. Since the purpose of the service is user profile management, we needed to define a number of attributes that could be reused across the test cases for testing and assertions. For example, our service has one operation for registering a user and another for getting a user's profile data, so the data used for the registration test cases can subsequently be used to validate the return from the get-user operation. Also, at the end of the test run, we want to clean up any data that was created and used for the test, so we needed to keep track of users created, etc.

So, the next few sections will describe how to go about creating the Test Suite, defining and using Properties, and provide examples of different Test Steps and how they were developed.

Creating the Test Suite
There are two ways to create a new Test Suite within SoapUI. The first is to right click on the service interface in the left hand pane of the application and select "Generate TestSuite" as shown below:


You will be prompted to select the operations to include in the Test Suite, along with some options for how you want the Test Suite constructed. Using "One TestCase for each Operation" provides a good modular approach to developing your Test Suite, and it also makes reuse and Load Tests easier. If you have been using SoapUI to exercise the web service and have existing requests defined in the Service Interface, you can choose to reuse those requests or you can have SoapUI generate empty requests.

An alternative way to create a Test Suite would be to right-click on the project and select "New TestSuite". You will be prompted for a name and an empty Test Suite will be created. Test Cases can be added by right-clicking on the TestSuite and selecting "New TestCase".

Then within each Test Case, you can define one or more Test Steps. These represent the actual test code that gets executed. Shown below is a screen shot of the RegisterUser Test Case.

As you can see, there are 15 test steps. These map to the requirements that we were implementing against. They are executed in the order shown, and their order can be changed by right-clicking on a Test Step and selecting either Move Up or Move Down.

SoapUI supports numerous types of test steps including: SOAP, Groovy, Property Transfer, Property Steps, Conditional Gotos, and Delays. In our project we used SOAP steps, Property Transfer steps and Groovy Script steps. More details on the various Test Steps we used and how we used them can be found later in this post.

Defining Test Properties
As mentioned earlier, we need to define a number of properties that can be reused across test cases and assertions. These properties are defined at the Test Suite level.


Once we have a Test Suite, we can define properties that can be shared and reused across Test Cases within the Test Suite. If you open the Test Suite editor by double-clicking on the Test Suite in the left hand pane, you will see the following:


By clicking on the "Properties" button on the bottom of the window, you can define Test Suite properties. Below is an example of some of the properties we defined:


At the top of the Properties window, there is a button that lets you add or remove property name/value pairs. Here you can also import property values from a file. Our approach was to define all the properties we needed ahead of time (or at least as much as we knew about), and access or set their values as needed throughout the test cases.

This approach worked pretty well, except that we realized that if developers were executing the test suite concurrently we could have data collisions, especially with attributes like username and email address, which are unique across the system. So we needed a way to provide some level of uniqueness for the life of a test execution. We did this via the "Setup Script" option for the Test Suite. SoapUI executes the Setup Script at the start of each Test Suite execution, and the TearDown Script upon completion of the test run. These Setup and TearDown scripts are Groovy-based. So, we wrote a little script to append the current time in millis onto a property (called "coreName") and placed it in the Setup Script section of the Test Suite, as shown below.

Within the Setup script, you have access to certain variables including log, context, and testSuite. These variables give you access to the SoapUI framework to enable logging and access to test steps, properties, etc. As you can see above, the script retrieves the "coreName" property from the Test Suite properties, appends the current time in millis, and then sets the value of the "baseName" property. This baseName property is used in the creation of usernames and email addresses in numerous Test Cases and assertions throughout the Test Suite. The green arrow at the top left of the script editor window allows you test the script.
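The setup script itself only needs a couple of lines of Groovy. A minimal sketch of the idea described above (the `testSuite` and `log` variables are injected by SoapUI; the property names match our suite, but yours may differ):

```groovy
// runs once per Test Suite execution, before any test case
// read the static prefix and derive a name unique to this run
def coreName = testSuite.getPropertyValue("coreName")
def baseName = coreName + System.currentTimeMillis()
testSuite.setPropertyValue("baseName", baseName)
log.info("baseName for this run: " + baseName)
```

Because the millisecond timestamp changes on every run, two developers executing the suite at the same time will generate different usernames and email addresses, avoiding the collisions on unique columns.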

Test Steps

SOAP Test Step
As mentioned earlier, we used 3 different types of test steps throughout our Test Suite. Given we are testing web services, the most used test step type is the SOAP test step. A screen shot of the SOAP Test Step screen is shown below.


There are several areas of interest on this screen. The top area has a number of buttons for creating assertions, generating default requests based on the WSDL, executing the test step, etc. There is also a drop-down that lets you select the endpoint to which the request is sent; this is pre-populated from the WSDL.

The left-middle pane of the window shows the request payload for the SOAP request. You can see that there are some property placeholders being used in this request. When using Test Suite scoped properties, the format is "${#TestSuite#propertyName}". So you can see here that we are attempting to call the UserLogin operation, passing in the username property as the username and password request parameters.
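Since the screenshot is not reproduced here, an illustrative request body with placeholders would look something like the following (element names and namespace prefixes are hypothetical; SoapUI expands each placeholder before sending the request):

```xml
<!-- illustrative SOAP body; element names will differ per your WSDL -->
<soapenv:Body>
   <usr:UserLogin>
      <usr:username>${#TestSuite#username}</usr:username>
      <usr:password>${#TestSuite#password}</usr:password>
   </usr:UserLogin>
</soapenv:Body>
```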

The bottom portion of the window shows the assertions that we have defined for this test case. As with other testing frameworks, if the assertion fails, the test step fails. How this failure is handled by SoapUI is defined in the Test Step options. These can be accessed via right-clicking on the Test Step in the left hand pane of the SoapUI window. The option window is shown below.

As you can see, I have my test cases failing on error and aborting on error. Typically in my scenario, if one step failed, the following steps would usually fail, so I let the entire test case abort.

SoapUI supports a number of assertion types. The ones we typically used were SOAP response, ensuring no SOAP faults, and XPath expressions. There are many ways to define assertions and each developer has their own technique, but we have learned a lesson or two along the way. Initially we started out by using XPath to validate each of the return elements in the Response payload. So we would have an XPath expression that would extract a particular node of the response, and we would test for a certain value. The XPath configuration window is shown below:



In the top portion of the window, we declare any namespaces and define the XPath expression to execute against the response. In the bottom portion, the expected result is entered. There are two actions that can be performed in this window against the XPath expression. The first is "Select from Current": this executes the XPath expression against the response (which means the request needs to have been submitted) and places the result in the expected result window. This is handy for validating your XPath expression, not to mention having the expected result filled in for you. The second is the "Test" action: this tests the XPath expression against the expected result and asserts whether they match. If they do not match, the difference will be indicated.

In the assertion above, we are looking at the alt-email node in the response. We know that there are 2 alt-emails for this user (since we registered the user in an earlier test case). The interesting thing here is that we do not know which alt-email will be returned first in the response payload. There is no real way to distinguish between the nodes. Therefore, we put in an "or" and specify that the first or second alt-email matched the pre-defined Test Suite property "${#TestSuite#username}${#TestSuite#alt1EmailSuffix}". The expected result of this XPath expression is "true". We also defined another assertion, comparing the alt-email nodes with the second alt-email Test Suite property. This way we are ensuring that both alt-email addresses come back in the response payload for this operation. Another example of an XPath expression would be looking for a specific value in the response payload, as shown below:
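Since the screenshot is missing, a sketch of what such an "or" assertion looks like may help (the element name `alt-email` and the namespace are taken from this post; the exact path is illustrative). The expected result entered below the expression is simply `true`:

```
declare namespace ns='http://www.foo.com/schemas/user/1.1';
//ns:alt-email[1]/text() = '${#TestSuite#username}${#TestSuite#alt1EmailSuffix}'
  or //ns:alt-email[2]/text() = '${#TestSuite#username}${#TestSuite#alt1EmailSuffix}'
```

A second assertion of the same shape, against the alt2 suffix property, covers the other address, so together they verify both addresses are present regardless of order.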



Here, we expect the code element to have a text value of "US-02000".

One more interesting option on the assertion window is the "Allow Wildcards" checkbox. If we do not care about specific values being returned in the response, we can replace those values with an asterisk, and the assertion will pass regardless of what is contained there. This can be seen below:


Here, we don't care about the actual alt-email addresses coming back, we just care that there are two of them and they both have a validated attribute of "true".

One lesson learned here was that it was easy to get lazy and have our XPath expression select the entire response payload, then replace the dynamic fields in the expected response with the Test Suite property placeholders. This made test step development quicker and easier, but if the schema changed, we had to go through all the affected responses and reformat them. Changes would also have been required had we validated each node independently, but their impact would have been more focused and easier to manage.

Property Transfer Steps
Property Transfer steps allow you to extract a value from a source (SOAP request/response payload, another property, etc.) and place it into a target (SOAP request/response payload, another property, etc.). We used these to pull values from SOAP responses and place them into properties that could be used in subsequent Test Steps.

In our scenario, we had requirements that allowed a consuming application to request a login token that could be used in a "remember me" capacity, thereby allowing an application to log the user in with the token in place of username/password credentials. So we needed a test step to test the create-token operation, then store that token for testing a subsequent login operation. This was accomplished using a Property Transfer step that retrieved the token from the response payload of the CreateToken test step and stored it in a property called "loginToken". The configuration for that step is shown below:

As you can see, we extract the response value using an XPath expression and define the target as the Test Suite property "loginToken". This window allows multiple property transfer definitions, and also allows testing the transfer by clicking the green arrow button at the top of the window. The source drop-down allows you to select any test step defined in this test case, and the target drop-down does the same. This is one area where the order of your test steps comes into play: the test step whose response we extract the token value from executes before this property transfer step, and the step that requires the token follows it.
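With the screenshot absent, the transfer configuration amounts to roughly the following (the `token-value` element name appears in this post; the namespace and layout are illustrative):

```
Source:        CreateToken        Property: Response
Source XPath:  declare namespace xsd='http://www.foo.com/services/user/1.1/xsd';
               //xsd:token-value/text()
Target:        TestSuite          Property: loginToken
```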

Groovy Script Test Steps
In the test case described above, we requested a token in one test step, used a property transfer step to store the response, and then performed a login using the token in another test step. An additional requirement stated that if the token has expired, the login should fail. Since the token expiration can only be set to be some future time, we needed a way to manipulate the underlying data. We used a Groovy Script Step to access the database and update the expiration time of the token to some time in the past. The code is shown below:


import java.sql.Connection
import java.sql.DriverManager
import java.sql.PreparedStatement
import java.sql.SQLException

def Connection con = null
def PreparedStatement stmt = null
try {
    // the login token stored earlier by the property transfer step
    def tokenValue = testRunner.getTestCase().getTestSuite().getPropertyValue("loginToken")

    // connection settings live in a dedicated "ConnectionProperties" test step
    def propertyStep = testRunner.getTestCase().getTestSuite()
            .getTestCaseByName("ConnectionProperties")
            .getTestStepByName("ConnectionProperties")
    def dbDriver = propertyStep.getPropertyValue("dbDriver")
    def dbConnString = propertyStep.getPropertyValue("dbConnString")
    def dbHostName = propertyStep.getPropertyValue("dbHostName")
    def dbHostPort = propertyStep.getPropertyValue("dbHostPort")
    def dbServiceName = propertyStep.getPropertyValue("dbServiceName")
    def dbConnection = dbConnString + dbHostName + ":" + dbHostPort + ":" + dbServiceName
    def dbUsername = propertyStep.getPropertyValue("dbUsername")
    def dbPassword = propertyStep.getPropertyValue("dbPassword")

    log.info("Connecting to " + dbConnection)
    Class.forName(dbDriver)
    con = DriverManager.getConnection(dbConnection, dbUsername, dbPassword)

    // push the token's expiration into the past; a PreparedStatement
    // avoids building the SQL by string concatenation
    stmt = con.prepareStatement(
        "update token set expire_date = to_date('2007-01-01 12:00:00 AM', 'YYYY-MM-DD HH:MI:SS AM') where token_value = ?")
    stmt.setString(1, tokenValue)
    def count = stmt.executeUpdate()
    log.info("updated " + count + " rows in token table")
} catch (SQLException e) {
    log.error(e.getMessage())
} catch (ClassNotFoundException e) {
    log.error("JDBC driver not found: " + e.getMessage())
} finally {
    try { stmt?.close() } catch (Exception ignored) { }
    try { con?.close() } catch (Exception ignored) { }
}

As you can see in the code above, we are accessing both Test Suite scoped properties and properties defined in another test case. We decided to define our database connection properties in a separate test case/test step to make it easier to manage connection properties for multiple environments: we could define a property test step for each environment, and by simply renaming the desired step to the standard "ConnectionProperties" name, the appropriate connection properties would be used across the test suite. Also, the loginToken property that was defined and used in a previous test step is reused here. This is where defining properties comes in handy.

This test step sets the expiration of the token to some time in the past, so the following test step can validate that attempting to login with an expired token will fail.

Another way in which we used Groovy Scripts was to clean up the data we created and manipulated during our test execution. This script uses property values defined in previous steps to delete the appropriate records from the database. This is the final test step executed in this Test Suite.
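The cleanup script itself is not reproduced in this post, but a hedged sketch of what such a final step looks like follows the same pattern as the token-expiration script above (the `users` table and `username` column are illustrative; `testRunner` and `log` are the SoapUI-injected variables):

```groovy
import java.sql.Connection
import java.sql.DriverManager
import java.sql.PreparedStatement

def suite = testRunner.getTestCase().getTestSuite()
def props = suite.getTestCaseByName("ConnectionProperties")
                 .getTestStepByName("ConnectionProperties")

Class.forName(props.getPropertyValue("dbDriver"))
def url = props.getPropertyValue("dbConnString") + props.getPropertyValue("dbHostName") +
          ":" + props.getPropertyValue("dbHostPort") + ":" + props.getPropertyValue("dbServiceName")
Connection con = DriverManager.getConnection(url,
        props.getPropertyValue("dbUsername"), props.getPropertyValue("dbPassword"))
try {
    // delete the user registered under this run's unique baseName
    PreparedStatement stmt = con.prepareStatement("delete from users where username = ?")
    stmt.setString(1, suite.getPropertyValue("baseName"))
    log.info("deleted " + stmt.executeUpdate() + " user rows")
    stmt.close()
} finally {
    con.close()
}
```

Running this as the last test step means each execution leaves the database as it found it, which is what makes concurrent runs against a shared environment practical.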

Putting it All Together


After going through and implementing all the SOAP Steps, Property Transfer Steps, Groovy Steps, etc. we finally have a Test Suite. Here is a portion of what the Test Suite looks like (due to size, I couldn't fit it all).


If we execute the Test Suite by clicking the green arrow at the top of the window, we can see the results. Here is a portion of the results from my last run:

You can see that some test cases passed and some failed. Since some failed, the entire suite failed. In order to determine what failed, you can double-click on the Test Case and it will bring up the test case editor.

We see here that the 5th test step failed, and we can drill into that by double clicking on it.

Here, we can see that the 3rd assertion failed; let's take a look at what the problem was. There is a description just below the assertion indicating what failed. The message below is what you would read if you could see the entire string.

XPathContains assertion failed for path [declare namespace ns='http://www.foo.com/schemas/user/1.1';
declare namespace xsd='http://www.foo.com/services/user/1.1/xsd';
//xsd:login-success] : Exception:org.custommonkey.xmlunit.Diff
[different] Expected attribute value 'LASTNAME' but was 'FIRSTNAME' - comparing at /login-success[1]/login-user[1]/attribute[1]/@name to at /login-success[1]/login-user[1]/attribute[1]/@name

This says that the response expected a node with the attribute value of "LASTNAME", but actually received "FIRSTNAME". If we look at the details of our assertion, we see that the laziness I mentioned earlier came back to haunt us.

The basic problem is that there are multiple "attribute" nodes for a given user. These attribute nodes have a "name" attribute that defines the attribute. Since we are asserting the entire response payload, and these attributes can come back in any order, there is no guarantee that the response payload will match our assertion. The proper way to validate this test step would be to create an assertion for each node to ensure the response payload contains everything we expect.

Eventually, after cleaning up all the test cases, we get a test suite that executes cleanly. SoapUI also provides a command line interface and Maven plugins, so these tests can be integrated into your build environment as well.

Summary


SoapUI provides a robust set of capabilities to test web services not only during development, but also to test the validity of deployments. As with good testing practices, it does require upfront planning and good test case definition.

Not only were we able to test the functionality of our web services during development, but we were able to deploy and validate new versions of our service in a manner of minutes. This was a great time saver and allowed us to confidently declare our web services as available to consumers.

Thanks to Steve Smith for the original post.