Friday

Applications architecture and the S.O.A. alternative

Observing the developments in IT over the last few years (no more than ten), one will conclude that the definition of 'application' is rapidly changing. As the industry moves from an application-centric architecture to a Service Oriented Architecture (S.O.A.), the focus for building functionality is moving to Service Oriented Business Applications (apart from the various posts in "evolving through …" here, you can find a more concentrated description of this in SOBA).

This view provides the alternative of seeing applications as a set of services and components working together to fulfil a certain business need. The technology, specifications, and standards for specifying these components and services may vary, and the choice is up to you and, of course, your environment as a whole.

In various posts here I propose and explore, from time to time, this kind of approach: creating any needed components and services in a model-driven way, always with an S.O.A. extension in mind. Even in that case, however, you'll need some kind of programming model that defines interfaces, implementations, and references. The advantage of this alternative is that when building an application or solution based on services, these services can be both created specifically for the application and reused from existing systems and applications. The discussion behind all this is known as Service Component Architecture (SCA): the programming model that specifies how to create and implement services, how to reuse them, and how to assemble or compose services into solutions.
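
As a hedged, minimal illustration of those three ideas in plain Java (all names here are invented for the sketch, not part of any SCA API):

// An interface: the service contract
interface InvoiceService {
    double total(String orderId);
}

// Another contract, which the implementation below depends on (a reference)
interface TaxService {
    double taxFor(double net);
}

// An implementation: the component's actual business logic
class InvoiceServiceImpl implements InvoiceService {

    // The reference, to be satisfied by some other component's service
    private final TaxService taxService;

    InvoiceServiceImpl(TaxService taxService) {
        this.taxService = taxService;
    }

    public double total(String orderId) {
        double net = 100.0; // stand-in for a real order lookup
        return net + taxService.taxFor(net);
    }
}

SCA's contribution is to describe exactly this wiring (which implementation satisfies which reference) declaratively, outside the code.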

The reason for this post was an interesting project on which a friend involved in it asked for my opinion. The project is AspireRFID, one of the very, very few RFID implementations that run within the open source community, and therefore a candidate new niche for that domain. Although the existing approach is very interesting and very well analysed, I couldn't stop myself from saying that a more Service Oriented approach should be taken. This alternative might seem a little obscure, in the sense that the technicalities of the RFID area might relate to a different business model than the well-explored 'B' in SOBA. Actually, this is not the case. The reason is that this area is a promising candidate for that 'B': its limited relations to ERPs, CRMs or whatever are, until now, the well-defined areas and developments where the various open source alternatives (aka OSGi, ServiceMix, FUSE, Camel, Newton, Jini, Tuscany etc.) could be used. So, in that case, the solution might not start as an S.O.A., but as a step towards one.

The who's and why's of Service Component Architecture (SCA)

As has been stated here and here, mentioning an alternative is a crucial concern when the meaning that one gives to S.O.A. is involved. SCA models the "A" in S.O.A. (and therefore not SOA). It provides a model for service construction, service assembly, and deployment. SCA supports components defined in multiple languages, deployed in multiple container technologies, and accessed through multiple service access methods. This means that with the SCA approach we have a set of specifications, addressing both runtime vendors and enterprise developers/architects, for building service-oriented enterprise applications.

The SCA specifications try to define a model for building Service-Oriented (Business) Applications (SOBA). These specifications are based on the following design principles:

  • Independent of programming language or underlying implementation.
  • Independent of host container technology.
  • Loose coupling between components.
  • Metadata vendor-independence.
  • Security, transaction and reliability modelled as policies.
  • Recursive composition.

The first draft of SCA was published in November 2005 by the Open SOA Collaboration (OSOA). OSOA is a consortium of vendors and companies interested in supporting or building an SOA; among them are IBM, Oracle, SAP, Sun, and Tibco. In March 2007 the final 1.0 set of specifications was released. In July 2007 the specifications were adopted by OASIS, and they are being developed further in the OASIS Open Composite Services Architecture (Open CSA) member section.

Current SCA specifications at Open CSA

The current set of SCA specifications hosted on the OSOA site includes:

  • SCA Assembly Model
  • SCA Policy Framework
  • SCA Java Client & Implementation
  • SCA BPEL Client & Implementation
  • SCA Spring Client & Implementation
  • SCA C++ Client & Implementation
  • SCA Web Services Binding
  • SCA JMS Binding

The important element of the SCA approach is that, for the first time, we have a set of specifications that address how S.O.A. can be built rather than what S.O.A. is about. Let's have a bird's eye view of SCA as a technology.

The four main elements of SCA, aka the related specifications to start with

The set of specifications can be split into four main elements: the assembly model specification, the component implementation specifications, the binding specifications, and the policy framework specification. These can therefore be a good starting and guiding point for the analysis and/or re-engineering needed for the project or solution-application at hand.

Assembly model specification: this model defines how to specify the structure of a composite application. It defines which services are assembled into a SOBA and with which components these services are implemented. Each component can be a composite itself and be implemented by combining the services of one or more other components (i.e. recursive composition). So, in short, the assembly model specifies the components, the interfaces (services) of each component, and the references between components, in a unified, language-independent way. The assembly model is defined with XML files.

Using the SCA assembly model, we first have to define the recursive assembly of components into services. Recursive assembly allows fine-grained components to be wired into what in SCA terms are called composites. These composites can then be used as components in higher-level composites and applications. The type of a component generally defines the services exposed by the component, the properties that can be configured on the component, and the dependencies the component has on services exposed by other components.

An example of an SCA assembly model is shown in the diagram below:


Diagram 1: SCA assembly model

In the diagram above, Composite X is composed of two components, Component Y and Component Z, which are themselves composites. Composite X exposes its service using web services.

Composite Y is composed of a Java component, a Groovy component and a Jython component. The service offered by the Java component is promoted outside the composite. The Java component depends on services offered by the Groovy and Jython components; in SCA terminology these dependencies are called references. All the references of the Java component are satisfied by services offered by other components within the same composite. However, the reference of the Groovy component has been promoted outside the composite and will have to be satisfied in a context within which Composite Y is used as a component. In the diagram above this is achieved using the service promoted by Composite Z.

Composite Z is composed of a Java component, a BPEL component and a Spring component. The service offered by the BPEL component is promoted, to be used when the composite is used as a component in a higher-level composite.

As you can see, SCA allows the building of compositional service-oriented applications using components built on heterogeneous technologies and platforms. The SCA assembly model is generally expressed in an XML-based language called the Service Component Description Language (SCDL).

Component implementation specifications: these specifications define how a component is actually written in a particular programming language. Components are the main building blocks of an application built using SCA. A component is a running instance of an implementation that has been appropriately configured. The implementation provides the actual function of the component and can be defined with a Java class, a BPEL process, a Spring bean, or C++ or C code. Several containers implementing the SCA standard (meaning that they can run SCA components) support additional implementation types like .NET, OSGi bundles, etc. In theory a component can be implemented with any technology, as long as it relies on a common set of abstractions, e.g. services, references, properties, and bindings.

The component implementation technologies currently described are those of the specifications listed above: Java, BPEL, Spring, and C++; a sketch of the Java flavour follows below.
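
Here is a hedged sketch (not from the original post) of what a Java component implementation looks like with the OSOA Java annotations (org.osoa.sca.annotations); all class, package and member names are illustrative:

package org.example.sca;

import org.osoa.sca.annotations.Property;
import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// The contract this component exposes as a service
interface GreetingService {
    String greet(String name);
}

// A contract this component depends on (a reference)
interface NameFormatter {
    String format(String name);
}

@Service(GreetingService.class)
public class GreetingComponent implements GreetingService {

    // Configurable from the composite XML via a property element
    @Property
    protected String salutation = "Hi";

    // Wired by the SCA runtime to another component's service
    @Reference
    protected NameFormatter formatter;

    public String greet(String name) {
        return salutation + ", " + formatter.format(name) + "!";
    }
}

The runtime injects the reference and the property value according to the composite's XML configuration, so the class itself stays a plain POJO.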

Binding specifications: these specifications define how the services published by a component can be accessed. Binding types can be configured both for external systems and for internal wires between components. The current binding types described by OSOA are bindings using SOAP (the web services binding), JMS, EJB, and JCA. Several containers implementing the SCA standard support additional binding types like RMI, Atom, JSON, etc. An SCA runtime should implement at least the SCA service and web service binding types. The SCA service binding type should only be used for communication between composites and components within an SCA domain. The way in which this binding type is implemented is not defined by the SCA specification, and it can be implemented in different ways by different SCA runtimes; it is not intended to be interoperable. For interoperability, the standardized binding types like web services have to be used.

The officially described binding types are, as above: the web services (SOAP) binding, JMS, EJB, and JCA.

Policy framework specification: this describes how to add non-functional requirements to services. Two kinds of policies exist: interaction and implementation policies. Interaction policies affect the contract between a service requestor and a service provider; examples of such policies are message protection, authentication, and reliable messaging. Implementation policies affect the contract between a component and its container; examples of such policies are authorization and transaction strategies.

Summary

SCA provides a set of specifications, addressing both runtime vendors and enterprise developers/architects, for building service-oriented enterprise applications. For the first time we have a set of specifications that address how SOA can be built rather than what SOA is about. I have provided a bird's eye view of SCA as a technology.

The related concepts in detail

As stated before, the main building block of a Service-Oriented Business Application is the component. Figure 1 exhibits the elements of a component. A component consists of a configured piece of implementation providing some business function. An implementation can be specified in any technology, including other SCA composites. The business function of a component is published as a service. The implementation can have dependencies on the services of other components; we call these dependencies references. Implementations can have properties, which are set by the component (i.e. they are set in the XML configuration of the component).


Figure 1 - SCA Component diagram (taken from the SCA Assembly Model specification)

Components can be combined into assemblies, thereby forming a business solution. From this point we get the meaning of the 'B', i.e. what Business may refer to. SCA calls these assemblies composites. As shown in Figure 2, a composite consists of multiple components connected by wires. Wires connect a reference and a service and specify a binding type for this connection. Services of components can be promoted, i.e. they can be defined as a service of the composite; the same holds for references. So, in principle, a composite is a component implemented with other components and wires. As stated before, components can in turn be implemented with composites, thereby providing a way for the hierarchical construction of a business solution, where high-level services (often called composite services) are implemented internally by sets of lower-level services.


Figure 2 - SCA Composite diagram (taken from the SCA Assembly Model specification)

The service (or interface, if you like) of a component can be specified with a Java interface or a WSDL portType. Such a service description specifies which operations can be called on the composite, including their parameters and return types. For each service the method of access can be defined; as seen before, SCA calls this a binding type.
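
For instance, a sketch of a service contract as a plain Java interface; the @Remotable annotation from the OSOA Java annotations marks it as remotely accessible (names, as in the earlier sketch, are hypothetical):

package org.example.sca;

import org.osoa.sca.annotations.Remotable;

// Specifies the operations, parameters and return types of the service
@Remotable
public interface GreetingService {
    String greet(String name);
}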


Figure 3 - Example XML structure defining a composite

Figure 3 exhibits an example XML structure defining an SCA composite. It is not completely filled in, but it gives a rough idea of what the configuration looks like. The implementation tag of a component is configured based on the chosen implementation technology, e.g. Java, BPEL, etc. In the case of Java, the implementation tag defines the Java class implementing the component.

Deployment and the Service Component Architecture: contributions, clouds, and nodes

SCA composites are deployed within an SCA domain. An SCA domain (as shown in Figure 4) represents a set of services providing an area of business functionality that is controlled by a single organization. A single SCA domain defines the boundary of visibility for all SCA mechanisms. For example, SCA service bindings (recall the SCA binding types explained earlier) only work within a single SCA domain. Connections to services outside the domain must use the standardized binding types like web service technology. The SCA policy definitions also only work within the context of a single domain. In general, external clients of a service that is developed and deployed using SCA should not be able to tell that SCA was used to implement the service; it is an implementation detail.


Figure 4 - SCA Domain diagram (taken from the SCA Assembly Model specification)

An SCA domain is usually configured using XML files. However, an SCA runtime may also allow the dynamic modification of the configuration at runtime.

An SCA domain may require a large number of different artifacts in order to work. In general, these artifacts consist of the XML configuration of the composites, components, wires, services, and references. Of course, we also need the implementations of the components, specified in all kinds of technologies (e.g. Java, C++, BPEL, etc.). To bundle these artifacts, SCA defines an interoperable packaging format for contributions (ZIP). SCA runtimes may also support other packaging formats like JAR, EAR, WAR, OSGi bundles, etc. Each contribution complies with at least the following characteristics:

  • It must be possible to present the artifacts of the packaging to SCA as a hierarchy of resources based off of a single root.
  • A directory resource should exist at the root of the hierarchy named META-INF.
  • A document should exist directly under the META-INF directory named 'sca-contribution.xml', which lists the SCA composites within the contribution that are runnable.

A goal of the SCA approach to deployment is that the contents of a contribution should not need to be modified in order to install and use the contents of the contribution in a domain.
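
For instance, a minimal contribution might be laid out as follows (everything except META-INF/sca-contribution.xml is an illustrative name):

my-contribution.zip
    META-INF/
        sca-contribution.xml        <- lists the runnable composites
    greeting.composite              <- composite definition (XML)
    org/example/sca/GreetingComponent.class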

An SCA domain can be distributed over a series of interconnected runtime nodes, i.e. it is possible to define a distributed SCA domain. This enables the creation of a cloud consisting of different nodes, each running a contribution within the same SCA domain. Nodes can run on separate physical systems. Composites running on nodes can dynamically connect to other composites based on the composite name instead of its location (no matter in which node the other composite lives), because they all run in the same domain. This also allows for dynamic scaling, load balancing, etc. Figure 5 shows an example distributed SCA domain, running on three different nodes. In reality you don't need different nodes (or even different components) for this example, but it makes the idea clear.


Figure 5 - A distributed SCA domain running on multiple nodes (taken from the Tuscany documentation)

Implementations of the Service Component Architecture

The SCA is broadly supported with both commercial and open source implementations. 

Open source implementations:
  • Apache Tuscany (the official reference implementation). The current version is 1.4, released in January 2009. They are also working on a 2.0 version, which they aim to run in an OSGi-enabled environment.
  • The Newton Project.
  • Fabric3. A platform for developing, assembling and managing distributed applications based on the SCA standards. Fabric3 leverages SCA to provide a standard, simplified programming model for creating services and assembling them into applications.

Commercial implementations:

Concluding… 

Concluding the analysis of this kind of alternative, concerning SCA in combination with the overall, and therefore holistic, Reference Architecture for S.O.A., we have exposed a step towards a complete architectural design and a guide for implementing the proper scheme for the case at hand. Both the concerns covered in the Reference Architecture approach and the SCA approach presented here, together with the strategic decisions that the organisation or the project has in 'mind', can provide specific specifications (in the technical, business and conceptual analysis) about the meanings and approaches that the effort is aiming for. The definition of the 'B's and 'A's is therefore the mission of these approaches, along with the clarification of the meaning of the entities involved from a context-based view.

So, as far as the S.O.A. 'aspects' that have to do with the proper definition of Architecture are concerned, we may have a logical solution through divide and conquer. Having, as a next step after the Reference Architecture's decisions, the Service Component Architecture (SCA), which provides the means for analysing and decomposing the Service Oriented Business Applications (SOBA) involved, we get a clearer decomposition and definition of the 'B' (the business meaning in the case at hand) from the 'A' (of the whole architecture and organisation or project).


Thursday

The Class Casting Exception in OSGi Environments

In previous posts, I have dealt with the OSGi approach to building services, both the general approach and using iPOJO. These are fine if one is willing to implement a set of service bundles or some project from scratch, or to develop a solution that is separate from, or independent of, existing IT assets. Unfortunately, this is rarely the case. Usually there is a need either to consume some objects from existing applications or, just the other way around, to consume objects created in the OSGi environment. And real life demands both. The first issue is normally easy to deal with, and is actually the way you should go. But keeping in mind the S.O.A. enablement that your project, solution, architecture, code, etc. should comply with, there is an issue on the horizon.

Demystifying the Issue one level down

Consider the common situation: you have a non-OSGi Java application in which an OSGi-based application is embedded, and you want to consume objects created in the OSGi-based application (an OSGi Service, for example).

This situation presents a problem: when you try to consume (invoke methods on, assign to local variables, etc.) the objects created inside the OSGi environment from the non-OSGi application, you get a ClassCastException.

The reason for this exception is that any class loaded inside the OSGi environment is loaded by a special class loader provided by the OSGi framework, which is different from the class loader used by the non-OSGi application. So even if the class in the non-OSGi application is exactly the same as the one in the OSGi-based application, the objects will still be treated as being of different types, since their classes were loaded by different class loaders.
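
A minimal standalone sketch of the underlying mechanics, outside any OSGi machinery (it assumes some app.jar on disk containing org.test.A; the names follow the example further below):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        URL[] urls = { new File("app.jar").toURI().toURL() };
        // Two isolated class loaders (null parent), each loading the same bytes
        ClassLoader cl1 = new URLClassLoader(urls, null);
        ClassLoader cl2 = new URLClassLoader(urls, null);
        Class<?> c1 = cl1.loadClass("org.test.A");
        Class<?> c2 = cl2.loadClass("org.test.A");
        System.out.println(c1 == c2);          // false: same bytes, different runtime types
        Object a = c2.newInstance();
        System.out.println(c1.isInstance(a));  // false: casting 'a' to c1 would throw ClassCastException
    }
}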

Getting around

If you want to perform only a single method invocation or two on the object, without passing parameters of conflicting types (classes loaded by different class loaders), just invoke the method using reflection (a sketch follows below). If you have a more complex case, then use utilities that allow you to bridge the classes. Here comes another key resource for OSGi work: DynamicJava. You can overcome the above exception with the help of its API-Bridge, for instance, which provides such capabilities. Actually, this API was created to address this problem specifically.
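
The reflective route, as a minimal sketch (assuming the foreign object exposes a sayHi() method, in line with this blog's other examples):

// Invoke sayHi() on an object whose class was loaded by a foreign class
// loader, without casting it to a locally loaded interface.
static String invokeSayHi(Object service) throws Exception {
    java.lang.reflect.Method m = service.getClass().getMethod("sayHi");
    // String is loaded by the bootstrap class loader, so this cast is safe
    return (String) m.invoke(service);
}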

Bridged objects act as if their type comes from the target class loader, so when the non-OSGi application works with the bridged object, it will appear to have been loaded from the same class loader. Let's see an example that clarifies how API-Bridge functions.

API-Bridge

Below is a code snippet that loads the same class using two different class loaders and then bridges them using the API-Bridge provided by the DynamicJava resources.

public void testApiBridge() throws Exception {
    // Class A, which implements interface AInterface, is loaded by two
    // different class loaders, so the two copies are treated as different types.
    Class aClass1 = getClassLoader1().loadClass("org.test.A");
    Class aClass2 = getClassLoader2().loadClass("org.test.A");
    Class aInterface1 = getClassLoader1().loadClass("org.test.AInterface");
    Class aInterface2 = getClassLoader2().loadClass("org.test.AInterface");

    // The interface loaded from Class Loader 1 is not assignable from
    // the interface loaded from Class Loader 2.
    assertFalse(aInterface1.isAssignableFrom(aInterface2));

    Object a2 = aClass2.newInstance();

    // The object is not castable, since it's an instance of a class
    // that was loaded using a different class loader.
    assertFalse(aInterface1.isInstance(a2));

    // We create an API Bridge that bridges classes (their fields,
    // method parameters, etc.) of the "org.test" package to
    // class loader 1.
    ApiBridge apiBridge = ApiBridge.getApiBridge(getClassLoader1(), "org.test");

    Object bridgedA2 = apiBridge.bridge(a2);

    // The bridged a2 object is now castable.
    assertTrue(aInterface1.isInstance(bridgedA2));
}

Although the example may not look very clear at first glance, it's important to note that we wrote only two lines of code to bridge the API. What we did above is bridge an object and verify that it is castable to the AInterface loaded by a different class loader.

While API-Bridge supports classes when used as implementations, super classes and method parameters, casting is only supported for interfaces.

Real World


A real-world example of this problem is a Java web application running in an OSGi-oblivious servlet container (Tomcat, for example), with an OSGi-based application embedded inside it. The OSGi-based application registers an OSGi Service that implements the javax.servlet.Servlet interface, and the web application has a delegator servlet which delegates requests to the registered OSGi Service. The problem with this application is that the web application can't consume the Servlet object, since it is loaded by a class loader of the OSGi environment, which causes a ClassCastException to be thrown. To overcome this problem, I used the API-Bridge to bridge the Servlet API. Below is the DelegatorServlet class, which delegates requests to the OSGi Service and uses the API-Bridge to bridge the Servlet objects.

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
// plus the ApiBridge class from the DynamicJava API-Bridge library

public class DelegatorServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        BundleContext bundleContext = getBundleContext();
        if (bundleContext != null) {
            ServiceReference servletServiceRef = bundleContext.getServiceReference(
                    Servlet.class.getName());
            if (servletServiceRef != null) {
                // We can't write a line like:
                //     Servlet servletObject =
                //         (Servlet) bundleContext.getService(servletServiceRef);
                // because a class casting exception would be thrown: the Servlet
                // class in this code is loaded by the Tomcat class loader, while
                // the retrieved Servlet is loaded by the OSGi class loader.
                Object servletObject = bundleContext.getService(servletServiceRef);

                ApiBridge apiBridge = ApiBridge.getApiBridge(
                        Thread.currentThread().getContextClassLoader(),
                        "javax.servlet", "javax.servlet.http");
                Servlet servlet = (Servlet) apiBridge.bridge(servletObject);
                // Note that method parameters will be bridged too by API Bridge,
                // so they can be passed into the OSGi Environment.
                servlet.service(request, response);
            } else {
                printHtml("Error: A Servlet OSGi Service was not found.",
                        response);
            }
        } else {
            printHtml("Error: Bundle Context was not found in the ServletContext.",
                    response);
        }
    }

    protected BundleContext getBundleContext() {
        // Here we retrieve the Bundle Context of the OSGi-based application
        return null; // placeholder; the actual lookup is elided in the post
    }

    // ... Other supportive code

}


Friday

An alternative for building OSGi services: using iPOJO

Continuing from a previous post on creating OSGi services, this post makes things more adaptable. The adaptation key here is iPOJO. iPOJO stands for injected POJO. It is a component model from Apache on top of OSGi. It is very flexible, extensible and, as already mentioned, easy to adapt. It attempts to remove the overhead of handling services in OSGi from the developer. So, as before, let's build a simple service using iPOJO and consume it.

OSGi is about modularizing your application. In this demo, we will have three projects (or three bundles) that will be deployed on Apache Felix. The needed ingredients are:

  1. The Bnd tool from aQute
  2. The Apache Felix runtime
  3. The iPOJO bundle (deployed on Felix)

Problem Statement

The problem is very simple: our service is a simple OSGi service that will return the string "Hi There!".

Getting Started

Like any OSGi service, our service is represented as an interface:

package hi.service;

public interface HiService {
    public String sayHi();
}


This will be our first project, or first bundle. The bundle will contain just the interface. Please note that you will have to export the hi.service package when building the bundle. To make this task easy, use the bnd tool to create the jar. I have used an Ant script to compile and package the projects. Once the bundle is ready, you will use it as a dependency library for compiling the other projects.

Service Implementation

The next step is to implement our service. We will create a new project with the previous project's jar file as a dependency. The service implementation is also a POJO, and there is no mention of OSGi services in the code. Let's have a look at our implementation:

package hi.component;

import hi.service.HiService;

public class HiComponent implements HiService {
    String message = "Hi There!";

    public String sayHi() {
        return message;
    }
}


Now we declare that we have an iPOJO component. This is done through an XML file that is placed alongside the project. The XML states that we have a component of the class hi.component.HiComponent. The example below is a very simple one; you can define callbacks, set properties, etc. in this XML using different XML elements.

<ipojo>
    <component classname="hi.component.HiComponent">
        <provides/>
    </component>
    <instance component="hi.component.HiComponent"/>
</ipojo>


We will compile this project into another bundle that will be deployed in the runtime.

Using the Service

This is the final project, which will consume the service we created above. The client can be a POJO or an Activator class. As in the service implementation code, we do not code against a service; instead, we code as if the implementation is available to us, and the iPOJO framework takes care of the rest. Here is our "Hi there!" client:

package hi.client;

import hi.service.HiService;

public class HiClient {

    private HiService m_hi;

    public HiClient() {
        super();
        System.out.println("Hi There Client constructor...");
    }

    public void start() {
        System.out.println("Starting client...");
        System.out.println("Service: " + m_hi.sayHi());
    }

    public void stop() {
        System.out.println("Stopping client...");
    }
}


I have added some sysouts to see how our client works. Just like the service project, we have an XML file that defines the required service, the callback functions, etc. Have a look at the XML:

<ipojo>
    <component classname="hi.client.HiClient">
        <requires field="m_hi"/>
        <callback transition="validate" method="start"/>
        <callback transition="invalidate" method="stop"/>
    </component>
    <instance component="hi.client.HiClient"/>
</ipojo>


The XML specifies that HiClient requires m_hi in order to execute; m_hi is an instance of our service. So, as long as the service is not available, the HiClient component does not get activated. The callback XML elements specify which methods to execute when the state of the component changes.

Once compiling and packaging of this project is done, we are ready to deploy our example into a runtime and see it working. When you have the Felix and iPOJO frameworks downloaded, you have to configure Felix to load the bundles when it is started.


The Felix configuration is placed in config.properties under the conf folder. You will have to modify the entries for the felix.auto.start.1 property. Here is how it looked after my modification:

felix.auto.start.1= \
file:bundle/org.apache.felix.shell-1.0.1.jar \
file:bundle/org.apache.felix.shell.tui-1.0.1.jar \
file:bundle/org.apache.felix.bundlerepository-1.0.3.jar \
file:bundle/org.apache.felix.ipojo-1.0.0.jar \
file:bundle/hi.service.jar \
file:bundle/hi.component.jar \
file:bundle/hi.client.jar


I have put all my jars in the felix/bundle folder. You may place them in a different location, but you must then specify the correct paths above.

We are ready now. Run your Felix runtime to see the results!

The OSGi and the Services issue

A few months ago, I introduced the Open Service Gateway initiative in a post, trying to lay solid ground for the related ServiceMix posts and to introduce a simplistic usage of open source tools in S.O.A.-related projects. Since then, we have seen wide adoption of the OSGi methodology and approach by developers and vendors (both legacy and open source). This gives us the option of considering OSGi as a technology that will transform Java development, especially on the enterprise side.

To establish a point of useful OSGi, I am giving a quick example of how to build a simple service and consume it. This is quite useful in S.O.A. development, considered as a step in the development tasks of the reference architecture that one should follow in an S.O.A.-enabled solution and approach, especially when, in a small team, the objective is cheap (in cost and time) development of services.

You may start and work with Equinox and the Eclipse IDE. You may also use another OSGi runtime like Apache Felix or Knopflerfish; Knopflerfish even gives you a good GUI to work with. I will not be explaining the fundamentals of OSGi technology. The goal here is to be a little more simplistic and independent, in order to be able to work with any other IDE, approach or tools you might prefer. So here we are concerned with a simple application (bundle). Let's start.

What's an OSGi service?

In very general terms, conforming with S.O.A. re-use, a service is a … repeatable task. When it comes to business, any repeatable task in your business process is a service. Similarly, in an application you can have generic tasks (even specific tasks) that are repeatedly used and can therefore be represented as services. Representing and using these tasks as services is what SOA is all about! But that's at the enterprise level. When it comes to OSGi services, it is the same concept, but applied at the JVM level.
In OSGi, a service is a plain Java object which is published to a registry. A consumer can consume the registered service through a lookup. A service can be registered and unregistered at any point in time. Services are built using an interface-based programming model: to implement or build a service, you basically provide an implementation of an interface. To consume one, you only need the interface for the lookup; there is no need to know about the implementation. The service registry is the "middle man" who helps producers and consumers get in touch with each other.

Building the StartupOSGISRV service

The first step is to create our interface, or the "front end" of the service. For our service, we will have a simple interface named IStartupOSGISRV:

package org.my.service.startupservice;

public interface IStartupOSGISRV {
    public String sayHi();
}


And here is our service implementation.

package org.my.service.startupservice;

public class StartupOSGISRV implements IStartupOSGISRV {
    public String sayHi() {
        return "Hi There!";
    }
}


That's it! Our service is ready for use. But we need to inform consumers that the service is ready to serve. For this, we will have to register our service with the OSGi service registry.

The OSGi framework provides us with standard APIs to register and unregister a service with the registry. We will use the registerService method to register, as shown below:

serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);


I am sure that for beginners this is not enough, so let's explain things a little further. To register our new service, we will build a simple bundle that calls the registerService method.

package org.my.service.startupservice;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {

    private ServiceRegistration serviceRegistration;
    private IStartupOSGISRV startupservice;

    public void start(BundleContext context) throws Exception {
        System.out.println("Starting StartupServiceBundle..");
        startupservice = new StartupOSGISRV();
        serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);
    }

    public void stop(BundleContext context) throws Exception {
        serviceRegistration.unregister();
    }

}


Our Activator class implements BundleActivator. Basically, it's a simple OSGi bundle with start and stop methods. We register our service when the bundle starts up and unregister it when the bundle is stopped or uninstalled from the framework.

Now let's have a closer look at the start method. We create an instance of our service and then use the registerService method. The first argument is the service name, which is obtained using InterfaceName.class.getName(); it's a best practice to use this method instead of specifying the name as a string (org.my.service.startupservice.IStartupOSGISRV). The second argument is the instance of the service itself. And the final argument is a Dictionary in which developers can pass additional properties of the service.
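
For illustration only, the third argument could be used to attach properties to the service along these lines (the property key here is a made-up example; it needs java.util.Dictionary and java.util.Hashtable):

Dictionary props = new Hashtable();
props.put("service.vendor", "org.my"); // example property, not required
serviceRegistration = context.registerService(
        IStartupOSGISRV.class.getName(), startupservice, props);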


To unregister the service, you simply call the unregister method when the bundle stops. So now we have a running service on our OSGi runtime, and we must consume it.

Consuming a service

To consume a service, we first obtain a ServiceReference object from the BundleContext. This is achieved by calling the getServiceReference method, which takes the class name as an argument. Once you have the ServiceReference object, you use the getService method to finally get the service. We will have to typecast the object returned by getService before using it.

startupserviceRef = context.getServiceReference(IStartupOSGISRV.class.getName());
IStartupOSGISRV serviceObjectStartupService = (IStartupOSGISRV) context.getService(startupserviceRef);
System.out.println("Service says: " + serviceObjectStartupService.sayHi());


Implementing the service and the consumer in the same package is easy, because the interface is available. When you have your service and consumer bundles separate, there are some important points to note. OSGi provides the capability of specifying which packages are exported or imported. With this facility you can expose your service interface and hide its implementation from the public. The configuration details are specified in the MANIFEST file. Have a look at our StartupService's MANIFEST file:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: StartupService Plug-in
Bundle-SymbolicName: StartupService
Bundle-Version: 1.0.0
Bundle-Activator: org.my.service.startupservice.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"
Export-Package: org.my.service.startupservice;uses:="org.osgi.framework"


Notice that we have exported the org.my.service.startupservice package. Similarly, we import this package in our consuming bundle.
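
The consuming bundle's MANIFEST would then contain an Import-Package entry along these lines (a sketch; versions as in the file above):

Import-Package: org.my.service.startupservice, org.osgi.framework;version="1.3.0"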

And to add a final note: the code used above for consuming the service is not the best way for real applications. It was kept very simple and easy to understand, without involving exception handling, null-pointer checks or ServiceListeners; a more defensive sketch follows below.
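
As a hedged sketch of a more robust consumer, using the standard org.osgi.util.tracker.ServiceTracker instead of a raw getServiceReference call:

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

public class StartupServiceConsumer {

    private final ServiceTracker tracker;

    public StartupServiceConsumer(BundleContext context) {
        // Track IStartupOSGISRV services as they come and go
        tracker = new ServiceTracker(context, IStartupOSGISRV.class.getName(), null);
        tracker.open();
    }

    public void sayHiSafely() {
        IStartupOSGISRV service = (IStartupOSGISRV) tracker.getService();
        if (service == null) {
            System.out.println("Service not available yet.");
        } else {
            System.out.println("Service says: " + service.sayHi());
        }
    }

    public void close() {
        tracker.close();
    }
}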

Tuesday

On the Reference Architecture for SOA Adoption


While talking with various people and thinking over the case of S.O.A. (note: not SOA), in order to explore in more depth the holistic view needed, I came across Hariharan's post on a Reference Architecture for SOA Adoption. Hariharan states:

Fundamental objective of SOA is aligning your IT capabilities with business goals. SOA is not just IT architecture nor business strategy. It should be perfect combination of IT infrastructure and business strategy. So, when we are planning for SOA adoption, we need a strong roadmap and blueprint. Reference architecture model will be a blueprint of SOA.
Reference architecture is like a drawing of your corporate building. Before and after construction, you need blueprint for verification and reference. Similar to that SOA reference architecture could be a reference model for your enterprise business system. Reference architecture should be defined properly during the SOA roadmap definition phase. Refer my last blog in which I was talking about SOA roadmap.
Fundamentally, while defining reference architecture model for corporate, we should consider the following components as part of architecture.
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance
Currently, if you look at the SOA market, there are many product and service providers comes up with its’ own SOA reference architecture. Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely. We should consider that as just one of the reference point to finalize the vendor. We should design the right blueprint model for our corporate and SOA need. This is a crucial step that should be planned as part of the SOA implementation roadmap.

A good starting point. But, more or less, the 'bird's eye' view needed for S.O.A. is again not complete. The reason is its services-oriented (and actually web-services-oriented) view. As stated here, S.O.A. is more than integration. It is integration as well, but it is also about following strategies and the future of the infrastructure at hand. So, concerning points 1, 2, 6, 8 and 12, the reference architecture should avoid, or be as irrelevant as possible to, the 'marketing effect' promoted by vendors, as stated here. This last issue is recognised in the statement above, in the last paragraph:

“Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely”

but it is not explored enough. In any case, I am not against any vendor, nor trying to construct a polemic against them when referring to the 'marketing effect'. They are trying to do business and make money, extend their ROIs and so on. That's OK. My point is that we must keep in mind the inner or hidden motives of every co-operator when doing our business or work.

So, keeping the 12 points above in hand, we have to be more holistic and therefore more concrete and safe in our reference. This reference is going to be the ground on which we explore our enterprise niches and evolve through market changes. A nice starting point for what a reference architecture should look like is proposed by OASIS, among other proposals and standards, in the SOA Reference Architecture. Compliant with this reference is the Conceptual Integration Technical Reference Architecture from the Washington State Department of Information Services. In the Conceptual Integration Technical R.A. there is this diagram:

The conceptual reference architecture depicted in the diagram above (and defined in this document) adopts the Service-Oriented Architecture Reference Model (SOA-RM) developed by the Organization for the Advancement of Structured Information Standards (OASIS).

The SOA-RM defines its purpose as follows:

“A reference model is an abstract framework for understanding significant relationships among the entities of some environment. It enables the development of specific architectures using consistent standards or specifications supporting that environment. A reference model consists of a minimal set of unifying concepts, axioms and relationships within a particular problem domain, and is independent of specific standards, technologies, implementations, or other concrete details.” ([SOA-RM], p. 4).

“The goal of this reference model is to define the essence of service oriented architecture, and emerge with a vocabulary and a common understanding of SOA. It provides a normative reference that remains relevant for SOA as an abstract and powerful model, irrespective of the various and inevitable technology evolutions that will impact SOA.” ([SOA-RM], p. 4).

As you can see, the Reference is quite abstract. While the SOA-RM is a powerful model that provides the first vendor-neutral, open-standard definition of the service-oriented approach to integration, its abstract nature means that it is not capable of providing the architectural guidance needed for the actual design of integration solutions (or integrated software systems). That guidance comes from the definition of a reference architecture that rests on the foundation of the reference model’s concepts. The reference architecture builds on those concepts by specifying additional relationships, further defining and specifying some of the concepts, and adding key (high-level) software components necessary for integration solutions.

In the diagram above, SOA-RM concepts are shaded yellow. Concepts and components particular to the conceptual reference architecture defined by this document are shaded cyan. Relationships between concepts (indicated by arrows) are defined in the SOA-RM if the arrows connect concepts shaded yellow. Relationships between cyan-shaded concepts or between cyan-shaded and yellow-shaded concepts are particular to the reference architecture.

In order to conclude with Hariharan's 12 points, which try to describe and define the web services characteristics in a further, more detailed plan, we need some preliminaries that will serve as the base for the next step of Hariharan's analysis:

Firstly, in the context of the organisation at hand (for which we will define a more solid S.O.A. Reference Architecture), there should be an analysis of the models that exist and of the models that we are going to need in the near future (obtained from the short- and long-term strategic decisions of the organisation). That is, an analysis of the models of the various entities that play a crucial role in the infrastructure. In general, these entities fall into two main categories:

  1. Data Modelling
  2. Generic Entity Modelling

What are the nature and the model of the data that needs to be interchanged inside the infrastructure? What formats, categories and relations (dependencies) do they follow? This analysis may sometimes be needed from a system's point of view, in the sense that the main system A used in the organisation handles this data inside it (e.g. time/date/currency/location GIS/post code etc.).

What are the main compound entities that exist and should be interchanged inside the organisation's infrastructure? How do they depend on the data models defined above? E.g. Customer, Order, Products etc., which will be analysed in the context of the main data models and their dependencies; moreover, they will incorporate any (used) data elements missing from the Data Modelling analysis, depicting relations, hierarchies etc. (Usually, any attributes missing from the Data Modelling should be declared and bound on the entities' relations and modelling.)

So now we can describe our (existing) flows and find the business and IT rules that are needed in order to have some services available on the fly, or from compounding existing ones. These rules and this analysis will show us whether there is a need for the 'canonical modelling' that we are using. It depends, of course, on the systems used and to be used, and therefore on the strategic decisions taken as the organisation's next steps (e.g. in a Telco's double/triple play, which demands an extension to the product models), and of course on the vendor-compliant modelling schemes of the existing or new systems to be used. Therefore, Hariharan's list is extended to:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance

Now we have the flows part. Again, having A1.1-A1.4, we can construct the existing data and entity flows, meaning more generic business flows that, although they correspond to business analysis, include the IT-systems steps for complying with the modelling restrictions of the organisation's infrastructure. These are not only services, although if we generalise the term 'service' we are fine. There might be point-to-point system interconnections, hidden from the business flow, for demanding, retrieving and consolidating data and so on.

So we should also obtain:

A2.1 Existing Flows
A2.2 New-proposed Flows

From this point, having:

a) our system’s capabilities
b) our modelling rules

we should be able to derive the candidate services that comply with A2.1 and A2.2, and to create the web services (or the RPCs, XMLs, SQLs and so on) that populate the repository of available actions. So the list becomes:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
A2.1 Existing Flows
A2.2 New-proposed Flows
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
A6.1. Services Decomposition Strategy
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance and Orchestration

With these in mind, we should be able to develop a Decomposition Strategy (A6.1 in the list) by which we can deconstruct and reconstruct meaningful information from existing services, categorise all the possible services to be used in the organisation, arrive at loosely coupled services, and explore, in the best possible way under the organisation's context, the services' orchestration abilities and, of course, their re-usability. In all of the above we should keep in mind Mike Kavis' post concerning the Top 10 Reasons Why People Make SOA Fail:

[

    1. They fail to explain SOA's business value.
    2. They underestimate the impact of organizational change.
    3. They fail to secure strong executive sponsorship.
    4. They attempt to do SOA "on the cheap."
    5. They lack the required skills to deliver SOA.
    6. They have poor project management.
    7. They think of SOA as a project instead of an architecture.
    8. They underestimate the complexity of SOA.
    9. They fail to implement and adhere to SOA governance.
    10. They let the vendors drive the architecture.

]

If you have any additions, or need help constructing a more concrete and specific reference model for your case, drop me a line. All comments are welcome.