Friday

An Alternative for Building OSGi Services: Using iPOJO

Continuing from a previous post on creating OSGi services, this post makes things more adaptable. The key to that adaptability is iPOJO. iPOJO stands for injected POJO; it is a component model from Apache that sits on top of OSGi. It is flexible, extensible and, as already mentioned, easy to adapt. It is an attempt to remove the overhead developers face when handling services in OSGi by hand. So, as before, let’s build a simple service using iPOJO and consume it.

OSGi is about modularizing your application. In this demo, we will have three projects (or three bundles) that will be deployed on Apache Felix. The needed ingredients are:

  1. The bnd tool from aQute (to package the bundles)
  2. Apache Felix (the OSGi runtime) together with the iPOJO framework bundle
  3. Ant, or a similar build tool, to compile and package the projects

Problem Statement

The problem is very simple: our service is a plain OSGi service that returns the string “Hi There!”, which the client will then display.

Getting Started

Like any OSGi service, our service is represented as an interface:

package hi.service;

public interface HiService {
    public String sayHi();
}


This will be our first project, or first bundle. The bundle contains just the interface. Please note that you will have to export the hi.service package when building the bundle. To make this task easy, use the bnd tool to create the jar; I have used an Ant script to compile and package the projects. A minimal bnd descriptor is sketched below. Once the bundle is ready, you will use it as a dependency library for compiling the other projects.
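For reference, here is what such a bnd descriptor could look like. This is only a sketch: the file name hi.service.bnd and the version are my own assumptions, and how bnd is invoked depends on your Ant setup:

# hi.service.bnd - hypothetical bnd descriptor for the interface bundle
Bundle-SymbolicName: hi.service
Bundle-Version: 1.0.0
Export-Package: hi.service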

Service Implementation





The next step is to implement our service. We will create a new project with the previous project's jar file as a dependency. The service implementation is also a POJO, and there is no mention of OSGi services in the code. Let's have a look at our implementation:



package hi.component;

import hi.service.HiService;

public class HiComponent implements HiService {
    String message = "Hi There!";

    public String sayHi() {
        return message;
    }
}


Now we declare that we have an iPOJO component. This is done through an XML file placed alongside the project. The XML states that we have a component of the class hi.component.HiComponent. The example below is a very simple one; you can also declare callbacks, properties and so on in this XML using different elements, as sketched after the example.



<ipojo>
  <component classname="hi.component.HiComponent">
    <provides/>
  </component>
  <instance component="hi.component.HiComponent"/>
</ipojo>
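As one illustration of those extra elements, a component property could be declared roughly as follows. This is only a sketch based on the iPOJO properties handler; the property is not needed for this demo, and the name chosen is just an example:

<component classname="hi.component.HiComponent">
  <provides/>
  <properties>
    <property field="message" name="message" value="Hi There!"/>
  </properties>
</component>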


We will compile this project into another bundle that will be deployed in the runtime.

Using the Service



This is the final project, and it will consume the service we created above. The client can be a POJO or an Activator class. As with the service implementation, we do not write any service-handling code; instead we code as if the implementation were already available to us, and the iPOJO framework takes care of the rest. Here is our "Hi There!" client:



package hi.client;

import hi.service.HiService;

public class HiClient {

    private HiService m_hi;

    public HiClient() {
        super();
        System.out.println("Hi There Client constructor...");
    }

    public void start() {
        System.out.println("Starting client...");
        System.out.println("Service: " + m_hi.sayHi());
    }

    public void stop() {
        System.out.println("Stopping client...");
    }
}


I have added some System.out calls so we can see how our client works. Just like the service project, we have an XML file that defines the required service, the callback methods and so on. Have a look at the XML:



<ipojo>
  <component classname="hi.client.HiClient">
    <requires field="m_hi"/>
    <callback transition="validate" method="start"/>
    <callback transition="invalidate" method="stop"/>
  </component>
  <instance component="hi.client.HiClient"/>
</ipojo>


The XML specifies that HiClient requires the m_hi field to be injected; m_hi is an instance of our service. As long as the service is not available, the HiClient component is not activated. The callback elements specify which methods to invoke when the state of the component changes (becomes valid or invalid).
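As a side note, iPOJO also lets you express the same metadata with annotations instead of XML. The following is only a rough sketch, assuming the org.apache.felix.ipojo.annotations package is available in the iPOJO version you use; the XML files remain the approach followed in this post.

package hi.client;

import hi.service.HiService;
import org.apache.felix.ipojo.annotations.Component;
import org.apache.felix.ipojo.annotations.Instantiate;
import org.apache.felix.ipojo.annotations.Invalidate;
import org.apache.felix.ipojo.annotations.Requires;
import org.apache.felix.ipojo.annotations.Validate;

@Component
@Instantiate
public class HiClient {

    @Requires
    private HiService m_hi; // injected by iPOJO, like the XML 'requires' element

    @Validate
    public void start() { // called when the component becomes valid
        System.out.println("Service: " + m_hi.sayHi());
    }

    @Invalidate
    public void stop() { // called when the component becomes invalid
        System.out.println("Stopping client...");
    }
}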

Once this project is compiled and packaged, we are ready to deploy our example into a runtime and see it working. When you have downloaded Felix and the iPOJO framework, you have to configure Felix to load the bundles when it starts.


Felix's configuration is placed in config.properties under the conf folder. You will have to modify the entries for the felix.auto.start.1 property. Here is how it looked after I modified it:



felix.auto.start.1= \
file:bundle/org.apache.felix.shell-1.0.1.jar \
file:bundle/org.apache.felix.shell.tui-1.0.1.jar \
file:bundle/org.apache.felix.bundlerepository-1.0.3.jar \
file:bundle/org.apache.felix.ipojo-1.0.0.jar \
file:bundle/hi.service.jar \
file:bundle/hi.component.jar \
file:bundle/hi.client.jar


I have put all my jars in the felix/bundle folder. You may place them in a different location, but then you must specify the correct paths above.

We are ready now. Run your Felix runtime to see the results!
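If all the bundles resolve and start, our own println calls should produce output along these lines (the Felix shell output itself is omitted):

Hi There Client constructor...
Starting client...
Service: Hi There!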

OSGi and the Services issue

A few months ago, I introduced the Open Services Gateway initiative in a post, trying to lay a solid ground for the ServiceMix-related posts and to introduce a simple usage of open source tools in S.O.A.-related projects. Since then, we have seen wide adoption of the OSGi methodology and approach by developers and vendors (both legacy and open source). This makes it possible to consider OSGi as a technology that will transform Java development, especially on the enterprise side.

To establish how OSGi is useful, I am giving a quick example of how to build a simple service and consume it. This is quite useful in S.O.A. development, considered as one step in the development tasks of the reference architecture one should follow in an S.O.A.-enabled solution and approach, especially when, in a small team, the objective is cheap (in cost and time) development of services.

You may start and work with Equinox and the Eclipse IDE. You may also use another OSGi runtime such as Apache Felix or Knopflerfish; Knopflerfish even gives you a good GUI to work with. I will not be explaining the fundamentals of OSGi technology. The goal here is to stay a little more simple and independent, so that you can work with whatever IDE, approach or tools you prefer. So here we are concerned with a simple application (bundle). Let's start.

What is an OSGi service?

In very general terms, in keeping with S.O.A. reuse, a service is a … repeatable task. When it comes to business, any repeatable task in your business process is a service. Similarly, in an application you can have generic tasks (even specific tasks) that are repeatedly used and can therefore be represented as services. Representing and using these tasks as services is what SOA is all about! But that is at the enterprise level. When it comes to OSGi services, it is the same concept applied at the JVM level.
In OSGi, a service is a plain Java object that is published to a registry. A consumer can consume the registered service through a lookup. A service can be registered and unregistered at any point in time. Services are built using an interface-based programming model: to implement or build a service you basically provide an implementation of an interface; to consume it, you only need the interface for the lookup, with no need to know about the implementation. The service registry is the "middle man" that helps producers and consumers get in touch with each other.

Building the StartupOSGISRV service

The first step is to create our interface, the "front end" of the service. For our service, we will have a simple interface named IStartupOSGISRV:

package org.my.service.startupservice;

public interface IStartupOSGISRV {
    public String sayHi();
}


And here is our service implementation.



package org.my.service.startupservice;

public class StartupOSGISRV implements IStartupOSGISRV {
    public String sayHi() {
        return "Hi There!";
    }
}


That's it! Our service is ready for use. But we need to inform consumers that the service is ready to serve. For this, we will register our service with the OSGi service registry.

The OSGi framework provides standard APIs to register and unregister services with the registry. We will use the registerService method, as shown below:



serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);


I am sure that for beginners this is not enough, so let's explain things a little further. To register our new service, we will build a simple bundle activator that calls the registerService method.



package org.my.service.startupservice;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {

    private ServiceRegistration serviceRegistration;
    private IStartupOSGISRV startupservice;

    public void start(BundleContext context) throws Exception {
        System.out.println("Starting StartupServiceBundle..");
        startupservice = new StartupOSGISRV();
        serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, null);
    }

    public void stop(BundleContext context) throws Exception {
        serviceRegistration.unregister();
    }

}


Our Activator class implements BundleActivator; basically, it is a simple OSGi bundle activator with start and stop methods. We register our service when the bundle starts up and unregister it when the bundle is stopped or uninstalled from the framework.

Now let's have a closer look at the start method. We create an instance of our service and then call the registerService method. The first argument is the service name, obtained using InterfaceName.class.getName(); it is a best practice to use this instead of specifying the name as a string ("org.my.service.startupservice.IStartupOSGISRV"). The second argument is the instance of the service itself. The final argument is a Dictionary (typically a Hashtable) in which developers can pass additional properties along with the service.
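As a small sketch of that last argument, registration with properties could look like the following inside start(); the property key and value here are purely illustrative and not required for this example:

// assumes java.util.Dictionary and java.util.Hashtable are imported
Dictionary<String, Object> props = new Hashtable<String, Object>();
props.put("service.description", "Demo startup service"); // illustrative property
serviceRegistration = context.registerService(IStartupOSGISRV.class.getName(), startupservice, props);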


To unregister the service, we simply call the unregister method when the bundle stops. So now we have a running service in our OSGi runtime; next we must consume it.

Consuming a service



To consume a service, we first obtain a ServiceReference object from the BundleContext. This is achieved by calling the getServiceReference method, which takes the class name as an argument. Once we have the ServiceReference object, we use the getService method to finally get the service. We have to typecast the object returned by getService before using it.



startupserviceRef = context.getServiceReference(IStartupOSGISRV.class.getName());
IStartupOSGISRV serviceObjectStartupService = (IStartupOSGISRV) context.getService(startupserviceRef);
System.out.println("Service says: " + serviceObjectStartupService.sayHi());


Implementing the service and the consumer in the same bundle is easy, because the interface is directly available. When your service and consumer bundles are separate, there are some important points to note. OSGi provides the capability for bundles to specify which packages they export or import; with this facility you can expose your service interface and hide its implementation from the public. The configuration details are specified in the bundle's MANIFEST file. Have a look at our StartupService's MANIFEST file:



Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: StartupService Plug-in
Bundle-SymbolicName: StartupService
Bundle-Version: 1.0.0
Bundle-Activator: org.my.service.startupservice.Activator
Bundle-ActivationPolicy: lazy
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0"
Export-Package: org.my.service.startupservice;uses:="org.osgi.framework"


Notice that we have exported the org.my.service.startupservice package. Similarly, we import this package in our consuming bundle, as sketched below.
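A minimal MANIFEST for the consuming bundle could look roughly like the following; the bundle and activator names are hypothetical, and the exact headers depend on your tooling:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: StartupService Consumer Plug-in
Bundle-SymbolicName: StartupServiceConsumer
Bundle-Version: 1.0.0
Bundle-Activator: org.my.client.startupservice.Activator
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Import-Package: org.osgi.framework;version="1.3.0", org.my.service.startupservice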

And to add a note: the code used above for consuming the service is not the best approach; it is kept deliberately simple and easy to understand, without exception handling, null-pointer checks or ServiceListeners. In real code the service may not be available at the moment you look it up, so you should guard against that.
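As a minimal sketch of a more defensive lookup (assuming the consumer bundle also imports the org.osgi.util.tracker package), a ServiceTracker could be used along these lines:

import org.osgi.util.tracker.ServiceTracker;

// inside the consumer's start(BundleContext context) method
ServiceTracker tracker = new ServiceTracker(context, IStartupOSGISRV.class.getName(), null);
tracker.open();
IStartupOSGISRV service = (IStartupOSGISRV) tracker.getService(); // may be null if no provider is active
if (service != null) {
    System.out.println("Service says: " + service.sayHi());
} else {
    System.out.println("StartupOSGISRV service is not available yet");
}
// remember to call tracker.close() when the bundle stops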

Tuesday

On the Reference Architecture for SOA Adoption


While talking with various people and thinking over the case of S.O.A. (note: not SOA), in order to explore in more depth the holistic view needed, I came across Hariharan’s post on Reference Architecture for SOA Adoption. Hariharan states:

Fundamental objective of SOA is aligning your IT capabilities with business goals. SOA is not just IT architecture nor business strategy. It should be perfect combination of IT infrastructure and business strategy. So, when we are planning for SOA adoption, we need a strong roadmap and blueprint. Reference architecture model will be a blueprint of SOA.
Reference architecture is like a drawing of your corporate building. Before and after construction, you need blueprint for verification and reference. Similar to that SOA reference architecture could be a reference model for your enterprise business system. Reference architecture should be defined properly during the SOA roadmap definition phase. Refer my last blog in which I was talking about SOA roadmap.
Fundamentally, while defining reference architecture model for corporate, we should consider the following components as part of architecture.
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance
Currently, if you look at the SOA market, there are many product and service providers comes up with its’ own SOA reference architecture. Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely. We should consider that as just one of the reference point to finalize the vendor. We should design the right blueprint model for our corporate and SOA need. This is a crucial step that should be planned as part of the SOA implementation roadmap.

A good starting point. But, more or less, the ‘bird’s eye’ view needed for S.O.A. is again not complete. The reason is the services-oriented (and actually web-services-oriented) view. As stated here, S.O.A. is more than integration; it is integration as well, but it is also about following strategies and about the future of the infrastructure at hand. So, concerning items 1, 2, 6, 8 and 12, the reference architecture should avoid, or be as irrelevant as possible to, the ‘marketing effect’ promoted by vendors, as stated here. This last issue is recognized in the statement above, in its last paragraph:

“Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely”

but it is not explored enough. In any case, I am not against any vendor, nor trying to construct a polemic against them when referring to the ‘marketing effect’. They are trying to do business and make money, extend their ROIs and so on. That’s ok. My point is that we must keep in mind the inner or hidden motives of every co-operator when doing our business or our work.

So we keep the 12 points above in hand, but we have to be more holistic and therefore more concrete and safe in our reference. This reference is going to be the ground on which we explore our enterprise niches and evolve through market changes. A nice starting point for what a reference architecture should look like is proposed by OASIS, among other proposals and standards, in the SOA Reference Architecture. Compliant with this reference is the Conceptual Integration Technical Reference Architecture from the Washington State Department of Information Services. In the Conceptual Integration Technical R.A. there is this diagram:

The conceptual reference architecture depicted in the diagram above (and defined in this document) adopts the Service-Oriented Architecture Reference Model (SOA-RM) developed by the Organization for the Advancement of Structured Information Standards (OASIS).

The SOA-RM defines its purpose as follows:

“A reference model is an abstract framework for understanding significant relationships among the entities of some environment. It enables the development of specific architectures using consistent standards or specifications supporting that environment. A reference model consists of a minimal set of unifying concepts, axioms and relationships within a particular problem domain, and is independent of specific standards, technologies, implementations, or other concrete details.” ([SOA-RM], p. 4).

“The goal of this reference model is to define the essence of service oriented architecture, and emerge with a vocabulary and a common understanding of SOA. It provides a normative reference that remains relevant for SOA as an abstract and powerful model, irrespective of the various and inevitable technology evolutions that will impact SOA.” ([SOA-RM], p. 4).

As you can see, the Reference is quite abstract. While the SOA-RM is a powerful model that provides the first vendor-neutral, open-standard definition of the service-oriented approach to integration, its abstract nature means that it is not capable of providing the architectural guidance needed for the actual design of integration solutions (or integrated software systems). That guidance comes from the definition of a reference architecture that rests on the foundation of the reference model’s concepts. The reference architecture builds on those concepts by specifying additional relationships, further defining and specifying some of the concepts, and adding key (high-level) software components necessary for integration solutions.

In the diagram above, SOA-RM concepts are shaded yellow. Concepts and components particular to the conceptual reference architecture defined by this document are shaded cyan. Relationships between concepts (indicated by arrows) are defined in the SOA-RM if the arrows connect concepts shaded yellow. Relationships between cyan-shaded concepts or between cyan-shaded and yellow-shaded concepts are particular to the reference architecture.

In order to conclude with Hariharan’s 12 points, which try to describe and define the web-services characteristics in a more detailed plan, we need some preliminaries that should serve as the base for the next step of Hariharan’s analysis:

Firstly, in the context of the organisation at hand (for which we will define a more solid S.O.A. Reference Architecture), there should be an analysis of the models that exist and of the models that we are going to need in the near future (derived from the organisation’s short- and long-term strategic decisions): an analysis of the models of the various entities that play a crucial role in the infrastructure. In general these entities fall into two main categories:

  1. Data Modelling
  2. Generic Entity Modelling

What are the nature and model of the data that need to be interchanged inside the infrastructure? What formats, categories and relations (dependencies) do they follow? This analysis may sometimes need to be done from a system’s point of view, in the sense that the main system A used in the organisation handles this data internally (e.g. time/date/currency/location-GIS/post code etc…).

What are the main compound entities that exist and should be interchanged inside the organisation’s infrastructure? How do they depend on the data models defined above? E.g. Customer, Order, Product etc., which will be analysed in the context of the main data models and their dependencies; moreover, they will incorporate any data elements missing from the Data Modelling analysis, depicting relations, hierarchies etc. (Usually, any attributes missing from the Data Modelling should be declared and bound on the entity relations and modelling.)

So now we can describe our (existing) flows and find the business and IT rules that are needed in order to have some services on the fly or by composing existing ones. These rules and this analysis will show us whether there is a need for the ‘canonical modelling’ we are using. This depends, of course, on the systems used and to be used, and therefore on the strategic decisions behind the organisation’s next steps (e.g. in a Telco’s double/triple play, which demands an extension to the product models), and of course on the vendor-compliant modelling schemes of the existing or new systems to be used. Therefore, Hariharan’s list extends to:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance

Now we have the flows part. Again, having A1.1-A1.4, we can construct the existing data and entity flows, meaning more generic business flows that, although they correspond to business analysis, include the IT-system steps needed to comply with the modelling restrictions of the organisation’s infrastructure. These are not only services, although if we generalise the term ‘service’ they fit. There might be point-to-point system interconnections, hidden from the business flow, for requesting, retrieving and consolidating data and so on.

So we should also obtain:

A2.1 Existing Flows
A2.2 New-proposed Flows

From this point, having:

a) our system’s capabilities
b) our modelling rules

we should be able to identify the candidate services that comply with A2.1 and A2.2, and to create the web services (or the RPCs, XMLs, SQLs and so on) that populate the repository of available actions. So the list becomes:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
A2.1 Existing Flows
A2.2 New-proposed Flows
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
A6.1. Services Decomposition Strategy
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance and Orchestration

With these in mind we should be able to develop a Decomposition Strategy (A6.1 in the list) by which we can deconstruct and reconstruct meaningful information from existing services, categorise all the possible services to be used in the organisation, keep services loosely coupled, and explore, in the best possible way under the organisation’s context, the services’ orchestration abilities and, of course, reusability. In all of the above we should keep in mind Mike Kavis’s post on the Top 10 Reasons Why People Make SOA Fail:

[

    1. They fail to explain SOA's business value.
    2. They underestimate the impact of organizational change.
    3. They fail to secure strong executive sponsorship.
    4. They attempt to do SOA "on the cheap."
    5. They lack the required skills to deliver SOA.
    6. They have poor project management.
    7. They think of SOA as a project instead of an architecture.
    8. They underestimate the complexity of SOA.
    9. They fail to implement and adhere to SOA governance.
    10. They let the vendors drive the architecture.

]

If you have any additions, or need help constructing a more concrete and specific reference model for your case, drop me a line. All comments are welcome.

Wednesday

S.O.A.s ‘nature’. On the integration issue – Web Services: soapbuilders reloaded and the prisoner’s dilemma

As stated in S.O.A.’s ‘nature’ and integration, S.O.A. is more than integration. But one of the most practical, real-world aspects and promises of SOAs is integration, especially integration from the interoperability point of view. In this respect, the issue lies in the Web Services interoperability abilities that could (after appropriate analysis and model consolidation) be provided to the proposed solution.

Interoperability is one of the design principles that distinguish Web Services as service middleware from their forefathers, RPC and distributed object computing, and from their uncle, message-oriented middleware.

Over the last couple of years there has been a trend toward vendors offering complete SOA suites, focusing on strong cohesion (real or imagined) at the suite level. In this model, there is less motivation for cross-vendor interoperability and open integration. This trend exposes the dangers of going with an SOA suite: vendor lock-in; isolated, non-integrated components; and exposure to the risk of needing ongoing consulting services for bespoke integration. As vendor consolidation slows this year, but continues, infrastructure suites will continue to dominate and selecting products to populate a heterogeneous services infrastructure will remain difficult. Actually, this is the ‘marketing’ case of SOA adoption, as opposed to S.O.A., as discussed in ‘SOA marketing and expenses. On the misleading promise on SOAs and what is dying?’.

Under these assumptions, any activity that tries to provide solid ground for neutral, open interoperability should be welcomed. So I’m glad about the announcement of the Web Services Test Forum (WSTF), a vendor and end-user coalition set up to test Web Services interoperability scenarios. The approach is described in this essential post by Oracle’s Gilbert Pilz and another by Doug Davis from IBM.

December 8, 2008 – The Web Services Test Forum (WSTF) launched today, providing an open community to improve the quality of the Web services standards, with initial membership from Active Endpoints, AIAG, Axway, CISCO, eviware, FORD Motor Co., Fujitsu, Hitachi, IBM, Oracle, Red Hat, Software AG, Teamlog, TIBCO Software Inc. Using customer-based scenarios, interoperability is validated in a multi-vendor testing environment. Customers and vendors alike, independent of their geographic location, can dynamically test their applications against available implementations to ensure interoperability is preserved. As an open community, WSTF has made it easy to introduce new interoperability scenarios and approve work through simple majority governance.

So it seems that the aforementioned ‘marketing’ SOA issues have caught vendors’ attention, and some ‘fixing’ actions are about to take place. Over the long term, of course, but it is something.

This forum is an evolution in the interoperability story begun by the soapbuilders group. This group of SOAP stack vendors and interested 3rd parties was created in 2001 – kicked off by this seemingly innocent email from Tony Hong. Credit goes to IBM and Microsoft for nurturing the idea with support. The work of soapbuilders was carried on by Web Services Protocol Workshops that Jorgen Thelin ran at Microsoft.

And the proof of the ‘fixing’ action:

We need to ask ourselves: why is this group coming together now? Why has the prophecy not come true? Vendors are being beaten up (not enough, IMO) by end-users for their interop failings – so in one way it’s a “look, we are doing something” measure.

Also, how does this relate to the work of the Web Service Interoperability Organization (WS-I)? Is this an implicit sigh of desperation at the lack of progress at the WS-I? For example, it’s taken a long time to get the crucial Reliable Secure Profile out of that hopper.

On the WSTF announcement call, Burton Group’s Anne Thomas Manes asked why this effort is not driven through the WS-I. Steve Harris from Oracle explained that while WS-I is a consensus-driven effort, WSTF brings a different approach to interoperability. Harris highlighted vendor collaboration and lowering the barriers to entry as differentiators. IBM's Karla Norsworthy added that WSTF is complementary to WS-I, a more lightweight approach, giving the example that the forum could easily bring a few vendors together to test a scenario. Many of the WSTF’s 15 members are also members of WS-I.

Will WSTF make a real contribution to interoperability?

I’m impressed and made hopeful by a number of things. Firstly, membership obliges a vendor to support live endpoints for each scenario. This means that debate and negotiation will not be the deliverables of the forum; live test results, published on the internet, will be. We can view testing between the implementations as a fully connected network, i.e., every endpoint is tested with every other, so each additional implementation adds considerably to the level of interoperability assurance (the number of pairings is c = (n^2 – n) / 2, for you topology geeks; with the forum’s 15 members that is already 105 pairwise combinations).

Secondly, having end-users like Ford and AIAG in the forum is an important step. Sure, there are a small number of end-users right now, but the usefulness of this forum grows dramatically with the number of end-user enterprises participating. The forum process encourages end-users to submit requirements for the tested scenarios which usefully turns attention onto things other than wire formats.

Actually, the real question is whether the participants, especially the vendors, recognise this opportunity as a prisoner’s dilemma scenario: if everybody co-operates, everybody benefits.

A number of issues may limit the WSTF’s impact.  Clearly, not having Microsoft as a member is a problem. While industry manoeuvres might keep analysts and vendors themselves amused, end-users couldn’t care less; they just want software to work. So having Microsoft involved is a core credibility issue for the WSTF. According to WSTF spokespeople, other members will be hosting Microsoft endpoints. That’s not quite the same thing. Microsoft are committed to WS-I and also to Apache Stonehenge which has similar aims to WSTF.

I think other web-services approaches need to be included. To launch an initiative purely directed at Web Services in December 2008 looks antiquated, although the IBM and Oracle spokespeople did claim that other approaches, like RESTful web services, are not excluded in theory.

As with any community based activity, the only real measure of its success is its interactivity. Let’s give WSTF time and then count the live interoperable implementations and end-user organizations active in the forum.

SOA marketing and expenses. On the misleading promise on SOAs and what is dying?

Continuing to think over “SOA and the real Service Integration issue”, I came across a ton of new-year posts concerning the ‘2009 SOA death’. Once again, I believe that the whole thing is about how SOAs are conceived. I posted a draft thought concerning the way SOA needs to be conceptualised from the architects’, developers’ and business owners’ perspectives. In most cases, the ‘philosophy’ of analysing requirements, services and environments for SOA did not have the appropriate ‘holistic’ nature, so lots of problems arose, both in financial aspects and in design principles. All of the above, and more, were stated by Mike Kavis in his post Top 10 Reasons Why People Make SOA Fail:

[

    1. They fail to explain SOA's business value.
    2. They underestimate the impact of organizational change.
    3. They fail to secure strong executive sponsorship.
    4. They attempt to do SOA "on the cheap."
    5. They lack the required skills to deliver SOA.
    6. They have poor project management.
    7. They think of SOA as a project instead of an architecture.
    8. They underestimate the complexity of SOA.
    9. They fail to implement and adhere to SOA governance.
    10. They let the vendors drive the architecture.

]

Dave Linthicum summed it up in his post when he stated:

[


First, there are not enough qualified architects to go around, and you'll find that most of the core mistakes were made by people calling themselves "architects," who lack the key skills for moving an enterprise towards SOA. They did not engage consultants or get the training they needed, and ran around in circles for a few years until somebody pulled their budgets.


Second, the big consulting firms drove many SOA projects into the ground by focusing more on tactics and billable hours than results and short- and long-term value.


Third, the vendors focused too much on selling and not enough on the solution. They put forth the notion that SOA is something they have to sell, not something you do.
Finally, the hype was just too much for those charged with SOA to resist. Projects selected the technology first, then the approach and architecture. That's completely backwards.

]

Anne Thomas Manes blogged ‘SOA is Dead; Long Live Services’, and stated that “SOA is survived by its offspring: mashups, BPM, SaaS, Cloud Computing, and all other architectural approaches that depend on “services”.” The logic that follows her thoughts is more or less: “Successful SOA (i.e., application re-architecture) requires disruption to the status quo. SOA is not simply a matter of deploying new technology and building service interfaces to existing applications; it requires redesign of the application portfolio. And it requires a massive shift in the way IT operates. The small select group of organizations that has seen spectacular gains from SOA did so by treating it as an agent of transformation. In each of these success stories, SOA was just one aspect of the transformation effort. And here’s the secret to success: SOA needs to be part of something bigger.”

And I fully agree with this last statement. Actually, don’t forget that SOA = Service Oriented Architecture.

The illusions around the above SOA concepts arise when SOA is considered an option, a tool, a software platform, or some product under the SOA-name umbrella that will magically solve all problems. More or less, this was a marketing-oriented illusion from some (actually the biggest) vendors on the street. And all these illusions, combined with the financial aspects (more and more expensive new software, services from vendors and external co-operators, consulting etc.), gave the real S.O.A. approach an aroma worth avoiding. Ok, if you mean SOA (the marketing term above) rather than S.O.A. (the Service Oriented Architecture), I agree that if it is not dead, it soon will be.

As stated in numerous posts on this blog, SOA is actually not expensive. Both the open source community and some vendors have released tools and frameworks that can do the job, or parts of the job. The only consideration is the correct definition of the job, a.k.a. the A in S.O.A. Actually, all posts under the SOA and ESB labels lean toward this concept.

Anne Thomas Manes, in her post ‘SOA doesn’t need to be expensive’, blogged about the main ways in which S.O.A. (and not SOA) can avoid being expensive:

[

It's a common misconception that SOA is expensive. Many organizations believe that they need to acquire a boat-load of new products and technologies to get started with SOA. First on the list of product acquisitions is an ESB, followed by registries, repositories, and security appliances. In these belt-tightening times, many SOA initiatives will be challenged to raise the funding required to acquire these products. So what's a team to do? Pack up and wait for better times? Or make do with what you have?

In truth (and much to the vendors' dismay), you don't need a bunch of new products to do SOA. SOA is about the way you design your solutions -- not about the technology you use to build them. An ESB is a "nice to have", but it's not a prerequisite. Pretty much all programming environments (Java, .NET, Ruby, Groovy, PHP, JavaScript, COBOL, CICS, etc) now include native support for building services -- both method- or document-oriented services built with SOAP and WSDL and RESTful services built with HTTP. You can also build document-oriented services using your favorite message-oriented middleware product (although MOM protocols will limit your reach and interoperability options). (I described the differences among the three types of services here. And see here and here for some great references on RESTful services.)

The only new product that an organization should really budget for is a management solution -- one that supports administration, monitoring, security, and mediation of service interactions (e.g., AmberPoint, Progress Actional, SOA Software Service Manager. Many platform vendors also provide management solutions--sometimes reselling AmberPoint [Oracle and Tibco]).

For organizations that need an ESB, consider open source solutions. Mike Kavis posted a nice summary of open source SOA stacks here. David Linthicum has also been extolling the benefits of an open source SOA product strategy here. Both Mike and Dave point out that the open source solutions tend to be easier to use and more cohesively integrated than the big vendor alternatives. Unfortunately, none of the open source solutions provides a comprehensive management solution yet.

]

The reason I am using this part of Ms Manes’s post is that I feel it best describes my overall thesis on this subject, and I hope it describes yours too.

If you need more applied, real-world arguments on this, or want to see how it can be applied to your case, please drop me a note.