
Friday

Applications architecture and the S.O.A. alternative

Observing the developments in IT over the last few years (no more than ten), one will conclude that the definition of ‘application' is changing rapidly. As the industry moves from an application-centric architecture to a Service Oriented Architecture (S.O.A.), the focus for building functionality is shifting to Service Oriented Business Applications (a nice description of these, apart from the various posts in “evolving through …”, can be found in a more concentrated form in SOBA).

This view offers an alternative: applications seen as a set of services and components working together to fulfil a certain business need. The technologies, specifications, and standards for specifying these components and services may vary, and the choice is up to you and, of course, your environment as a whole.

In various posts here I propose and explore this kind of approach from time to time, creating any needed components and services in a model-driven way and always keeping an S.O.A. extension in mind. Even then, however, you'll need some kind of programming model that defines interfaces, implementations, and references. The advantage of this alternative is that when building an application or solution based on services, those services can be created specifically for the application as well as reused from existing systems and applications. The discussion behind all this is known as Service Component Architecture (SCA): the programming model that specifies how to create and implement services, how to reuse them, and how to assemble or compose services into solutions.

The reason for this post was an interesting project on which a friend involved in it asked my opinion. The project is AspireRFID, one of the very, very few RFID implementations run by the open source community, and therefore a candidate new niche for that domain. Although the existing approach is very interesting and very well analysed, I couldn't stop myself from saying that a more service-oriented approach should be taken. This alternative might seem a little obscure, since the technicalities of the RFID area might relate to a different business model than the well-explored ‘B’ in SOBA. Actually, this is not the case. The area is a promising candidate for that ‘B’ precisely because its so-far limited relations to ERPs, CRMs or whatever are the well-defined areas in which the various open source alternatives (OSGi, ServiceMix, FUSE, Camel, Newton, Jini, Tuscany, etc.) could be used. So, in that case, the solution might not be an S.O.A., but a step towards one.

The who’s and why’s of Service Component Architecture (SCA)

As stated here and here, mentioning an alternative is a crucial concern when the meaning one gives to S.O.A. is involved. SCA models the "A" in S.O.A. (and therefore not SOA). It provides a model for service construction, service assembly, and deployment. SCA supports components defined in multiple languages, deployed in multiple container technologies, and accessed through multiple service access methods. In other words, with the SCA approach we have a set of specifications, addressing both runtime vendors and enterprise developers/architects, for building service-oriented enterprise applications.

The SCA specifications try to define a model for building Service-Oriented (Business) Applications (SOBA). These specifications are based on the following design principles, as stated by OSOA:
  • Independent of programming language or underlying implementation.
  • Independent of host container technology.
  • Loose coupling between components.
  • Metadata vendor-independence.
  • Security, transaction and reliability modelled as policies.
  • Recursive composition.

The first draft of SCA was published in November 2005 by the Open SOA Collaboration (OSOA), a consortium of vendors and companies interested in supporting or building an SOA; among them are IBM, Oracle, SAP, Sun, and Tibco. In March 2007 the final 1.0 set of specifications was released. In July 2007 the specifications were adopted by OASIS, and they are being developed further in the OASIS Open Composite Services Architecture (Open CSA) member section.

Concluding - Current SCA Specifications at OpenCSA

The current set of SCA specifications hosted on the OSOA site includes:
  • SCA Assembly Model
  • SCA Policy Framework
  • SCA Java Client & Implementation
  • SCA BPEL Client & Implementation
  • SCA Spring Client & Implementation
  • SCA C++ Client & Implementation
  • SCA Web Services Binding
  • SCA JMS Binding

The important element of the SCA approach is that, for the first time, we have a set of specifications that address how an S.O.A. can be built rather than what S.O.A. is about. Let's take a bird's eye view of SCA as a technology.

The four main elements of the SCA, aka the related specifications to start with

The set of specifications can be split into four main elements: the assembly model specification, the component implementation specifications, the binding specifications, and the policy framework specification. These make a good starting and guiding point for the analysis and/or re-engineering needed for the project or solution-application at hand.

Assembly model specification: this model defines how to specify the structure of a composite application. It defines which services are assembled into a SOBA and with which components those services are implemented. Each component can itself be a composite, implemented by combining the services of one or more other components (i.e. recursive composition). So, in short, the assembly model specifies the components, the interfaces (services) of each component, and the references between components, in a unified, language-independent way. The assembly model is defined with XML files.

Using the SCA assembly model, we first have to define the recursive assembly of components into services. Recursive assembly allows fine-grained components to be wired into what in SCA terms are called composites. These composites can then be used as components in higher-level composites and applications. The type of a component generally defines the services exposed by the component, the properties that can be configured on the component, and the dependencies the component has on services exposed by other components.

An example of an SCA assembly model is shown in the diagram below:


Diagram 1: SCA assembly model

In the diagram above, Composite X is composed of two components, Component Y and Component Z, which are themselves composites. Composite X exposes its service using web services.

Composite Y is composed of a Java component, a Groovy component and a Jython component. The service offered by the Java component is promoted outside the composite. The Java component depends on services offered by the Groovy and Jython components; in SCA terminology these dependencies are called references. All the references of the Java component are satisfied by services offered by other components within the same composite. However, the reference on the Groovy component has been promoted outside the composite and has to be satisfied in a context in which Composite Y is used as a component. In the diagram above this is achieved using the service promoted by Composite Z.

Composite Z is composed of a Java component, a BPEL component and a Spring component; the service offered by the BPEL component is promoted for use when the composite is used as a component in a higher-level composite.

As you can see, SCA allows building compositional service-oriented applications from components built on heterogeneous technologies and platforms. The SCA assembly model is generally expressed in an XML-based language called the Service Component Description Language (SCDL).
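To make the assembly model concrete, here is a rough, hand-written SCDL sketch of a small composite: a promoted service, an internally wired reference, and a promoted (unsatisfied) reference. The element names follow the OSOA 1.0 namespace, but all component, service, and class names are invented for illustration:

```xml
<!-- A hypothetical SCDL composite; all names are illustrative. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderComposite">

  <!-- The Java component's service is promoted to be the composite's service. -->
  <service name="OrderService" promote="OrderComponent"/>

  <component name="OrderComponent">
    <implementation.java class="org.example.OrderServiceImpl"/>
    <!-- A reference wired to another component in the same composite. -->
    <reference name="pricing" target="PricingComponent"/>
  </component>

  <component name="PricingComponent">
    <implementation.java class="org.example.PricingServiceImpl"/>
  </component>

  <!-- An unsatisfied reference, promoted so an enclosing composite can wire it. -->
  <reference name="inventory" promote="OrderComponent/inventory"/>
</composite>
```

Note how recursion falls out naturally: an enclosing composite could use OrderComposite as a component and wire its promoted "inventory" reference, exactly as Composite Z satisfies Composite Y's promoted reference in the diagram.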

Component implementation specifications: these specifications define how a component is actually written in a particular programming language. Components are the main building blocks of an application built using SCA. A component is a running instance of an implementation that has been appropriately configured. The implementation provides the actual function of the component and can be defined with a Java class, a BPEL process, a Spring bean, or C++ or C code. Several containers implementing the SCA standard (meaning that they can run SCA components) support additional implementation types such as .NET, OSGi bundles, etc. In theory a component can be implemented with any technology, as long as it relies on a common set of abstractions: services, references, properties, and bindings.
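A minimal, runtime-free sketch of what these abstractions boil down to in plain Java: a service (interface), an implementation, a reference (a dependency on another service) and a property (a configured value). All names here are invented; in a real SCA container the hand-wiring in main() would be done from the SCDL (and in OSOA's Java model the service/reference roles are marked with annotations):

```java
// A sketch of the SCA component abstractions without any SCA runtime.

interface PricingService {                       // a "service" contract
    double priceFor(String item);
}

class FlatPricing implements PricingService {    // one component implementation
    public double priceFor(String item) { return 10.0; }
}

class OrderComponent {                           // a component with a reference and a property
    private final PricingService pricing;        // "reference": satisfied by a wire
    private final double taxRate;                // "property": set in configuration

    OrderComponent(PricingService pricing, double taxRate) {
        this.pricing = pricing;
        this.taxRate = taxRate;
    }

    double totalFor(String item) {               // the business function exposed as a service
        return pricing.priceFor(item) * (1 + taxRate);
    }
}

public class AssemblySketch {
    public static void main(String[] args) {
        // "Wiring" done by hand here; an SCA runtime does this from the XML.
        OrderComponent order = new OrderComponent(new FlatPricing(), 0.25);
        System.out.println(order.totalFor("book")); // prints 12.5
    }
}
```

The point of SCA is exactly that OrderComponent never names FlatPricing: swap the implementation (a Groovy script, a remote web service) by changing the wire, not the code.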

The following component implementation technologies are currently described:

Binding specifications: these specifications define how the services published by a component can be accessed. Binding types can be configured both for external systems and for internal wires between components. The current binding types described by OSOA are bindings using SOAP (the web services binding), JMS, EJB, and JCA. Several containers implementing the SCA standard support additional binding types such as RMI, Atom, JSON, etc. An SCA runtime should implement at least the SCA service and web service binding types. The SCA service binding type should only be used for communication between composites and components within an SCA domain. The way this binding type is implemented is not defined by the SCA specification, and it can be implemented differently by different SCA runtimes; it is not intended to be interoperable. For interoperability, the standardized binding types such as web services have to be used.

Officially described binding types:

Policy framework specification: this describes how to add non-functional requirements to services. Two kinds of policies exist: interaction and implementation policies. Interaction policies affect the contract between a service requestor and a service provider; examples are message protection, authentication, and reliable messaging. Implementation policies affect the contract between a component and its container; examples are authorization and transaction strategies.
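In SCDL, such requirements are typically attached declaratively as intents rather than coded. A hypothetical fragment (the intent names and service name are illustrative, and real intents are namespace-qualified):

```xml
<service name="OrderService">
  <!-- Interaction policy intents: the runtime must satisfy these
       when it realizes the web service binding. -->
  <binding.ws requires="confidentiality authentication"/>
</service>
```

The component code stays unchanged; the runtime maps each intent to a concrete policy (e.g. a WS-Security configuration).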

Summary

SCA provides a set of specifications, addressing both runtime vendors and enterprise developers/architects, for building service-oriented enterprise applications. For the first time we have a set of specifications that address how SOA can be built rather than what SOA is about. I have provided a bird's eye view of SCA as a technology.

The related concepts in detail

As stated before, the main building block of a Service-Oriented Business Application is the component. Figure 1 shows the elements of a component. A component consists of a configured piece of implementation providing some business function. An implementation can be specified in any technology, including other SCA composites. The business function of a component is published as a service. The implementation can have dependencies on the services of other components; we call these dependencies references. Implementations can have properties, which are set by the component (i.e. in the XML configuration of the component).

SCA Component diagram

Figure 1 - SCA Component diagram (taken from the SCA Assembly Model specification)

Components can be combined into assemblies, thereby forming a business solution; this is where the ‘B’ for Business comes in. SCA calls these assemblies composites. As shown in Figure 2, a composite consists of multiple components connected by wires. A wire connects a reference and a service and specifies a binding type for the connection. Services of components can be promoted, i.e. defined as services of the composite; the same holds for references. So, in principle, a composite is a component implemented with other components and wires. As stated before, components can in turn be implemented with composites, providing a way to construct a business solution hierarchically, where high-level services (often called composite services) are implemented internally by sets of lower-level services.

SCA Composite diagram

Figure 2 - SCA Composite diagram (taken from the SCA Assembly Model specification)

The service (or interface, if you like) of a component can be specified with a Java interface or a WSDL portType. Such a service description specifies which operations can be called on the composite, including their parameters and return types. For each service the method of access can be defined; as seen before, SCA calls this a binding type.

Example XML structure defining a composite

Figure 3 - Example XML structure defining a composite

Figure 3 shows an example XML structure defining an SCA composite. It is not completely filled in, but it gives a rough idea of what the configuration looks like. The implementation tag of a component is configured based on the chosen implementation technology, e.g. Java, BPEL, etc. In the Java case, the implementation tag names the Java class implementing the component.

Deployment and the Service Component Architecture contributions, clouds, and nodes

SCA composites are deployed within an SCA domain. An SCA domain (as shown in Figure 4) represents a set of services providing an area of business functionality that is controlled by a single organization. A single SCA domain defines the boundary of visibility for all SCA mechanisms. For example, SCA service bindings (recall the SCA binding types explained earlier) only work within a single SCA domain; connections to services outside the domain must use the standardized binding types, such as web services. The SCA policy definitions also only work within the context of a single domain. In general, external clients of a service developed and deployed using SCA should not be able to tell that SCA was used to implement it; that is an implementation detail.

SCA Domain diagram

Figure 4 - SCA Domain diagram (taken from the SCA Assembly Model specification)

An SCA domain is usually configured using XML files. However, an SCA runtime may also allow the dynamic modification of the configuration at runtime.

An SCA domain may require a large number of different artifacts in order to work. In general, these artifacts consist of the XML configuration of the composites, components, wires, services, and references. Of course, we also need the implementations of the components, specified in all kinds of technologies (e.g. Java, C++, BPEL, etc.). To bundle these artifacts, SCA defines an interoperable packaging format for contributions (ZIP). SCA runtimes may also support other packaging formats such as JAR, EAR, WAR, OSGi bundles, etc. Each contribution complies with at least the following characteristics:

  • It must be possible to present the artifacts of the packaging to SCA as a hierarchy of resources based off of a single root.
  • A directory resource should exist at the root of the hierarchy named META-INF.
  • A document should exist directly under the META-INF directory named ‘sca-contribution.xml' which lists the SCA Composites within the contribution that are runnable.

A goal of the SCA approach to deployment is that the contents of a contribution should not need to be modified in order to install and use the contents of the contribution in a domain.
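For illustration, a minimal META-INF/sca-contribution.xml listing one deployable composite might look like this (the namespace prefix and composite name are invented for the example):

```xml
<contribution xmlns="http://www.osoa.org/xmlns/sca/1.0"
              xmlns:ex="http://example.org/orders">
  <!-- The composites in this contribution that are runnable/deployable. -->
  <deployable composite="ex:OrderComposite"/>
</contribution>
```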

An SCA domain can be distributed over a series of interconnected runtime nodes, i.e. it is possible to define a distributed SCA domain. This enables the creation of a cloud consisting of different nodes each running a contribution within the same SCA domain. Nodes can run on separate physical systems. Composites running on nodes can dynamically connect to other composites based on the composite name instead of its location (no matter in which node the other composite lives), because they all run in the same domain. This also allows for dynamic scaling, load balancing, etc. Figure 5 shows an example distributed SCA domain, running on three different nodes. In reality you don't need different nodes (and even different components) for this example, but it makes the idea clear.

A distributed SCA domain running on multiple nodes

Figure 5 - A distributed SCA domain running on multiple nodes (taken from the Tuscany documentation)

Implementations of the Service Component Architecture

The SCA is broadly supported with both commercial and open source implementations. 

Open source implementations:
  • Apache Tuscany (the official reference implementation). The current version is 1.4, released in January 2009. A 2.0 version, which they aim to run in an OSGi-enabled environment, is also in the works.
  • The Newton Project.
  • Fabric3. A platform for developing, assembling and managing distributed applications on SCA standards. Fabric3 leverages SCA to provide a standard, simplified programming model for creating services and assembling them into applications.

Commercial implementations:

Concluding… 

Concluding this analysis: combining SCA with a holistic Reference Architecture for S.O.A. gives us a step towards a complete architectural design, and a guide for implementing the scheme appropriate to the case at hand. The concerns covered by the Reference Architecture approach and by the SCA approach presented here, together with the strategic decisions the organisation or project has in ‘mind’, make it possible to produce specific specifications (in the technical, business and conceptual analysis) of the meanings and approaches being aimed at. The mission of these approaches is therefore the definition of the ‘B’s and ‘A’s, and the clarification of their meaning for the entities involved, from a context-based view.

So, as far as the S.O.A. facets that have to do with the proper definition of Architecture are concerned, divide and conquer offers a logical solution. Taking, as the next step after the Reference Architecture’s decisions, the Service Component Architecture (SCA), which provides the means for analysing and decomposing the Service Oriented Business Applications (SOBA) involved, we get a clearer decomposition and definition of the ‘B’ (the business meaning in the case at hand) from the ‘A’ (of the whole architecture and organisation or project).


Thursday

The Class Casting Exception in OSGi Environments

In previous posts I have dealt with the OSGi approach to building services, both the general approach and using iPOJO. These are fine if one is willing to implement a set of service bundles or a project from scratch, or to develop a solution that is separate from, or independent of, existing IT assets. Unfortunately, this is rarely the case. Usually there is a need either to consume objects from existing applications or, the other way around, to consume objects created in the OSGi environment; real life demands both. The first issue is normally easy to deal with, and is actually the way you should go. But keeping in mind the S.O.A. enablement that your project, solution, architecture, code, etc. should comply with, there is an issue on the horizon.

Demystifying the Issue one level down

Consider the common condition: You have a non-OSGi Java Application in which an OSGi-based application is embedded. You want to consume objects created in the OSGi-based application (an OSGi Service for example).

The above setup presents a problem: when you try to consume (invoke methods on, assign to local variables, etc.) the objects created inside the OSGi environment from the non-OSGi application, you get a ClassCastException.

The reason for this exception is that any class loaded inside the OSGi environment is loaded by a special class loader provided by the OSGi framework, which is different from the class loader used by the non-OSGi application. So even if the class in the non-OSGi application is exactly the same as the one in the OSGi-based application, the objects will still be treated as being of different types, since their classes were loaded by different class loaders.

Getting around

If you only want to perform a single method invocation or two on the object, without passing parameters of conflicting types (classes loaded from different class loaders), just invoke the method using reflection. If you have a more complex case, use utilities that allow you to bridge the classes. Here comes another key resource for OSGi work: DynamicJava. You can overcome the above exception with the help of API-Bridge, for instance, which provides exactly such capabilities; this API was in fact created specifically to address this problem.
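The simple reflective case looks like this. A self-contained sketch: here a plain String stands in for the object handed out by the OSGi environment, which you hold only as Object because casting to your own copy of its type would fail:

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    public static void main(String[] args) throws Exception {
        // Pretend 'service' came out of the OSGi environment: we keep it
        // typed as Object, since casting it to a class loaded by our own
        // class loader would throw ClassCastException.
        Object service = "hello from osgi";

        // Invoke a method by name via reflection instead of casting.
        Method m = service.getClass().getMethod("length");
        Object result = m.invoke(service);
        System.out.println(result); // prints 15
    }
}
```

This works fine for a call or two; as soon as parameters or return values of the foreign types are involved on both sides, you need a real bridge.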

Bridge objects act as if their type belongs to the target class loader, so when the non-OSGi application works with the bridged object, it appears to have been loaded from the same class loader. Let's see an example that clarifies how API-Bridge works.

API-Bridge

Below is a code snippet that loads the same class using two different class loaders and then bridges them using the API-Bridge provided by DynamicJava resources.

public void testApiBridge() throws Exception {
    // Class A, which implements interface AInterface, is loaded by two
    // different class loaders, so the two copies are treated as different types.
    Class<?> aClass1 = getClassLoader1().loadClass("org.test.A");
    Class<?> aClass2 = getClassLoader2().loadClass("org.test.A");
    Class<?> aInterface1 = getClassLoader1().loadClass("org.test.AInterface");
    Class<?> aInterface2 = getClassLoader2().loadClass("org.test.AInterface");

    // Objects of the A class loaded from class loader 1 are not
    // assignable from objects loaded from class loader 2.
    assertFalse(aInterface1.isAssignableFrom(aInterface2));

    Object a2 = aClass2.newInstance();

    // The object is not castable, since it is an instance of a class
    // that was loaded using a different class loader.
    assertFalse(aInterface1.isInstance(a2));

    // We create an API Bridge that bridges classes (their fields,
    // method parameters, etc.) of the "org.test" package to
    // class loader 1.
    ApiBridge apiBridge = ApiBridge.getApiBridge(getClassLoader1(), "org.test");

    Object bridgedA2 = apiBridge.bridge(a2);
    // The bridged a2 object is now castable.
    assertTrue(aInterface1.isInstance(bridgedA2));
}

Although the example may not look very clear at first glance, note that we wrote only two lines of code to bridge the API: we bridged an object and verified that it is castable to the AInterface of a different class loader.



While API-Bridge supports classes when used as implementations, super classes and method parameters, casting is only supported for interfaces.
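That interface-only restriction follows from how such bridges are typically built: a java.lang.reflect.Proxy that implements the caller's interface and forwards each call reflectively to the foreign object. Below is a toy sketch of the idea; it is not DynamicJava's actual implementation, and all names are invented. ForeignGreeter stands in for a copy of the class loaded by another class loader, so it deliberately does not implement Greeter:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ToyBridge {
    // An interface as the caller's class loader sees it.
    public interface Greeter { String greet(String name); }

    // A "foreign" class: same method signature, but not a Greeter,
    // standing in for a copy loaded by a different class loader.
    public static class ForeignGreeter {
        public String greet(String name) { return "hello " + name; }
    }

    // Wrap 'target' in a proxy that implements 'iface' and forwards each
    // call reflectively -- roughly what an API bridge does per class.
    @SuppressWarnings("unchecked")
    public static <T> T bridge(Class<T> iface, Object target) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                (proxy, method, args) -> target.getClass()
                        .getMethod(method.getName(), method.getParameterTypes())
                        .invoke(target, args));
    }

    public static void main(String[] args) {
        Object foreign = new ForeignGreeter();
        // A direct cast would fail: foreign is not a Greeter.
        Greeter g = bridge(Greeter.class, foreign);
        System.out.println(g.greet("osgi")); // prints hello osgi
    }
}
```

Since Proxy can only manufacture classes that implement interfaces, casting a bridged object to a concrete class is not possible this way, which is consistent with the interface-only casting limitation above.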



Real World


An example of this problem is a Java web application running in an OSGi-oblivious servlet container (Tomcat, for example), with an OSGi-based application embedded inside it. The OSGi-based application registers an OSGi service that implements the javax.servlet.Servlet interface, and the web application has a delegator servlet which delegates requests to the registered OSGi service. The problem is that the web application can't consume the Servlet object, since it is loaded by a class loader of the OSGi environment, which causes a ClassCastException to be thrown. To overcome this, I used the API-Bridge to bridge the Servlet API. Below is the DelegatorServlet class, which delegates requests to the OSGi service and uses the API-Bridge to bridge the Servlet objects.





public class DelegatorServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        BundleContext bundleContext = getBundleContext();
        if (bundleContext != null) {
            ServiceReference servletServiceRef = bundleContext.getServiceReference(
                    Servlet.class.getName());
            if (servletServiceRef != null) {
                // We can't write a line like:
                //     Servlet servletObject =
                //         (Servlet) bundleContext.getService(servletServiceRef);
                // because a ClassCastException would be thrown: the Servlet class
                // in this code is loaded by the Tomcat class loader, while the
                // retrieved servlet is loaded by the OSGi class loader.
                Object servletObject = bundleContext.getService(servletServiceRef);

                ApiBridge apiBridge = ApiBridge.getApiBridge(
                        Thread.currentThread().getContextClassLoader(),
                        "javax.servlet", "javax.servlet.http");
                Servlet servlet = (Servlet) apiBridge.bridge(servletObject);
                // Note that method parameters will be bridged too by API Bridge,
                // so they can be passed into the OSGi environment.
                servlet.service(request, response);
            } else {
                printHtml("Error: A Servlet OSGi Service was not found.", response);
            }
        } else {
            printHtml("Error: Bundle Context was not found in the ServletContext.", response);
        }
    }

    protected BundleContext getBundleContext() {
        // Here we retrieve the Bundle Context of the OSGi-based application.
    }

    // ... other supporting code
}



Tuesday

On the Reference Architecture for SOA Adoption


While talking with various people and thinking over the case of S.O.A. (note: not SOA), in order to explore in more depth the holistic view needed, I came across Hariharan's post on a Reference Architecture for SOA Adoption. Hariharan states:

Fundamental objective of SOA is aligning your IT capabilities with business goals. SOA is not just IT architecture nor business strategy. It should be perfect combination of IT infrastructure and business strategy. So, when we are planning for SOA adoption, we need a strong roadmap and blueprint. Reference architecture model will be a blueprint of SOA.
Reference architecture is like a drawing of your corporate building. Before and after construction, you need blueprint for verification and reference. Similar to that SOA reference architecture could be a reference model for your enterprise business system. Reference architecture should be defined properly during the SOA roadmap definition phase. Refer my last blog in which I was talking about SOA roadmap.
Fundamentally, while defining reference architecture model for corporate, we should consider the following components as part of architecture.
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance
Currently, if you look at the SOA market, there are many product and service providers comes up with its’ own SOA reference architecture. Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely. We should consider that as just one of the reference point to finalize the vendor. We should design the right blueprint model for our corporate and SOA need. This is a crucial step that should be planned as part of the SOA implementation roadmap.

A good starting point. But, more or less, the ‘bird's eye’ view needed for S.O.A. is again incomplete. The reason is its services-oriented (actually web-services-oriented) view. As stated here, S.O.A. is more than integration: it is integration as well, but it is also about strategy and the future of the infrastructure at hand. So, concerning points 1, 2, 6, 8 and 12, the reference architecture should avoid, or be as irrelevant as possible to, the ‘marketing effect’ pushed by vendors, as stated here. This last issue is recognized in the last paragraph of the statement above:

“Many major SOA players like IBM, TIBCO, Web Methods, BEA Oracle and MGL has defined reference architecture based on their product catalog and services. If we refer, all has its own model with explanation which may puzzle the SOA adaptors to choose the right approach. One important point should be remembered that product vendor or service provider’s reference model may not suit for your requirements completely”

but it is not explored enough. In any case, I am not against any vendor, nor trying to construct a polemic against them when referring to the ‘marketing effect’. They are trying to do business and make money, extend their ROI and so on; that's fine. My point is that we must keep in mind the inner or hidden agenda of every co-operator when doing our business or work.

So we keep the 12 points above in hand, but we have to be more holistic, and therefore more concrete and safe, in our reference. This reference is going to be the ground on which we explore our enterprise niches and evolve through market changes. A nice starting point for what a reference architecture should look like is proposed by OASIS, among other proposals and standards, in the SOA Reference Architecture. Compliant with this reference is the Conceptual Integration Technical Reference Architecture from the Washington State Department of Information Services. In the Conceptual Integration Technical R.A. there is this diagram:

The conceptual reference architecture depicted in the diagram above (and defined in this document) adopts the Service-Oriented Architecture Reference Model (SOA-RM) developed by the Organization for the Advancement of Structured Information Standards (OASIS).

The SOA-RM defines its purpose as follows:

“A reference model is an abstract framework for understanding significant relationships among the entities of some environment. It enables the development of specific architectures using consistent standards or specifications supporting that environment. A reference model consists of a minimal set of unifying concepts, axioms and relationships within a particular problem domain, and is independent of specific standards, technologies, implementations, or other concrete details.” ([SOA-RM], p. 4).

“The goal of this reference model is to define the essence of service oriented architecture, and emerge with a vocabulary and a common understanding of SOA. It provides a normative reference that remains relevant for SOA as an abstract and powerful model, irrespective of the various and inevitable technology evolutions that will impact SOA.” ([SOA-RM], p. 4).

As you can see, the Reference is quite abstract. While the SOA-RM is a powerful model that provides the first vendor-neutral, open-standard definition of the service-oriented approach to integration, its abstract nature means that it is not capable of providing the architectural guidance needed for the actual design of integration solutions (or integrated software systems). That guidance comes from the definition of a reference architecture that rests on the foundation of the reference model’s concepts. The reference architecture builds on those concepts by specifying additional relationships, further defining and specifying some of the concepts, and adding key (high-level) software components necessary for integration solutions.

In the diagram above, SOA-RM concepts are shaded yellow. Concepts and components particular to the conceptual reference architecture defined by this document are shaded cyan. Relationships between concepts (indicated by arrows) are defined in the SOA-RM if the arrows connect concepts shaded yellow. Relationships between cyan-shaded concepts or between cyan-shaded and yellow-shaded concepts are particular to the reference architecture.

In order to conclude with Hariharan’s 12 points, which try to describe and define the web services characteristics in a further, detailed plan, we need some preliminaries to serve as the base for the next step of Hariharan’s analysis:

Firstly, in the context of the organisation at hand (for which we will define a more solid S.O.A. Reference Architecture), there should be an analysis of the models that exist and the models that we are going to need in the near future (derived from the organisation’s short- and long-term strategic decisions). That is, an analysis of the models of the various entities that play a crucial role in the infrastructure. In general, these entities fall into two main categories:

  1. Data Modelling
  2. Generic Entity Modelling

What are the nature and model of the data that need to be interchanged inside the infrastructure? What formats, categories and relations (dependencies) do they follow? This analysis may sometimes be needed from a system’s point of view, in the sense that the main system A used in the organisation handles this data internally (e.g. time/date/currency/location GIS/post code etc.).

What are the main compound entities that exist and should be interchanged inside the organisation’s infrastructure? How do they depend on the data models predefined above? E.g. Customer, Order, Products etc., which will be analysed in the context of the main data models and their dependencies; moreover, they will incorporate any data elements missing from the Data Modelling analysis, depicting relations, hierarchies etc. (Usually any attributes missing from the Data Modelling should be declared and bound on the entities’ relations and modelling.)
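To make the A1.1/A1.2 split concrete, here is a minimal sketch, with entirely hypothetical type and field names, of entity models (Customer, Order) built on top of shared data models rather than on bare primitives:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical data models (A1.1): shared, atomic types reused everywhere.
@dataclass(frozen=True)
class PostCode:
    value: str

@dataclass(frozen=True)
class Money:
    amount: float
    currency: str  # e.g. an ISO 4217 code such as "EUR"

# Hypothetical entity models (A1.2): compound entities that depend on
# the data models above instead of redefining their own formats.
@dataclass
class Customer:
    customer_id: str
    name: str
    post_code: PostCode  # bound to the shared data model, not a bare string

@dataclass
class Order:
    order_id: str
    customer: Customer   # entity-to-entity dependency
    total: Money
    placed_on: date
```

Declaring the dependencies this way is what lets the later flow analysis see exactly which data models an entity drags along with it.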

So now we can describe our (existing) flows and find the business and IT rules that are needed in order to be able to compose services on the fly or from existing ones. These rules and this analysis will show us whether there is a need for the ‘canonical modelling’ we are using. This depends, of course, on the systems used or to be used, and therefore on the strategic decisions behind the organisation’s next steps (e.g. in telcos, double/triple play demands an extension of the product models), and of course on the vendor-compliant modelling schemes of the existing or new systems. Therefore, Hariharan’s list extends to:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance
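As a toy illustration of the canonical-modelling decision discussed above, the sketch below (hypothetical system and field names throughout) routes two per-system mappings through one canonical form, so N systems need N mappings instead of point-to-point translations between every pair:

```python
# A minimal sketch of 'canonical modelling': each system maps its own
# representation to one shared (canonical) customer form. All field
# names here are invented for illustration.

def crm_to_canonical(crm_record: dict) -> dict:
    """Map a (hypothetical) CRM record to the canonical customer model."""
    return {
        "customer_id": crm_record["CustID"],
        "name": crm_record["FullName"],
        "post_code": crm_record["Zip"],
    }

def canonical_to_billing(customer: dict) -> dict:
    """Map the canonical model to a (hypothetical) billing system's format."""
    return {
        "ACCOUNT_NO": customer["customer_id"],
        "ACCOUNT_NAME": customer["name"],
        "POSTAL": customer["post_code"],
    }

# CRM -> canonical -> billing, without the CRM ever knowing billing's schema.
billing = canonical_to_billing(crm_to_canonical(
    {"CustID": "C-42", "FullName": "Acme Ltd", "Zip": "11523"}))
```

Whether this indirection pays off is exactly the question the rules analysis above is meant to answer; for two tightly coupled systems it may be overkill.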

Now we have the flows part. Again, having A1.1–A1.4, we can construct the existing data and entity flows, meaning more generic business flows that, although they correspond to business analysis, include the IT/system steps for complying with the modelling restrictions of the organisation’s infrastructure. These are not only services, although if we generalise the term ‘service’ they qualify. There might be point-to-point system interconnections, hidden from the business flow, for requesting, retrieving and consolidating data and so on.

So we should also obtain:

A2.1 Existing Flows
A2.2 New-proposed Flows

From this point, having

a) our system’s capabilities
b) our modelling rules

we should be able to identify the candidate services that comply with A2.1 and A2.2, and create the web services (or the RPCs, XMLs, SQLs and so on) to obtain the repository of available actions. So the list becomes:

A1.1. Existing Data Modelling
A1.2. Existing Entities Modelling
A1.3. To-Be Data Modelling
A1.4. To-Be Entities Modelling
1. Infrastructure and components services
2. Third party communication and data sharing services
A2.1 Existing Flows
A2.2 New-proposed Flows
3. Business rules services
4. Business process services
5. Data sharing and transformation services
6. Identity and Security Services
A6.1. Services Decomposition Strategy
7. Packaged Application access services
8. Integration and Event management services
9. Presentation Services
10. Registry and Repository
11. Messaging and Quality
12. Governance and Orchestration
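A hedged sketch of the step just described: a flow (A2.1/A2.2) decomposed into candidate services that can be recomposed. The services and the flow are illustrative only, not a real implementation, and the step names map loosely onto items in the list above:

```python
# Each step of an analysed 'order capture' flow becomes a callable
# candidate service; the flow itself is just a composition of them.

def validate_order(order: dict) -> dict:
    # Business rules service (cf. item 3): reject empty orders.
    if not order.get("lines"):
        raise ValueError("order has no lines")
    return order

def price_order(order: dict) -> dict:
    # Data transformation service (cf. item 5): enrich with a total.
    order["total"] = sum(l["qty"] * l["unit_price"] for l in order["lines"])
    return order

def submit_order(order: dict) -> dict:
    # Packaged application access service (cf. item 7): hand over to a back end.
    order["status"] = "submitted"
    return order

# The flow as an ordered composition of the candidate services.
ORDER_FLOW = [validate_order, price_order, submit_order]

def run_flow(flow, payload):
    for service in flow:
        payload = service(payload)
    return payload
```

Because each step is a standalone callable, the same services can be re-assembled into a New-proposed Flow (A2.2) without rewriting them.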

With these in mind we should be able to develop a Decomposition Strategy (A6.1 in the list), by which we can deconstruct and reconstruct meaningful information from existing services, categorise all the possible services to be used in the organisation, keep services loosely coupled, and explore, in the best possible way under the organisation’s context, the orchestration abilities of the services and of course their re-usability. In all the above we should keep in mind Mike Kavis’s post concerning the Top 10 Reasons Why People Make SOA Fail.

[

    1. They fail to explain SOA's business value.
    2. They underestimate the impact of organizational change.
    3. They fail to secure strong executive sponsorship.
    4. They attempt to do SOA "on the cheap."
    5. They lack the required skills to deliver SOA.
    6. They have poor project management.
    7. They think of SOA as a project instead of an architecture.
    8. They underestimate the complexity of SOA.
    9. They fail to implement and adhere to SOA governance.
    10. They let the vendors drive the architecture.

]
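As a toy illustration of the Decomposition Strategy (A6.1) mentioned above, the sketch below (all names invented) extracts finer-grained, loosely coupled services from an imagined monolithic operation and then recomposes the same information from them:

```python
# Hypothetical monolithic operation: fetches, computes and formats in one shot.
def legacy_get_customer_report(customer_id: str) -> dict:
    return {"id": customer_id, "orders": 3, "balance": 120.0,
            "summary": "3 orders, balance 120.0"}

# Decomposed, reusable services extracted from the monolith:
def get_order_count(customer_id: str) -> int:
    return legacy_get_customer_report(customer_id)["orders"]

def get_balance(customer_id: str) -> float:
    return legacy_get_customer_report(customer_id)["balance"]

def summarise(orders: int, balance: float) -> str:
    # Recomposition: the original summary, rebuilt from the finer services.
    return f"{orders} orders, balance {balance}"
```

The point is not the trivial functions but the shape: once the meaningful parts are separately addressable, each can be re-used, replaced, or orchestrated independently of the legacy operation.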

If you have any additions, or need help constructing a more concrete and specific Reference Model for your case, drop me a line. All comments are welcome.

Wednesday

S.O.A.s ‘nature’. On the integration issue – Web Services: soapbuilders reloaded and the prisoner’s dilemma

As stated in S.O.A.s ‘nature’ and integration, S.O.A. is more than integration. But one of the most practical and real-world aspects and promises of SOAs is integration, especially integration from the interoperability point of view. In this respect, the issue lies in the Web Services interoperability capabilities that, after appropriate analysis and model consolidation, they can provide to the proposed solution.

Interoperability is one of the design principles that distinguish Web Services as service middleware from its forefathers: RPC and distributed object computing and from its uncle: message-oriented middleware.

Over the last couple of years there has been a trend toward vendors offering complete SOA suites, focusing on strong cohesion (real or imagined) at the suite level. In this model, there's less motivation for cross vendor interoperability and open integration. This trend exposes the dangers of going with a SOA suite: vendor lock-in; isolated, non-integrated components; and exposure to the risk of needing ongoing consulting services for bespoke integration. As vendor consolidation slows this year, but continues, infrastructure suites will continue to dominate and selecting products to populate a heterogeneous services infrastructure will remain difficult. Actually, this is the ‘marketing’ case of the SOA adoption, instead of S.O.A., as discussed in SOA marketing and expenses. On the misleading promise on SOAs and what is dying?.

Under these assumptions, any activity that tries to provide a solid ground for neutral, open interoperability should be welcomed. So I’m glad about the announcement of the Web Services Test Forum (WSTF), a vendor and end-user coalition set up to test Web Services interoperability scenarios. The approach is described in this essential post by Oracle’s Gilbert Pilz and another by Doug Davis from IBM.

December 8, 2008 – The Web Services Test Forum (WSTF) launched today, providing an open community to improve the quality of the Web services standards, with initial membership from Active Endpoints, AIAG, Axway, CISCO, eviware, FORD Motor Co., Fujitsu, Hitachi, IBM, Oracle, Red Hat, Software AG, Teamlog, and TIBCO Software Inc. Using customer-based scenarios, interoperability is validated in a multi-vendor testing environment. Customers and vendors alike, independent of their geographic location, can dynamically test their applications against available implementations to ensure interoperability is preserved. As an open community, WSTF has made it easy to introduce new interoperability scenarios and approve work through simple majority governance.

So it seems that the aforementioned ‘marketing’ SOA issues were on the vendors’ radar, and some ‘fix-ing’ actions are taking place. Over the long term, of course, but it is something.

This forum is an evolution in the interoperability story begun by the soapbuilders group. This group of SOAP stack vendors and interested 3rd parties was created in 2001 – kicked off by this seemingly innocent email from Tony Hong. Credit goes to IBM and Microsoft for nurturing the idea with support. The work of soapbuilders was carried on by Web Services Protocol Workshops that Jorgen Thelin ran at Microsoft.

And the ‘fix-ing’ action proof:

We need to ask ourselves: why is this group coming together now? Why has the prophecy not come true? Vendors are being beaten up (not enough, IMO) by end-users for their interop failings – so in one way it’s a “look, we are doing something” measure.

Also, how does this relate to the work of the Web Service Interoperability Organization (WS-I)? Is this an implicit sigh of desperation at the lack of progress at the WS-I? For example, it’s taken a long time to get the crucial Reliable Secure Profile out of that hopper.

On the WSTF announcement call, Burton Group’s Anne Thomas Manes asked why this effort is not driven through the WS-I. Steve Harris from Oracle explained that while WS-I is a consensus-driven effort, WSTF brings a different approach to interoperability. Harris highlighted vendor collaboration and lowering the barriers to entry as differentiators. IBM's Karla Norsworthy added that WSTF is complementary to WS-I, a more lightweight approach, giving the example that the forum could easily bring a few vendors together to test a scenario. Many of the WSTF’s 15 members are also members of WS-I.

Will WSTF make a real contribution to interoperability?

I’m impressed and made hopeful by a number of things. Firstly, that membership obliges a vendor to support live endpoints for each scenario. This means that debate and negotiation will not be the deliverables of the forum; live test results, published on the internet will be. We can view testing between the implementations as a fully connected network, i.e., every endpoint is tested with every other, so each additional implementation adds considerably (as c=(n^2 – n) / 2 for you topology geeks) to the level of interoperability assurance.
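The pair count mentioned above can be checked in a few lines; the formula is the fully connected testing model from the text, c = (n² − n) / 2:

```python
def interop_pairs(n: int) -> int:
    """Pairwise interop tests needed for n implementations: (n^2 - n) / 2."""
    return (n * n - n) // 2

# With the WSTF's 15 members this gives 105 pairwise combinations,
# and going from n to n + 1 members adds n new pairs to test.
```

This is why each additional live endpoint "adds considerably" to the assurance level: the marginal cost of joining is one endpoint, but the marginal coverage is n new tested pairs.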

Secondly, having end-users like Ford and AIAG in the forum is an important step. Sure, there are a small number of end-users right now, but the usefulness of this forum grows dramatically with the number of end-user enterprises participating. The forum process encourages end-users to submit requirements for the tested scenarios which usefully turns attention onto things other than wire formats.

Actually, the real question is whether the participants, especially the vendors, recognise this opportunity as a prisoner’s dilemma scenario? If everybody co-operates, everybody benefits.

A number of issues may limit the WSTF’s impact.  Clearly, not having Microsoft as a member is a problem. While industry manoeuvres might keep analysts and vendors themselves amused, end-users couldn’t care less; they just want software to work. So having Microsoft involved is a core credibility issue for the WSTF. According to WSTF spokespeople, other members will be hosting Microsoft endpoints. That’s not quite the same thing. Microsoft are committed to WS-I and also to Apache Stonehenge which has similar aims to WSTF.

I think other web services approaches need to be included. To launch an initiative purely directed at Web Services in December 2008 looks antiquated, although the IBM and Oracle spokespeople did claim that other approaches, like RESTful web services, are not excluded in theory.

As with any community based activity, the only real measure of its success is its interactivity. Let’s give WSTF time and then count the live interoperable implementations and end-user organizations active in the forum.