Friday, March 14, 2014

The Importance of Application Integration Increases

How could you benefit from an Integration Competence Centre (ICC) or an Integration Factory (IF)?

Are the processes for integration development and governance in shape in your organization? If not, NOW is the time to sort them out. Five trends are emerging:
  1. Hyper-connectivity, a world where people, processes, data and almost everything interact with each other seamlessly, is quickly becoming the norm.
  2. All other processes, i.e. those that are not core business related, will be in the cloud or use vanilla applications.
  3. Collected data must be utilized better.
  4. Decisions and opportunities are valid only within ever-narrowing time frames.
  5. Holistic, context- and location-aware services are built to attract consumers and business users.

This means that application and business process development will increasingly be about integration. When choosing new applications or cloud services, the ability to integrate with existing or new business processes will be a higher priority than before.

Because of this increased importance, and to reduce the time from idea to launch, organizations must:
  1. Align integration practices
  2. Increase reusability and flexibility
  3. Provide easy-to-adapt templates (for design, development, testing and documentation)

The best way to meet these goals is to build an official Integration Competence Centre organization and streamline its processes with automation so that it acts more like an integration factory.

Building The Integration Competence Centre

The Integration Competence Centre can take various structures depending on the organization, but a few key positions are vital for a successful ICC. We will focus on the Design Board, the ICC manager and the Decision Board.

The Design Board

The most important entity is the Design Board. The Design Board is the strategic element of the ICC, and its crucial task is to constantly scan the horizon and keep track of what kinds of demands and technologies lie ahead. Based on these visions, the Design Board must decide which demands and technologies are relevant for the organization. It must compose the reference architecture, which also includes various education paths, guidelines, best practices, selected technologies, templates and so on.

Because the design phase is so critical for future success, a diverse group of people must discuss the forthcoming paths and solutions. If the Design Board consists of professionals from the organization's different functions (business, architecture, security, testing, development, maintenance), this diversity guarantees that every strategic unit can give its best, and all parties will commit to a common goal.


The ICC manager

Another important role is the ICC manager, who implements the Design Board's visions. The manager is responsible for keeping the integration platforms up to date, efficient and reliable, and ahead of current integration needs. The ICC manager is also an important change agent: helping to implement the new ICC model and marketing its services and possibilities to every business line and project manager. Especially during the launch phase of the ICC, the manager must be active, help diminish boundaries and old habits, and promote the new service model. To meet these requirements, the ICC manager must have enough educated resources for designing, developing, building, releasing and testing.


The Decision Board

The third essential role is the Decision Board. The Decision Board is responsible for reviewing each integration project's design documents and checking that the Design Board's directions have been met. If the Decision Board finds new, unmet requirements in the project documentation, it is responsible for preparing a proposition for the Design Board that points out these gaps and possibly advises on a new or different approach. This is a very important part of the feedback loop.

Security and Feedback

In addition to these positions, everyone involved with the Integration Competence Centre is responsible for security and feedback. Security can't be forgotten: the best security practices must be captured and applied at every step, from guidelines to monitoring and maintenance. Regular scans must be run continuously to evaluate the state of security in the processes, the runtime environment, the organization and the implementations.

The other shared responsibility is the feedback loop. Continuous feedback loops are a key factor in developing an organization's integration processes to be more efficient and resilient. The Design Board needs a lot of feedback to analyze its own strategies, adapt and improve its vision. This is only achieved by receiving authentic and almost immediate feedback on how its guidelines work in real integration situations. A successful feedback model in integration development and delivery is the lean, or continuous, model. By being ready to constantly adjust the process and apply changes when needed, integration becomes much more successful and fluent.


Because the Integration Competence Centre involves a vast number of people, projects and business lines, it is very important that everyone trusts each other, takes pride in their own work, and is free to design their own processes and make individual decisions within the boundaries of the mutual guidelines set by the Design Board. In addition to the guidelines, work processes can also be streamlined through self-reflection, feedback loops and additional education. When the ICC is up and running efficiently, future estimates will become more accurate, unexpected surprises will stop happening and overall quality will increase. Who wouldn't want that?

Start today!

Because the ICC affects so many projects and helps in so many ways, probably the best funding solution is to fund it straight from the organization's overall budget instead of trying to allocate individual expenditures to every project. It can be a very hard task to distribute general costs like infrastructure, platforms and maintenance, and virtually impossible for intangible costs like education and the work of the Design Board. The costs of the ICC should still be tracked and kept efficient.


The more connected your applications, processes, data and people become, the more likely there is a need for change. The Integration Competence Centre could solve many of your problems. So prepare your governance, development, testing and deployment processes and adapt to the new requirements with agility; you are going to need them sooner or later.

This article was produced with the help and guidance of my colleagues at Descom. Thank you for your input!

Monday, March 10, 2014

Why can't we finally solve common everyday IT problems globally?

I read the article "Why Is Your Doctor Typing? Electronic Medical Records Run Amok" from Forbes over the weekend. It describes the same problem we have here in Finland: our healthcare IT systems are not efficient, don't support staff's work well and have interoperability problems. Renewing the systems or implementing new nationwide features is expensive. I can imagine that the same problem occurs elsewhere too, and I wonder whether similar "global problems" exist in other areas of everyday life as well.

It is a shame that in today's connected world we don't solve IT problems concerning basic needs (in nutrition, education, science, the environment and healthcare) together as one mankind. We could be so much more efficient and reduce unnecessary bureaucracy and reinvention, allowing us to use our resources for other important things and spend more time listening to and taking care of each other.

These everyday processes are almost the same globally. For example, no matter whether you visit the doctor in Finland, the US or Japan, the process, concepts, examinations, results, stored data, payment, insurance applications and so on are almost the same.

If one wants to build efficient, digital, paperless e-services for its citizens, one has to solve the same problems (identification, role-based authorization, confidentiality & privacy, audit trails, concepts, data structures, common interfaces, to mention a few). Shouldn't we act together and solve these only once? The resources saved, especially in developing countries, could be used more wisely. What could be the responsible institution / forum taking care of the hands-on work and politics? The UN?

While conceiving the global solution, what can we learn from the more local success stories in Denmark and Estonia?

Both had

  • strong governance and guidance from a public authority,
  • agreed common processes, standards-based interfaces and data structures,
  • a common urge to co-operate, to use available resources efficiently, and to succeed.

Wednesday, July 3, 2013

Efficient integration development?

I have been involved in many integration development projects during my career. Some of them have been successful and efficient, some haven't. What is the magic behind efficient integration development?

Preparation phase

Open conversation about the goals and the means is crucial. Every involved party must have a uniform understanding of the goals and of the functionalities that can be used to achieve them. When everyone has a clear picture of the challenge, it is much easier to engage and put in the best effort to achieve it. Motivate the project with positive rewards.

Patterns and reusability: whenever possible, try to identify known solution patterns by which the desired functionality can be achieved. You should not reinvent the wheel every time, unless you have room for innovation in your schedule or a known bottleneck in the pattern.

Realistic schedule and resource planning: if you have complex systems, processes or information objects, do not underestimate the time schedules. I have witnessed many cases where the actual processes / exchanged information were not fully known after the specification phase, so many testing-revising cycles were needed in the SIT / UAT environments, while the estimates and schedule stayed frozen based on the original specs. This led to a feeling of hurry, extended hours, weekend work and the like, which decreased the efficiency and quality of the outputs. One tip from the funding point of view: do not argue too much about time estimates, but argue on price if you need to save or meet some limits.

However, set multiple deadlines: it is not a secret that nothing motivates better than a deadline.

Specification phase

Spend time thinking through all the possible scenarios. Rethink. And think again. Produce test cases from and for the systems if possible. Simplify. Consider which party is responsible for the needed functionality and whether it could be handled by that party instead of the middleware. Innovate and produce material also for the exception cases.

Specify (or reuse) functionality that makes it easy to view and verify the content of the flowing messages before and after conversions. The easier it is to check these message contents, the more valuable it will be in SIT & UAT, where complex business processes and structures are involved.


Implementation phase

Step-by-step development and testing: implement one functionality / mapping rule at a time and module test that it works as expected with the realistic testing data received from the specification phase. Do not try to implement and test everything at once, because you will not be (or at least I'm not) attentive enough to check a huge number of things in one shot.
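A minimal sketch of this step-by-step approach, assuming a hypothetical mapping rule that renames one source field; each rule gets its own small module test with realistic specification-phase data before the next rule is written:

```python
# Hypothetical mapping rule: rename the source field "custNo"
# to the target field "CustomerNumber".
def map_customer_number(source: dict) -> dict:
    return {"CustomerNumber": source["custNo"]}

# Module test for this one rule, using realistic data from the
# specification phase. Only after it passes is the next rule implemented.
def test_map_customer_number():
    assert map_customer_number({"custNo": "10042"}) == {"CustomerNumber": "10042"}

test_map_customer_number()
```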

If possible, try to avoid single-person dependencies: let the development and testing parties spread knowledge and work inside their teams. This can pay you back one day when the only developer / tester gets the flu and is not available for a while.


System integration test phase

Arrange testing workshop(s) where all the parties are present and technical or logical issues can be debugged and fixed immediately. Run and rerun all the system acceptance test cases in these workshops with increased logging and debugging levels, to collect material and evidence that the test cases are working. Try to play with the cases and cause exceptions.

Fix dataflows one by one. Do not run multiple dataflow testing and debugging streams with the same resources in parallel. Your focus will be disturbed, and your resources will need time to refocus on different subjects and will not be as efficient as they could be. Also arrange enough rooms for the testing teams so that other subjects do not distract their focus.

Do not move to UAT too early

There is always extra work when debugging and fixing things in two environments in parallel; in particular, you get extra work from evaluating what is configured differently in the systems when they are not behaving the same. It is not unusual that some fixes or settings are forgotten to be ordered for, or copied to, the next testing environment. Another thing that might be an issue from the efficiency point of view is that in some cases deployments to the UAT environment can only be done in certain time windows and through formal processes.

Friday, June 28, 2013

Question matrix for IT projects

For an IT project to succeed, it is essential that every important question has been asked and answered before or during the planning. I quickly collected an evolving question matrix to help me build questions and check whether they have already been covered. Hopefully you can benefit from it too ;) Any feedback is appreciated.

Columns: What · How · When · Who · Where · Why
Rows:
  • Business process
  • Change management
  • Project management
  • Partner management
  • Vendor management / selection
  • Deployments / release management
  • Application management

For example: What benefits do we get? How do we get those benefits? Who will benefit from these new functionalities? When do we expect these benefits to become reality? Where will these new functionalities be put in place? Why should we market these new functionalities internally? . . .
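As a rough illustration, the matrix can be expanded mechanically into candidate question stubs by crossing every question word with every area; the lists below mirror the matrix, while the phrasing template is my own:

```python
# Expand the question matrix: every question word crossed with every area.
question_words = ["What", "How", "When", "Who", "Where", "Why"]
areas = [
    "business process",
    "change management",
    "project management",
    "partner management",
    "vendor management / selection",
    "deployments / release management",
    "application management",
]

def question_matrix(words, areas):
    """Return (word, area) pairs covering the whole matrix."""
    return [(w, a) for a in areas for w in words]

# Print a checklist stub for each cell of the matrix.
for word, area in question_matrix(question_words, areas):
    print(f"{word} ... ({area})?")
```

Each printed stub marks one cell of the matrix that the project team should turn into a concrete question, or consciously skip.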

Monday, June 24, 2013

A homemade application from the middleware / ESB layer's point of view

While I was lying in the hammock during the midsummer weekend, I let my mind wander free. Suddenly my free mind picked up a path that led me to think about past integration projects and what I can learn from them.

I have been involved for more than a decade in developing integrations from and to complex applications. I strongly agree with the idea of building integrations in vendor-independent, standards-based ways. The best approach so far, because of strong standardisation, has been to use Web Services (WS) as much as possible to consume services and to fetch data from, or upload data to, applications.

At the beginning we had no previous experience of designing WSs. We built the services we thought, at that time, to be suitable, but now we have learned that certain things could be done differently.

First of all, the GUI, business logic and WS parts of the application should be packaged into their own (J2EE) applications and persistence layers. This makes it easier to update the user experience and business logic parts while the WS part remains the same and can stay up and running during a service break for the others. Especially if you are developing a homemade application in agile, tightly scheduled intervals, it can be useful not to stop the WS part (connectivity services), and not to spend energy building complex retry logic into the middleware / ESB implementations to support product updates. And if the WS part is done as I propose below, the real business logic and data models can be updated too without touching the integrations or the information objects used to interact with service consumers / clients. Separated GUI, business logic and WS parts can also be scaled individually based on the realised workloads.

The second lesson learned is that the application-specific services and the other, perhaps more common, services the project produces should not be mixed in the same application and persistence layer. They should be kept strictly separated in their own. The reasoning is the same as above.

The level of abstraction and granularity of the WSs was deeply discussed in the past, but now we have learned that in some projects we chose to publish services that were too elementary. For example, in one project several services / calls were needed to upload data to the application. That was fine while there were only a few connected systems and users. When volumes increased, there was too much XML parsing, serialization and logic processing for the one J2EE application that also provided the web interface for the users at the same time. There were severe problems with application server resources (memory and CPU) and with the response times of the WS calls, which caused unnecessary errors (timeouts or internal server error responses) and management work at the middleware / ESB layer too.

Near real time instead of real time!

From the middleware / ESB layer's point of view, the interface should be fast and reliable even with great volumes. So with applications that apply complex computing (like route calculation) to the data they receive or produce, one should consider separating the real-time application data from the data the WS layer (connectivity services) inserts or updates.

My suggestion is to consider the following model (an idea borrowed from SAP) for complex applications.
  1. Received data is simply put into the persistence layer to wait for the complex processing and upload to the application. The middleware / ESB layer gets an acknowledgement back that the data was received by the WS part (connectivity service) and that the status of the processing can be tracked with id xxxx.
  2. Only the syntax of the received data should be validated against the schema while interacting with the middleware / ESB layer, not the content.
  3. Separate services / operations should be provided for tracking the status of the complex data import / update to the application.
  4. It could be useful to implement a GUI from which business users can manually edit and reprocess failed / incomplete data from the WS part (connectivity service) to the application.
  5. There could also be caches for the data produced or published with complex business logic. Cached, preprocessed data could then be published with simple and light logic at great throughput.
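Steps 1-3 of that model can be sketched as a tiny in-memory connectivity service: receive() does only a minimal syntax check, stages the data, and acknowledges with a tracking id; the complex processing runs later, and a separate status operation tracks it. All names and the dict-based "persistence" are illustrative only:

```python
import uuid

# In-memory stand-in for the persistence layer that buffers
# received data until the complex processing step runs.
_staging: dict = {}

def receive(payload: dict) -> dict:
    """Steps 1-2: validate syntax only, persist, acknowledge with a tracking id."""
    if "data" not in payload:                     # minimal syntax check, not content
        return {"status": "REJECTED", "reason": "missing 'data' element"}
    tracking_id = str(uuid.uuid4())
    _staging[tracking_id] = {"state": "RECEIVED", "payload": payload}
    return {"status": "ACCEPTED", "trackingId": tracking_id}

def get_status(tracking_id: str) -> str:
    """Step 3: a separate operation for tracking the import's progress."""
    entry = _staging.get(tracking_id)
    return entry["state"] if entry else "UNKNOWN"

def process(tracking_id: str) -> None:
    """The complex processing / upload runs later, decoupled from receive()."""
    if tracking_id in _staging:
        _staging[tracking_id]["state"] = "PROCESSED"
```

The point of the sketch is the decoupling: the middleware / ESB layer only ever waits for the cheap receive() acknowledgement, never for the expensive processing.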

In the picture above I try to express an ideal architecture that helps to make a maintainable, scalable and reliable application from the middleware / ESB layer's point of view.

Wednesday, March 13, 2013

Lessons learned about WS interface design

I read the document "IEC 61968-100 / Application integration at electric utilities – System interfaces for distribution management – Part 100: Implementation Profiles". It gives great advice on how to implement WS interfaces, on your application infrastructure, that comply with the standard. Some notes and thoughts below.

Start from Information Objects   

The first step in designing new WS interfaces is to consider the information objects that the new functionalities pertain to. If possible, try to find a standard that defines the objects and their naming conventions (noun) for you. There is a good chance that the same standard defines actors, roles and interaction patterns (verb) in that specific domain. If so, follow them as closely as possible.

Draw interaction and sequence diagrams

Draw interaction and sequence diagrams based on the use cases the interfaces are meant to be used for.

Define Service Semantics

Based on the diagrams, services are identified for the entire message flow. The naming of the services is based on their service pattern (their role in the entire flow). Each service will have the same operations as indicated in the diagrams as messages, but a different service name.

Conventions for version control

Use the targetNamespace and version attributes to define the version of the schema or wsdl in use.


<xs:schema targetNamespace="http://ORGANISATION/DOMAIN/STANDARD/RELEASE DATE OR YEAR/VERB + NOUN" version="1.0">

Add detailed version information with annotations

<xs:schema targetNamespace="http://ORGANISATION/DOMAIN/STANDARD/RELEASE DATE OR YEAR/VERB + NOUN" version="1.1">
  <xs:annotation>
    <xs:documentation>
      Major version 1.0 created 2013/03/12
      Minor version 1.1 created 2013/03/13 - one optional element added for the object xxx
    </xs:documentation>
  </xs:annotation>

Don't change the targetNamespace for minor, backward-compatible changes, to avoid unnecessary WS client rebuilds. Data types are typically defined in external schema files and imported into the wsdl, instead of being embedded, for better version control.
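Under these conventions a client can decide mechanically whether a schema change forces a rebuild: the targetNamespace (or major version) changes only on breaking revisions, while the version attribute carries the minor number. A sketch of that check; the parsing logic and the example namespace are my own, not part of the standard:

```python
def needs_client_rebuild(old_ns: str, old_version: str,
                         new_ns: str, new_version: str) -> bool:
    """A changed targetNamespace, or a changed major version number,
    signals a backward-incompatible change that requires a rebuild."""
    if old_ns != new_ns:
        return True
    old_major = old_version.split(".")[0]
    new_major = new_version.split(".")[0]
    return old_major != new_major

ns = "http://example.org/domain/standard/2013/CreateEndDeviceEvents"
# Minor, backward-compatible change: same namespace, 1.0 -> 1.1.
print(needs_client_rebuild(ns, "1.0", ns, "1.1"))   # False
```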

Conventions for the wsdl 

Service and operation naming patterns can be used to implement strictly typed web services.

Wrapped Document style is normally used. 

  • document style means an XML document is included in the soap message; in the normal case, it is placed directly in the <soap:body>
  • if the wsdl:operation name is the same as the input element name, the wsdl is considered a wrapped document style wsdl
  • the input message has a single part
  • the part is an element
  • the element has the same name as the operation
  • the element's complex type has no attributes
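A minimal fragment illustrating those rules; the element and type names are examples only, not taken from the standard's schemas:

```xml
<!-- Wrapper element: same name as the operation, single sequence,
     no attributes on its complex type. -->
<xs:element name="CreateEndDeviceEvents">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="Payload" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>

<!-- In the wsdl, the input message has a single element part that
     refers to the wrapper, and the operation carries the same name. -->
<wsdl:message name="CreateEndDeviceEventsRequest">
  <wsdl:part name="parameters" element="tns:CreateEndDeviceEvents"/>
</wsdl:message>
```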

The operation name follows the Verb + Noun naming convention. The plural form of the information object name is used as the noun to avoid collisions within the xsd. For example, CreateEndDeviceEvents.

Service name: <Service pattern name> + <Noun>. For example, ReceiveEndDeviceEvents.
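The two conventions can be captured in a small helper; this is only a sketch, the pluralization is deliberately naive, and the example names are the ones used in the text:

```python
def operation_name(verb: str, noun: str) -> str:
    """Verb + plural Noun, e.g. Create + EndDeviceEvent -> CreateEndDeviceEvents."""
    plural = noun if noun.endswith("s") else noun + "s"   # naive pluralization
    return verb + plural

def service_name(pattern: str, noun: str) -> str:
    """<Service pattern name> + plural Noun, e.g. ReceiveEndDeviceEvents."""
    return operation_name(pattern, noun)

print(operation_name("Create", "EndDeviceEvent"))  # CreateEndDeviceEvents
print(service_name("Receive", "EndDeviceEvent"))   # ReceiveEndDeviceEvents
```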

Service patterns can be derived from the standard used, or from the use cases the new interfaces are meant for.

The role of the actor (service provider) in the message flow defines which service name pattern to use.

Some common service naming conventions

  • Send - to provide (send) information (an information object) for public (enterprise) consumption. To be invoked by the system of record for the business object, and only when the state of the business object has changed. Normally used in conjunction with the verbs created, changed, closed, canceled and deleted. If an ESB integration pattern that loosely couples the service provider and the consumer is used, this naming pattern is applied to the interface the ESB provides for the consumers.
  • Receive - to consume (receive) information (a business object) from an external source. Used in conjunction with the verbs created, changed, closed, canceled and deleted. With a loosely coupling ESB integration pattern, this naming pattern is applied to the interface the ultimate service provider provides.
  • Request - to request another party to perform a specific service. Used in conjunction with the verbs get, create, change, close, cancel and delete. With a loosely coupling ESB integration pattern, this naming pattern is applied to the interface the ESB provides for the consumers.
  • Execute - to run a service provided to the public, which may include a state change request or a query request. Used in conjunction with the verbs create, change, close, cancel and delete. With a loosely coupling ESB integration pattern, this naming pattern is applied to the interface the ultimate service provider provides.
  • Reply - to reply with the result of the execution of a service (by the Execute service). Used in conjunction with the verbs created, changed, closed, canceled and deleted.
  • Show - to provide (show) information (a business object) for public (enterprise) consumption when the state of the business object is not changed, by the system of record or another system that has a copy of the same business object.
  • Get - to request specific data of a business object to be provided.

Create template xsd:s and wsdl:s 

Create template xsds and wsdls (or use the ones that come with the standard) to support the interaction patterns commonly used in your organisation. These templates will help you follow the selected best practices and reduce the time spent producing new interface definitions.

Sunday, April 15, 2012

Purpose and description

Hi All,

In this blog I'll try to note down my thoughts and observations from the field of application integration. I hope this tool / these blog entries help me organize my experience and knowledge into meaningful ensembles from which I can draw wisdom to help me solve problems in my everyday work.

And of course: perhaps my blog entries can spark thoughts and interesting discussion from which you too can learn something.