The history of delivering IT services has certainly been an evolutionary process, and that is before we even consider the huge evolution in the technology available to deliver such services. The evolution of IT delivery and IT operations is, more or less, an evolution of tools. It began with host operating systems, where much of the software that shipped with a computer existed only to manage the machine itself. Skipping many steps, these tools progressed through the various stages of network and system management to business service management and, eventually, business transaction management tools. The latter’s claim to fame is actually achieving what business service management set out to do: making IT manageable from a business point of view.
Speaking abstractly, all of these are automation tools. They automate steps that an IT operator, administrator or delivery manager previously had to perform manually. But they are still just tools: they make life easier for the person doing the job, yet would you call an industrial hammer an automation tool? I therefore think it is time to look into the fish tank of (IT) tools and approaches available today and show how evolution points towards engines (not so much tools) that actually decide what to do and then act autonomously, asking for permission, reassurance or assistance only if the process requires it or no solution is available to them. Such an engine could be called an automation auto pilot, and it sits on top of all the tools available to IT experts today.
We have been developing and using such an engine for more than ten years now and have achieved very good results in quality improvement, in the availability of documentation for compliance, and in cost cutting. But why do I so strongly believe that this is not an exotic idea, but the logical next step?
If we look at the two dimensions of an IT management tool that can take actions automatically, or that can facilitate taking complex actions on a complex IT and application landscape, we end up with a trigger axis and an approach axis. The trigger axis describes under what conditions an action or tool invocation is triggered. The approach axis describes what kind of action will be taken and how flexibly these actions can take the trigger conditions into account.
At the left of the trigger axis (x) we place “scheduled”, in the middle “event-triggered” and at the right “automated”. A tool positioned at the far left of the trigger axis takes action at a predefined time, tools in the middle take action when certain events occur, and tools at the far right take action as it becomes necessary. On the approach axis (y) we place “standardized” at the bottom, “rationalized” in the middle and “dynamic” at the top. Tools that perform predefined actions without reacting to any information gathered while executing (e.g. cron scripts) sit at the bottom; tools that follow a predefined process but build branches into it that take current conditions into account sit in the middle; and tools that combine the best course of action for the given situation out of a pool of possible actions sit at the top.
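To make the three positions on the approach axis concrete, here is a minimal sketch in Python. All function and action names are invented for illustration; the point is only the structural difference between a fixed action list, a branching process, and an engine that composes a plan out of a pool of possible actions.

```python
# Bottom of the approach axis: standardized -- a cron-style job that always
# performs the same fixed actions, ignoring any runtime information.
def scheduled_standardized_job():
    return ["rotate_logs", "purge_tmp"]  # fixed action list, no branching

# Middle: rationalized -- a predefined process with branches that take the
# current condition (here, the triggering event) into account.
def event_triggered_rationalized(event):
    if event == "disk_full":
        return ["purge_tmp", "alert_operator"]
    if event == "service_down":
        return ["restart_service"]
    return ["log_event"]

# Top: dynamic -- an engine that assembles a solution out of a pool of
# possible actions, each guarded by a predicate over the observed situation.
ACTION_POOL = {
    "free_space":      lambda s: s["disk_usage"] > 0.9,
    "restart_service": lambda s: not s["service_up"],
    "scale_out":       lambda s: s["load"] > 0.8 and s["service_up"],
}

def dynamic_engine(situation):
    # Select every applicable action; escalate to a human if none applies,
    # mirroring the "ask for assistance if no solution is available" behavior.
    plan = [name for name, applies in ACTION_POOL.items() if applies(situation)]
    return plan or ["escalate_to_human"]
```

For example, `dynamic_engine({"disk_usage": 0.95, "service_up": False, "load": 0.5})` yields `["free_space", "restart_service"]`: the plan is composed from the pool rather than read off a predefined script.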
Placing the tools and concepts currently on the market onto these axes shows a clear evolutionary development from scheduled, standardized batch processing to an engine that combines possible actions into a solution as the situation requires. The auto pilot function I mentioned earlier is such a tool and would be placed up and to the right on our chart of automation evolution.
In the chart presented below, “hot” topics such as data center automation, workload automation and even run book automation are much more “old school” in their approaches and are placed accordingly. Our auto pilot engine clearly takes up the “new approach” position, with one very notable difference: we have been running a successful business on this model for a long time. So this is not just a fancy idea but a validated approach, and current trends in management software point in exactly this direction.
Maybe this “sorting of the tools” article has helped a little to place the other thoughts on automation published here. It will certainly be necessary when we look at why dynamic automation becomes more and more unavoidable as complexity and the rate of change increase. Following, for example, the current discussions on cloud computing from the Atlanta cloud camp organized by John Willis, or the dynamically evolving enterprise clouds described by Mark Masterson, an automation auto pilot is the only way to keep track of an IT landscape that is fully distributed and dynamic. Solving the problem of distributed computing and dynamic resources from an OS point of view, by building good cloud managers or VMs, does not solve the problem of keeping business applications alive and available with proper execution quality and correct business results. If any of you have ever successfully configured, say, the Tivoli Correlation Engine in an Enterprise Console, you know how much work that is. Putting your environment in a cloud would essentially mean reviewing all correlation rules every time the cloud manager changes your environment. Not possible, you say – well, that was only the correlation engine; no other system management, IT service management or business service management tool or visualization was even touched. So you see, something has to be done to keep the actual delivery of business services up and running when moving to a fully dynamic environment – and this something is an autonomous automation engine, or an automation auto pilot.
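The fragility of static correlation rules can be sketched in a few lines. This is a hypothetical toy model, not the Tivoli rule syntax; all host names and the topology format are invented. It shows why rules keyed to concrete hosts silently stop matching once a cloud manager migrates a service, while an engine that resolves dependencies at evaluation time keeps working without any rule review.

```python
# Static style: the correlation rule hard-codes topology -- "if db-host-01
# is down, suppress the alarms of the app servers that depend on it."
STATIC_RULES = {
    "db-host-01": ["app-host-01", "app-host-02"],  # hard-coded dependents
}

def suppressed_alarms_static(down_host):
    return STATIC_RULES.get(down_host, [])

# Dynamic style: dependencies are resolved at evaluation time against the
# topology the cloud manager currently reports (host -> list of services
# it depends on), so migrations need no rule changes.
def suppressed_alarms_dynamic(down_host, topology):
    return [host for host, deps in topology.items() if down_host in deps]

# After the cloud manager moves the database to a new VM ("db-host-17"),
# the static rule matches nothing, while the dynamic lookup still works.
current_topology = {
    "app-host-01": ["db-host-17"],
    "app-host-02": ["db-host-17"],
}
```

Here `suppressed_alarms_static("db-host-17")` returns an empty list (the rule is silently dead), while `suppressed_alarms_dynamic("db-host-17", current_topology)` still finds both dependent app servers.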