There's no doubt about it: ITSM is too often modelled on the principles of industrialization, as if all companies were factories. This results in an over-emphasis not only on processes but also on standardization, even though this approach is not necessarily relevant to IT.
Time is the main factor that differentiates IT operations from their manufacturing counterparts. For instance, it is not uncommon in automotive production to spend several years designing a standardized platform that won't bear fruit until completion. This is obviously unimaginable in the world of IT: which product manager could ask for so much patience?
The reason automotive companies can afford long production cycles is simple: their average monetization period for a given platform is more than double the development phase. Again, this is unthinkable in IT, where minor tweaks could not prevent such an old platform from becoming severely outdated.
Unfortunately, many IT players ignore these irreconcilable differences and try to bring standardization where it doesn't belong, all in a vain effort to avoid accepting complexity as a fact of any IT environment. In most cases, IT's resistance to rigid standards means that such efforts are set up for failure. After all, there is a reason why our sector hasn't been able to agree on a single CPU or OS over the last few decades.
The worst scenario, however, is the one in which companies succeed in creating a standardized environment in which to operate, and subsequently build processes that the smallest change could endanger.
As senior automation expert Thorsten Hilger previously explained, this is a lesson that arago learned early on when experimenting with scripts:
"Scripts are only suitable for automation to a very limited degree because they only ever cover precisely the type of situation the script envisages. If just a small change was made in the system with the last release, the event occurrence conditions will no longer be the same.
If you really wanted to maintain all scripts and occurrence conditions continuously, the effort involved would be greater than operating the configuration by hand."
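Hilger's point can be illustrated with a minimal sketch (the service names, event wording and action are hypothetical, not arago's actual code): a script-style rule that matches one exact event signature works only for the situation its author envisaged, and a release that changes the wording makes it silently do nothing.

```python
# A script-style automation rule: act only on one hard-coded event signature.
def handle_event(message: str) -> str:
    if message == "service httpd down":  # condition fixed at the time of writing
        return "restart httpd"
    return "no action"                   # every other situation falls through

# Works for exactly the case the script envisages...
print(handle_event("service httpd down"))    # -> restart httpd

# ...but a release that renames the unit breaks it silently.
print(handle_event("service apache2 down"))  # -> no action
```

Multiply this by hundreds of scripts and every release, and maintaining the occurrence conditions really can cost more than operating by hand.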
Even though there are much better ways of doing “standard operating procedures”-based automation today, at their core they are still scripts. These RunBooks or recipes are fantastic when you have to do the exact same job many, many times (like rolling out 20,000 PCs once every two years), but they lack flexibility, agility and the ability to learn from experience. These kinds of tools are available from any of the big ITSM tool providers, and there are great versions in the open source community, with Chris’ favorites being Puppet and Chef.
Understanding that flexibility was the key to a sustainable reduction in time spent on operational tasks, arago came to the conclusion that scripts were only automation 1.0, and would only ever cover small parts of everyday work. The logical consequence was a new kind of automation that would be based on reusable knowledge items rather than processes.
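To make the contrast with scripts concrete, here is one possible sketch of the knowledge-item idea (the items, predicates and planner are illustrative assumptions, not a description of arago's product): instead of one monolithic script per situation, each item is a small reusable step that declares when it applies, and applicable items are combined at runtime, so a new situation can be handled by recombining existing knowledge.

```python
# Knowledge items: small reusable steps, each with an applicability condition.
knowledge_items = [
    {"applies": lambda e: "down" in e,                "action": "diagnose outage"},
    {"applies": lambda e: "disk" in e,                "action": "check disk usage"},
    {"applies": lambda e: "down" in e and "db" in e,  "action": "fail over database"},
]

def plan(event: str) -> list[str]:
    """Combine every applicable knowledge item into a plan at runtime."""
    return [item["action"] for item in knowledge_items if item["applies"](event)]

print(plan("db service down"))   # -> ['diagnose outage', 'fail over database']
print(plan("disk almost full"))  # -> ['check disk usage']
```

The design point is that no single author had to anticipate the "database down" case as a whole; it emerges from reusable pieces, which is what gives knowledge-based automation its flexibility.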
On a higher level, this also exemplifies how generalized standardization generates inertia and disincentivizes adaptation. Once an IT department relies on certain standards, its teams become averse to change and may be inclined to put a vast amount of effort into making sure the system remains unchanged for as long as possible.
In other words, time that should be spent on innovation is wasted on keeping obsolete environments alive. Not only is this bad for productivity in the short term, but it also causes long-lasting damage to IT, which depends on constant change and complexity.
Don't get us wrong: we're not suggesting that standardization can't apply to IT at all. Nevertheless, it should be limited to the small portion of the IT stack where it is actually applicable, such as infrastructure and the lower part of the value chain. More importantly, there is no golden rule on what to standardize or not; if you want to attract and retain top IT talent, you need to standardize along knowledge.
Image credit: Valerie Everett / Creative Commons