Three Reasons Why Scripts are Automation 1.0

Many solutions on the IT automation market are script-bound – and why not? Scripts are the way we have been doing automation in IT operations for the past 30-odd years and they have served us well. But if we just continue doing things the way we have always done them, why is there such hype about automation now? Simply because the cost of operating IT is too high – despite the scripting we have already done – and because the talent parked in IT operations is missing from the innovation part of IT – which is what IT should really be about.

Naturally, many reactions to the need for better automation are to refine things as they are and come up with better ways to write, manage and maintain scripts. Alternatively, a disruptive leap in technology can achieve a completely new way of doing automation (the latter is what we are doing with the arago Autopilot for IT operations).

In either case you have to understand the shortcomings of scripts in order to make any improvement, and this post is a brief and blunt summary of those shortcomings. Don’t get me wrong: I am not saying all scripting is crap and all the people who have written or are writing scripts are idiots – far from it. As I said, scripts have served us well and will continue to play an important role as the foundation of today’s IT operations, but we have to understand their limitations in order to push forward.

In my book there are three limitations to scripts, and I will outline them here:


  1. Limited applicability
    A script – like any other imperative computer program – has a clear precondition under which it will produce the desired result. Basically this means a script is like an assembly line: it will produce the correct result – reliably and in a mostly scalable way – if applied to the context it was written for. If the context changes even slightly, the results are wrong or the script cannot be executed. This means that for every slight change in context the script has to be reviewed as a whole and either cloned or changed so that it can handle the slightly changed context as well. This normally means adding “if” or “case” statements to the script and making it more complex (see the sketch after this list). If you do this for long enough your script will evolve into a tool, and if you do not stop there you might even end up with an organically grown product. The problem with such tools is that they become practically impossible to maintain, because scripts are not managed like programming projects (which they were never intended to be) but can easily evolve into huge programs.
    This limited applicability creates either a great many very similar scripts or some very complex ones, and if you look into any standard IT operations environment you will find such things – and no one really wants to touch them unless absolutely necessary, because no one completely understands them or the interdependencies between the many scripts in the environment.

  2. Limited reusability
    IT operations is an ad-hoc business. Even if we do not really like to admit it, the job of IT ops is to handle events as they occur, meaning there is no long planning phase to bring reuse between all the components involved in operations up to an optimal level. The job of IT ops is to get things done now, and this is also how most scripts are created. Someone does a job for the tenth time and needs it done faster, or is annoyed that this thing gets in the way of completing other important assignments, and in order to prevent that from happening again he writes a script. This is good and it produces instant results (which is what we are looking for), but it also means that things are scripted over and over again. There is simply no time to properly manage reuse of things that are already available, or even to make the knowledge of what has already been done by others available to everyone. So not only is a great deal of time spent writing parts of scripts that already exist (which is bad in hindsight but OK in terms of results achieved), it also means that when change occurs in an environment there are potentially MANY, MANY places where this change can affect current operations procedures (i.e. scripts).

  3. Limited flexibility
    Given that we have been writing scripts for more than 30 years, I believe it is safe to assume that all the low-hanging fruit has been picked. In scripting terms this means that all the easy scripts are already there. Easy normally means something you can create all by yourself. Yes, I know you can create wonderfully complex scripts and programs alone, but as long as you are the one who has to deal with them they are easy, because you know how to deal with them. Scripts become complicated when you need more than one expertise – i.e. more than one person – to write and thus to maintain them. At first it is hard to bring these two or more people together to write the new script, and then it becomes even harder to bring them together again to change it. Because of this strong dependency on several skills, modern scripts are only changed as part of a dedicated change or refactoring effort or in case of utter emergency. Since IT operations has enough work as it is, the dedicated change effort is something everyone dreams about and no one ever gets around to doing. And this in turn means that changing the context – i.e. the environment – in which these scripts run cannot be done without creating a storm of additional tasks to change all the things that have grown over the years and help maintain that environment.
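To make the first limitation a bit more concrete, here is a deliberately simplified, hypothetical sketch (not taken from any real environment; all commands and names are invented) of how a once-simple restart script accumulates context branches over the years:

    #!/usr/bin/env python3
    # Hypothetical, deliberately simplified example of a "simple" restart
    # script after a few years of context changes. Every new environment got
    # its own branch instead of a redesign.
    import subprocess
    import sys

    def restart_service(host, os_family, cluster_member=False):
        # Original script: one command for the one context it was written for.
        if os_family == "linux" and not cluster_member:
            cmd = ["ssh", host, "systemctl", "restart", "app"]
        # Added later: older hosts without systemd.
        elif os_family == "linux-sysv":
            cmd = ["ssh", host, "/etc/init.d/app", "restart"]
        # Added later: cluster nodes must go through the cluster manager.
        elif os_family == "linux" and cluster_member:
            cmd = ["ssh", host, "crm", "resource", "restart", "app"]
        # Added later: the one remaining AIX box.
        elif os_family == "aix":
            cmd = ["ssh", host, "stopsrc -s app && startsrc -s app"]
        else:
            print(f"unknown context for {host}: {os_family}", file=sys.stderr)
            return 1
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        # usage: restart.py <host> <os_family> [cluster]
        sys.exit(restart_service(sys.argv[1], sys.argv[2], "cluster" in sys.argv[3:]))

Every branch is harmless on its own; the maintenance problem is that a new colleague has to understand all of them – and why they are there – before daring to change any of them.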

To me, point 3 is the worst, because it is the simple reason why the “never change a running system” rule is not only followed but the status quo of a system is defended as if it were the last bastion of humanity or admin appreciation. And if you cannot change your environment you cannot accommodate business requirements – or accommodating them is always a big fight for everyone involved – and you absolutely cannot entertain innovation as a constant companion in IT, which according to all the IT literature I have read is what IT should be all about and what makes IT people love IT.

All three points together make maintaining an IT operations environment an expensive and time-consuming task, and changing one an almost impossible mission.

Many people have seen these limitations or constraints of scripts, and there are many products out there to help us overcome one or more of the restrictions mentioned above. Many of these approaches have fancy names like run-book automation or data-centre automation, but they are simply better ways of managing scripts, managing reusability in scripts or managing tasks to be scripted. I believe that this is not enough, because IT changes too fast for a script that needs lifecycle management and everything attached to it to be an effective way of handling things. The result of elaborate operational lifecycle management is normally a very standardized environment that is slow to react to new requirements. Such an environment is great for commodity products and services like desktop or server provisioning, but it is not good for application maintenance, user feedback management and the like.

This is why a different approach is a good idea. Our approach is called autopilot, and all the autopilot does is keep a big pool of knowledge (like you do) and write a script on the fly every time a task comes up (like you do when you handle something manually). The effect of this is simple: you do the interesting new stuff and the autopilot does the boring work, even in a changing and ever more complex environment – without the need to standardize everything.
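To illustrate the difference in principle, here is a toy sketch of the general idea of picking small knowledge items against the current context at runtime instead of pre-writing one end-to-end script per context. To be clear: this is purely my own illustrative example, not the arago Autopilot implementation.

    # Toy illustration only - NOT the arago Autopilot implementation.
    # Small "knowledge items" carry their own applicability check and are
    # selected at runtime against the situation at hand, instead of one
    # pre-written end-to-end script per context.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class KnowledgeItem:
        name: str
        applies: Callable[[Dict], bool]   # precondition on the current context
        execute: Callable[[Dict], None]   # the actual action

    KNOWLEDGE_POOL: List[KnowledgeItem] = [
        KnowledgeItem("free disk space",
                      lambda ctx: ctx["disk_free_pct"] < 10,
                      lambda ctx: print("rotating logs on", ctx["host"])),
        KnowledgeItem("restart hung service",
                      lambda ctx: ctx["service_state"] == "hung",
                      lambda ctx: print("restarting service on", ctx["host"])),
    ]

    def handle(ctx: Dict) -> None:
        # "Write the script on the fly": run every item whose precondition
        # matches the current situation.
        for item in (i for i in KNOWLEDGE_POOL if i.applies(ctx)):
            item.execute(ctx)

    handle({"host": "web01", "disk_free_pct": 4, "service_state": "hung"})

The point of the illustration is only that applicability is decided per knowledge item at execution time, so a changed context does not invalidate one big monolithic script.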


Passion Drives Business – Two Exemplary HackFwd Company Profiles

Now let me talk about the HackBoxes – the term for HackFwd companies.

Delta Strike

Let me start out with a company I put into the dead pool only 9 months ago: DeltaStrike, a company producing a universe as a platform for many games and writing their own game in this universe. Nine months ago I saw the first prototype and asked the 4 founders – in no uncertain terms – why they thought anyone should play their homemade stuff when other gaming companies put 100 people into the same kind of game, and you could see the difference. Since then they have completely turned around. I have rarely met a team that can handle criticism so well and actually take the content on board, basing their decisions on feedback and their own vision of what they want to do. The team has grown, mainly with passionate partners and other contributors, and their game is one of the showcase applications for the new Adobe 3D environment. I am not a hardcore gamer, but I think their new idea has potential, and the way they dealt with really HARSH feedback gives me every confidence in this team.

Delta Strike Team

DeltaStrike is also an impressive example of how being on the edge of technical development can push a business ahead of the competition, and of how the combination of creativity/art and technical skills allows for a flexibility that teams with only one of these talents cannot easily show and that enterprise IT could never achieve.



Then I want to talk about Fantasy Shopper (quoted on Twitter as the next Facebook by someone from the British government). Everyone at HackFwd loves the idea of shopping in shops you know, with stock they actually have, without having to spend the money, but still getting all the feedback and chatter that comes with great shopping.

Originally this game – actually it should be called a platform – was intended for teenage girls, but the first tests showed that all of us seem to be a potential market for this kind of application. But I am getting ahead of myself. Fantasy Shopper is a virtual shopping environment that models its virtual shopping arcades after real locations – i.e. if you go fantasy shopping in Exeter, UK (where the team is located), you will find shops that actually exist in the real-world Exeter, with the stock they actually carry in the real-world shop replicated in the fantasy universe. You can shop with fantasy money (the virtual currency), stock your wardrobe and combine your acquisitions into outfits. You can then share your newest trends and fashion ideas with your friends and get into a lot of conversations, feedback loops and trend-setting experiences while doing so. To make the shopping experience more goal-oriented, Fantasy Shopper has created contests where you have to create, for example, an outfit for a special event within a price limit. Obviously this sounds like great fun for every shopaholic, for everyone interested in fashion and for everyone who wants feedback from their friends and peer group before actually spending the money. There are so many possible business models for Fantasy Shopper that an amazing case can be built – always on the assumption that they can achieve a sustainable user base. The potential in popularity and the business interest from stores, fashion magazines and the ad industry is obvious, and a first beta shows that the user interaction is even better than expected. Now the only question is when this brave new world will be online and live for the public. This is what everyone at HackFwd has been urging the team to do: PUT IT ONLINE. And I think we have succeeded in convincing the founding team that there is no point in making something 200% perfect before giving it to the market. I hope we will soon all be fantasy shopping.

To me Fantasy Shopper is also a great entrepreneurial story. The CEO actually posted an ad in the newspaper to find his CTO, and together they applied to (and were obviously accepted by) HackFwd. But that is not all. The entrepreneurs behind Fantasy Shopper also live on the bare minimum in order to use the budget available to them exclusively for developing the company. This is the spirit we are looking for and the spirit big successes are made of. Connected to the passion are the ego and stubbornness to create a perfect solution, and we all had a hard time convincing the team to get it out into the open, but as I said, Fantasy Shopper will be available soon.


Celebrating a Comeback

Well, some of you have noticed that I was gone for a while. And this time I did not actually take an eccentric trip on a submarine or ride through the desert. This time Lyme disease hit me hard. It started out as an ear infection while I was at PULSE 2011 (which is why you do not find any articles on PULSE 2011 on the blog yet; the conference was great and I will still do a write-up). After I returned from Las Vegas I took some antibiotics and felt better. But a week later, all of a sudden, my hands, feet and other body parts started to hurt and feel inflamed, and I had a hard time moving.

So I finally went to a local doctor and was diagnosed with gout, arthritis and rheumatoid arthritis (OK, I do feel very old sometimes, but I had never felt that old before). All the pills I got only ever helped for a day or two and then things started getting worse again. At the worst point I was actually in bed, completely unable to move and drugged up to the hilt.

Finally my normal, everyday doctor remembered that I have horses and that having horses out in the forest also means you can easily be bitten by ticks. That is when I was tested for Lyme disease, and the tests were positive. I was immediately treated with the proper antibiotics, and as the infection had obviously happened a while ago and had spread widely through my body (I can tell you, it is amazing to find out which parts of your body can hurt when you try to move), I also got cortisone therapy. By then it was a little more than 6 weeks since I had started to feel sick, and it took another 6 weeks until I was modestly better.

So three months after Lyme disease hit me out of nowhere I was back up and started to catch up on my email, presentations and all kinds of other stuff. It also took a while until my body was detoxed enough to be up for any sport. As you can imagine, with the cortisone I added MANY pounds, and a week ago I started running them off again.

So life is not only back to normal, but I have also caught up on most of my reading and this means I can finally get back to writing – which I am doing here and now.

Expect some very interesting posts on automation, clouds and the net in general to pop up: I have done quite a bit of thinking in bed, and all our guys were working away while I was gone, so there is a lot to share with you. Welcome back and stay tuned.


CloudOps Summit – Run Your Cloud

After organizing a very successful CloudCamp in Frankfurt, Germany with about 150 attendees in 2009, we have been asked again and again whether we want to start a successor to this great event, providing a platform for discussion and exchange about Cloud Computing. While the 2009 event dealt with questions like

  • What is Cloud Computing?
  • Is it secure?
  • Will it be suitable for Enterprises?
  • How far will the hype go?
  • What is happening outside of Europe?

the discussion moved on during 2010 and the hype grew and grew. Today Cloud Computing feels more like an avalanche, because unlike other technologies, the business case is widely accepted. The questions of enterprise customers today are shifting more towards how and when Cloud Computing will arrive in their own environment.

Customers today want to look deeper into vendor offerings and find out

  • How they can securely operate cloud-oriented solutions
  • How to manage the additional complexity
  • What issues arise when migrating existing systems
  • How they can benefit from best practices and open standards
  • What experience others have already had with Cloud-based solutions

To address these questions, the CloudOps Summit on 17 March 2011 will provide a platform to discuss various aspects of Cloud Computing, with the focus on operations – the area where the ‘flesh is put on the bones’.

The event will be structured along three tracks covering Management, Operations and Architecture. In addition we will have a separate track that offers Cloud Computing startups the opportunity to present themselves, their products or their experiences with cloud-based offerings.

Please visit the CloudOps event page at http://www.cloudops.de for more details.


Geeks Are Cool

I have recently spent a great weekend with the HackFwd crowd at the 3rd build event in Mallorca. This was one of the best technical events I have been to in a very long time – if not the best of all. Compared to the big conferences – and you know I especially love IBM PULSE – the HackFwd events have a totally different goal and of course a totally different setting. I will not say that this kind of event is better than, say, PULSE; they are simply not comparable, which makes HackFwd not only a one-of-a-kind movement but also makes the build event a category of its own.

So what made it so special? Well, the setting was special, because it was an actual retreat and everyone attended everything. But that was not it. The special thing about this event was the totally open exchange of concepts and ideas, the openness of everyone to give and receive (even tough) feedback and the “one step ahead” mentality of everyone contributing.

Not since university have I encountered such a high level of technical discussion, and the amazing thing was that all the techies at the event were not the typical pizza-eating cavemen but were very interested in all things business. It is my personal belief that this movement will bring forth some of the most interesting technical ideas, possibly the next game-changing company and most definitely engineers everybody will want to hire – and cannot hire, because they become entrepreneurs.


There is just one thing I saw that felt a little strange to me: some of the great guys there, whom HackFwd calls geeks, try very hard to be tough businessmen and actually tried to push their great technical abilities into the background. This feeling was summarized in a comment made at one of the feedback talks, where a participant said “maybe we are overemphasizing the geek term, maybe we should appear a little more normal?”. No, please, no! Could you imagine a violin soloist trying NOT to be a musician and showing the world that he understands the music business better than anyone else? If that is the case he will become a music manager, but if he is the best violinist, he will find a partner who will do the management. So my message is: geeks are cool and we need many, many more of them. If you have the ability, the passion and the will to actually deliver rather than just talk about doing great things, you are a geek and that is a great thing! No need to hide. In the US no one would think of hiding this kind of ability or playing it down by pretending not to belong to the outlier group of geeks; they embrace it. In Europe we are a little shy about it, and there is no need to be.

So if you think you are a geek, watch the HackFwd video, and if you think you have something to show to the world, maybe you want to get in touch with us. Either way, if you have the technical ability to think up and create tomorrow’s technical concepts and applications, please don’t try to be something else – be passionate about it and embrace your potential.


The Devil Is in the Details

An idiom that anyone looking to manage any kind of highly reliable and well-performing IT architecture will appreciate very much. Major incidents after changes, and extensive delays in change or project work, in particular have a tendency to originate from some minute detail someone overlooked while “not changing anything”.
This can be especially unnerving when you have no clue about these details and have to find them out the hard way – possibly causing some collateral damage along the way. If you want to read my opinion on the level of detail required, and on the level of detail that gets you into a comfort zone without burying you under a mountain of useless data, read my guest blog post “Where Exactly Didn’t You Change Anything” on the Evolven blog.
By writing this post I learnt a little about the technology these guys use to determine even the most minor piece of information – not simply by diffing the content, but by actually understanding what all these little bits of information mean. After looking into the idea Evolven promotes I will definitely look into the technology – as much as they let me – and write about it.


Capacity Management – reason enough for the Cloud

The issue of how to make better use of IT resources is currently the focus of a lot of interesting buzzwords such as capacity management, Green IT, energy management and, above all, Cloud computing. The intention of the following piece is to outline why these themes have apparently landed out of the blue – and with great force – on the desks of CIOs and are being pumped out by marketing machines the world over, and above all to examine what they mean, which approaches you can choose to get involved with – and which challenges you will face in the process.

The fallacy of the command economy in the example of IT capacity

In most companies the required infrastructure is purchased simultaneously with IT projects. As this forms part of the project budget, such infrastructure is only used for that particular project. However, this also means that infrastructure acquired in this way is amortised over a period of three to five years and has to be designed right from the outset to be adequate to the requirements of running the IT solutions created in the project for that entire period without any expansion worthy of note. This alone leads to an incredible overcapacity, because assumptions regarding growth and demands on the infrastructure naturally tend to err on the side of caution, leading in all probability to overcapacity even at the end of the period – let alone at the start of production, for which a massive overcapacity is held in reserve.

Old Servers (photo from Flickr, Carey Tilden)

Seen in statistical terms, hardware and software investments (including maintenance) of less than 20% of the total IT budget might still be tolerable. However, if you take into account the energy costs generated by the hardware once it has been set up – another 20% – in combination with analysts’ predictions of massive increases in energy costs, what you are left with is the urgent need to put an immediate end to the deliberate generation of overcapacity. The energy factor is of particular relevance here, as the proportion of energy actually used by the IT solution itself only amounts to about 1/6 of the total energy consumed; the rest is lost as waste heat or consumed by cooling and other data center measures. On top of that, the energy consumption of the hardware depends only to a very small degree on the extent to which the hardware is actually used.
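A quick back-of-the-envelope calculation makes the waste tangible. The 1/6 share comes from the paragraph above; the 20% average utilisation is purely an assumed figure for the sake of the example:

    # Illustrative arithmetic only. The 1/6 share is taken from the text
    # above; the 20% average utilisation is an assumed figure.
    total_energy_kwh = 100.0      # energy drawn by the data center
    useful_fraction = 1.0 / 6.0   # share that actually reaches the IT solution
    utilisation = 0.20            # assumed average hardware utilisation

    productive_kwh = total_energy_kwh * useful_fraction * utilisation
    print(f"{productive_kwh:.1f} of {total_energy_kwh:.0f} kWh do productive work")

So under these assumed numbers only about 3 out of every 100 kWh drawn by the data center end up doing productive work.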

It is for these reasons that capacity management and energy management are such important elements of a modern IT strategy.

Capacity management – overarching approach with a need for action on the organisational level

Capacity Management Process Example

In the energy management field there are some approaches, very typical of IT, in which management software is used in combination with electricity management hardware to cut back on the energy use of existing hardware <Link to Sun-Wiki, only TOC available>. This can have a positive effect but bears no real comparison with the capacity approach, in which unnecessary resources are not acquired in the first place and therefore never tie up electricity or administration capacity or require additional hardware investment.

Capacity management is therefore the cleaner approach. The assumption behind capacity management is, however, that any plan to acquire new technical solutions is preceded by the reallocation of the hardware acquisition budget from the IT projects to the specialist departments, which then pay the IT department over time for the actual use of IT resources. Secondly – and likewise before any technical measures are taken – the IT architects need to be given plausible reasons why they should no longer plan hardware buffers into their architectures, as such buffers immediately negate the positive effects of capacity management. Once these operational and psychological steps have been successfully negotiated, it is worth exploring in greater depth the questions surrounding the implementation of capacity management.

Capacity management and its organisational implementation

The current implementation strategies for capacity management very frequently talk of the need for migration to a completely new platform. If you look instead at the successful capacity management environments of, among others, Google or Amazon, you will be struck by the fact that these two organisations above all consistently pursued the express objective of continuing to use existing hardware for as long as possible. This measure seems worth imitating, leading to the question of which applications and environments are of relevance for the capacity management issue in the first place.

As far as the architect is concerned this question can very easily be answered: ALL OF THEM. In practice, however, it is worth defining a few clear rules to determine the sequence in which applications are to be migrated to a capacity management environment and which hardware should continue to be used, under which conditions hardware should be disposed of and, accordingly, under which restrictive conditions new hardware should be acquired. If you assume that you currently have an overcapacity of at least 80% then the proportion of your hardware requiring decommissioning is significant.

Such a body of rules might look like the following:

  1. All hardware that will not be fully amortised within the next six months must continue to be used.
  2. All hardware that has already been amortised and already regularly requires extended maintenance is to be decommissioned.
  3. All applications that have registered at least one incident arising from capacity problems in the last 12 months are to be migrated with priority 1.
  4. All applications for which new hardware acquisitions have already been decided but not yet implemented are to be migrated with priority 1.
  5. All applications working at less than 5% of capacity are to be seen as a pool for these priority 1 migrations and added to the migrated applications until the required peak capacity has been reached.
  6. …
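Rules of this kind are simple enough to be applied mechanically to an inventory export. The following sketch is only an illustration – the field names are invented, and any real CMDB or asset database will look different:

    # Hypothetical sketch of applying such a rule set to an inventory export.
    # All field names (amortised, capacity_incidents_12m, ...) are invented.
    def classify(asset: dict) -> str:
        if not asset["amortised"]:
            return "keep in use"                   # rule 1
        if asset["maintenance_extended"]:
            return "decommission"                  # rule 2
        if asset["capacity_incidents_12m"] > 0:
            return "migrate, priority 1"           # rule 3
        if asset["new_hardware_approved"]:
            return "migrate, priority 1"           # rule 4
        if asset["avg_utilisation"] < 0.05:
            return "capacity pool for priority 1"  # rule 5
        return "review later"

    inventory = [
        {"name": "db07", "amortised": True, "maintenance_extended": True,
         "capacity_incidents_12m": 0, "new_hardware_approved": False,
         "avg_utilisation": 0.60},
        {"name": "app12", "amortised": True, "maintenance_extended": False,
         "capacity_incidents_12m": 2, "new_hardware_approved": False,
         "avg_utilisation": 0.35},
    ]

    for asset in inventory:
        print(asset["name"], "->", classify(asset))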

Just how the actual migration is to be effected without initiating a 1:1 manual migration of all IT applications is a significant challenge that does not fall within the scope of this contribution. This much can however be said: without large-scale automation such a project is not realistic and should not be attempted in the first place.

The architecture of an environment with capacity management

The first obvious tool for the implementation of an environment in line with the requirements of capacity management – one that flexibly makes the existing IT resources available to the applications as and when needed – is virtualisation. It is no coincidence that these technologies have seen significant growth in recent years. Alongside virtualisation itself, which makes it possible to run several virtual resources on one physical one – in other words, enabling existing resources to be used to maximum capacity – the question also arises of how to administer such an environment, how to monitor and predict the required capacities and, last but not least, how to operate the newly created environment.

Let’s first take a look at the subject of the virtualisation platform. The assumption for the applications that are to be migrated in the first stage to a capacity-managed environment is that they run on various different platforms. This means that you either need to use different virtualisation technologies (e.g. VMware for Linux and Windows environments, Solaris 10 for SPARC environments …) or migrate applications to another platform before deploying virtualisation – and the latter seems unrealistic if there is to be any expectation of short-term results.

A good architecture for an environment that supports virtualisation across different technologies, thus making it possible to use capacity management throughout, must support different platforms and define a simple interface for placing a system on these platforms. This is the only way to avoid destructive collisions with any existing in-house virtualisation initiatives and to use them smoothly together in pursuit of the grand aim of capacity management. Such a procedure also has the clear advantage of offering one methodology irrespective of the platform – opening up the possibility, depending on market developments, of replacing one virtualisation technology with another and one platform provider with another, for example in cases where in future you also want to consider using external providers rather than your own data center.
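What such a uniform placement interface could look like is sketched below. The class and method names are purely illustrative and do not refer to any specific product:

    # Minimal sketch of a uniform "placement" interface over different
    # virtualisation platforms. Class and method names are illustrative only.
    from abc import ABC, abstractmethod

    class PlatformAdapter(ABC):
        """One adapter per virtualisation technology or external provider."""

        @abstractmethod
        def place(self, system_spec: dict) -> str:
            """Provision the described system and return its identifier."""

    class VMwareAdapter(PlatformAdapter):
        def place(self, system_spec: dict) -> str:
            # ... call the VMware tooling here ...
            return f"vmware://{system_spec['name']}"

    class SolarisZoneAdapter(PlatformAdapter):
        def place(self, system_spec: dict) -> str:
            # ... call the Solaris zone tooling here ...
            return f"zone://{system_spec['name']}"

    def migrate(system_spec: dict, target: PlatformAdapter) -> str:
        # The same call works for every platform and provider behind the interface.
        return target.place(system_spec)

    print(migrate({"name": "billing-app", "cpu": 4, "ram_gb": 16}, VMwareAdapter()))

The value of the thin, stable interface is that swapping one virtualisation technology for another, or an internal platform for an external provider, only means writing another adapter.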

A good architecture that brings together different platforms and providers for one and the same platform under one roof, with stable interfaces and procedures, is therefore a prerequisite. Firstly because this is the only way to guarantee long-term supervision of the newly-designed IT landscape and secondly because such interfaces represent the only way to automate transitions from one platform to another in normal operation and, above all, in the course of migration.

Capacity management, workload and the Cloud

When planning the size of such a capacity-managed environment you are automatically faced with the question of required total capacity. It will be at this point, if not before, that you will realise that Cloud approaches which independently assume the function of resource allocation are essential.

Because it is hard to get used to the idea that IT infrastructure is no longer important – a psychological problem given a rational face by discussions about security – the first point of contact with such an environment will logically be the “private cloud”. This means implementing the technical concepts of virtualisation, dynamic resource allocation etc. on a platform that you control 100% or actually own, before starting to think about whether it makes sense – or is even possible – to buy in external IT resources.

The combination of dynamic requirements with the unequal distribution of resources therefore gives rise to Cloud technology even in cases where the physical hardware remains the property of your company. But the question arises of how much of the possible capacity reduction can actually be brought about in this way.

The mean capacity requirement can easily be calculated by determining the average IT load in terms of CPU, memory, storage and bandwidth. The maximum capacity requirement is calculated by adding up the time series of the capacity required by all applications and determining the absolute maximum of this combined time series. In a normal company the variance – the range between maximum, minimum and mean required capacity – is relatively high. This is the inevitable result of the fact that the IT usage of a company with one business model – even if that model is global – is always subject to certain cycles. For example, invoicing always takes place at month end, 90% of all transactions are executed in one market, and so on. This means that the maximum load on the IT, and the associated capacity reserves, are also subject to this cycle, in which the required capacity of many systems accumulates at the same time.
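The effect of these coinciding cycles on mean versus peak capacity is easy to reproduce numerically. The load curves below are made up for illustration – two applications that are quiet most of the month and both peak at month end:

    # Illustrative only: two made-up daily load curves (say, CPU cores needed)
    # for one month. Both applications follow the same business cycle and
    # peak during invoicing at month end.
    app_a = [10] * 27 + [40, 45, 50]
    app_b = [15] * 27 + [35, 40, 45]

    combined = [a + b for a, b in zip(app_a, app_b)]

    mean_load = sum(combined) / len(combined)
    peak_load = max(combined)

    print(f"mean capacity requirement: {mean_load:.1f}")   # ~31
    print(f"peak capacity requirement: {peak_load}")       # 95
    print(f"reserve held only for the peak: {peak_load - mean_load:.1f}")

Mixing in workloads with a different cycle – which is exactly what a provider serving many business models can do – lowers the combined peak without touching any single application.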

In order to derive the maximum benefit from capacity management it is therefore necessary to bundle the load of completely different business models on one physical IT platform. This can be done either by turning the company itself into a Cloud provider (e.g. Amazon) or by turning over the physical platforms used, at least in the medium term, to one or more Cloud providers. As the necessary spread of required IT resources is relatively large, a Cloud provider must have reached a certain minimum size and, at that size, must furthermore be able to influence the mix of the different business models using its platforms.

Conclusion

Placing one’s own IT under the control of capacity management and aspiring to ensure the availability at any given point of only those IT resources that are actually required at the time is a logical step that only requires the jettisoning of existing infrastructure which is not being used. The question of why this has not long been on the agenda of every business has a simple answer: the predicted massive rises in energy costs have for the first time made the possible financial damage caused by dormant IT resources significant enough to be worth looking at.

It can also be seen that the technical implementation of capacity management presupposes initial organisational steps to prevent the creation of further overcapacities and to separate out the budgets for infrastructure and projects.

If the aim is to create an environment under the control of capacity management, the attainment of the desired flexibility presupposes the development of an architecture that has standard interfaces to allow the coordination of different platforms and providers. Furthermore, from a technical point of view, a combination of virtualisation and management technologies for dynamic resource management – Cloud technology – is required for the implementation of such an undertaking.

In order to derive the maximum benefit, users of different business models will of necessity have to share one common physical infrastructure in order to reduce the variance between mean and maximum resource requirements. At this point it is not enough just to deploy Cloud technology – the use of Cloud providers is also essential.

As a final remark it needs to be said that capacity management, with its associated reductions in energy and investment costs, can and should be a key factor behind corporate decisions to turn to Cloud computing: for it is here that fruits are to be found which can easily be reaped and the experience is to be gained that will be required if further positive effects are to be derived from the use of these technologies.


M2 at U2

Well, you must know by now that I am a Plastic Paddy, or what some people call an honorary Irishman. So it was a complete no-brainer to get U2 tickets when the guys decided to pop by Frankfurt. Despite back problems – and only thanks to German medical skills – Bono was in full swing and the band obviously had great fun playing.

The U2 concert was simply something else. The sound in Frankfurt’s stadium was as bad as usual, but the show, with the stage in the middle of the arena and a full 360-degree performance, made this an event I will most certainly not forget. Besides all the good old and the more modern songs the guys played passionately on stage, I really love that U2 also has a message – a message of peace and understanding, in times when tolerance and communication seem to be at a new all-time low. One could consider it a pity that the rock grandpas of U2 have to step back on stage to deliver this message to an audience of 70,000 in Frankfurt alone, while so many younger bands who also have a message simply do not get the chance, because the music industry is still struggling with its fate of having missed the internet age. However, a band with a message like that, performance skills like U2’s, passion and enough resources to pull off one of the greatest shows I have seen in recent years is consolation enough for the incredible prices attached to the tickets.

If I get the chance to see U2’s 360 tour again in some other city I visit, I will definitely go – and not just because they are Irish legends.


Cloud Computing World Forum London – Where the Cloud Community meets

This week we are attending the Cloud Computing World Forum in London, a great event where the who’s who of Cloud Computing meets. The three-day conference will bring a number of sessions and workshops and a huge exhibition with about 45 stands to the Olympia Conference Centre. To provide the most ROI, the three conference days are packed with special events like the Cloud Computing World Series Awards, where the best Cloud solutions will be awarded, and the co-located CloudCamp London, a well-known unconference series, where the Cloud community gathers for the 10th time in London.

The Cloud Computing World Forum conference topics for the three days will be

  • Business Models and the Current Marketplace
  • Deployment and integration strategies
  • The future of business computing

Find more information in the Online Show Guide.

Please feel free to contact us for a meetup via DM on twitter: @rjudas and @boosc


Pulse comes to Frankfurt – PCTY2010

Last week I attended Pulse Comes To You 2010 in Frankfurt. This great one-day event series tries to capture the Pulse spirit and bring it to a number of major cities around the globe. The German edition, which attracted around 180 people, was hosted in Frankfurt by Tivoli’s General Manager Al Zollar.

Pulse comes to You

The agenda was split into a general session in the morning and workshop sessions in the afternoon. After German Tivoli Business Unit Executive Oliver Grell’s welcome note, Al Zollar took the stage and presented IBM’s vision of Integrated Service Management, which is the foundation for their Smarter Planet vision. He showed how Tivoli tools like Maximo Asset Management will help us manage the growing complexity of IT systems and the plenty of other devices around in today’s technical infrastructure. He went on to present a couple of case studies from various customers and announced partnerships with Ricoh, Johnson Controls and Juniper Networks.

A second keynote was held by Forrester Research Director Thomas Mendel, Ph.D., who presented Forrester’s view on IT Management 2.0. He made some interesting comments on the importance of infrastructure & operations for 2010 spending and said that the biggest concern of IT managers is the fear of being unable to support business growth in these troubled times. A new approach for IT Management 2.0 that Forrester promotes is “Do less. But do those things superbly!”, which is quite a step up from the much-overstressed “Do more with less” meme. Other interesting comments from Forrester were that ‘Service Catalogues’ are currently the second most requested topic at Forrester, and the recommendation to build a “just enough CMDB”, which from my point of view should be common sense, not to mention the usual calls to “break down the silos” and “know your business” in operations. Mendel concluded that Tivoli should grow into an abstraction layer between infrastructure/applications and the business processes.

The Forrester keynote was followed by a fresh talk from German author, management trainer and lateral thinker Anja Foerster, who tried to motivate the audience to expand their horizons and go for unconventional solutions. After an excursion into examples of cross-industry innovation, she described homogeneity as the true killer of innovation and asked the audience to foster diversity amongst their subordinates and coworkers. Contradiction lays the ground for creativity.

After the lunch break PCTY2010 continued in 4 parallel workshop tracks, headlined

  • Late-breaking Service Management
  • Service Management for IT
  • Service Management for Development and Deployment
  • Technical News

where IBMers, partners and German customers presented a number of Tivoli and Service Management related sessions (see the agenda). The day closed with a get-together and dinner, where all attendees had the chance to get hold of the specialists and continue the discussions.

All in all I had a great day, talking to many people and listening to interesting talks and sessions, but I’m confident that next year I will have the chance to go for the “real thing” in Las Vegas again.

What was really a pity was that there was no social media coverage at all: there were only a handful of tweets – about 90 percent of them sent by me (@rjudas) and some by IBMer Ingo Averdunk (@ingoa). I haven’t found any pictures on Flickr, nor any blog articles yet, so:

Come on IBM, you can do better.