Reducing Windows OS migration costs

Upgrading Windows operating systems in an enterprise context can be a very expensive task. The main cost drivers are not the development of the new OS base install image itself, but the integration of all application packages into the image and the testing of those applications on the new platform.

Typically, the costs are split as follows:

  • 33% developing the actual OS image
  • 33% testing the applications on the new OS platform
  • 17% project management and admin
  • 17% actual rollout

To minimize Windows OS upgrade costs, and also to reduce the running costs of a Windows-based workplace, applications must be decoupled from the underlying operating system layer. When applications are decoupled (meaning they are not installed directly on the Windows OS), the Windows base image becomes less complex (and therefore easier to develop), and the integration testing of all the applications on the new platform can largely be avoided.
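
As a rough illustration of the potential savings, here is a minimal sketch based on the cost split above; the assumption that decoupling removes about 80% of the application testing effort is made up for the example and will differ from project to project.

    # Rough, illustrative estimate of migration cost after decoupling.
    # The cost split mirrors the percentages above; the 80% reduction of the
    # application testing effort is an assumption for the sake of the example.
    def migration_cost(total_budget, testing_reduction=0.8):
        split = {"os_image": 0.33, "app_testing": 0.33,
                 "pm_admin": 0.17, "rollout": 0.17}
        testing = total_budget * split["app_testing"] * (1 - testing_reduction)
        others = total_budget * (split["os_image"] + split["pm_admin"] + split["rollout"])
        return testing + others

    print(migration_cost(1_000_000))  # roughly 736,000 instead of 1,000,000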

Now, decoupling is easily said but hard to achieve. New ways of providing applications to the user need to be explored. The most obvious way would be to drive applications to become HTML5-based. Although this might be possible for applications that are newly introduced into the environment, we also need a solution for existing legacy programs. I see two main technologies that could be of use here:

  • Application virtualization
  • Application publishing

Application virtualization provides a sandbox around the application, so a new Windows platform does not interfere with the application context, and the virtualized application in the sandbox can be deployed on the new OS without intensive testing. A further advantage of this technology is the possibility to manage the application from a central point and control application updates without the requirement to send out software update packages to thousands of clients.

Application publishing lets the software run on a terminal server. So again, the application context is independent of the workstation OS and can be upgraded centrally on the terminal server.

The more applications are provided with either option, the slimmer the OS base image can become and the cheaper upgrades to new OS versions will be. This might be the first step towards a new model of the client workplace where the actual client OS is no longer important, but simply provides a runtime for the required access software (like a browser or the Citrix Receiver software).

Strategies for replacing Microsoft Office

In my previous blog post I discussed why Microsoft is still so dominant in the productivity software space and why it is hard to move to alternative office products. However, if you are still considering replacing MS Office, here is how to do it:


Get management commitment

The most important point is management commitment. Don’t be naive: it will be a hard process, and without full support from management up to the CEO, this project will fail. IBM, my former employer, tried to save on MS Office licenses in favour of its own product Symphony and later Apache OpenOffice. With 400,000 IBMers, internal communication could easily have been moved to the Open Document Format; however, this effort never had the buy-in of upper management. Although Office licenses were strictly limited and a complex exception process was required to get one, most managers still produced PowerPoint and Excel files. Internal tools were still developed as Excel macros, and sooner or later it became a real pain not to have Microsoft Office installed. My personal opinion is that this missing commitment from management was the main reason why IBM gave up on the effort in mid-2014 and purchased Office licenses again.

Introduce an internal file format standard

Establish ODF as the one and only accepted internal standard for editable files. For non-editable files, PDF should be the way to go; also for sharing files with external parties, as long as they don’t need to be edited, use PDF. Provide your corporate templates in the new format.
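
If you need to migrate a large number of existing templates or documents, the conversion can be scripted. Here is a minimal sketch that drives LibreOffice in headless mode from Python; it assumes LibreOffice is installed with the soffice binary on the PATH, and the directory names and format mapping are examples only.

    # Minimal sketch: batch-convert legacy Office files to ODF formats using
    # LibreOffice in headless mode. Assumes LibreOffice is installed and the
    # soffice binary is on the PATH; paths and the format map are examples.
    import subprocess
    from pathlib import Path

    def convert_to_odf(source_dir: str, target_dir: str) -> None:
        targets = {".docx": "odt", ".xlsx": "ods", ".pptx": "odp"}
        for path in Path(source_dir).iterdir():
            out_format = targets.get(path.suffix.lower())
            if out_format is None:
                continue  # skip anything that is not an Office document
            subprocess.run(
                ["soffice", "--headless", "--convert-to", out_format,
                 "--outdir", target_dir, str(path)],
                check=True,
            )

    convert_to_odf("corporate-templates", "corporate-templates-odf")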

Stop developing Excel macros

Get your developers on board and provide education on how to start developing in your new productivity suite. Regardless of whether it is Apache OpenOffice or LibreOffice (or any other alternative), they all come with a more or less powerful scripting language that can fulfill most requirements. Whether it is worth migrating existing Excel macros to the new platform depends on how many there are and how complex they are. Maybe they can still live in Excel until they are sunset anyway.
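
To give a feel for what such scripting looks like, here is a minimal sketch of a LibreOffice Calc macro written in Python against the UNO scripting framework; the sheet layout and cell positions are invented for the example, and in practice the script would be deployed through the suite’s macro management (XSCRIPTCONTEXT is injected by LibreOffice at runtime).

    # Minimal sketch of a LibreOffice Calc macro in Python (UNO scripting).
    # XSCRIPTCONTEXT is provided by LibreOffice when the macro is run;
    # the sheet index and cell positions below are examples only.
    def fill_summary(*args):
        doc = XSCRIPTCONTEXT.getDocument()        # the current Calc document
        sheet = doc.Sheets.getByIndex(0)          # first sheet
        total = 0.0
        for row in range(1, 11):                  # sum cells B2:B11
            total += sheet.getCellByPosition(1, row).getValue()
        sheet.getCellByPosition(1, 12).setValue(total)  # write result to B13
        return None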

Provide education to your users

In the end, it is all about user acceptance. The better users are educated, the higher the chances that they will accept the new platform. Don’t underestimate this point; from a cost perspective it might be the biggest portion of the project!

Consider web-based solutions

Give your end users new functionality by moving towards the web. There are alternatives on the market (Google, IBM Docs, Zoho, …). Maybe these new possibilities will attract your users.

I am sure there are many more points to consider, but without the ones mentioned above, such a project will almost certainly fail. Please feel free to join the conversation on Twitter via @emarcusnet!

Why is Microsoft Office still so dominant?

When you think about productivity tools, Microsoft Office is the product it is all about. Even the term Office is used as a synonym for productivity tools, and competing products use it in their names (LibreOffice, OpenOffice, SoftMaker Office, etc.).

At least there are competing products available, and there always have been. The ground for productivity tools was actually prepared by Lotus with its 1-2-3 spreadsheet, accompanied by Ami Pro and Freelance to form the Lotus SmartSuite. But Microsoft soon took over this market with Word, Excel and PowerPoint and has held it tightly ever since. Although Microsoft Office is rich in functions, the alternative players can provide what 99% of users require, so

why is Microsoft Office still so predominant?

In recent years I have seen a number of projects with the goal of replacing Microsoft Office. But none of them declared victory over Redmond’s cash cow. Here are some reasons why:


File format compatibility

None of the competing tools has achieved decent file format compatibility. When exchanging documents with Microsoft Office users, the layout, tables and so on often get misplaced, making the document look different from the original. Although import/export filters for the older binary Microsoft formats (like .doc, .xls and .ppt) have made progress over the years, the newer XML-based formats (.docx, .xlsx, .pptx) are once again quite a hurdle.

I would see this as the main reason for the lack of user acceptance.

Excel macros

Don’t underestimate the number of application-like Excel macros that are out in the world and sometimes vital to companies. I have seen enterprises running critical reports based on Excel macros. Those macros can be complex, reading input data from various sources and more. Migrating them to another platform is a project of its own and, even if possible, ruins every serious cost case.


Third-party integration

A lot of third-party tools provide connectors to Microsoft Office, be it an Outlook plugin or the ability to produce an Excel sheet as the result of a query. For alternative office suites, such integrations are often missing.

User acceptance

Finally, employees are used to the Microsoft products from home, school or previous jobs; making them use an alternative usually requires considerable education and motivation effort.

In my next blog post I will talk about strategies that could be considered when attempting to move away from Microsoft Office to an alternative product.

How important is the client OS any more?

As I mentioned in one of my older posts, the client operating system is becoming less and less important in today’s IT world. But how important is a standardized client OS still to enterprises?

Up to today, enterprises’ workstation rollout strategy has been based on a corporate OS build which includes all relevant policies and settings. Any application package can rely on this standardized OS and its unique features. This has been best practice for years, even decades, but is it the answer to the challenges the new way of working introduces to the corporate world?

I am not so sure about that and I think this paradigm needs to be reviewed!

With bring your own device (BYOD), mobility and social collaboration, new end user devices are being used for corporate applications. Most of these devices come with their own operating systems and might or might not be manageable. Some CIOs still believe they can cope with this challenge by simply prohibiting these new devices, which is more ostrich-like politics than a future-proof concept.

While I am not saying that a standardized OS platform is a bad thing, I think today’s applications must no longer rely on it. They must be robust enough to cope with any underlying OS configuration in order to be ready for the future.

Infrastructure is a commodity and is therefore becoming more and more diverse. This means that specific OS vendors, versions or settings must become less and less important to the application layers above!

Systems of record and systems of engagement

Discussing hybrid cloud is almost a never-ending story; there are so many different aspects to look at and explore. In this post, I would like to focus on workloads, placement and interconnection.

In earlier posts about workloads and their proper placement on different clouds, I introduced the terms cloud enabled and cloud native. While these terms are still valid definitions, in a hybrid cloud context they evolve into the paradigms of systems of record and systems of engagement.


The main difference from the cloud enabled / cloud native approach is that we are no longer talking about isolated workloads that are better placed here or there, but about integrated workload components spread across infrastructures, enabled by hybrid clouds.

Let’s look a bit closer at this new paradigm:

Systems of record fit well on cloud-enabled infrastructures. Those workloads have specific requirements regarding security, performance and infrastructure redundancy. Relational databases holding sensitive data are a good example of a workload component referred to as a system of record.

Systems of engagement lean more towards the requirements supported by cloud native environments: flexibility, ease of deployment, elasticity and more. A web server farm might be a good example here.

So, what is the thrilling news?

Because hybrid cloud environments are much more tightly integrated than they were a year ago, there are completely new possibilities for how workloads can be split and distributed across environments.

For example, if we consider a web shop application, the presentation layer can be considered a system of engagement, whereas the data layer is more likely a system of record. In a hybrid cloud, the web server farm of the presentation layer can be placed on a cloud native environment like IBM SoftLayer, but the core database cluster, holding the credit card information of the customers, might be better placed on a PCI compliant infrastructure like a private cloud or IBM Cloud Managed Services.

Another example could be an SAP system that provides web access capabilities. Again, the web-facing part could be on a public cloud, but the main SAP application is certainly better placed on a suitable infrastructure like IBM Cloud Managed Services for SAP Applications or even a traditional IT environment.

As mentioned above, tight integration is key for success with hybrid cloud scenarios. One crucial integration aspect is networking. With the interconnection of the IBM strategic cloud data centers to the SoftLayer private network, IBM provides a worldwide high-speed network backbone for all its cloud data centers, enabling components on different cloud offerings to communicate with each other properly. Other aspects are orchestration and governance, which I covered in my other post.

The combination of systems of record and systems of engagement brings hybrid cloud to the next level of evolution. By using the best of both worlds in a single workload and placing the components on the best-fitting infrastructures, hybrid cloud computing becomes even more powerful. The prerequisite is tight integration, especially in the areas of networking, orchestration and governance.

Don’t hesitate to continue the discussion with me on Twitter via @emarcusnet!

How to achieve success in cloud outsourcing projects

Outsourcing is a stepping stone on the way to cloud computing.

I would even say that companies with outsourcing experience can much more easily adopt cloud than others. They are already used to dealing with a service provider, and they have learned how to trust and challenge it to get the desired services. But certain criteria must be met in order to ensure that both parties get the most out of the outsourcing relationship.

According to a new study on the adoption of cloud within strategic outsourcing environments from IBM’s Center for Applied Insights, key success factors for a cloud outsourcing project are:

• Better due diligence
• Higher attention to security
• Incumbent providers
• Helping the business adjust
• Planning integration carefully

I fully agree with all of these points, but found myself thinking back 15 years to when the outsourcing business was on the rise. Actually, these success factors do not differ much from the early days. A company that has already outsourced parts of its information technology (IT) to an external provider has had to cover these topics before, perhaps in a slightly different manner, but thoroughly enough to understand their importance.

Let’s briefly discuss these five key topics in more detail.

Due diligence

A common motivation for outsourcing is an organically grown environment that is expensive to run. Outsourcing providers have experience in analyzing an existing environment and transforming it into a more standardized setup that can be operated at reasonable cost. Proper due diligence is key to understanding the transformation efforts and effects. For cloud computing, the story is basically identical; the only difference is the target environment, which might be even more standardized. But again, knowing which systems are in scope for the transformation and what their specific requirements are is essential for success.


Security

When a client introduces outsourcing for the first time in its history, the security department needs to be involved early, and its consent and support are required. In most companies, especially in sensitive industries like finance or health care, security policies prevent systems from being managed by a third-party service provider. Even if that is not obvious at first glance, the devil is often in the details.

I remember an insurance company that restricted traffic to the outsourcing provider’s shared systems in such a way that proper management using a cost-effective delivery model was not possible. Those security policies required adaptation to reflect the service provider as a trusted second party rather than an untrusted third party. Cloud computing does bring in even more new aspects, but in general it is just another step in the same direction.

Incumbent providers

If your current outsourcing provider has proven that it is able to run your environment to the standards you expect, you might trust that it is operating its cloud offering in the same manner. Let’s look at the big outsourcing providers in the industry like IBM; they all have a mature delivery model, developed over years of experience. This delivery model is also used for their cloud offerings.

Business adjustment

In an outsourced environment, the business is already used to dealing with a service provider for its requests. Cloud computing introduces new aspects, like self-service capabilities or new restrictions because of a more standardized environment. The business needs to be prepared, but the step is far smaller than it would be without an already outsourced IT.

Plan integration

Again, this is a task that had to be done during the outsourcing transformation, too. Outsourcing providers have shared systems and delivery teams that need to be integrated. Cloud computing might go one step further and even put workloads on shared systems, but that is actually not a new topic at all.

Outsourced clients are already well prepared for the step into cloud. Of course there are a few hurdles to overcome, but compared to firms still maintaining only their own IT, the journey is just another step in the same direction.

What are your thoughts about this topic? Catch me on Twitter via @emarcusnet for an ongoing discussion!

Are containers the future of hybrid clouds?

I recently stumbled upon a video by James Bottomley, a Linux kernel developer working for Parallels. It’s a very good explanation of container technology and how it will be integrated into OpenStack.

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that virtual machines run a complete operating system (including its own kernel) on virtualized hardware (provided by the hypervisor). A container shares, at minimum, everything up to the OS kernel with the host system and all other containers on the host. But it can share even more: in a standardized setup, a container can share not only the kernel but also the main parts of the operating system and libraries, so that the container itself is actually rather tiny.

When we think about hybrid clouds today, we mainly think about fully-virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to continuous operation requirements during maintenance windows or data center availability problems. Using containers with hybrid clouds would give users a new degree of freedom in where to place their workloads as decisions are not final and can be changed at any given moment.

Because containers are much smaller in size than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than with VMs. The only prerequisite is a highly standardized setup of the container host, but systems tend to already be standardized in cloud environments, so this would be a perfect fit!
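
To make the idea more concrete, here is a minimal sketch of moving a container image between two hosts, using Docker as one example container runtime driven from Python; the host names, image name and SSH-based transfer are placeholders for illustration, not a recommendation for a specific toolchain.

    # Minimal sketch: export a container image on one host, copy it over the
    # WAN and import it on another host. Docker is used as an example runtime;
    # host names, image name and the scp-based transfer are placeholders.
    import subprocess

    IMAGE = "webshop-frontend:1.0"          # example image name
    SOURCE_HOST = "cloud-a.example.com"     # placeholder source host
    TARGET_HOST = "cloud-b.example.com"     # placeholder target host

    def run(host, command):
        # Run a command on a remote host over SSH.
        subprocess.run(["ssh", host, command], check=True)

    # 1. Save the image to a tarball on the source host.
    run(SOURCE_HOST, f"docker save -o /tmp/app.tar {IMAGE}")
    # 2. Copy the (comparatively small) tarball between the hosts.
    subprocess.run(
        ["scp", f"{SOURCE_HOST}:/tmp/app.tar", f"{TARGET_HOST}:/tmp/app.tar"],
        check=True,
    )
    # 3. Load and start the image on the target host.
    run(TARGET_HOST, "docker load -i /tmp/app.tar")
    run(TARGET_HOST, f"docker run -d -p 80:80 {IMAGE}")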

Today, we are not as far along as we could be. Containers are not yet supported by the big cloud software stacks, but as the video points out, OpenStack is about to include them in its application programming interfaces (APIs) soon.

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!

The hybrid cloud onion

In an earlier post, I defined a hybrid cloud and discussed possible scenarios including multiple public cloud providers, private clouds and traditional information technology (IT) environments.

While that post hopefully provided a good explanation of hybrid cloud infrastructures, it was not the full story, especially if you plan to implement a hybrid cloud in your environment. Like an onion that has many different layers around its core to protect it and keep it nice, white and juicy, hybrid cloud infrastructure has many different layers that keep it functional. Let’s take a look at these layers.


Management

Don’t underestimate the complexity that is introduced as a result of the different technologies and service providers in a hybrid setup. Establishing a common management infrastructure might be extremely hard and might not always make sense; however, there are components that you might want to integrate and harmonize. Typically these are the monitoring, alerting and ticketing tools.

Whenever a new piece of infrastructure is added to your hybrid setup, you should consider the extent to which you need to integrate it into your existing management systems, and how to manage it once it is integrated.


Orchestration

Once the infrastructure is managed properly, you can think about how to provision new workloads. The next layer we should consider is orchestration. As with the management of your hybrid cloud infrastructure, your goal here should be a single provisioning point that spans services across the different cloud infrastructures.

The ongoing standardization of cloud application programming interfaces (APIs) addresses this need. Amazon Web Services APIs and OpenStack may be considered industry standards in this arena. More and more cloud providers and cloud products support at least one of the two, often both. Tools like IBM Cloud Orchestrator can not only provision single workloads on different hybrid infrastructures, but can also define workload patterns for faster and easier deployment.
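
As a simple illustration of what provisioning against these two API families can look like from a single script, here is a minimal sketch using the openstacksdk and boto3 client libraries; the image IDs, flavor, instance type, network and region names are placeholders, and this is only a schematic example, not how a product like IBM Cloud Orchestrator works internally.

    # Minimal sketch: provision one server on a private OpenStack cloud and
    # one on AWS from a single script. Image IDs, flavor/instance type and
    # the cloud/region names are placeholders; credentials are assumed to be
    # configured via clouds.yaml (OpenStack) and the usual AWS mechanisms.
    import openstack   # openstacksdk
    import boto3

    def provision_private(name):
        conn = openstack.connect(cloud="private-cloud")   # entry in clouds.yaml
        return conn.compute.create_server(
            name=name,
            image_id="11111111-2222-3333-4444-555555555555",  # placeholder
            flavor_id="m1.small",
            networks=[{"uuid": "66666666-7777-8888-9999-000000000000"}],
        )

    def provision_public(name):
        ec2 = boto3.client("ec2", region_name="eu-west-1")
        return ec2.run_instances(
            ImageId="ami-12345678",      # placeholder
            InstanceType="t2.micro",
            MinCount=1, MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": name}],
            }],
        )

    provision_private("webshop-frontend-1")
    provision_public("webshop-frontend-2")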


Governance

Orchestration enables the use of a hybrid infrastructure in an automated way. And once you are able to orchestrate your environment, you need to control how that is done. The main question to answer is which workloads should run where. This is crucial because each infrastructure in the hybrid setup has its strengths and weaknesses. Private clouds might be the place for sensitive data, while public clouds might provide the best price point. It is important to establish policies about hybrid cloud usage scenarios.
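
To make this tangible, here is a minimal sketch of such a placement policy expressed in code; the workload attributes and the rule that sensitive components go to the private cloud are simplified assumptions, not a complete governance model.

    # Minimal sketch of a placement policy: decide where a workload component
    # should run based on a few simplified attributes. Real governance involves
    # far more criteria (cost, latency, data residency, contracts, ...).
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        sensitive_data: bool      # e.g. credit card or health data
        needs_elasticity: bool    # e.g. web tier with load peaks

    def placement(workload: Workload) -> str:
        if workload.sensitive_data:
            return "private-cloud"        # compliance-bound systems of record
        if workload.needs_elasticity:
            return "public-cloud"         # elastic systems of engagement
        return "traditional-it"           # everything without a strong driver

    for wl in [Workload("webshop-frontend", False, True),
               Workload("customer-database", True, False)]:
        print(wl.name, "->", placement(wl))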


Hybrid clouds are defined by their infrastructures, which are much like the layers of an onion. To successfully establish a hybrid cloud setup, management, orchestration and governance must not be forgotten!

Share your comments and questions with me on Twitter @eMarcusNet.

What are Community Clouds?

The nature of any public cloud is to meet the requirements of the majority of its users. There are always trade-offs in functionality, standardization and costs. So, in the end, the implemented requirements are a kind of lowest common denominator.

While this might be good (enough) for most industries, it often is not enough for client groups with special requirements, like financial institutions, government organizations or pharmaceutical companies. To drive cloud adoption for those clients, we need a type of cloud that can meet their particular needs. Such clouds are referred to as community clouds because they are designed to serve a special community of clients. A community cloud is an infrastructure that is shared by several organizations with similar concerns.

But why are community clouds so important for both the service providers and the industries and communities that use them?

The service providers can target new client segments that they could not reach with a standard cloud offering. Although investments into the cloud’s underlying infrastructure and security processes might be higher, competition in this segment is lower, and marketable prices are potentially higher than for standard public cloud services. Depending on the targeted industry, the offered cloud services develop from a commodity business to a high-value, high-margin business, which might be more attractive for service providers.

For special industry clients, a community cloud provides the possibility to gain the benefits of cloud computing but stay compliant with their industry requirements. The service provider takes over the burden for required certifications like the Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry (PCI) data security standards and so on.

Another attractive aspect might be the fact that a client’s neighbor in a community cloud is most likely from the same industry. In a governmental community cloud, infrastructure is shared (as is the nature of public clouds) only with other governmental organizations, which might relieve them of certain security concerns they have with open public clouds.

Although community clouds are still niche products, they are becoming more and more important with general cloud adoption across industries. They can be a good solution for both the service provider, who gets better margins with higher value services, and for the client, who can be sure that its industry-specific needs and regulatory requirements are met in a professional way. The fact that only clients from the same industry are on such a community cloud might help to increase trust in cloud computing, even for highly regulated and sensitive industries.

Could community clouds help increase cloud adoption? Continue the conversation with me on Twitter @eMarcusNet.

How Dropbox revolutionized enterprise IT

Some of you might remember those early days of computer networking when coaxial cables were used to interconnect PCs and Novell NetWare was the market leader for file sharing. Although new players appeared in this space with IBM LAN Server and Microsoft Windows NT, the basic concept of shared network drives did not change much.

The general concept is based on centralized file repositories. Management and especially access management is usually limited to administrative personnel and based on groups rather than on individual users. And, because of the centralized approach, users are required to be online to access files.

This was state of the art for almost 20 years.

As with anything that stays around for a long time, requirements changed, and the centralized concept was unable to meet the new needs of the millennial generation. Mobile computing became more natural, and the number and kinds of devices changed from static PCs to notebooks and, nowadays, tablets and mobile phones. Users are not only able to take on administrative responsibilities; they even demand to manage their resources themselves.

Although some tried to enhance the existing software with all kinds of add-ons (such as offline folders) and workarounds to support the new requirements, the outcome was not really satisfactory.

Dropbox was and still is so successful because it fulfills those new needs!

The paradigm switched from a centralized file store to a distributed, replicated file repository with easy access regardless of whether the user is online, offline, or using a mobile device like a tablet or phone, or even just a web browser. Users can easily share the files they own with other users or groups through a simple web interface.

But how does this affect enterprise IT?

These new user requirements are not limited to consumers. Actually, the need to access important files and work on them in a geographically distributed team is a very common requirement in today’s enterprises. In recent years, Dropbox has inspired a number of other products and services that specifically target the enterprise market. Not only do these offerings support the new file sharing paradigm, but they also support core enterprise requirements for data security, privacy and control.

IBM Connections (and its software as a service companion IBM SmartCloud for Social Business) is a perfect example.

File services today are no longer based on shared network drives, but rather on distributed file repositories with easy access through web interfaces or replication clients, which enable users to perform limited management tasks themselves. If the enterprise IT department does not fulfill these new user requirements, shadow IT based on Dropbox and similar technologies may continue to rise. Please share your thoughts in the comments below.