The Workplace and Collaboration Multiversum

The core components of a modern office workplace are the device, its operating system (including management), a productivity software suite and a collaboration solution. Today we see three big players in this market, with three different solutions and ecosystems:

  • Apple and iCloud
  • Microsoft and Office 365
  • Google and G Suite

Although comparable, and sometimes even possible to combine, these three ecosystems come from totally different backgrounds, which explains their feature sets and focus.


What you must keep in mind when discussing the Apple solution is that Apple is a hardware company. Everything on top exists only to sell (more) hardware. This explains why Apple’s approach is still hardware centric instead of web centric. The Apple ecosystem contains three types of devices: the Mac on macOS as well as the iPad and iPhone on iOS. The main purpose of iCloud is to provide a seamless user experience across all Apple devices. Even for collaboration, the Apple approach is still very device centric, providing client software for most of their services – sometimes only for the Apple operating systems (for instance: FaceTime).

Although Apple limits the use of its services to its own operating systems and devices, the Apple platform is open to other productivity and collaboration suites. Remember, Apple is a hardware company; it is not Apple’s aim to sell services. This is especially true for enterprise clients: Apple actually does not want large corporations to use iCloud services, as they are considered consumer oriented. There is no iCloud for Business option available. It seems that Apple is not willing to invest in its services to reach an enterprise-grade service level. Apple even removed the option to log in to macOS with iCloud credentials, for fear of being held responsible if millions of users could not log in to their devices should the iCloud service fail.


As we all know, Microsoft is originally a software company. They expanded into services only recently, and their hardware efforts serve mostly as a reference model for what can be done with their software and services. Their new CEO recognized that the future of software sales lies in the cloud, on a subscription basis, rather than in on-premises installations based on perpetual licenses. Although their effort to position Office 365 as the center of their ecosystem looks promising, their legacy of clients with on-premises instances of Microsoft software is still evident. The Microsoft solution works best with Windows 10 and a locally installed Microsoft Office as complementary components to the Office 365 cloud services.

Microsoft understood that the overall ecosystem is more important than the platform. This is reflected in the recent reorganization of the Microsoft divisions, where Windows is now part of the cloud organization and no longer a business unit of its own. Microsoft’s cloud services can now be consumed on any platform, be it Windows 10, iOS, macOS or Linux – not with complete feature parity, though.

In contrast to Apple, Microsoft’s most important target client group are enterprises. Due to Microsoft’s strong background in the enterprise business, Office 365 and its underlying Azure infrastructure are designed with enterprise requirements in mind.


Google, on the other hand, is a natural service company, born on the Internet. The only function of Google’s hardware and software is to bring more users to its services. Therefore it is not very surprising that all Google services are web based and can be consumed from any device. To guarantee a consistent user experience across all platforms, Google provides its Chrome browser as the interface to its services. Google is strong in the consumer and education business, but still weak in the enterprise area. Especially in Europe, data protection is an important topic and does not seem to be addressed properly by Google.


Three ecosystems that come from three totally different roots but have the same goal: Apple, a hardware company, providing software and services as additional value for its hardware; Microsoft, a software company that needs the service business for further growth; and Google, a service company that provides hardware just to enable consumers to use its services. Keeping the business models of these providers in mind explains their focus and strategy.

Why Directory as a Service is important for the modern workplace

In the majority of today’s enterprise environments, Microsoft Active Directory is used as the primary directory service. This has worked very well for the past 17 years, due to its ability to centralize user and device management and provide a clear hierarchical structure of enterprise resources.

However, today’s modern workplace introduces new requirements that are hard to meet with traditional concepts. To understand these challenges a little better, let’s first discuss the main tasks of a directory service:

  • Authenticate users
  • Authorize users to applications

A traditional on-premises directory service like Microsoft AD can fulfill these tasks well as long as we are mainly dealing with non-mobile desktop computers and internal applications on a corporate network.

But today’s world is different.

The mobile workforce is not only using mobile computers such as laptops and MacBooks; more and more, it does not use computers at all, but mobile devices like tablets and smartphones. Although mobile accounts (in Active Directory) are a workaround for the fact that more and more computers are not connected to the internal network when users log on, admins know the pain of cached credentials and the consequence of machines hanging while waiting for timeouts trying to reach a domain controller. The problem has only increased since laptops are put into sleep mode rather than shut down.

Besides user behavior, the application landscape has also changed dramatically. Client/server applications are being replaced by web-based apps, which tend more often to be off premises (cloud Software as a Service). While internal client/server software was mainly based on Kerberos authentication, web-based SaaS offerings use different, more Internet-compatible authentication protocols like SAML or OAuth.

In a nutshell, the modern workplace requires a different set of functionality:

  • Support for the mobile workforce on devices connected to the Internet
  • Support for Internet based SaaS applications using SAML/OAuth authentication protocols
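To make the protocol shift concrete, here is a minimal sketch of the OAuth 2.0 client-credentials flow that many SaaS applications use instead of Kerberos. The endpoint URL, credentials and scope are hypothetical, purely for illustration:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical token endpoint of an Internet-based directory service.
TOKEN_URL = "https://idp.example.com/oauth2/token"

def build_token_request(client_id: str, client_secret: str, scope: str) -> Request:
    """Build an OAuth 2.0 client-credentials token request.

    An application exchanges its credentials for a short-lived bearer
    token at the directory's token endpoint -- plain HTTPS, no domain
    controller or Kerberos ticket involved.
    """
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode("utf-8")
    return Request(
        TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

def parse_token_response(raw: bytes) -> str:
    """Extract the bearer token from a JSON token response."""
    return json.loads(raw)["access_token"]
```

Because everything travels over HTTPS, the client works identically from the corporate network and from a coffee-shop Wi-Fi, which is exactly the property the modern workplace needs.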

Internet-based directory services try to meet these requirements. As their name implies, these directory services are accessible from the Internet and therefore provide authentication services regardless of whether the user is on the internal corporate network or on the Internet. Furthermore, they are designed to federate with SaaS applications and provide single sign-on to internal and external web applications.

Even Microsoft understood the importance of Directory as a Service for the future workplace. Azure AD is Microsoft’s DaaS implementation, highly integrated into the Windows 10 platform. A Windows 10 device can easily be bound to Azure AD. The built-in mechanism enables a user to log on to the device with his or her Azure AD credentials. The basic concept is similar to Active Directory mobile accounts: when the device is not online (or cannot reach Azure AD), cached credentials are used to authenticate the user. But this new implementation of mobile accounts handles network switches during standby far better than AD did in the past.
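The cached-credential fallback described above can be sketched as a toy model. This is a conceptual illustration only, not how Windows or any DaaS agent actually implements it; all names are made up:

```python
import hashlib
import hmac
import secrets

class CachedCredentialStore:
    """Toy model of a device-local credential cache."""

    def __init__(self) -> None:
        # user -> (salt, password verifier)
        self._cache: dict[str, tuple[bytes, bytes]] = {}

    def _digest(self, password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def cache(self, user: str, password: str) -> None:
        """Store a salted verifier after a successful online logon."""
        salt = secrets.token_bytes(16)
        self._cache[user] = (salt, self._digest(password, salt))

    def verify_offline(self, user: str, password: str) -> bool:
        """Verify against the cache when the directory is unreachable."""
        if user not in self._cache:
            return False
        salt, stored = self._cache[user]
        return hmac.compare_digest(stored, self._digest(password, salt))

def logon(store, user, password, directory_reachable, online_check):
    """Try the directory first; fall back to cached credentials."""
    if directory_reachable:
        ok = online_check(user, password)
        if ok:
            store.cache(user, password)  # refresh the cache on success
        return ok
    return store.verify_offline(user, password)
```

The pain point the post describes lives in the `directory_reachable` decision: a machine waking from standby may believe a domain controller is reachable and hang on timeouts before falling back.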

Independent DaaS providers are introducing more OS-agnostic concepts. JumpCloud, for example, works with pure local accounts, controlling them through an agent installed on the device. Local accounts work best in an offline / Internet-based scenario as they don’t require any network connectivity or reachable infrastructure. Combining local accounts with central management of them is an interesting best-of-both-worlds approach. Furthermore, as local accounts exist on macOS and Linux, too, this solution is not limited to Windows devices.


On-premises directory solutions will have increasing trouble fulfilling the requirements that the modern workplace and application landscape will demand. Introducing an Internet directory or Directory as a Service offering helps get the best out of the modern workplace and enables seamless integration with Internet-based SaaS applications.

The Modern Workplace

A significant change is happening in how workplaces are managed and used. Software vendors like Microsoft refer to this as The Modern Workplace.

For the past 20 years, enterprises followed the paradigm of a strictly controlled workplace. Workplaces had to stick to a company standard. Deviations from this standard were unwanted and considered to lead to higher management costs. The goal was to have a single golden image of the base installation and accept only small changes to settings and software. To achieve this goal, users were restricted to a minimum of rights, without any self-service capabilities.

This model worked well for years, but today’s requirements on productivity and the increasing complexity of use cases should lead us to rethink this approach. All major software and hardware vendors (Microsoft, Apple) seem to have understood these new challenges and created their own vision of what a modern workplace will look like in the future:

Enable end users to perform certain tasks themselves, supported by a self-service engine that drives these user-initiated tasks in a controlled way. This starts at deployment, by providing an out-of-the-box experience to the end user, and continues with software distribution via an app store, including the ability to install software updates when it fits the user’s work schedule. With such tools, end users can tailor their workplace to optimize their own productivity.

Support highly mobile use cases where workstations may easily be out of the company network for weeks. Control must not end at the company’s network perimeter; instead, it must handle devices that mainly live on the Internet as well as those on the internal network.

A closer look at the current market reveals that most vendors have solutions to support this new workplace concept:

Mobile Device Management (MDM) software is used for the basic management of devices instead of heavyweight tools like ADS GPOs and SCCM. Most MDM vendors nowadays support the traditional desktop operating systems (Windows, macOS) as well as the mobile platforms, and keep focusing on them.

Deployment methods that leverage the hardware vendor’s preload instead of reimaging the device are emerging, supported by zero-touch technologies like DEP (Apple) or Autopilot (Windows).

Internet directories like Azure AD are more and more replacing traditional identity providers like ADS.

MDM systems are usually provided as a cloud service accessible from the Internet or, when installed on premises, made reachable from the Internet, in order to provide services and control to devices living on the Internet.

The biggest obstacle to moving towards the modern workplace in a traditional enterprise is the cultural change that comes with it. While startups have already adapted to the new paradigm, most users in traditional enterprises consider self-service more a burden than an opportunity. Not to mention the security department, which likes strict control much better than loose, lightweight management.

However, as vendors move fast in this direction and are dropping support for some traditional methods (Apple will very likely discontinue imaging technologies with the next macOS version), and millennials are demanding a certain degree of freedom for their productivity, enterprises, too, should consider the modern workplace at least as an option.

Reducing Windows OS migration costs

Upgrading Windows operating systems in an enterprise context can be a very expensive task. The main cost drivers are not the development of a new OS base install image, but the integration of all application packages into the image and the testing of those packages on the new platform.

Typically, the costs are split as follows:

  • 33% developing the actual OS image
  • 33% testing the applications on the new OS platform
  • 17% project management and admin
  • 17% actual roll out

To minimize Windows OS upgrade costs, and also to reduce the running costs of a Windows-based workplace, applications must be decoupled from the underlying operating system layer. When applications are decoupled (meaning they are not directly installed on the Windows OS), the Windows base image is less complex (and therefore easier to develop), and the integration testing of all the applications on the new platform can be skipped.
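The potential saving follows directly from the cost split above. A small illustration; the percentages are the rough figures given earlier, while the reduction factors (all testing skipped, image effort halved) are assumptions chosen for the example:

```python
# Rough cost split of a Windows OS migration, from the list above.
MIGRATION_COST_SPLIT = {
    "os_image_development": 0.33,
    "application_testing": 0.33,
    "project_management": 0.17,
    "rollout": 0.17,
}

def migration_cost_after_decoupling(total_cost: float,
                                    testing_saved: float = 1.0,
                                    image_simplification: float = 0.5) -> float:
    """Estimate migration cost once applications are decoupled from the OS.

    testing_saved: fraction of application testing that can be skipped.
    image_simplification: fraction by which image development shrinks.
    Both defaults are illustrative assumptions, not measured values.
    """
    cost = total_cost
    cost -= total_cost * MIGRATION_COST_SPLIT["application_testing"] * testing_saved
    cost -= total_cost * MIGRATION_COST_SPLIT["os_image_development"] * image_simplification
    return cost
```

Under these assumptions, a fully decoupled environment would cut a migration budget roughly in half, which is why the two decoupling technologies below are worth the effort.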

Now, decoupling is easier said than done. New ways of providing applications to the user need to be explored. The most obvious way would be to push applications to become HTML5 based. Although this might be possible for applications that are newly introduced into the environment, we also need a solution for existing legacy programs. I see two main technologies that could be of use here:

  • Application virtualization
  • Application publishing

Application virtualization provides a sandbox around the application, so a new Windows platform does not interfere with the application context, and the virtualized application in its sandbox can be deployed on the new OS without intensive testing. A further advantage of this technology is the possibility to manage the application from a central point and control application updates without the need to send software update packages to thousands of clients.

Application publishing lets the software run on a terminal server. So again, the application context is independent of the workstation OS and can be upgraded centrally on the terminal server.

The more applications are provided via either option, the slimmer the OS base image can become, and upgrades to new OS versions get cheaper. This might be the first step towards a new model of the client workplace in which the actual client OS no longer matters and merely provides a runtime for the required access software (like a browser or the Citrix Receiver software).

Strategies for replacing Microsoft Office

In my previous blog post I discussed why Microsoft is still so dominant in the productivity software space and why it is hard to move to alternative office products. However, if you are still considering replacing MS Office, here is how to do it:


Get management commitment

The most important point is management commitment. Don’t be naive: it will be a hard process, and without full support from management up to the CEO, this project will fail. IBM, my former employer, tried to save MS Office licenses in favour of IBM’s own product Symphony and later of Apache OpenOffice. With 400,000 IBMers in mind, internal communication could easily have been moved to the Open Document Format; however, this effort never had the buy-in of upper management. Although Office licenses were strongly restricted and you had to go through a complex exception process to get one, most managers still produced PowerPoints and Excel files. Internal tools were still developed as Excel macros, and sooner or later it became a real pain not to have Microsoft Office installed. My personal opinion is that the missing commitment from management was the main reason why IBM gave up on this in mid-2014 and purchased Office licenses again.

Introduce an internal file format standard

Establish ODF as the one and only accepted internal standard for editable files. For non-editable files, PDF should be the way to go; the same applies to sharing files with external parties, as long as those files don’t need to be edited. Provide your corporate templates in the new format.
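Converting the existing document stock to the new standard can be scripted with LibreOffice’s headless mode. A minimal sketch; it assumes the `soffice` binary is on the PATH (its name and location vary per platform):

```python
import subprocess
from pathlib import Path

def convert_to_odf_command(path: str, outdir: str = ".") -> list[str]:
    """Build a LibreOffice headless command converting a file to ODF.

    Documents convert to .odt, spreadsheets to .ods,
    presentations to .odp.
    """
    target = {
        ".doc": "odt", ".docx": "odt",
        ".xls": "ods", ".xlsx": "ods",
        ".ppt": "odp", ".pptx": "odp",
    }[Path(path).suffix.lower()]
    return ["soffice", "--headless", "--convert-to", target,
            "--outdir", outdir, path]

def convert_to_odf(path: str, outdir: str = ".") -> None:
    """Run the conversion (requires LibreOffice to be installed)."""
    subprocess.run(convert_to_odf_command(path, outdir), check=True)
```

Batch conversion gets the archive into the standard format, but remember the compatibility caveats from the previous post: complex layouts should be spot-checked after conversion.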

Stop developing Excel macros

Get your developers on board and provide education on how to start developing in your new productivity suite. Regardless of whether it is Apache OpenOffice or LibreOffice (or any other alternative), they all come with a more or less powerful scripting language that can fulfill most requirements. Whether it is worth migrating existing Excel macros to the new platform depends on how many there are and how complex they are. Maybe they can still live in Excel until they are sunset anyway.
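One pragmatic route is to reimplement simple reporting macros as scripts outside the suite altogether, so they no longer depend on any office product. A hypothetical example: the kind of per-category aggregation that often lives in a small Excel macro, redone with the Python standard library over CSV exports:

```python
import csv
from collections import defaultdict
from io import StringIO

def totals_by_region(csv_text: str) -> dict[str, float]:
    """Sum an 'amount' column per 'region'.

    Stands in for a typical small Excel reporting macro; the column
    names are made up for this example.
    """
    totals: defaultdict[str, float] = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["region"]] += float(row["amount"])
    return dict(totals)
```

A macro rewritten this way runs anywhere, can be version controlled and tested, and survives the next office suite migration untouched.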

Provide education to your users

In the very end it is all about user acceptance. The better users are educated, the higher the chances they accept the new platform. Don’t underestimate this point: from a cost perspective this might be the biggest portion of the project!

Consider web based solutions

Give your end users new functionality by moving towards the web. There are alternatives on the market (Google, IBM Docs, Zoho, etc.). Maybe these new possibilities will attract your users.

I am sure there are a lot more points to consider, but without the ones mentioned above, I am pretty sure such a project will fail. Please feel free to join the conversation on Twitter via @emarcusnet!

Why is Microsoft Office still so dominant?

If you think about productivity tools, Microsoft Office is the product it is all about. Even the term Office is used as a synonym for productivity tools, and competing products use it in their names (LibreOffice, OpenOffice, SoftMaker Office, etc.).

At least competitive products are available, and there always were. The ground for productivity tools was once prepared by Lotus with its 1-2-3 spreadsheet tool, accompanied by Ami Pro and Freelance to form the Lotus SmartSuite. But Microsoft soon took over this market with Word, Excel and PowerPoint and has held it tightly ever since. Although Microsoft Office is rich in functionality, the alternative players can provide what 99% of users require, so

why is Microsoft Office still so predominant?

In recent years I saw a number of projects with the goal to replace Microsoft Office. But none of them declared victory over Redmond’s cash cow. Here are some reasons why:


File format compatibility

None of the competing tools achieved decent file format compatibility. Meaning: when exchanging documents with Microsoft Office users, the layout, tables and so on often get misplaced, making the document look different from the original. Although import/export filters for the older binary Microsoft formats (like .doc, .xls and .ppt) have made progress over the years, the newer XML-based formats (.docx, .xlsx, .pptx) are again quite a hurdle.

I would see this as the main reason for failing user acceptance.

Excel macros

Don’t underestimate the number of applications built as Excel macros that are out in the world and sometimes vital to companies. I have seen enterprises running critical reports based on Excel macros. Those macros can be complex, reading input data from various sources, and so on. Migrating them to another platform is a project of its own and, even if possible, ruins every serious cost case.


Third-party integrations

A lot of third-party tools provide connectors to Microsoft Office. This could be an Outlook plugin or the possibility to produce an Excel sheet as the result of a query, etc. For alternative office tools, such integrations are often missing.

User acceptance

Finally, employees are used to the Microsoft products from home, school or previous jobs; making them use an alternative usually requires considerable education and motivation effort.

In my next blog post I will talk about strategies to consider when attempting to move away from Microsoft Office to an alternative product.

How important is the client OS any more?

As I mentioned in one of my older posts, the client operating system is becoming less and less important in today’s IT world. But how important is a standardized client OS still for enterprises?

Up to today, enterprises’ workstation rollout strategy has been based on a corporate OS build which includes all relevant policies and settings. Any application package can rely on this standardized OS and its unique features. This was best practice for years, even decades, but is it the answer to the challenges the new way of working introduces to the corporate world?

I am not so sure about that and I think this paradigm needs to be reviewed!

With bring your own device, mobility and social collaboration, new end user devices are being used for corporate applications. Most of these devices come with their own operating system and might or might not be manageable. Some CIOs still believe they can cope with this challenge by simply prohibiting these new devices, which is more ostrich-like politics than a future-proof concept.

While I don’t say that a standardized OS platform is something bad, I think today’s applications must no longer rely on it. They must be robust enough to cope with any underlying OS configuration to be ready for the future.

Infrastructure is commodity and is therefore getting more and more diverse. This means specific OS vendors, versions or settings must become less and less important to higher-level applications!

Systems of record and systems of engagement

Discussing hybrid cloud is almost a never ending story; there are so many different aspects to have a closer look at and explore. In this post, I would like to focus on workloads, placement and interconnection.

In earlier posts about workloads and proper placement on different clouds, I introduced the terms cloud enabled and cloud native. While these terms are still valid definitions, in a hybrid cloud context they evolve to the paradigms of systems of record and systems of engagement.


The main difference from the cloud-enabled / cloud-native approach is that we are no longer talking about isolated workloads that are better placed here or there, but about integrated workload components spread over infrastructures, enabled by hybrid clouds.

Let’s take a closer look at this new paradigm:

Systems of record fit well on cloud-enabled infrastructures. These workloads have specific requirements regarding security, performance and infrastructure redundancy. Relational databases holding sensitive data are a good example of a workload component referred to as a system of record.

Systems of engagement have requirements better served by cloud-native infrastructures: flexibility, ease of deployment, elasticity and more. A web server farm is a good example here.

So, what is the thrilling news?

Because hybrid cloud environments are much more tightly integrated than they were a year ago, there are completely new possibilities for how workloads can be split and distributed over the environments.

For example, if we consider a web shop application, the presentation layer can be considered a system of engagement, whereas the data layer is more likely a system of record. In a hybrid cloud, the web server farm of the presentation layer can be placed on a cloud native environment like IBM SoftLayer, but the core database cluster, holding the credit card information of the customers, might be better placed on a PCI compliant infrastructure like a private cloud or IBM Cloud Managed Services.

Another example could be a SAP system that provides web access capabilities. Again, the web facing part could be on a public cloud, but the main SAP application is certainly better to be placed on a suiting infrastructure like IBM Cloud Managed Services for SAP Applications or even a traditional IT environment.

As mentioned above, tight integration is key for success with hybrid cloud scenarios. One crucial integration aspect is networking. With the interconnection of the IBM strategic cloud data centers to the SoftLayer private network, IBM provides a worldwide high speed network backbone for all its cloud data centers to enable components on different cloud offerings to communicate to each other properly. Other aspects are orchestration and governance which I covered in my other post.

The combination of systems of record and systems of engagement brings hybrid cloud to the next evolutionary level. By using the best of both worlds in a single workload, placing the components on the best-fitting infrastructures, hybrid cloud computing becomes even more powerful. The prerequisite is tight integration, especially in the areas of networking, orchestration and governance.

Don’t hesitate to continue the discussion with me on Twitter via @emarcusnet!

How to achieve success in cloud outsourcing projects

Outsourcing is a stepping stone on the way to cloud computing.

I would even say that companies with outsourcing experience can much more easily adopt cloud than others. They are already used to dealing with a service provider, and they have learned how to trust and challenge it to get the desired services. But certain criteria must be met in order to ensure that both parties get the most out of the outsourcing relationship.

According to a new study on the adoption of cloud within strategic outsourcing environments from IBM’s Center for Applied Insights, key success factors for a cloud outsourcing project are:

  • Better due diligence
  • Higher attention on security
  • Incumbent providers
  • Helping the business adjust
  • Planning integration carefully

I fully agree with all of these points, but found myself thinking back 15 years when the outsourcing business was on the rise. Actually, these success factors do not differ much from the early days. A company that has already outsourced parts of its information technology (IT) to an external provider had to cover those topics already—perhaps in a slightly different manner—but still enough to understand their importance.

Let’s briefly discuss these five key topics more in detail.

Due diligence

A common motivation for outsourcing is a historically grown environment, which is expensive to run. Outsourcing providers have experience in analyzing an existing environment and transforming it to a more standardized setup that can be operated for reasonable costs. Proper due diligence is key to understanding the transformation efforts and effects. For cloud computing, the story is basically identical; the only difference is the target environment, which might be even more standardized. But again, knowing which systems are in scope for the transformation and what their specific requirements are is essential for success.


Security

When a client introduces outsourcing for the first time in its history, the security department needs to be involved early, and its consent and support are required. In most companies, especially in sensitive industries like finance or health care, security policies prevent systems from being managed by a third-party service provider. Even if that is not obvious at first glance, the devil is often in the details.

I remember an insurance company that restricted traffic to the outsourcing provider’s shared systems in such a way that proper management with a cost-effective delivery model was not possible. Those security policies required adaptation to treat the service provider as a trusted second party rather than an untrusted third party. Cloud computing does bring in even more new aspects, but in general it is just another step in the same direction.

Incumbent providers

If your current outsourcing provider has proven that it is able to run your environment to the standards you expect, you might trust that it is operating its cloud offering in the same manner. Let’s look at the big outsourcing providers in the industry like IBM; they all have a mature delivery model, developed over years of experience. This delivery model is also used for their cloud offerings.

Business adjustment

In an outsourced environment, the business is already used to dealing with a service provider for its requests. Cloud computing introduces new aspects, like self-service capabilities or new restrictions because of a more standardized environment. The business needs to be prepared, but the step is by far smaller than without an already outsourced IT.

Plan integration

Again, this is a task that had to be done during the outsourcing transformation, too. Outsourcing providers have shared systems and delivery teams that need to be integrated. Cloud computing goes one step further by even putting workloads on shared systems, but that is actually not a new topic at all.

Outsourced clients are already well prepared for the step into cloud. Of course, there is the odd hurdle to overcome, but compared to firms still maintaining their IT entirely on their own, the journey is just another step in the same direction.

What are your thoughts about this topic? Catch me on Twitter via @emarcusnet for an ongoing discussion!

Are containers the future of hybrid clouds?

I recently stumbled upon the following video from James Bottomley, a Linux kernel developer working for Parallels. It’s a very good explanation of container technology and how it will be integrated into OpenStack:

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that virtual machines run a complete operating system (including its own kernel) on virtualized hardware (provided by the hypervisor). A container shares, at minimum, everything up to the OS kernel with the host system and all other containers on the host. But it can share even more: in a standardized setup, a container can share not only the kernel but also the main parts of the operating system and libraries, so that the container itself is actually rather tiny.

When we think about hybrid clouds today, we mainly think about fully virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to continuous operation requirements during maintenance windows or data center availability problems. Using containers with hybrid clouds would give users a new degree of freedom in where to place their workloads, as decisions are not final and can be changed at any given moment.

Because containers are much smaller in size than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than with VMs. The only prerequisite is a highly standardized setup of the container host, but systems tend to already be standardized in cloud environments, so this would be a perfect fit!

Today, we are not there yet. Containers are not yet supported by the big cloud software stacks, but as the video points out, OpenStack is about to include them in its application programming interfaces (APIs) soon.

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!