How to achieve success in cloud outsourcing projects

Outsourcing is a stepping stone on the way to cloud computing.

I would even say that companies with outsourcing experience can much more easily adopt cloud than others. They are already used to dealing with a service provider, and they have learned how to trust and challenge it to get the desired services. But certain criteria must be met in order to ensure that both parties get the most out of the outsourcing relationship.

According to a new study on the adoption of cloud within strategic outsourcing environments from IBM’s Center for Applied Insights, key success factors for a cloud outsourcing project are:

• Better due diligence
• Higher attention on security
• Incumbent providers
• Helping the business adjust
• Planning integration carefully

I fully agree with all of these points, but I found myself thinking back 15 years to when the outsourcing business was on the rise. These success factors do not actually differ much from those early days. A company that has already outsourced parts of its information technology (IT) to an external provider had to cover these topics before, perhaps in a slightly different manner, but still thoroughly enough to understand their importance.

Let’s discuss these five key topics in more detail.

Due diligence

A common motivation for outsourcing is an organically grown environment that has become expensive to run. Outsourcing providers have experience in analyzing an existing environment and transforming it into a more standardized setup that can be operated at reasonable cost. Proper due diligence is key to understanding the transformation effort and its effects. For cloud computing, the story is basically identical; the only difference is the target environment, which might be even more standardized. But again, knowing which systems are in scope for the transformation and what their specific requirements are is essential for success.

Security

When a client introduces outsourcing for the first time in its history, the security department needs to be involved early, and its consent and support are required. In most companies, especially in sensitive industries like finance or health care, security policies prevent systems from being managed by a third-party service provider. Even if that is not obvious at first glance, the devil is often in the details.

I remember an insurance company that restricted traffic to the outsourcing provider’s shared systems in such a way that proper management through a cost-effective delivery model was not possible. Those security policies needed to be adapted to treat the service provider as a trusted second party rather than an untrusted third party. Cloud computing does bring in even more new aspects, but in general it is just another step in the same direction.

Incumbent providers

If your current outsourcing provider has proven that it can run your environment to the standards you expect, you may well trust that it operates its cloud offering in the same manner. Look at the big outsourcing providers in the industry, such as IBM: they all have a mature delivery model developed over years of experience, and this delivery model is also used for their cloud offerings.

Business adjustment

In an outsourced environment, the business is already used to dealing with a service provider for its requests. Cloud computing introduces new aspects, such as self-service capabilities or new restrictions resulting from a more standardized environment. The business needs to be prepared, but the step is far smaller than it would be without an already outsourced IT.

Plan integration

Again, this is a task that had to be done during the outsourcing transformation, too. Outsourcing providers have shared systems and delivery teams that need to be integrated. Cloud computing may go one step further and put workloads on shared systems, but that is not a new topic at all.

Outsourced clients are already well prepared for the step into cloud. Of course there are still a few hurdles to clear, but compared to firms that still run all of their IT themselves, the journey is just another step in the same direction.

What are your thoughts about this topic? Catch me on Twitter via @emarcusnet for an ongoing discussion!

How Dropbox revolutionized enterprise IT

Some of you might remember the early days of computer networking, when coaxial cables were used to interconnect PCs and Novell NetWare was the market leader for file sharing. Although new players appeared in this space with IBM LAN Server and Microsoft Windows NT, the basic concept of shared network drives did not change much.

The general concept is based on centralized file repositories. Management, and especially access management, is usually limited to administrative personnel and based on groups rather than individual users. And because of the centralized approach, users need to be online to access their files.

This was state of the art for almost 20 years.

As with anything that stays around for a long time, requirements changed, and the centralized concept was unable to meet the new needs of the millennial generation. Mobile computing became the norm, and the mix of devices shifted from static PCs to notebooks and, nowadays, tablets and mobile phones. Users are not only able to take on administrative responsibilities, they increasingly demand to manage their resources themselves.

Although some vendors tried to enhance the existing software with all kinds of add-ons (such as offline folders) and workarounds to support the new requirements, the outcome was not really satisfying.

Dropbox was and still is so successful because it fulfills those new needs!

The paradigm switched from a centralized file store to a distributed, replicated file repository with easy access regardless of whether the user is online, offline, on a mobile device like a tablet or phone, or using only a web browser. Users can easily share the files they own with other users or groups through a simple web interface.
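
To make the new paradigm more concrete, here is a minimal, purely illustrative sketch of the client-side logic behind such a replicated repository. It is not Dropbox’s actual protocol or API; the RemoteRepository class is a hypothetical stand-in, and the sketch simply compares local file hashes against remote metadata to decide what to upload and what to download.

    import hashlib
    from pathlib import Path

    class RemoteRepository:
        """Hypothetical stand-in for a cloud file repository API."""

        def __init__(self):
            self._files = {}  # relative path -> (digest, content)

        def list_files(self):
            return {path: digest for path, (digest, _) in self._files.items()}

        def upload(self, path, content):
            self._files[path] = (hashlib.sha256(content).hexdigest(), content)

        def download(self, path):
            return self._files[path][1]

    def sync(local_dir: Path, repo: RemoteRepository) -> None:
        """One synchronization pass: push new or changed local files,
        pull files that so far exist only in the repository."""
        local_state = {
            str(f.relative_to(local_dir)): hashlib.sha256(f.read_bytes()).hexdigest()
            for f in local_dir.rglob("*") if f.is_file()
        }
        remote_state = repo.list_files()

        # Upload anything that is new or was changed on this device.
        for rel, digest in local_state.items():
            if remote_state.get(rel) != digest:
                repo.upload(rel, (local_dir / rel).read_bytes())

        # Pull anything that was added from another device.
        for rel in remote_state:
            if rel not in local_state:
                target = local_dir / rel
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_bytes(repo.download(rel))

Every device runs the same loop against the same repository, which is what keeps the copies on your notebook, tablet and phone eventually consistent while still allowing offline work.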

But how does this affect enterprise IT?

These new user requirements are not limited to consumers. The need to access your important files and work on them in a geographically distributed team is a very common requirement in today’s enterprises. In recent years, Dropbox has inspired a number of other products and services that specifically target the enterprise market. Not only do these products support the new file sharing paradigm, they also cover core enterprise requirements for data security, privacy and control.

IBM Connections (and its software as a service companion IBM SmartCloud for Social Business) is a perfect example.

File services today are no longer based on shared network drives, but rather on distributed file repositories with easy access through web interfaces or replication clients, and they enable users to perform limited management tasks themselves. If the enterprise IT department does not fulfill these new user requirements, shadow IT based on Dropbox and similar technologies will continue to rise. Please share your thoughts in the comments below.

Cloud Adoption in Europe

While cloud adoption booms around the world, Europe (especially Western Europe) seems to be moving at its own pace. This is somewhat surprising, because Western Europe is a high-cost, high-price market where adoption of new technologies used to be considered a differentiator against the competition. This does not seem to be the case for cloud computing. But why?

Let’s look at who can easily adopt cloud. Startup companies, which save on initial investment by consuming cloud services instead of buying hardware themselves, are great candidates. Especially when they intend to grow fast in their early years, cloud provides a perfectly scalable platform. Traditionally, there are more startups based in the Americas, and nowadays also in the Asia Pacific region, than in Europe. One reason might be the amount of available venture capital: Europe tends to be more conservative and takes fewer risks in its investments.

But, beside the startups, there are many mature companies and enterprises in Europe. Why are they adopting cloud more slowly?

One root cause is Europe’s national diversity. This has a major impact on the following two areas, which are prerequisites for successful cloud adoption:

Regulatory compliance issues: Every European country has its own set of data privacy rules and legislation. When using cloud computing, it is very likely that national boundaries are crossed, either because the data center is in another country or because the cloud service provider uses operational personnel located in a third country. To be clear, this is not necessarily a show stopper; it just adds complexity. A CIO might feel safer keeping the data on premises, just to be sure. The recent discussions about the NSA surveillance program and the US Patriot Act do not help here either.

Wide area network costs: Another disadvantage that derives directly from national diversity is the cost of wide area networks. International lines cost significantly more than data links within a country, a legacy of the original European structure of national carriers holding monopolies on the last mile.

But beside national boundaries, client readiness is often a big blocker for cloud adoption. Many enterprises maintain security policies that are simply not cloud ready. The cloud provider needs to be treated as a trusted partner, not as a third party, yet many security policies are not flexible enough to adapt to the cloud provider’s standards.

Another reason is that many companies are not yet ready to give up a certain degree of control. This concerns not only the infrastructure architecture but also server management. Servers in the cloud are still thought of as owned virtual machines, when in reality receiving cloud services means using a platform provided and owned by the cloud service provider.

As the world continues to adopt cloud, the situation is starting to improve. More and more decision makers on the client side understand the opportunities provided by cloud computing and are willing to invest. Regulatory compliance requirements are becoming more and more aligned, especially within the European Union. Let’s see how cloud adoption develops in Europe!

What is hybrid cloud?

The National Institute of Standards and Technology defines hybrid cloud as “a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.” Although this definition sounds very reasonable, it does not cover all aspects of hybrid clouds.

Let’s discuss the possible deployment models first. There are five defined cloud deployment models, ranging from a private cloud on premises to a public cloud service delivered by a cloud service provider.

Figure: Cloud deployment models

Often, hybrid cloud refers to a combination of a public cloud service and a private cloud on premises; however, a hybrid cloud could also consist of two public clouds from different providers, or even a combination of a cloud and traditional IT. In fact, a setup in which existing systems on a traditional IT infrastructure are combined with a public cloud service is currently the most frequent hybrid cloud use case.

Any hybrid cloud setup has some challenges that need to be considered during the planning and design phase:

  • The most obvious challenge is network connectivity, especially if remote cloud services like a public cloud or a hosted private cloud are involved. Not only must bandwidth, latency, reliability and associated cost considerations be taken into account, but also the logical network topology must be carefully designed (networks, routing, firewalls).
  • Another huge challenge is the manageability of different cloud services. When several cloud services are used, every service provider has its own management and provisioning environment, and these environments are completely independent from each other. With instances spread across different cloud services, there is no single picture showing the total number of deployed instances and their statuses. An orchestration layer is a possible solution to this problem: it provides a single interface for all cloud-related tasks and communicates with the different cloud services through their application programming interfaces (APIs). The big advantage of an orchestration layer is the ability to track and control activities at a central point and so maintain the big picture (a minimal sketch of this pattern follows this list).
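
To make the idea tangible, here is a minimal sketch of such an orchestration layer in Python. The driver classes are hypothetical placeholders for the proprietary provider APIs mentioned above; the point is only that the orchestrator exposes one interface and keeps the overall inventory in one place.

    from abc import ABC, abstractmethod

    class CloudDriver(ABC):
        """One driver per cloud service, wrapping that service's proprietary API."""

        @abstractmethod
        def provision(self, name: str, size: str) -> str: ...

        @abstractmethod
        def list_instances(self) -> list: ...

    class DummyDriver(CloudDriver):
        """Stand-in for a real driver; a real one would call the provider's API."""

        def __init__(self) -> None:
            self._instances = []

        def provision(self, name: str, size: str) -> str:
            # A real driver would translate this into the provider-specific API call.
            self._instances.append(f"{name} ({size})")
            return name

        def list_instances(self) -> list:
            return list(self._instances)

    class Orchestrator:
        """Single point of control and inventory across all registered cloud services."""

        def __init__(self) -> None:
            self._drivers = {}

        def register(self, label: str, driver: CloudDriver) -> None:
            self._drivers[label] = driver

        def provision(self, label: str, name: str, size: str) -> str:
            return self._drivers[label].provision(name, size)

        def inventory(self) -> dict:
            # The "big picture": every deployed instance across every cloud service.
            return {label: d.list_instances() for label, d in self._drivers.items()}

    orchestrator = Orchestrator()
    orchestrator.register("public-cloud", DummyDriver())
    orchestrator.register("private-cloud", DummyDriver())
    orchestrator.provision("public-cloud", "web-01", "medium")
    orchestrator.provision("private-cloud", "db-01", "large")
    print(orchestrator.inventory())

Whether a request targets the public or the private cloud, it goes through the same orchestrator, which is what preserves the big picture.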

Today, plenty of cloud service providers maintain their own proprietary sets of APIs. This makes orchestration complex, because the orchestrator needs some kind of driver component for each proprietary API set. However, the trend toward standardized APIs is clearly visible in the industry, and OpenStack looks set to become the cloud industry standard.
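
With a standardized API, the driver component becomes almost trivial. As an illustration (using today’s openstacksdk client, which postdates this post), listing the servers of any OpenStack-based cloud looks roughly like this; the cloud name "mycloud" is assumed to be defined in the user’s own clouds.yaml.

    import openstack

    # 'mycloud' is an assumed entry in the local clouds.yaml configuration file.
    conn = openstack.connect(cloud="mycloud")

    # The same call works against any OpenStack-compatible provider,
    # which is exactly what makes standardized APIs attractive for orchestration.
    for server in conn.compute.servers():
        print(server.name, server.status)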

Hybrid clouds mainly work on the infrastructure and application levels. On the infrastructure layer, a hybrid cloud means combining virtual machines from different cloud services. On the application or software as a service (SaaS) layer, a hybrid cloud describes an application setup with components spread across different SaaS offerings or existing applications within an enterprise’s data center. The main challenge of a SaaS-based hybrid cloud is the exchange of data between the different services and applications: just as orchestration works on the infrastructure level, data integrators work on the application layer.
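
A data integrator can be as simple as a scheduled job that pulls records from one SaaS API and pushes them into another. The endpoints and field names below are hypothetical placeholders, not real product APIs; the sketch only illustrates the pattern.

    import requests

    # Hypothetical endpoints; real SaaS offerings each have their own API and authentication.
    CRM_API = "https://crm.example.com/api/customers"
    ERP_API = "https://erp.example.com/api/customers"

    def sync_customers(crm_token: str, erp_token: str) -> None:
        """Pull customer records from one SaaS application and push them into another."""
        customers = requests.get(
            CRM_API, headers={"Authorization": f"Bearer {crm_token}"}, timeout=30
        ).json()

        for customer in customers:
            # Map the source schema onto the target schema before writing.
            payload = {"name": customer["name"], "email": customer["email"]}
            requests.post(
                ERP_API, json=payload,
                headers={"Authorization": f"Bearer {erp_token}"}, timeout=30
            ).raise_for_status()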

Summary:

A hybrid cloud is a combination of different clouds, be they private, public or a mix. The biggest challenge is the integration of the different cloud services and technologies. Standardized APIs, such as those driven by OpenStack, seem to solve most of those issues.

IBM SmartCloud Enterprise+ 1.3 in a nutshell

On November 19, 2013, IBM SmartCloud Enterprise+ (SCE+) version 1.3 was released. While every new SCE+ release has brought some interesting improvements, I’m particularly excited about 1.3. Tons of new features and improvements were implemented, making it worth having a closer look at the highlights of this version of SCE+.

Completely new portal. Let’s be polite and just say the old portal had major room for improvement. The new portal was completely rewritten and now meets the requirements clients have for such an interface.

New virtual machine (VM) sizes. New standard configurations were introduced, including Jumbo for x86 VMs. Even more important, however, are the new maximum possible configurations for a single VM:

  • Up to 64 vCPUs
  • Up to 128 GB RAM
  • Up to 48 TB storage

These new configurations can enable more workloads to run on SCE+.

Clustering. Even more workloads can now be enabled because of the new clustering options. Clients can choose between operating system (OS) based clustering (for all operating systems and platforms supported on SCE+) or simple anti-collocation, which enables clients to cluster VMs at the application level. Anti-collocation means that two VMs will not be provisioned on the same physical host, ensuring availability of at least one node in case a host goes down.

It is important to mention that service level agreements (SLAs) are still based on the individual VM, so there is no aggregated SLA for a cluster.

Anti-collocation (and clustering) does not guarantee that the physical hosts are located in different physical buildings. Even in dual-site SCE+ data centers, the nodes of a cluster might still end up on the same site. This limitation could potentially be removed in a later release of SCE+.
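
Conceptually, anti-collocation is just a placement constraint enforced by the provisioning scheduler. The following sketch is not SCE+ internals, merely an illustration of the rule that two members of the same cluster never land on the same physical host:

    def pick_host(hosts: dict, cluster: str) -> str:
        """Return a physical host that does not already run a VM of this cluster.

        hosts maps a host name to the set of cluster labels already placed on it.
        """
        for host, clusters in hosts.items():
            if cluster not in clusters:
                clusters.add(cluster)
                return host
        raise RuntimeError(f"No host satisfies anti-collocation for cluster '{cluster}'")

    hosts = {"host-a": {"web"}, "host-b": set()}
    print(pick_host(hosts, "web"))  # -> host-b: the second 'web' node lands elsewhere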

Unmanaged instances. Clients can request unmanaged virtual machines on SCE+ with the following limitations:

  • Managed VMs cannot be transformed to unmanaged ones (or the other way around)
  • Clustering is not available on unmanaged VMs
  • Unmanaged VMs must be based on available SCE+ images; there is still no way to import custom images
  • Migration services are not available for unmanaged instances

Migration services. Migration services for x86 and IBM Power System platforms can now be contracted as an optional part of an SCE+ contract.

Active Directory integration. SCE+ now supports the integration of complex Microsoft Active Directory setups, including a client-specific isolated domain or even joining (managed) VMs to the client’s Active Directory (AD) forest.

Database and middleware alerting and management. Besides management of the operating system, clients can now choose database and middleware management as an option, in two flavors:

  • Alerting only. The client maintains responsibility, but will be alerted by an automated monitoring system in case of failure.
  • Management. IBM provides management for selected database and middleware products (mainly IBM DB2 database software, MS SQL, Oracle, Sybase and IBM WebSphere products).

Custom hostnames and FQDNs. Custom hostnames and fully qualified domain names (FQDNs) can now be chosen during provisioning of a server VM.

Load balancer as a service. Besides the currently available virtual software load balancer (vLBF), load balancing as a service is now also available. The new service is based on industry-leading hardware appliances and provides features such as SSL offloading. Currently, load balancing as a service is only supported within a single site.

Increased number of security zones. Although three security zones remain the standard, clients can request up to 12 security zones during onboarding if their environment design requires it. Additional security zones can also be requested after onboarding through a Request for Service (RFS), but provisioning is then subject to availability. In any case, there is a hard limit of 12 security zones per client.

Summary

SCE+ 1.3 is a milestone in terms of features and new possibilities. It enables a lot more workloads to be supported on SCE+ and SCE+ based offerings like IBM SmartCloud for SAP (SC4SAP) and IBM SmartCloud for Oracle Applications (SC4Oracle).

Making SCE+ elastic

IBM SmartCloud Enterprise+ (SCE+) is a highly scalable cloud infrastructure targeted at productive, cloud-enabled, managed workloads. These target workloads make SCE+ a little heavier than SmartCloud Enterprise (SCE). Onboarding a new client alone takes a number of days to establish all the management systems for that client, and the same applies, to a lesser extent, to provisioning virtual machines. Because a service activation process must be completed before a managed system can be put into production, provisioning a virtual instance takes at least a few hours, instead of just minutes as on SCE. That is still a major improvement over traditional IT, where provisioning a new server can take several weeks!

The benefit of SCE+ is the management and high reliability of the platform. This makes it the perfect environment for core services and production workloads that can grow over time to meet the business requirements of tomorrow; even scaling down is possible to a certain extent. However, if you need to react to short, heavy load peaks, you need a platform that is truly elastic and can scale up and down on an hourly basis.

What you need is an elastic component on top of SCE+: you leverage SCE+’s high reliability for the important core functions of your business application, but keep the ability to scale out to a much lighter, and probably cheaper, platform to cover short peaks. SCE provides all the features you would expect from such an elastic platform.



As shown in the video, F5’s BIG-IP appliance can be used to control the load on servers and scale out dynamically to SCE as required. The base infrastructure could be SCE+ or any other environment. What is required, however, is a network connection between SCE and SCE+, as this is not part of either offering.
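
The control logic behind such a setup boils down to a simple feedback loop: watch a load metric, provision a burst instance on the lighter platform when a threshold is crossed, and release it again when the peak is over. The monitoring and provisioning calls below are hypothetical placeholders (simulated here), not the actual BIG-IP or SmartCloud APIs.

    import random
    import time

    SCALE_OUT_THRESHOLD = 0.8  # average utilization above which we burst out to SCE
    SCALE_IN_THRESHOLD = 0.3   # utilization below which burst capacity is released

    def average_utilization(pool):
        """Placeholder for a load balancer or monitoring query; simulated here."""
        return random.random()

    def provision_burst_vm():
        """Placeholder for the elastic cloud's provisioning API call."""
        return f"burst-{random.randint(1000, 9999)}"

    def decommission_vm(vm_id):
        """Placeholder for releasing a burst instance again."""
        print(f"released {vm_id}")

    def scaling_loop(core_pool, iterations=10):
        burst_pool = []
        for _ in range(iterations):
            load = average_utilization(core_pool + burst_pool)
            if load > SCALE_OUT_THRESHOLD:
                vm = provision_burst_vm()          # add capacity on the lighter platform
                burst_pool.append(vm)
                print(f"load {load:.2f}: scaled out with {vm}")
            elif load < SCALE_IN_THRESHOLD and burst_pool:
                decommission_vm(burst_pool.pop())  # shrink back to the managed core
            time.sleep(1)                          # a real loop would poll far less often

    scaling_loop(["core-01", "core-02"])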

The easiest way to establish layer 3 connectivity between SCE and SCE+ is to create a virtual private network (VPN) tunnel between the two environments. Several options are feasible:

  • Connecting a software VPN device or appliance on SCE to the official SCE+ VPN service
  • Connecting a hardware VPN device or appliance on SCE+ to the official SCE VPN service
  • Using the Virtela Meet VPN service to interconnect VPN tunnels from both environments

Summary:

SmartCloud Enterprise can be used to make SmartCloud Enterprise+ elastic. Several options exist to interconnect the two environments and leverage the benefits of both for specific use cases!

The future of managed cloud services

At the very beginning, there were unmanaged virtual machines as a service. These services were mainly used for development and test purposes or to host horizontal, born-on-the-web workloads. More traditional production workloads require another type of cloud service: one that is more fault tolerant, to provide the availability required by so-called cloud-enabled workloads, and that also provides certain management capabilities to ensure stable operation. Managed infrastructure as a service (IaaS) solutions such as IBM’s SmartCloud Enterprise+ provide these service levels and capabilities.

But is this the future of managed cloud services?

I don’t think so. Although this model fits some workloads, it is just the start of the journey.

The ultimate goal of managed cloud services is to receive higher-value services out of the cloud. In my opinion, the future of managed cloud services lies in platform as a service (PaaS) offerings.

Today’s managed IaaS offerings pose some challenges, both for the service provider and for the service consumer. One reason is the higher complexity caused by the shared responsibility for an IaaS-managed virtual machine (VM): the service provider manages the VM up to the operating system level, but the client is responsible for the middleware and application on top. In such a setup it is extremely hard to cleanly separate the operating system (OS) from the services running above it, because many applications require specific OS settings or access to operating system resources.

A managed PaaS overcomes these challenges by providing a consistent software stack from a single source, with all components designed for the purpose of the platform. If the PaaS service is a database, the OS settings, I/O subsystem and storage are designed and configured to provide a robust service with decent performance. The client can rely on the service provider’s expertise and does not need to support a database, or any other middleware, on an operating system platform whose configuration they can neither fully inspect nor fully control.

In one of my previous blog posts, on how middleware topology changes because of cloud, I discussed the change in middleware and database architecture. This is exactly where PaaS comes into play: by moving away from consolidated database and middleware clusters, clients require agile, elastic, managed databases and middleware as a service to handle the higher number of instances.

Summary

Due to its complexity and limitations, managed IaaS will become a niche offering for accommodating non-standard software. For middleware and database products with a certain market share, managed PaaS offerings will become a commodity.

Middleware topology changes because of cloud

Once upon a time, applications ran on physical servers. These physical server infrastructures were sized to accommodate the application software as well as required middleware and database components. The sizing was mainly based on peak load expectations because only limited hardware upgrading was possible. This led to a very simple application landscape topology. Every application had its set of physical server systems. If an application had to be replaced or upgraded, only those server systems were affected.

As the number of applications grew, the number of server systems reached levels that were hard to manage and maintain. Consolidation was the trend of that time, and as virtualization technologies matured, capacity upgrades became as easy as moving a slider. After the consolidation of the physical layer in the late 90s and early 2000s, the middleware and database layer was consolidated as well. Starting around 2005, we saw database hotels and consolidated middleware stacks providing a standardized layer of capabilities to the applications.

Although this setup helped streamline middleware and database management and standardize the software landscape, it introduced a number of problems:

The whole environment became more complex. Whenever a middleware stack was changed (because of a patch or a version upgrade), multiple applications were affected and had to be retested. Maintenance windows needed to be coordinated with all application owners, and unplanned downtime had a high impact on a larger number of applications.

Modern cloud computing is reversing this trend. Because provisioning and management of standard middleware and database services can be highly automated, deploying and managing a larger number of smaller server images takes less effort than it did in the early days. By de-consolidating these middleware and database blocks, we regain flexibility and end up with a far less complex environment.
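
What makes this de-consolidation affordable is automation: creating a dedicated database or middleware instance per application becomes a repeatable, scripted step rather than a project. A hedged sketch, with a purely hypothetical provisioning call, might look like this:

    def provision_database(app_name: str, size_gb: int) -> dict:
        """Placeholder for a cloud provisioning API call; returns connection details."""
        return {
            "host": f"db-{app_name}.cloud.example.com",
            "port": 5432,
            "storage_gb": size_gb,
        }

    # Each application gets its own small, standardized database instance,
    # so patching or migrating one application no longer affects the others.
    applications = {"webshop": 50, "reporting": 200, "crm": 100}

    inventory = {app: provision_database(app, size) for app, size in applications.items()}
    for app, details in inventory.items():
        print(app, "->", details["host"])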

There is another positive side effect of this approach: when application workloads are bundled together, they can more easily be moved to a fit-for-purpose infrastructure. Especially when some workloads are migrated into the cloud while others stay on a more traditional IT infrastructure, the new model helps move these isolated workloads without affecting others.

Summary

I am not saying that de-consolidation of database and middleware blocks is the holy grail of middleware topology architecture, but in a cloud environment it can help to get rid of complex integration problems without introducing new ones.

How SmarterComputing solutions can work for you

Last Saturday I finally registered for car2go. You might have seen the white and blue Smart cars in selected cities. Car2go is basically a car sharing offering; what fascinates me about it, however, is its full integration with SmarterComputing and cloud technologies, which brings real value for me as a member.

I have to admit that my curiosity about how they managed to bring all these technologies together was one of my main motivations to join. I own a car and a motorbike, and public transport in my hometown of Vienna is definitely among the best worldwide, so I am not really dependent on another transport option. However, the whole concept of a car sharing offering, well integrated with the Internet and mobile devices, caught my attention. Besides, I always wanted to know what it is like to drive a Smart car, and this is definitely one of the cheapest ways to find out.

Registration is as simple as it can be: you fill out a web form and then just stop by the car2go shop to show your driver’s license and pick up your membership card. Nothing extraordinary so far, but now it starts getting thrilling, almost as described by Sebastian in his blog post about driving as a service (http://thoughtsoncloud.com/index.php/2011/11/driving-as-a-service/).

You either look on the Internet or use the car2go smartphone app to find out where the nearest free car is parked, shown on a Google Maps overlay. You can then reserve a car simply by clicking on it in the map; it is set to occupied for the next 30 minutes so that you can comfortably walk over and check in. Checking in means you hold your NFC-equipped membership card against the lower left corner of the windshield, where a device in the car reads the data from the card and, after validation, unlocks the car for you.

Because the key is in the car, you can then just drive around while a per-minute fee is charged to your account, regardless of how many kilometers you drive. The fee is quite reasonable.

Of course, the car2go area is limited. You can drive outside that area, but you can check out only when you are within the boundaries. The car always knows its exact position and sends it to the central systems. Checking out is as easy as checking in: you just place your membership card on the same spot on the windshield and the car is locked. As soon as you check out, the car is immediately shown as available on the map, with its exact position!

I would really like to see more offerings of this kind. It shows what SmarterComputing can do for you in your day-to-day life!

When is a cloud open?

Today’s blog post was inspired by Red Hat’s vice president for cloud business, Scott Crenshaw, and his definition of an open cloud:

  • Open source
  • Viable, independent community
  • Based on open standards
  • Unencumbered by patents and other IP restrictions
  • Lets you deploy to your choice of infrastructure
  • Pluggable, extensible, and open API
  • Enables portability across clouds

Although I think this is a very good start for a discussion, I do not fully agree with his definition!

Open standards, APIs, and portability

I don’t doubt these points of Mr. Crenshaw’s definition; I see them as the most important criteria for a cloud to be called open. Cloud consumers should be able to move their workloads seamlessly from one open cloud to another. There is no room for vendor lock-in, and I fully agree with Mr. Crenshaw here!

Open source, independent community and patents

Considering that Mr. Crenshaw is a Red Hat employee, it is no wonder that he sees open source as a requirement for a cloud to be open. But is that really the case? I doubt it.

Sure, open source software and viable, independent communities have their benefits, but that is not specific to cloud computing, nor is it a requirement for an open cloud. I acknowledge that open source software stacks such as OpenStack are implementing open standards and interfaces and are driving their definition. But once those standards are established, I see no reason why closed source software that complies with them should not be considered open.

Traditionally, software products fall within the responsibility of the IT department. Cloud computing changes this paradigm to a certain extent: we often see a direct relationship between a business unit and the cloud vendor, bypassing the IT department. We can argue whether this is good or bad, but we do need to look at the product from a different viewpoint. The purely technical aspects become less important. So if it matters less whether or not the cloud is based on open source software, the important question becomes: what can it do, and what can it not do?

Choice of infrastructure

I admit that an open choice of infrastructure can help eliminate vendor lock-in. But I personally consider support for different platforms and infrastructures a feature, nothing more. Of course, when selecting a cloud software stack or vendor, the provided features must fit the requirements, and the more flexible they are, the more future-proof my selection might be. But that is not a criterion for an open cloud, at least not for me.

Summary

An open cloud must stick to open standards and implement open interfaces and APIs; I see those as the main criteria. Open source definitely helps to push these criteria, but it is not a mandatory requirement. At the end of the day, cloud consumers must be able to move their cloud workloads and data from one cloud to another; that is what makes the open cloud a reality!

Sources:

What’s an “Open Cloud,” Anyway? Red Hat Says It’s Not VMware by Joe Brockmeier (http://www.readwriteweb.com/cloud/2012/02/whats-an-open-cloud-anyway-red.php?sf3468400=1)