Are containers the future of hybrid clouds?

I recently came across a video from James Bottomley, a Linux kernel developer working for Parallels. It's a very good explanation of container technology and how it will be integrated into OpenStack.

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that a virtual machine runs a complete operating system (including its own kernel) on virtualized hardware (provided by the hypervisor). A container shares, at minimum, the OS kernel with the host system and all other containers on the host. But it can share even more: in a standardized setup, a container can share not only the kernel but also the main parts of the operating system and libraries, so that the container itself is actually rather tiny.
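To make the kernel-sharing point concrete, here is a minimal sketch. It assumes the Docker SDK for Python and a local Docker engine as the container runtime (neither is mentioned above, and any container technology behaves the same way): a container started from a tiny image reports the host's kernel version, because there is no second kernel inside it.

```python
# Minimal sketch: a container reports the *host* kernel, because it has none of its own.
# Assumes the Docker SDK for Python ("pip install docker") and a running Docker engine.
import platform

import docker

client = docker.from_env()

# Run `uname -r` inside a tiny Alpine-based container and capture the output.
container_kernel = client.containers.run(
    "alpine:latest", ["uname", "-r"], remove=True
).decode().strip()

host_kernel = platform.release()

print(f"kernel seen inside the container: {container_kernel}")
print(f"kernel of the host system:        {host_kernel}")
# Both lines print the same version string: the container shares the host's kernel.
```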

When we think about hybrid clouds today, we mainly think about fully virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to keeping operations running during maintenance windows or data center availability problems. Using containers in hybrid clouds would give users a new degree of freedom in deciding where to place their workloads, because those decisions are no longer final and can be changed at any given moment.

Because containers are much smaller in size than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than with VMs. The only prerequisite is a highly standardized setup of the container host, but systems tend to already be standardized in cloud environments, so this would be a perfect fit!

Today, we are not as far along as we could be. Containers are not yet supported by the big cloud software stacks, but as the video points out, OpenStack is about to include them in its application programming interfaces (APIs).

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!

The hybrid cloud onion

In an earlier post, I defined a hybrid cloud and discussed possible scenarios including multiple public cloud providers, private clouds and traditional information technology (IT) environments.

While that post hopefully provided a good explanation of hybrid cloud infrastructures, it was not the full story, especially if you plan to implement a hybrid cloud in your environment. Like an onion that has many different layers around its core to protect it and keep it nice, white and juicy, hybrid cloud infrastructure has many different layers that keep it functional. Let’s take a look at these layers.

Management

Don’t underestimate the complexity that is introduced by the different technologies and service providers in a hybrid setup. Establishing a common management infrastructure might be extremely hard and might not always make sense; however, there are components that you might want to integrate and harmonize, typically monitoring, alerting and ticketing tools.

Whenever a new piece of infrastructure is added to your hybrid setup, you should consider the extent to which you need to integrate it into your existing management systems, and how to manage it once it is integrated.
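As an illustration of what such an integration can look like, here is a minimal, hypothetical sketch in Python: events from two different providers arrive in provider-specific formats and are normalized into one common alert record before they are handed to the existing monitoring and ticketing tools. All field names and the `forward_to_ticketing` hook are made up for the example.

```python
# Hypothetical sketch: normalize provider-specific events into one common alert format
# so that a single monitoring/ticketing pipeline can handle every part of the hybrid setup.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str      # which cloud or data center raised the event
    resource: str    # affected instance or service
    severity: str    # "info", "warning" or "critical"
    message: str


def from_provider_a(event: dict) -> Alert:
    # Provider A (made-up format) reports {"vm": ..., "level": ..., "text": ...}
    return Alert("provider-a", event["vm"], event["level"], event["text"])


def from_provider_b(event: dict) -> Alert:
    # Provider B (made-up format) reports {"resourceId": ..., "sev": 1-3, "detail": ...}
    severity = {1: "info", 2: "warning", 3: "critical"}[event["sev"]]
    return Alert("provider-b", event["resourceId"], severity, event["detail"])


def forward_to_ticketing(alert: Alert) -> None:
    # Placeholder for the call into the existing ticketing tool.
    print(f"[{alert.severity}] {alert.source}/{alert.resource}: {alert.message}")


if __name__ == "__main__":
    forward_to_ticketing(from_provider_a({"vm": "web-01", "level": "critical", "text": "disk full"}))
    forward_to_ticketing(from_provider_b({"resourceId": "db-7", "sev": 2, "detail": "high latency"}))
```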

Orchestration

Once the infrastructure is managed properly, you can think about how to provision new workloads. The next layer we should consider is orchestration. As with the management of your hybrid cloud infrastructure, your goal here should be to have a single point for provisioning that spans services over different cloud infrastructures.

The ongoing standardization of cloud application programming interfaces (APIs) addresses this need. Amazon Web Services APIs and OpenStack may be considered industry standards in this arena. More and more cloud providers and cloud products support at least one of the two, often both. Tools like IBM Cloud Orchestrator can not only provision single workloads on different hybrid infrastructures, but can also define workload patterns for faster and easier deployment.
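As a sketch of what provisioning through one standardized interface can look like, the following uses Apache Libcloud, a Python library that wraps many provider APIs (including EC2-compatible and OpenStack clouds) behind one driver interface. Libcloud is not mentioned above; it simply stands in here for any single provisioning point. Credentials, regions and image or size names are placeholders.

```python
# Sketch: one provisioning code path, two different cloud back ends, via Apache Libcloud.
# Credentials, region and image/size names below are placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

# Driver for an EC2-compatible public cloud ...
ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="eu-west-1")

# ... and a driver for a private OpenStack cloud.
openstack = get_driver(Provider.OPENSTACK)(
    "admin", "password",
    ex_force_auth_url="https://keystone.example.com:5000",
    ex_force_auth_version="3.x_password",
    ex_tenant_name="demo",
)


def provision(driver, name, size_name, image_name):
    """Provision a node through the same code path, regardless of the provider."""
    size = [s for s in driver.list_sizes() if s.name == size_name][0]
    image = [i for i in driver.list_images() if image_name in (i.name or "")][0]
    return driver.create_node(name=name, size=size, image=image)


# The same function provisions on either infrastructure:
# provision(ec2, "burst-worker-1", "t2.micro", "ubuntu")
# provision(openstack, "core-app-1", "m1.small", "ubuntu")
```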

Governance

Orchestration enables the use of a hybrid infrastructure in an automated way. And once you are able to orchestrate your environment, you need to control how that is done. The main question to answer is which workloads should run where. This is crucial because each hybrid infrastructure has its strengths and weaknesses. Private clouds might be the place for sensitive data, while public clouds might provide the best price point. It is important to establish policies about hybrid cloud usage scenarios.
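Such placement policies can even be captured in code. The following is a deliberately simple, hypothetical sketch: workloads carry a data classification and a load profile, and a policy function decides whether they belong on the private or the public part of the hybrid cloud.

```python
# Hypothetical sketch of a placement policy: sensitive data stays private,
# spiky non-sensitive workloads may burst to the public cloud.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    data_classification: str  # e.g. "public", "internal", "confidential"
    bursty: bool              # does the load profile show short, heavy peaks?


def placement(workload: Workload) -> str:
    if workload.data_classification == "confidential":
        return "private-cloud"   # sensitive data never leaves the private cloud
    if workload.bursty:
        return "public-cloud"    # elastic capacity at the best price point
    return "private-cloud"       # default: keep steady workloads in-house


print(placement(Workload("hr-database", "confidential", bursty=False)))  # private-cloud
print(placement(Workload("campaign-site", "public", bursty=True)))       # public-cloud
```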

Summary

Hybrid clouds are defined by their infrastructures, which are much like the layers of an onion. To successfully establish a hybrid cloud setup, management, orchestration and governance must not be forgotten!

Share your comments and questions with me on Twitter @eMarcusNet.

What is hybrid cloud?

The National Institute of Standards and Technology defines hybrid cloud as “a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.” Although this definition sounds very reasonable, it does not cover all aspects of hybrid clouds.

Let’s discuss possible deployment models first. There are five defined cloud deployment models, from a private cloud on-premises to a public cloud service with a cloud service provider.

Cloud Deployment Models

Often, hybrid cloud refers to a combination of a public cloud service and a private cloud on-premises; however, hybrid clouds could also consist of two public clouds provided by different providers or even a combination of a cloud and traditional IT. Actually, a setup where existing systems on a traditional IT infrastructure are combined with a public cloud service is currently the most frequent use case of a hybrid cloud.

Any hybrid cloud setup has some challenges that need to be considered during the planning and design phase:

  • The most obvious challenge is network connectivity, especially if remote cloud services like a public cloud or a hosted private cloud are involved. Not only must bandwidth, latency, reliability and associated cost considerations be taken into account, but also the logical network topology must be carefully designed (networks, routing, firewalls).
  • Another huge challenge is the manageability of different cloud services. When different cloud services are used, every service provider has its own management and provisioning environment, and those environments are completely independent from each other. With instances spread across different cloud services, there is no complete picture showing the total number of deployed instances and their statuses. An orchestration layer is a possible solution for this problem: it provides a single interface for all cloud-related tasks and communicates with the different cloud services through application programming interfaces (APIs). The big advantage of an orchestration layer is the ability to track and control all activities from a central point and thereby maintain the big picture; a minimal sketch of such a central inventory follows below.
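To make the "big picture" aspect concrete, here is a minimal sketch, again using Apache Libcloud as a stand-in for such an orchestration layer (the library itself is not mentioned above): one loop queries every attached cloud service through its driver and produces a single inventory of all deployed instances and their statuses.

```python
# Sketch: a single inventory across several cloud services, driver details abstracted away.
# The drivers below are placeholders; construct them with your providers' credentials.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

drivers = {
    "public-cloud": get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="eu-west-1"),
    "private-cloud": get_driver(Provider.OPENSTACK)(
        "admin", "password",
        ex_force_auth_url="https://keystone.example.com:5000",
        ex_force_auth_version="3.x_password",
        ex_tenant_name="demo",
    ),
}

# One central view of every instance, regardless of where it runs.
for cloud, driver in drivers.items():
    for node in driver.list_nodes():
        print(f"{cloud:15} {node.name:20} {node.state}")
```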

Today, plenty of cloud service providers maintain their own proprietary set of APIs. This makes orchestration very complex, as the orchestrator requires some kind of driver component for each proprietary API set. However, the trend toward standardized APIs is clearly visible in the industry, and OpenStack seems set to become the cloud industry standard.

Hybrid clouds mainly work on the infrastructure and application levels. On the infrastructure layer, a hybrid cloud means the combination of virtual machines from different cloud services. On the application or software as a service (SaaS) layer, a hybrid cloud describes an application setup with components in different SaaS offerings or in existing applications within the data center of an enterprise. The challenge in a SaaS-based hybrid cloud is mainly the exchange of data between the different services and applications. Just as orchestration works on the infrastructure level, data integrators work on the application layer.

Summary

A hybrid cloud is a combination of different clouds, be it private, public or a mix. The biggest challenge is the integration of the different cloud services and technologies. Standardized APIs, such as those of OpenStack, promise to solve most of those issues.

Making SCE+ elastic

IBM SmartCloud Enterprise+ (SCE+) is a highly scalable cloud infrastructure targeted at production, cloud-enabled, managed workloads. These target workloads make SCE+ a little heavier than SmartCloud Enterprise (SCE). Onboarding a new client alone takes a fair number of days to establish all the management systems for that client. To some extent, this applies to provisioning virtual machines, too: because a service activation process must be completed before a managed system can be put into production, provisioning a virtual instance takes at least a few hours instead of just minutes as on SCE. However, that is still a major improvement compared to traditional IT, where provisioning a new server can take several weeks!

The benefit of SCE+ is the management and high reliability of the platform. This makes it the perfect environment for core services and production workloads that can grow over time to meet the business requirements of tomorrow. Even scaling down is possible to a certain extent. However, if you need to react to short, heavy load peaks, you need a platform that is truly elastic and can scale up and down on an hourly basis.

What you need is an elastic component on top of SCE+: you keep SCE+'s high reliability for the important core functions of your business application, but gain the ability to scale out to a much lighter, and probably cheaper, platform to cover short load peaks. SCE provides all the features you would expect from that elastic platform.



As shown in the video, F5's BIG-IP appliance can be used to control the load on servers and scale out dynamically to SCE as required. The base infrastructure could be SCE+ as well as any other environment. However, a network connection between SCE and SCE+ is required, as this is not part of either offering.

The easiest way to establish layer 3 connectivity between SCE and SCE+ is to create a virtual private network (VPN) tunnel between the two environments. Several options are feasible:

  • Connecting a software VPN device or appliance on SCE to the official SCE+ VPN service
  • Connecting a hardware VPN device or appliance on SCE+ to the official SCE VPN service
  • Using the Virtela Meet VPN service to interconnect VPN tunnels from both environments

Summary

SmartCloud Enterprise can be used to make SmartCloud Enterprise+ elastic. Several options exist to interconnect the two environments and leverage the benefits of both for specific use cases!

Challenges for hybrid clouds

Hybrid clouds are becoming more and more attractive for larger enterprises. The claim is that hybrid clouds combine the benefits of both private and public clouds while avoiding their drawbacks. Although this sounds great, let's look at the challenges we face when we start to design and build a hybrid cloud:

Infrastructure as a service (IaaS) layer

On the IaaS layer, the main benefit we see in a hybrid cloud environment is that of leveraging the elasticity of public cloud resources, but maintaining a higher level of security for sensitive corporate data and applications by using a private cloud.

To address the security concerns for certain data and applications, a strong governance model together with adequate security policies needs to be developed. It must be made crystal clear where applications and data are allowed to be placed and what the rationale is behind these policies.

When applications scale out into public clouds to cover peak loads, we need to take a closer look at data placement, not only for the sake of security, but this time also for the sake of performance. If an application requires access to large amounts of data, it might not be suited for such a scenario unless we have designed for that properly beforehand (adequate network bandwidth, data replication, and so on).
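A quick back-of-the-envelope calculation shows why this matters; the data volume and bandwidth below are illustrative assumptions, not figures from the text.

```python
# Illustrative only: how long does it take to move application data to the public cloud?
data_gb = 500            # amount of data the application needs (assumed)
bandwidth_mbps = 1000    # dedicated WAN bandwidth in megabits per second (assumed)

transfer_seconds = (data_gb * 8 * 1000) / bandwidth_mbps
print(f"~{transfer_seconds / 3600:.1f} hours")   # roughly 1.1 hours at 1 Gbit/s, ideal conditions
```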

Another challenging topic for hybrid clouds on the IaaS layer is their management. Clients who go for a fully integrated hybrid cloud need to consider how to include the public cloud service catalog and automated provisioning in their local processes and infrastructure. Not all public cloud service providers offer open APIs or comply with open standards, but that is a prerequisite for seamless integration.

The challenges grow even further when we consider managed public cloud services. Besides the technical boundaries of wide area networks and data center locations, split management responsibility comes with a large backpack of issues that must be addressed: monitoring, ticketing, backup and restore, and user management are just some of them. The service provider usually feels responsible only up to the operating system layer, sometimes also for middleware and databases, but very rarely for customer-specific applications. Those client-specific applications must be operated by the clients themselves and therefore integrated into their own systems management.

Software as a service (SaaS) layer

On the SaaS layer, a hybrid setup is usually driven not by elasticity but purely by functionality. In this scenario, certain business functions are covered by a SaaS solution from an external service provider.

The challenge with this setup is to transport the required data to and from the public SaaS. First, the data that needs to be transferred must be identified; then, a secure interface must be developed to ensure the correct data is reliably fed into the remote software service and the resulting data is transferred back to the local environment. Because of the variety of data and of local and remote application combinations, not many standard software products are available to implement this linkage. IBM Cast Iron is one of them, providing a field-to-field data link between many software products such as SAP and Salesforce.com.
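To illustrate the pattern (not the product mentioned above), here is a hypothetical sketch of such a field-to-field link: records are read from a local system, mapped onto the field names the remote SaaS expects, pushed over an authenticated HTTPS interface, and the resulting identifier is handed back to the local environment. All endpoints and field names are invented for the example.

```python
# Hypothetical sketch of a field-to-field data integration between a local
# application and a remote SaaS offering. Endpoints and field names are invented.
import requests

SAAS_URL = "https://saas.example.com/api/customers"   # placeholder endpoint
API_TOKEN = "replace-with-a-real-token"               # placeholder credential


def to_saas_format(local_record: dict) -> dict:
    # Field-to-field mapping: local schema on the left, SaaS schema on the right.
    return {
        "accountName": local_record["company"],
        "contactEmail": local_record["email"],
        "countryCode": local_record["country"],
    }


def push_record(local_record: dict) -> str:
    response = requests.post(
        SAAS_URL,
        json=to_saas_format(local_record),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    # Feed the remote identifier back into the local system (here: just return it).
    return response.json()["id"]


# push_record({"company": "ACME", "email": "info@acme.example", "country": "DE"})
```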

Summary

A hybrid cloud is a complex animal, and preparation and design are key to addressing its challenges. However, most of these challenges can be solved, either by technology (Tivoli Service Automation Manager, Cast Iron, IBM Hybrid Cloud Integrator) or by governance and organization. Once established, a hybrid cloud is a very powerful asset, combining today's enterprise requirements with flexibility and cost efficiency!