Are containers the future of hybrid clouds?

I recently stumbled upon the following video from James Bottomley, a Linux kernel developer working for Parallels. It’s a very good explanation of container technology and how it will be integrated into OpenStack:

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that a virtual machine runs a complete operating system (including its own kernel) on virtualized hardware (provided by the hypervisor). A container shares, at minimum, everything up to the OS kernel with the host system and all other containers on the host. But it can share even more; in a standardized setup, a container can share not only the kernel but also the main parts of the operating system and libraries, so that the container itself is actually rather tiny.

When we think about hybrid clouds today, we mainly think about fully-virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to continuous operation requirements during maintenance windows or data center availability problems. Using containers with hybrid clouds would give users a new degree of freedom in where to place their workloads as decisions are not final and can be changed at any given moment.

Because containers are much smaller in size than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than with VMs. The only prerequisite is a highly standardized setup of the container host, but systems tend to already be standardized in cloud environments, so this would be a perfect fit!

Today, we are not as far along as we could be. Containers are not yet supported by the big cloud software stacks, but as the video points out, OpenStack is about to include them in its application programming interfaces (APIs) soon.

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!

The hybrid cloud onion

In an earlier post, I defined a hybrid cloud and discussed possible scenarios including multiple public cloud providers, private clouds and traditional information technology (IT) environments.

While that post hopefully provided a good explanation of hybrid cloud infrastructures, it was not the full story, especially if you plan to implement a hybrid cloud in your environment. Like an onion that has many different layers around its core to protect it and keep it nice, white and juicy, hybrid cloud infrastructure has many different layers that keep it functional. Let’s take a look at these layers.

Management

Don’t underestimate the complexity that is introduced as a result of the different technologies and service providers in a hybrid setup. Establishing a common management infrastructure might be extremely hard and might not always make sense; however, there are components that you might want to integrate and harmonize. Usually these components provide monitoring, alerting and ticketing tools.

Whenever a new piece of infrastructure is added to your hybrid setup, you should consider the extent to which you need to integrate it into your existing management systems, and how to manage it once it is integrated.

Orchestration

Once the infrastructure is managed properly, you can think about how to provision new workloads. The next layer we should consider is orchestration. As with the management of your hybrid cloud infrastructure, your goal here should be to have a single point for provisioning that spans services over different cloud infrastructures.

The ongoing standardization of cloud application programming interfaces (APIs) addresses this need. Amazon Web Services APIs and OpenStack may be considered industry standards in this arena. More and more cloud providers and cloud products support at least one of the two, often both. Tools like IBM Cloud Orchestrator can not only provision single workloads on different hybrid infrastructures, but can also define workload patterns for faster and easier deployment.

Governance

Orchestration enables the use of a hybrid infrastructure in an automated way. And once you are able to orchestrate your environment, you need to control how that is done. The main question to answer is which workloads should run where. This is crucial because each hybrid infrastructure has its strengths and weaknesses. Private clouds might be the place for sensitive data, while public clouds might provide the best price point. It is important to establish policies about hybrid cloud usage scenarios.

Summary

Hybrid clouds are defined by their infrastructures, which are much like the layers of an onion. To successfully establish a hybrid cloud setup, management, orchestration and governance must not be forgotten!

Share your comments and questions with me on Twitter @eMarcusNet.

What are Community Clouds?

The nature of any public cloud is to meet the requirements of the majority of its users. There are always trade-offs in functionality, standardization and costs. So, in the end, the implemented requirements are some kind of least common denominator.

While this might be good (enough) for most industries, it often is not enough for client groups with special requirements, like financial institutions, government organizations or pharmaceutical companies. To drive cloud adoption for those clients, we need a type of cloud that can meet their particular needs. Such clouds are referred to as community clouds because they are designed to serve a special community of clients. A community cloud is an infrastructure that is shared by several organizations with similar concerns.

But why are community clouds so important for both the service providers and the industries and communities that use them?

The service providers can target new client segments that they could not reach with a standard cloud offering. Although investments into the cloud’s underlying infrastructure and security processes might be higher, competition in this segment is lower, and marketable prices are potentially higher than for standard public cloud services. Depending on the targeted industry, the offered cloud services develop from a commodity business to a high-value, high-margin business, which might be more attractive for service providers.

For special industry clients, a community cloud provides the possibility to gain the benefits of cloud computing but stay compliant with their industry requirements. The service provider takes over the burden for required certifications like the Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry (PCI) data security standards and so on.

Another attractive aspect might be the fact that a client’s neighbor in a community cloud is most likely from the same industry. In a governmental community cloud, sharing of infrastructure (as is the nature of public clouds) is only done between other governmental organizations, which might relieve them from certain security concerns they have with open public clouds.


Summary

Although community clouds are still niche products, they are becoming more and more important with general cloud adoption across industries. They can be a good solution for both the service provider, who gets better margins with higher value services, and for the client, who can be sure that its industry-specific needs and regulatory requirements are met in a professional way. The fact that only clients from the same industry are on such a community cloud might help to increase trust in cloud computing, even for highly regulated and sensitive industries.

Could community clouds help increase cloud adoption? Continue the conversation with me on Twitter @eMarcusNet.

How Dropbox revolutionized enterprise IT

Some of you might remember those early days of computer networking when coaxial cables were used to interconnect PCs and Novell Netware was the market leader for file sharing. Although new players appeared in this space with IBM LAN Server and Microsoft Windows NT, the basic concept of shared network drives did not change much.

The general concept is based on centralized file repositories. Management and especially access management is usually limited to administrative personnel and based on groups rather than on individual users. And, because of the centralized approach, users are required to be online to access files.

This was state of the art for almost 20 years.

As with anything that stays around for a long time, requirements changed, and the centralized concept was unable to meet the new needs of the millennial generation. Mobile computing became more natural, and the number and kinds of devices shifted from static PCs to notebooks and, nowadays, tablets and mobile phones. Users are not only able to take on administrative responsibilities; they increasingly demand to manage their resources themselves.

Although some tried to enhance the existing software with all kinds of add-ons (offline folders) and workarounds to help support the new requirements, the outcome was not really satisfying.

Dropbox was and still is so successful because it fulfills those new needs!

The paradigm switched from a centralized file store to a distributed, replicated file repository with easy access regardless of whether the user is online, offline or using a mobile device like a tablet or mobile phone, or even only a web browser. Users can easily share the files they own with other users or groups through a simple web interface.

But how does this affect enterprise IT?

These new user requirements are not limited to consumers. In fact, the need to access your important files and work on them in a geographically distributed team is a very common requirement in today’s enterprises. In recent years, Dropbox has inspired a number of other products and services specifically targeting the enterprise market. Not only do these programs support the new file sharing paradigm, but they also support core enterprise requirements for data security, privacy and control.

IBM Connections (and its software as a service companion IBM SmartCloud for Social Business) is a perfect example.

File services today are no longer based on shared network drives, but rather on distributed file repositories with easy access through web interfaces or replication clients, which enable users to perform limited management tasks themselves. If the enterprise IT department does not fulfill these new user requirements, shadow IT based on Dropbox and similar technologies may continue to rise. Please share your thoughts in the comments below.

Cloud Adoption in Europe

While cloud adoption booms around the world, Europe (especially Western Europe) seems to be moving at its own pace. This is somewhat surprising, because Western Europe is a high cost and high price market where adoption of new technologies used to be considered a differentiator against competition. This seems not to be the case for cloud computing—but why?

Let’s look at who can easily adopt cloud. Startup companies, which save on initial investment by receiving cloud services instead of buying hardware themselves, are great candidates. Especially when they intend to grow fast in the early years, cloud provides a perfectly scalable platform. Traditionally, there are more startups based in the Americas, and nowadays also in the Asia Pacific region, than in Europe. One reason for that might be the amount of available venture capital: Europe tends to be more conservative and may take less risk in investments.

But, beside the startups, there are many mature companies and enterprises in Europe. Why are they adopting cloud more slowly?

One root cause is Europe’s national diversity. This has a major impact on the following two areas, which are prerequisites for successful cloud adoption:

Regulatory compliance issues: Every European country has its own set of data privacy rules and legislation. When using cloud computing, it is very likely that national boundaries are crossed, either because a data center in another country is used or because the cloud service provider uses operational personnel based in a third country. To be clear, this is not a potential show stopper; it just adds complexity. A CIO might feel safer keeping the data on his own premises, just to be sure. The recent discussions about the NSA surveillance program and the US Patriot Act do not help here either.

Wide area network costs: Another disadvantage that derives directly from national diversity is the cost of wide area networks. International lines cost significantly more than data links within a country. This is caused by the original European structure of national carriers holding monopolies on the last mile.

But besides national boundaries, client readiness is often a big blocker for cloud adoption. Many enterprises maintain security policies that are just not cloud ready. The cloud provider needs to be treated as a trusted partner, not as a third party, yet many security policies are not flexible enough to adapt to the standards of the cloud provider.

Another reason is that many companies are not yet ready to give up a certain degree of control. This concerns not only infrastructure architecture but also server management: servers in the cloud are still considered owned virtual machines. In reality, receiving cloud services means using a cloud platform provided and owned by the cloud service provider.

As the world continues to adopt cloud, the situation is starting to improve. More and more client decision makers understand the opportunities provided by cloud computing and are willing to invest. Regulatory compliance is becoming more and more aligned, especially in the European Union. Let’s see how cloud adoption develops in Europe!

The future of SmartCloud Enterprise+ aka Cloud Managed Services

Because of the recent SoftLayer acquisition, SmartCloud Enterprise (SCE) was sunset on January 31, 2014. There were just too many overlaps between the two offerings, and SoftLayer seemed to me the more mature platform with more functionality. So the fact that SCE was stopped (and functionality like SCAS was merged into SoftLayer) was not much of a surprise.

But what does this mean for SCE+ – or, what should it mean for it?

First of all, it means a name change. As announced on this year’s Pulse (IBM’s cloud conference), SCE+ will be rebranded to Cloud Managed Services (CMS).

Second, the good news: CMS/SCE+ will stay, with a strong roadmap, at least until 2017. (Well, the roadmap is specified until 2017, so it is very likely that CMS/SCE+ will stay even beyond 2017. But who knows what happens in IT in the next five years 🙂)

But why two offerings anyway?

In essence, SCE+ and SoftLayer are positioned the same way as SCE+ and SCE were originally positioned:

  • SCE+ for cloud-enabled workloads
  • SoftLayer for cloud-native workloads

Cloud native vs. cloud enabled workloads

To understand this positioning a little better, let’s discuss the current capabilities of each offering and the currently planned roadmap items:

SoftLayer

SoftLayer provides a highly flexible IaaS platform for cloud-centric workloads. The underlying infrastructure is highly standardized and gives the client full control over everything above the hypervisor, including the operating system. Even if the client subscribes to one of the offered management options, SoftLayer mainly performs limited management tasks on a best-effort basis, without real SLAs, and the client maintains full admin access to its instances.

The platform provides high flexibility, so all kinds of setups can be implemented (by clients), but the responsibility for a given setup remains with the client, not with SoftLayer. In a nutshell, SoftLayer provides an IaaS environment with a very high degree of freedom and control for clients, without taking over responsibility for anything above the hypervisor.

These capabilities fit well for cloud-centric or self-managed development (DevOps) workloads, but less so for traditional highly available workloads like SAP.

Cloud Managed Services (formerly known as SmartCloud Enterprise+)

CMS, on the other hand, was designed and built to meet exactly the requirements of highly available, managed production workloads originally hosted in clients’ data centers. CMS provides SLAs and technologies for accommodating highly available workloads, like clustering and disaster recovery (R1.4). While DR setups can also be created on SoftLayer, the client must design, build and run them and cannot receive them as a service. This is the main differentiator in the SoftLayer / CMS positioning. CMS is less flexible above the hypervisor, as it provides managed, highly available operating systems as a service with given SLAs. To meet these SLAs, standards must be followed and the underlying infrastructure must be technically capable of providing them (Tier 1 storage).

Due to the guaranteed service levels on the OS layer, this is IBM’s preferred platform for PaaS offerings of rather traditional software stacks like SAP or Oracle applications.

Summary

There are a lot of use cases where SoftLayer does not fit and CMS is the answer to fulfill the requirements. Based on the clearly distinct workloads for SoftLayer and CMS, there is no reason to think about a CMS retirement.

What is hybrid cloud?

The National Institute of Standards and Technology defines hybrid cloud as “a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.” Although this definition sounds very reasonable, it does not cover all aspects of hybrid clouds.

Let’s discuss possible deployment models first. There are five defined cloud deployment models, from a private cloud on-premises to a public cloud service with a cloud service provider.

Cloud Deployment Models

Often, hybrid cloud refers to a combination of a public cloud service and a private cloud on-premises; however, hybrid clouds could also consist of two public clouds provided by different providers or even a combination of a cloud and traditional IT. Actually, a setup where existing systems on a traditional IT infrastructure are combined with a public cloud service is currently the most frequent use case of a hybrid cloud.

Any hybrid cloud setup has some challenges that need to be considered during the planning and design phase:

  • The most obvious challenge is network connectivity, especially if remote cloud services like a public cloud or a hosted private cloud are involved. Not only must bandwidth, latency, reliability and associated cost considerations be taken into account, but also the logical network topology must be carefully designed (networks, routing, firewalls).
  • Another huge challenge is the manageability of different cloud services. When different cloud services are used, every service provider will have its own management and provisioning environment. Those environments can be considered completely independent from each other. With instances spread across different cloud services, there is no complete picture showing the total number of deployed instances and their statuses. An orchestration layer can be a possible solution to this problem. This layer provides a single interface for all cloud-related tasks. The orchestration layer itself communicates with the different cloud services through application programming interfaces (APIs). The big advantage of an orchestration layer is the ability to track and control activities at a central point to maintain the big picture.
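To make the orchestration idea more concrete, here is a minimal, purely hypothetical sketch in Python: one driver adapter per cloud service, and an orchestrator that provisions through a single interface and keeps the aggregated inventory. All class and provider names (`CloudDriver`, `DummyDriver`, `Orchestrator`, "public-a", "private-b") are invented for illustration and do not correspond to any real product API.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Adapter for one provider's proprietary API."""
    @abstractmethod
    def provision(self, name: str) -> dict: ...
    @abstractmethod
    def list_instances(self) -> list: ...

class DummyDriver(CloudDriver):
    """Stand-in for a real provider driver; stores instances in memory."""
    def __init__(self, provider: str):
        self.provider = provider
        self._instances = []

    def provision(self, name: str) -> dict:
        inst = {"name": name, "provider": self.provider, "status": "running"}
        self._instances.append(inst)
        return inst

    def list_instances(self) -> list:
        return list(self._instances)

class Orchestrator:
    """Single point of provisioning that maintains the 'big picture'."""
    def __init__(self):
        self.drivers = {}

    def register(self, name: str, driver: CloudDriver):
        self.drivers[name] = driver

    def provision(self, cloud: str, name: str) -> dict:
        # Delegate to the per-provider driver behind one common interface.
        return self.drivers[cloud].provision(name)

    def inventory(self) -> list:
        # Aggregate view across all registered cloud services.
        return [i for d in self.drivers.values() for i in d.list_instances()]

orch = Orchestrator()
orch.register("public-a", DummyDriver("public-a"))
orch.register("private-b", DummyDriver("private-b"))
orch.provision("public-a", "web01")
orch.provision("private-b", "db01")
```

The point of the design is the driver abstraction: adding a third cloud service means writing one more adapter, while the central inventory and provisioning interface stay unchanged.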

Today, plenty of cloud service providers maintain their own proprietary sets of APIs. This makes the use of orchestration very complex, as the orchestrator requires some kind of driver component for each proprietary API set. However, the trend toward standardized APIs is clearly visible in the industry. OpenStack seems to be the future cloud industry standard.

Hybrid clouds mainly work on the infrastructure and application levels. On the infrastructure layer, a hybrid cloud means the combination of virtual machines from different cloud services. On the application or software as a service (SaaS) layer, a hybrid cloud describes an application setup with components in different SaaS offerings or in existing applications within the data center of an enterprise. The challenge in a SaaS-based hybrid cloud is mainly the exchange of data between the different services and applications. Just as orchestration works on the infrastructure level, data integrators work on the application layer.
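The core job of such a data integrator can be sketched in a few lines: pull a record in one application's format and translate it into another's. This is only a toy illustration; the field names and record shapes are invented, not taken from any real SaaS product.

```python
def transform(crm_contact: dict) -> dict:
    """Map a (hypothetical) CRM contact record to a (hypothetical)
    ERP customer record, normalizing fields along the way."""
    return {
        "customer_name": f"{crm_contact['first_name']} {crm_contact['last_name']}",
        "email": crm_contact["email"].lower(),
    }

# A record as it might arrive from the source application:
record = {"first_name": "Ada", "last_name": "Lovelace", "email": "Ada@Example.com"}
erp_record = transform(record)
```

Real integrators add error handling, deduplication and scheduling on top, but the essence stays this mapping step between two applications' data models.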

Summary:

A hybrid cloud is a combination of different clouds, be it private, public or a mix. The biggest challenge is the integration of the different cloud services and technologies. Standardized APIs such as OpenStack seem to solve most of those issues.

IBM SmartCloud Enterprise+ 1.3 in a nutshell

On November 19, 2013, IBM SmartCloud Enterprise+ (SCE+) version 1.3 was released. While every new SCE+ release has brought some interesting improvements, I’m particularly excited about 1.3. Tons of new features and improvements were implemented, making it worth having a closer look at the highlights of this version of SCE+.

Completely new portal. Let’s be polite: the old portal had major room for improvement. The new portal was completely rewritten and now meets the requirements clients have for such an interface.

New virtual machine (VM) sizes. New standard configurations were introduced, including Jumbo for x86 VMs. Even more important, however, are the new maximum possible configurations for a single VM:

  • Up to 64 vCPUs
  • Up to 128 GB RAM
  • Up to 48 TB storage

These new configurations can enable more workloads to run on SCE+.

Clustering. Even more workloads can now be enabled because of the new clustering options. Clients can choose between operating system (OS) based clustering (for all operating systems and platforms supported on SCE+) or simple anti-collocation, which enables clients to cluster VMs on the application level. Anti-collocation means that two VMs will not be provisioned on the same physical host, ensuring the availability of at least one node in case a host goes down.
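The anti-collocation rule boils down to a placement constraint in the scheduler. The following sketch shows the idea only; SCE+'s real placement logic is not public, and the function and host names here are invented for illustration.

```python
def place(vm_cluster: str, hosts: dict) -> str:
    """Pick a physical host for a new VM of the given cluster.

    hosts maps host name -> set of cluster IDs already placed there.
    The anti-collocation constraint: never return a host that already
    runs a member of the same cluster."""
    for host, clusters in sorted(hosts.items()):
        if vm_cluster not in clusters:
            clusters.add(vm_cluster)
            return host
    raise RuntimeError("no host satisfies the anti-collocation constraint")

# Two hosts, two nodes of the same cluster:
hosts = {"host1": set(), "host2": set()}
node_a = place("db-cluster", hosts)
node_b = place("db-cluster", hosts)
assert node_a != node_b  # the cluster nodes never share a physical host
```

Note that this only guarantees distinct hosts; as the post points out, it says nothing about the hosts being in different buildings or sites.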

It is important to mention that service level agreements (SLAs) are still based on the individual VM, so there is no aggregated SLA for a cluster.

Anti-collocation (and clustering) does not guarantee that the physical hosts are located in different physical buildings. Even in dual-site SCE+ data centers, the different nodes of a cluster might still be located at one site. This limitation could potentially be removed in a later release of SCE+.

Unmanaged instances. Clients can request unmanaged virtual machines on SCE+ with the following limitations:

  • Managed VMs cannot be transformed to unmanaged ones (or the other way around)
  • Clustering is not available on unmanaged VMs
  • Unmanaged VMs must be based on available SCE+ images; there is still no way to import custom images
  • Migration services are not available for unmanaged instances

Migration services. Migration services for x86 and IBM Power System platforms can now be contracted as an optional part of an SCE+ contract.

Active Directory integration. SCE+ now supports the integration of complex Microsoft Active Directory setups, including a client-specific isolated domain or even joining (managed) VMs to the client’s Active Directory (AD) forest.

Database and middleware alerting and management. Besides management of the operating system, clients can now choose database and middleware management as an option in two flavors:

  • Alerting only. The client maintains responsibility, but will be alerted by an automated monitoring system in case of failure.
  • Management. IBM provides management for selected database and middleware products (mainly IBM DB2 database software, MS SQL, Oracle, Sybase and IBM WebSphere products).

Custom hostnames and FQDNs. Custom hostnames and fully qualified domain names (FQDNs) can now be chosen during the provisioning of a VM.

Load balancer as a service. Besides the currently available virtual software load balancer (vLBF), load balancing as a service is now also available. The new service is based on industry-leading hardware appliances and provides features like SSL offloading. Currently, load balancing is only supported within a single site.

Increased number of security zones. Although three security zones remain the standard, clients can request up to 12 security zones during onboarding if required by the design of their environment. Additional security zones can also be requested after onboarding through a Request for Service (RFS), but provisioning is then subject to availability. However, there is a hard limit of 12 security zones per client.

Summary

SCE+ 1.3 is a milestone in terms of features and new possibilities. It enables many more workloads to be supported on SCE+ and on SCE+ based offerings like IBM SmartCloud for SAP (SC4SAP) and IBM SmartCloud for Oracle Applications (SC4Oracle).

Why IBM should acquire Blackberry

After some weeks of negotiating with possible buyers, Blackberry (formerly known as Research in Motion, RIM) decided that it won’t be for sale and stopped all acquisition talks. However, the various press articles about possible buyers made me think about what it would mean for both companies if IBM were to acquire Blackberry.

To my surprise, it would actually match very well.

Why this would be good for Blackberry:

Blackberry’s revenue has been declining over the past years. Even the release of the new Blackberry OS 10 did not bring the expected turnaround. Therefore, Blackberry was looking for a partner who would not only secure its future existence but also add new distribution channels and opportunities for its assets.

IBM would provide these new distribution channels when bundling Blackberry hardware and services with IBM professional services on an enterprise level.

Why this would make sense for IBM:

One key point of IBM’s overall strategy is Mobile, together with Cloud, Big Data / Analytics and Social Media. The Blackberry products – especially Blackberry OS 10 – could be an ideal platform for IBM’s MobileFirst initiative. Although Blackberry is not very popular in the consumer space and faces eroding market share there, it still maintains a rather strong share in the enterprise arena. And this has a reason: Blackberry meets more enterprise requirements than any other mobile platform does.

Manageability and Control – it was always a core concept of Blackberry to enable an enterprise to keep control over its devices from a central management point.

Security – Blackberry can be considered the only truly secure mobile platform. iOS and Windows Phone are proprietary solutions in which any enterprise depends on the vendor, and open Android suffers frequent security vulnerabilities. Blackberry, by contrast, still maintains strong market credibility for its security concept and policies.

Container-based data separation – a container concept keeps critical business data separate from private data.

Summary:

Buying Blackberry would give IBM a strong enterprise-oriented mobile platform and would make IBM independent of other mobile vendors’ strategies and limitations. A solid solution combining Blackberry technology with IBM software, services and distribution channels would certainly be a strong player and hard to beat. Unfortunately, none of that is very likely to happen soon 🙂

IBM SmartCloud Enterprise+ disaster recovery considerations for DB2

Disaster recovery on IBM SmartCloud Enterprise+ (SCE+) usually refers to infrastructure-based disaster recovery. Disaster recovery (DR) solutions on the infrastructure level replicate whole virtual machines (VMs), including all data, from the main production site to the DR site. The advantage of this kind of solution is that, if a disaster occurs, an exact copy of the production environment including all OS settings and patches is available on the DR site. The VMs on the DR site can then be started and take over the load quite seamlessly (apart from the nasty reconfiguration of site-specific network settings like IP ranges).

It is planned to provide an infrastructure as a service (IaaS) DR solution as part of the SmartCloud Enterprise+ offering in a later release.

Although IaaS DR solutions do their job well, they are rather expensive and complex. Mirroring complete virtual machine images not only costs a lot of storage space but also consumes considerable network bandwidth and traffic. So, the question that solution architects should ask is whether IaaS-based DR is really required!

In many scenarios, a more cost-effective and less complex approach is to consider application-level disaster recovery. Let’s take DB2 as an example of the many middleware products and applications that can either be clustered on the application level or keep a cold standby aside. DB2 allows us to leverage its HADR function to collect all database update operations and queue them for distribution to other nodes.

Those collected updates can be sent over the network, using a variety of technologies or protocols. The interval between send operations depends on the recovery point objective (RPO) target. A shorter interval between send operations provides a better RPO but might generate more data traffic.
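The tradeoff between send interval and RPO can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch, not DB2-specific: it simply bounds how much update data can be lost for a given shipping interval (the transfer-time figure is an assumed example value).

```python
def worst_case_rpo(send_interval_s: float, transfer_time_s: float) -> float:
    """Data written just after a send waits a full interval before the
    next send, and may still be in flight when disaster strikes, so up to
    interval + transfer seconds of updates can be lost."""
    return send_interval_s + transfer_time_s

def sends_per_day(send_interval_s: float) -> int:
    """How many transfer operations per day a given interval implies."""
    return int(86_400 / send_interval_s)

# Shipping every 5 minutes with an assumed ~30 s transfer time:
five_min_rpo = worst_case_rpo(300, 30)   # up to 330 s of updates at risk
five_min_sends = sends_per_day(300)      # 288 transfers per day
# Shipping every 30 s improves the RPO bound tenfold,
# at the price of ten times as many transfer operations:
thirty_s_sends = sends_per_day(30)
```

In other words, tightening the RPO target multiplies the number of send operations (and with it the network traffic) proportionally, which is exactly the tradeoff described above.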

DB2 HADR setup

Another advantage of such a setup is the ability to perform failover tests easily. Because the DR systems are up and running all the time, their proper function can be tested at any time by simply accessing them.

However, the drawback of such an application-level DR solution is that the standby system must be up and running all the time, and it must be ensured that all updates to the primary system itself (like configuration changes and software patches) are also applied to the standby servers.

Summary

Application-level disaster recovery is not the solution for each and every scenario, but it can be a valid, cost-effective and less complex alternative. Sometimes a combination of infrastructure- and application-level DR might be the best solution for an environment.