Are containers the future of hybrid clouds?

I recently stumbled across the following video from James Bottomley, a Linux kernel developer working for Parallels. It’s a very good explanation of container technology and how it will be integrated into OpenStack.

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that a virtual machine runs a complete operating system (including its own kernel) on virtualized hardware provided by the hypervisor. A container shares, at minimum, everything up to the OS kernel with the host system and all other containers on that host. But it can share even more: in a standardized setup, a container can share not only the kernel but also large parts of the operating system and its libraries, so the container itself ends up rather tiny.
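To make that sharing concrete: a Linux container is, at its core, a group of ordinary processes that the kernel isolates using namespaces (and constrains with cgroups). The minimal Go sketch below, which assumes a Linux host and root privileges, starts a shell in its own UTS, PID and mount namespaces. It illustrates the principle only; it is not a container runtime.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in its own UTS, PID and mount namespaces.
	// It is still just a process on the host: there is no second
	// kernel, so `uname -r` inside reports the host's kernel version.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to start namespaced shell:", err)
		os.Exit(1)
	}
}
```

Everything a real container runtime adds on top of this (filesystem layering, cgroup resource limits, networking) still runs on the one shared host kernel, which is exactly why the per-container footprint stays so small.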

When we think about hybrid clouds today, we mainly think about fully virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing itself: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to keeping services running during maintenance windows or data center outages. Using containers in hybrid clouds would give users a new degree of freedom in placing their workloads, because placement decisions would no longer be final and could be revised at any moment.

Because containers are much smaller than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than moving VMs. The only prerequisite is a highly standardized setup of the container hosts, and since systems in cloud environments tend to be standardized already, this would be a perfect fit!
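As a rough illustration of how lightweight such a move can be, here is a small Go sketch that packs a container’s private filesystem layer into a tar stream on standard output, so it could be piped over SSH to the target host. The directory layout (for example /var/lib/containers/app1) is a hypothetical assumption for illustration; real container runtimes and cloud stacks provide their own transfer tooling.

```go
package main

// Walk a (hypothetical) container layer directory and write it as a tar
// stream to stdout, e.g. to pipe it over SSH to another host. Only the
// container's own files travel, not a multi-gigabyte VM disk image.

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: packlayer <container-layer-dir>")
		os.Exit(1)
	}
	root := os.Args[1]

	tw := tar.NewWriter(os.Stdout)
	defer tw.Close()

	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// Preserve symlink targets in the archive header.
		link := ""
		if info.Mode()&os.ModeSymlink != 0 {
			if link, err = os.Readlink(path); err != nil {
				return err
			}
		}
		hdr, err := tar.FileInfoHeader(info, link)
		if err != nil {
			return err
		}
		// Store paths relative to the layer root.
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		// Only regular files carry content in the archive.
		if !info.Mode().IsRegular() {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Used, hypothetically, as packlayer /var/lib/containers/app1 | ssh host2 'tar -x -C /var/lib/containers/app1', the data that actually crosses the WAN is just the container’s own files.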

Today, we are not as far along as we could be. Containers are not yet supported by the big cloud software stacks, but, as the video points out, OpenStack is about to add them to its application programming interfaces (APIs).

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!

Secure Boot – how dependent on Microsoft can Linux afford to be?

The new hardware generation arriving together with Windows 8 features UEFI Secure Boot. This boot feature was originally designed to make sure that no harmful code infects the system during its most vulnerable phase, the boot process, when no anti-malware tool is active yet.

However, what looks good at first glance turned out to be a real problem for all of us using open software like Linux.

UEFI Secure Boot will only boot operating systems whose bootloaders are signed with a trusted key. Those keys need to be stored in the hardware (the UEFI firmware) to ensure their integrity during boot. For security reasons, this hardware key storage is read-only, to prevent harmful code from compromising the stored keys. This means that all keys need to be stored there during hardware production.
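Conceptually, the check the firmware performs is plain public-key signature verification: hash the boot image and verify that the hash was signed by a key whose certificate is enrolled in the firmware. The Go sketch below illustrates only that concept; real UEFI Secure Boot verifies Authenticode signatures embedded in the PE/COFF binary against the EFI signature databases (db and dbx), and the detached-signature scheme and file names here are assumptions made for illustration.

```go
package main

// Conceptual sketch only: what the firmware's decision boils down to.
// Not the real UEFI mechanism; file names and the detached-signature
// layout are hypothetical.

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func mustRead(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	return data
}

func main() {
	image := mustRead("bootloader.efi") // boot image the firmware is asked to run
	sig := mustRead("bootloader.sig")   // detached signature shipped with it
	certPEM := mustRead("trusted.crt")  // certificate enrolled in the firmware

	block, _ := pem.Decode(certPEM)
	if block == nil {
		fmt.Fprintln(os.Stderr, "trusted.crt is not valid PEM")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad certificate:", err)
		os.Exit(1)
	}
	pub, ok := cert.PublicKey.(*rsa.PublicKey)
	if !ok {
		fmt.Fprintln(os.Stderr, "expected an RSA public key")
		os.Exit(1)
	}

	// Boot only if the image's hash was signed by a key the firmware trusts.
	digest := sha256.Sum256(image)
	if err := rsa.VerifyPKCS1v15(pub, crypto.SHA256, digest[:], sig); err != nil {
		fmt.Fprintln(os.Stderr, "refusing to boot: signature check failed")
		os.Exit(1)
	}
	fmt.Println("signature OK, booting")
}
```

The key point is that the firmware can only accept signatures made with keys it already knows about, which is exactly why it matters whose keys end up in the hardware.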

As it looks today, the only key that will be present in future hardware is Microsoft’s.

To still be able to boot a Linux system, the Linux bootloader needs to be signed with that Microsoft key. Microsoft offers a signing service for less than $100, so some of the major Linux distributions are considering using it to get their boot loaders accepted by newer hardware.

But is this really the right way to go?

Of course, this is the most pragmatic solution to the problem. But I see two serious drawbacks that could hit distributors and users in the future:

First, using the Microsoft signing service puts the whole Linux community in a position of strong dependence on Microsoft. That can’t be a comfortable situation for any Linux distributor.

The second problem I see is with self-compiled kernels. A main benefit of open source software is the ability to modify it to one’s own requirements. If only Microsoft-signed kernels and bootloaders can boot, we can no longer run kernels we have compiled ourselves.

From my point of view, the big Linux distributors should rather work to get their own keys into the hardware as well, and should provide a decent, easy-to-use signing service for self-compiled kernels. Or UEFI Secure Boot should be optional altogether, so that users can decide for themselves which risk they are willing to take to run the software of their choice!