Secure Boot – how dependent on Microsoft can Linux afford to be?

The new hardware generation arriving together with Windows 8 features UEFI Secure Boot. This boot feature was originally designed to make sure that no harmful code infects the system during its most vulnerable phase, the boot process, when no anti-malware tool is active yet.

However, what looks good at first glance has turned out to be a real problem for all of us using open software like Linux.

UEFI Secure Boot will only boot operating systems whose bootloader is signed with a trusted key. Those keys need to be stored in the hardware (firmware) to ensure their integrity during boot. For security reasons, this key storage is read-only, to prevent harmful code from compromising the stored keys. This means that all the keys need to be placed there during hardware production.
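
To illustrate the principle, here is a conceptual sketch in Python. This is not actual firmware code; the key, the image contents, and the function names are all made up (it needs the third-party cryptography package). The point is simply that the firmware only hands control to an image whose signature verifies against one of the enrolled keys.

    # Conceptual sketch of the Secure Boot check, NOT real firmware code.
    # Requires the third-party "cryptography" package; all names are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Stand-in for a vendor key whose public half was enrolled in the firmware's
    # signature database at production time.
    vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    enrolled_keys = [vendor_key.public_key()]   # read-only store in real hardware

    # Stand-in for a bootloader image and its signature.
    bootloader_image = b"...pretend this is grubx64.efi..."
    signature = vendor_key.sign(bootloader_image, padding.PKCS1v15(), hashes.SHA256())

    def firmware_allows_boot(image: bytes, sig: bytes) -> bool:
        """Allow boot only if some enrolled key verifies the image's signature."""
        for pub in enrolled_keys:
            try:
                pub.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
                return True
            except InvalidSignature:
                continue
        return False

    print(firmware_allows_boot(bootloader_image, signature))   # True
    print(firmware_allows_boot(b"tampered image", signature))  # False

A bootloader signed with any key that is not in the enrolled list simply fails this check, and that is exactly where the problem for Linux starts.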

As it looks today, the only key that will be present in the hardware of the future is Microsoft's.

To still be able to boot a Linux system, the Linux bootloader needs to be signed with that Microsoft key. Microsoft offers a signing service for less than $100, so some of the major Linux distributions are considering using this service to get their bootloaders accepted by newer hardware.

But is this really the right way to go?

Of course, this is the most pragmatic solution to the problem. But I see two serious drawbacks that could hit distributors and users in the future:

Using the Microsoft signing service puts the whole Linux community in a situation where it is highly dependent on Microsoft. That cannot be a comfortable position for any Linux distributor.

The second problem I see is with self-compiled kernels. A main benefit of open source software is the ability to modify it to one's own requirements. If only Microsoft-signed kernels and bootloaders can be used, we lose the ability to boot our own, self-compiled kernels.

From my point of view, the big Linux distributors should rather work to get their own keys into the hardware as well and provide a decent, easy-to-use signing service for self-compiled kernels. Or UEFI Secure Boot should at least be optional, so that users can decide for themselves what risk they are willing to take to run the software of their choice!
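
As a rough illustration of what such a signing step could look like for a self-compiled kernel: a minimal sketch, assuming the external sbsign tool (from the sbsigntools package) is installed, the kernel was built with an EFI stub so the image can be signed directly, and the key/certificate pair (made-up file names here) corresponds to a key the firmware actually trusts.

    # Minimal sketch: sign a locally built kernel image for Secure Boot.
    # Assumes the external "sbsign" tool (sbsigntools) is available and that
    # my-signing.key / my-signing.crt belong to a key the firmware trusts.
    # All paths and file names are illustrative.
    import subprocess

    KERNEL = "arch/x86/boot/bzImage"   # freshly compiled kernel image (EFI stub)
    KEY = "my-signing.key"             # private signing key (hypothetical)
    CERT = "my-signing.crt"            # matching certificate (hypothetical)

    subprocess.run(
        ["sbsign", "--key", KEY, "--cert", CERT,
         "--output", KERNEL + ".signed", KERNEL],
        check=True,
    )
    print("Signed image written to", KERNEL + ".signed")

A signing service run by the distributors could automate exactly this kind of step, so that a self-compiled kernel would still boot on Secure Boot hardware.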

When is a cloud open?

Today's blog post was inspired by Red Hat's Vice President for cloud business, Scott Crenshaw, and his definition of an open cloud:

  • Open source
  • Viable, independent community
  • Based on open standards
  • Unencumbered by patents and other IP restrictions
  • Lets you deploy to your choice of infrastructure
  • Pluggable, extensible, and open API
  • Enables portability across clouds

Although I think this is a very good start for a discussion, I do not fully agree with his definition!

Open standards, APIs, and portability

I don’t doubt these points of Mr. Crenshaw’s definition; I see them as the most important criteria for a cloud to be called open. Cloud consumers should be able to seamlessly move their workloads from one open cloud to another. There is no room for vendor lock-in, and I fully agree with Mr. Crenshaw here!
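
To make the portability point concrete, here is a deliberately simplified sketch. The hosts, endpoints, headers, and tokens are hypothetical and do not belong to any real cloud product or standard; the point is only that, given a shared open API, exporting from one provider and importing into another is the same call against a different base URL.

    # Hypothetical sketch of workload portability through a shared open API.
    # None of these endpoints or hosts are real; they only illustrate the idea
    # that the same standard API works against any compliant provider.
    import json
    import urllib.request

    def export_image(base_url: str, image_id: str, token: str) -> bytes:
        """Download an image from a cloud exposing the (hypothetical) open API."""
        req = urllib.request.Request(
            f"{base_url}/images/{image_id}/data",
            headers={"X-Auth-Token": token},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    def import_image(base_url: str, name: str, data: bytes, token: str) -> str:
        """Upload the same image to another compliant cloud, return its new id."""
        req = urllib.request.Request(
            f"{base_url}/images",
            data=data,
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/octet-stream",
                     "X-Image-Name": name},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["id"]

    # Same code, two different (fictional) providers – that is what "open" buys us:
    # data = export_image("https://cloud-a.example.com/v1", "web-frontend", "token-a")
    # new_id = import_image("https://cloud-b.example.com/v1", "web-frontend", data, "token-b")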

Open source, independent community and patents

Considering that Mr. Crenshaw is a Red Hat employee, it is no wonder he sees open source as a requirement for a cloud to be open. But is that really the case? I doubt it.

Sure, open source software and viable, independent communities have their benefits, but that is not specific to cloud computing, nor is it a requirement for an open cloud. I acknowledge that open source software stacks such as OpenStack implement open standards and interfaces, and drive their definition. But once those standards are established, I see no reason why closed source software that complies with them should not be considered open.
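
A made-up example to illustrate the argument (the interface and method names are not taken from any real standard): if an API is openly specified, a proprietary implementation can satisfy it just as well as an open source one, and the consumer cannot tell the difference.

    # Illustrative only: a consumer written against an openly specified interface
    # works with any compliant implementation, open source or proprietary.
    from abc import ABC, abstractmethod

    class OpenCloudAPI(ABC):
        """Stand-in for an openly specified cloud compute API."""

        @abstractmethod
        def launch_instance(self, image: str, flavor: str) -> str:
            """Start an instance and return its id."""

    class OpenSourceCloud(OpenCloudAPI):
        def launch_instance(self, image: str, flavor: str) -> str:
            return f"oss-instance-{image}-{flavor}"

    class ProprietaryCloud(OpenCloudAPI):
        # Closed source internally, but compliant with the same open interface.
        def launch_instance(self, image: str, flavor: str) -> str:
            return f"prop-instance-{image}-{flavor}"

    def deploy(cloud: OpenCloudAPI) -> str:
        # The consumer depends only on the open interface, not the implementation.
        return cloud.launch_instance("ubuntu-12.04", "small")

    print(deploy(OpenSourceCloud()))
    print(deploy(ProprietaryCloud()))

From the consumer's point of view, what matters is that the interface is open and honored, not how the code behind it is licensed.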

Traditionally, software products fall within the responsibility of IT departments. Cloud computing changes this paradigm to a certain extent: we often see a direct relationship between a business unit and the cloud vendor, bypassing the IT department. We can argue whether this is good or bad, but it does force us to look at the product from a different viewpoint. The purely technical aspects become less important. And if it matters less whether the cloud is based on open source software, the important question becomes: what can it do, and what can it not do?

Choice of infrastructure

I admit that a free choice of infrastructure can help eliminate vendor lock-in. But I personally consider support for different platforms and infrastructures a feature and nothing more. Of course, when selecting a cloud software stack or vendor, the provided features must fit the requirements, and the more flexible those features are, the more future-proof my selection might be. But that is not a criterion for an open cloud, at least not for me.

Summary

An open cloud must stick to open standards and implement open interfaces and APIs; I see those as the main criteria. Open source definitely helps push these criteria forward, but it is not a mandatory requirement. At the end of the day, cloud consumers must be able to move their workloads and data from one cloud to another; that is what makes the open cloud a reality!

Sources:

What’s an “Open Cloud,” Anyway? Red Hat Says It’s Not VMware by Joe Brockmeier (http://www.readwriteweb.com/cloud/2012/02/whats-an-open-cloud-anyway-red.php?sf3468400=1)