Secure boot – how dependent on Microsoft can Linux afford to be?

The new hardware generation arriving together with Windows 8 features UEFI Secure Boot. This boot feature was originally designed to make sure that no harmful code infects the system during its most vulnerable phase, the boot process, when no anti-malware tool is active yet.

However, what looks good at first glance has turned out to be a real problem for all of us using open software such as Linux.

UEFI Secure Boot will only boot operating systems whose bootloaders are signed with a trusted key. Those keys need to be stored in the hardware (the UEFI firmware) to ensure their integrity during boot. For security reasons, this key storage is read-only, to prevent harmful code from compromising the stored keys. This means that all the keys need to be placed there during hardware production.
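
For illustration, this firmware-held state is visible from a running Linux system: the Secure Boot flag is published as an EFI variable, which the kernel exposes under /sys/firmware/efi/efivars. The following minimal Python sketch, assuming a mounted efivarfs and the standard EFI global-variable GUID, simply reads that flag; it is only meant to show where the firmware state surfaces, not to be a complete tool.

```python
# Minimal sketch: read the firmware's "SecureBoot" flag on a Linux/UEFI system.
# Assumes efivarfs is mounted at its usual location; the GUID below is the
# standard EFI global-variable namespace.
from pathlib import Path

SECURE_BOOT_VAR = Path(
    "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    """Return True/False for the firmware's Secure Boot flag, or None if unknown."""
    try:
        raw = SECURE_BOOT_VAR.read_bytes()
    except (FileNotFoundError, PermissionError):
        return None  # not booted via UEFI, efivarfs not mounted, or no access
    # efivarfs prefixes the variable data with 4 bytes of attribute flags;
    # the SecureBoot payload itself is a single byte (1 = enabled).
    return bool(raw[-1])

if __name__ == "__main__":
    state = secure_boot_enabled()
    print({True: "Secure Boot is enabled",
           False: "Secure Boot is disabled",
           None: "Secure Boot state unknown"}[state])
```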

As it looks today, the only key that will be present in the hardware of the future is Microsoft’s.

To be able to still boot a Linux system, the Linux bootloader needs to be signed with that Microsoft key. Microsoft offers a signing service for less than $100 – so some of the major Linux distributions are considering using this signing service to get their bootloaders accepted by newer hardware.

But is this really the right way to go?

Of course, this is the most pragmatic solution to the problem. But I see two serious drawbacks that could hit distributors and users in the future:

Using the Microsoft signing service puts the whole Linux community in a situation where they are highly dependent on Microsoft. That can’t be a comfortable situation for any Linux distributor.

The second problem I see is with self-compiled kernels. A main benefit of open source software is the ability to modify and adapt it to one’s own requirements. If only Microsoft-signed kernels and bootloaders will boot, we can no longer compile and run our own kernels.

From my point of view, the big Linux distributors should rather work to get their own keys into the hardware as well, and should provide a decent, easy-to-use signing service for self-compiled kernels. Or UEFI Secure Boot should be optional altogether, letting users decide which risk they are willing to take to run the software of their choice!

When is a cloud open?

Today’s blog post was inspired by Red Hat’s Vice President for cloud business, Scott Crenshaw, and his definition of an open cloud:

  • Open source
  • Viable, independent community
  • Based on open standards
  • Unencumbered by patents and other IP restrictions
  • Lets you deploy to your choice of infrastructure
  • Pluggable, extensible, and open API
  • Enables portability across clouds

Although I think this is a very good start for a discussion, I do not fully agree with his definition!

Open standards, APIs, and portability

I don’t doubt these points of Mr. Crenshaw’s definition; I see them as the most important criteria for a cloud to be called open. Cloud consumers should be able to seamlessly move their workloads from one open cloud to another. There is no room for vendor lock-in, and I fully agree here with Mr. Crenshaw!

Open source, independent community and patents

Considering that Mr. Crenshaw is a Red Hat employee, it is no wonder that he sees open source as a requirement for a cloud to be open. But is that really the case? I doubt it.

Sure, open source software and viable, independent communities have their benefits, but that is not specific to cloud computing, nor is it a requirement for an open cloud. I honor the fact that open source software stacks such as OpenStack are implementing open standards and interfaces, and are driving their definition. But once those standards are established, I see no reason why closed-source software that complies with them should not be considered open.

Traditionally, software products fall within the responsibility of the IT department. Cloud computing changes this paradigm to a certain extent: we often see a direct relationship between a business unit and the cloud vendor, bypassing the IT department. Now, we can argue whether this is good or bad, but what we do need to pay attention to is seeing the product from a different viewpoint. The purely technical aspects become less important. So if it matters less whether or not the cloud is based on open source software, the important question becomes: what can it do, and what can it not do?

Choice of infrastructure

I admit that an open choice of infrastructure can help eliminate vendor lock-in. But I personally consider the support of different platforms and infrastructures a feature and nothing more. Of course, when selecting a cloud software stack or vendor, the provided features must fit the requirements. And the more flexible the features are, the more future-proof my selection might be, but that’s not a criterion for an open cloud, at least not for me.

Summary

An open cloud must stick to open standards and implement open interfaces and APIs. I see those as the main criteria for an open cloud. Open source definitely helps to push these criteria, but it is not a mandatory requirement. At the end of the day, cloud consumers must be able to move their cloud workloads and data from one cloud to another; that’s what makes the open cloud a reality!

Sources:

What’s an “Open Cloud,” Anyway? Red Hat Says It’s Not VMware by Joe Brockmeier (http://www.readwriteweb.com/cloud/2012/02/whats-an-open-cloud-anyway-red.php?sf3468400=1)

Challenges for hybrid clouds

Hybrid clouds are starting to become more and more attractive for larger enterprises. The claim is that hybrid clouds combine the benefits of both private and public clouds while omitting their drawbacks. Although this sounds great, let’s look at the challenges we face when we start to design and build a hybrid cloud:

Infrastructure as a service (IaaS) layer

On the IaaS layer, the main benefit we see in a hybrid cloud environment is that of leveraging the elasticity of public cloud resources while maintaining a higher level of security for sensitive corporate data and applications in a private cloud.

To address the security concerns for certain data and applications, a strong governance model together with adequate security policies needs to be developed. It must be made crystal clear where applications and data are allowed to be placed and what the rationale is behind these policies.

When applications scale out into public clouds to cover peak loads, we need to take a closer look at data placement, not only for the sake of security but this time also for the sake of performance. If an application requires access to large amounts of data, it might not be suited for such a scenario unless we have designed for that properly beforehand (adequate network bandwidth, data replication, and so on).

Another challenging topic for hybrid clouds on the IaaS layer is their management. Clients who go for a fully integrated hybrid cloud need to consider how to include the public cloud service catalog and automated provisioning into their local processes and infrastructure. Not all public cloud service providers offer open APIs or comply with open standards, but that is a prerequisite for seamless integration.

The challenges grow even further when we consider managed public cloud services. Besides the technical boundaries of wide area networks and data center locations, a split management responsibility comes with a large backpack of issues that must be addressed. Monitoring, ticketing, backup and restore, and user management are just some of them. The service provider usually feels responsible only up to the operating system layer, sometimes also for middleware and databases, but very rarely for customer-specific applications. Those client-specific applications must be operated by the clients themselves and therefore integrated into their own systems management environment.

Software as a service (SaaS) layer

On the SaaS layer, a hybrid setup is usually driven less by elasticity and more by pure functionality. In this scenario, certain business functions are covered by a SaaS solution from an external service provider.

The challenge with this setup is to transport the required data to and from the public SaaS. First, the data that needs to be transferred must be identified; then, a secure interface must be developed to ensure the correct data is reliably fed into the remote software service and result data is transferred back to the local environment. Because of the variety of data, and local and remote application combinations, not many standard software products are available to implement this linkage. IBM Cast Iron is one of them and provides a field-to-field link between many software products such as SAP and Salesforce.com.
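
To make the idea of such a field-to-field linkage more concrete, here is a small, purely illustrative Python sketch. The endpoint URL, the token, and the field names are hypothetical placeholders; a real integration product such as Cast Iron adds reliability, transformation, and error handling on top of this basic pattern.

```python
# Illustrative sketch only: push one local record into a remote SaaS service
# by mapping local field names to the fields the remote API expects.
# The URL, token, and field names are hypothetical placeholders.
import json
import urllib.request

FIELD_MAP = {                 # local field   ->  remote SaaS field
    "customer_name": "Name",
    "customer_city": "BillingCity",
    "annual_revenue": "AnnualRevenue",
}

def push_record(local_record, endpoint, token):
    """Map a local record to the remote field names and POST it to the SaaS API."""
    payload = {remote: local_record[local] for local, remote in FIELD_MAP.items()}
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status   # e.g. 201 if the remote record was created

# Example call (hypothetical endpoint):
# push_record({"customer_name": "ACME", "customer_city": "Berlin",
#              "annual_revenue": 1200000},
#             "https://saas.example.com/api/accounts", "dummy-token")
```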

Summary

A hybrid cloud is a complex animal, and preparation and design are key to addressing its challenges. However, most of these challenges can be solved, either by technology (Tivoli Service Automation Manager, Cast Iron, IBM Hybrid Cloud Integrator) or by governance and organization. Once established, a hybrid cloud is a very powerful asset, combining today’s enterprise requirements with flexibility and cost efficiency!

Microsoft versus OpenOffice: Not the battle of the future!

Anyone remember Super Audio CD (SACD)? Or DVD-Audio? Those formats once fought a battle over which would succeed the audio CD as the primary medium for audio content. However, after the arrival of MP3 and its wide distribution, disc formats became obsolete.

When Blu-ray Disc and HD-DVD fought a similar battle over which would be the next primary video medium, experts were already talking about a war with nothing to win for anybody. It was predicted that the Internet would become that primary medium and that physical media such as discs or tapes would no longer be of any importance. Well, Blu-ray Discs did gain some market share, but only temporarily, as we can see looking back today. Download portals, IPTV, and video-on-demand offerings are slowly arriving and will, for sure, get their pieces of the cake.

LibreOffice 3.5.1 came out recently, and Apache OpenOffice, with the help of IBM, will release a new version (4.0) around the end of the year. These two alternative office suites are mounting another attack on Microsoft’s dominance of the office software sector. But is this really a battle worth fighting?

First of all, this battle is almost impossible to win. Not only is Microsoft’s dominant market share the equivalent of a de facto industry standard, but office documents are mainly based on Microsoft’s proprietary document formats, which are often undisclosed and therefore hard for non-Microsoft applications to interpret correctly. I think that the success of any alternative office suite rises and falls with its ability to import and export Microsoft formats properly.

But, is this really the battlefield of the future? I don’t think so.

The actual battlefield about the future of software is in the cloud!

As Andreas Groth (@andreasgroth) and I mentioned in several earlier blog posts, the final goal of software evolution is to be web-based. There are several reasons for that: web-based applications are easy to access (from any device), they are cheap to maintain, and they support our new requirements in terms of collaboration and content sharing more easily than any locally installed app does.

With regard to office software, all vendors had to start their development almost from zero again, which makes this race so interesting. Whether we look at Microsoft’s Office 365, IBM Docs (formerly known as LotusLive Symphony), or Google Docs, they all have in common that they were more or less developed from scratch. But besides the big three, there is a lot of momentum in making applications accessible from a simple browser. Examples include VMware AppBlast, Guacamole, and LibreOffice, which all use technology based on HTML5.

But what will be the criteria to succeed in the cloud?

There is no doubt that any office software needs to cover the productivity basics. I don’t think that cloud-based software must implement all the fancy features of Office 2010, but it must enable users to fulfil their day-to-day tasks, including the capability to import and export office documents, display them properly, and run macros.

In terms of collaboration, cloud-based software needs to provide added value over any desktop-based application. It should be easy to share and exchange documents with coworkers.

But the most important factor will be its integration capabilities. Desktop and office workloads will not be moved into the cloud from one day to the next. There will be a certain time frame in which the use of cloud-based applications starts to grow while the majority of people are still using locally installed applications. Being well integrated, both with the locally installed software and with server-based collaboration tools, will be the key factor for success. This is why I see Microsoft in a far better position than Google, although Google Docs has been around for quite some time and has started to become interesting, feature-wise.

IBM seems to be on the right track. Its integration of IBM Docs into IBM Connections and IBM SmartCloud for Social Business (formerly known as LotusLive), which can be tested in the IBM Greenhouse, looks very promising.

Summary

The new battlefield will be in the cloud, and although Microsoft did its homework, the productivity software market is changing. There are more serious solutions and vendors available than in previous years. If they play their cards right and provide good integration together with an attractive license model and collaboration features, they could get their share of (Microsoft’s) cake.

Let’s talk about clouds – seriously!

“A cloud is a visible mass of liquid droplets or frozen crystals made of water, various chemicals (or water and chemicals) suspended in the atmosphere above the surface of a planetary body.” – Wikipedia

Today, I would like to discuss this topic from a slightly different angle. As you might know, clouds are organized in layers. Let’s discuss these layers and how they affect us.

Moderate vertical

This is the lowest cloud layer and the one that affects us most. The most important clouds on this layer are cumulus clouds. Cumulus clouds are the type of cloud everyone thinks of when we talk about clouds. They come with a rather flat base and a fluffy-looking top, which can spark our imagination about their shapes.

Single cumulus cloud

Cumulus clouds, or cumuli, mainly form from spring to fall. They are usually caused by thermal up-winds, which arise when a spot on the ground is warmer than its surroundings. The air above this hot spot is warmed and, following the laws of physics, starts to rise until it reaches a so-called “inversion,” a level with even warmer air. At that level, the air can’t climb any further and the contained water condenses, forming the cloud. This is why all cumulus clouds in an area sit at the very same altitude.

Set of cumulus clouds, all on the same altitude

But, what are they good for? What are their use cases?

There are two user groups that are interested in cumulus clouds: farmers and glider pilots.

Farmers, because cumuli might mean rain (depending on proper sizing), and glider pilots, because the thermal up-wind that causes the cumulus cloud provides a perfect lift.

Let me say a few words about sizing…

When cumulus clouds become oversized, with more and more warm air climbing from the ground and bringing more condensed water into the cloud, they start towering. They can tower up to very high altitudes, so they are then referred to as towering cumulus clouds. If they reach a certain size, the internal forces basically break through and a thunderstorm is the final result. Those clouds are then called cumulonimbus.

Low layer

The clouds on the low layer are mainly stratus clouds. Whereas cumulus clouds are objects with more or less well-defined borders, stratus clouds are more like an endless sea of clouds. Stratus clouds arise when wet air cools and can no longer hold its water, which then condenses into that “cloudy layer”. When stratus clouds touch the ground, they are called fog!

Stratus cloud layer from above

Middle layer

Middle-layer clouds carry the prefix “alto” to indicate that they reside on a higher layer than the clouds discussed before. Depending on their origin, they are called altocumulus, altostratus, and so on.

High layer

The cloud family that forms on the high layer is called cirrus. Cirrus clouds consist of ice crystals, which give them their beautiful shapes.

I hope you enjoyed reading about some other aspects of clouds. And, by the way, it’s April Fools’ Day today!

Why service providers should not ignore cloud

For many service providers, cloud computing seems to be disruptive to their business model. Especially in the outsourcing business, many service providers are reluctant to offer cloud-based services to their clients. There are two main reasons behind this scepticism:

  • Cloud computing gives clients the agility to subscribe to or unsubscribe from services quickly. Although this is a big advantage for the client, it brings challenges to the provider. Smaller service providers have difficulty estimating the required capacity correctly, and they risk keeping expensive resources underutilized.
  • Larger providers fear losing their clients once the clients can obtain cloud services with more flexible, standardized contracts. In today’s outsourcing world, it is difficult for a client to switch providers. Services and contracts are not standardized, and relatively complex relationships exist between the client’s and the service provider’s organizations. The cloud business model breaks all this up by standardizing service descriptions and consumption.

But in any case, ignoring the new trends on the horizon cannot be the solution.

The example of Kodak shows what it means to respond to a disruptive business change by simply ignoring it. Kodak, a pioneer of modern photography and one of the market leaders in the early 1990s, decided not to move towards digital photography because this was considered disruptive for their photo development labs all over the world. When they finally accepted the industry trend, they had already lost ground to their competition, and you can read in the news what happened to them in 2012.

The point is, if you are not offering your clients what they want, someone else will.

So, there are good reasons not to ignore cloud if you want to keep your current clients, but there are even better reasons to think about cloud to win clients you have not thought about before.

Cloud computing enables small and medium businesses (SMBs) to leverage services that were previously only affordable for large enterprises. Just consider a professional CRM application: before Salesforce.com, such solutions were out of reach for SMB clients. The same is true for professional platforms like an Oracle database server, whose initial license costs do not fit into the business plans of many startup companies. Consuming database services from the cloud, on a system shared with others, turns this into a positive business case. But to be able to share the infrastructure and costs with others, service providers are required.

Summary

The new business and delivery model of cloud computing brings risks to service providers. Clients are less bound by loyalty and might change providers more frequently. But it also creates chances for new business that should not be underestimated!

Windows 8 – the last one?

Will Windows 8 be the last client operating system out of Redmond? Probably not, but we need to ask ourselves what value a new client operating system (OS) version can bring to the desktop.

Up to today, functionality was required on the desktop, because the desktop was the platform for all the various applications. Each new operating system version brought new features and enabled us, the users, to do things we couldn’t do with an older version.

But, things have changed.

Now, more and more applications are moving away from the desktop. We still use a desktop, but we access applications through a web browser and use web applications instead of locally installed ones (Gmail, for example, taught us that a local email client is no longer required). So, the desktop itself is increasingly becoming a platform for our preferred Internet browser. And in this role, the functionality of the operating system becomes less and less important.

But if additional operating system features are of no value any more, what motivation do users have to upgrade to the next OS version? We see this problem already on the enterprise level today. The main reason for most companies to migrate from Windows XP to Windows 7 is the simple fact that support for Windows XP will end in 2014 – and not all the new features of Windows 7.

So, how will the future look? Well, Google gives us an outlook with Chrome OS, an operating system, not based on Windows, whose main task is running a web browser with the best possible performance. By moving functionality to the web, the capabilities of the web become more important. No wonder that HTML 5 addresses a lot of these new requirements. HTML 5 not only enables a new kind of user experience for web applications, it also provides a foundation for new technologies like AppBlast to bring traditional desktop applications to the web.

OK, but what will Redmond do?

Well, they are reaching out into other areas. It seems that Microsoft understood the challenge very well. Microsoft’s focus is moving away from the traditional PC toward new devices such as tablets and smartphones. Windows 8 is designed more for tablets than for PCs, and Windows Phone 7 is a pure smartphone OS.

Coming back to our original question: will Windows 8 be the last operating system from Microsoft? Definitely not, but whether it is the last one for PCs, I don’t know…

The private cloud is the new datacenter

Data centers are typically historically grown and contain a number of different, heterogeneous systems. Depending on the age of the data center, you can see the evolutionary steps of the IT industry. At older companies, you find mainframe computers, large midrange systems, and a number of rack-based Intel servers all together on one data floor. Younger companies (less than 10 years old) don’t show this variety of platforms. They rather rely on a larger number of similar, but highly virtualized, machines to achieve the required flexibility and to be able to run a large variety of workloads.

Virtualization certainly was the industry trend of the last decade.

But, what’s next? When we look ahead 10 years from now, what will be the trend of the next decade? I predict, it will be the private cloud!

From a technology point of view, the private cloud is less of a revolution than virtualization was. I see it more as a logical next step. Virtualization changed the way users perceived servers; with cloud computing, users now perceive them as a service.

It is also not a very big effort to add private cloud capabilities to today’s data centers. Every virtualized server farm can be equipped with a cloud computing layer that handles user interaction as well as the provisioning and deprovisioning of virtual servers. So, adopting this new technology is quite easy.
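
To illustrate what such a cloud layer adds on top of an existing virtualized farm, here is a deliberately simplified Python sketch. The Hypervisor class stands in for whatever virtualization API is already in place, and the catalog sizes are made up; real products (Tivoli Service Automation Manager, OpenStack, and others) provide far richer versions of exactly this pattern.

```python
# Conceptual sketch: a thin "cloud layer" on top of an existing virtualized farm
# that turns servers into a self-service resource with a standardized catalog.
import uuid

class Hypervisor:
    """Placeholder for the existing virtualization layer already in the data center."""
    def create_vm(self, name, cpus, memory_gb):
        return "vm-" + uuid.uuid4().hex[:8]   # pretend the hypervisor did the work
    def delete_vm(self, vm_id):
        pass

class CloudLayer:
    """Handles user interaction plus provisioning and deprovisioning of servers."""
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor
        self.catalog = {"small": (1, 2), "medium": (2, 8), "large": (4, 16)}
        self.inventory = {}                   # vm_id -> owner (central control point)

    def provision(self, owner, size, name):
        cpus, memory_gb = self.catalog[size]  # standardized service offerings
        vm_id = self.hypervisor.create_vm(name, cpus, memory_gb)
        self.inventory[vm_id] = owner
        return vm_id

    def deprovision(self, owner, vm_id):
        if self.inventory.get(vm_id) != owner:
            raise PermissionError("only the owner may release this server")
        self.hypervisor.delete_vm(vm_id)
        del self.inventory[vm_id]

# Example: a user provisions and later releases a "medium" server.
cloud = CloudLayer(Hypervisor())
server = cloud.provision("alice", "medium", "web-01")
cloud.deprovision("alice", server)
```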

Another reason why private clouds will conquer more and more data center space lies in cloud computing in general: workloads are no longer limited to servers in one specific company data center. In the next years, we will see more workloads placed on public clouds. These remote workloads still require some degree of management and a central control point for provisioning and deprovisioning. Building up this control point for consuming remote public cloud services allows the local private cloud layer to hook in and be managed from the same infrastructure in a hybrid cloud setup.

I mentioned that with cloud computing, IT is perceived more as a service than just as technology; this is exactly what users outside the IT department will expect in the future. The cloud computing delivery model has already existed in the consumer market for quite some time. People are used to visiting app stores to install their application software. They understand video on demand and software as a service from their private day-to-day IT usage. In the very near future, users will expect the same at their workplaces.

Summary

If a new data center is designed today, or an existing one is expanded to a larger extent, there are very good reasons to think about a cloud layer right from the start. At least, there are no good reasons not to think about it!

The 10 biggest myths about desktop cloud

The biggest myths are as follows:

  1. Desktop cloud is cheaper than traditional PCs.
    As I stated in my other blog post “Motivations for moving the desktop to the cloud,” if the only driver for a desktop cloud initiative is cost savings, the project might not succeed. There are many parameters to take into account that can make a desktop cloud solution cheap – or expensive.
  2. You can’t run multimedia applications on a virtual PC.
    You can run multimedia applications on VDI environments. All known vendors of VDI products have solutions available. For lightweight multimedia applications, such as audio or YouTube videos, state-of-the-art protocols such as HDX (Citrix) or PCoIP (VMware) can handle them and provide decent results.
  3. You can’t run CAD applications on a virtual PC.
    Solutions are on the market that can provide high-end graphics in a VDI environment. Most of them are able to access a GPU built into the server for rendering. However, whether such a solution makes sense needs to be carefully evaluated on a case-by-case basis.
  4. You can access your desktop from anywhere at any time.
    Although you can access your virtual desktop as soon as you have a working Internet connection, whether you can actually work with it depends on a few more parameters such as latency and bandwidth. Latency greater than 100 ms makes a remote desktop feel heavy; latency greater than 250 ms can be annoying; and if latency exceeds 500 ms, it is almost impossible to work with (see the small measurement sketch after this list).
  5. You can’t use locally attached devices such as printers or scanners.
    You can use locally attached devices very well today. It’s more a question of whether all the necessary drivers are installed in the virtual desktop or terminal server, and whether the user is entitled to use them.
  6. You can equip 100% of your user population with virtual PCs.
    Even with very high ambitions, you will only be able to transfer a certain percentage of your users to a virtual desktop. For highly standardized clients, an average of 80% is a good number.
  7. You cannot install additional software or device drivers to your virtual PC.
    Usually, this is true. Especially for installing device drivers, administrative privileges are required. Although, from a technical point of view, it would be possible to grant normal users admin rights for their virtual PCs, that is usually not the case in reality. For applications, it might be a different story. Using application virtualization, users can be entitled to access and locally install new applications based on their profile.
  8. You don’t need on-site support any more.
    Even with traditional PCs, on-site support is not mandatory. Only about 5 – 10% of all problem tickets are hardware-related. The usual problem is related to software or configuration, which can be solved remotely, too. However, users prefer to have someone from the support team with them in person when discussing a problem – and that’s not changing with a virtual PC.
  9. It is the same effort to patch the OS or distribute new software versions to all workstations.
    Having all virtual PCs and data in a central data center makes patching them much easier. The whole electronic software distribution and patch management infrastructure is much less complex because it does not require fan-out servers or WAN links.
  10. Desktop cloud does not change anything for the user, so the user gladly accepts the new workstation.
    Don’t underestimate the cultural change when you replace a user’s physical PC with a virtual PC in a cloud. It is like stealing something out of the user’s pocket!
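
Regarding myth 4: the latency limits mentioned above can be checked quite easily before anything is rolled out. The following sketch measures the TCP connect time to a VDI gateway a few times and classifies the result with the same rough thresholds. The host name is a placeholder, and TCP connect time only approximates display-protocol latency, but it gives a first indication.

```python
# Rough latency check for myth 4: time a few TCP connects to the VDI gateway
# and classify the result using the thresholds mentioned in the list above.
# The host is a placeholder; a real check would use the display protocol itself.
import socket
import time

def connect_latency_ms(host, port, samples=5):
    """Average TCP connect time to host:port in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2.0):
            pass
        total += (time.monotonic() - start) * 1000.0
    return total / samples

def classify(latency_ms):
    if latency_ms <= 100:
        return "fine for remote desktop work"
    if latency_ms <= 250:
        return "feels heavy"
    if latency_ms <= 500:
        return "annoying"
    return "almost impossible to work with"

if __name__ == "__main__":
    latency = connect_latency_ms("vdi-gateway.example.com", 443)  # placeholder host
    print("%.0f ms - %s" % (latency, classify(latency)))
```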

Considerations when defining a desktop cloud solution

In the previous blog posts of this series, we discussed motivations for moving desktops to the cloud and also desktop cloud technologies. Now let’s bring all of them together!

Let’s look at a desktop cloud solution from an architecture perspective. First, we need to define the users and user groups that will actually use our solution. The users will require some hardware, a thin client or another device, to access their virtual desktop. On their virtual desktops, they run applications and process data.

Simplified desktop cloud architecture

So, users will access desktops. But which desktop technology fits a specific user? To answer this question, we need to define our user groups and their requirements. Typical user groups are task workers, travelers, or developers. All of them have different requirements for their desktops; perhaps not all of them can be met by a single technology! Trying to cover all users with a desktop cloud solution is a very ambitious goal and almost impossible to reach. A better approach is to identify the user groups that would benefit most from a desktop cloud, or that would bring the most benefit to the company, and start with those.

Mapping technologies to user groups

The next step is to think about the applications. Designing a desktop cloud solution is a perfect opportunity to review your application landscape and to identify potential for consolidation. There are also a number of ways to provide applications to users. Applications can be published on terminal servers, streamed using application streaming, or provided purely from the web. Ideally, applications are either web-based or at least support a number of distribution technologies. Application selection principles and development guidelines help to clean up the application landscape in the long term.

Moving further up in the architectural hierarchy, we should discuss user data. When introducing a desktop cloud, I might also be required to redesign my user data concept. Locally stored data might no longer fit the purpose when I want to access the data from any device at any time. Technologies such as central data stores, web-enabled data, or synchronization mechanisms come into consideration.

Designing a desktop cloud solution is not trivial, especially because it directly changes the way users access IT resources. Design steps need to be taken carefully, always keeping the full picture in mind to ensure success!