Desktop cloud technologies

Today, a desktop cloud can consist of various technologies. There are different technologies for delivering the actual desktop, providing the applications, and organizing the underlying infrastructure such as storage. A good desktop cloud solution is a well-designed combination of those technologies that supports the specific requirements at hand. In today’s article, I want to briefly discuss the various technologies and explain what they can and cannot do.

Let’s start with a user’s desktop and how it can be provided.

Shared desktop

A shared desktop today is what used to be called a terminal server. Basically, all users of a terminal server share the server hardware and the operating system instance. To ensure that users are separated from each other, they are granted only limited rights.
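
To make this sharing concrete, here is a minimal Python sketch that groups the processes of such a shared server by user and reports each user’s approximate CPU and memory consumption – the kind of per-user view an administrator needs when everyone shares one operating system instance. It assumes the third-party psutil package is installed and is an illustration only, not part of any specific product.

    # Minimal sketch: per-user resource usage on a shared desktop server.
    # Assumption: the third-party psutil package is installed (pip install psutil).
    import time
    from collections import defaultdict

    import psutil

    def usage_per_user(sample_seconds=1.0):
        """Return {username: [cpu_percent, rss_bytes]} aggregated over all processes."""
        procs = list(psutil.process_iter(['username']))
        for p in procs:
            try:
                p.cpu_percent(None)              # prime the per-process CPU counters
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        time.sleep(sample_seconds)               # measurement interval

        totals = defaultdict(lambda: [0.0, 0])
        for p in procs:
            try:
                user = p.info['username'] or 'unknown'
                totals[user][0] += p.cpu_percent(None)
                totals[user][1] += p.memory_info().rss
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue                         # process ended or is off limits
        return totals

    if __name__ == '__main__':
        for user, (cpu, rss) in sorted(usage_per_user().items()):
            print(f'{user:20s} cpu={cpu:6.1f}%  mem={rss / 2**20:8.1f} MiB')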

Pro:

  • Use of the hardware is very efficient because there is only one operating system instance.
  • Software distribution and patch management are easy because they only need to be performed once per server.

Con:

  • Applications need to be terminal server-ready.
  • If the operating system hangs, all users on that server are affected.
  • If a single user consumes too many resources, all other users on the same server experience performance issues.
  • Users might not accept the degree of limitation imposed by their restricted user rights.

Virtual PC

A virtual PC is a virtual machine hosting the user’s desktop and operating system. Compared to the shared desktop, the virtual PC can be perceived as a full PC including a private instance of the operating system for every user. Therefore, users can theoretically gain administrative rights on their virtual PCs.
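
To illustrate how such per-user virtual PCs can be handled on the hypervisor side, here is a minimal Python sketch using the libvirt bindings. The qemu:///system connection and the "desktop-<username>" naming scheme are assumptions made for this example, not features of any particular product.

    # Minimal sketch: look up and power on a user's personal virtual PC via libvirt.
    # Assumptions: libvirt-python is installed, the host runs KVM/QEMU, and the
    # VMs follow a "desktop-<username>" naming scheme (purely illustrative).
    import sys

    import libvirt

    def start_user_desktop(username):
        conn = libvirt.open('qemu:///system')        # connect to the local hypervisor
        try:
            dom = conn.lookupByName(f'desktop-{username}')
            if not dom.isActive():
                dom.create()                         # power on the virtual PC
            print(f'{dom.name()} is running (id {dom.ID()})')
        finally:
            conn.close()

    if __name__ == '__main__':
        start_user_desktop(sys.argv[1])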

Pro:

  • Users can have more rights, up to administrative privileges.
  • They can run any software, just as on their traditional PC.
  • If the operating system of one user fails, the other users on that server are not affected.

Con:

  • Because every user has his or her own operating system instance, the overhead is higher.
  • Each operating system instance needs to be patched and managed.

Streaming

Streaming tries to combine the performance and response time of a traditional PC with the central manageability and accessibility of a desktop cloud. The main difference between a server-based desktop solution and streaming is that the desktop is streamed from central storage down to the user’s device and then actually runs on the user’s computer. When the user finishes working, the changes are sent back to the central storage.
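
The minimal Python sketch below illustrates that sync cycle with plain rsync calls: pull the desktop down to a local cache before work, push the changes back afterwards. The host name and paths are assumptions for illustration, and a real streaming product would of course synchronize far more efficiently and handle conflicts.

    # Minimal sketch of the streaming sync cycle around a user session.
    # Assumptions: rsync is available, and the central master and local cache
    # paths below are purely illustrative.
    import subprocess

    CENTRAL = 'images.example.com:/srv/desktops/alice/'   # central master (assumed)
    CACHE = '/var/cache/streamed-desktop/alice/'          # local cache (assumed)

    def sync_down():
        """Bring the local cache up to date with the central master."""
        subprocess.run(['rsync', '-a', '--delete', CENTRAL, CACHE], check=True)

    def sync_up():
        """Send local changes back to the central master after the session ends."""
        subprocess.run(['rsync', '-a', CACHE, CENTRAL], check=True)

    if __name__ == '__main__':
        sync_down()
        # ... the user works on the locally cached desktop ...
        sync_up()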

Pro:

  • Offers good performance.
  • Works offline with a local cache.
  • Desktops can be patched and updated centrally.

Con:

  • It is complex to set up and maintain.
  • The sync process requires high network bandwidth.
  • Conflict management is required if the local cache and the central master are out of sync.

Client hypervisor

Hypervisors running on client computers, whether bare metal (type 1) or hosted (type 2), must support requirements that differ from those of server hypervisors. On a client, it is crucial to support 3D graphics acceleration, Wi-Fi networking, and all types of USB-attached hardware, such as printers and scanners, while supporting the latest SCSI adapter is not that important. So, what is the point of having a client hypervisor at all? One aspect of a hypervisor is to separate the operating system (and its included desktop) from the underlying hardware. This approach makes the OS hardware-independent and reduces the hassle of differing driver requirements. An additional benefit is that the desktop can be moved flexibly from one physical machine to another, for example when the physical PC or laptop breaks, or, in combination with streaming, from a data center server to a local PC and vice versa.
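
To illustrate the mobility aspect, the sketch below shows one possible way to package a desktop so it can be re-imported on another physical machine, assuming VirtualBox as the client hypervisor; the VM name and the shared export path are assumptions for this example.

    # Minimal sketch: move a desktop VM between two client machines by exporting
    # it to a portable OVA appliance. Assumptions: VirtualBox (VBoxManage) is the
    # client hypervisor, and the export path is reachable from both machines.
    import subprocess

    VM_NAME = 'work-desktop'                       # assumed VM name
    EXPORT_PATH = '/mnt/share/work-desktop.ova'    # assumed shared location

    def export_desktop():
        """On the old machine: export the VM to a portable appliance."""
        subprocess.run(['VBoxManage', 'export', VM_NAME, '--output', EXPORT_PATH],
                       check=True)

    def import_desktop():
        """On the new machine: import the appliance and register the VM."""
        subprocess.run(['VBoxManage', 'import', EXPORT_PATH], check=True)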

Pro:

  • Desktop can be moved from one physical hardware device to another.
  • Multiple desktops with different purposes can be used simultaneously.

Con:

  • Hardware support for Wi-Fi and 3D graphics is not mature today.
  • Additional overhead exists because of the hypervisor.

Golden image (copy-on-write) and non-persistent desktop

Non-persistent desktops are virtual machines that are set back to their original state on reboot and therefore lose all changes made while they were online. A non-persistent client setup is usually combined with a persistent data partition, so that users can store documents and files that are not deleted on reboot. However, all changes made to the operating system itself vanish. As anyone can imagine, this setup is very robust and ensures a working desktop at any time.
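
A common way to implement this pattern is a read-only golden image plus a small writable overlay per desktop, as in the minimal sketch below. It uses the qemu-img tool; the file names and the idea of recreating the overlay on every reboot are assumptions for illustration.

    # Minimal sketch: copy-on-write, non-persistent desktop built on a golden image.
    # Assumptions: qemu-img is available; paths and file names are illustrative.
    import os
    import subprocess

    GOLDEN = '/srv/images/golden-desktop.qcow2'    # patched, read-only master image
    OVERLAY = '/srv/images/user1-overlay.qcow2'    # per-user writable layer

    def fresh_overlay():
        """(Re)create the overlay so the desktop boots from a clean system state."""
        if os.path.exists(OVERLAY):
            os.remove(OVERLAY)                     # discard all OS-level changes
        subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                        '-b', GOLDEN, '-F', 'qcow2', OVERLAY], check=True)

    # Called on every reboot: writes land only in the overlay, the golden image
    # stays untouched, and deleting the overlay resets the desktop.
    fresh_overlay()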

Pro:

  • Storage requirements are low because the system partition is stored only once.
  • Offers easy patching and software distribution. After the master image is patched, all rebooted virtual machines are automatically patched.
  • It is a very robust solution, because any misconfigured desktop only needs to be rebooted to be operational again.

Con:

  • User acceptance is low, because any changes a user makes to the operating system are lost at reboot.

Offline patching

As discussed above, the drawback of persistent virtual PCs is the need to patch each and every machine, just as with traditional client computers. However, there is still one big advantage over distributed PCs: while traditional desktop and laptop computers are carried around, left as spare devices in cupboards and drawers, or simply turned off during a software distribution phase – and are therefore not reachable – virtual PCs always reside in the data center, even if they are offline (virtually turned off).

But, in any case, they must still be virtually turned on, patched, and turned off again, unless an offline patching technology is used. Offline patching modifies the actual image files of virtual PCs while they are offline and thereby ensures that they receive the software updates they require.
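
As a minimal sketch of that idea, the example below updates powered-off disk images in place using the libguestfs virt-customize tool. The image paths are assumptions, and a real deployment would tie this into its regular software distribution process.

    # Minimal sketch: offline patching of powered-off virtual PC images.
    # Assumptions: the libguestfs virt-customize tool is installed and the
    # listed image paths are purely illustrative.
    import subprocess

    def patch_offline(image_path):
        """Run the guest's package manager update inside the offline disk image."""
        subprocess.run(['virt-customize', '-a', image_path, '--update'], check=True)

    if __name__ == '__main__':
        for image in ('/srv/desktops/user1.qcow2', '/srv/desktops/user2.qcow2'):
            patch_offline(image)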

Summary

To keep this blog to a reasonable length, these technologies are only a subset of what is available today, but the descriptions should provide a good overview of the main aspects that need to be considered when thinking about a desktop cloud solution.

In the next blog of my desktop cloud series, I will discuss best practices of how to map technologies to client requirements.