Middleware topology changes because of cloud

Once upon a time, applications ran on physical servers. These physical server infrastructures were sized to accommodate the application software as well as the required middleware and database components. Sizing was mainly based on peak load expectations, because only limited hardware upgrades were possible. This led to a very simple application landscape topology: every application had its own set of physical server systems, and if an application had to be replaced or upgraded, only those servers were affected.

As the number of applications grew, the number of server systems reached levels that were hard to manage and maintain. Consolidation became the trend of the time, and with virtualization technologies gaining maturity, capacity upgrades became as easy as moving a slider. After the physical layer was consolidated in the late 90s and early 2000s, the middleware and database layer followed. Starting around 2005, we saw database hotels and consolidated middleware stacks providing a standardized layer of capabilities to the applications.

Although this setup helped streamline middleware and database management and standardize the software landscape, it introduced a number of problems:

The whole environment became more complex. Whenever a middleware stack was changed (due to a patch or even a version upgrade), multiple applications were affected and had to be retested. Maintenance windows needed to be coordinated with all application owners, and unplanned downtime hit a larger number of applications.

Modern cloud computing is reversing this trend. Because provisioning and management of standard middleware and database services can be highly automated, deploying and managing a larger number of smaller server images takes less effort than it did in the early days. By de-consolidating these middleware and database blocks, we regain flexibility and end up with a far less complex environment.
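
To illustrate the point, here is a minimal Python sketch of declarative, per-application middleware stacks driven by an automation layer. The provision_instance() helper and the component names are hypothetical placeholders for whatever provisioning or configuration-management API is actually in use.

```python
# Minimal sketch: declarative, per-application middleware stacks that an
# automation layer can provision independently. provision_instance() is a
# hypothetical placeholder for the real cloud or automation API.
from dataclasses import dataclass, field


@dataclass
class MiddlewareStack:
    app_name: str
    components: list[str] = field(default_factory=list)  # e.g. app server, database


def provision_instance(app_name: str, component: str) -> str:
    # Placeholder for the real automation call (image deployment, configuration, ...).
    instance_id = f"{app_name}-{component}-01"
    print(f"provisioning {component} for {app_name} -> {instance_id}")
    return instance_id


def provision_stack(stack: MiddlewareStack) -> list[str]:
    # Each application gets its own small, dedicated middleware instances,
    # so patching one stack never forces retests of unrelated applications.
    return [provision_instance(stack.app_name, c) for c in stack.components]


if __name__ == "__main__":
    crm = MiddlewareStack("crm", ["app-server", "database"])
    billing = MiddlewareStack("billing", ["app-server", "message-broker", "database"])
    for stack in (crm, billing):
        provision_stack(stack)
```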

There is another positive side effect of this approach: when application workloads are bundled together, they can more easily be moved to a fit-for-purpose infrastructure. Especially when some workloads are to be migrated into the cloud while others stay on a more traditional IT infrastructure, this model helps move the isolated workloads without affecting the others.
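
As a small illustration (the workload names and targets below are made up), each per-application bundle can carry a target-infrastructure tag, so a migration only touches the bundles destined for the cloud:

```python
# Illustrative only: workload bundles tagged with a target infrastructure.
# Names and targets are made-up assumptions.
WORKLOADS = {
    "crm":      {"components": ["app-server", "database"], "target": "public-cloud"},
    "billing":  {"components": ["app-server", "database"], "target": "on-premises"},
    "intranet": {"components": ["web-server"],             "target": "public-cloud"},
}


def migration_candidates(workloads: dict, target: str) -> list[str]:
    # Only the bundles destined for the given target are touched; the other
    # bundles keep running unchanged on their current infrastructure.
    return [name for name, w in workloads.items() if w["target"] == target]


print("moving to the cloud:", migration_candidates(WORKLOADS, "public-cloud"))
```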

Summary

I am not saying that de-consolidation of database and middleware blocks is the holy grail of middleware topology architecture, but in a cloud environment it can help get rid of complex integration problems without introducing new ones.

Considerations when defining a desktop cloud solution

In the previous blog posts of this series, we discussed motivations for moving desktops to the cloud as well as desktop cloud technologies. Now let’s bring it all together!

Let’s look at a desktop cloud solution from an architecture perspective. First, we need to define the users and user groups that will actually use our solution. These users will need some hardware, such as a thin client or another end-user device, to access their virtual desktop. On that virtual desktop, they run applications and work with data.

Simplified desktop cloud architecture

So, users will access desktops. But which desktop technology fits a specific user? To answer this question, we need to define our user groups and their requirements. Typical user groups are task workers, travelers, or developers. All of them have different requirements for their desktops, and perhaps not all of these requirements can be met by a single technology! Trying to cover all my users with a single desktop cloud solution is a very ambitious goal and almost impossible to reach. A better approach is to identify the user groups that would benefit most from a desktop cloud, or that would bring the most benefit to the company, and start with those.

Mapping technologies to user groups
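
To make the mapping in the figure concrete, here is a rough Python sketch. The user groups, requirement flags, and suggested technologies are illustrative assumptions, not a definitive catalogue; a real project weighs many more criteria.

```python
# Illustrative sketch only: the user groups, requirement flags, and
# technology names below are assumptions, not a definitive catalogue.
from dataclasses import dataclass


@dataclass
class UserGroup:
    name: str
    needs_offline: bool      # e.g. travelers without a permanent connection
    needs_local_admin: bool  # e.g. developers installing their own tools


def suggest_technology(group: UserGroup) -> str:
    # Very rough mapping rule; real projects also consider cost, security,
    # peripherals, graphics requirements, licensing, and more.
    if group.needs_offline:
        return "client-side hypervisor / offline virtual desktop"
    if group.needs_local_admin:
        return "dedicated hosted virtual desktop (VDI)"
    return "shared hosted desktop (terminal services)"


for g in [UserGroup("task worker", False, False),
          UserGroup("traveler", True, False),
          UserGroup("developer", False, True)]:
    print(f"{g.name}: {suggest_technology(g)}")
```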

The next step is to think about the applications. Designing a desktop cloud solution is a perfect opportunity to review your application landscape and identify potential for consolidation. There are also a number of ways to provide applications to the users: applications can be published on terminal servers, streamed using application streaming, or provided purely from the web. Ideally, applications are either web-based or at least support several distribution technologies. Application selection principles and development guidelines help clean up the application landscape in the long term.
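
One way to reason about this is sketched below: an inventory of applications annotated with the delivery channels they support, plus a simple preference rule (web first, then terminal-server publishing, then streaming). The application names and channel labels are assumptions for illustration only.

```python
# Sketch of an application inventory annotated with the delivery channels
# each application supports; names and channels are illustrative assumptions.
APPLICATIONS = {
    "time tracking":   {"web"},
    "office suite":    {"terminal-server", "streaming"},
    "legacy CAD tool": {"streaming"},
}

# Preference rule: web first, then published apps, then streaming.
PREFERRED_ORDER = ["web", "terminal-server", "streaming"]


def pick_delivery(app: str) -> str:
    channels = APPLICATIONS[app]
    for channel in PREFERRED_ORDER:
        if channel in channels:
            return channel
    raise ValueError(f"no supported delivery channel for {app}")


for app in APPLICATIONS:
    print(f"{app}: deliver via {pick_delivery(app)}")
```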

Moving further up the architectural hierarchy, we should discuss user data. By introducing a desktop cloud, I might also need to redesign my user data concept. Locally stored data might not fit the purpose any more when I want to access my data from any device at any time. Technologies such as central data stores, web-enabled data, or synchronization mechanisms come into consideration.
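
As a toy illustration of such a synchronization mechanism, the sketch below applies a simple "last writer wins" rule between a local copy and a central data store. The data model is deliberately simplified and purely an assumption; real products also handle conflicts, deletes, and offline changes.

```python
# Toy "last writer wins" synchronization check between a local copy and a
# central data store; the data model is a deliberately simplified assumption.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class FileVersion:
    path: str
    modified: datetime


def sync_action(local: FileVersion, central: FileVersion) -> str:
    # Decide which way a file should be copied so the user sees the same data
    # on every device; real products also handle conflicts and offline edits.
    if local.modified > central.modified:
        return "upload local copy to the central store"
    if local.modified < central.modified:
        return "download central copy to the device"
    return "already in sync"


now = datetime.now(timezone.utc)
print(sync_action(FileVersion("report.odt", now), FileVersion("report.odt", now)))
```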

Designing a desktop cloud solution is not trivial, especially because it directly affects the way users access IT resources. Each design step needs to be taken carefully, always keeping the full picture in mind to ensure success!