Systems of record and systems of engagement

Discussing hybrid cloud is almost a never-ending story; there are so many different aspects to take a closer look at and explore. In this post, I would like to focus on workloads, their placement and interconnection.

In earlier posts about workloads and proper placement on different clouds, I introduced the terms cloud enabled and cloud native. While these terms are still valid definitions, in a hybrid cloud context they evolve to the paradigms of systems of record and systems of engagement.

The main difference from the cloud-enabled/cloud-native approach is that we are no longer talking about isolated workloads that are better placed here or there, but about integrated workload components spread across the infrastructures that hybrid clouds make available.

Let's take a closer look at this new paradigm:

Systems of record fit well on cloud-enabled infrastructures. Those workloads have specific requirements regarding security, performance and infrastructure redundancy. A relational database holding sensitive data is a good example of a workload component referred to as a system of record.

Systems of engagement have the requirements that cloud-native infrastructures support best: flexibility, ease of deployment, elasticity and more. A web server farm is a good example here.

So, what is the thrilling news?

Because hybrid cloud environments are much more tightly integrated than they were a year ago, there are completely new possibilities for how workloads can be split and distributed across environments.

For example, if we consider a web shop application, the presentation layer can be considered a system of engagement, whereas the data layer is more likely a system of record. In a hybrid cloud, the web server farm of the presentation layer can be placed on a cloud-native environment like IBM SoftLayer, but the core database cluster, holding customers' credit card information, might be better placed on a PCI-compliant infrastructure like a private cloud or IBM Cloud Managed Services.
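To make this split concrete, here is a minimal placement sketch. The tier names, paradigms and target environments below are illustrative assumptions, not an actual IBM configuration:

```python
# Minimal placement sketch: map each tier of a hypothetical web shop to the
# environment that best matches its paradigm. All names are illustrative.
WORKLOAD_TIERS = {
    "web-frontend":      ("system of engagement", "SoftLayer (public, cloud native)"),
    "application-logic": ("system of engagement", "SoftLayer (public, cloud native)"),
    "customer-database": ("system of record",     "Cloud Managed Services (PCI-compliant)"),
}

def placement_plan(tiers):
    """Return a human-readable placement plan for the given tiers."""
    lines = []
    for tier, (paradigm, target) in tiers.items():
        lines.append(f"{tier:<20} [{paradigm}] -> {target}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(placement_plan(WORKLOAD_TIERS))
```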

Another example could be an SAP system that provides web access capabilities. Again, the web-facing part could be on a public cloud, but the main SAP application is certainly better placed on a suitable infrastructure like IBM Cloud Managed Services for SAP Applications or even a traditional IT environment.

As mentioned above, tight integration is key for success with hybrid cloud scenarios. One crucial integration aspect is networking. With the interconnection of the IBM strategic cloud data centers to the SoftLayer private network, IBM provides a worldwide high-speed network backbone for all its cloud data centers, enabling components on different cloud offerings to communicate with each other properly. Other aspects are orchestration and governance, which I covered in my other post.

The combination of systems of record and systems of engagement brings hybrid cloud to the next evolutionary level. By using the best of both worlds in a single workload and placing the components on the best-fitting infrastructures, hybrid cloud computing becomes even more powerful. The prerequisite is tight integration, especially in the areas of networking, orchestration and governance.

Don’t hesitate to continue the discussion with me on Twitter via @emarcusnet!

How to achieve success in cloud outsourcing projects

Outsourcing is a stepping stone on the way to cloud computing.

I would even say that companies with outsourcing experience can much more easily adopt cloud than others. They are already used to dealing with a service provider, and they have learned how to trust and challenge it to get the desired services. But certain criteria must be met in order to ensure that both parties get the most out of the outsourcing relationship.

According to a new study on the adoption of cloud within strategic outsourcing environments from IBM’s Center for Applied Insights, key success factors for a cloud outsourcing project are:

• Better due diligence
• Higher attention on security
• Incumbent providers
• Helping the business adjust
• Planning integration carefully

I fully agree with all of these points, but found myself thinking back 15 years to when the outsourcing business was on the rise. Actually, these success factors do not differ much from the early days. A company that has already outsourced parts of its information technology (IT) to an external provider had to cover those topics already, perhaps in a slightly different manner, but still thoroughly enough to understand their importance.

Let’s briefly discuss these five key topics more in detail.

Due diligence

A common motivation for outsourcing is an environment that has grown historically and is expensive to run. Outsourcing providers have experience in analyzing an existing environment and transforming it into a more standardized setup that can be operated at reasonable cost. Proper due diligence is key to understanding the transformation efforts and their effects. For cloud computing, the story is basically identical; the only difference is the target environment, which might be even more standardized. But again, knowing which systems are in scope for the transformation and what their specific requirements are is essential for success.

Security

When a client introduces outsourcing for the first time in its history, the security department needs to be involved early, and its consent and support are required. In most companies, especially in sensitive industries like finance or health care, security policies prevent systems from being managed by a third-party service provider. Even if that is not obvious at first glance, the devil is often in the details.

I remember an insurance company that restricted traffic to the outsourcing provider's shared systems in such a way that proper management through a cost-effective delivery model was not possible. Those security policies required adaptation to treat the service provider as a trusted second party rather than an untrusted third party. Cloud computing does bring in even more new aspects, but in general it is just another step in the same direction.

Incumbent providers

If your current outsourcing provider has proven that it is able to run your environment to the standards you expect, you might trust that it is operating its cloud offering in the same manner. Let’s look at the big outsourcing providers in the industry like IBM; they all have a mature delivery model, developed over years of experience. This delivery model is also used for their cloud offerings.

Business adjustment

In an outsourced environment, the business is already used to dealing with a service provider for its requests. Cloud computing introduces new aspects, like self-service capabilities or new restrictions because of a more standardized environment. The business needs to be prepared, but the step is by far smaller than without an already outsourced IT.

Plan integration

Again, this is a task that had to be done during the outsourcing transformation, too. Outsourcing providers have shared systems and delivery teams that need to be integrated. Cloud computing might go one step further and even put workloads on shared systems, but that is actually not a new topic at all.

Outsourced clients are already well prepared for the step into cloud. Of course, there are a few hurdles to clear, but compared to firms that still maintain their own IT only, the journey is just another step in the same direction.

What are your thoughts about this topic? Catch me on Twitter via @emarcusnet for an ongoing discussion!

Are containers the future of hybrid clouds?

I recently stumbled upon the following video from James Bottomley, a Linux kernel developer working for Parallels. It's a very good explanation of container technology and how it will be integrated into OpenStack:

What really caught my attention was the part about hybrid clouds. Looking a bit closer at containers in a hybrid cloud environment reveals their potential to introduce easy application mobility.

The main difference between virtual machines (VMs) and containers is that a virtual machine runs a complete operating system (including its own kernel) on virtualized hardware (provided by the hypervisor). A container shares, at minimum, everything up to the OS kernel with the host system and all other containers on the host. But it can share even more; in a standardized setup, a container can share not only the kernel but also the main parts of the operating system and libraries, so the container itself is actually rather tiny.
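A quick way to see this kernel sharing in practice: run the following snippet on a container host and then inside a container on that host. Both should report the same kernel release, because the container has no kernel of its own, whereas a VM reports the kernel of its own guest OS. This is just an illustrative sketch, not tied to any particular container product:

```python
import platform

# Prints the kernel the current environment is running on. Inside a container
# this is the host's kernel; inside a VM it is the guest's own kernel.
uname = platform.uname()
print(f"System : {uname.system}")
print(f"Release: {uname.release}")
print(f"Machine: {uname.machine}")
```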

When we think about hybrid clouds today, we mainly think about fully-virtualized machines running on different infrastructures, at different service providers, in different data centers. Such a setup still cannot fulfill a use case that is as old as cloud computing: moving workloads easily from one infrastructure to another. I see this as a requirement in multiple scenarios, from bursting out to other infrastructures during peaks to continuous operation requirements during maintenance windows or data center availability problems. Using containers with hybrid clouds would give users a new degree of freedom in where to place their workloads as decisions are not final and can be changed at any given moment.

Because containers are much smaller in size than virtual machines, moving them over a wide area network (WAN) from one provider to another is far easier than with VMs. The only prerequisite is a highly standardized setup of the container host, but systems tend to already be standardized in cloud environments, so this would be a perfect fit!
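A back-of-the-envelope calculation illustrates the difference. The image sizes and WAN bandwidth below are assumptions chosen purely for illustration:

```python
def transfer_hours(size_gb: float, bandwidth_mbps: float) -> float:
    """Hours needed to move size_gb of data over a link of bandwidth_mbps."""
    megabits = size_gb * 8 * 1000          # GB -> megabits (decimal units)
    return megabits / bandwidth_mbps / 3600

vm_image_gb = 40      # assumed size of a full VM image including its OS
container_gb = 0.5    # assumed size of a container sharing kernel and libraries
wan_mbps = 100        # assumed available WAN bandwidth

print(f"VM image : {transfer_hours(vm_image_gb, wan_mbps):.2f} hours")
print(f"Container: {transfer_hours(container_gb, wan_mbps):.3f} hours")
```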

Today, we are not as far along as we could be. Containers are not yet supported by the big cloud software stacks, but, as the video points out, OpenStack is about to include them in its application programming interfaces (APIs) soon.

Container technology provides an easy way to make applications more mobile in a hybrid cloud setup. Because of the tiny footprint of containers, moving them over wide area networks is far easier than moving full virtual machines. Containers might fulfill the cloud promises of easy bursting during peaks or flexible leveraging of multiple cloud environments.

What is your opinion on how long it may take until containers are as well supported in cloud environments as virtual machines are today? Tell me your thoughts in the comments or on Twitter @emarcusnet!

The hybrid cloud onion

In an earlier post, I defined a hybrid cloud and discussed possible scenarios including multiple public cloud providers, private clouds and traditional information technology (IT) environments.

While that post hopefully provided a good explanation of hybrid cloud infrastructures, it was not the full story, especially if you plan to implement a hybrid cloud in your environment. Like an onion that has many different layers around its core to protect it and keep it nice, white and juicy, hybrid cloud infrastructure has many different layers that keep it functional. Let’s take a look at these layers.

Management

Don't underestimate the complexity that is introduced as a result of the different technologies and service providers in a hybrid setup. Establishing a common management infrastructure might be extremely hard and might not always make sense; however, there are components that you might want to integrate and harmonize. Typically these are monitoring, alerting and ticketing tools.

Whenever a new piece of infrastructure is added to your hybrid setup, you should consider the extent to which you need to integrate it into your existing management systems, and how to manage it once it is integrated.

Orchestration

Once the infrastructure is managed properly, you can think about how to provision new workloads. The next layer we should consider is orchestration. As with the management of your hybrid cloud infrastructure, your goal here should be to have a single point for provisioning that spans services over different cloud infrastructures.

The ongoing standardization of cloud application programming interfaces (APIs) addresses this need. Amazon Web Services APIs and OpenStack may be considered industry standards in this arena. More and more cloud providers and cloud products support at least one of the two, often both. Tools like IBM Cloud Orchestrator can not only provision single workloads on different hybrid infrastructures, but can also define workload patterns for faster and easier deployment.
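Orchestration tools each have template formats of their own (IBM Cloud Orchestrator and OpenStack included), but the underlying idea of a workload pattern can be expressed as plain data. The sketch below is a simplified, hypothetical stand-in; all names and values are assumptions for illustration:

```python
# A hypothetical workload pattern spanning two cloud environments. A real
# orchestrator would read a template like this and drive the provisioning.
web_shop_pattern = {
    "name": "web-shop",
    "components": [
        {"role": "web", "count": 3, "cloud": "public-cloud",  "flavor": "small"},
        {"role": "app", "count": 2, "cloud": "public-cloud",  "flavor": "medium"},
        {"role": "db",  "count": 2, "cloud": "private-cloud", "flavor": "large"},
    ],
}

def expand(pattern):
    """Turn a pattern into individual provisioning requests."""
    for component in pattern["components"]:
        for index in range(component["count"]):
            yield {
                "name": f"{pattern['name']}-{component['role']}-{index + 1}",
                "cloud": component["cloud"],
                "flavor": component["flavor"],
            }

for request in expand(web_shop_pattern):
    print(request)
```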

Governance

Orchestration enables the use of a hybrid infrastructure in an automated way. And once you are able to orchestrate your environment, you need to control how that is done. The main question to answer is which workloads should run where. This is crucial because each hybrid infrastructure has its strengths and weaknesses. Private clouds might be the place for sensitive data, while public clouds might provide the best price point. It is important to establish policies about hybrid cloud usage scenarios.

Summary

Hybrid clouds are defined by their infrastructures, which are much like the layers of an onion. To successfully establish a hybrid cloud setup, management, orchestration and governance must not be forgotten!

Share your comments and questions with me on Twitter @eMarcusNet.

What are Community Clouds?

The nature of any public cloud is to meet the requirements that a majority of its users need. There are always trade-offs in functionality, standardization and costs. So, in the end, the implemented requirements are some kind of least common denominator.

While this might be good (enough) for most industries, it often is not enough for client groups with special requirements, like financial institutions, government organizations or pharmaceutical companies. To drive cloud adoption for those clients, we need a type of cloud that can meet their particular needs. Such clouds are referred to as community clouds because they are designed to serve a special community of clients. A community cloud is an infrastructure that is shared by several organizations with similar concerns.

But why are community clouds so important for both the service providers and the industries and communities that use them?

The service providers can target new client segments that they could not reach with a standard cloud offering. Although investments into the cloud’s underlying infrastructure and security processes might be higher, competition in this segment is lower, and marketable prices are potentially higher than for standard public cloud services. Depending on the targeted industry, the offered cloud services develop from a commodity business to a high-value, high-margin business, which might be more attractive for service providers.

For special industry clients, a community cloud provides the possibility to gain the benefits of cloud computing but stay compliant with their industry requirements. The service provider takes over the burden for required certifications like the Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry (PCI) data security standards and so on.

Another attractive aspect might be the fact that a client’s neighbor in a community cloud is most likely from the same industry. In a governmental community cloud, sharing of infrastructure (as is the nature of public clouds) is only done between other governmental organizations, which might relieve them from certain security concerns they have with open public clouds.

Summary

Although community clouds are still niche products, they are becoming more and more important with general cloud adoption across industries. They can be a good solution for both the service provider, who gets better margins with higher value services, and for the client, who can be sure that its industry-specific needs and regulatory requirements are met in a professional way. The fact that only clients from the same industry are on such a community cloud might help to increase trust in cloud computing, even for highly regulated and sensitive industries.

Could community clouds help increase cloud adoption? Continue the conversation with me on Twitter @eMarcusNet.

How Dropbox revolutionized enterprise IT

Some of you might remember those early days of computer networking when coaxial cables were used to interconnect PCs and Novell Netware was the market leader for file sharing. Although new players appeared in this space with IBM LAN Server and Microsoft Windows NT, the basic concept of shared network drives did not change much.

The general concept is based on centralized file repositories. Management and especially access management is usually limited to administrative personnel and based on groups rather than on individual users. And, because of the centralized approach, users are required to be online to access files.

This was state of the art for almost 20 years.

As with anything that stays around for a long time, requirements changed, and the centralized concept was unable to meet the new needs of the millennial generation. Mobile computing became more natural, and the number and kinds of devices changed from static PCs to notebooks and, nowadays, tablets and mobile phones. Users are not only able to take on administrative responsibilities, they increasingly demand to manage their resources themselves.

Although some tried to enhance the existing software with all kinds of add-ons (offline folders) and workarounds to support the new requirements, the outcome was not really satisfactory.

Dropbox was and still is so successful because it fulfills those new needs!

The paradigm switched from a centralized file store to a distributed, replicated file repository with easy access regardless of whether the user is online, offline or using a mobile device like a tablet or mobile phone, or even just a web browser. Users can easily share the files they own with other users or groups through a simple web interface.

But how does this affect enterprise IT?

These new user requirements are not limited to consumers. Actually, the need to have access to your important files and to work on them in a geographically distributed team is a very common requirement in today's enterprises. In recent years, Dropbox has inspired a number of other products and services that specifically target the enterprise market. Not only do these programs support the new file sharing paradigm, but they also support core enterprise requirements for data security, privacy and control.

IBM Connections (and its software as a service companion IBM SmartCloud for Social Business) is a perfect example.

File services today are no longer based on shared network drives, but rather on distributed file repositories with easy access through web interfaces or replication clients, which enable users to perform limited management tasks themselves. If the enterprise IT department does not fulfill these new user requirements, shadow IT based on Dropbox and similar technologies may continue to rise. Please share your thoughts in the comments below.

Cloud Adoption in Europe

While cloud adoption booms around the world, Europe (especially Western Europe) seems to be moving at its own pace. This is somewhat surprising, because Western Europe is a high-cost, high-price market where the adoption of new technologies used to be considered a differentiator against the competition. This seems not to be the case for cloud computing, but why?

Let's look at who can easily adopt cloud. Startup companies, which save on initial investment by consuming cloud services instead of buying hardware themselves, are great candidates. Especially when they intend to grow fast in the early years, cloud provides a perfectly scalable platform. Traditionally, there are more startups based in the Americas, and nowadays also in the Asia Pacific region, than in Europe. One reason for that might be the amount of available venture capital; Europe tends to be more conservative and may take fewer risks in investments.

But, beside the startups, there are many mature companies and enterprises in Europe. Why are they adopting cloud more slowly?

One root cause is Europe's national diversity. This has a major impact on the following two areas, which are prerequisites for successful cloud adoption:

Regulatory compliance issues: Every European country has its own set of data privacy rules and legislation. When using cloud computing, it is very likely that national boundaries are crossed, either by using a data center in another country or because the cloud service provider uses operational personnel sitting in a third country. To be clear, this is not necessarily a showstopper; it just adds complexity. A CIO might feel safer keeping the data on premises, just to be sure. The recent discussions about the NSA surveillance program and the US Patriot Act do not help here either.

Wide area network costs: Another disadvantage which directly derives from national diversity is the cost of wide area networks. International lines cost significantly more than data links within a country. This is caused by the original European structure of national carriers holding monopolies on the last mile.

But, beside national boundaries, client readiness is often a big blocker for cloud adoption. Many enterprises maintain security policies that are just not cloud ready. The cloud provider needs to be treated as a trusted partner and not as a third party. Many security policies are not flexible enough to adapt to the standards of the cloud provider.

Another reason is that many companies are not yet ready to give up a certain degree of control. This concerns not only infrastructure architecture, but also server management. Servers in the cloud are still considered owned virtual machines. In reality, receiving cloud services means using a cloud platform provided and owned by the cloud service provider.

As the world continues to adopt cloud, the situation is starting to improve. More and more client decision makers understand the opportunities provided by cloud computing and are willing to invest. Regulatory compliance is becoming more and more aligned, especially within the European Union. Let's see how cloud adoption develops in Europe!

The future of SmartCloud Enterprise+ aka Cloud Managed Services

Because of the recent SoftLayer acquisition, SmartCloud Enterprise (SCE) was sunset on January 31, 2014. There were just too many overlaps between the two offerings, and SoftLayer seemed to me the more mature platform with more functionality. So the fact that SCE was stopped (and functionality like SCAS merged into SoftLayer) was not much of a surprise.

But what does this mean for SCE+ – or, what should it mean for it?

First of all, it means a name change. As announced at this year's Pulse (IBM's cloud conference), SCE+ will be rebranded as Cloud Managed Services (CMS).

Second, the good news: CMS/SCE+ will stay, with a strong roadmap at least until 2017. (Well, the roadmap is specified until 2017, so it is very likely that CMS/SCE+ will stay even beyond 2017. But who knows what happens in IT in the next five years? 🙂)

But why two offerings anyway?

In essence, SCE+ and SoftLayer are positioned the same way as SCE+ and SCE were originally positioned:

  • SCE+ for cloud enabled workloads
  • SoftLayer for cloud native workloads

To understand this positioning a little bit better, let's discuss the current capabilities of each offering and the currently planned roadmap items:

SoftLayer

SoftLayer provides a highly flexible IaaS platform for cloud-centric workloads. The underlying infrastructure is highly standardized and gives the client full control over everything above the hypervisor, including the operating system. Even if the client subscribes to one of the offered management options, this mainly means that SoftLayer provides limited management tasks on a best-effort basis, without real SLAs, while the client maintains full admin access to its instances.

The platform provides high flexibility, so all kinds of setups can be implemented by clients, but the responsibility for a given setup remains with the client, not with SoftLayer. In a nutshell, SoftLayer provides an IaaS environment with a very high degree of freedom and control for clients, without taking over responsibility for anything above the hypervisor.

These capabilities fit cloud-centric or self-managed development (DevOps) workloads well, but are less suited to traditional highly available workloads like SAP.

Cloud Managed Services (formerly known as SmartCloud Enterprise+)

CMS, on the other hand, was designed and built to meet exactly the requirements of highly available, managed production workloads originally hosted in clients' data centers. CMS provides SLAs and technologies for accommodating highly available workloads, like clustering and disaster recovery (R1.4). While D/R setups can also be created on SoftLayer, the client must design, build and run them and cannot receive them as a service. This is the main differentiator in the SoftLayer/CMS positioning. CMS is less flexible above the hypervisor, as it provides managed, highly available operating systems as a service with given SLAs. To meet these SLAs, standards must be followed and the underlying infrastructure must be technically capable of providing them (for example, Tier 1 storage).

Due to the guaranteed service levels on the OS layer, this is IBM’s preferred platform for PaaS offerings of rather traditional software stacks like SAP or Oracle applications.

Summary

There are a lot of use cases where SoftLayer does not fit and CMS is the answer to fulfill the requirements. Based on the very clear distinction between the workloads targeted by SoftLayer and CMS, there is no reason to think about a CMS retirement.

What is hybrid cloud?

The National Institute of Standards and Technology defines hybrid cloud as “a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.” Although this definition sounds very reasonable, it does not cover all aspects of hybrid clouds.

Let’s discuss possible deployment models first. There are five defined cloud deployment models, from a private cloud on-premises to a public cloud service with a cloud service provider.

Cloud Deployment Models

Often, hybrid cloud refers to a combination of a public cloud service and a private cloud on-premises; however, hybrid clouds could also consist of two public clouds provided by different providers or even a combination of a cloud and traditional IT. Actually, a setup where existing systems on a traditional IT infrastructure are combined with a public cloud service is currently the most frequent use case of a hybrid cloud.

Any hybrid cloud setup has some challenges that need to be considered during the planning and design phase:

  • The most obvious challenge is network connectivity, especially if remote cloud services like a public cloud or a hosted private cloud are involved. Not only must bandwidth, latency, reliability and associated cost considerations be taken into account, but also the logical network topology must be carefully designed (networks, routing, firewalls).
  • Another huge challenge is the manageability of different cloud services. When different cloud services are used, every service provider will have its own management and provisioning environment. Those environments can be considered completely independent from each other. With instances in different cloud services, there is no complete picture showing the total number of deployed instances and their statuses. An orchestration layer can be a possible solution for this problem. This layer provides a single interface for all cloud-related tasks. The orchestration layer itself communicates with the different cloud services through application programming interfaces (APIs). The big advantage of an orchestration layer is the ability to track and control activities at a central point to maintain the big picture, as sketched below.
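Here is a minimal sketch of that orchestration idea: one thin layer that talks to each provider through its own driver and keeps a central inventory. The driver classes and the in-memory inventory are hypothetical placeholders, not real provider SDKs; a production orchestrator would call each provider's actual API:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One driver per proprietary provider API (hypothetical)."""
    @abstractmethod
    def provision(self, name: str, size: str) -> str: ...

class OpenStackDriver(CloudDriver):
    def provision(self, name, size):
        # a real driver would call the OpenStack compute API here
        return f"openstack:{name}"

class LegacyProviderDriver(CloudDriver):
    def provision(self, name, size):
        # a real driver would call the provider's proprietary API here
        return f"legacy:{name}"

class Orchestrator:
    def __init__(self, drivers):
        self.drivers = drivers
        self.inventory = []     # the "big picture" kept in one central place

    def provision(self, cloud, name, size):
        instance_id = self.drivers[cloud].provision(name, size)
        self.inventory.append({"cloud": cloud, "id": instance_id, "size": size})
        return instance_id

orchestrator = Orchestrator({"openstack": OpenStackDriver(),
                             "legacy": LegacyProviderDriver()})
orchestrator.provision("openstack", "web-01", "small")
orchestrator.provision("legacy", "db-01", "large")
print(orchestrator.inventory)
```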

Today, plenty of cloud service providers maintain their own proprietary set of APIs. This makes the use of orchestration very complex as the orchestrator requires some kind of a driver component for each proprietary API set. However, the trend of standardized APIs is clearly seen in the industry. OpenStack seems to be the future cloud industry standard.

Hybrid clouds mainly work on the infrastructure and application levels. On the infrastructure layer, a hybrid cloud means the combination of virtual machines from different cloud services. On the application or software as a service (SaaS) layer, a hybrid cloud describes an application setup with components in different SaaS offerings or in existing applications within the data center of an enterprise. The challenge in a SaaS-based hybrid cloud is mainly the exchange of data between the different services and applications. Just as orchestration works on the infrastructure level, data integrators work on the application layer.

Summary

A hybrid cloud is a combination of different clouds, be it private, public or a mix. The biggest challenge is the integration of the different cloud services and technologies. Standardized APIs such as OpenStack seem to solve most of those issues.

IBM SmartCloud Enterprise+ 1.3 in a nutshell

On November 19, 2013, IBM SmartCloud Enterprise+ (SCE+) version 1.3 was released. While every new SCE+ release has brought some interesting improvements, I’m particularly excited about 1.3. Tons of new features and improvements were implemented, making it worth having a closer look at the highlights of this version of SCE+.

Completely new portal. Let's be polite: the old portal had major room for improvement. The new portal was completely rewritten and now meets the requirements clients have for such an interface.

New virtual machine (VM) sizes. New standard configurations were introduced, including Jumbo for x86 VMs. However, what is even more important are the new maximum possible configurations for a single VM, which can be:

  • Up to 64 vCPUs
  • Up to 128 GB RAM
  • Up to 48 TB storage

These new configurations can enable more workloads to run on SCE+.

Clustering. Even more workloads can now be enabled because of the new clustering options. Clients can choose between operating system (OS) based clustering (for all operating systems and platforms supported on SCE+) or simple anti-collocation, which enables clients to cluster VMs at the application level. Anti-collocation means that two VMs will not be provisioned on the same physical host, to ensure availability of at least one node in case a host goes down.
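To illustrate the anti-collocation rule itself, here is a small sketch that checks whether any VMs of a cluster share a physical host. The VM names and host assignments are invented for the example; on SCE+ the actual placement is handled by the platform:

```python
# Hypothetical cluster and placement data, for illustration only.
cluster = ["app-node-1", "app-node-2"]

placement = {
    "app-node-1": "host-A",
    "app-node-2": "host-B",   # anti-collocation satisfied: different host
}

def violates_anti_collocation(cluster_vms, placement):
    """Return True if any two cluster VMs are placed on the same host."""
    hosts = [placement[vm] for vm in cluster_vms]
    return len(set(hosts)) < len(hosts)

print("Anti-collocation violated:", violates_anti_collocation(cluster, placement))
```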

It is important to mention that service level agreements (SLAs) are still based on the individual VM, so there is no aggregated SLA for a cluster.

Anti-collocation (and clustering) does not guarantee that the physical hosts are located in different physical buildings. Even in dual-site SCE+ data centers, the different nodes of a cluster might still be located at one site. This limitation could potentially be removed in a later release of SCE+.

Unmanaged instances. Clients can request unmanaged virtual machines on SCE+ with the following limitations:

  • Managed VMs cannot be transformed to unmanaged ones (or the other way around)
  • Clustering is not available on unmanaged VMs
  • Unmanaged VMs must be based on available SCE+ images; there is still no way to import custom images
  • Migration services are not available for unmanaged instances

Migration services. Migration services for x86 and IBM Power System platforms can now be contracted as an optional part of an SCE+ contract.

Active Directory integration. SCE+ now supports the integration of complex Microsoft Active Directory (AD) setups, including a client-specific isolated domain or even joining (managed) VMs to the client's AD forest.

Database and middleware alerting and management. In addition to management of the operating system, clients can now choose database and middleware management as an option, in two flavors:

  • Alerting only. The client maintains responsibility, but will be alerted by an automated monitoring system in case of failure.
  • Management. IBM provides management for selected database and middleware products (mainly IBM DB2 database software, MS SQL, Oracle, Sybase and IBM WebSphere products).

Custom hostnames and FQDNs. Custom hostnames and fully qualified domain names (FQDNs) can now be chosen during the provisioning of a server VM.

Load balancer as a service. Besides the currently available virtual software load balancer (vLBF), load balancing as a service is now also available. The new service is based on industry-leading hardware appliances and provides features like SSL offloading. Currently, load balancing is supported only within a single site.

Increased number of security zones. Although three security zones remain standard, clients can request up to 12 security zones during onboarding if required by the design of their environment. Additional security zones can also be requested after onboarding through a Request for Service (RFS), but provisioning is then subject to availability. However, there is a hard limit of 12 security zones per client.

Summary

SCE+ 1.3 is a milestone in terms of features and new possibilities. It enables a lot more workloads to be supported on SCE+ and SCE+ based offerings like IBM SmartCloud for SAP (SC4SAP) and IBM SmartCloud for Oracle Applications (SC4Oracle).