Why IBM should acquire Blackberry

After some weeks of negotiating with possible buyers, Blackberry (formerly known as Research In Motion, RIM) decided that it is not for sale and stopped all acquisition talks. However, the various press articles about possible buyers made me think about what it would mean for both companies if IBM were to acquire Blackberry.

To my surprise, the two would actually be a very good match.

Why this would be good for Blackberry:

Blackberry’s revenue has been declining for years. Even the release of the new Blackberry OS 10 did not bring the expected turnaround. Therefore, Blackberry was looking for a partner who would not only secure its future existence but also add new distribution channels and opportunities for its assets.

IBM would provide these new distribution channels by bundling Blackberry hardware and services with IBM professional services at the enterprise level.

Why this would make sense for IBM:

One key point of IBM’s overall strategy is Mobile, together with Cloud, Big Data / Analytics and Social Media. The Blackberry products – especially Blackberry OS 10 – could be an ideal platform for IBM’s Mobile First initiative. Although Blackberry is not very popular in the consumer space and its market share there is eroding, it still maintains a rather strong share in the enterprise arena. And there is a reason for that: Blackberry meets more enterprise requirements than any other mobile platform does.

Manageability and Control – it has always been a core concept of Blackberry to let an enterprise keep control over its devices from a central management point.

Security – Blackberry can be considered the only truly secure mobile platform. iOS and Windows Phone are proprietary solutions where any enterprise depends on the vendor company, and the open Android platform suffers from frequent security vulnerabilities. Blackberry, in contrast, still maintains strong market credibility for its security concept and policies.

Data separation – a container-based concept separates critical business data from private data.

Summary:

Buying Blackberry would give IBM a strong enterprise-oriented mobile platform and would make IBM independent of other mobile vendors’ strategies and limitations. A solid solution built from Blackberry technology and IBM software, services and distribution channels would certainly be a strong player and hard to beat. Unfortunately, none of this is very likely to happen anytime soon 🙂

IBM SmartCloud Enterprise+ disaster recovery considerations for DB2

Disaster recovery on IBM SmartCloud Enterprise+ (SCE+) usually refers to infrastructure-based disaster recovery. Disaster recovery (DR) solutions at the infrastructure level replicate whole virtual machines (VMs), including all data, from the main production site to the DR site. The advantage of this kind of approach is that if a disaster occurs, an exact copy of the production environment, including all OS settings and patches, is available on the DR site. The VMs on the DR site can then be started and take over the load quite seamlessly (apart from the nasty reconfiguration of site-specific network settings such as IP ranges).

It is planned to provide an infrastructure as a service (IaaS) DR solution as part of the SmartCloud Enterprise + offering in a later release.

Although IaaS DR solutions do their job well, they are rather expensive and complex. Mirroring complete virtual machine images not only costs a lot of storage space but also consumes considerable network bandwidth and traffic. So the question that solution architects should ask is whether IaaS-based DR is really required!

In many scenarios, a more cost-effective and less complex approach is application-level disaster recovery. Let’s take DB2 as an example of the many middleware products and applications that can either be clustered at the application level or keep a cold standby on the side. DB2 lets us leverage its HADR (high availability disaster recovery) function to collect all database update operations and queue them for distribution to other nodes.

Those collected updates can be sent over the network, using a variety of technologies or protocols. The interval between send operations depends on the recovery point objective (RPO) target. A shorter interval between send operations provides a better RPO but might generate more data traffic.

(Figure: DB2 HADR setup)
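To give a rough idea of what such a setup involves, here is a minimal sketch (not an official IBM sample) that drives the DB2 command line processor from Python to set the HADR configuration parameters on one node. The host names, service ports, database name and the choice of NEARSYNC as sync mode are illustrative assumptions, and in a real setup the standby database is first created from a backup of the primary before HADR is started.

```python
# Minimal sketch: pointing one DB2 node at its HADR partner by running
# DB2 CLP commands from Python. Host names, ports and the database name
# are placeholders for this example.
import subprocess

DB_NAME = "SALESDB"
PRIMARY = {"host": "db2-prod.example.com", "svc": "55001"}
STANDBY = {"host": "db2-dr.example.com",   "svc": "55001"}

def db2(command: str) -> None:
    """Run a single DB2 CLP command and raise if it fails."""
    subprocess.run(["db2", command], check=True)

def configure_hadr(local: dict, remote: dict, role: str) -> None:
    # Tell the local database where it lives and where its partner is.
    db2(f"update db cfg for {DB_NAME} using HADR_LOCAL_HOST {local['host']}")
    db2(f"update db cfg for {DB_NAME} using HADR_LOCAL_SVC {local['svc']}")
    db2(f"update db cfg for {DB_NAME} using HADR_REMOTE_HOST {remote['host']}")
    db2(f"update db cfg for {DB_NAME} using HADR_REMOTE_SVC {remote['svc']}")
    # NEARSYNC trades a slightly weaker guarantee than SYNC for lower latency;
    # ASYNC or SUPERASYNC reduce network traffic further at the cost of a larger RPO.
    db2(f"update db cfg for {DB_NAME} using HADR_SYNCMODE NEARSYNC")
    # Prerequisite not shown: restore the standby from a backup of the primary.
    db2(f"start hadr on db {DB_NAME} as {role}")  # start the standby first

if __name__ == "__main__":
    # On the DR node: configure and start the standby first ...
    configure_hadr(local=STANDBY, remote=PRIMARY, role="standby")
    # ... then run the mirror-image call on the production node:
    # configure_hadr(local=PRIMARY, remote=STANDBY, role="primary")
```

The RPO versus traffic trade-off discussed above maps directly onto the HADR_SYNCMODE setting chosen here.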

Another advantage of such a setup is that failover tests are easy to perform. Because the DR systems are up and running all the time, their proper function can be verified at any time simply by accessing them.

However, the drawback of such an application-level DR solution is that the standby system must be up and running all the time, and it must be ensured that all updates to the primary system itself (such as configuration changes and software patches) are also applied to the standby servers.

Summary

Application-level disaster recovery is not the solution for each and every scenario, but it can be a valid, cost-effective and less complex alternative. Sometimes a combination of infrastructure- and application-level DR might be the best solution for an environment.

Trust, but verify!

If you ask five people to give a definition of cloud computing, you get six answers! This is a common joke about cloud computing. The same might be true if you ask cloud service providers about their service portfolio and service levels. Today, the cloud service provider market is very diverse. You find all kinds of providers, from small local startups to large enterprises that are well known in the IT business.

Each of these cloud service providers might have their own set of services, business model and client base. Depending on their background, the understanding of Service Level Agreements (SLAs) and what they could mean for clients is completely different. There is no right or wrong here, but every client needs to make sure that the service provider’s perception of SLAs does meet the client’s expectations and requirements.

Penalty driven SLAs are useless!

Cloud computing is a service offering, and SLAs are a valid way to describe what a client actually gets. So, we need SLAs to define a cloud service. However, the main question is: how much is the given SLA for a specific service actually worth? In other words, how motivated is the service provider to meet the SLA?

Today, a common approach to strengthening an SLA is to attach penalties for missing it. I can give you two reasons why I think penalty-driven SLAs are useless:

  • Penalty-driven SLAs are mainly agreed for critical workloads or infrastructures, because these SLAs are more expensive than penalty-free Service Level Objectives (SLOs). However, for such important workloads, the agreed penalty will never cover the actual loss a client suffers when an important or even critical system goes down.
  • Penalties on SLAs are a financial risk for the service provider. And this risk is usually priced into the service. So, in effect, clients pay their own penalties over the duration of the contract, as the back-of-the-envelope example below illustrates.
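To make the second point concrete, here is a toy calculation. All numbers (fee, penalty amount, miss rate) are made-up assumptions for illustration only, not figures from any real offering:

```python
# Toy illustration with made-up numbers: a provider that expects to pay an SLA
# penalty now and then simply prices that expected cost into the service fee.
monthly_fee = 10_000          # baseline monthly charge (illustrative)
penalty_per_miss = 5_000      # contractual penalty per SLA miss (illustrative)
expected_misses_per_year = 2  # the provider's own risk estimate (illustrative)

# Expected penalty cost, spread over twelve monthly invoices.
risk_premium = penalty_per_miss * expected_misses_per_year / 12
priced_fee = monthly_fee + risk_premium

print(f"Fee without penalty risk: {monthly_fee}")
print(f"Fee with penalty risk priced in: {priced_fee:.0f}")
# Over a full year the client has effectively pre-paid the penalties it might
# get back, while the real cost of a critical outage is far higher than 5,000.
```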

But what can a client do to make sure they contract the right cloud service provider? One recommendation would be to analyze the cloud service provider’s understanding of SLAs and its background as a service provider in general.

A few questions to ask:

  • Experience and operational excellence — is the provider well known in the market for providing services, does he have proper processes in place, and does he comply with industry standards (ITIL) and certifications (ISO, SSAE)?
  • Is being a cloud service provider his core business, or is it just something he does to utilize his IT equipment?
  • What is his typical client base, and how important is his reputation as a service provider to him?

Summary

When choosing a cloud service provider, penalty-driven SLAs shouldn’t be trusted blindly; rather, verify how likely it is that those SLAs will actually be met. Is the provider just gambling, or do his technical infrastructure, processes and culture give me confidence? A service provider that needs a $100 penalty as motivation to keep its SLAs might not be the right choice!

Cloud Computing: it’s about trust!

Google announced that it will discontinue its very popular GoogleReader service by July 1, 2013. For many people, this was a real shock. GoogleReader is perceived as one of the most useful tools out of the Google application portfolio and many users built their daily news consumption completely on this service.

A journalist at one of the biggest German computer magazines (c’t, Computer Technik) found direct words for Google’s announcement: “The sunset of GoogleReader kills any trust in Google.” But why are people so upset? It’s not the first time that Google has stopped one of its services. Actually, since Larry Page took over, services have been discontinued on a frequent basis. I think the reason for users to be angry now is the fact that nobody expected Google to get rid of one of its most popular services on such short notice.

However, transparency never was Google’s strength anyway.

Let’s take this move from Google as an opportunity to discuss what we can learn about the relationship between cloud service providers and consumers.

First of all, let’s discuss why Google is doing this: because it doesn’t make money with GoogleReader. Google’s business model is to sell advertisements. Most Google services are only vehicles to reach users or to gather data that is either sold or used to better customize the advertisements and commercial banners shown to certain users. GoogleReader apparently did not seem valuable enough to support this business model.

So, one important lesson we can learn is to always understand the service provider’s business model and how this is supported by the service you plan to consume.

Next, we can think about who the regular clients of this service provider are. Is the provider targeting mainly consumers or large enterprise clients? In the large enterprise space, continuity of services is considered very important: migration to or from a certain service can cost a fortune, and steps are well planned and usually long term. Consumer-oriented providers, on the other hand, tend to be more on the bleeding edge of technology, which includes the risk of failing or discontinuing a service.

Who is the regular client base of a service provider and how do I fit in there?

Although Google mainly serves the consumer space with its services, it has also tried to target larger companies with its GoogleDocs and Gmail offerings. I don’t know whether the end of GoogleReader was a very smart move toward succeeding in this space.

Making SCE+ elastic

IBM SmartCloud Enterprise+ (SCE+) is a highly scalable cloud infrastructure targeted at production, cloud-enabled managed workloads. These target workloads make SCE+ a little bit heavier than SmartCloud Enterprise (SCE). Onboarding a new client alone takes a number of days to establish all the management systems for that client. To some extent, this applies to provisioning virtual machines, too. Because a service activation process must be completed before a managed system can be put into production, provisioning a virtual instance takes at least a few hours, instead of just minutes as on SCE. However, that is still a major improvement compared to traditional IT, where provisioning a new server can take up to several weeks!

The benefit of SCE+ is the management and high reliability of the platform. This makes it the perfect environment for core services and production workloads that can grow over time to meet the business requirements of tomorrow. Even scaling down is possible to a certain extent. However, if you need to react to short, heavy load peaks, you need a platform that is truly elastic and can scale up and down on an hourly basis.

What you need is an elastic component on top of SCE+: leverage SCE+’s high reliability for the important core functions of your business application, but retain the ability to scale out to a much lighter – and probably cheaper – platform to cover short load peaks. SCE provides all the features you would expect from such an elastic platform.



As shown in the video, f5’s BIG-IP appliance can be used to monitor the load on servers and scale out dynamically to SCE as required; a simplified sketch of such a control loop follows below. The base infrastructure could be SCE+ or any other environment. What is required, however, is a network connection between SCE and SCE+, as this is not part of either offering.
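To make the scale-out idea more tangible, here is a highly simplified, vendor-neutral sketch of the kind of threshold-based control loop such an appliance or an external script could implement. The helper functions (average_utilization, provision_burst_instance, deprovision) are hypothetical placeholders, not real BIG-IP or SCE APIs.

```python
# Illustrative sketch only: a threshold-based scale-out loop. The helpers below
# are hypothetical placeholders for load-balancer queries and cloud provisioning.
import time

SCALE_OUT_THRESHOLD = 0.80   # add burst capacity above 80% average utilization
SCALE_IN_THRESHOLD = 0.30    # release burst capacity below 30%

def average_utilization(pool: list) -> float:
    """Placeholder: ask the load balancer for the pool's average utilization."""
    raise NotImplementedError

def provision_burst_instance() -> str:
    """Placeholder: request a new instance on the elastic platform (e.g. SCE)."""
    raise NotImplementedError

def deprovision(instance_id: str) -> None:
    """Placeholder: return a burst instance once the peak is over."""
    raise NotImplementedError

def control_loop(core_pool: list) -> None:
    burst_instances = []
    while True:
        load = average_utilization(core_pool + burst_instances)
        if load > SCALE_OUT_THRESHOLD:
            burst_instances.append(provision_burst_instance())
        elif load < SCALE_IN_THRESHOLD and burst_instances:
            deprovision(burst_instances.pop())
        time.sleep(60)  # re-evaluate once per minute
```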

The easiest way to establish layer-3 connectivity between SCE and SCE+ is to create a Virtual Private Network (VPN) tunnel between the two environments. Several options are feasible:

  • Connecting a software VPN device or appliance on SCE to the official SCE+ VPN service
  • Connecting a hardware VPN device or appliance on SCE+ to the official SCE VPN service
  • Using the Virtela Meet VPN service to interconnect VPN tunnels from both environments.

Summary:

SmartCloud Enterprise can be used to make SmartCloud Enterprise+ elastic. Several options exist to interconnect the two environments and leverage the benefits of both for specific use cases!

The future of managed cloud services

At the very beginning, there were unmanaged virtual machines as a service. These services were mainly used for development and test purposes or to host horizontal, born-on-the-web workloads. For more traditional production workloads, another type of cloud service is required. This service needs to be more fault tolerant, to provide the availability required for so-called cloud-enabled workloads, but it also needs to provide a certain management capability to ensure stable operation. Managed infrastructure as a service (IaaS) solutions such as IBM’s SmartCloud Enterprise+ provide these service levels and capabilities.

But is this the future of managed cloud services?

I don’t think so. Although this model fits some workloads, it is just the start of the journey.

The ultimate goal of managed cloud services is to receive higher-value services out of the cloud. In my opinion, the future of managed cloud services lies in platform as a service (PaaS) offerings.

Today’s managed IaaS offerings create challenges both for the service provider and for the service consumer. One reason is the higher complexity caused by the shared responsibility for an IaaS-managed virtual machine (VM): the service provider manages the VM up to the operating system level, but the client is responsible for the middleware and application on top. In such a setup it is extremely hard to cleanly separate the operating system (OS) from the services above it, because many applications require specific OS settings or access to operating system resources.

A managed PaaS overcomes these challenges by providing a consistent software stack from a single source. All components are designed for the purpose of the platform: if the PaaS service is a database, the OS settings, I/O subsystem and storage are designed and configured to provide a robust service with decent performance. The client can rely on the service provider’s expertise and does not need to support a database – or any other middleware – on an operating system platform whose configuration he has no real insight into and whose settings he does not fully control.

In my previous blog post on how middleware topology changes because of cloud, I discussed the change in middleware and database architecture. This is exactly where PaaS comes into play: by moving away from consolidated database and middleware clusters, clients need agile, elastic and managed database and middleware services to handle the higher number of instances.

Summary

Due to its complexity and limitations, managed IaaS will become a niche offering for accommodating non-standard software. For middleware and databases with a certain market share, managed PaaS offerings will become a commodity.

Middleware topology changes because of cloud

Once upon a time, applications ran on physical servers. These physical server infrastructures were sized to accommodate the application software as well as required middleware and database components. The sizing was mainly based on peak load expectations because only limited hardware upgrading was possible. This led to a very simple application landscape topology. Every application had its set of physical server systems. If an application had to be replaced or upgraded, only those server systems were affected.

When the number of applications grew, the number of server systems reached levels that were hard to manage and maintain. Consolidation became the trend of that time, and as virtualization technologies gained maturity, capacity upgrades became as easy as moving a slider. After the consolidation of the physical layer in the late 90s and early 2000s, the middleware and database layer was consolidated as well. Starting around 2005, we saw database hotels and consolidated middleware stacks providing a standardized layer of capabilities to the applications.

Although this setup helped streamline middleware and database management and standardize the software landscape, it introduced a number of problems:

The whole environment became more complex. Whenever a middleware stack was changed (due to a patch or even a version upgrade), multiple applications were affected and had to be retested. Maintenance windows needed to be coordinated with all application owners, and unplanned downtime had a high impact on a larger number of applications.

Modern cloud computing is reversing this trend again. Because provisioning and management of standard middleware and database services can be highly automated, deploying and managing a larger number of smaller server images takes less effort than it did in the early days. By de-consolidating these middleware and database blocks, we regain flexibility and end up with a far less complex environment.

There is another positive side effect of this approach: when application workloads are bundled together, they can more easily be moved to a fit-for-purpose infrastructure. Especially when some workloads are migrated into the cloud while others stay on a more traditional IT infrastructure, this model helps move the isolated workloads without affecting the others.

Summary

I am not saying that deconsolidation of database and middleware blocks is the holy grail of middleware topology architecture, but in a cloud environment it can help to get rid of complex integration problems while not introducing new ones.

Active Directory on a managed IaaS

In a hybrid cloud environment, parts of the infrastructure are located in a public or shared cloud environment whereas other parts are in a different environment, either on a private cloud or on a traditional infrastructure. As long as this is all managed by one service provider, there is not much of a problem. But usually that’s not the case.

While servers located in the traditional infrastructure are often managed by the client himself, the servers hosted in a managed shared-cloud environment are operated by the service provider of that cloud. As long as we are talking about managed infrastructure as a service (IaaS), this management extends up to the operating system level. Everything above the operating system normally remains the client’s responsibility, because the client knows the combination of middleware and application best.

This setup leads to all sorts of challenges. For all servers in the cloud there is a strict responsibility boundary; however, the layers above the OS depend heavily on the OS settings, and it is very hard to isolate the impact of changes made in one layer on the other. The situation gets even more challenging when services span not only the responsibility boundary within a single host but also different environments (public/shared cloud and private cloud/traditional IT).

Microsoft Active Directory currently gives clients and service providers some headaches.

Let’s briefly look at the interests of the different parties:

The service provider wants to maintain exclusive administrative rights at the OS level for the servers under his responsibility. Otherwise it would be impossible to guarantee any service level agreements (SLAs) or a contracted level of security.

The client wants the servers belonging to him in a single, or at least a consistent, environment. This starts with a server naming convention but also includes DNS suffixes and namespaces.

At first sight, these requirements sound reasonable, but with respect to Microsoft Active Directory they are somewhat conflicting.

When we talk about exclusive administrative rights at the OS level in a Microsoft Active Directory (ADS) environment, we need to separate the environments by responsibility into different ADS forests. Otherwise, the owner of the forest root domain automatically holds Enterprise Admin rights and can create domain and server admin user IDs in all subdomains of the forest at will.

(Figure: ADS trust relationship)

However, if we split the servers into two different ADS forests, they also live in different namespaces. Furthermore, we need a solution for how users and services in one forest can access resources in the other. This can be handled by trust relationships, but trusts introduce a lot of complexity and are a perfect source for all kinds of problems.

And there is another limitation of the two-forest solution: no domain controllers of the forest the client owns can be hosted in the cloud environment. That is a real problem, especially considering that most clients would like to move their simple Windows workloads (such as domain controllers) into the cloud.

There are no easy answers on that.

Another solution could be to reduce complexity by moving all Windows servers into the cloud and letting the cloud provider manage not only the server OS but also the Active Directory service. However, this would require the service provider to offer ADS management as a service, including all tasks that come along with it (such as OUs, user IDs, certificates and public keys).

Another possibility is for one party not to insist on exclusive administrative rights and to accept this as a risk. If ownership of the domains lies with the service provider, the client can be given the rights to operate his ADS settings and, possibly, local server admin IDs for the servers outside the cloud.

(Figure: ADS single domain)

Summary

There is currently no single solution to this problem. The client’s requirements and the service provider’s capabilities need to be considered when designing the future environment. In any case, this needs to be done carefully and well in advance to limit surprises later on!

The software defined datacenter

It went through the press recently: “VMware to acquire Nicira, a network infrastructure company.” But why should a virtualization company buy a network specialist, especially as VMware maintains a strong partnership with Cisco, one of the world’s leading network equipment vendors?

The answer can be found in VMware’s vision, announced in this year’s VMworld keynote by Pat Gelsinger:

VMware wants to become the operating system for datacenters. They call this the software defined data center and define it by the following three criteria:

  1. All components of the infrastructure are virtualized.
  2. They are delivered automated, as a service.
  3. The automation is done completely by software.

In a nutshell, this reads like yet another definition of cloud computing in a data center context.

In today’s data center infrastructure, we see three types of core components:

  • Compute
  • Network
  • Storage

Virtualizing compute power is VMware’s home ground. That’s what they do; that’s what they are known for. Since the announcement of vCloud Director, they can also claim to provide the necessary software to deliver virtual machines (VMs) as a service, or to enable providers to do so. Software defined computing power is available today!

However, as Pat Gelsinger pointed out in his VMworld keynote, that’s only half of the journey. Once you have your virtual machine provisioned, you still need the supporting components to be able to fully consume the service. Having a VM provisioned in minutes doesn’t help if you then need to wait days (or even weeks) for the required firewall settings.

In this context, the acquisition of Nicira makes a lot of sense! Nicira, a company that specializes in software defined networking, addresses exactly this issue. I am sure we will see a lot of Nicira’s technology in the next release of vCloud Director or in other new VMware products in the upcoming year. Integrating Nicira’s capability to provision network functions automatically through software is the logical next step toward the operating system for datacenters.

What about storage?

Storage virtualization automation has been around for years, in one form or another. I am not sure if we will see VMware do acquisitions in that space, especially as VMware is owned by EMC². However, the vision of a software defined datacenter seems tempting also for storage companies. NetApp CEO Tom Georgens stated at VMworld that NetApp wants to be for storage what VMware is for servers.

Summary

VMware is trying to counter recent attacks by Microsoft and Red Hat on its core business of server virtualization with a compelling vision of the operating system for the data center. I do agree that the software defined datacenter will be the datacenter layout of the future. However, I strongly believe that VMware will not be the only one capable of achieving this vision in the near future; Microsoft, IBM and Red Hat are bringing their products into shape, too. What was declared the hypervisor war is turning out to be the battle for the datacenter!

This blogpost was originally published on Thoughts On Cloud!

How SmarterComputing solutions can work for you

Last Saturday I finally registered for a car2go membership. You might have seen the white and blue Smart cars in selected cities. Car2go is basically a car sharing offering; however, what fascinates me about it is the full integration with SmarterComputing and cloud technologies, which brings real value for me as a member.

I have to admit that my curiosity about how they managed to bring all these technologies together was one of my main motivations to join. I own a car and a motorbike, and public transport in my hometown of Vienna is definitely among the best worldwide, so I am not really dependent on another transport option. However, the whole concept of a car sharing offering, well integrated with the Internet and mobile devices, caught my attention. Besides, I had always wanted to know what it is like to drive a Smart car, and this is definitely one of the cheapest ways to find out.

Registration is as simple as it can be: you fill out a web form and then just stop by the car2go shop to show your driver’s license and pick up your membership card. Nothing extraordinary up to that point, but now it starts getting thrilling, almost as described by Sebastian in his blog post about driving as a service (http://thoughtsoncloud.com/index.php/2011/11/driving-as-a-service/).

You either look on the Internet or use the car2go smartphone app to find out where the nearest free car is parked, shown on a Google Maps overlay. You can then reserve a car simply by clicking on it in the map; it will be marked as occupied for the next 30 minutes so that you can comfortably walk over and check in. Checking in means you hold your NFC-equipped membership card against the lower left corner of the windshield, where a device in the car reads the data from the card and, after validation, unlocks the car for you.

Because the key is in the car, you can then just drive around while a per-minute fee is charged to your account, regardless of how many kilometers you drive. The fee is quite reasonable.

Of course, the car2go area is limited. You can drive outside that area, but you can check out only within the boundaries. The car always knows its exact position and sends it to the central systems. Checking out is as easy as checking in: you just place your membership card on the same spot on the windshield and the car is locked. As soon as you check out, the car is immediately shown as available on the maps, with its exact position!

I would really like to see more offerings of this kind. It shows what SmarterComputing can do for you in your day-to-day life!