June 6th, 2012

Start Spreading the News… Cloud Expo, New York

Cloud Expo 2012 is almost here. This promises to be an incredible event, with thousands of attendees and over 100 speakers. As previously mentioned, I’m privileged to be presenting on Making Hybrid Cloud Safe & Reliable. I’m particularly excited that I’ll be introducing attendees to the new concept of API-Aware Traffic Management. It will also be great to be back in New York City!

I recently read Daniel Kahneman’s book Thinking, Fast and Slow, a fascinating study of how the human mind works. With the new capabilities offered by big data and Cloud computing — the dual themes for next week’s event — and the increasing personalization of technology through Mobile devices, I think we have an opportunity to make our digital systems more human in their processing. What does that mean? Well, more intuitive in user experience, more lateral through caching of unstructured data, and more adaptive to changing conditions. API-Aware Traffic Management certainly reflects this potential.

If you are going to be (or hope to be) at the event, add a response in the comments box or tweet to @MattMcLartyBC. Hope to see you there!

May 15th, 2012

API-Aware Traffic Management

As I mentioned in my last blog post, the promise of cost reduction is compelling many enterprises to move their workloads into the Cloud, but many IT leaders are reluctant to do so, for fear of compromising the security and availability of their services. These concerns are well-founded, but the benefits of Cloud are too great to ignore. To obtain these benefits, companies must adopt techniques that protect against the attendant risks, without compromise.

Many people are familiar with Layer 7’s industry-leading security functionality, so it’s no surprise that I’d recommend using our Gateway technology to protect connections from on-premise infrastructure to off-premise Cloud services. The flexibility of deployment options we offer makes it possible to create a network of secure on- and off-premise endpoints to meet the most stringent requirements. This covers security, but what about availability?

People seem to be less familiar with Layer 7’s routing capabilities. Our Gateway technology is optimized to perform flexible, content-based routing with negligible impact on overall transaction times. In the context of the Cloud, this means that traffic proxied by a Layer 7 Gateway can be redirected using intelligent algorithms and even dynamic, state-based awareness. This routing capability, which I call “API-aware traffic management”, brings huge benefits in ensuring availability when connecting to multiple API instances – on-premise, off-premise, in multiple Clouds… anywhere on the hybrid network.
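To make that a little more concrete, here is a minimal sketch of the idea in Python. It is purely illustrative and is not how the Layer 7 Gateway implements routing: the Endpoint and ApiAwareRouter names and the latency-smoothing heuristic are assumptions made for the example.

from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """One API instance: on-premise, off-premise or in another Cloud."""
    url: str
    apis: set = field(default_factory=set)  # API names this instance serves
    healthy: bool = True
    latency_ms: float = 0.0                 # rolling average, updated per call

class ApiAwareRouter:
    """Choose a target for each request using API identity plus live state."""
    def __init__(self, endpoints):
        self.endpoints = endpoints

    def route(self, api_name):
        # Content-based step: only instances that actually host this API.
        candidates = [e for e in self.endpoints
                      if api_name in e.apis and e.healthy]
        if not candidates:
            raise RuntimeError(f"No healthy instance for API '{api_name}'")
        # State-based step: prefer the lowest observed latency.
        return min(candidates, key=lambda e: e.latency_ms)

    def report(self, endpoint, ok, elapsed_ms):
        # Feed call results back so routing adapts to changing conditions.
        endpoint.healthy = ok
        endpoint.latency_ms = 0.8 * endpoint.latency_ms + 0.2 * elapsed_ms

router = ApiAwareRouter([
    Endpoint("https://onprem.example.com/orders", apis={"orders"}),
    Endpoint("https://cloud.example.com/orders", apis={"orders"}),
])
print("Routing to", router.route("orders").url)

The same request can be served by an on-premise or off-premise instance, and the choice adapts as health and observed latency change, which is exactly the availability benefit described above.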

I’ll be discussing this topic in detail at the upcoming Cloud Expo 2012, June 11-14 in New York City. This promises to be a great event, so I hope you can make it and attend my discussion!

April 30th, 2012

Cloud & Clear

It’s April in Vancouver, which got me thinking about clouds. Although the IT buzz in 2012 has been dominated by mobile and big data, Cloud computing is still a hot topic, especially since it is an enabler for both. In the public Cloud space, Google just launched Drive in the same week that Microsoft updated SkyDrive. In the private Cloud domain, IBM recently announced its PureSystems platform, which is along the same lines as the Exa- line from Oracle.

It will be interesting to see whether big enterprises buy into this “21st century mainframe” concept, but what’s clear is that enterprises now want to migrate critical workloads to the Cloud, en masse. To realize the true benefits of Cloud, many of these workloads will have to be running off-premise. But since many will remain on-premise, enterprises will be relying on hybrid Cloud infrastructure for their most significant IT services.

Security remains a major area of concern for organizations looking to leverage the Cloud. Increasingly, availability and reliability are also significant concerns, particularly since Amazon has had a few outages recently. In addition to addressing these concerns, enterprises are evaluating how they can optimize processing volumes to get maximum cost benefit from their Cloud deployments.

Please join me at the Cloud Expo, June 11-14 in New York, where I’ll be discussing solutions for each of these considerations. Hey, we should have blue skies by then!

April 23rd, 2011

Why Cloud Brokers Are The Foundation For The Resilient API Network

Amazon Web Services crashed spectacularly, and with it the illusion that cloud is reliable-by-design and ready for mission-critical applications. Now everyone knows that cloud SLAs fade like the phosphor glow in a monitor when someone pulls the plug from the wall. Amazon’s failure is an unfortunate event, and the cloud will never be the same. So what is the enterprise to do if it can’t trust its provider?

The answer is to take a page from good web architecture and double up. Nobody would deploy an important web site without at least two identical web servers and a load balancer to spray traffic between them. If one server dies, its partner handles the full load until operators can restore the failed system. Sometimes the simplest patterns are the most effective.

Now take a step back and expand this model to the macro level. Instead of a pair of web servers, imagine two different cloud providers, ideally residing on separate power grids and different Internet backbones. Rather than a web server, imagine a replicated enterprise application hosting important APIs. Now replace the load balancer with a Cloud Broker: essentially an intelligent API switch that can distribute traffic between the providers based both on provider performance and on a deep understanding of the nature of each API.

It is this API-centricity that makes a Cloud Broker more than just a new deployment pattern for a conventional load balancer. Engineers design load balancers to direct traffic to web sites, and their designs excel at this task. But while load balancers do provide rudimentary access to API parameters in a message stream, the rules languages used to articulate distribution policy are just not designed to make effective decisions about application protocols. In a pinch, you might be able to implement simple HTTP failover between clouds, but this isn’t a very satisfactory solution.

In contrast, we design Cloud Brokers from the beginning to interpret application-layer protocols and to use this insight to optimize API traffic management between clouds. A well-designed Cloud Broker abstracts existing APIs that may differ between hosts, offering clients a common view that is decoupled from local dependencies. Furthermore, Cloud Brokers implement sophisticated orchestration capabilities so they can interact with cloud infrastructure through a provider’s APIs. This allows the broker to take command of the applications the provider hosts. Leveraging these APIs, the broker can automatically spin up a new application instance on demand, or release under-utilized capacity. Automation of processes is one of the more important value propositions of cloud, and Cloud Brokers are a means to realize this goal.

For more information about Cloud Brokers, have a look at the Cloud Broker product page at Layer 7 Technologies.
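To make the pattern concrete, here is a minimal, hand-rolled sketch in Python. It is only an illustration of the broker idea described above, not the Layer 7 product: the Provider and CloudBroker classes, the /orders routing rule, the 500 ms threshold and the scale_up placeholder are all assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    base_url: str
    error_rate: float = 0.0   # fraction of recent calls that failed
    latency_ms: float = 0.0   # rolling average response time

def scale_up(provider):
    # Placeholder for an orchestration call to the provider's management API.
    print(f"Requesting extra capacity from {provider.name}")

class CloudBroker:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def choose(self, api_path):
        """Distribute traffic by provider performance plus knowledge of the API."""
        # Example API-aware rule: keep order-writing calls on the primary while
        # it is healthy, because the replicated application syncs asynchronously.
        if api_path.startswith("/orders") and self.primary.error_rate < 0.05:
            return self.primary
        # Otherwise send the call to whichever provider is responding faster.
        return min((self.primary, self.secondary), key=lambda p: p.latency_ms)

    def rebalance(self):
        # Orchestration hook: spin up or release capacity through provider APIs.
        for p in (self.primary, self.secondary):
            if p.latency_ms > 500:   # hypothetical threshold
                scale_up(p)

broker = CloudBroker(
    Provider("cloud-a", "https://api.cloud-a.example.com"),
    Provider("cloud-b", "https://api.cloud-b.example.com"),
)
print("Route /orders to:", broker.choose("/orders").name)

The point of the sketch is the decision logic: unlike a generic load balancer, the broker reasons about which API is being called and what state each provider is in, not just whether a server is up.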