May 17th, 2011

Upcoming Cloud Identity Talk At TMForum Management World

I’ll be delivering a presentation at TMForum Management World on Wednesday, May 25, 2011 in Dublin, Ireland. My talk is the second presentation in the Carrier Grade Cloud: Secure, Robust and Billable session. I’m scheduled to speak between 5:00 and 5:30pm, which makes it a perfect way to end the day before retiring to a fine Irish pub. This talk suffers from the rather prosaic title Implementing Identity and Access Control and Management in the Cloud, but the actual content is great, and I promise to deliver a very entertaining show. This is actually a new talk, and I was fortunate enough to have an opportunity to rehearse it last weekend for the Western Canadian Engineering Students’ Society Team (WESST), which met at Simon Fraser University. Students can be a surprisingly tough crowd, but we all had a good time and I was able to work out some of the bugs in the flow. I’m sure it will play well in Dublin. I hope you can join me next week at TMForum. We can all sneak out afterward for a pint of Guinness.
May 16th, 2011

Amazon’s Mensis Horribilis

Hot on the heels of Amazon Web Services’ prolonged outage late last month, Bloomberg has revealed that hackers used AWS as a launch pad for their high-profile attack against Sony. In a thousand blogs and a million tweets, the Internets have been set abuzz with attention-seeking speculation about reliability and trust in the cloud. It’s a shame, because while these events are noteworthy, in the greater scheme of things they don’t mean much.

Few technologies are spared a difficult birth. But over time, with continuous refinement, they can become tremendously safe and reliable, something I’m reminded of every time I step on an airplane. It never ceases to amaze me how well the global aviation system operates. Yes, it has its failures, and these can be devastating; but overall the system works and we can place our trust in it. This is governance and management and engineering working at the highest levels.

Amazon has been remarkably candid about what happened during their service disruption, and it’s clear they have learned much from the incident. They are changing process, refining technology, and being uncharacteristically transparent about the event. This is the right thing to do, and it should actually give us confidence. The Amazon disruption won’t be the last service failure in the cloud, and I still believe that any enterprise with reliability concerns should deploy Cloud Service Broker (CSB) technologies. But the cloud needs failure to get better—and it is getting better.

In a similar vein, to overreact to the Sony incident is to miss what actually took place. The only cloud attribute the hackers leveraged on Amazon was convenience. This attack could have been launched from anywhere; Amazon simply provided barrier-free access to a compute platform, which is the point of cloud computing. It would be unfortunate if organizations began to blacklist general connections originating from the Amazon AWS IP range, as they already do for email originating from this range because of an historical association with spam. In truth this is another example of refinement by cloud providers, as effective policy control in Amazon’s data centers has now largely brought spam under control.

Negative impressions come easily in technology, and they are hard to reverse. Let’s hope that these incidents are recognized for what they are, rather than taken as indicators of a fundamental flaw in cloud computing.


May 13th, 2011

NIST Seeks Public Input On New Cloud Computing Guide

What is the cloud, really? Never before have we had a technology that suffers so greatly from such a completely ambiguous name. Gartner Research VP Paolo Malinverno has observed that most organizations define cloud as any application operating outside their own data centre. This is probably as lucid a definition as any I’ve heard.

More formalized attempts to describe cloud rapidly turn into essays that try to bridge the abstract with the very specific, and in doing so seem to miss the cloud for the clouds. Certainly the most effective comprehensive definition has come from the National Institute of Standards and Technology (NIST), and most of us in the cloud community have fallen back on this most authoritative reference when clarity is important.

Now is our chance to give back to NIST. To define cloud is to accept a task that will likely never end, and the standards boffins have been working hard to continually refine their work. They’ve asked for public comment, and I would encourage everyone to review their latest draft of the Cloud Computing Synopsis and Recommendations. This new publication builds on the basic definitions offered by NIST in the past, and at around 84 pages, it dives deep into the opportunities and issues surrounding SaaS, IaaS, and PaaS. There is good material here, and with community input it can become even better.

You have until June 13, 2011 to respond.


May 11th, 2011

Layer 7 to Demonstrate Cloud Network Elasticity at TMForum Management World in Dublin

I’ll be at the TMForum Management World show this May 23-26, 2011 in Dublin, Ireland to participate in the catalyst demonstrating cloud network elasticity, which is sponsored by Deutsche Telekom and the Commonwealth Bank of Australia. For those of you not yet familiar with TMForum, it is (from their web site) “the world’s leading industry association focused on enabling best-in-class IT for service providers in the communications, media, defense and cloud service markets.” We’ve been involved with the TMForum for a couple of years, and this show in Dublin is going to showcase some major breakthroughs in practical cloud computing.

TMForum offers catalysts as solution proofs-of-concept. A catalyst involves a number of vendors that partner to demonstrate an end-to-end solution to a real problem faced by telco providers or the defense industry. This year, we’re working closely with Infonova, Zimory, and Ciena to showcase a cloud-in-a-box environment that features elastic scaling of compute resources and network bandwidth on demand, all of which is fully integrated with an automated billing system. We think this solution will be a significant game-changer in the cloud infrastructure marketplace, and Layer 7’s CloudControl product is a part of this solution. CloudControl plays a crucial role in managing the RESTful APIs that tie together each vendor’s components.

What excites me about this catalyst is that it assembles best-of-breed vendors from the telco sector to create a truly practical elastic cloud. Zimory contributes the management layer that transforms simple virtualized environments into clouds. We couple this with Ciena’s on-demand network bandwidth solutions, allowing users to acquire guaranteed communications capacity when they need it. Too often, cloud elasticity starts and stops with CPU; Ciena’s technology ensures that the network resource factors into the elastic value proposition. The front end is driven by Infonova’s BSS system, ensuring that all user actions are managed under a provider-grade billing framework. And finally, Layer 7’s CloudControl operates as the glue in the middle to add security and auditing, integrate disparate APIs, and provide application-layer visibility into all of the communications between different infrastructure components.

Layer 7's CloudControl acts as API glue between cloud infrastructure components.

I hope you can join me at TMForum Management World this month. We will be giving live demonstrations of the elastic cloud under real-world scenarios given to us by Deutsche Telekom and Commonwealth Bank. This promises to be a very interesting show.
May 11th, 2011

Using API Keys Effectively

A common way to use API keys for authenticating web API consumption is to ask the requester to include the key directly in the query parameters of the call, as illustrated below:

http://apis.acme.com/resources/blah/foo?app_id=myid&app_key=mykey

The term ‘key’ in this case can be misleading. A key is normally used to perform some sort of cryptographic operation, typically a signature. Using the API key above is no different from sending a password in the clear, as in:

http://apis.acme.com/resources/blah/foo?login=mylogin&password=mypassword

In both cases, nothing is signed, and the shared secret is sent alongside each call. If the request is sniffed by a malicious intermediary (think MITM), the attacker can impersonate the legitimate requester. A secure channel is therefore needed to send such messages. Even on a secure channel, this approach causes a number of security issues: for example, you want to avoid these shared secrets showing up in your traffic logs or being rendered to web pages in a browser-based portal.

Other well-known API service providers (such as AWS and Azure) use an HMAC signature-based authentication model. HMAC (Hash-based Message Authentication Code) combines a hash function with a symmetric key. This still relies on a shared secret, but in this case the secret is not included in the request. Instead, the request carries an HMAC signature in the Authorization HTTP header (the RESTful location for such signatures and tokens). This HMAC covers essential parameters such as the HTTP verb, the payload, the payload type, a date, and so on. Even if the request is intercepted, the HMAC cannot be reused beyond a short period of time, and it cannot be used at all if any of these critical aspects of the request are altered. Using the same shared secret, the recipient can verify the authenticity of the message and the identity of the requester. Authentication and integrity are both achieved.

Below is an example of the HMAC construct used by AWS:

Authorization: AWS <KeyId>:<base64(hmac-sha1(VERB + CONTENT-MD5 + CONTENT-TYPE + DATE + …))>
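
To make this concrete, here is a minimal sketch of this style of signing in Python. The key ID, secret, and request values are placeholders, and the string-to-sign covers only the fields shown in the construct above:

import base64
import hashlib
import hmac

def sign_request(key_id, secret, verb, content_md5, content_type, date):
    # Concatenate the covered fields, newline-separated, into the string-to-sign
    string_to_sign = "\n".join([verb, content_md5, content_type, date])
    # HMAC-SHA1 over the string-to-sign, keyed with the shared secret
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return "AWS %s:%s" % (key_id, base64.b64encode(digest).decode())

# Example: produces a value suitable for the Authorization header
print(sign_request("AKIDEXAMPLE", "my-shared-secret", "PUT",
                   "c8fdb181845a4ca6b8fec737b3581d76", "text/plain",
                   "Tue, 17 May 2011 09:58:23 GMT"))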

Using the Layer 7 API Proxy, you can use such HMAC signatures to authenticate incoming requests on behalf of a protected API and to add signatures on the way out using the Generate Security Hash Assertion as illustrated below.

Layer 7 Gateway Hashing Assertion


The Generate Security Hash Assertion lets you calculate an HMAC based on the key and the data to sign. The data to sign is something that must be agreed upon in advance, as is the way the HMAC is incorporated into the request. When working with an existing system that already defines this (such as AWS), you simply set the variable ${hash.dataToSign} to reflect the same order and contents. If you have the freedom to define this yourself for your own environment, make sure it covers the key aspects of a request so that an HMAC cannot be reused if it falls into the wrong hands. For a RESTful web service, for example, it makes sense to cover the HTTP verb (method), the request URI, the query parameters and the payload, if any. Adding either a timestamp or a validity period is also good practice.
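
As an illustration, a custom data-to-sign for a RESTful service might be assembled as in the Python sketch below. The field order, separators, and the ACME header scheme are arbitrary choices for the sketch, not a Layer 7 or AWS convention; whatever you pick simply has to be agreed upon by both sides:

import base64
import hashlib
import hmac
import time

def build_data_to_sign(verb, request_uri, query_string, payload, timestamp):
    # Hash the payload so the data-to-sign stays short; the timestamp
    # lets the verifier reject stale or replayed requests
    payload_hash = hashlib.sha256(payload).hexdigest()
    return "\n".join([verb, request_uri, query_string, payload_hash, str(timestamp)])

data = build_data_to_sign("POST", "/resources/blah/foo", "app_id=myid",
                          b'{"hello": "world"}', int(time.time()))
signature = base64.b64encode(
    hmac.new(b"my-shared-secret", data.encode(), hashlib.sha256).digest()).decode()

# The result can then be placed in the Authorization header, e.g.:
# Authorization: ACME myid:<signature>
print(signature)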

Once you have calculated an HMAC in your policy using this assertion, you can inject it into an outgoing message by adding it to the Authorization HTTP header directly, as illustrated below. Note that you can include this HMAC in any desired header.

Injecting an HMAC downstream

To verify an incoming HMAC, construct your policy to calculate the hash based on the input, then compare this value against the incoming HMAC value using a simple comparison assertion.

Validating an incoming HMAC

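
Outside of a gateway policy, the equivalent recompute-and-compare check looks roughly like the Python sketch below. The 300-second validity window is an arbitrary example; note also the constant-time comparison, which avoids leaking information through timing differences:

import hashlib
import hmac
import time

MAX_AGE_SECONDS = 300  # example validity period; tune for your environment

def verify_hmac(secret, data_to_sign, received_digest, timestamp):
    # Reject requests outside the agreed validity period (replay protection)
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
        return False
    # Recompute the HMAC over the same data the sender covered
    expected = hmac.new(secret, data_to_sign.encode(), hashlib.sha256).digest()
    # Constant-time comparison instead of a plain == check
    return hmac.compare_digest(expected, received_digest)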