January 8th, 2014

APIs Past, Present & Future – API Predictions 2014

The beginning of a new year is a great time to reflect back on the year that was and look ahead to the year that will be. Because 2013 was such an impressive year for API technology, I thought now would be a great time to assemble a panel of API experts and talk about The Past, Present & Future of APIs.

This will be our first API Tech Talk of 2014 and it’ll be a great chance for our audience to interact with the panel, ask questions, make comments and ultimately learn and think about the future of APIs.

At Layer 7, we’re proud to have thought leaders and top industry talent when it comes to API design, API security, the API economy and IoT. On the panel will be members of our API Academy: VP Client Services Matt McLarty; Principal API Architect, North America Mike Amundsen; Principal API Architect, Europe Ronnie Mitra; Product Architect Holger Reinhardt; Chief Architect Francois Lascelles. They will be joined by Layer 7’s Director of Product Management, Ross Garret.

So, we’ve brought together experts from a design/usability perspective, a business perspective, an integration perspective and – of course – an API security and management perspective.

Our customers – and enterprises in general – realize they can transform their businesses through APIs by leveraging their digital assets and taking advantage of all-pervasive trends like mobile, BYOD and IoT. Mobile apps are an integral part of daily life for most of us and smartphone use has become commonplace in the enterprise. Mobile app developers need APIs to build the exciting applications we all love to use. And as the recent Snapchat security breach teaches us, security is a very important – but sometimes undervalued – aspect of API architecture.

APIs are driving the future of business and there are a lot of considerations when talking about API Management. The API itself needs to be designed well, the security needs to be tight to protect user data, it needs to be developer-friendly and on we go.

While 2013 may very well have been the year of the API, 2014 could be the year APIs go mainstream. So, join us on January 15 at 9am Pacific Time for a live discussion of The Past, Present & Future of APIs. You can tweet your questions or comments directly to @layer7 or you can use the #layer7live hashtag. You can also email your questions directly to us (techtalk@layer7tech.com).

I’m really looking forward to hosting this discussion and think you’ll get a great deal out of it. We value your input and look forward to hearing from you on Jan 15.

October 30th, 2013

Designing APIs for the Internet of Things (IoT)

I’m looking forward to our next API Tech Talk for several reasons. First of all, on Oct 31 at 9am Pacific, we’ll be discussing some topics that are very hot in IT right now: the Internet of Things (IoT), API design and – more specifically – how to design APIs with IoT in mind.

Secondly, Holger Reinhardt will be our special guest expert. Holger was a Product Architect at Layer 7 before the company’s acquisition by CA Technologies and now he’s Senior Principal, Business Unit Strategy, an expert on IoT and Big Data and an all-around great guy.

I also happen to find the concept of IoT – all manner of devices and other “things” connected on the Internet – inherently fascinating. It might be an animal in a field with a biochip transponder or a household appliance that alerts the homeowner through a mobile application when it’s time for maintenance. Basically, any object that can be assigned an IP address and given the ability to transfer data over a network can be part of the massive Internet of Things. And all these mobile applications and connections across IoT are being designed using APIs.

Of course there are many questions raised by creating such a huge network of things. Security, for one, is a concern. Scale is another – how do you manage the massive amount of data being produced and how do you control access to it? How do you open up APIs to IoT in a secure, scalable way?

API design will be central to answering these questions and addressing these concerns. That’s why Holger will be using tomorrow’s Tech Talk to discuss best practices for designing APIs within the context of IoT. Holger will explore how the ubiquity of APIs in the IoT age will affect API design and answer any related questions you may have.

Here’s how to join in:

September 26th, 2013

Common OAuth Security Mistakes & Threat Mitigations

I just found out we had record attendance for Wednesday’s API Tech Talk. Clearly, there’s an appetite for the topic of OAuth risk mitigation.

With our digital lives scattered across so many services, there is great value in technology that lets us control how these service providers interact on our behalf. For providers, making sure this happens in a secure way is critical. Recent hacks associated with improperly-secured OAuth implementations show that OAuth-related security risks need to be taken seriously.

When in doubt, take a second look at the security considerations section of the spec. There is also useful information in RFC 6819 – OAuth 2.0 Threat Model & Security Considerations.

The Obvious Stuff
Let’s get a few obvious things out of the way:

  1. Use SSL (HTTPS)
  2. Shared secrets are confidential (if you can’t hide it, it doesn’t count as a secret)
  3. Sanitize all inputs
  4. Limit session lifespan
  5. Limit scope associated with sessions

None of these are specific to OAuth. They apply to just about any scheme involving sessions and secrets – for example, form login and cookie-based sessions in Web applications.
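
To make points 3–5 concrete, here is a minimal sketch in Java of the kind of check an API endpoint can run on every call. The class and field names are illustrative placeholders, not part of any specific product:

    import java.time.Instant;
    import java.util.Set;

    // Illustrative sketch only: reject missing or empty tokens (a small piece of
    // input validation), expired tokens (limit session lifespan) and tokens that
    // lack the scope required by the API being called (limit scope).
    public class TokenCheck {

        static final class AccessToken {
            final String value;
            final Instant expiresAt;
            final Set<String> scopes;

            AccessToken(String value, Instant expiresAt, Set<String> scopes) {
                this.value = value;
                this.expiresAt = expiresAt;
                this.scopes = scopes;
            }
        }

        static boolean isAuthorized(AccessToken token, String requiredScope) {
            if (token == null || token.value == null || token.value.trim().isEmpty()) {
                return false;                                // sanitize all inputs
            }
            if (Instant.now().isAfter(token.expiresAt)) {
                return false;                                // limit session lifespan
            }
            return token.scopes.contains(requiredScope);     // limit scope
        }
    }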

OAuth’s Main Attack Vector
Some of the grant types defined by the OAuth protocol involve the end-user being redirected from an application to a service provider’s authorization server, where the user is authenticated and expresses consent for the application to call the service provider’s API on the user’s behalf. Once this is done, the user is redirected back to the client application at a callback address provided by the client application at the beginning of the handshake. In the implicit grant type, the redirection back to the application includes the resulting access token issued by the OAuth provider.

OAuth’s main attack vector involves a malicious application pretending to be a legitimate application. When such an attacker attaches its own address as the callback for the authorization server, the user is redirected back to the malicious application instead of the legitimate one. As a result, the malicious application is now in possession of the token that was intended for a legitimate application. This attacking application can now call the API on behalf of the user and wreak havoc.

OAuth 101: Callback Address Validation
The most obvious defense against this type of attack is for the service provider to require that legitimate client applications register their callback addresses. This registration step is essential because it forms the basis on which a user can assess which application is being granted the right to act on his or her behalf. At runtime, the OAuth authorization server compares these registered values against the callback address provided at the beginning of the handshake (the redirect_uri parameter). Under no circumstance should an OAuth authorization server ever redirect a user (along with an access token) to an unregistered callback address. The enforcement of these values is a fundamental precaution that should be ingrained in any OAuth implementation. Any loophole exploiting a failure to implement such validation is simply inexcusable.

redirect_uri.startsWith(registered_value) => Not good enough!
Some application developers append client-side state to the end of runtime redirection addresses. To accommodate this, an OAuth provider may be tempted to merely validate that a runtime redirection address starts with the registered value. This is not good enough. An attacker may exploit this by adding a suffix to a redirection address – for example, to point to another domain name. Strict redirect URI matching trumps anything else, always. See http://tools.ietf.org/html/rfc6819#section-5.2.3.5.
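
For illustration, here is a minimal sketch in Java of strict callback validation against a set of registered values. The names are placeholders; the point is the exact match on the normalized URI rather than a startsWith() prefix check:

    import java.net.URI;
    import java.net.URISyntaxException;
    import java.util.Set;

    // Illustrative sketch: compare the runtime redirect_uri against the
    // registered callbacks exactly. With a registered value of
    // "https://app.example.com", a prefix check would also accept
    // "https://app.example.com.evil.com/steal"; an exact match does not.
    public class RedirectUriValidator {

        private final Set<String> registeredCallbacks;

        public RedirectUriValidator(Set<String> registeredCallbacks) {
            this.registeredCallbacks = registeredCallbacks;
        }

        public boolean isValid(String redirectUriParam) {
            try {
                URI candidate = new URI(redirectUriParam).normalize();
                return registeredCallbacks.contains(candidate.toString());
            } catch (URISyntaxException e) {
                return false; // a malformed redirect_uri is rejected outright
            }
        }
    }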

Dealing with Public (Not Confidential) Clients
If you are using the authorization code grant type instead of implicit, a phishing attack yields an authorization code, not the actual access token. Although this is technically more secure, the authorization code could still be combined with another vulnerability and exploited – specifically, a vulnerability caused by improperly securing the shared secret needed to complete the code handshake in the first place.

The difference between the implicit and authorization code grant types is that one deals with public clients and the other deals with confidential ones. Some may be tempted to rely on authorization code rather than implicit in order to add security to their handshakes. If you expose APIs that are meant to be consumed by public clients (such as a mobile app or a JavaScript-based invocation), forcing the application developer to use a shared secret will only lead to these shared secrets being compromised because they cannot be effectively kept confidential on a public platform. It is better to be prepared to deal with public clients and provide handshake patterns that make them secure, rather than obfuscate secrets into public apps and cross your fingers they don’t end up being reverse-engineered.
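
To show where that shared secret comes into play, here is a minimal sketch in Java of the authorization code exchange described in RFC 6749. The endpoint URL, client ID and secret below are placeholders; on a public client such as a mobile app or browser JavaScript, there is simply no safe place to keep that client_secret:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    // Illustrative sketch of the back-channel call that swaps an authorization
    // code for an access token. All values below are placeholders.
    public class CodeExchange {

        public static void main(String[] args) throws Exception {
            String body = "grant_type=authorization_code"
                    + "&code=" + URLEncoder.encode("CODE_FROM_REDIRECT", "UTF-8")
                    + "&redirect_uri=" + URLEncoder.encode("https://app.example.com/callback", "UTF-8")
                    + "&client_id=" + URLEncoder.encode("my-client-id", "UTF-8")
                    + "&client_secret=" + URLEncoder.encode("my-client-secret", "UTF-8");

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://auth.example.com/oauth2/token").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes("UTF-8"));
            }
            System.out.println("Token endpoint responded: " + conn.getResponseCode());
        }
    }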

Remembering Past Consent Increases Risk
Imagine a handshake where a user is redirected to an authorization server (e.g. implicit grant). Imagine this handshake happening for the second or third time. Because the user has an existing session with the service provider, with which the authorization server is associated (via a cookie), the authentication step is not required and is skipped. Some authorization server implementations also choose to “remember” the initial expression of consent and will not prompt the user to express consent again – all in the name of better user experience. The result is that the user is immediately redirected back to the client application without interaction. This typically happens quickly and the user is not even aware that a handshake has just happened.

An “invisible” handshake of this kind may lead to improved user experience in some situations but it also increases the effectiveness of a phishing attack. If the authorization server does not implement this kind of handshake and instead prompts the user to express consent again, the user is now aware that a handshake is at play. Because the user does not expect this action, the “pause” provides an opportunity to question what led to the prompt in the first place and helps the user recognize that something “phishy” is in progress.

Although bypassing the authentication step provides an improvement in user experience, bypassing consent and allowing redirection handshakes without displaying anything that allows a user to abort the handshake is dangerous and the resulting UX gain is minimal (just skipping an “OK” button).
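
As a rough sketch of this trade-off, here is the decision an authorization endpoint makes (all names are illustrative, not taken from any specific product). Skipping re-authentication for a recognized session is reasonable; silently skipping consent as well is what makes the handshake invisible:

    // Illustrative sketch of the consent decision discussed above.
    public class ConsentPolicy {

        enum Step { SHOW_LOGIN, SHOW_CONSENT, REDIRECT_WITH_TOKEN }

        static Step nextStep(boolean hasValidSession, boolean consentRemembered,
                             boolean alwaysPromptForConsent) {
            if (!hasValidSession) {
                return Step.SHOW_LOGIN;             // authenticate the user first
            }
            if (consentRemembered && !alwaysPromptForConsent) {
                // "Invisible" handshake: the user is redirected back immediately,
                // with no chance to notice or abort a phishing attempt.
                return Step.REDIRECT_WITH_TOKEN;
            }
            return Step.SHOW_CONSENT;               // the "pause" that lets a user abort
        }
    }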

July 26th, 2013

Next Tech Talk: CDN API Management Using Optimized API Distribution

Get ready for Layer 7’s next API Tech Talk, coming up on Wednesday July 31 at 9am PDT. This live, interactive event will feature Akamai’s Gary Ballabio alongside our very own Francois Lascelles, chatting about CDN API Management Using Optimized API Distribution.

Right now, APIs are playing a central role in content providers’ efforts to maximize customer engagement by leveraging emerging online channels via cloud-based content distribution networks (CDNs).

But CDNs and API publishing have raised new access management and SLA enforcement challenges for content providers. On Wednesday, Gary and Francois will explain how content providers can tackle these challenges via entitlement checks, access history and analytics.

Our presenters will also be taking your questions and comments throughout the Tech Talk. And if they answer your question during the live stream, we’ll send you one of our highly desirable, limited edition Tech Talk T-shirts.

Click here to get the full event details and a reminder in your calendar. On the day of the event, join us at:

You can ask questions throughout the stream by chatting or tweeting. Alternatively, just email your questions in advance so that Gary and Francois can give you some really in-depth answers on the day.

See you on Wednesday!

June 7th, 2013

IoT Tech Talk Follow-Up

Last week, I had the opportunity to answer questions about the Internet of Things (IoT) when I took part in Layer 7’s monthly API Tech Talk. We had a tremendous response, with lots of questions and a very active online discussion. You can find a replay of the Tech Talk here. I’d like to take this opportunity to answer a few of the questions we received during the webcast but didn’t have time to answer on the day.

How does Layer 7 help me manage a range of devices across IoT?
IoT is an opportunity for CA and Layer 7 to bring together identity, access and API Management. To paraphrase a comment on a recent Gigaom article: Everything with an identity will have an API and everything with an API will have an identity.

With so many “things” potentially accessing APIs, what are some strategies for securing these APIs across such a breadth of consumers?
Identify, authenticate and authorize using standards. API for IoT means managing identity for many devices at Internet scale.

How will API discoverability work with the vast number of things, especially if we see REST as the primary communication style?
I reached out to my colleague Ronnie Mitra for this answer. Ronnie pointed out that, in the past, standards like UDDI and WSRR promised to provide service registries but that didn’t really work out. Nowadays, we see lots of independent human-oriented API registries and marketplaces that might have more chance of surviving. There are even some runtime discovery solutions like Google’s discovery interface for APIs and the use of HTTP OPTIONS to learn about APIs. At the moment, lots of people are trying lots of things, unsure of where it will all end up. It would be interesting to dive deeper into why we need discoverability to power IoT and when that discoverability has to take place.
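
As a small illustration of the runtime-discovery idea Ronnie mentions, here is a minimal Java sketch that sends an HTTP OPTIONS request and prints the Allow header a resource may return. The URL is a placeholder:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Illustrative sketch: ask a resource at runtime which methods it supports.
    public class OptionsDiscovery {

        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://api.example.com/things/42").openConnection();
            conn.setRequestMethod("OPTIONS");
            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("Allow:  " + conn.getHeaderField("Allow"));
        }
    }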

How can API security get easier when API demand grows exponentially? There’s a big disconnect.
It doesn’t get easier. Transport-level security is reasonably well understood but endpoint identity and trust will be challenging.

Where will the intelligence be in IoT? Will there be some form of on-site intelligence, so that core functionality continues even if the connection is lost? Or will all intelligence be cloud-based?
It depends on whether you design for centralized “hub and spoke” or decentralized “domains of concern”. The former is responsible for correlating data and events within the domain whereas the latter is responsible for communicating with other domains (I owe this concept to Michael Holdmann’s blog). A “domains of concern” design communicates with different domains for different purposes – in an apartment for home automation, in an apartment building for HVAC, in a city block for energy generation/consumption, in a city for the utility grid etc. Emergencies or out-of-bound signals are handled like exceptions and bubble up through the domains until intercepted. But most things will serve an inherent purpose and that purpose will not be affected by the absence of connectivity. There will be intelligence within the core of each domain, as well as at the edges/intersections with other domains.

What is the best way to overcome fear of exposing data via APIs in an enterprise?
You need to identify a business opportunity. Unless you know what business impact you are trying to achieve and how you will measure it, you should not do it.

Does IoT require a strong network or big data or both?
Not a strong network but ubiquitous connectivity. Not big data but sharing/correlating data horizontally between distinct vertical silos.

What significance (benefits/drawbacks) do the various REST levels have with respect to the Internet of Things (connecting, monetizing etc.)?
I had never heard of levels of REST and had to look it up. It turns out the levels are: resources, verbs and hypermedia. Hypermedia would allow you to build long-lived clients, which could adapt to changes in API design. But it is actually the data or service behind the API that is monetizable, not the API itself. The API is just the means to an end.

How will IoT evolve? And more importantly how can enterprises solve the security and privacy issues that will arise as IoT evolves?
Culturally, the European regulators will try to put privacy regulations in place sooner rather than later, whereas the North American market will initially remain largely unregulated until some abuse prompts the regulator to step in. In Germany, the federal regulator tries to stay ahead of the market and recently published a security profile for smart meters. Personally, I would design M2M and IoT applications assuming that endpoint data is inherently unreliable and that I cannot necessarily trust the source. But that is very broad guidance and may or may not be applicable to a specific use case.

As we create API frameworks that interact with sensors and control objects in the IoT, which organizations are the best to follow in order to learn about new protocols we should be preparing to handle, such as CoAP?
Here are some suggestions:

How close are we to having a unified platform for IoT application developers and who is likely to be the winner among the competing platforms?
Chances are there won’t be a winner at all. You have companies like Axeda, Exosite, Gemalto, Digi, Paraimpu, BugLabs, ThingWorx, SensiNode, deviceWISE and more. You have industry working groups like Eclipse M2M and various research efforts like the SPITFIRE project, Fraunhofer FOKUS, DFuse and many others. The Eclipse M2M framework is probably a good choice to start with.

Even assuming ubiquitous and common networking (e.g. IPv6 on the public Internet) – how will the IoT identify peers, hierarchy and relationships?  
I think there is a huge opportunity for identity companies like CA to figure this out. Take a look at EVRYTHNG as one of the few startups in that space. Meanwhile, the folks over at Paraimpu are trying to tackle this challenge by combining aspects of a social network with IoT.