October 30th, 2013

Designing APIs for the Internet of Things (IoT)

I’m looking forward to our next API Tech Talk for several reasons. First of all, on Oct 31 at 9am Pacific, we’ll be discussing some topics that are very hot in IT right now: the Internet of Things (IoT), API design and – more specifically – how to design APIs with IoT in mind.

Secondly, Holger Reinhardt will be our special guest expert. Holger was a Product Architect at Layer 7 before the company’s acquisition by CA Technologies and is now Senior Principal, Business Unit Strategy – an expert on IoT and Big Data and an all-around great guy.

I also happen to find the concept of IoT – all manner of devices and other “things” connected on the Internet – inherently fascinating. A connected “thing” might be an animal in a field with a biochip transponder or a household appliance that alerts the homeowner through a mobile application when it’s time for maintenance. Basically, any object that can be assigned an IP address and given the ability to transfer data over a network can be part of the massive Internet of Things. And all these mobile applications and connections across IoT are being designed using APIs.

Of course there are many questions raised by creating such a huge network of things. Security, for one, is a concern. Scale is another – how do you manage the massive amount of data being produced and how do you control access to it? How do you open up APIs to IoT in a secure, scalable way?

API design will be central to answering these questions and addressing these concerns. That’s why Holger will be using tomorrow’s Tech Talk to discuss best practices for designing APIs within the context of IoT. Holger will explore how the ubiquity of APIs in the IoT age will affect API design and answer any related questions you may have.

Here’s how to join in:

October 7th, 2013

SDKs or APIs: What’s the Right Choice for Your Developer Community?

When creating an application programming interface (API) for a service, one of the key decisions any program or product manager will face is how best to meet the needs of their prime target audience: developers. Faced with this decision, you want to make sure your API is easy to use and doesn’t represent a high barrier to entry for your specific developer audience.

Currently, the typical approach is to design an interface that leverages the most common protocol on the Web today: HTTP. This is often labeled a “RESTful” API (referring to Roy Fielding’s architectural model for the World Wide Web) and offered as a one-size-fits-all (OSFA) model for developers to use when building client applications for a service. But this is not always the best approach.

Properly understanding and implementing a raw HTTP interface may be too complex for some segments of your developer community – some of whom are really only interested in your service and not in spending time to build a killer HTTP application for that service. Additionally, a generic HTTP API might end up compromising key performance aspects of your service in order to work best for a wide range of developer communities (the OSFA problem). Even worse, an HTTP-based API may – in the end – result in unfocused client applications built by developers who know more about HTTP than your service.

What We Learned from SOAP
One of the powerful lessons learned from the SOAP community relates to the value of developer tooling. The SOAP Web Services Description Language (WSDL) is a complex, difficult-to-read document that contains all the important details on building a compliant client application for a service – and developers have a hard time making sense of it. To solve this problem, the SOAP community helped promote an “accommodation” for developers: the “WSDL” button available in many code editors. By simply pressing this button and following a few prompts, developers can easily create API facades for servers or consume WSDL documents to build client applications. This accommodation not only makes SOAP programming easier, it also adds to the usability of SOAP interfaces and lowers the bar for developers wanting to use services on the Web.

What We Lost with HTTP CRUD
The rise of JSON-style HTTP CRUD (Create-Read-Update-Delete) programming interfaces meant the loss of the WSDL accommodation. Developers were expected to hand-craft both the server and client interfaces without the aid of a unifying definition document like SOAP’s WSDL. To make up for this loss, server coding environments like Ruby on Rails introduced helper functions (e.g. “rails new”) designed to generate a great deal of the API facade required to publish a service on the Web. Developers could use this accommodation to take care of the interface details, freeing them to focus on crafting the internal object and business modeling needed to make the service operational.
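
For readers who haven’t lived with that kind of scaffolding, here is a rough sketch of the CRUD plumbing such helpers generate – written in Python with Flask purely for illustration (the post’s own example, “rails new”, is Ruby on Rails; the “products” resource and in-memory store are invented for this sketch):

# Hypothetical sketch of generated CRUD plumbing; Flask and the "products"
# resource are stand-ins chosen for illustration only.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
products = {}        # in-memory store standing in for the real business model
next_id = 1

@app.route("/products", methods=["GET", "POST"])
def product_collection():
    global next_id
    if request.method == "POST":                      # Create
        record = request.get_json(force=True)
        record["id"] = next_id
        products[next_id] = record
        next_id += 1
        return jsonify(record), 201
    return jsonify(list(products.values()))           # Read (collection)

@app.route("/products/<int:pid>", methods=["GET", "PUT", "DELETE"])
def product_item(pid):
    if pid not in products:
        abort(404)
    if request.method == "DELETE":                    # Delete
        del products[pid]
        return "", 204
    if request.method == "PUT":                       # Update
        products[pid].update(request.get_json(force=True))
    return jsonify(products[pid])                     # Read (item)

None of this plumbing expresses the service’s actual business value – which is exactly why having it generated for you is such a useful accommodation.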

Client devs needed to create their own accommodations, too. That’s why we have client-side libraries like ember.js, backbone.js, angular.js and others. Like the developers building servers, client developers needed help handling the basic plumbing for HTTP-based applications, so they could focus on their own object models and business logic.

The bad news is, in this HTTP CRUD world, each and every service has its own unique set of URLs, objects (users, products etc.) and actions (approve, remove, edit, apply etc.). And each API looks like a snowflake among hundreds of other unique interfaces. For example, there are more than 500 unique APIs for supporting shopping services. This can raise the bar for developers and lower the usability of HTTP CRUD APIs.

Adapter APIs
Netflix set out to solve the OSFA problem in 2012 by embracing differences in its developer community and creating a set of targeted “adapter APIs”. These are custom interfaces optimized for selected developer communities. For example, Netflix offers one API for its Xbox community and a slightly different API for its PS2 community. In fact, each major device has its own custom API.

What’s interesting about the Netflix approach is that the customized interface accommodations live on the server, not the client. In other words, Netflix has taken on the task of optimizing its interfaces for each community and hosting that optimization on its own servers. A client developer will still be using an HTTP API but it will be one tailored to a specific device – a server-side custom library.
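
As a rough illustration of the idea (my own sketch, not Netflix’s actual code – the catalogue record and device names are invented), a server-side adapter might reshape a single internal resource into different device-specific responses:

# Hypothetical sketch of a server-side "adapter API": one internal catalogue
# record is reshaped differently for each target device community.
CATALOG_ENTRY = {
    "id": 42,
    "title": "Example Movie",
    "synopsis": "A very long synopsis that a small screen cannot show in full...",
    "artwork": {"small": "cover-small.jpg", "hd": "cover-hd.jpg"},
}

def render_for_tv(entry):
    # Living-room devices get HD artwork and the full synopsis.
    return {"title": entry["title"], "art": entry["artwork"]["hd"],
            "synopsis": entry["synopsis"]}

def render_for_phone(entry):
    # Constrained devices get small artwork and a trimmed synopsis.
    return {"title": entry["title"], "art": entry["artwork"]["small"],
            "synopsis": entry["synopsis"][:60]}

ADAPTERS = {"tv": render_for_tv, "phone": render_for_phone}

def adapter_endpoint(device, entry=CATALOG_ENTRY):
    # The accommodation lives on the server: pick the shape for this device.
    return ADAPTERS[device](entry)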

Do SDKs Provide the Answer?
Another way to provide solid accommodations to your HTTP developers is to create software development kits (SDKs) for your service. Essentially, SDKs provide the same accommodation that WSDL-generated client code does for SOAP interfaces. The good news is that well-designed and well-executed SDKs can lower the bar for developers and increase service usability.

Recently, Evernote announced that it was taking the SDK approach. One reason for this was Evernote’s decision to use the Apache Thrift message model. The Thrift approach serializes messages using a custom binary format and then ships them across the network using one of a handful of transport protocols (including direct TCP/IP). This is pretty low-level stuff and can easily intimidate some client developers – raising the barrier to entry for Evernote’s API – and this is where creating an SDK is a handy approach. Evernote has committed to building a wide range of language-specific SDKs that clients can download, install and use in their own code. This accommodation lowers the bar when using the Thrift model.
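
To make the distinction concrete, here is a rough sketch (my own illustration in Python – the service name, endpoints and methods are invented and do not represent Evernote’s actual SDK) of the kind of thin wrapper an SDK provides, letting the client developer call ordinary methods without ever touching the wire protocol:

# Hypothetical sketch of a minimal SDK wrapper; names and endpoints are
# invented for illustration and do not reflect any real vendor's SDK.
import json
import urllib.request

class NoteServiceClient:
    """Hides transport and serialization details behind plain method calls."""

    def __init__(self, base_url, api_token):
        self.base_url = base_url.rstrip("/")
        self.api_token = api_token

    def _request(self, method, path, payload=None):
        data = json.dumps(payload).encode("utf-8") if payload is not None else None
        req = urllib.request.Request(
            self.base_url + path,
            data=data,
            method=method,
            headers={"Authorization": "Bearer " + self.api_token,
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    def create_note(self, title, body):
        return self._request("POST", "/notes", {"title": title, "body": body})

    def get_note(self, note_id):
        return self._request("GET", "/notes/" + str(note_id))

# Usage: the developer works with notes, not with HTTP or serialization.
# client = NoteServiceClient("https://api.example.com", "my-token")
# note = client.create_note("Groceries", "milk, eggs")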

The bad news is that creating SDKs is often a recurring additional expense for your services team. Unlike the WSDL standard, which makes it possible to generate code for a wide range of programming languages from a single published definition file, SDKs usually need to be hand-built for each target programming environment. And selecting target programming languages can turn into a slippery slope. Which languages are most used by your current API consumers? What if the most-used language represents only 30% of your target audience? How many SDKs do you need to build and maintain before you reach a significant portion of your developer community?

The maintenance of SDKs can be substantial, too. How often do you release updates? Fix bugs? Provide new features? Each release can mean added cost to your developer community as they need to open up their own code, integrate the new SDK, run tests and finally re-release their apps to their own target communities. Unless carefully done, this can result in added churn and expense all around – including deployment and download costs for end users. And that can raise the barrier to entry and lower usability.

So What’s a PM to do?
When you start to think about the notion of creating accommodations for target audiences, you have a new metric for assessing the usability and value of your service interface. If you can design an API that is easy for your target audience to use, then you (and your developers) win. If your API is too complex for your audience or relies on a less-used technology, you likely need to include more direct accommodations for your developers.

In some cases, the audience will know how to use SOAP interfaces and will benefit from you offering a WSDL as their accommodation. In other cases, the target audience won’t want/need a SOAP interface and a well-crafted HTTP API (or possibly a set of targeted adapter APIs) will be the right choice. Finally, some target developers will want/need an SDK to handle the protocol details and allow them to focus on their own business logic.

In fact, sometimes the best bet is to offer more than just one of these options in order to reach all your target audiences. Your enterprise partners may prefer a SOAP interface, your mobile devs may prefer a well-designed HTTP CRUD API and your business and marketing team may prefer an SDK that exposes only the parts of your service they need to use.

You Own the Interface
The key point in all this is that you own the interface. You can cast your service in many different ways, aimed at several different audiences. You don’t need to stick with an OSFA approach and you don’t have to rule out major technology sectors like SOAP just because a portion of your audience prefers one interface style over another.

By focusing on your target audience and learning their skills and preferences, you can identify key metrics that can drive the selection of the right mix of API styles and accommodations that will help you meet your goals for API reach and usability.

October 4th, 2013

Can Your API be BREACHed?

TLS and SSL form the foundations of security on the Web. Everything from card payments to OAuth bearer tokens depends on the confidentiality and integrity that a secure TLS connection can provide. So when a team of clever engineers unveiled a new attack on SSL/TLS – called BREACH – at July’s Black Hat conference, more than a few eyebrows were raised. Now that it’s Cyber Security Awareness Month, it seems like a good time to examine the BREACH threat.

There have already been a number of articles in the technology press identifying the threats BREACH poses to traditional Web sites and suggesting ways to mitigate the risks, but it is important for us to examine this attack vector from an API perspective. API designers need to understand what the attack is, what risks it poses to Web-based APIs and what can be done to mitigate those risks.

The BREACH attack is actually an iteration of a TLS attack named CRIME, which emerged last year. Both attacks are able to retrieve encrypted data from a TLS connection by taking advantage of the way data compression works in order to guess the value of a confidential token in the payload. While CRIME relied specifically on exploiting the way TLS-level compression works, the BREACH exploit targets messages sent with compression enabled at the HTTP level, which is much more widely enabled in the field.

HTTP compression relies on two strategies for data reduction: Huffman coding and LZ77. The LZ77 algorithm achieves compression by identifying duplicate pieces of data in an uncompressed message. In other words, LZ77 makes a message smaller by finding repeated occurrences of data in the text to be compressed and replacing them with smaller references to their earlier locations.

A side effect of this algorithm is that the compressed data size is indicative of the amount of duplicate data in the payload. The BREACH attack exploits this side effect of LZ77 by using the size of a message as a way of guessing the contents of a confidential token. It is similar in nature to continually guessing a user’s credentials on a system that provides you with unlimited guesses.
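
A small experiment makes the side effect visible. The sketch below is my own illustration (using Python’s zlib as a stand-in for HTTP gzip, with an invented token value): when attacker-controlled input duplicates part of the secret, the compressed response gets measurably smaller because LZ77 replaces the repetition with a back-reference.

# Illustration of the LZ77 size side channel; zlib (DEFLATE, as used by gzip)
# stands in for HTTP compression and the token value is invented.
import zlib

SECRET = "csrf=d2a372efa35a"   # the confidential token embedded in the page

def compressed_size(attacker_input):
    # Attacker-controlled input and the secret end up in the same response.
    page = "<html><input value='%s'> token: %s </html>" % (attacker_input, SECRET)
    return len(zlib.compress(page.encode("utf-8")))

print(compressed_size("csrf=d2a372efa35a"))   # duplicates the secret: smaller
print(compressed_size("zq8wx0rv7kpj4hmn1"))   # same length, no duplication: larger
# A real BREACH attack compares guesses that differ by a single character and
# needs many samples (plus tricks to overcome Huffman-coding noise), but the
# size difference shown here is the signal it exploits.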

While the premise is scary, the good news is that the BREACH attack doesn’t give an attacker unfettered access to the encrypted TLS payload. Instead, it is a targeted attack that attempts to retrieve a confidential token through repeated, iterative guesses. In fact, the attack isn’t an exploit of the TLS protocol at all; rather, it is an attack that can be applied to any messaging system that uses the gzip compression algorithm (which builds on LZ77).

On top of this, BREACH is not an easy attack to pull off. A would-be BREACHer must:

  1. Identify an HTTPS message that contains compressed data, a static secret token and a property the attacker can manipulate
  2. Trigger the application or server to generate many such messages in order to have a large enough sample size to iteratively guess the token
  3. Intercept all of these messages in order to analyze their sizes

Each of these requirements is non-trivial. When combined, they greatly reduce the attack surface for a BREACH attack in the API space. While API messages certainly contain data that may be manipulated and while many APIs do provide compressed response data, very few of those API messages also contain confidential tokens.

But designers of APIs shouldn’t dismiss the possibility of being BREACHed. There are at least two scenarios that might make an API susceptible to this attack vector.

Scenario 1 – Authentication & CSRF Tokens in Payloads:
Many APIs return an authentication token or CSRF token within successful responses. For example, a search API might provide the following response message:

<SearchResponse>
    <AuthToken>d2a372efa35aab29028c49d71f56789</AuthToken>
    <Terms>…</Terms>
    <Results>…</Results>
</SearchResponse>

If this response message was compressed and the attacker was able to coerce a victim into sending many requests with specific search terms, it would only be a matter of time before the AuthToken credential was retrieved.

Scenario 2 – Three-Legged OAuth:
APIs that support the OAuth 2 framework for delegated authorization often implement CSRF tokens, as recommended in the OAuth 2 RFC. The purpose of the token is to protect client applications from an attack vector in which a client can be tricked into unknowingly acting upon someone else’s resources (Stephen Sclafani provides a better explanation of the CSRF threat here). Because these tokens are reflected back by the server, the three-legged OAuth dance becomes a possible attack surface for BREACH.

For example, an attacker could coerce a victim into sending repeated OAuth 2 authorization requests and use the state parameter to guess the value of the authorization code. Of course, all of this comes with the caveat that the OAuth server must be compressing its responses to be a target. The fact that a new authorization code is generated for each authorization attempt makes this attack less practical, but it remains theoretically possible.

Ultimately, the simplest way to mitigate the BREACH attack is to turn off compression for all messages. It isn’t hard to do and it will stop BREACH dead in its tracks. However, in some cases, designers may need to support compression for large data responses containing non-critical information or because they are supporting platforms with limited bandwidth. In these instances, it makes sense to implement a selective compression policy on an API Gateway.
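
As a rough sketch of what such a policy might look like (the route names, size threshold and secret flag are invented for illustration; a real gateway would typically express this as configuration rather than code):

# Hypothetical selective compression policy: compress only large payloads on
# approved routes that carry no confidential tokens.
import gzip

COMPRESSIBLE_ROUTES = {"/catalog", "/reports/bulk"}   # large, non-sensitive data
MIN_SIZE_BYTES = 4096                                  # not worth compressing below this

def maybe_compress(route, body, contains_secret):
    if contains_secret or route not in COMPRESSIBLE_ROUTES or len(body) < MIN_SIZE_BYTES:
        return body, {}                                # send as-is, no Content-Encoding
    return gzip.compress(body), {"Content-Encoding": "gzip"}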

While disabling compression will certainly negate the impact of the BREACH attack, a more general solution is to impose smart rate limiting on API requests. This will not only deny a BREACH attacker the sample size needed to guess data, it will also blunt other side-channel attacks that don’t rely solely on compression. In addition, log analysis and analytics will make it easier to spot any attempt at an attack of this kind.
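
A minimal sketch of the idea (a fixed-window limiter with invented limits; production gateways use more sophisticated, distributed variants):

# Hypothetical fixed-window rate limiter: deny BREACH (and other brute-force
# style attacks) the request volume they need. Limits are invented examples.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_windows = defaultdict(lambda: [0.0, 0])   # client_id -> [window_start, count]

def allow_request(client_id):
    now = time.time()
    window_start, count = _windows[client_id]
    if now - window_start >= WINDOW_SECONDS:   # start a fresh window
        _windows[client_id] = [now, 1]
        return True
    if count < MAX_REQUESTS:
        _windows[client_id][1] += 1
        return True
    return False                               # over the limit: respond with 429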

An API Gateway is the key component for this type of security mitigation in the API space. A Gateway can provide the level of abstraction needed to enforce configurable compression and rate limiting policies that server-side developers may not have the security background to implement effectively. In addition, the Gateway acts as a central enforcement point for security policy – particularly useful in larger, federated organizations.

TLS is core to most of the security implementations that have evolved on the Web, including the OAuth 2 framework. This latest published attack does not render the world’s TLS implementations useless but it does introduce an interesting attack vector that is worth protecting against in the API domain. Remember, API rate limiting and usage monitoring are useful for much more than just monetizing an API!

September 30th, 2013

Workshops, Workshops, Workshops!

One of the great things about my job is that I get to travel around the world sharing API design strategies, experiences and theories with people who are at the forefront of our industry. These interactions not only make it easier to design effective APIs, they also have the potential to spark ideas that can lead to real business transformation.

But we aren’t all lucky enough to get these types of opportunities and it’s often difficult to justify the cost of traveling to far-flung events in the modern business world. If you’re in that boat, then it’s your lucky day: our Layer 7 API Strategy Workshop series aims to bring all the experiences, discussions and networking opportunities practically to your doorstep.

Over the next two months, Mike Amundsen, Holger Reinhardt and I will be delivering a series of free workshops on API strategy, the principles of good API design and the keys to designing an API that will last. In addition to core aspects of effective API design, we will discuss the emerging trends of developer experience (DX), the Internet of Things (IoT) and DevOps as they pertain to the API universe.

Our tour kicked off in September with great events in San Antonio and Los Angeles and it will continue through October and November with the following stops:

It’s going to be an exhausting couple of months for us but we’re looking forward to having some great conversations with our attendees. So, come out and join us during what promises to be a very thought-provoking and engaging series of half-day events.

September 13th, 2013

Nordic APIs

It looks like the remainder of September will provide a bounty of learning opportunities for those of you interested in diving deeper into API design. To start with, Mike Amundsen and I will be continuing our Layer 7 API Academy workshop tour in Montreal and Calgary. In addition to our API Academy events, Mike will be hosting RESTFest 2013, his annual conference on all things REST. I had the pleasure of attending last year and I highly recommend going if you are interested in thought-provoking conversation and ideas in the hypermedia domain.

On the other side of the ocean and closer to home for me is next week’s Nordic APIs conference in Stockholm (September 18-19). I’ve been to a few of the smaller API design conferences that the Nordic APIs team has put on and I can say without a doubt that this will be a conference worth attending. They’ve always done a great job of putting together sessions that will appeal to developers on the leading edge of API design as well as those who are looking for practical solutions.

I’ll be delivering a keynote presentation on a developer experience (DX) oriented design approach for APIs. My colleague Holger Reinhardt will be talking about the Internet of Things and Aran White will be delivering a demonstration of the Layer 7 product line. Of course, the great value in events like this comes from the serendipitous conversations that take place outside the agenda and Holger, Aran and I are really looking forward to swapping war stories with Nordic API attendees.

While I’m sad that I won’t be able to join Mike at RESTFest this year, I’m overjoyed at the reason I can’t go. I’m continually amazed at how much the European API design community has grown and watching the Nordic event develop from a few small gatherings into a major conference has been eye-opening. Not too long ago, it was difficult to find API design events to attend but now we are spoiled for choice. It’s a great indication of the continued interest in and growth of Web-based APIs.