October 24th, 2012

Improving the API Developer Experience

Sometimes design concepts are obvious. We know they are implicitly understood and don’t require drawn-out explanations. But sometimes these implicitly-understood concepts aren’t executed in real life because they haven’t been explicitly defined. I’ve come to the realization that designing APIs with the developer in mind is one of those ideas that often has an audience nodding their heads but which only a few take to heart and apply to their API architectures.

We in the API design world have a great opportunity to learn from our brethren in the product design world. The user-centered design approach for products has paid great dividends for those who understand the idea and apply it to their interfaces. The goal is almost stupidly simple: design products that your users will enjoy. But, as always, the challenge is in translating a simple concept into real strategies, methodologies and practices that do not lose that fundamental goal while staying applicable to unique marketplaces.

In our world of API design, most of us understand that machine-to-machine integration still involves a human – the developer who writes the client code. That developer – the one who makes or breaks us by deciding to use an API – is our user. While product designers talk about improving user experience, we talk about improving the developer experience.

But how does this actually happen? What do we specifically need to do in order to create APIs that are enjoyable to use? Indeed, what does enjoyable even mean in this context? This developer/API publisher relationship is a unique one and the product-based, user-centered design and human/computer interaction models cannot just be airlifted in. They need to be massaged and transformed so they are applicable to the Web API world, without losing the potential value inherent in a user-focused design.

I hope to explore these ideas over the coming months and come up with recommendations for how we can build API solutions that deliver on the promise of improved developer experience (or DX). I’ll dive deeper into the world of user-centered design and discuss methods for translating these concepts from the world of product design into our API design domain.

October 15th, 2012

API Workshops in Europe

I had a great time presenting on API design and management trends at our London API Workshop a few weeks back. James Governor from RedMonk delivered an exciting talk on APIs, the need for API Management and some stark truths, like the fact that Java is still at the top of the programming pile. All of the trend talk and analysis was followed by a great real-world example when MoneySupermarket.com’s Cornelius Burger described his organization’s journey implementing the MoneySupermarket API with a SecureSpan API Proxy. We had excellent feedback on the event, so I know I wasn’t the only one who learned a lot from our speakers.

I was particularly impressed by the range of industries and organizations that were represented in the audience. We had developers from large enterprise shops, specialized Internet-focused start-ups and even a few entrepreneurs just getting started. I think this range of interest is indicative of the value of Web APIs for all and bodes well for a continued investment in designing great APIs, rather than just chucking them out into the ether.

Next up on the tour is our Paris API Workshop taking place tomorrow (Tuesday, October 16). As always, we have a great set of speakers lined up, with Martin Duval from bluenove talking about building developer outreach programs and Benoit Herard from Orange Labs discussing their API launch. France has a great start-up culture and a reputation for enterprises like Orange driving innovation, so I’m expecting good conversation, some excellent API Management presentations and – if I’m lucky – some great wines.

September 17th, 2012

Web APIs are International

I had the great fortune of spending last week in India, helping a Layer 7 customer develop a Web API program from scratch. While it’s always exciting to walk into a greenfield situation and build something new, I was doubly excited to be doing this in India, where the concept of open APIs is still fairly new.

Over the last few years, we’ve seen explosive growth in open APIs across North America, led of course by the avant-garde Internet companies on the West Coast. The API Management industry has focused much of its attention on the US market but the Web API movement has definitely made its way to other markets, and the push towards mobile and device-based applications is clearly having an influence on enterprise architectures.

Western Europe has had a strong influence on the API scene, with notable government and enterprise organizations diving wholeheartedly into the collaborative, developer-focused open API space. London, in particular, has developed a thriving technology scene with tons of hackathons, codeathons, meetups and start-up companies trying to change the world or at least get rich trying.

At the moment, the open API scene in India is still in its infancy and I’m looking forward to helping the concept blossom in whatever way that I can. As you may be aware, the number of mobile devices being used in India is mind-boggling and the ratio of mobile-use-to-desktop-computing is much higher than in North America or Western Europe. This quantity of mobile client platforms, combined with the large number of motivated developers on the scene, makes this a very intriguing open API marketplace. I can’t disclose any details on the nature of the project yet… but I’m hoping to have exciting news to share in the near future, so stay tuned.

I’ve spent most of the summer in North America for a variety of reasons, and I’m excited that I will finally be getting back home to the UK so I can re-engage with the European API and mobile scene. We have some great Layer 7 API workshops scheduled across Europe over the next few months and hopefully we will uncover a few new and noteworthy European API publishers while we are on tour.

August 3rd, 2012

Standards, APIs & WAC

GigaOM recently ran a piece opining on the demise of the Wholesale Applications Community (WAC) after only a couple of years on the scene. The article argued that something like the WAC effort is needed but suggested that, given the nature of the industry and the players involved, it’s not likely to happen. However, what the author failed to notice was that the WAC’s attempted solution was way off the mark.

The WAC’s key failure was that it attempted to standardize the wrong thing: the API. This is a common problem. GigaOM readers may recall another example of industry-level standards going astray, summarized in the “Cloudstack-Openstack Dustup” piece from April. I suspect several readers can call to mind similar cases in the not-too-distant past. Such cases usually share a common theme: disagreement on the details of the API.

The solution is right at hand but few see it. The right way to go is to standardize the way messages are designed and shared, not the data points and actions themselves. In other words, the key to successful shared standardization is through media-types and protocols. This is especially true for communication over HTTP but it applies to standards operating over any application-level protocol.
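
To make the contrast concrete, here is a minimal sketch, in Python, of what coding against a shared media-type might look like. The message format, link-relation names and URLs below are entirely hypothetical, invented for illustration; the point is that the client binds to the media-type’s vocabulary, not to any one provider’s endpoints.

```python
import json

# A hypothetical message in a shared media-type. The names ("items",
# "data", "links", "rel") are illustrative, not from any real
# specification: the format is standardized, while the data points
# and actions vary from publisher to publisher.
document = json.loads("""
{
  "items": [
    {
      "data":  {"name": "example-app", "version": "1.2"},
      "links": [
        {"rel": "self",    "href": "https://api.example.org/apps/42"},
        {"rel": "publish", "href": "https://api.example.org/apps/42/publish"}
      ]
    }
  ]
}
""")

def find_link(item, rel):
    # Clients code against the media-type's vocabulary of link
    # relations, never against one provider's URL layout.
    for link in item["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None

for item in document["items"]:
    print(item["data"]["name"], "->", find_link(item, "publish"))
```

Two publishers that both speak this media-type can expose completely different data and actions, yet the same client code works against both.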

We don’t need to look too far to see an example of an industry-led standardization success. VoiceXML was started by AT&T, IBM, Lucent and Motorola as a way to standardize interactive voice system communications. Not long after the first markup was defined in 1999 (a process that took a matter of months), the standard was turned over to the W3C for continued growth and refinement.

The goals of VoiceXML were strikingly similar to those of the WAC and Cloudstack/Openstack efforts: defining an interoperable standard that could be used across an industry group. The difference in the case of VoiceXML was that the committee focused on message design and domain-specific details shared by all players. It did not attempt to document all the data elements, function calls and workflows to be used in lockstep by all.

Most likely, the WAC meltdown won’t be the last one we’ll see. But this is not the inevitable result of competing interests in the global marketplace. This is a result of well-meaning people aiming at the wrong target. We can do better. We can learn from successful interface designs and focus on making it possible to consistently communicate a wide range of information freely instead of attempting to constrain systems to a single set of possible interactions.

The future of an effective Web, a growing and vibrant distributed network, rests in the hands of those who would take on the task of writing the vital standards that will make it work. I look forward to seeing more efforts where the focus is on improving communication between parties through well-designed message formats instead of on limiting communication through constrained APIs.

July 30th, 2012

Why I Still Like OAuth

That sound of a door slamming last week was Eran Hammer storming out of the OAuth standardization process, declaring once and for all that the technology was dead and that he would no longer be a part of it. Tantrums and controversy make great social media copy, so it didn’t take long before everyone seemed to be talking about this one. In some quarters, you’d hardly know the London Olympics had begun.

So what are we to really make of all this? Is OAuth dead or at least on “the road to Hell”, as Eran now-famously put it? Certainly, my inbox is full of emails from people asking if they should stop building their security architecture around such a tainted specification.

I think Tim Bray, who has vast experience with the relative ups and downs of technology standardization, offered the best answer in his own blog:

“It’s done. Stick a fork in it. Ship the RFCs.”

Which is to say, sometimes you just have to declare a reasonable victory and deal with the consequences later. OAuth isn’t perfect, nor is it easy. But it’s needed and it’s needed now, so let’s all forget the personality politics and just get it done. And hopefully, right across the street from me here in Vancouver, where the IETF is holding its meetings all this week, this is what will happen.

In the end, OAuth is something we all need, which is why this specification remains important. The genius of OAuth is that it empowers people to perform delegated authorization on their own, without the involvement of a cabal of security admins. And this is something that is really quite profound.

In the past, we’ve been shackled by the centralization of control around identity and entitlements (a fancy term that really just describes the set of actions your identity is allowed to perform, such as writing to a particular file system). This has led to a status quo in nearly every organization that is maintained first because it is hard to do otherwise but also because this control equals power, which is something that is rarely surrendered without a fight.

The problem is that centralized identity administration can never effectively scale. With OAuth, we can finally scale authentication and authorization by leveraging the user population itself — and this is the one thing that stands a chance of shattering the monopoly on centralized identity and access management (IAM). OAuth undermined the castle and the real noise we are hearing isn’t infighting on the spec but the enterprise walls falling down.

Here is the important insight of OAuth 2.0: delegated authorization also solves the basic security sessioning problem of all apps running over stateless protocols like HTTP. Think about this for a minute: the basic Web architecture provides for complete authentication on every transaction. This is dumb, so we have come up with all sorts of security context tracking mechanisms, using cookies, proprietary tokens, etc. The problem with many of these is that they don’t constrain entitlements at all; a cookie is as good as a password because it really just linearly maps back to an original act of authentication.

OAuth formalizes this process but adds in the idea of constraint with informed user consent. And this, ladies and gentlemen, is why OAuth matters. In OAuth, you exchange a password (or other primary security token) for a time-bound access token with a limited set of capabilities to which you have explicitly agreed. In other words, the token expires fast and is good for one thing only. So you can pass it off to something else (like Twitter) and reduce your risk profile or — and this is the key insight of OAuth 2.0 — you can just use it yourself as a better security session tracker.
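
To make the mechanics concrete, here is a minimal sketch of that exchange in Python, using the third-party requests library. The endpoints, client ID, credentials and scope name are all hypothetical; the flow shown is the OAuth 2.0 resource owner password credentials grant, the simplest way to trade a password for a constrained token.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical endpoints and credentials, purely for illustration.
TOKEN_URL = "https://auth.example.com/oauth2/token"
API_URL = "https://api.example.com/v1/messages"

# Step 1: trade the password for a short-lived, scope-limited access
# token (the OAuth 2.0 resource owner password credentials grant).
resp = requests.post(TOKEN_URL, data={
    "grant_type": "password",
    "username": "alice",
    "password": "s3cret",
    "scope": "read:messages",      # the constraint the user agreed to
    "client_id": "example-client",
})
token = resp.json()
print("expires in", token.get("expires_in"), "seconds")

# Step 2: present the token, never the password, on every subsequent
# call -- the "better security session tracker" in action.
resp = requests.get(
    API_URL,
    headers={"Authorization": "Bearer " + token["access_token"]},
)
print(resp.status_code)
```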

The problem with OAuth 2.0 is that it’s surprisingly hard to get to this simple idea from the explosion of protocol in OAuth 1.0a. Both specs too quickly reduce to an exercise in swim lane diagram detail, which ironically runs counter to the movement towards simplicity and accessibility that drives today’s Web. And therein lies the rub. OAuth is more a victim of poor marketing than bad specsmanship. I have yet to see a good, simple explanation of why, followed by how. (I don’t think OAuth 1.0 was well served by the valet key analogy, which distracts from too many important insights.) As it stands today, OAuth 2.0 makes the Kerberos specs seem like grade-school primer material.

It doesn’t have to be this way. OAuth is actually deceptively simple; it is the mechanics that remain potentially complex (particularly those of the classic 1.0a, three-legged scenario). But the same can be said of SSL/TLS, which we all use daily with few problems. What OAuth needs is a set of dead simple (but nonetheless solid) libraries on the client side and equally simple, scalable support on the server. This is a tractable problem and it is coming. It also needs much better interpretation, so that people can understand it fast.
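
As a thought experiment, the client-side surface area of such a library might be no bigger than the sketch below. Every name and endpoint here is hypothetical; this is what “dead simple” could look like, not a description of any real library.

```python
import requests  # third-party HTTP library: pip install requests

class SimpleOAuthClient:
    """A hypothetical, deliberately tiny OAuth 2.0 client wrapper."""

    def __init__(self, token_url, client_id):
        self.token_url = token_url
        self.client_id = client_id
        self.access_token = None

    def login(self, username, password, scope):
        # One call hides the entire token exchange.
        resp = requests.post(self.token_url, data={
            "grant_type": "password",
            "username": username,
            "password": password,
            "scope": scope,
            "client_id": self.client_id,
        })
        self.access_token = resp.json()["access_token"]

    def get(self, url):
        # Every request automatically carries the bearer token.
        return requests.get(
            url, headers={"Authorization": "Bearer " + self.access_token})

# Two lines from credentials to an authorized call:
client = SimpleOAuthClient("https://auth.example.com/oauth2/token", "my-app")
client.login("alice", "s3cret", "read:messages")
print(client.get("https://api.example.com/v1/messages").status_code)
```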

Personally, I agree in part with Eran Hammer’s wish buried in the conclusion of his blog entry:

“I’m hoping someone will take 2.0 and produce a 10-page profile that’s useful for the vast majority of Web providers, ignoring the enterprise.”

OAuth absolutely does need simple profiling for interop. But don’t ignore the enterprise. The enterprise really needs the profile too because the enterprise badly needs OAuth.