Mike Amundsen

Mike Amundsen is Layer 7’s Principal API Architect. An internationally-known author and lecturer, Mike travels throughout the United States and Europe, consulting and speaking on a wide range of topics including distributed network architecture, Web application development and cloud computing. His recent work focuses on the role hypermedia plays in creating and maintaining applications that can successfully evolve over time. He has more than a dozen books to his credit, the most recent of which is Building Hypermedia APIs with HTML5 & Node. He is currently working on a new book on “cloud stack” programming. When he is not working, Mike enjoys spending time with his family in Kentucky.

August 16th, 2013

Designing Web APIs – A Candid Conversation

It was just over a year ago that we hosted our first API Workshop (for the record, it was July 2012 in Sydney, Australia). Since then, my API Academy buddies Ronnie Mitra and Alex Gaber and I have had the privilege of meeting and talking with hundreds of developers representing dozens of companies and organizations all over the world. It has been a very rewarding experience.

Along the way, we’ve learned a great deal, too. We’ve heard about creative ways people are leveraging the Web to build powerful APIs. We’ve seen great examples of real-world APIs and learned the practices and pitfalls encountered while maintaining and growing these APIs over time. We’ve even had the opportunity to observe and participate in the process of designing and architecting systems in order to foster creative innovation and long-term stability for the APIs.

In the past year, we’ve collected many examples of best practices and distilled common advice from a range of sources. We’ve also created free API events, conducted dozens of hackathons, webinars, one-day workshops and multi-day API boot camps as ways to share what we’ve learned and help others build upon that advice when creating their own Web APIs. And at every event along the way, we’ve met more innovative people doing great things in the Web API space.

As a way to look back and compare notes, Ronnie and I will be hosting a webinar (Designing Web APIs – A Candid Conversation) on August 22 at 9AM PDT. We’ll look back at what we’ve seen on our travels and talk candidly about such topics as SOAP, SOA, REST, lifecycle management and more. It’s going to be a fun hour of both reminiscing and looking forward to this fall’s workshop series and the future of APIs in general.

Also this August, we’re taking a break from offering public events and using the time to compare notes, assess the advice and examples we’ve gathered and improve our content for the upcoming fall season. Ronnie, Alex and I (and many others here) will be spending many hours this month creating new guidance documents, articles and presentations/videos – all in the effort to share what we’ve learned and help others make a difference within their own organizations.

I hope you’ll join us on August 22 for our Webinar and I hope you’ll keep an eye on our workshop schedule for upcoming events near you. Even if you’ve participated in our open workshops before, you’ll want to come back for the new series. We’re adding new topics, brushing up existing material with new guidance from the field and adding new features to the events.

August 9th, 2013

REST Fest 2013 is Coming!

It’s that time of year again! REST Fest 2013 is less than two months away (September 19-21) and preparations are in full swing. Now in its fourth year, REST Fest has become one of my favorite events on the calendar and I’m very much looking forward to being involved with this year’s event.

REST is Just the Beginning
This year, the keynote will be delivered by Brian Sletten. And – judging from the title (and my knowledge of Brian’s experience and expertise) – it will be a great talk. We’re honored that Brian accepted our invitation and are looking forward not just to his presentation but also to the resulting conversations and explorations that are hallmarks of REST Fest.

Everybody Talks
An important part of REST Fest is the principle that everyone who shows up must give a presentation. The talks are typically quite short: a five-minute “lightning” talk followed by a short Q&A session. There are a few 30-minute “featured talks”, too. But the basic idea is that we all get to talk about things that are interesting to us and we don’t have to make a big deal about it.

Every year, I probably learn more than 30 new ideas and novel approaches to problem solving and get to talk to the people who are coming up with these great things. REST Fest is a fantastic boost to my creative spirit!

Everybody Listens
The corollary to our key “talk” principle is that we all get to listen, too. And listening is, in my opinion, even more important than speaking. REST Fest attendees come from all sorts of backgrounds, experiences and points of view. The chance to hear how others view the Web space, how others are tackling problems and how others are advancing the practice of services on the Web is always an eye opener.

Less Theory, More Practice
And that leads to another key aspect of the weekend. The focus is on doing, not theorizing. We’re a decidedly non-pedantic bunch and are usually much more interested in cool solutions than compelling theories. While it may still be common to think of anything with the REST acronym in the name as a meeting of pointy-headed geeks, that’s not us. Each year, I get to see actual code solving actual problems in the real world.

We Hack, Too
Every year, we also host a hack day where everyone gets together to work on cool REST-related Web stuff. This year, Erik Mogensen will be leading the day. From what I’ve seen, he’s got some cool ideas in store for us, too.

It’s Easy to Join Us
Just as we cut down on the ceremony surrounding speaking and participating in a conference, we also try to eliminate the ceremony around signing up and showing up for REST Fest. It’s quite easy:

  1. Join our mailing list to see what we’re all about
  2. Drop into the IRC channel to chat us up
  3. Hop onto the GitHub wiki and create your “people page”
  4. Head over to the registration page and reserve your seat for the event

There’s no waiting to see if your talk was accepted; no wondering if what you’re working on would be interesting to some review committee. Just sign up, post your ideas and head down to sunny Greenville, SC for a great weekend.

Need More REST Fest NOW?
Can’t wait for REST Fest 2013 to get started? Take a look at our Vimeo channel, with all the talks from previous years. There’s lots of very cool stuff there.

See you in September!

(Originally published on my personal blog.)

July 23rd, 2013

Interoperability, Not Integration

It’s a small semantic difference, really, but one I think is worth calling out. When working in large distributed systems, it’s better to aim for interoperability than integration. And here’s why…

Integration for a Single System
Part of the Merriam-Webster Online Dictionary’s definition of integration is particularly relevant here:

A common approach to working with large distributed systems – e.g. internal networked implementations that run at various locations within a single organization or implementations that rely on some Web-based service(s) – is to attempt to treat the entire operation as a single unit, a “whole system”.

Bad idea!

These “whole systems” can also be called “closed systems”. In other words, people work to create a fully-controlled single unit for which, even when elements are separated by space (location) and time (“We built that part three years ago!”), there is an expectation that things will work as if they are all physically local (on a single machine) and temporally local (there is no significant delay in the completion of requests). As you might expect, attempting this almost always goes badly – at least at any significant scale.

There are several reasons for attempting this approach. The most common is that treating everything as “your system” is mentally easy. Another reason this single-system view prevails is that most tooling acts this way. The legacy of edit and build tools is that all components and data are local and easily accessible. How else would we be able to do things like code completion and data model validation?

Anyway, the point here is that “integration” is an anti-pattern on the Web. It’s not a good idea to use it as your mental model when designing, implementing and deploying large-scale systems.

Interoperability for Working with Other Systems
As you might have guessed, I find Merriam-Webster’s definition for interoperability much more valuable:

The interoperability mindset takes a different approach. In this view, you want – whenever possible – to treat things as interchangeable; as things that can be swapped out or re-purposed along the way. Interestingly, Merriam-Webster notes the first known use of this term was in 1977. So, the idea of interoperability is relatively new compared with “integration”, which was first used in 1620, according to Merriam-Webster.

An interoperability-focused approach leads to systems that do not need to “understand” each other, just ones that use interchangeable parts. Especially in widely-distributed systems, this interchangeability has a very high value. It’s easier to replace existing items in a system (e.g. changing data-storage vendors), re-use existing parts for other needs (e.g. applying the same editing component built for a blogging service to a new print publishing service) and even re-purpose parts when needed (e.g. using the file-based document caching system to provide caching for logged-in user sessions).
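
To make the interchangeability idea concrete, here is a minimal TypeScript sketch (all names are hypothetical, not taken from any real system): callers depend only on a narrow contract, so one part can be replaced, re-used or re-purposed without touching the code that uses it.

interface DocumentStore {
  save(key: string, body: string): Promise<void>;
  load(key: string): Promise<string | undefined>;
}

// One interchangeable part: an in-memory store.
class MemoryStore implements DocumentStore {
  private items = new Map<string, string>();
  async save(key: string, body: string): Promise<void> { this.items.set(key, body); }
  async load(key: string): Promise<string | undefined> { return this.items.get(key); }
}

// Another part backed by files, a database or a cloud vendor could be swapped in
// without changing any caller, because callers only see the DocumentStore contract.

// Re-purposing: the same store built for documents can also cache user sessions.
async function cacheSession(store: DocumentStore, sessionId: string, data: object): Promise<void> {
  await store.save("session:" + sessionId, JSON.stringify(data));
}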

The primary challenge to thinking like an inter-operator instead of an integrator is that there are no easy tools for this kind of work. Pretty much all interoperability work is done by creative thinkers in the field (“We could just use our existing storage system for that.”). You usually need rather specific knowledge of what’s available on site and what the existing parts can do in order to execute on interoperability.

Despite the extra cost of interoperability, there are important benefits for distributed systems that must operate over a long period of time. That’s why so much of the Web relies on interoperability. The standards we use for DNS, HTTP, HTML etc. all assume that varying products and services are free to decide what they do and how they do it as long as they inter-operate with other products and services on the Web.

Treat the Network Space as a Bunch of Other Systems
If you take the approach of treating everything in your network space (e.g. your local intranet or any system that relies on at least one Web-based service) as a bunch of “other systems” you’ll be better off in the long term. You’ll stop trying to get everyone to work the same way (e.g. using the same storage model or object model or resource model) and will be free to start working with other teams on how you can share information successfully across systems, via interoperability.

Even better, large organizations can get a big value out of using the interoperability model for their implementations. In practice, this means fostering an internal ethos where it’s fine to be creative and solve problems in novel ways using whatever means are at your disposal as long as you make sure that you also support interoperability with the rest of the parts of the system. In other words, you have the freedom to build whatever is most effective locally as long as it does not threaten your interoperability with the other parts.

There are lots of other benefits to adopting interoperability as the long-term implementation goal but I’ll stop here for now and just say, to sum up:

  • As a general rule, treat your implementations as exercises in interoperability, not integration.

(Originally published on my personal blog)

July 10th, 2013

Chicago, Sydney, Melbourne, Toronto

Over the span of about two weeks, I’ll be visiting four cities, three countries and two continents, as part of Layer 7’s continuing free API Workshop series. Along the way, I’ll be joined in each city by great folks from both Layer 7 and CA Technologies.

Layer 7 has already hosted lots of How to Implement a Successful API Strategy workshops this year, across Europe and North America, with content delivered by my API Academy colleagues Ronnie Mitra, Alex Gaber, Holger Reinhardt and Matt McLarty. Over the last few months, I’ve had the pleasure to meet dozens of attendees working on some incredibly interesting projects using APIs on the Web and on internal networks.

Each half-day event includes high-level summaries of the most popular topics from our Introduction to APIs Workshop and API Design & Architecture Boot Camp and – like all our workshops – each is highly interactive. Whether you are just starting to consider incorporating APIs into your distribution model or are already well into a live implementation, these sessions provide a great way to see and hear how others are approaching the same space and to ask questions about how you and your organization can improve the design, implementation and lifecycle maintenance of your Web-based APIs.

Here’s where I’ll be during the next two weeks:

  • Chicago – Jul 16
    If you’re in the US Midwest, there are still a few open seats for this workshop.
    Register now >>
  • Sydney – Jul 24, Melbourne – Jul 25
    I’ll be joined at the Sydney and Melbourne events by Layer 7’s CTO, Scott Morrison.
    Register for Sydney >>
    Register for Melbourne >>
  • Toronto – Aug 1
    This one will include a presentation from Layer 7 co-founder Dimitri Sirota.
    Register now >>

We’re getting great feedback from attendees, so if you haven’t been able to attend one of our workshops yet this year, now is a great time to pick a location near you, sign up and see what the fuss is all about. One more thing: If you don’t see a convenient location on the list, don’t worry. We’re already gearing up for our fall schedule and you’ll be seeing lots of new locations and content appearing soon.

June 7th, 2013

Hypermedia Workflow Questions

I fairly often get emails following up on the workshops, articles, webinars and online tutorials I take part in. I can’t always answer these questions directly and sometimes deal with them in blog posts or online articles. Following my recent API Tech Talk on hypermedia, I got some questions from Abiel Woldu about how to handle hypermedia support when the same backend operation is called from different workflows. Here’s part of Abiel’s email:

“Say you have an end point for validating address; call it /validateAddress. Now this endpoint is called from two work flows.

  1. When a user updates his account settings (changes a new address)
  2. When a user tries to buy a product and enters the shipment address

In both cases the /validateAddress should give different set of links and forms as part of the response of validation (next step affordances) because the flow is different. In this case what is the set of the next links and forms returned from the endpoint? Is it the union of the two workflows and the client knows how to get what it needs? Or does the client send information of which flow it is in and the server uses the information to figure out what response to give it?”

Decoupling Backend Processes from Public URIs
This kind of question comes up frequently. Essentially, there are a couple of assumptions here that are worth exploring. The first is the idea that a backend operation (e.g. “validateAddress()”) is exposed over HTTP as a single endpoint, no matter the calling context. This is not a requirement. In fact, it is advantageous to decouple public addresses (URIs) from private operations on the server. HTTP (whether using HTTP-CRUD, Hypermedia-REST or some other model) offers the advantage of using multiple public URIs to point to the same backend operation. For example, it is perfectly correct to publish both /validateExistingAddress and /validateNewAddress URIs, each of which points to the same “validateAddress()” operation on the server.
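
As a rough illustration (assuming a Node server built with Express; the routes and function names here are hypothetical), two public URIs can forward to the same private operation like this:

import express from "express";

const app = express();
app.use(express.json());

// The private operation is never exposed directly.
function validateAddress(address: object): { valid: boolean } {
  // ...real validation logic would live here...
  return { valid: true };
}

// Two distinct public URIs, one backend operation.
app.post("/validateExistingAddress", (req, res) => {
  res.json(validateAddress(req.body));
});

app.post("/validateNewAddress", (req, res) => {
  res.json(validateAddress(req.body));
});

app.listen(8080);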

Not Everything Needs a URI
Just because the backend server has an operation such as “validateAddress()” does not mean there has to be a URI associated with that operation. For example, the “user updates his account settings” workflow need not have a direct URI call to “validateAddress()”. Instead, there could be an account settings resource (/account-settings/) that supports the HTTP.PUT method and accepts a body containing (among other things) a modified address. Executing this client-side operation (PUT /account-settings/) passes data to the server; the server then – along with other operations – calls “validateAddress()” itself and reports the results to the client.

The same can be done in the case of “user tries to buy a product and enters the shipment address”. This address validation could be a small part of the server-side operation and processing of an HTTP.POST to a /check-out/ resource.
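
Continuing the same hypothetical Express sketch, folding the validation into the account-settings and check-out resources might look something like this:

// No /validateAddress URI at all: the server calls validateAddress() internally
// while handling a PUT to the account-settings resource.
app.put("/account-settings/", (req, res) => {
  const result = validateAddress(req.body.address);
  if (!result.valid) {
    res.status(400).json({ error: "invalid address" });
    return;
  }
  // ...apply the remaining settings changes here...
  res.json({ updated: true });
});

// A /check-out/ resource can fold in the same call as one small part of
// processing an HTTP.POST that includes the shipment address.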

Mapping Actions to URI & Method
In the HTTP-CRUD model, the focus is on using URIs to identify entities and/or operations and using the protocol methods to perform actions. For example, an /addresses/ resource might support adding (POST), modifying (PUT), removing (DELETE) and retrieving (GET) addresses associated with a context (logged-in user, check-out processing, etc.). In this case, POSTing or PUTing a resource body to the server allows the server to call the “validateAddress()” operation (among other things) and report results to the client.
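
A sketch of that mapping, again using the hypothetical Express app from above:

// URIs identify the entity; the protocol methods carry the action.
app.get("/addresses/", (req, res) => {
  res.json([]);                    // retrieve the address collection
});
app.post("/addresses/", (req, res) => {
  validateAddress(req.body);       // server-side validation, then add
  res.status(201).end();
});
app.put("/addresses/:id", (req, res) => {
  validateAddress(req.body);       // server-side validation, then modify
  res.status(204).end();
});
app.delete("/addresses/:id", (req, res) => {
  res.status(204).end();           // remove
});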

Mapping Actions to Hypermedia Controls
In the hypermedia model, actions are described using a hypermedia control such as a link or form. The URI is not important in this model. Instead, the control has an identifier (e.g. “validate”), indicates a protocol action (“POST”) and lists state data to include in the payload.

In Siren it might look like this:

"actions": [
 {
     "name": "validate",
     "title": "Validate an Address",
     "method": "POST",
     "href": "...",
     "type": "application/x-www-form-urlencoded",
     "fields": [
           { "name" : "Street", "type" : "text", "value" : "123 Main Street" },
           { "name" : "City",   "type" : "text", "value" : "Byteville"},
           { "name" : "State",  "type" : "text", "value" : "MD" },
           { "name" : "ZIP",    "type" : "text", "value" : "12345"}
     ]
     }
 ]

Note that I didn’t bother to enter a value for the href in this example. It could be any valid URL; I just left it out.

Tracking Workflow Progress Within Messages
Here’s another question from Abiel Woldu’s email:

“The concept of which work flow the client is going through – is it code that should reside in the API code itself or it’s something that sits outside in some other gateway or something?”

When implementing processes over HTTP, it’s wise not to rely on stateful multi-request chains. In other words, don’t expect either the client or server to keep track of where some request belongs in a workflow. Instead, include that information in the request and response bodies themselves. This pattern of including all the important context information with each request and response not only ensures that each request can be handled independently (e.g. in a load-balanced cluster), it also helps clients and servers do work within varying time spans (e.g. a client can cache the last request to disk and pick things up a day later). In the REST model, Fielding described this as making messages “self-descriptive”.

For example, there might be a use case that prompts human users to provide quite a lot of information (across various UI tabs) before finally submitting this completed set of work to the server for final validation and processing. One way to support this over HTTP is to allow clients to store “work-in-progress” (WIP) records on the server. As each “tab” (or other UI affordance) is completed, the client app is free to execute a POST or PUT operation with the payload to a URI supplied by the server. The stored data would include a value that indicates how far along in the workflow the user has progressed. This same client app could also recall stored WIP records, inspect the workflow indicator and prompt the user to pick up where she left off. Once all the required elements were supplied, the work could be forwarded for final validation and processing.
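
As an illustration only (the field names are hypothetical, not a prescribed format), a self-descriptive WIP record might look like this in TypeScript:

// The record itself says where it belongs in the workflow, so neither the client
// nor the server has to remember that between requests.
interface WipRecord {
  workflow: string;                   // which workflow this record belongs to
  completedSteps: string[];           // steps the user has already finished
  nextStep: string;                   // where to prompt the user to resume
  data: Record<string, unknown>;      // the values collected so far
}

const wip: WipRecord = {
  workflow: "checkout",
  completedSteps: ["contact-info", "shipping-address"],
  nextStep: "payment-details",
  data: { Street: "123 Main Street", City: "Byteville" }
};

// The client PUTs this record to a server-supplied URI after each "tab" is completed
// and can GET it later to pick up where the user left off.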

Dynamic Workflow via Hypermedia
Finally, in some cases, the series of steps in a workflow might vary greatly at runtime. For example, a service might support a multi-tenant model where each instance of “supply all the details for this work” has different steps or the same steps appear in differing order. The “next step” need not be memorized by the client code. Instead, hypermedia servers can inspect the current server-side configuration, check the current progress by the user and then supply the correct “next step” for this particular instance.

In this way, the client app can support a wide range of workflow details without needing custom code ahead of time (or even downloaded code-on-demand). Instead, the client app only needs to be able to recognize the “next step” link and navigate to that resource.
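
A minimal sketch of that client-side behavior (hypothetical names; the control could just as easily be a Siren action or any other hypermedia link):

interface StepResponse {
  links: { rel: string; href: string }[];
  // ...plus whatever data the current step carries...
}

// The client only knows how to recognize and follow the "next" link.
function nextStepUrl(response: StepResponse): string | undefined {
  return response.links.find((link) => link.rel === "next")?.href;
}

// Follow "next" until the server stops supplying one (assumes a global fetch,
// e.g. Node 18+ or a browser).
async function runWorkflow(startUrl: string): Promise<void> {
  let current: StepResponse = await (await fetch(startUrl)).json();
  let href = nextStepUrl(current);
  while (href) {
    current = await (await fetch(href)).json();
    href = nextStepUrl(current);
  }
}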

In Summary
In general, when using HTTP:

  1. There is no rule that you must expose internal methods as public URIs
  2. You may use more than one URI for the same backend operation
  3. In the HTTP-CRUD model, you usually map operations by linking URIs and methods
  4. In the hypermedia model, you usually map operations by linking controls and state variables
  5. It is best to use “self-descriptive” messages to track workflow progress statelessly
  6. The hypermedia model supports dynamic workflow progress using the “next step” link pattern

Thanks to Abiel for his questions and his generous permission for me to use his email and name in this blog post. If you’ve got a question that I haven’t answered online before, feel free to ping me via Twitter (@mamund) and fire away.