Mike Amundsen

Mike Amundsen is Layer 7’s Principal API Architect. An internationally-known author and lecturer, Mike travels throughout the United States and Europe, consulting and speaking on a wide range of topics including distributed network architecture, Web application development and cloud computing. His recent work focuses on the role hypermedia plays in creating and maintaining applications that can successfully evolve over time. He has more than a dozen books to his credit, the most recent of which is Building Hypermedia APIs with HTML5 & Node. He is currently working on a new book on “cloud stack” programming. When he is not working, Mike enjoys spending time with his family in Kentucky.

May 10th, 2013

Making Government Data “Easy to Find, Accessible & Usable”

On May 9, 2013, the White House released an executive order titled Making Open and Machine Readable the New Default for Government Information. My favorite line in the entire document is:

“Government information shall be managed as an asset throughout its life cycle to promote interoperability and openness, and, wherever possible and legally permissible, to ensure that data are released to the public in ways that make the data easy to find, accessible, and usable” (emphasis mine).

No Dumping
The usual approach to this type of work is to simply publish raw data in a directory or repository, then build some fencing around the data to track usage and distribution. Essentially, making government data “open” becomes a data-dumping operation. This practice fails on all three of President Obama’s key points. First, data dumps make finding valuable information anything but easy. Second, even though the content might appear in a standard format like XML, CSV or JSON, it is hardly accessible (except to geeks, who love this kind of stuff). And finally, raw data is hardly ever usable. Instead, it’s a mind-numbing pile of characters and quote marks that must be massaged and re-interpreted before it comes close to usability.

So, while this new directive offers an opportunity to make available a vast amount of the data the government collects on our behalf, the devil is in the details. And the details are in the interface – the API. As with poorly-designed kitchen appliances and cryptic entertainment center remote controls, when it takes extensive documentation to explain how to use something, the design has failed. There’s a simple principle here: poor API design results in unusable data.

Affordable Data
It doesn’t have to be this way, of course. Government departments have the opportunity to implement designs that meet the goals set forth in the executive order. They can make it easy for people to find, access and use the data. They can publish not just data but APIs that afford searching, filtering and exploring the data in a meaningful and helpful manner; APIs that empower both users and developers to successfully interact with the data, without resorting to a dashboard featuring dozens of options or mind-numbing explanations.

In the (likely) event that the initial open data release consists of mere data, companies and individuals would be well advised to resist the temptation to build a multitude of “one-off” applications, each of which solves a single problem or answers a narrow set of questions for some subset of the data. Instead, work should be put into converting the raw data into usable API formats such as Atom, OData, HAL, Collection+JSON and HTML (to name just a few). APIs should be designed with the same care that would be given to any interactive experience. Investment in tools and technologies that can properly represent the data in multiple formats while supporting various use cases and access requirements will yield great results.
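To make the idea concrete, here is a minimal sketch of what “converting raw data into a usable API format” can look like, using the Collection+JSON structure mentioned above. The dataset, base URI and field names are hypothetical, invented purely for illustration:

```javascript
// Sketch: wrap raw records into a Collection+JSON document.
// The "spending" dataset and its fields are hypothetical examples.
function toCollectionJson(baseUri, records) {
  return {
    collection: {
      version: "1.0",
      href: baseUri,
      items: records.map((rec) => ({
        href: `${baseUri}/${rec.id}`,
        data: Object.keys(rec)
          .filter((k) => k !== "id")
          .map((name) => ({ name, value: rec[name] }))
      })),
      // Queries advertise how the data can be searched -- the client
      // discovers this at run time instead of reading documentation.
      queries: [
        {
          rel: "search",
          href: `${baseUri}/search`,
          data: [{ name: "year", value: "" }]
        }
      ]
    }
  };
}

const doc = toCollectionJson("http://example.org/spending", [
  { id: "r1", agency: "DOT", amount: 1200 }
]);
console.log(JSON.stringify(doc, null, 2));
```

The same raw records could just as easily be projected into Atom, HAL or plain HTML from one internal model – the point is that the representation carries affordances (here, the search query), not just data.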

Open Data APIs
In the end, organizations that know the importance of a good interface, the power of choice and the freedom of flexible representations will be able to convert raw data into valuable information, which can be consumed by a wide range of users, platforms and devices. These considerations are essential to building and supporting open data APIs.

Because – ultimately – data isn’t open unless it’s “easy to find, accessible, and usable”.

March 20th, 2013

If They Have to Ask, You Didn’t Afford It

My guess is you are familiar with the phrase “If you have to ask, you can’t afford it”. Well, that’s not what I mean here. Let me show you what I’m actually getting at…

If They Have to Ask…
Try this:

  • Create a new Web API
  • Get it up and running on some server or other
  • Hand the single URL to a client dev and say: “There ya go!”

Is the API self-descriptive? Does it contain enough information in the responses to allow client devs to know what the API is for, what it is capable of and how they can make valid requests to the server and properly parse the responses?

Here are some questions for you:

  • How many assumptions do you have about your API?
  • Are these assumptions shared by client devs?
  • All client devs?
  • Even ones who have never met you?

If your answer to any of those questions was “No” or “I’m not sure” then it’s likely that devs will need to ask you a thing or two about how to properly use your API. That’s no big deal, right?

…You Didn’t Afford It
In everyday life, if people have to ask how to use a device (television remote, toaster etc.) then you can be sure that device is “poorly afforded” – it’s a case of weak design. We all know devices (especially electronics) that come with huge manuals and complicated explanations – and we all know what a bummer it is when that happens.

In this respect, your API is the same as any other consumer device. It should be “well afforded” – developers shouldn’t have to read the technical equivalent of War & Peace before they are able to successfully use your API.

Yes, you can supply detailed instructions in prose, provide a long list of possible methods, include lots of tables etc. These resources are helpful for devs but they can be daunting to read and cumbersome to maintain.

Another approach is to include this kind of information in a machine-readable format – and one that most devs will also understand quickly. This can be achieved by providing instructions (that get automatically updated whenever your API changes) via hypermedia controls in the response. Why write a Web page of documentation to tell devs to construct a URI and use that URI to execute an HTTP GET when you can just include that (and much more) information in your API responses?
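Here is a small sketch of that idea. The resource, URIs and link relations are hypothetical; the point is that the client follows links advertised in the response instead of constructing URIs from documentation:

```javascript
// Sketch: the server's response carries its own instructions.
// Instead of documentation saying "GET /users/{id}/orders", the
// representation itself includes the link. Names are illustrative.
const response = {
  id: "user-42",
  name: "Ada",
  _links: {
    self:   { href: "/users/user-42" },
    orders: { href: "/users/user-42/orders" }
  }
};

// A client that follows advertised links never hard-codes URI templates:
function follow(rep, rel) {
  const link = rep._links && rep._links[rel];
  if (!link) throw new Error(`no link with rel '${rel}' in response`);
  return link.href; // in a real client: fetch(link.href)
}

console.log(follow(response, "orders")); // -> /users/user-42/orders
```

If the server later moves orders to a new URI, only the `href` in the response changes – the client code above keeps working without anyone having to ask.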

Help your client devs out. Throw ‘em a bone, here. Don’t make them read pages of documentation when you can just include simple run-time instructions as they’re needed.

In conclusion: If they have to ask, you didn’t afford it.

(Originally published on my personal blog.)

January 28th, 2013

Four Tech-Related Trends That Will Shape 2013


Looking ahead, here are four tech-related trends that I think will shape the coming year. These are trends I noticed were already in flight during late 2012. I believe they will continue to affect the way we design and implement solutions in 2013.

As you’ll see, all of my predictions are driven by the relentless increase of connected mobile devices. This is the dominating overall trend that will continue to affect all aspects of information systems.

In a nutshell, I predict:

  • Individual service deployments on the Web will get smaller and more numerous
  • Mobile client deployment will be a bottleneck
  • Server mash-ups will increase but client mash-ups will decline
  • The demand for seamless switching between personal devices will increase

Services on the Web Get Smaller, More Numerous
Influenced by the existence of the many mobile apps running on a single device, Web-based services will become small, single-focused offerings that (in the words of Doug McIlroy) “do one thing and do it well.” This will also explode the number of available services. The advantage of this trend will be an increase in the agility and evolvability of service offerings. The challenge will be an increased need for governance at the “micro-service” level.

Mobile Client Deployment Becomes a Bottleneck
As more services appear on the Web and more mobile devices spread throughout the world, keeping up with mobile app deployment will become more difficult and more costly. This is especially true for cases where an app store requires approval before release. To mitigate this problem, developers and architects will look for new ways to update and modify the functionality of already-installed mobile apps without the need for full-on redeployment. Solutions will include use of in-message hypermedia designs, reliance on remote discovery documents and just-in-time plug-in style implementations.

Server-Side Mash-Ups Increase while Client-Side Mash-Ups Decline
The increasing popularity of platforms and languages like Node.js, Erlang and Clojure will make implementing server-side mash-ups more efficient and easier to maintain than doing the same work within a client application, especially on the mobile platform. This will reduce the “chattiness” of client-side applications and increase the security and flexibility of server-side implementations. The result will be a perceived increase in responsiveness and reduced battery drain for mobile apps.
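A server-side mash-up of this kind can be sketched in a few lines of Node-style JavaScript. The two back-end calls are stubbed here with hypothetical data so the shape of the pattern is clear; in practice each would be an HTTP request to a separate service:

```javascript
// Sketch: a server-side mash-up that combines two back-end calls into
// one response, so a mobile client makes a single request instead of
// two chatty ones. Back-end fetchers are stubbed for illustration.
async function fetchProfile(userId) {
  return { userId, name: "Ada" };      // stand-in for an HTTP call
}
async function fetchRecentOrders(userId) {
  return [{ id: "o1", total: 12.5 }];  // stand-in for an HTTP call
}

// The mash-up runs both calls concurrently and merges the results,
// shifting the aggregation work (and battery cost) off the device.
async function dashboard(userId) {
  const [profile, orders] = await Promise.all([
    fetchProfile(userId),
    fetchRecentOrders(userId)
  ]);
  return { ...profile, orders };
}

dashboard("user-42").then((d) => console.log(d));
```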

Multiple Device Form Factors Will Demand Seamless Sharing
As more users access content on multiple devices, there will be an increased need to design apps that seamlessly share user data across these devices. This will affect both client- and server-side implementation details. Identity will need to cross devices easily and content syncing will need to be seamless and automatic. App builders will rely more on the “responsive design” pattern in order to automatically adjust displays and functionality to meet the needs of the current form factor. Servers will need to be “context-aware” and provide the most up-to-date content as users switch from one device to the next.

Finally, whether my predictions are spot on or way off, I look forward to a very interesting and challenging 2013.

December 14th, 2012

Three Common Web Architecture Styles

When talking to clients about the architectural details of an implementation, one of the first questions I ask is: “What architectural style is appropriate for this Web solution?” It turns out this question stumps most of my audience. Not many system architects and developers think about it. Instead, they implement solutions using whatever components and frameworks are on hand.

Each technology, service or coding framework exhibits its own “style” for solving a problem. Sometimes we select a system component because it’s familiar (“We use SQL databases because that’s what we’ve always used”). Sometimes we include one because it’s unfamiliar (“We’ve never used Node.js before, let’s try it on this project”). And sometimes we select components based on skill set (“Our team doesn’t have any experience with WebSockets, so let’s just use HTTP instead”). It’s important to step back and get a big picture view when selecting components for a production system that will (hopefully) serve your needs for an extended period of time. And that’s where architectural styles come into play.

Architectural styles set the tone for how components in a system interact, govern the implementation details and establish lines of responsibility and maintenance over time. Setting the style early on and communicating it to the team ahead of time goes a long way toward creating a stable and successful implementation. To help clients get a handle on this topic, I commonly identify three widely varying styles for Web solutions that people can easily recognize: Tunneling, Objects and Hypermedia.

The Tunneling style is best illustrated by SOAP-based implementations where all requests are “tunneled” through a limited set of components (user management, product services etc.) exposed on the Web. The Object style is one that uses the HTTP CRUD pattern (create-read-update-delete) where domain objects (users, products etc.) are exposed and basic read/write operations are supported for those objects. The Hypermedia style relies on a shared understanding through a message format (media type) that defines both the data elements (users, products etc.) and the possible actions (read, write, filter, report etc.) on those data elements. Each of these styles can be used to implement a solution and each of them has associated benefits and challenges.
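The contrast between the Object and Hypermedia styles is easiest to see side by side. Below is an illustrative sketch for a hypothetical “product” resource; the URIs and field names are invented for the example:

```javascript
// Object (CRUD) style: the response is just the domain object.
// The client must already know, out of band, which URIs exist and
// which HTTP methods each one supports.
const crudResponse = { id: "p1", name: "Widget", price: 9.99 };

// Hypermedia style: the same data, plus the possible actions,
// expressed inside the message itself.
const hypermediaResponse = {
  data: [
    { name: "name",  value: "Widget" },
    { name: "price", value: 9.99 }
  ],
  links: [
    { rel: "self",  href: "/products/p1" },
    { rel: "edit",  href: "/products/p1", method: "PUT" },
    { rel: "index", href: "/products" }
  ]
};

console.log(crudResponse, hypermediaResponse);
```

In the Tunneling style, by contrast, both the data and the action would typically be wrapped in a single operation envelope (e.g. a SOAP body) posted to one endpoint.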

This comes up so often that we’ve created a short API Academy video introducing the subject of architectural styles for the Web. Take a look and see if it gives you some ideas for how you can answer this question the next time you are about to embark on a major system implementation: “What architectural style is appropriate for this Web solution?”

December 7th, 2012

Use Hypermedia to Reduce Mobile Deployment Costs

I speak about the power and flexibility of hypermedia quite often. I explain the general idea behind hypermedia, discuss its historical roots and show how it can help client applications adapt to changes in data input and application flow. Essentially, a hypermedia-based approach aims to take key elements often placed into the client’s source code and move them into the actual response messages sent by the server.

I also point out that using a hypermedia-based approach to building client and server applications takes a different kind of effort than using RPC-style approaches. And I explain that, currently, there is a limited amount of tooling available to support the process of designing, implementing and maintaining hypermedia-style systems.

If your work involves designing, building, testing and deploying a mobile client application, it is likely you need to deal with an “app store” or some other process where your packaged application must be submitted for review and approval before it is available to users for download. This can happen not only with well-known public offerings such as the Apple App Store but also within any organization that provides its own application repository aimed at ensuring the safety and consistency of user-available mobile apps.

In an environment of quick-turnaround, agile-style implementations this “app store” approval can be a real bottleneck. It may take not just days but weeks before your app is tested, approved and posted. This can be especially frustrating when you want to deploy a rapid-fire series of enhancements in response to an engaged user community.

A hypermedia-based client design can often support UI, data transfer and workflow modifications by altering the server messages rather than altering the client source code. By doing this, it is possible to improve both the user experience and the system functionality without the need for re-submitting the client code for “app store” review and re-deployment. This also has the potential to reduce the need for interrupting a user’s day with download and reinstall events and can, in the process, cut down on the bandwidth costs incurred during the repeated rollouts of code modifications to a potentially large user base.
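A toy sketch of the mechanism: the client renders whatever input fields the server's message describes, so adding a field is a server-side change only. The message shape and field names here are hypothetical:

```javascript
// Sketch: a client that renders whatever the server's message
// describes. Changing the message changes the UI -- no client
// redeployment, no app-store review cycle.
function renderForm(message) {
  return message.fields.map((f) => `${f.prompt}: [${f.name}]`);
}

// Version 1 of the server message:
const v1 = { fields: [{ name: "email", prompt: "E-mail" }] };

// Later, the server adds a field; the installed client adapts:
const v2 = {
  fields: [
    { name: "email", prompt: "E-mail" },
    { name: "phone", prompt: "Phone" }
  ]
};

console.log(renderForm(v1)); // one input rendered
console.log(renderForm(v2)); // two inputs, same client code
```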

Improved agility, a better user experience and reduced bandwidth costs are all tangible benefits that are possible when investing in a hypermedia-based implementation for your mobile client application.