April 18th, 2013

Intel Buys Mashery! Is it Because the Cloud Will Have an API Inside?

For close to five years, Intel has had a stake in the API space. All the while, I’ve often asked myself why. Intel originally acquired an API Gateway from a prior Intel Capital investment that never fully blossomed. And despite the oddness of having a tiny enterprise software franchise lost inside a semiconductor behemoth, Intel persisted in its experiment, even in the face of questionable market success and lukewarm analyst reaction. So, why double down on APIs now?

With the steady decline of the PC business, Intel clearly has to look elsewhere for its future growth. The cloud datacenter is not a bad place to start. Cloud server farms clearly consume lots of processors. Still, servers powering Web sites can operate just fine without APIs, thank you. But servers powering mobile apps are a different story. Mobile apps (whether HTML5, hybrid or native) get the data that makes them valuable from applications that reside in datacenters. And APIs are the key to letting cloud data be sharable with mobile apps.

Clearly, app-centric “smart” phones and tablets and TVs and cars and watches and glasses are changing the way we go about our daily business. And APIs will power these smart devices by giving enterprise and Internet companies a way to push their data to apps. That hope of bridging the cloud with mobile is probably why Intel has kept its current API product intact. Mashery broadens Intel’s API scope by providing a way not only to share data with mobile apps but also to reach the developers that build these apps. But will this plan succeed?

If it does, it will take quite a bit of time. The reality today remains that Intel – despite the semi-recent McAfee acquisition – is not oriented to selling software or even cloud services into the enterprise. It’s missing the sales force. It’s missing the history. And in many ways, it’s missing the rest of the software stack it needs to power the networking, infrastructure and application parts that underpin data in the cloud. That will make selling an API platform comprising a legacy API Gateway and a newfound API developer platform a harder proposition. It’s kind of out there alone.

Another obvious roadblock to making the Mashery acquisition successful is that Intel’s existing API Gateway and the Mashery API service are designed for two very different audiences inside the enterprise, with irreconcilable needs. The API Gateway is designed for an IT department that wants to run its API Management layer in its own datacenter. The Mashery offering is designed for a non-IT buyer (a mobile program manager, say) who wants to run everything in someone else’s cloud.

One is technical, the other is not. One is on-premise, the other is SaaS. One sells traditional software licenses, the other pure subscription. The first aims to address internal and external API integration challenges. The latter is only really concerned with the challenge of acquiring external API developers (though Mashery would probably protest this point).

Will the two be a marriage made in heaven? Given that the Intel/Mashery partnership is already a year old and that Mashery was barely able to grow its revenues in that time, the likelihood seems remote. But who knows for sure? And anyway, Intel has probably not bought Mashery for its $12M in revenue but for its long-term potential as a pathway to mobile.

April 12th, 2013

Want ROI from Your APIs? Then Lower the Cost of Building Them

I often hear the term “ROI” used in reference to an API program. Often, it is discussed in the context of getting either direct revenue from an API or growing reach through an API, which, in some cases, translates into a lower cost of customer acquisition. While both direct revenue and reach are admirable goals, ROI from an API program is not limited to the number and quality of external developers.

For instance, most organizations will derive far more immediate payback from an API initiative if it enables internal developers, enterprise mobility initiatives, tighter partner integrations or even IT rationalization through hybrid cloud. Each of these endeavours will pay dividends in terms of productivity, agility, distribution and lowered IT costs. Each deserves its own dedicated discussion. However, underpinning all of these API business drivers – external developers included – there is one often-overlooked consideration for cost and return in any API program: how do you introduce and evolve new APIs cost-effectively?

Obviously, there are many ways to stand up an API. Many packaged software applications have some kind of API already, even if some are XML- or SOAP-centric. But in many instances, nothing exists except the desire to expose a piece of functionality or quantity of data as an API. Programmers can, of course, build “programmable interfaces” onto almost anything. It just takes time and people. However, the results will be brittle and the journey expensive.

A faster, less costly and more flexible route is to use an adaptation layer that can talk to various application or data backends and dynamically render one or more of them as an API. Using a backend adaptation layer can, with the right product, also solve the related problem of iterating on an API, in terms of both versioning and composition. Add to that the promise of facilitating new business functionality by orchestrating API interactions with external mobile, social and cloud services and you get a pretty compelling ROI story.
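
To make that concrete, here is a rough sketch of what such an adaptation layer might look like in Python (using Flask and requests, purely for illustration). The legacy backend URL, the XML element names and the field mapping are all assumptions I have made up for the example – the point is simply that an existing backend gets re-rendered as a clean JSON API without modifying the backend itself.

# A minimal sketch of an adaptation layer: it calls a hypothetical legacy
# XML backend and re-renders the result as a JSON Web API.
import xml.etree.ElementTree as ET

import requests
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_BACKEND = "http://legacy.example.internal/customerService"  # assumed backend

@app.route("/api/v1/customers/<customer_id>")
def get_customer(customer_id):
    # Fetch the legacy XML representation from the backend service
    resp = requests.get(f"{LEGACY_BACKEND}/customers/{customer_id}", timeout=5)
    resp.raise_for_status()
    doc = ET.fromstring(resp.text)

    # Re-compose the backend payload as a lean, mobile-friendly JSON resource
    return jsonify({
        "id": customer_id,
        "name": doc.findtext("name"),    # element names are illustrative
        "email": doc.findtext("email"),
    })

if __name__ == "__main__":
    app.run(port=8080)

Versioning and composition then become a matter of changing this mapping layer – adding a /api/v2 route or joining two backends into one resource – rather than rebuilding the backend.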

Not surprisingly, Layer 7 provides such an adaptation layer. Our API Gateways provide more than just security and management; they simplify backend connectivity, new API formation (i.e. composition) and novel orchestrations with all kinds of cloud, social and mobile services. Like many of our API compatriots, we provide tools that help enterprises build and foster developer ecosystems. But we also realized early on that much of the cost and potential of an API program will rest on how quickly and cost-effectively new services can be launched and evolved. Something worth considering the next time you evaluate the ROI of an API program.

March 28th, 2013

Who Owns Your Developers?

For API publishers, acquiring developers is a pretty fundamental matter. “More developers, more money and reach” goes the thinking. But are all developers of equal value? And is borrowing a developer as good as true developer ownership?

My rather unsurprising answer to both questions is: “No”. Clearly, some developers will be more valuable than others and borrowing will never be a substitute for ownership. Here’s why:
•    The only developers that matter are those that are engaged and active

Registration numbers don’t matter. “Key Wielding” this or that is marketing fluff. Looky-loos don’t build apps that drive revenue or reach. They may take your time, they may toy with your APIs but they won’t deliver business value. And if they are borrowed, “drive-by” developers, guess what – they never will!

Speaking as a vendor that helps organizations publish APIs, my advice is to always own your developers. Don’t get caught up in the promises of vendors lending access to hordes of faceless developers. The only developers that matter are the ones engaged directly with you, because those are the ones that care about your API and those are the ones that you can develop and nurture.

This doesn’t mean you shouldn’t make it easy for high-value developers to access your APIs. Giving engaged GitHub developers the ability to use their existing credentials to access your APIs is smart. There are millions of active, high-quality developers out there waiting for the right project.
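
To give a feel for what that kind of onboarding involves, here is a rough Python sketch of the standard OAuth 2.0 authorization-code dance against GitHub. The client ID, secret, redirect URI and the portal-account step at the end are placeholders I have invented for illustration – this is not any particular vendor’s implementation.

# A rough sketch of onboarding a developer via GitHub Single Sign-On using
# the OAuth 2.0 authorization-code flow.
import requests

CLIENT_ID = "your-github-app-client-id"          # assumed: a registered GitHub OAuth app
CLIENT_SECRET = "your-github-app-client-secret"  # assumed
REDIRECT_URI = "https://developer.example.com/callback"  # assumed developer portal URL

def github_login_url():
    # Step 1: send the developer to GitHub to authorize your portal
    return ("https://github.com/login/oauth/authorize"
            f"?client_id={CLIENT_ID}&redirect_uri={REDIRECT_URI}&scope=read:user")

def handle_callback(code):
    # Step 2: exchange the temporary code for an access token
    token_resp = requests.post(
        "https://github.com/login/oauth/access_token",
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET, "code": code},
        headers={"Accept": "application/json"},
        timeout=5,
    )
    token = token_resp.json()["access_token"]

    # Step 3: look up the developer's GitHub profile and link it to a portal account
    profile = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    ).json()
    return {"portal_user": profile["login"], "github_id": profile["id"]}  # illustrative

The developer signs in with credentials they already use every day, but the resulting account – and the relationship – lives with you.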

So, pick a vendor like Layer 7 that enables onboarding and Single Sign-On from GitHub and other deep pools of active, engaged developers. And be careful not to get caught up in the developer equivalent of a feel-good payday loan. You will pay a high price in the long run.

March 20th, 2013

If They Have to Ask, You Didn’t Afford It

My guess is you are familiar with the phrase “If you have to ask, you can’t afford it”. Well, that’s not what I mean here. Let me show you what I’m actually getting at…

If They Have to Ask…
Try this:

  • Create a new Web API
  • Get it up and running on some server or other
  • Hand the single URL to a client dev and say: “There ya go!”

Is the API self-descriptive? Does it contain enough information in the responses to allow client devs to know what the API is for, what it is capable of and how they can make valid requests to the server and properly parse the responses?

Here are some questions for you:

  • How many assumptions do you have about your API?
  • Are these assumptions shared by client devs?
  • All clients devs?
  • Even ones who have never met you?

If your answer to any of those questions was “No” or “I’m not sure” then it’s likely that devs will need to ask you a thing or two about how to properly use your API. That’s no big deal, right?

…You Didn’t Afford It
In everyday life, if people have to ask how to use a device (television remote, toaster etc.) then you can be sure that device is “poorly afforded” – it’s a case of weak design. We all know devices (especially electronics) that come with huge manuals and complicated explanations – and we all know what a bummer it is when that happens.

In this respect, your API is the same as any other consumer device. It should be “well afforded” – developers shouldn’t have to read the technical equivalent of War & Peace before they are able to successfully use your API.

Yes, you can supply detailed instructions in prose, provide a long list of possible methods, include lots of tables etc. These resources are helpful for devs but they can be daunting to read and cumbersome to maintain.

Another approach is to include this kind of information in a machine-readable format – and one that most devs will also understand quickly. This can be achieved by providing instructions (that get automatically updated whenever your API changes) via hypermedia controls in the response. Why write a Web page of documentation to tell devs to construct a URI and use that URI to execute an HTTP GET when you can just include that (and much more) information in your API responses?
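
As a toy illustration (the resource, its fields and the link relations below are all made up), here is how a response might carry HAL-style hypermedia controls, so a client can discover its next valid actions from the response itself rather than from out-of-band documentation.

# Build an order representation whose "_links" advertise what the client
# may do next; only actions valid in the current state are offered.
def render_order(order_id, status):
    links = {"self": {"href": f"/orders/{order_id}"}}
    if status == "open":
        links["payment"] = {"href": f"/orders/{order_id}/payment", "method": "POST"}
        links["cancel"] = {"href": f"/orders/{order_id}", "method": "DELETE"}
    return {"id": order_id, "status": status, "_links": links}

# An "open" order tells the client it can pay or cancel; a shipped order
# would expose only the "self" link - no documentation lookup required.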

Help your client devs out. Throw ‘em a bone here. Don’t make them read pages of documentation when you can just include simple run-time instructions as they’re needed.

In conclusion: If they have to ask, you didn’t afford it.

(Originally published on my personal blog.)

February 4th, 2013

More Mobile Access Predictions for 2013

With February just beginning, the mobile world is gearing up for Mobile World Congress (MWC), which will be taking place in Barcelona at the end of the month. It’ll certainly be interesting to see what new products and features will be announced at the show. From the ongoing trends (some of which Mike Amundsen recently discussed), I’d expect to see a number of announcements of IoT products.

The good old measure of progress, mobile subscriber penetration, doesn’t cut it anymore. Now, the real measure is how many other connected devices a subscriber uses – iPads, Smart TVs and even fridges (who wouldn’t want a Galaxy Kitchen or an iPad Mini?). This is just the start of a revolution in connectivity, which will make it easier than ever to consume information and equally easy to emit a lot of information, often through social networks.

But there is another aspect to this – not only will you be able to post your own information but there will be all kinds of devices that can “sense” information about you. I expect to see a lot of this at MWC – sensors and cameras scattered around the floor, mapping passers-by to Facebook profiles and other personal information. Obviously, the capture and cross-pollination of this information raises all sorts of privacy issues.

It will also have a number of significant ramifications for mobile developers. First, there will be a new wealth of information available in the form of Web service APIs, as most of the data will be stored in the cloud. The sheer scale of this new information-rich world will require apps to leverage cloud processing capabilities in order to be truly effective. This will create opportunities for enterprises to rethink their mobile architectures.

Second, mobile developers will need to use standard protocols for authentication and authorization. OAuth and OpenID Connect are key standards for protecting resources and allowing app users to authorize apps to leverage their information. Will these standards address all the privacy issues mentioned above? Probably not, but they will make it a good deal easier for app developers to comply with privacy laws and regulations.
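
For illustration only, here is a minimal Python sketch of the authorization request a mobile app might send to an OpenID Connect provider so the user can sign in and grant access. The endpoint, client ID and redirect URI are placeholder assumptions, not any specific provider’s API.

# Construct an OpenID Connect / OAuth 2.0 authorization request that the app
# opens in the system browser; the provider redirects back with a one-time
# code that the app (or its backend) exchanges for tokens.
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.example.com/oauth2/authorize"  # assumed provider
CLIENT_ID = "my-mobile-app"                                        # assumed client
REDIRECT_URI = "com.example.app://callback"                        # assumed redirect

def build_authorization_url():
    state = secrets.token_urlsafe(16)  # random value to guard the redirect against CSRF
    params = {
        "response_type": "code",       # OAuth 2.0 authorization-code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",     # the "openid" scope marks this as an OIDC request
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}", state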

Third, the most successful app developers will be those that are able to provide a seamless user experience (UX) across multiple devices. This is because the end user of the near future will naturally expect all apps to know about other sessions that user had with an app across all of his or her many smart devices. Devs will therefore want to migrate sessions across devices, to bolster the UX.

If you’re going to MWC, come and say hello to the Layer 7 team. We will be located in the App Planet area (Hall 8.1, Booth A47). I hope to see you there!