Recent Postings
March 27th, 2014

SDK vs API – Round 2

Inspired by a conversation with Kin Lane and Mike Amundsen at last year’s APIdays conference in Paris, I decided to dig deeper into the SDK-vs-API debate.

As I wrote in my last post, I had noticed a pattern of developers using API SDKs rather than the underlying Web APIs. Let’s quickly define what I mean by SDK. There used to be a time, not so long ago, when “SDK” meant documentation, code samples, build scripts and libraries all bundled together. Today, the documentation, samples and build scripts live on the Web and usually only the library remains. When I use the term SDK, I mean a programmatic API (e.g. JS, Ruby, PHP, Python) layered on top of the Web API (usually REST/JSON).
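
To make the distinction concrete, here is a minimal sketch of the difference, using a hypothetical stats service and SDK (none of the names below refer to a real vendor): the same request, first against the Web API directly and then through a language-level wrapper.

```javascript
// Hypothetical throughout -- endpoint, token and SDK names are illustrative.
var accessToken = "abc123";

// Calling the Web API directly: you deal with HTTP, URLs and JSON yourself.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/v1/steps?date=2014-01-01");
xhr.setRequestHeader("Authorization", "Bearer " + accessToken);
xhr.onload = function () {
  console.log(JSON.parse(xhr.responseText).steps);
};
xhr.send();

// An "SDK" in this sense is little more than a wrapper that hides the above
// behind a programmatic API:
function ExampleStatsSDK(token) {
  this.getSteps = function (date, callback) {
    var req = new XMLHttpRequest();
    req.open("GET", "https://api.example.com/v1/steps?date=" + date);
    req.setRequestHeader("Authorization", "Bearer " + token);
    req.onload = function () { callback(null, JSON.parse(req.responseText)); };
    req.send();
  };
}

var client = new ExampleStatsSDK(accessToken);
client.getSteps("2014-01-01", function (err, stats) {
  if (!err) console.log(stats.steps);
});
```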

As it happens, my wife gave me a small wearable activity monitor as a Christmas present (I could not possibly think of any reason why I would need that – and the new weight scale in the bathroom must have been pure coincidence). Since the gadget was uploading my stats to a cloud service and this cloud service had an API and my wife gave it to me as a present, I figured I had a perfect excuse to do some coding. My goal was to write a client-side app using JavaScript, pulling my stats from the cloud service via the API (and to keep notes on my experiences).

After registering at the developer portal, I hit my first stumbling block – no client-side JavaScript SDK. The closest I could find was a PHP SDK – which, of course, meant that I had to install PHP (I had not used PHP before) and the framework(s) used by the SDK. I also had to enable a Web server and debug the SDK’s OAuth implementation. That was not quite what I had in mind. Even though I got it running after a day and a half, I ended up quite frustrated. All that work for a simple Web page displaying the stats for a hardcoded day!

So, I decided to take a different approach and use the SDK provided by Temboo. Again, no client-side JavaScript SDK. While I could have used a Node.js SDK instead, I decided to stick with my PHP install and opted for the PHP SDK. From there, the integration was quick and I was up and running within an hour. What I did not like about the SDK was its footprint (it included all possible integrations) and the way its abstractions and data presentation leaked into my application code. If I were to build a commercial product using it, I would have to consider the potential stickiness of the SDK. Personally, this did not feel right either. So, I went back to the drawing board and looked at my options.

What had I learned so far? My frame of reference going into the experiment was a client-side app using JavaScript. Yet the only SDKs I had found were written for server-side integration. Since it was the only choice offered to me, I had to invest a significant amount of time finding all the necessary information to get the server-side components and dependencies installed and configured. But even after going through all of this, the experience with either form of SDK left something to be desired.

I had started this exercise thinking that using an SDK was much easier than coding against the Web API – maybe it was time to reassess that assumption.

Looking back at the API documentation, I could not find anything inherently difficult. Looking at the vendor-provided PHP SDK, it dawned on me that the complexity of the client was entirely due to OAuth and the use of XML as payload. This finally gave me the opening I had been looking for. If I could avoid the complexities of OAuth on the client and use JSON as the payload format, I should be able to implement my client-side app with a few lines of JavaScript against the Web API. As a regular guest at Webshell’s APIdays conference, I was familiar with the company’s OAuth Broker service. It took me a few minutes to get set up, configure the service and download the oauth.js client library. I cut and pasted the sample code into a JavaScript file, consulted the API docs to learn about switching to JSON format and made the API call. I was done in less than five lines of code!
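
For the curious, the resulting client code had roughly the following shape. This is a sketch from memory rather than verbatim sample code: the oauth.js call names approximate the broker’s library, and the provider name and endpoint are placeholders since neither is named here.

```javascript
// Sketch of the broker-based flow (assumes the broker's oauth.js is loaded).
// Call names approximate the library; provider and endpoint are placeholders.
OAuth.initialize("MY_PUBLIC_KEY");        // key from the broker's portal

OAuth.popup("activity_provider")          // broker runs the OAuth dance in a popup
  .done(function (result) {
    // 'result' proxies signed requests straight to the provider's Web API
    result.get("/v1/activities?date=2014-01-01")
      .done(function (stats) {
        document.getElementById("steps").textContent = stats.steps;
      });
  });
```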

While this experiment is just a single data point, I think it nicely illustrates some of the key lessons in the SDK-vs-API debate. I will attempt to provide a summary from both the API consumer and the API provider perspectives in my next post. I will also be talking about the SDK-vs-API issue in more detail during the upcoming Nordic APIs tour in Stockholm and Copenhagen and at the APIdays conference in Berlin.

March 26th, 2014

Of Monsters & Men & Machines

In my last post, I talked about IoT and its nascent emergence into our everyday lives, with products like Anki Drive and the Nest Thermostat beginning to get a foothold. I also talked about the need for security as IoT becomes more deeply embedded in those lives. Today, let’s look at a few real-world examples where security was an “oh, we didn’t think of that” kinda thing.

Implantable medical devices (think pacemakers, for example) are absolute lifesavers for virtually all recipients. And, as you would suspect, they need to be monitored – usually at a doctor’s office. BUT what if the recipient lives in a rural area (e.g. anywhere in Montana, North/South Dakota, Wyoming)? A quick visit to the office might be out of the question. But there’s an app for that (you knew that was coming, right?). Pop an IP address and wireless on that pacemaker, plug that address into the doctor’s app and voila! Monitoring via the Internet! Yeah! Only thing is… suppose somebody got a hold of that IP address? And suppose that somebody had access to said app? Monitoring could easily become something far more nefarious – bumping up the heartbeat, slowing it down (either of which may have the same result, mind you). Not too cool.

Or how about using a baby monitor with video? New parents are always going to want complete, unfettered access to their precious being – and the newest generation of baby monitors delivers not only audio but video and yes, with an IP address, there’s an app for that too! So mom/dad can be anywhere and keep complete tabs on the fruit of their loins. Of course, in the wrong hands, with an IP address and no security, that baby monitor suddenly becomes an audio/video surveillance tool. No big deal unless, say, that new mom or dad works in the President’s office, NORAD, banking or any one of a number of businesses where you really wouldn’t want sensitive information slipping out via casual conversation around the dinner table – with the baby monitor catching every word.

Finally, how about the car – a ubiquitous item (in many countries), the newer examples of which are chock-full of various computer systems, some of which talk to each other, some of which don’t, some of which are supposed to talk to each other but don’t (anyone played with the Cadillac CUE lately?). All these systems are there to make the driving experience either better or safer. One of them is simply brilliant – the Tire Pressure Monitoring System (TPMS) reports pressures to the primary automotive ECU, keeping the owner informed of poorly-inflated tires (when appropriate). By definition, these systems have to be wireless – and unfortunately, they are completely unsecured. What if someone within range (say, in the car behind you) used the same set of APIs that power the TPMS to send invalid data to the ECU – potentially shutting down the car or, worse, making it unsafe?

All of these examples sound outlandish, right? And yeah, they are.

Oh and they’re also all true. The remarkable Robert Vamosi details these exploits, along with many others, in his phenomenal book When Gadgets Betray Us. Writing at a layperson’s level, Vamosi shows time and time again how the emerging IoT takes security for granted or ignores it completely. It’s a scary bedtime story but worth reading. And it’s worth taking note of the key lesson: in IoT, security is very, very important.

March 10th, 2014

The Internet of Things – Today

A quick intro: I’m Bill Oakes, I work in product marketing for CA Layer 7 and I was recently elected to write a regular blog about the business of APIs. I’ve been around the block over the years – a coder, an engineer… I even wrote a BBS once upon a time (yes, I’m pre-Web, truly a dinosaur – roar!) But now I “market things”. That said, I still have a bit of geek left in me and with that in mind, this blog is going to focus not so much on the “what” or “how” of APIs, their implementations and how they affect businesses/consumers but rather on the “why” (which means, alas, I won’t be writing about the solar-powered bikini or the Zune anytime soon – I mean, really… why?)

For a first post, I thought I’d take a look at the Internet of Things (IoT) because it’s something no one else is really discussing today (cough). We are beginning to see the emergence of nascent technology that can genuinely be called the IoT. First, I’m going to look at one particular example – one that’s actually pretty representative of the (very near) future of IoT. Yes, I’m talking about Anki Drive (and if you haven’t heard of Anki Drive, you really should watch this short video).

What’s amazing about Anki Drive cars is that they know WHAT they are, WHAT their configuration is, WHERE they are, WHERE you are… Five hundred times a second (in other words, effectively real time), each of these toy cars samples this information with multiple sensors, communicates over Bluetooth Low Energy and determines thousands of actions each second. Oh… and they’re armed!

Equally amazing is the fact that kids of all ages “get it”. (By “kids”, I of course mean “males” – as once males hit 15 or so, they mentally stop growing, at least according to my wife. Then again, I’ve seen many women enjoy destroying other vehicles with Anki too… but I digress.) Players intuitively know how to use the iOS device to control their cars and, after learning the hard way that the “leader” in this race really equates to the “target”, they adapt quickly to compete against true artificial intelligence (AI) and each other. It really is an incredible piece of work and is absolutely the best representation of the IoT today.

So, you ask: “What does this have to do with moi?” Well, imagine if your car could do this kind of computation in real time as you went to work. Certainly, Google is working aggressively on this track but Anki lets you get a feel for it today. (And I’m fairly certain Google is not going to provide weaponry in its version.) Still, the real-world application of this technology is a ways away. Let’s rein in our timeframes and take a look at what is happening with the IoT in other industries today.

Imagine that your appliances knew about and could talk to each other. Google, through its Nest acquisition, is working on this with its learning thermostat. My first thought on the Nest was something along the lines of: “What kind of idiot would spend $250 on a thermostat when you can get a darned good programmable one for around $50?” But then Nest introduced the Protect – simply an (expensive) smoke detector with CO detection built in. Big deal, right? Except that if your Nest Protect detects CO, it makes the somewhat logical assumption that your furnace is malfunctioning and sends a command to the thermostat to shut down said furnace. That is the power of the IoT in the real world today. So I bought into Nest (thus answering that previous question) and, yeah, it’s pretty cool – not nearly as cool as Anki Drive but then Anki really doesn’t care if my furnace has blown up and Nest does.
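
In code terms, the device-to-device rule is almost trivially simple. Here is a sketch of the idea in JavaScript – the event and device objects are stand-ins of my own, not Nest’s actual API:

```javascript
// Illustrative stand-ins for the two devices; this is not Nest's API.
var thermostat = {
  setHvacMode: function (mode) { console.log("thermostat -> " + mode); }
};

var protect = {
  handlers: {},
  on: function (event, fn) { this.handlers[event] = fn; },
  emit: function (event, data) { this.handlers[event](data); }
};

// The rule itself: a CO alarm most plausibly means the furnace is the
// source, so command the thermostat to shut the furnace down.
protect.on("co_alarm", function (alarm) {
  if (alarm.level === "emergency") {
    thermostat.setHvacMode("off");
    console.log("CO detected -- furnace shut down as a precaution");
  }
});

// Simulate an alarm to exercise the rule.
protect.emit("co_alarm", { level: "emergency" });
```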

As we see more and more real-world introduction of functional, useful IoT solutions, these solutions will all have one thing in common: they will use APIs to communicate. And what IoT will absolutely require is a solution that ensures only the right devices can communicate with the other right devices, in the right way, returning the right results, with no fear of Web-based (or, technically, IoT-based) threats, bad guys, man-in-the-middle (MITM) attacks etc. As solutions roll out, it’ll be interesting to see how many vendors remember that security and performance are not optional in IoT – they are 100% essential.
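
One concrete building block for “only the right devices” is mutual TLS, where each device presents a certificate issued by a common authority and refuses peers that can’t. A minimal Node.js sketch of the listening side (file names hypothetical):

```javascript
// Device-side TLS endpoint that only accepts peers holding certificates
// signed by the fleet's CA. File names are hypothetical.
var tls = require("tls");
var fs = require("fs");

var server = tls.createServer({
  key: fs.readFileSync("device-key.pem"),
  cert: fs.readFileSync("device-cert.pem"),
  ca: [fs.readFileSync("fleet-ca.pem")],   // only trust our own fleet CA
  requestCert: true,                       // demand a client certificate...
  rejectUnauthorized: true                 // ...and refuse any not signed by that CA
}, function (socket) {
  // Only devices holding a fleet-issued certificate ever get here.
  socket.write("hello, trusted peer\n");
  socket.end();
});

server.listen(8443);
```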

February 27th, 2014

New API Academy Team Member: Irakli Nadareishvili

The API Academy team has a new member: Irakli Nadareishvili, who has joined CA Layer 7 as Director of API Strategy. Before joining CA, Irakli served as Director of Engineering for Digital Media at NPR, which is noted for its leadership in API-oriented platform design. He has also participated in the creation of the Public Media Platform, worked with whitehouse.gov and helped a number of major media companies develop publishing solutions using open source software.

I recently sat down with Irakli to discuss what he has in mind as he joins API Academy.

MM: You once told me that you believe the future of Big Data is “linked APIs”. That sounds intriguing. Tell me more about it.

IN: In most people’s minds, “Big Data” is synonymous with “very large data”. You may hear: “Google-large” or “Twitter-large” or “petabytes”. The Wikipedia definition of Big Data is slightly more elaborate:

“Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications”.

In my work, I see the “complex” part of the definition becoming more important than the size. We have gotten pretty good at taming the large sizes of data. Tooling for horizontal partitioning and parallel processing of large data sets is now abundant. Still, most Big Data sets are contained and processed in the isolation of single organizations. This is bound to change very soon. The end of siloed Big Data is near: I believe that the next phase of Big Data challenges will have to do with data sets that cross organizational boundaries.

APIs will play a major role in this. Web APIs represent the most effective available technology that allows data to cross organizational boundaries. APIs efficiently connect and link data at a distance.

MM: Can you give an example of what you mean by “data sets that cross organizational boundaries”? And what challenges do these pose?

IN: You see, a lot of people have the notion that the data they need to process can be stored in a database maintained by a single organization. This notion is increasingly inaccurate. More and more, organizations are having to deal with highly-distributed data sets.

This can be very challenging. The infamous healthcare.gov is a good example of such a distributed system. The main technical challenge of implementing healthcare.gov’s backend was that it had to integrate with data in many existing systems.

The $500 million initial public fiasco of healthcare.gov is also a vivid indication of just how complex it is to build truly distributed systems. Practically the only successful implementation of such a large, distributed information system is the World Wide Web. There’s a lot we can learn from the architecture of the Web. It’s a battle-tested blueprint for building distributed systems at scale.

I believe the Big Data challenges of the future will be solved at the intersection of APIs with Web/hypermedia architecture, linked data and what we currently call Big Data tooling. I call this intersection “Linked APIs”, to differentiate it from the current, siloed state of most Web APIs.

MM: What practical advice would you give to the developers of future Big Data APIs?

IN: I think the most important thing is that we need to stop falsely assuming all of the API data is local data. It is not. Despite the name, an API for a distributed system is really not a “programming interface” to local data/assets. Rather, it is a programmable data index. Think of APIs as a programmable search index for a distributed collection of data sets.

I don’t like to think of the term “API” as an abbreviation anymore. Maybe it was one a while ago but it has since evolved way past that. Much like IBM doesn’t think of itself as “International Business Machines” anymore, APIs aren’t merely “application programming interfaces”. Most of what IBM does these days isn’t even necessarily about “machines”. Likewise, most of what we need out of APIs isn’t about any single application or an interface to it.

MM: Big Data represents one important challenge for computing today. What about IoT?

IN: The Internet of Things is already here, in small ways. The IoT we have today consists of a vast number of Web-connected devices, acting as sensors, sending myriad signals to the cloud. That, by the way, is what creates many Big Data challenges. The future is much more interesting, however. Once connected devices start engaging in peer-to-peer interactions, bypassing central authority, we will enter a significantly different realm. The most important challenge in that world, from my perspective, will be identity. Identity is always key in distributed systems but especially so in peer-to-peer networks.

MM: What excites you the most about your new role at Layer 7?

IN: Thank you for asking this question. I will start by telling you what terrifies me the most. The API Academy and Layer 7 teams represent a gathering of “scary” amounts of world-class brainpower and expertise in the API space. It is extremely humbling to be part of such a distinguished group.

Obviously, it also means that there is a lot of very fundamental thinking and innovation that happens here. Especially now that Layer 7 is part of CA Technologies, there’s really very little that we couldn’t accomplish if we put our minds to it. That feels extremely empowering. I really care about all things related to APIs and distributed systems and the role they can play in the future of technology. I am super excited about the possibilities that lie ahead of us.

February 26th, 2014

What We Should Learn from the Apple SSL Bug

Two years ago, a paper appeared with the provocative title “The Most Dangerous Code in the World.” Its subject? SSL, the foundation for secure e-commerce. The world’s most dangerous software, it turns out, is a technology we all use on a more-or-less daily basis.

The problem the paper described wasn’t an issue with the SSL protocol, which is a solid and mature technology, but with the client libraries developers use to start a session. SSL is easy to use but you must be careful to set it up properly. The authors found that many developers aren’t so careful, leaving the protocol open to exploit. Most of these mistakes are elementary, such as not fully validating server certificates and trust chains.
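
To see how elementary these mistakes are, consider this Node.js illustration (host and path are placeholders, not taken from the paper): the first request shows how validation typically gets switched off in the name of “fixing” a certificate error; the second is what careful code looks like.

```javascript
var https = require("https");

function handleResponse(res) {
  console.log("status:", res.statusCode);
}

// The classic mistake: "fixing" a certificate error by disabling validation.
// This silently accepts ANY certificate, inviting man-in-the-middle attacks.
https.get({
  host: "api.example.com",
  path: "/v1/account",
  rejectUnauthorized: false        // don't do this
}, handleResponse);

// The careful version: validate the certificate chain against trusted CAs
// (the default) and let TLS errors surface instead of being papered over.
https.get({
  host: "api.example.com",
  path: "/v1/account",
  rejectUnauthorized: true         // the default, but worth being explicit
}, handleResponse);
```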

Another dramatic example of the pitfalls of SSL emerged this past weekend, as Apple issued a warning about an issue discovered in its own SSL libraries on iOS. The problem seems to come from a spurious goto fail statement that crept into the source code, likely the result of a bad copy/paste. Ironically, fail is exactly what this extra code did. Clients using the library failed to completely validate server certificates, leaving them vulnerable to exploit.
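
The offending code is C, deep in Apple’s SecureTransport handshake; transposed into JavaScript for illustration (the helper functions here are hypothetical stubs), the shape of the bug is a duplicated early return that the indentation hides:

```javascript
// Trivial stubs so the sketch runs; in real life these do cryptography.
function hashUpdate() { return 0; }       // 0 means success
function hashFinal() { return 0; }
function checkSignature() { return -1; }  // would reject a forged signature

function verifyServerKeyExchange(hashCtx, signedParams, signature) {
  var err;
  if ((err = hashUpdate(hashCtx, signedParams)) !== 0)
    return err;
    return err; // the duplicated line: it runs unconditionally with err === 0,
                // so everything below is unreachable and the handshake
                // "verifies" successfully no matter what was signed
  if ((err = hashFinal(hashCtx)) !== 0)   // never reached
    return err;
  return checkSignature(hashCtx, signature); // never reached
}

// Prints 0 ("success") even though the signature was never checked.
console.log(verifyServerKeyExchange({}, {}, {}));
```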

The problem should have been caught in QA; obviously, it wasn’t. The lesson to take away here is not that Apple is bad — it responded quickly and efficiently, the way it should — but that even the best of the best sometimes make mistakes. Security is just hard.

So, if security is this hard and people will always make mistakes, how should we protect ourselves? The answer is to simplify. Complexity is the enemy of good security because complexity masks problems. We need to build our security architectures on basic principles that promote peer-reviewed validation of configuration as well as continuous audit of operation.

Despite this very public failure, it is safe to rely on SSL as a security solution but only if you configure it correctly. SSL is a mature technology and it is unusual for problems to appear in libraries. But this weekend’s events do highlight the uncomfortable line of trust we necessarily draw with third-party code. Obviously, we need to invest our trust carefully. But we also must recognize that bugs happen and the real test is about how effectively we respond when exploits appear and patches become available. Simple architectures work to our favor when the zero-day clock starts ticking.

On Monday at the RSA Conference, CA Technologies announced the general availability of the new Layer 7 SDK for securing mobile transactions. We designed this SDK with one goal: to make API security simpler for mobile developers. We do this by automating the process of authentication and setting up secure connections with API servers. If developers are freed from tedious security programming, they are less likely to do something wrong — however simple the configuration may appear. In this way, developers can focus on building great apps instead of worrying about security minutiae.

In addition to offering secure authentication and communications, the SDK also provides secure Single Sign-On (SSO) across mobile apps. Use the term “SSO” and most people instinctively picture one browser authenticating across many Web servers. This common use case defined the term. But SSO can also apply to the client apps on a mobile device. Apps are highly independent on iOS and Android, and sharing information between them, such as an authentication context, is challenging. Our SDK does this automatically and securely, providing a VPN-like experience for apps without the very negative user experience of mobile VPNs.

Let me assure you that this is not yet another opaque, proprietary security solution. Peel back the layers of this onion and you will find a standards-based OAuth and OpenID Connect implementation. We built this solution on top of the Layer 7 Gateway’s underlying PKI system and we leveraged this to provide increased levels of trust.

If you see me in the halls of the RSA Conference, don’t hesitate to stop me and ask for a demo. Or drop by the CA Technologies booth where we can show you this exciting new technology in action.