Holger Reinhardt

Holger Reinhardt is a Product Architect & Business Developer at Layer 7. In this role, he explores opportunities related to IoT, M2M and Big Data for Layer 7. Holger has over 16 years of software development experience in Telecom and SOA appliances. Prior to joining Layer 7, he was part of the in-house incubator team at the office of the CTO of IBM WebSphere. He joined IBM through the DataPower acquisition.

January 31st, 2014

Would You Like a Library with That (API)?

If you are professionally inclined – like I am – to read all about the business of APIs, you may have gotten the impression that APIs are going to take over everything and bring world peace.

Okay, I’m exaggerating but it is sometimes hard to remember that an API is really just a means to an end. That (business) end can vary. It might mean reaching more or different customers using mobile apps. It might mean integrating with partners. It might also mean giving access to services or data that provide enough value to be monetized. But in the end, I’m not using an API for the sake of using an API; I’m using it because I want the services or data it serves with the least amount of friction, since I – as a developer – have a job to do.

Which brings me to my next point: When was the last time you looked at an API’s documentation and implemented it in a client app? If I need to integrate with an API in my Web app, I simply copy two lines of JavaScript from the API provider page and paste my API key into the <your_api_key_here> placeholder. Then I reload my Web app, which loads the JavaScript library, which allows me to use the API. Or if I need to integrate with an API in the backend, I look for a pre-built component like a gem (Ruby on Rails) or an npm module (Node.js) and add it to my loader file. Then I restart the backend, which loads the library, which allows me to use the API. Notice a pattern?
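
To make that backend pattern concrete, here is a minimal sketch in TypeScript. The acme-api package and its AcmeClient class are hypothetical stand-ins for whatever SDK the API provider actually ships:

```ts
// Hypothetical backend integration: the package name "acme-api", the
// AcmeClient class and its methods are illustrative, not a real SDK.
import { AcmeClient } from "acme-api"; // added via `npm install acme-api`

const client = new AcmeClient({ apiKey: process.env.ACME_API_KEY });

async function main() {
  // One method call replaces the request building, auth headers,
  // serialization and error handling I would otherwise code by hand
  // against the API documentation.
  const widgets = await client.widgets.list({ limit: 10 });
  console.log(widgets);
}

main().catch(console.error);
```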

So here I am, using APIs via libraries/SDKs. In fact, I am not even sure when I last implemented a client using API documentation. In terms of priority, I only care if the service offered through the API meets my (business) needs – and then I use its library. And if it doesn’t have one, I might just move right along and look for the next one. (Remember: I have a job to do!)

For most developers out there in the real world, does consuming an API mean consuming an SDK that implements it as a library? Should we talk about portable library design, efficient library loading (eager or lazy, async or sync) and so on? Which platforms do I need to support to maximize my developer reach? What are the tradeoffs between offering (and having to support) a number of libraries versus just the API? Do dynamically-loaded libraries solve the API versioning challenge?
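
Here is a small sketch of the eager-versus-lazy question, again using that hypothetical acme-api SDK; a dynamic import() is one way to defer the cost of loading a library until it is first needed:

```ts
// Sketch of the eager vs. lazy loading tradeoff ("acme-api" remains
// a hypothetical package).

// Eager: the SDK is parsed and initialized at startup; slower boot,
// but the first API call is fast.
import { AcmeClient } from "acme-api";
const eagerClient = new AcmeClient({ apiKey: process.env.ACME_API_KEY });

// Lazy/async: the SDK is only fetched on first use; startup stays
// light, at the cost of latency on the first call.
async function getClientLazily() {
  const { AcmeClient: LazyClient } = await import("acme-api");
  return new LazyClient({ apiKey: process.env.ACME_API_KEY });
}
```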

Reading about Evernote’s decision to use the Apache Thrift framework for its client API may help you formulate some potential answers to these questions.

In any case, none of this reduces the importance of providing a well-designed and documented API. Libraries and SDKs simply provide a convenient wrapper around an API, and since APIs are about removing friction, client libraries are just another few steps in the march towards frictionless integration. (I still remember fondly the ease with which I was able to create client stubs and OSGi bundles from WSDL during the Web service days.) Nevertheless, we at Layer 7 – as an API Management company – will certainly be looking into how we can provide more support for client libraries in the future.

The preference for using client libraries rather than developing API clients becomes even more pronounced if we look at the Internet of Things.

Almost all the IoT integration vendors I have talked to consider supporting a large number of client libraries for the myriad different IoT platforms to be one of their key differentiators in the market. Partly, this is driven by the need to optimize client stacks for resource-constrained platforms but it is also driven by the emergence of new binary and publish-subscribe protocols designed to better deal with IoT’s asynchronous communication patterns (see this blog post for a good overview).
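
To give a flavor of how different publish-subscribe feels from request/response HTTP, here is a minimal sketch using the open source mqtt package for Node.js; the broker address and topic names are invented for the example:

```ts
// Minimal publish-subscribe sketch with the Node.js `mqtt` package.
// Broker URL and topics are placeholders for illustration.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example.com");

client.on("connect", () => {
  // Subscribe to readings from every sensor; `+` is a single-level wildcard.
  client.subscribe("sensors/+/temperature");
  // Publish a reading of our own.
  client.publish("sensors/device42/temperature", "21.5");
});

// Messages arrive asynchronously, pushed by the broker; no polling.
client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});
```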

While there has been quite some activity around API definition languages lately, I’m not aware of any attempt to provide a formal description language for asynchronous APIs of this type. Maybe this is an area where we will see some new ideas and innovation in the coming months.

In case you’re interested, I’ll be talking more about these trends as part of Layer 7’s upcoming API Academy Summits in London and New York.

November 29th, 2013

Ending the IoT Protocol Wars

It’s been a while since my last blog post – not least because I have been traveling quite a bit to run Layer 7’s European API workshops together with my colleague Ronnie Mitra. The workshops (part of Layer 7’s outreach program via apiacademy.co) are vendor-neutral and focused on sharing API design and management best practices.

To be honest, I probably learn as much during these workshops as the participants do. It has certainly been striking to watch how our material evolves from workshop to workshop; we constantly add and tweak content based on what we learn. In particular, I’m struck by the number of changes my IoT section has been going through.

Here is what I have learned regarding IoT protocols: It’s a zoo out there, with lots of protocols trying to become the next HTTP. And some candidates deploy a formidable array of marketing, making it exceedingly hard to cut through the fog.

My current shortlist of main contenders is (in alphabetical order):

  • CoAP
  • MQTT
  • XMPP

I might add STOMP to that list, just for its simple brilliance. STOMP is a text-based messaging protocol that has recently been extended to allow for binary content. Additionally, I’ve recently started talking with some transportation companies and learning about their use of DDS, which might be another candidate for the shortlist.

In the corner of the reigning champion, we have JSON/HTTP. Not content to see this protocol pushed into early retirement, advocates have been developing some very interesting approaches that attempt to ensure the continuing relevance of HTTP for asynchronous small messages – WebSocket being the best-known. Hypercat, Simple Thing Protocol and EventedAPI represent just a small sample of the interesting approaches emerging to support async eventing and messaging with HTTP.
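
As a quick illustration, this is roughly what the WebSocket flavor of async eventing looks like from a client; the endpoint and message shapes here are made up:

```ts
// Sketch: receiving asynchronous events over HTTP infrastructure via
// WebSocket (standard browser API). Endpoint and payloads are hypothetical.
const ws = new WebSocket("wss://api.example.com/events");

ws.onopen = () => {
  // Tell the server which event stream we want.
  ws.send(JSON.stringify({ subscribe: "thermostat/42" }));
};

ws.onmessage = (event) => {
  // The server pushes small JSON messages as they occur.
  const msg = JSON.parse(event.data);
  console.log("event received:", msg);
};
```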

Where does this leave a developer trying to choose the right protocol for that awesome winged steampunk toaster? I don’t really have the answer but there are some documents trying to tease out the differences. Take a look at the MQTT vs. CoAP comparison from 1248.io or the DDS/AMQP/MQTT/JMS/REST comparison from DDS champion PrismTech.

Based on what I’ve learned so far, only XMPP and DDS have significant commercial deployments, while MQTT is being evaluated by almost every major vendor I have talked to. While MQTT’s use as the protocol powering Facebook Messenger is a good demonstration of its scalability, I don’t think this constitutes a proof point for mission-critical commercial deployments. If you know of commercial deployments of MQTT, I’d love to hear about them.

Each protocol has weaknesses: MQTT appears to be weak in security; DDS seems to be complex to scale and has version dependencies; XMPP is considered heavy-weight. But they all have strengths too, of course: DDS has the deployments in the field to prove its relevance; XMPP supports EXI and WebSocket for efficiency and has a proven track record; both DDS and XMPP are extremely mature and have built-in security. Given the industry interest in MQTT, I am sure that whatever security problems exist will be fixed in one of the next versions. The one puzzling piece is the absence of CoAP in commercial deployments. Again, if you know of one, please let me know.
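
In practice, securing MQTT today mostly means running it over TLS and supplying credentials at connect time. Here is a minimal sketch with the Node.js mqtt package; the broker address and credentials are placeholders:

```ts
// MQTT itself only defines a username/password field; transport
// security has to come from layering it over TLS (the mqtts:// scheme).
import mqtt from "mqtt";

const client = mqtt.connect("mqtts://broker.example.com:8883", {
  username: "device42",
  password: process.env.DEVICE_SECRET,
  rejectUnauthorized: true, // verify the broker's certificate chain
});
```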

Where do I stand on all of this? Having watched technologies rise and fall, I think it’s very normal at this stage to have multiple contenders trying to improve on HTTP. What I try to keep in mind, though, is that both bandwidth and computing power seem to be on an ever-increasing trajectory while becoming cheaper and cheaper. Reductions in power consumption and increases in battery capacity, mostly driven by mobile, further lower the bar for mainstream technology to power even small devices. I would not be surprised if, after the initial phase, we continue to see HTTP and JSON being dominant. As geeks, we sometimes get too excited about efficiency gains while losing sight of the fact that, for most products, technology simply needs to be good enough. But I won’t complain if I am proven wrong this time.

And don’t just take my word for any of this. To help you learn more, here are a couple of other articles reviewing IoT protocols:

October 16th, 2013

Intelligent APIs for Big Data & IoT

“Data is the new oil” is an oft-repeated phrase. But when was the last time you went out and bought a barrel of crude oil? The value to consumers is in the refined product: gasoline. With data, the refined product is information – the distilled and actionable essence of multiple sources of raw data. So, if “data is the new oil” then “information is the new gasoline”.

There’s a lot of data out there and IoT is going to increase it greatly. For large organizations, refining Big Data stores is a significant challenge. This is partly because data doesn’t start out big but gets collected from lots of relatively small sources. Also, data seldom arrives in the right format for sharing and monetization. Furthermore, responsibility for securing and managing data is not always in the same hands as responsibility for sharing data.

We have explored some of these issues in recent blog posts like What is DaaS? and How APIs Grease the Data Wheels. In tomorrow’s webinar, Intelligent APIs for Big Data & IoT, Matt McLarty and I will try to bring it all together and talk about how APIs are becoming the pipelines and tankers that move the gasoline from its source to the user.

October 1st, 2013

Cyber Security Awareness Month & the Internet of Vulnerable Things

Did you know that October 2013 is the 10th National Cyber Security Awareness Month in the US? While I usually emphasize the enormous potential of the Internet of Things (IoT), let’s use the occasion to look at the security risks of the Internet of really vulnerable things.

Over the last couple of months, a casual observer could have noticed a variety of security scares related to “connected things” – from hacked baby monitors to hacked cars. In August, my colleague Matthew McLarty wrote about the security vulnerabilities of the Tesla Model S. Regulators also started to take notice and felt compelled to act.

Given that the problems appear to be systemic, what can companies do to mitigate the risks for connected devices? Rather than looking for yet another technological solution, my advice would be to apply common sense. It’s an industry-wide problem, not because of a lack of technology but because security and privacy are afterthoughts in the product design process. To get a feeling for the sheer scale of the problem, I suggest taking a look at the search engine Shodan. Both SiliconANGLE and Forbes have recently run articles covering some of its findings.

Yet these problems did not start with IoT. For instance, Siemens was shipping industrial controllers with hardcoded passwords before the dawn of IoT – enabling the now infamous Stuxnet attack. Despite all the publicity, there are still vulnerabilities in industrial control systems, as noted in a Dark Reading article from the beginning of the year.

All the best practices and technologies needed to address these problems exist and can be applied today. But it is a people (designer, developer, consumer) problem and a (product design) process problem, not a technology problem. Designing fail-closed (rather than fail-open) systems, using meaningful authentication, authorization and encryption settings and so on – all of this can be done today with little or no additional effort.
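
To make “fail-closed” concrete, here is a minimal sketch of a default-deny API middleware in TypeScript; Express is real but validateToken is a hypothetical placeholder for an actual credential check:

```ts
// Sketch of a fail-closed default: every request is denied unless the
// credential check explicitly succeeds.
import express from "express";

// Hypothetical verifier; wire this up to your identity provider.
async function validateToken(token: string): Promise<boolean> {
  return false; // placeholder: nothing is valid until implemented
}

const app = express();

app.use(async (req, res, next) => {
  try {
    const token = req.get("Authorization");
    if (!token || !(await validateToken(token))) {
      return res.status(401).end(); // deny by default
    }
    next(); // only reached on an explicit pass
  } catch {
    res.status(401).end(); // errors also fail closed, never open
  }
});
```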

Essentially, our legal process has not caught up with technology. And it won’t for as long as the lack of security merely inconveniences us rather than threatening us with loss of property – or even life! Conversely, we are pretty good at applying security best practices in aviation because most serious problems with an aircraft in flight are inherently catastrophic. So, let’s hope that the recent news of hackers accessing airplane flight control systems acts as a wake-up call for the industry.

As API Management providers, we at Layer 7 are, more often than not, actively involved in shaping the API security policies and best practices of our customers. Since we believe APIs will form the glue that will hold IoT together, we are using our API Academy to disseminate API best practices in a vendor-neutral way. Most of what we have learned regarding scalability, resilience and security from the SOA days is still applicable in the API space and will be applicable in the IoT space. As the magnitude of interconnectedness grows, security remains paramount.

August 13th, 2013

What is DaaS?

We live in the age of Big Data but Big Data is not showing up to the party alone. Fast data and open data are also coming along for the ride. This is why we need an “as-a-service” approach to data sharing. In a recent article for Big Data Republic, I explored the concept of data-as-a-service (DaaS) and some of the operational challenges associated with providing access to Big Data.

The fact that these challenges are not just theoretical considerations was driven home to me by one of our customers, who told me that he simply didn’t have enough IT cycles to keep writing and rewriting all those queries and APIs his customers were asking for.

Similarly, another recent article on Big Data Republic referred to three powerful drivers for machine learning identified by Tibco CTO Matt Quinn – drivers that I believe are equally relevant to data APIs:

  • “A surge of data being liberated from places where it was previously hidden (aka big data’s volume challenge)
  • A need for automation that manages the complexity of Big Data in an environment where humans have no time to intervene (aka Big Data’s velocity challenge)
  • An absolute requirement to create adaptable, less fragile systems that can manage the combination of structured and unstructured data without having a human write complex code and rules with each change (aka Big Data’s variety challenge)”

The efficiency gains and resulting agility and potential for innovation created by data-centric APIs are enormous – not just with respect to open data but also in the ability to turn data into an active asset and monetize it. For an inspiring story, head over to Andorra via FastCompany.

Meanwhile, an interesting take on the way IoT is increasingly driving data democratization – and creating new governance challenges in the process – comes from Christopher J. Rezendes and W. David Stephenson in an article at the HBR blog network. Naturally, the best place to implement and enforce data governance is in the API that provides access to the data.
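
As a small illustration of what governance at the API layer can mean, here is a simple sketch of field-level redaction; the record shape and the entitlement flag are invented for the example:

```ts
// Sketch: enforcing a data-governance rule inside the API layer by
// redacting a governed field before data leaves. Record shape and
// entitlement check are hypothetical.
interface SensorRecord {
  deviceId: string;
  reading: number;
  ownerEmail: string; // personally identifiable, governed field
}

function redactForCaller(record: SensorRecord, callerMaySeePII: boolean) {
  if (callerMaySeePII) return record;
  // Strip the governed field for callers without the entitlement.
  const { ownerEmail, ...publicFields } = record;
  return publicFields;
}
```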

Secure API design and management is not rocket science. Our API Academy offers best practices and practical advice on everything from API design to API security to API lifecycle management (and yes, that includes versioning). And if you are curious about how Layer 7’s API Management Suite can help with your Big Data access challenge, download our Data Lens solution brief or contact me at hreinhardt@layer7.com.