June 7th, 2013

IoT Tech Talk Follow-Up

Last week, I had the opportunity to answer questions about the Internet of Things (IoT) when I took part in Layer 7’s monthly API Tech Talk. We had a tremendous response, with lots of questions and a very active online discussion. You can find a replay of the Tech Talk here. I’d like to take this opportunity to answer a few of the questions we received during the webcast but didn’t have time to answer on the day.

How does Layer 7 help me manage a range of devices across IoT?
IoT is an opportunity for CA and Layer 7 to bring together identity, access and API Management. To paraphrase a comment on a recent Gigaom article: Everything with an identity will have an API and everything with an API will have an identity.

With so many “things” potentially accessing APIs, what are some strategies for securing these APIs across such a breadth of consumers?
Identify, authenticate and authorize using standards. APIs for IoT mean managing identity for many devices at Internet scale.

How will API discoverability work with the vast number of things, especially if we see REST as the primary communication style?
I reached out to my colleague Ronnie Mitra for this answer. Ronnie pointed out that, in the past, standards like UDDI and WSRR promised to provide service registries but that didn’t really work out. Nowadays, we see lots of independent human-oriented API registries and marketplaces that might have a better chance of surviving. There are even some runtime discovery solutions like Google’s discovery interface for APIs and the use of the HTTP OPTIONS method to learn about APIs. At the moment, lots of people are trying lots of things, unsure of where it will all end up. It would be interesting to dive deeper into why we need discoverability to power IoT and when that discoverability has to take place.
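
For example, runtime discovery via the HTTP OPTIONS method can be as simple as this little sketch in Python; the API URL below is a hypothetical placeholder, not a real service:

# A tiny sketch of runtime discovery via HTTP OPTIONS, using the
# Python requests library; the API URL is hypothetical.
import requests

response = requests.options("https://api.example.com/addresses/")
print(response.headers.get("Allow"))  # e.g. "GET, POST, OPTIONS"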

How can API security get easier when API demand grows exponentially? There’s a big disconnect.
It doesn’t get easier. Transport-level security is reasonably well understood but endpoint identity and trust will be challenging.

Where will the intelligence be in IoT? Will there be some form of on-site intelligence, so that core functionality continues even if the connection is lost? Or will all intelligence be cloud-based?
It depends on whether you design for centralized “hub and spoke” or decentralized “domains of concern” (a concept I owe to Michael Holdmann’s blog). The former correlates data and events within a single domain, whereas the latter communicates across domains. A “domains of concern” design talks to different domains for different purposes: to an apartment for home automation, to an apartment building for HVAC, to a city block for energy generation/consumption, to a city for the utility grid and so on. Emergencies or out-of-bound signals are handled like exceptions, bubbling up through the domains until intercepted. But most things will serve an inherent purpose and that purpose will not be affected by the absence of connectivity. So there will be intelligence within the core of each domain, as well as at the edges/intersections with other domains.

What is the best way to overcome fear of exposing data via APIs in an enterprise?
You need to identify a business opportunity. Unless you know what business impact you are trying to achieve and how you will measure it, you should not do it.

Does IoT require a strong network or big data or both?
Not a strong network but ubiquitous connectivity. Not big data but sharing/correlating data horizontally between distinct vertical silos.

What significance (benefits/drawbacks) do the various REST levels have with respect to the Internet of Things (connecting, monetizing etc.)?
I had never heard of levels of REST and had to look it up. It turns out the levels (from Leonard Richardson’s maturity model) are: resources, verbs and hypermedia. Hypermedia would allow you to build long-lived clients that could adapt to changes in API design. But it is actually the data or service behind the API that is monetizable, not the API itself. The API is just the means to an end.

How will IoT evolve? And more importantly how can enterprises solve the security and privacy issues that will arise as IoT evolves?
Culturally, the European regulators will try to put privacy regulations in place sooner rather than later, whereas the North American market will initially remain largely unregulated until some abuse prompts the regulator to step in. In Germany, the federal regulator tries to stay ahead of the market and recently published a security profile for smart meters. Personally, I would design M2M and IoT applications assuming that endpoint data is inherently unreliable and that I cannot necessarily trust the source. But that is very broad guidance and may or may not be applicable to a specific use case.

As we create API frameworks that interact with sensors and control objects in the IoT, what/who are the best organizations to follow to learn about new protocols we should be preparing to handle, such as CoAP etc.?
Here are some suggestions:

How close are we to having a unified platform for IoT application developers and who is likely to be the winner among the competing platforms?
Chances are there won’t be a winner at all. You have companies like Axeda, Exosite, Gemalto, Digi, Paraimpu, BugLabs, ThingWorx, SensiNode, deviceWISE and more. You have industry working groups like Eclipse M2M and various research efforts like the SPITFIRE project, Fraunhofer FOKUS, DFuse and many others. The Eclipse M2M framework is probably a good choice to start with.

Even assuming ubiquitous and common networking (e.g. IPv6 on the public Internet) – how will the IoT identify peers, hierarchy and relationships?  
I think there is a huge opportunity for identity companies like CA to figure this out. Take a look at EVRYTHNG as one of the few startups in that space. Meanwhile, the folks over at Paraimpu are trying to tackle this challenge by combining aspects of a social network with IoT.

June 7th, 2013

Hypermedia Workflow Questions

I fairly often get emails following up on the workshops, articles, webinars and online tutorials I take part in. I can’t always answer these questions directly and sometimes deal with them in blog posts or online articles. Following my recent API Tech Talk on hypermedia, I got some questions from Abiel Woldu about how to handle hypermedia support when the same backend operation is called from different workflows. Here’s part of Abiel’s email:

“Say you have an end point for validating address; call it /validateAddress. Now this endpoint is called from two work flows.

  1. When a user updates his account settings (changes a new address)
  2. When a user tries to buy a product and enters the shipment address

In both cases the /validateAddress should give different set of links and forms as part of the response of validation (next step affordances) because the flow is different. In this case what is the set of the next links and forms returned from the endpoint? Is it the union of the two workflows and the client knows how to get what it needs? Or does the client send information of which flow it is in and the server uses the information to figure out what response to give it?”

Decoupling Backend Processes from Public URIs
This kind of question comes up frequently. Essentially, there are a couple of assumptions here that are worth exploring. The first is the idea that a backend operation (e.g. “validateAddress()”) is exposed over HTTP as a single endpoint, no matter the calling context. This is not a requirement. In fact, it is advantageous to decouple public addresses (URIs) from private operations on the server. HTTP (whether using HTTP-CRUD, Hypermedia-REST or some other model) offers the advantage of using multiple public URIs to point to the same backend operation. For example, it is perfectly correct to publish both /validateExistingAddress and /validateNewAddress URIs, each of which points to the same “validateAddress()” operation on the server.
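
To make this concrete, here is a minimal sketch in Python using Flask. The framework choice, route names and validation rules are my own assumptions for illustration, not part of any prescribed design:

# A minimal sketch, assuming Python/Flask; route names and validation
# rules are hypothetical illustrations.
from flask import Flask, request, jsonify

app = Flask(__name__)

def validate_address(address):
    # The single private backend operation both public URIs delegate to.
    required = ("street", "city", "state", "zip")
    missing = [field for field in required if not address.get(field)]
    return {"valid": not missing, "missing": missing}

@app.route("/validateNewAddress", methods=["POST"])
def validate_new_address():
    # Called from the "enter a shipment address" workflow; free to add
    # checkout-specific links and forms to the response.
    return jsonify(validate_address(request.get_json()))

@app.route("/validateExistingAddress", methods=["POST"])
def validate_existing_address():
    # Called from the "update account settings" workflow, with its own
    # workflow-specific affordances in the response.
    return jsonify(validate_address(request.get_json()))

Because each workflow gets its own public URI, each handler can return the links and forms appropriate to its own context, which is one direct answer to the original question.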

Not Everything Needs a URI
Just because the backend server has an operation such as “validateAddress()” does not mean there has to be a URI associated with that operation. For example, the “user updates his account settings” workflow need not have a direct URI call to “validateAddress()”. Instead, there could be an account settings resource (/account-settings/) that supports the HTTP.PUT method and accepts a body containing (among other things) a modified address. Executing this client-side operation (PUT /account-settings/) passes the data to the server; the server then calls the “validateAddress()” operation itself, along with any other processing, and reports the results to the client.

The same can be done in the case of “user tries to buy a product and enters the shipment address”. This address validation could be a small part of the server-side operation and processing of an HTTP.POST to a /check-out/ resource.
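
Continuing the hypothetical Flask sketch from above, the account-settings case might look like this, with no dedicated validation URI at all:

# Continuing the hypothetical sketch: validation happens inside a
# coarser-grained resource update rather than behind its own URI.
@app.route("/account-settings/", methods=["PUT"])
def update_account_settings():
    settings = request.get_json()
    result = validate_address(settings.get("address", {}))
    if not result["valid"]:
        return jsonify({"error": "invalid address", "details": result}), 400
    # ... persist the remaining settings here ...
    return jsonify({"status": "updated"})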

Mapping Actions to URI & Method
In the HTTP-CRUD model, the focus is on using URIs to identify entities and/or operations and using the protocol methods to perform actions. For example, an /addresses/ resource that supports adding (POST), modifying (PUT), removing (DELETE) and retrieving (GET) addresses associated with a context (logged in user, check-out processing etc.) In this case, POSTing or PUTing a resource body to the server allows the server to call the “validateAddress()” operation (among other things) and report results to the client.
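
As a sketch, that /addresses/ resource might map to routes like these (again continuing the hypothetical Flask example):

# A hypothetical HTTP-CRUD mapping for the /addresses/ resource.
@app.route("/addresses/", methods=["GET", "POST"])
def addresses_collection():
    if request.method == "POST":               # add a new address
        result = validate_address(request.get_json())
        return jsonify(result), (201 if result["valid"] else 400)
    return jsonify({"addresses": []})          # retrieve the collection

@app.route("/addresses/<address_id>", methods=["PUT", "DELETE"])
def address_item(address_id):
    if request.method == "PUT":                # modify an existing address
        return jsonify(validate_address(request.get_json()))
    return "", 204                             # remove an address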

Mapping Actions to Hypermedia Controls
In the hypermedia model, actions are described using a hypermedia control such as a link or form. The URI is not important in this model. Instead, the control has an identifier (e.g. “validate”), indicates a protocol action (“POST”) and lists state data to include in the payload.

In Siren it might look like this:

"actions": [
 {
     "name": "validate",
     "title": "Validate an Address",
     "method": "POST",
     "href": "...",
     "type": "application/x-www-form-urlencoded",
     "fields": [
           { "name" : "Street", "type" : "text", "value" : "123 Main Street" },
           { "name" : "City",   "type" : "text", "value" : "Byteville"},
           { "name" : "State",  "type" : "text", "value" : "MD" },
           { "name" : "ZIP",    "type" : "text", "value" : "12345"}
     ]
     }
 ]

Note that I didn’t bother to enter a value for the href in this example. It could be any valid URL; I just left it out.

Tracking Workflow Progress Within Messages
Here’s another question from Abiel Woldu’s email:

“The concept of which work flow the client is going through – is it code that should reside in the API code itself or it’s something that sits outside in some other gateway or something?”

When implementing processes over HTTP, it’s wise not to rely on stateful multi-request chains. In other words, don’t expect either the client or server to keep track of where some request belongs in a workflow. Instead, include that information in the request and response bodies themselves. This pattern of including all the important context information with each request and response not only assures that the request can be handled independently (e.g. in a load-balanced cluster), it also helps clients and servers to do work within varying time-spans (e.g. a client can cache the last request to disk and pick things up a day later). In the REST model, Fielding described this as making messages “self-descriptive”.

For example, there might be a use case that prompts human users to provide quite a lot of information (across various UI tabs) before finally submitting this completed set of work to the server for final validation and processing. One way to support this over HTTP is to allow clients to store “work-in-progress” (WIP) records on the server. As each “tab” (or other UI affordance) is completed, the client app is free to execute a POST or PUT operation with the payload to a URI supplied by the server. The stored data would include a value that indicates how far along in the workflow the user has progressed. This same client app could also recall stored WIP records, inspect the workflow indicator and prompt the user to pick up where she left off. Once all the required elements were supplied, the work could be forwarded for final validation and processing.
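
As a sketch, such a WIP record might carry its own workflow indicator along these lines (the field names and steps are hypothetical):

# A hypothetical WIP record: the payload itself records workflow
# progress, so neither client nor server needs per-session state.
wip_record = {
    "id": "wip-1234",
    "workflow": "new-account-application",
    "completed_steps": ["contact-info", "shipping-address"],
    "next_step": "payment-details",  # where the user left off
    "data": {
        "contact-info": {"name": "Mary Byte"},
        "shipping-address": {"street": "123 Main Street", "city": "Byteville"},
    },
}

The client PUTs an updated record to a server-supplied URI after each completed tab and can later GET it back, inspect next_step and prompt the user to resume.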

Dynamic Workflow via Hypermedia
Finally, in some cases, the series of steps in a workflow might vary greatly at runtime. For example, a service might support a multi-tenant model where each instance of “supply all the details for this work” has different steps or the same steps appear in differing order. The “next step” need not be memorized by the client code. Instead, hypermedia servers can inspect the current server-side configuration, check the current progress by the user and then supply the correct “next step” for this particular instance.

In this way, the client app can support a wide range of workflow details without needing custom code ahead of time (or even downloaded code-on-demand). Instead, the client app only needs to be able to recognize the “next step” link and navigate to that resource.
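
A minimal client loop for this pattern might look like the following sketch, assuming a hypothetical server that includes a “next-step” link relation in each response:

# A minimal sketch of a hypermedia client that only knows how to follow
# a "next-step" link; step order and URIs come from the server at runtime.
import requests

def run_workflow(start_url, get_user_input):
    doc = requests.get(start_url).json()
    while True:
        step = next((link for link in doc.get("links", [])
                     if "next-step" in link.get("rel", [])), None)
        if step is None:
            break                      # no "next step" link: workflow done
        payload = get_user_input(doc)  # gather this step's data from the user
        doc = requests.post(step["href"], json=payload).json()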

In Summary
In general, when using HTTP:

  1. There is no rule that you must expose internal methods as public URIs
  2. You may use more than one URI for the same backend operation
  3. In the HTTP-CRUD model, you usually map operations by linking URIs and methods
  4. In the hypermedia model, you usually map operations by linking controls and state variables
  5. It is best to use “self-descriptive” messages to track workflow progress statelessly
  6. The hypermedia model supports dynamic workflow progress using the “next step” link pattern

Thanks to Abiel for his questions and his generous permission for me to use his email and name in this blog post. If you’ve got a question that I haven’t answered online before, feel free to ping me via twitter (@mamund) and fire away.

May 27th, 2013

The Nuts & Bolts of the Internet of Things

Category IoT, M2M, Tech Talks

A few days ago, I talked with Brian Proffitt of ReadWrite about the Internet of Things (IoT) and I’d like to take this opportunity to share some of his questions.

One of Brian’s first questions was about the difference between M2M and IoT. The best answer I could give him was actually one I had found through an M2M group on LinkedIn: “I see M2M platforms as mainly enabling vertical integration, as they have historically, of a single capability; where I see IoT as more about horizontal integration of multiple capabilities and resources into a larger system. M2M is about communication, IoT is about integration and interoperability.”

So, whereas M2M feeds data into existing vertical silos, IoT is layered on top horizontally, correlating and integrating data from different silos. A good illustration of this vertical-versus-horizontal distinction was provided in a recent More with Mobile article. The realization that unlocking the commercial potential of IoT first and foremost requires a new model of data sharing is what inspired us to create the Layer 7 Data Lens Solution.

Another question that Brian posed was about the protocols and standards underpinning the M2M/IoT ecosystem. Here is my short list of key protocols (in no particular order):

I’d certainly be interested to hear if you have any additions to the list. You’ll find background information about IoT protocols on Telit’s M2M blog and Michael Holdmann’s blog. Also, Michael Koster published a very interesting blog post about adding event-driven processing to REST APIs, exploring how to reconcile IoT’s need for event-driven patterns with a RESTful API approach.

I’ll be discussing IoT in more detail myself when I take part in Layer 7’s latest API Tech Talk, on Wednesday May 29 at 12pm EDT/9am PDT. If I answer your IoT-related question live during the Tech Talk, Layer 7 will send you a free T-shirt. See you on Wednesday!

May 23rd, 2013

Join Our Live Internet of Things (IoT) Discussion – Win a T-Shirt

Category Events, IoT, M2M, Tech Talks

We’ll be discussing the Internet of Things (IoT) during our latest API Tech Talk next Wednesday, May 29 at 9am PDT. Our special guest – Layer 7 Product Architect and IoT expert Holger Reinhardt – will be taking your questions live throughout the stream. And we’ll be sending every single person who gets an IoT-related question answered by Holger one of our nifty new IoT-shirts, for free! You can ask questions through the Livestream chat, using the Twitter hashtag #layer7live or by emailing techtalk@layer7.com.

The Internet of Things is a simple concept: objects being connected to the Internet. What’s not so simple is managing the enormous, almost sublime amount of data these connected “things” (vehicles, appliances…) generate. There’s also the question of how you give people within your organization secure-but-seamless access to specific subsets of data they can actually make use of.  Well, our man Holger knows how it’s done, so start getting your questions together and join our live Q&A on May 29.

Click here to get the full event details and a reminder in your calendar. On the day of the event, join us at:

And don’t forget, you can ask questions throughout the stream by chatting or tweeting. Alternatively, you can email your questions in advance and Holger will give you an in-depth answer on the day. IoT is a pretty hot topic right now, so this is bound to be a lively discussion. See you next Wednesday!

March 22nd, 2013

Enterprise Mobility & BYOD – Live Interactive Q&A

Calling all Enterprise Architects, Application Architects and Senior Developers! For our next API Tech Talk, we’ll be discussing Enterprise Mobility & BYOD live on March 26 at 9am PST. My special guests will be Layer 7 VP of Client Services Matt McLarty and Product Manager for Mobile Leif Bildoy.

The BYOD movement seems to be changing the hardware landscape permanently and it’s showing no signs of slowing down. Naturally, this presents both opportunities and challenges. Security managers within the enterprise have less control than ever. “Anywhere access” has blurred the lines of what used to be called the corporate network perimeter.

So what are CIOs and CTOs specifically worried about with BYOD? Well, for one, mobile devices containing sensitive data can easily go missing and employers often cannot even assess the impact of data security breaches from compromised devices. But locking down employees’ personal devices is generally not an option.

So how can enterprises re-assert control over their data assets while still allowing employees to use their own smartphones as they choose? We’ll be discussing this and other questions during our live, interactive Q&A. So, be sure to clear your calendar and join in the discussion on March 26 at 9am PST.

Here’s How to Join the Discussion
Make sure you click Add to Calendar to get the event details and a reminder in your calendar. Then, on the day of the event, click here to join:

To ask questions, you can: