July 26th, 2013

Next Tech Talk: CDN API Management Using Optimized API Distribution

Get ready for Layer 7’s next API Tech Talk, coming up on Wednesday July 31 at 9am PDT. This live, interactive event will feature Akamai’s Gary Ballabio alongside our very own Francois Lascelles, chatting about CDN API Management Using Optimized API Distribution.

Right now, APIs are playing a central role in content providers’ efforts to maximize customer engagement by leveraging emerging online channels via cloud-based content distribution networks (CDNs).

But CDNs and API publishing have raised new access management and SLA enforcement challenges for content providers. On Wednesday, Gary and Francois will explain how content providers can tackle these challenges via entitlement checks, access history and analytics.

Our presenters will also be taking your questions and comments throughout the Tech Talk. And if they answer your question during the live stream, we’ll send you one of our highly desirable, limited edition Tech Talk T-shirts.

Click here to get the full event details and a reminder in your calendar. On the day of the event, join us at:

You can ask questions throughout the stream by chatting or tweeting. Alternatively, just email your questions in advance so that Gary and Francois can give you some really in-depth answers on the day.

See you on Wednesday!

July 24th, 2013

IoT: The Weighting Game


This must have been a scary few moments. On April 23, the main Associated Press Twitter account tweeted about explosions at the White House and President Obama being hurt. Guess what happened next? The Dow went down by over 100 points within minutes of the tweet.

So why did this happen? Regardless of whether the trades were executed by algorithms or by humans, both treated all tweets from that AP feed as equal. They traded based on the content of a single tweet – and the resulting feedback loop caused the drop in the stock market.

Fast forward to IoT and imagine that each Twitter account is a sensor (for instance, a smart meter) and the tweets are the sensor readings. Further imagine that the stock market is the grid manager balancing electricity supply and demand. If we were to attach the same weight to each data point from each smart meter, a potential attack on the smart meters could easily be used to manipulate the electrical grid and – for instance – cause the local transformer to blow up or trigger a regional blackout via a feedback loop.

Yet strangely enough – when talking about the IoT – the trustworthiness of sensor data does not appear to be of concern. All data are created equal, or so the assumption seems to be. But data have an inherent quality or weight, inferred from the characteristics of the endpoint and how much it is trusted. Any algorithm using sensor data would need not only to take the data points into account as such, but also to weight each data point based on the sensor’s actual capabilities, its identity and the trust relationship established with it.
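To make the idea concrete, here is a minimal sketch of what trust-weighted sensor data could look like. The trust factors (hardware attestation, authentication strength, firmware status) and their weights are hypothetical and purely illustrative – they are not taken from any real product or standard.

```python
from dataclasses import dataclass


@dataclass
class Sensor:
    sensor_id: str
    attested: bool          # hardware-backed identity attestation
    strong_auth: bool       # e.g. mutual TLS rather than a shared password
    firmware_current: bool  # running a known-good firmware version


def trust_weight(sensor: Sensor) -> float:
    """Derive a 0..1 weight from the endpoint's characteristics (illustrative values)."""
    weight = 0.2  # baseline trust for any enrolled sensor
    if sensor.attested:
        weight += 0.4
    if sensor.strong_auth:
        weight += 0.25
    if sensor.firmware_current:
        weight += 0.15
    return min(weight, 1.0)


def weighted_average(readings: list[tuple[Sensor, float]]) -> float:
    """Weight each reading by the trust placed in the sensor that produced it."""
    total = sum(trust_weight(sensor) for sensor, _ in readings)
    if total == 0:
        raise ValueError("no usable readings")
    return sum(trust_weight(sensor) * value for sensor, value in readings) / total
```

A grid manager fed this way would automatically discount a flood of readings from poorly attested meters instead of treating every data point as gospel.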

I tried to capture this relationship in the picture below.

Endpoint Security in IoT

How can we account for the risk that not all data are created equal?

Credit card companies provide a good object lesson in the way they have embraced inherent insecurity. They decided to forgo stronger security at the endpoint (the credit card) in order to lower the bar for use and increase market adoption. But in order to limit the risk of fraudulent use, every credit card transaction is evaluated in the context of the most recent transactions.

A similar approach will be required for IoT. Instead of chasing impossible endpoint security, we should embrace the management of (data) risk in the decision-making process. An advanced, high-performing API Gateway like Layer 7’s can be used to perform data classification at the edge of the enterprise and attach labels to the data flowing through the Gateway and into the control processes.
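As a rough illustration of that idea – this is not the Layer 7 Gateway policy language, and the label names are made up for the example – the classification step could be as simple as attaching an assurance label and trust score to each message before it is forwarded to the control process:

```python
import json
import time


def classify(message: dict, trust: float) -> dict:
    """Attach a data-classification label derived from the endpoint's trust score."""
    if trust >= 0.8:
        label = "high-assurance"
    elif trust >= 0.5:
        label = "medium-assurance"
    else:
        label = "low-assurance"
    message["x-data-classification"] = label
    message["x-trust-score"] = trust
    message["x-classified-at"] = time.time()
    return message


# Example: a smart-meter reading labelled at the edge before it enters the control process.
reading = {"meter_id": "meter-42", "kwh": 3.7}
print(json.dumps(classify(reading, trust=0.55)))
```

Downstream consumers are then free to decide how much weight a “low-assurance” reading deserves – or whether to accept it at all.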

I’d be curious to learn if and how you would deal with the data risk. Do you assume that all data are created equal? Or does the above picture resonate with your experiences?

July 23rd, 2013

Interoperability, Not Integration

It’s a small semantic difference, really, but a difference I think is worth calling out. When working in large distributed systems, it’s better to aim for interoperability than integration. And here’s why…

Integration for a Single System
Part of the Merriam-Webster Online Dictionary’s definition of integration is particularly relevant here:

A common approach to working with large distributed systems – e.g. internal networked implementations that run at various locations within a single organization or implementations that rely on some Web-based service(s) – is to attempt to treat the entire operation as a single unit, a “whole system”.

Bad idea!

These “whole systems” can also be called “closed systems”. In other words, people work to create a fully-controlled single unit for which, even when elements are separated by space (location) and time (“We built that part three years ago!”), there is an expectation that things will work as if they are all physically local (on a single machine) and temporally local (there is no significant delay in the completion of requests). As you might expect, attempting this almost always goes badly – at least at any significant scale.

There are several reasons for attempting this approach. The most common is that treating everything as “your system” is mentally easy. Another reason this single-system view prevails is that most tooling acts this way. The legacy of edit and build tools is that all components and data are local and easily accessible. How else would we be able to do things like code completion and data model validation?

Anyway, the point here is that “integration” is an anti-pattern on the Web. It’s not a good idea to use it as your mental model when designing, implementing and deploying large-scale systems.

Interoperability for Working with Other Systems
As you might have guessed, I find Merriam-Webster’s definition for interoperability much more valuable:

The interoperability mindset takes a different approach. In this view, you want – whenever possible – to treat things as interchangeable; as things that can be swapped out or re-purposed along the way. Interestingly, Merriam-Webster notes that the first known use of this term was in 1977. So, the idea of interoperability is relatively new compared with “integration”, which was first used in 1620, according to Merriam-Webster.

An interoperability-focused approach leads to systems that do not need to “understand” each other, just ones that use interchangeable parts. Especially in widely-distributed systems, this interchangeability has a very high value. It’s easier to replace existing items in a system (e.g. changing data-storage vendors), re-use existing parts for other needs (e.g. applying the same editing component built for a blogging service to a new print publishing service) and even re-purpose parts when needed (e.g. using the file-based document caching system to provide caching for logged-in user sessions).
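To make the contrast concrete, here is a minimal sketch (with illustrative names only, not from any real product) of coding to a small contract instead of to a particular vendor, so that the storage backend stays interchangeable:

```python
from typing import Protocol


class DocumentStore(Protocol):
    """The small contract every backend must honor."""
    def put(self, key: str, document: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class FileStore:
    """File-based backend; it could later be re-purposed to cache user sessions."""
    def __init__(self, root: str) -> None:
        self.root = root

    def put(self, key: str, document: bytes) -> None:
        with open(f"{self.root}/{key}", "wb") as f:
            f.write(document)

    def get(self, key: str) -> bytes:
        with open(f"{self.root}/{key}", "rb") as f:
            return f.read()


def publish(store: DocumentStore, key: str, document: bytes) -> None:
    # The publisher neither knows nor cares which backend is in use.
    store.put(key, document)
```

Swapping data-storage vendors then means writing another class that honors the same small contract, not re-integrating the whole system.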

The primary challenge to thinking like an inter-operator instead of an integrator is that there are no easy tools for this kind of work. Pretty much all interoperability work is done by creative thinkers in the field (“We could just use our existing storage system for that.”) You usually need rather specific knowledge of what’s available on site and what the existing parts can do in order to execute on interoperability.

Despite the extra cost of interoperability, there are important benefits for distributed systems that must operate over a long period of time. That’s why so much of the Web relies on interoperability. The standards we use for DNS, HTTP, HTML etc. all assume that varying products and services are free to decide what they do and how they do it as long as they inter-operate with other products and services on the Web.

Treat the Network Space as a Bunch of Other Systems
If you take the approach of treating everything in your network space (e.g. your local intranet or any system that relies on at least one Web-based service) as a bunch of “other systems” you’ll be better off in the long term. You’ll stop trying to get everyone to work the same way (e.g. using the same storage model or object model or resource model) and will be free to start working with other teams on how you can share information successfully across systems, via interoperability.

Even better, large organizations can get a big value out of using the interoperability model for their implementations. In practice, this means fostering an internal ethos where it’s fine to be creative and solve problems in novel ways using whatever means are at your disposal as long as you make sure that you also support interoperability with the rest of the parts of the system. In other words, you have the freedom to build whatever is most effective locally as long as it does not threaten your interoperability with the other parts.

There are lots of other benefits to adopting interoperability as the long-term implementation goal but I’ll stop here for now and just say, to sum up:

  • As a general rule, treat your implementations as exercises in interoperability, not integration.

(Originally published on my personal blog)

July 19th, 2013

What I Learned in Helsinki: The Core Motivation of IoT & Other Reflections


Last month, I attended a two-day IoT-A workshop during IoT Week in Helsinki. The goal of the workshop was to showcase the various IoT research projects that are jointly funded by industry and the EU’s FP7 research program. The quality of the projects on display was amazing and I could not possibly do it justice in the space of a blog post. Still, here’s a partial list of what I saw:

BUTLER

  • Horizontal open platform for IoT
  • To learn the intent of a user requires a horizontal approach
  • This horizontal approach leads to context awareness

FI-WARE

iCore

  • Composed of Virtual Objects, Composite Virtual Objects and a Service Layer
  • User characteristics + situation awareness = intent recognition

OpenIoT

  • Linked sensor middleware
  • Data management instead of infrastructure management
  • Uses information interoperability and linked data to enable automated composition

ComVantage

  • Manufacturing automation
  • Uses XACML and extends it for linked data

CHOReOS

  • Probabilistic registration of things
  • Registration decisions are based on existing density and coverage requirements

To get a more complete picture, you can find all the presentations from the workshop here.

There were two key insights I took away from this workshop, both of which had to do with subtle similarities shared by all the projects.

First, sitting in and listening to the various presentations, I was struck by one particular similarity: at the core of each use case was the desire to make better-informed decisions. I’ve tried to capture what I call the core motivation of IoT in the picture below.

The identity of the user or thing, combined with temporal and/or spatial context based on real-world knowledge and data from the past, can allow us to make better-informed decisions for the future. I think this holds for both the smart coffeemaker and the smart city.

My other insight had to do with the surprisingly similar characteristics of the various presented IoT applications. I tried to capture these characteristics in the picture below.

At the heart of the applications lies data – lots of data. But Big Data has two siblings: Fast Data and Open Data. The applications are graph-structured based on the relationship of things to each other and to me. They are event-driven rather than transactional and they are compositional.

What do you think? What kind of similarities do you see between the various applications?

July 17th, 2013

Secure APIs: The Road to Business Growth

Businesses today are under intense pressure to reach new customers, collaborate with new partners and build new mobile apps that transform business processes.

If you’re a bank, you might want to sign customers up using a tablet on the street corner. If you’re servicing cell phone towers, you might want a technician’s tablet or cell phone to know his location, open the right support ticket and send him the proper documentation for that work site. If you’re selling to end customers, you want to give them exactly the information they need, when they need it, on whatever device they choose, when they’re ready to buy.

But when the business comes to IT for such applications, we often tell them “no” – or, at least, “not now.” One reason is that IT is short on developers. But IT’s hesitance also stems from appropriate concerns about issues like access control, the possibility that backend systems might crash under the load from mobile applications, or the cost of converting data for these new services and devices. As we learned from connecting our backend systems to the Web, adding a new platform can mean a profound change in how these systems are used – instead of checking a flight once when the travel agent makes a reservation, people now check flight information dozens of times as they search for the best flight on the Web or check their mobile phones to see whether Grandma’s flight has arrived yet.

IT can help meet these needs if we realize the business is not asking for a series of huge new standalone apps. What it’s asking for is the ability to experiment, to try a lot of new ideas quickly and at low enough risk and cost that even if some ideas fail, that’s still okay – as long as one or two succeed in a big way. As Linus Pauling put it, “If you want to have good ideas you must have many ideas. Most of them will be wrong and what you have to learn is which ones to throw away.”

Such experimentation is often impossible in-house and not just because of a lack of skills. The hand-coding process used in most organizations today forces them to build the same app multiple times: once for the browser, once for mobile, once for Google Glass or whatever the next platform is. That not only delays deployment, it also increases cost and risk so much that experimentation in the business is not possible.

But secure APIs can make that experimentation possible. Here’s how.

Secure APIs provide a single gateway for developers from smaller companies that are in your organization’s “ecosystem” to access and monetize the backend systems, databases and information that are your core assets. If you can support outside developers in creating great apps for you, you avoid grinding out that code yourself. That makes you much more agile and reduces your cost and delivery times. It also lets you tap outside developers if you need help in a new or emerging area, such as a Google Glass app or Big Data, or for a short-lived app like one that works with a Super Bowl promotion.

In addition, outside developers might see ways to monetize your internal systems that you cannot. They might come up with, say, a social banking app that builds brand loyalty by using a customer’s social group to encourage her to contribute to a retirement account. They might develop a branded pedometer app for a health plan to track a member’s exercise routine. This is no different than when Twitter or any other social media platform lets third-party developers connect to their information systems, delivering revenue in ways the business might never have imagined.

Organizations taking advantage of this market opportunity are building on a platform that allows them to abstract security and data transformation tasks into a technology layer purpose-built to enable this innovation. Think of this as a mobility Gateway that streamlines development and reduces risk by eliminating the need to write everything from security to access control, caching and load management for every application. If those functions are delivered from the Gateway, developers can focus on quick revisions of the front-end application to go after those potential big market wins.

At CA Technologies, we are now providing such a secure Gateway, following our acquisition of Layer 7. The Layer 7 technology provides a secure Gateway that sits in front of your backend systems, exposing them via simple and secure APIs. It provides everything from identity verification to caching through a single security and IT optimization layer, giving developers – and the business – the freedom to experiment and innovate.

The way to build out your APIs is to start slowly, using budget from individual projects while keeping the long-term architecture in mind. Don’t try selling it to the business in terms of APIs, caching and security layers. Instead, tell the business how you’re giving it the ability to rapidly and securely experiment with new business models at low cost. Talk about how you’re letting it roll out a new mobile application more quickly or giving an outside developer the tools to find a new route to market for you.

We’re seeing that customers who build APIs, not applications, are leveraging the creativity of a world of clever developers. What challenges and rewards have you found on your API journey?