April 17th, 2014

Next API Tech Talk: Linked APIs

The challenges faced by today’s software architects go far beyond the familiar. “Big Data” means more than managing petabytes of data – it requires dealing with data sets that span organizational boundaries. Likewise, the term “distributed system” no longer refers to just a multi-tier architecture or cloud deployment – it usually involves connecting heterogeneous systems across multiple organizations.

On Thursday April 24, I’ll discuss these challenges as part of Layer 7’s latest API Tech Talk. I’ll be using this opportunity to explore how architects can leverage “linked APIs” to handle Big Data sets and distributed systems that cross organizational, technological and cultural boundaries, breaking through data silos in order to better integrate information. Interested? Just add the Tech Talk to your calendar and go to api.co/L7live at 9am PDT (12pm EDT) next Thursday.

I’ll also be taking your questions on linked APIs, Big Data, distributed systems, open source and anything related, so please don’t hesitate to join in. You can submit your questions now by email or you can chat with me or tweet them at me on the day. This will be my first Tech Talk since joining the Layer 7 API Academy and I’m really, really looking forward to a lively discussion. See you on April 24!

February 27th, 2014

New API Academy Team Member: Irakli Nadareishvili

The API Academy team has a new member: Irakli Nadareishvili, who has joined CA Layer 7 as Director of API Strategy. Before joining CA, Irakli served as Director of Engineering for Digital Media at NPR, which is noted for its leadership in API-oriented platform design. He has also participated in the creation of the Public Media Platform, worked with whitehouse.gov and helped a number of major media companies develop publishing solutions using open source software.

I recently sat down with Irakli to discuss what he has in mind as he joins API Academy.

MM: You once told me that you believe the future of Big Data is “linked APIs”. That sounds intriguing. Tell me more about it.

IN: In most people’s minds, “Big Data” is synonymous with “very large data”. You may hear: “Google-large” or “Twitter-large” or “petabytes”. The Wikipedia definition of Big Data is slightly more elaborate:

“Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications”.

In my work, I see the “complex” part of the definition becoming more important than the size. We have gotten pretty good at taming the large sizes of data. Tooling for horizontal partitioning and parallel processing of large data sets is now abundant. Still, most Big Data sets are contained and processed in the isolation of single organizations. This is bound to change very soon. The end of siloed Big Data is near: I believe that the next phase of Big Data challenges will have to do with data sets that cross organizational boundaries.

APIs will play a major role in this. Web APIs represent the most effective available technology that allows data to cross organizational boundaries. APIs efficiently connect and link data at a distance.

MM: Can you give an example of what you mean by “data sets that cross organizational boundaries”? And what challenges do these pose?

IN: You see, a lot of people have the notion that the data they need to process can be stored in a database maintained by a single organization. This notion is increasingly inaccurate. More and more, organizations are having to deal with highly distributed data sets.

This can be very challenging. The infamous healthcare.gov is a good example of such a distributed system. The main technical challenge of implementing healthcare.gov’s backend was that it had to integrate with data in many existing systems.

The $500 million initial public fiasco of healthcare.gov is also a vivid indication of just how complex it is to build truly distributed systems. Practically the only successful implementation of such a large, distributed information system is the World Wide Web. There’s a lot we can learn from the architecture of the Web. It’s a battle-tested blueprint for building distributed systems at scale.

I believe the Big Data challenges of the future will be solved at the intersection of APIs with Web/hypermedia architecture, linked data and what we currently call Big Data tooling. I call this intersection “Linked APIs”, to differentiate it from the current, siloed state of most Web APIs.
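
To make the “linked API” idea a bit more concrete, here is a minimal client-side sketch in Python. Everything in it is hypothetical – the URLs, the “_links” field and the link relations are invented for illustration – but it shows the essential pattern: each response advertises links to related data sets owned by other organizations, and the client discovers and follows those links at run time instead of assuming all the data lives in one local database.

```python
import requests  # assumes the 'requests' package is installed

# Hypothetical entry point: a record held by one organization whose API
# embeds hypermedia links ("_links") pointing at data sets owned by others.
ENTRY_URL = "https://records.example-hospital.org/patients/123"

def fetch(url):
    """GET a JSON resource and return the parsed body."""
    response = requests.get(url, headers={"Accept": "application/json"})
    response.raise_for_status()
    return response.json()

record = fetch(ENTRY_URL)

# Follow whatever links the API advertises, rather than hard-coding sources.
for relation, link in record.get("_links", {}).items():
    related = fetch(link["href"])  # this hop may cross an organizational boundary
    print(relation, "->", link["href"])
    print("  summary:", related.get("summary", "n/a"))
```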

MM: What practical advice would you give to the developers of future Big Data APIs?

IN: I think the most important thing is that we need to stop falsely assuming all of the API data is local data. It is not. Despite the name, an API for a distributed system is really not a “programming interface” to local data/assets. Rather, it is a programmable data index. Think of APIs as a programmable search index for a distributed collection of data sets.

I don’t like to think of the term “API” as an abbreviation anymore. Maybe it was one a while ago but it has since evolved way past that. Much like IBM doesn’t think of itself as “International Business Machines” anymore, APIs aren’t merely “application programming interfaces”. Most of what IBM does these days isn’t even necessarily about “machines”. Likewise, most of what we need out of APIs isn’t about any single application or an interface to it.
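
Irakli’s “programmable search index” framing lends itself to a small sketch. The one below, written against Flask purely for brevity, is hypothetical from top to bottom: the registry contents, route and field names are invented. The point it illustrates is that the API matches a query against data sets it knows about but does not host, and returns pointers into other organizations’ APIs rather than copies of the data; a client would then dereference those href values, much as in the earlier traversal sketch.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical registry: data sets the API knows about but does not host.
# Each entry is a pointer (plus light metadata) to the owning organization.
DATASET_REGISTRY = [
    {"id": "census-2020", "topic": "population",
     "href": "https://data.example-gov.org/census/2020"},
    {"id": "claims-2013-q3", "topic": "healthcare",
     "href": "https://api.example-insurer.com/claims/2013-q3"},
]

@app.route("/search")
def search():
    """Behave like a programmable index: match the query, return pointers, not data."""
    topic = request.args.get("topic", "")
    matches = [d for d in DATASET_REGISTRY if topic in d["topic"]]
    return jsonify({"query": topic, "results": matches})

if __name__ == "__main__":
    app.run(port=8080)
```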

MM: Big Data represents one important challenge for computing today. What about IoT?

IN: The Internet of Things is already here, in small ways. The IoT we have today consists of a vast number of Web-connected devices, acting as sensors, sending myriad signals to the cloud. That, by the way, is what creates many Big Data challenges. The future is much more interesting, however. Once connected devices start engaging in peer-to-peer interactions, bypassing central authority, we will enter a significantly different realm. The most important challenge in that world, from my perspective, will be identity. Identity is always key in distributed systems but especially so in peer-to-peer networks.

MM: What excites you the most about your new role at Layer 7?

IN: Thank you for asking this question. I will start by telling you what terrifies me the most. The API Academy and Layer 7 teams represent a gathering of “scary” amounts of world-class brainpower and expertise in the API space. It is extremely humbling to be part of such a distinguished group.

Obviously, it also means that there is a lot of very fundamental thinking and innovation that happens here. Especially now that Layer 7 is part of CA Technologies, there’s really very little that we couldn’t accomplish if we put our minds to it. That feels extremely empowering. I really care about all things related to APIs and distributed systems and the role they can play in the future of technology. I am super excited about the possibilities that lie ahead of us.

November 5th, 2013

Thoughts on Trends in IoT & Mobile Security

Recently, I read an article about predicted growth in the Internet of Things (IoT). Extrapolating a previous estimation from Cisco, Morgan Stanley is predicting there will be 75 billion connected devices by 2020. This sort of math exercise is entertaining and has a real “wow” factor but the real question here is: What does this mean for consumers and enterprises?

In recent years, consumer electronics manufacturers have started to see the usefulness of building Internet connectivity into their appliances. This enables the post-sales delivery of service upgrades and enhanced features. It also allows mobile apps to control home appliances remotely. This is nothing radical per se; a decade ago, I observed a sauna in the Nokia Research Center’s lab being controlled by voice and WML. But this was still a simple one-off integration. As the number of device form factors increases, the complexity of integrating devices grows. The term “anytime, anywhere computing” is usually used to describe this scenario but it isn’t entirely adequate. As a consumer, I don’t only want device-independent access to a service – I want the various devices and appliances to work with each other so that smarter interactions can be achieved.

Today, we already see a plethora of connected devices with more-or-less crude connectivity and integration options. Smartphones can sync and connect with tablets, TVs and laptops. Mostly, these are very basic integrations, such as your various devices “knowing” about the last page you read in an eBook, regardless of which device you used. But the number and complexity of these integrations will increase greatly in the coming years.

The Coming Age of Connectivity
One of the main reasons the iPhone revolutionized mobile computing was Apple’s focus on user experience. Since then, mobile vendors have battled to see who could provide the best experience on the device. The next battle will be over cross-device experiences within the broader ecosystem, as users roam from device to device. And in that battle, the big players will keep adding their own proprietary components (software and hardware). The sheer size of these ecosystems will make the opportunity large enough to attract even more mindshare. If you make money, who cares about proprietary protocols and connectors?

But how does this relate to IoT, you may ask – isn’t this just a subset of IoT’s promise? The answer is “yes” but that is how this revolution will work: more like an evolution, with the consumer-driven use cases implemented first. Yes, there are also enterprise use cases and we can see many protocols and frameworks that claim to address those requirements. In the end though, I believe most of these platforms will struggle with developer uptake because most of the developer mindshare is found in the big mobile ecosystems. As with mobile, the successful approaches will be the platforms that can offer developers familiar tools and a roadmap to revenue.

It’s clear the big players in mobile, like Samsung and Apple, see a huge opportunity in connected devices. As we move on, we will see more devices get included in each of the mobile ecosystems’ spheres. Increased integration between mobile devices and cars is already in the works. Similarly, among the many notable events at last week’s Samsung DevCon (an excellent show, by the way), several SDKs were launched with the aim of solving specific consumer needs around media consumption in the home. But the impact of increasing connectivity will go beyond these relatively well-understood use cases to encompass home automation, smart grid, healthcare and much more.

Alternative Authentication Methods for the Connected World
In this multi-device, multi-service world, conventional username/password login methods will not be convenient. Advances in the biometric space (such as Nymi or Apple Touch ID) will be relevant here. I suspect that, just as we have seen a bring-your-own-device trend grow in enterprise mobile, we will see a bring-your-own-authentication paradigm develop. As a larger set of authentication methods develops in the consumer space, enterprise IT systems will need to support these methods and will often need to adopt a multi-layered approach.
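
As a rough illustration of what “bring your own authentication” combined with a multi-layered approach could look like, here is a small Python sketch. The class names, the toy credential checks and the Nymi-style token are all hypothetical; the idea is simply that each authentication method becomes a pluggable component behind a common interface, and a policy layer decides how many independent factors must succeed.

```python
from abc import ABC, abstractmethod

class Authenticator(ABC):
    """One pluggable authentication method (illustrative interface)."""
    @abstractmethod
    def verify(self, credentials: dict) -> bool: ...

class PasswordAuthenticator(Authenticator):
    def __init__(self, password_store: dict):
        self.password_store = password_store  # user -> password (toy store)

    def verify(self, credentials: dict) -> bool:
        user = credentials.get("user")
        return self.password_store.get(user) == credentials.get("password")

class BiometricAssertionAuthenticator(Authenticator):
    """Stands in for a device-supplied biometric assertion (e.g. a signed token)."""
    def __init__(self, trusted_tokens: set):
        self.trusted_tokens = trusted_tokens

    def verify(self, credentials: dict) -> bool:
        return credentials.get("biometric_token") in self.trusted_tokens

class LayeredPolicy:
    """Require a minimum number of independent factors, whatever their source."""
    def __init__(self, factors, required):
        self.factors, self.required = factors, required

    def authenticate(self, credentials: dict) -> bool:
        passed = sum(1 for factor in self.factors if factor.verify(credentials))
        return passed >= self.required

# Example: accept a login only when at least two factors succeed.
policy = LayeredPolicy(
    factors=[PasswordAuthenticator({"alice": "s3cret"}),
             BiometricAssertionAuthenticator({"nymi-abc123"})],
    required=2,
)
print(policy.authenticate({"user": "alice", "password": "s3cret",
                           "biometric_token": "nymi-abc123"}))  # True
```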

Ensuring Big Data Privacy in the Age of IoT
Another set of challenges will be created by the enormous amounts of data generated by IoT. Increasingly, connected devices are able to collect and transmit contextual data on users. This information can be highly useful for vendors and users alike. But what happens if data is used for purposes other than those first intended or agreed to? Who owns the raw data and the generated insights? And how does the rightful owner stay in control? Today, there is no general standard available, nor do the mobile ecosystems provide adequate privacy protection. Sometimes one gets the feeling that users don’t care but they will probably start caring if and when data leakage starts to hit their wallets.

Meanwhile, Layer 7 will continue to innovate and work on solutions that address the challenges created by IoT, multi-device authentication and Big Data. Oh, and by the way, I believe Morgan Stanley underestimated the number; I think it will be double that. You heard it here first…

October 31st, 2013

Security in the Frenetic Age

Happy Halloween, everyone!

There has been a lot of talk about data leaks and data privacy lately, not naming any names. The articles and blog entries on this topic are full of outrage and dropped jaws. I have to admit that the only shock I experience on this subject is at how shocked people are. As divisive as these issues are, fundamental questions remain. How much privacy should we expect? How many times a week are you prompted to accept a long block of terms and conditions in order to access an online service? How many times do you actually read them? Isn’t that the scary part?

The mobile revolution has brought us into the Frenetic Age. Hear two bars of a song you like? Buy it on iTunes. Order a tasty-looking burrito? Instagram it. Overcome by wit? Facebook, Twitter, Tumblr… Digitizing our social lives – and our lives in general – leaves a trail of data. Eric Schmidt claims that we now create as much information every two days as we did up until 2003. “If you aren’t paying for the product, you are the product” goes the current mantra. Should we accept this as easily as we accept those terms and conditions?

In this frenetic age, how can we protect our privacy? I believe data protection and access control will become increasingly vital topics for all of us. Being a responsible company that protects its consumers’ privacy will become a competitive advantage. Providing safe harbour for third-party data will do for companies in the next decade what collecting private data did for social networks in the last. At Layer 7, we feel that our Data Lens solution offers a good starting point for companies that want to expose their data, their partners’ data and their customers’ data in an acceptable way.

October 16th, 2013

Intelligent APIs for Big Data & IoT

“Data is the new oil” is an oft-repeated phrase. But when was the last time you went out and bought a barrel of crude oil? The value to consumers is in the refined product: gasoline. With data, the refined product is information – the distilled and actionable essence of multiple sources of raw data. So, if “data is the new oil” then “information is the new gasoline”.

There’s a lot of data out there and IoT is going to increase it greatly. For large organizations, refining Big Data stores is a significant challenge. This is partly because data doesn’t start out big but gets collected from lots of relatively small sources. Also, data seldom arrives in the right format for sharing and monetization. Furthermore, responsibility for securing and managing data is not always in the same hands as responsibility for sharing data.

We have explored some of these issues in recent blog posts like What is DaaS? and How APIs Grease the Data Wheels. In tomorrow’s webinar, Intelligent APIs for Big Data & IoT, Matt McLarty and I will try to bring it all together and talk about how APIs are becoming the pipelines and tankers that move the gasoline from its source to the user.