Matt McLarty

Matt McLarty is Vice President of Client Solutions at Layer 7 Technologies. Matt is focused on customer success, providing implementation best practices and architectural guidance to ensure clients receive the maximum benefit from Layer 7’s products. Matt brings over 15 years of technology leadership to Layer 7, with a particular focus on enterprise architecture, strategy and integration. Prior to joining Layer 7, Matt led the global IBM technical sales organization responsible for application integration software and solutions, notably helping to grow the SOA Gateway business substantially over a five-year period. Before joining IBM, Matt worked as a Senior Director and Enterprise Architect focused on SOA, electronic payments and identity management.

July 17th, 2014

API360 Summit – Washington, DC

Since the API Academy was founded two years ago, we have had the pleasure of helping numerous organizations and industry leaders succeed with their API programs. Through this experience, we have learned at least as much as we have taught – and we recognize that continuing this collaboration is vital to furthering the field of API strategy and design. In that time, we have also observed a growing recognition that a holistic approach to APIs is needed to achieve maximum benefit.

With all of this in mind, we are pleased to announce our API360 Summit series. These complimentary one-day summits will bring together industry leaders to examine APIs from every possible perspective: business and innovation; architecture and design; applications and trends. Most importantly, these events will provide attendees with up-to-date, actionable information they can start using as soon as they walk out the door at the end of the day.

Our first API360 Summit will take place on September 12 at the Newseum in Washington, DC. We will be featuring a range of speakers with first-hand experience of how APIs are impacting organizations across the public and private sectors. There will also be panel sessions examining pertinent topics like using APIs in open government and exposing APIs to external developers. And there will be plenty of opportunities for interaction and discussion.

For more information and free registration, please visit the API360 site.

February 27th, 2014

New API Academy Team Member: Irakli Nadareishvili

The API Academy team has a new member: Irakli Nadareishvili, who has joined CA Layer 7 as Director of API Strategy. Before joining CA, Irakli served as Director of Engineering for Digital Media at NPR, which is noted for its leadership in API-oriented platform design. He has also participated in the creation of the Public Media Platform, worked with whitehouse.gov and helped a number of major media companies develop publishing solutions using open source software.

I recently sat down with Irakli to discuss what he has in mind as he joins API Academy.

MM: You once told me that you believe the future of Big Data is “Linked APIs”. That sounds intriguing. Tell me more about it.

IN: In most people’s minds, “Big Data” is synonymous with “very large data”. You may hear: “Google-large” or “Twitter-large” or “petabytes”. The Wikipedia definition of Big Data is slightly more elaborate:

“Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications”.

In my work, I see the “complex” part of the definition becoming more important than the size. We have gotten pretty good at taming the large sizes of data. Tooling for horizontal partitioning and parallel processing of large data sets is now abundant. Still, most Big Data sets are contained and processed in the isolation of single organizations. This is bound to change very soon. The end of siloed Big Data is near: I believe that the next phase of Big Data challenges will have to do with data sets that cross organizational boundaries.

APIs will play a major role in this. Web APIs represent the most effective available technology that allows data to cross organizational boundaries. APIs efficiently connect and link data at a distance.

MM: Can you give an example of what you mean by “data sets that cross organizational boundaries”? And what challenges do these pose?

IN: You see, a lot of people have the notion that the data they need to process can be stored in a database maintained by a single organization. This notion is increasingly inaccurate. More and more, organizations are having to deal with highly distributed data sets.

This can be very challenging. The infamous healthcare.gov is a good example of such a distributed system. The main technical challenge of implementing healthcare.gov’s backend was that it had to integrate with data in many existing systems.

The $500 million initial public fiasco of healthcare.gov is also a vivid indication of just how complex it is to build truly distributed systems. Practically the only successful implementation of such a large, distributed information system is the World Wide Web. There’s a lot we can learn from the architecture of the Web. It’s a battle-tested blueprint for building distributed systems at scale.

I believe the Big Data challenges of the future will be solved at the intersection of APIs with Web/hypermedia architecture, linked data and what we currently call Big Data tooling. I call this intersection “Linked APIs”, to differentiate it from the current, siloed state of most Web APIs.

MM: What practical advice would you give to the developers of future Big Data APIs?

IN: I think the most important thing is that we need to stop assuming that all of an API’s data is local data. It is not. Despite the name, an API for a distributed system is not really a “programming interface” to local data and assets. Rather, it is a programmable data index. Think of an API as a programmable search index over a distributed collection of data sets.
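To make that idea concrete, here is a minimal sketch in Python (using the requests library) of treating an API as a programmable index over distributed data: the client queries an index endpoint, then follows the links it returns to wherever the underlying records actually live. The endpoint URL and link-relation names below are hypothetical placeholders, not a real service.

```python
# Minimal sketch: an API as a programmable index over distributed data.
# The URLs and link-relation names are hypothetical, for illustration only.
import requests

INDEX_URL = "https://api.example.org/datasets"  # hypothetical index API


def search(term):
    """Query the index, then follow hypermedia links to the records,
    which may be hosted by entirely different organizations."""
    index = requests.get(INDEX_URL, params={"q": term}).json()
    for entry in index.get("results", []):
        # Each result links to its authoritative source rather than
        # embedding the data itself.
        source_link = entry["_links"]["source"]["href"]
        yield requests.get(source_link).json()


if __name__ == "__main__":
    for record in search("healthcare providers"):
        print(record.get("title"))
```

The point of the sketch is that the client never assumes the data is local to the index; it only assumes the index will tell it where to look next.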

I don’t like to think of the term “API” as an abbreviation anymore. Maybe it was one a while ago, but it has since evolved well past that. Much like IBM doesn’t think of itself as “International Business Machines” anymore, APIs aren’t merely “application programming interfaces”. Most of what IBM does these days isn’t even necessarily about “machines”. Likewise, most of what we need out of APIs isn’t about any single application or an interface to it.

MM: Big Data represents one important challenge for computing today. What about IoT?

IN: The Internet of Things is already here, in small ways. The IoT we have today consists of a vast number of Web-connected devices, acting as sensors, sending myriad signals to the cloud. That, by the way, is what creates many Big Data challenges. The future is much more interesting, however. Once connected devices start engaging in peer-to-peer interactions, bypassing any central authority, we will enter a significantly different realm. The most important challenge in that world, from my perspective, will be identity. Identity is always key in distributed systems but especially so in peer-to-peer networks.

MM: What excites you the most about your new role at Layer 7?

IN: Thank you for asking this question. I will start by telling you what terrifies me the most. The API Academy and Layer 7 teams represent a gathering of “scary” amounts of world-class brainpower and expertise in the API space. It is extremely humbling to be part of such a distinguished group.

Obviously, it also means that there is a lot of very fundamental thinking and innovation that happens here. Especially now that Layer 7 is part of CA Technologies, there’s really very little that we couldn’t accomplish if we put our minds to it. That feels extremely empowering. I really care about all things related to APIs and distributed systems, and the role they can play in the future of technology. I am super excited about the possibilities that lie ahead of us.

October 31st, 2013

Security in the Frenetic Age

Happy Halloween, everyone!

There has been a lot of talk about data leaks and data privacy lately, not naming any names. The articles and blog entries on this topic are filled with outrage and jaw-dropped disbelief. I have to admit that the only shock I experience on this subject is at how shocked people are. As divisive as these issues are, fundamental questions remain. How much privacy should be expected? How many times a week are you prompted to accept a long block of terms and conditions in order to access online services? How many times do you read them? Isn’t that the scary part?

The mobile revolution has brought us into the Frenetic Age. Hear two bars of a song you like? Buy it on iTunes. Order a tasty-looking burrito? Instagram it. Overcome by wit? Facebook, Twitter, Tumblr… Digitizing our social lives — and our lives in general — leaves a trail of data. Eric Schmidt claims that we now create as much information every two days as we did up until 2003. “If you aren’t paying for the product, you are the product,” goes the current mantra. Should we accept this as easily as we accept those terms and conditions?

In this Frenetic Age, how can we protect our privacy? I believe data protection and access control will become increasingly vital topics for all of us. Being a responsible company that protects its consumers’ privacy will become a competitive advantage. Providing safe harbour for third-party data will offer companies opportunities in the next decade similar to those that collecting private data gave social networks in the last decade. At Layer 7, we feel that our Data Lens solution offers a good starting point for companies that want to expose their data, their partners’ data and their customers’ data in an acceptable way.

August 29th, 2013

Steering Safely into the Open Enterprise

I recently wrote an article for Wired discussing the importance of thinking about security at every stage of your application lifecycle. This is especially important as we enter the new era of open enterprise IT. The explosive growth of mobile computing has shifted the enterprise perimeter and traditional access control mechanisms are no longer sufficient. This is even more relevant when thinking about the Internet of Things (IoT) and its rapidly evolving ecosystem.

George Reese of Dell recently published an article discussing the Tesla Model S REST API. This API enables some remote control features on the car and is primarily used by Tesla’s smartphone apps. Great stuff, showing how mobile meets IoT meets API. The problem is that the article’s focus is entirely on the API’s potential security vulnerabilities. Where the Tesla developers should be lauded for driving this type of innovation, they are instead scolded for addressing security poorly.

I think this is a great example of where thinking about security all through the lifecycle would have saved the developers some embarrassment. Here are some things for them to think about with the next app or API:

  • Are there other clients besides smartphone apps that I want to access my API?
  • Are there other clients besides smartphone apps that I don’t want to access my API?
  • Are there proven standards or protocols I can use to provide access control?
  • Are there proven tools out there that can help me deliver the solution more quickly?
  • Is there a way for me to revoke a client’s access after it has been granted?

The Tesla team chose to take an unproven path with their authentication solution. “Security by obscurity” used to be a popular approach, but it doesn’t cut it in the open enterprise. In open computing, open and popular protocols like OAuth are the most secure mechanisms to use. That may seem counter-intuitive, but these protocols provide the richest set of implementation tools and the broadest coverage of use cases. This allows app developers to focus on their areas of expertise – like automotive innovation – and rely on security experts for protection.
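As an illustration, here is a minimal sketch in Python (using the requests library) of the kind of standards-based access control described above: an OAuth 2.0 client-credentials exchange followed by a bearer-token API call. The token endpoint, API URL and credentials are hypothetical placeholders, and a production client would also need to handle token expiry and refresh.

```python
# Minimal sketch of OAuth 2.0 client-credentials access to an API,
# in place of a home-grown token scheme. All URLs and credentials
# below are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"   # hypothetical
API_URL = "https://api.example.com/vehicles/1/state"  # hypothetical


def get_access_token(client_id, client_secret):
    """Exchange client credentials for a short-lived bearer token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]


def call_api(token):
    """Present the bearer token; the authorization server can validate
    or revoke it centrally, which is what a proven protocol buys you."""
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    token = get_access_token("my-client-id", "my-client-secret")
    print(call_api(token))
```

Because the token is issued and tracked by a central authorization server, access can be revoked after it has been granted – one of the checklist questions above that an obscurity-based scheme cannot answer.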

At Layer 7, our products and services help companies build the foundation for the open enterprise.  Our new Mobile Access Gateway release provides a variety of security capabilities, including smartphone access control and token revocation. Our API Academy helps clients design sustainable APIs that address all aspects of the API lifecycle, including the most practical and comprehensive security protections.

March 8th, 2013

Nation Building in the Age of APIs

I’ve been working with a number of companies lately on their API strategies. People seem to recognize that having an API is a modern-day necessity, but they’re not sure how to get started. Since APIs are viewed as technical innovations, responsibility for rolling them out is frequently handed to IT groups.

Clearly, there is business value to be attained by companies that utilize an API, and an accessible web API is a requirement for the modern corporation. For companies looking to launch an API, there is a temptation to focus on the technological aspects of implementation. Good API design, architecture, and infrastructure are vital to the success of a company’s API, but there are other areas to address first. I am currently reading the book “Why Nations Fail” and recently read “Thinking, Fast and Slow” by Daniel Kahneman. Although the former is a geopolitical study whereas the latter focuses on the human mind, both share an observation that is the foundation of their arguments: a great deal of economic study is flawed because it fails to account for human behavior and tendencies. I feel the same way about technology.

Every paradigm shift in technology has been driven by both innovation—the new technology itself—and application—how that technology can be used.  In other words, there is a machine side and a people side to every technology change.  The technologists responsible for implementing these changes often bias towards their comfort zone—the machine side—and overlook the people side.  This has led to frustration for companies who invest significantly in new technology only to miss the intended benefits of the change.  For APIs, the people side of the change is especially important.  In fact, the social nature of the API world means there are even more groups of people to consider.  Ultimately, the success of a company’s API will depend on the creation of a diverse community for that API—end users, partners, developers, and more—as well as the adoption of a business model that allows the API to contribute to the company’s bottom line.  Taking the community and the economics together, this means you will need to build a nation for your API.

Some of the biggest companies on the web have taken this approach with their APIs, and I recently explored some of their winning tactics in this VentureBeat article. Please have a read and let me know your thoughts, and perhaps share your own API lessons.