October 1st, 2013

Cyber Security Awareness Month & the Internet of Vulnerable Things

Did you know that October 2013 is the 10th National Cyber Security Awareness Month in the US? While I usually emphasize the enormous potential of the Internet of Things (IoT), let’s use the occasion to look at the security risks of the Internet of really vulnerable things.

Over the last couple of months, a casual observer could have noticed a variety of security scares related to “connected things” – from hacked baby monitors to hacked cars. In August, my colleague Matthew McLarty wrote about the security vulnerabilities of the Tesla Model S. Regulators also started to take notice and felt compelled to act.

Given that the problems appear to be systemic, what can companies do to mitigate the risks for connected devices? Rather than looking for yet another technological solution, my advice would be to apply common sense. It’s an industry-wide problem, not because of a lack of technology but because security and privacy are afterthoughts in the product design process. To get a feeling for the sheer scale of the problem, I suggest taking a look at the search engine Shodan. Both SiliconANGLE and Forbes have recently run articles covering some of its findings.

Yet these problems did not start with IoT. For instance, Siemens was shipping industrial controllers with hardcoded passwords before the dawn of IoT – enabling the now infamous Stuxnet attack. Despite all the publicity, there are still vulnerabilities in industrial control systems, as noted in a Dark Reading article from the beginning of the year.

All the best practices and technologies needed to address these problems exist and can be applied today. But it is a people (designer, developer, consumer) problem and a (product design) process problem, not a technology problem. Designing fail-closed (rather than fail-open) systems, using meaningful authentication, authorization and encryption settings and so on – all of this can be done today with little or no additional effort.
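To make the fail-closed idea concrete, here is a toy sketch (function names invented, not from any particular product): when the authorization check itself errors out, a fail-closed design denies by default, while a fail-open design lets the request through.

```python
# Toy illustration of fail-closed vs. fail-open access checks.
def check_access_fail_closed(check):
    try:
        return bool(check())
    except Exception:
        return False   # error => deny (fail-closed)

def check_access_fail_open(check):
    try:
        return bool(check())
    except Exception:
        return True    # error => allow (fail-open, dangerous)

def broken_check():
    # Simulates an authorization backend that is down or misconfigured
    raise RuntimeError("auth server unreachable")

assert check_access_fail_closed(broken_check) is False
assert check_access_fail_open(broken_check) is True
```

The asymmetry is the whole point: in the failure case, one design quietly becomes an open door.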

Essentially, our legal process has not caught up with technology. And it won’t for as long as the lack of security merely inconveniences us rather than threatening us with loss of property – or even life! Conversely, we are pretty good at applying security best practices in aviation because most serious problems with an aircraft in flight are inherently catastrophic. So, let’s hope that the recent news of hackers accessing airplane flight control systems acts as a wake-up call for the industry.

As API Management providers, we at Layer 7 are, more often than not, actively involved in shaping the API security policies and best practices of our customers. Since we believe APIs will form the glue that will hold IoT together, we are using our API Academy to disseminate API best practices in a vendor-neutral way. Most of what we have learned regarding scalability, resilience and security from the SOA days is still applicable in the API space and will be applicable in the IoT space. As the magnitude of interconnectedness grows, security remains paramount.

September 26th, 2013

Common OAuth Security Mistakes & Threat Mitigations

I just found out we had record attendance for Wednesday’s API Tech Talk. Clearly, there’s an appetite for the topic of OAuth risk mitigation.

With our digital lives scattered across so many services, there is great value in technology that lets us control how these service providers interact on our behalf. For providers, making sure this happens in a secure way is critical. Recent hacks associated with improperly-secured OAuth implementations show that OAuth-related security risks need to be taken seriously.

When in doubt, take a second look at the security considerations section of the spec. There is also useful information in RFC 6819 – OAuth 2.0 Threat Model and Security Considerations.

The Obvious Stuff
Let’s get a few obvious things out of the way:

  1. Use SSL (HTTPS)
  2. Shared secrets are confidential (if you can’t hide it, it doesn’t count as a secret)
  3. Sanitize all inputs
  4. Limit session lifespan
  5. Limit scope associated with sessions

None of these are specific to OAuth. They apply to just about any scheme involving sessions and secrets – for example, form login and cookie-based sessions in Web applications.
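The last two items – limited lifespan and limited scope – can be sketched in a few lines. This is an illustrative toy, not any particular implementation; the field names and lifetimes are invented.

```python
import secrets
import time

# Hypothetical illustration of items 4 and 5: every token carries an
# expiry timestamp and an explicit scope, and both are checked on use.
def issue_token(scope, lifetime_seconds=3600):
    return {
        "value": secrets.token_urlsafe(32),   # unguessable random value
        "scope": set(scope),
        "expires_at": time.time() + lifetime_seconds,
    }

def token_allows(token, required_scope, now=None):
    now = time.time() if now is None else now
    if now >= token["expires_at"]:            # item 4: limited lifespan
        return False
    return required_scope in token["scope"]   # item 5: limited scope

token = issue_token(["read:photos"], lifetime_seconds=600)
assert token_allows(token, "read:photos")
assert not token_allows(token, "write:photos")   # scope was never granted
assert not token_allows(token, "read:photos", now=time.time() + 601)  # expired
```

A stolen token constrained this way buys an attacker far less than an unlimited one.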

OAuth’s Main Attack Vector
Some of the grant types defined by the OAuth protocol involve the end user being redirected from an application to a service provider’s authorization server, where the user is authenticated and expresses consent for the application to call the service provider’s API on the user’s behalf. Once this is done, the user is redirected back to the client application at a callback address provided by the client application at the beginning of the handshake. In the implicit grant type, the redirection back to the application includes the resulting access token issued by the OAuth provider.
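As a sketch of that final leg of the implicit grant (the parameter names follow the OAuth 2.0 spec; the helper function is invented), the access token is appended to the callback address as a URI fragment – which is exactly why that address must be trustworthy:

```python
from urllib.parse import urlencode

# Hypothetical sketch of the final redirect in an implicit grant: the
# access token travels back in a URI fragment attached to the client's
# callback address, so whoever controls that address receives the token.
def build_implicit_redirect(redirect_uri, access_token, state):
    fragment = urlencode({
        "access_token": access_token,
        "token_type": "Bearer",
        "expires_in": 3600,
        "state": state,
    })
    return f"{redirect_uri}#{fragment}"

url = build_implicit_redirect("https://app.example.com/cb", "tok123", "xyz")
assert url.startswith("https://app.example.com/cb#")
assert "access_token=tok123" in url
```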

OAuth’s main attack vector involves a malicious application pretending to be a legitimate application. When such an attacker attaches its own address as the callback for the authorization server, the user is redirected back to the malicious application instead of the legitimate one. As a result, the malicious application is now in possession of the token that was intended for a legitimate application. This attacking application can now call the API on behalf of the user and wreak havoc.

OAuth 101: Callback Address Validation
The most obvious defense against this type of attack is for the service provider to require that legitimate client applications register their callback addresses. This registration step is essential, as it forms the basis for a user being able to assess which application it is granting permission to act on its behalf. At runtime, the OAuth authorization server compares these registered values against the callback address provided at the beginning of the handshake (the redirect_uri parameter). Under no circumstances should an OAuth authorization server ever redirect a user (along with an access token) to an unregistered callback address. The enforcement of these values is a fundamental precaution that should be ingrained in any OAuth implementation. Any loophole exploiting a failure to implement such validation is simply inexcusable.

redirect_uri.startsWith(registered_value) => Not good enough!
Some application developers append client-side state at the end of runtime redirection addresses. To accommodate this, an OAuth provider may be tempted to merely validate that a runtime redirection address starts with the registered value. This is not good enough. An attacker may exploit this by adding a suffix to a redirection address – for example, to point to another domain name. Strict redirection URI matching trumps anything else, always. See http://tools.ietf.org/html/rfc6819#section-5.2.3.5.
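The point can be made concrete in a few lines (the registered value is invented): a prefix check happily accepts an attacker-controlled domain that merely begins with the registered string, while strict matching rejects it.

```python
# Sketch of the validation discussed above. A startswith() check lets
# "https://app.example.com.evil.example/steal" through, because that
# string does begin with the registered value even though its host is
# an entirely different (attacker-controlled) domain.
REGISTERED_REDIRECT_URIS = {
    "https://app.example.com",
}

def redirect_uri_is_valid_prefix(redirect_uri):
    # The tempting-but-broken approach
    return any(redirect_uri.startswith(r) for r in REGISTERED_REDIRECT_URIS)

def redirect_uri_is_valid_exact(redirect_uri):
    # Strict comparison against the registered values
    return redirect_uri in REGISTERED_REDIRECT_URIS

attack = "https://app.example.com.evil.example/steal"
assert redirect_uri_is_valid_prefix(attack)      # broken check accepts it
assert not redirect_uri_is_valid_exact(attack)   # strict check rejects it
```

If client-side state is really needed, the OAuth state parameter exists for exactly that purpose – it does not require loosening redirect_uri validation.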

Dealing with Public (Not Confidential) Clients
If you are using the authorization code grant type instead of implicit, a phishing attack yields an authorization code rather than the actual access token. Although this is technically more secure, the authorization code is still information that could be combined with another vulnerability and exploited – specifically, a vulnerability caused by improperly securing the shared secret needed to complete the code handshake in the first place.

The difference between the implicit and authorization code grant types is that one deals with public clients and the other with confidential ones. Some may be tempted to rely on the authorization code grant rather than implicit in order to add security to their handshakes. If you expose APIs that are meant to be consumed by public clients (such as a mobile app or a JavaScript-based invocation), forcing the application developer to use a shared secret will only lead to those secrets being compromised, because they cannot be effectively kept confidential on a public platform. It is better to be prepared to deal with public clients and provide handshake patterns that make them secure than to obfuscate secrets into public apps and cross your fingers that they don’t end up being reverse-engineered.

Remembering Past Consent Increases Risk
Imagine a handshake where a user is redirected to an authorization server (e.g. implicit grant). Imagine this handshake happening for the second or third time. Because the user has an existing session with the service provider, with which the authorization server is associated (via a cookie), the authentication step is not required and is skipped. Some authorization server implementations also choose to “remember” the initial expression of consent and will not prompt the user to express consent again – all in the name of better user experience. The result is that the user is immediately redirected back to the client application without interaction. This typically happens quickly and the user is not even aware that a handshake has just happened.

An “invisible” handshake of this kind may lead to improved user experience in some situations but it also increases the effectiveness of a phishing attack. If the authorization server instead prompts the user to express consent again, the user becomes aware that a handshake is at play. Because the user does not expect this action, the “pause” provides an opportunity to question what triggered the prompt in the first place and helps the user recognize that something “phishy” is in progress.

Although bypassing the authentication step provides an improvement in user experience, bypassing consent and allowing redirection handshakes without displaying anything that allows a user to abort the handshake is dangerous and the resulting UX gain is minimal (just skipping an “OK” button).
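The resulting authorization-server decision can be sketched in a few lines (illustrative only): re-authentication may be skipped when a valid session exists, but the consent prompt never is.

```python
# Hedged sketch of the trade-off above: an existing session lets us skip
# re-authentication, but the consent screen is still shown, giving the
# user a chance to notice (and abort) an unexpected handshake.
def next_step(has_valid_session, previously_consented):
    if not has_valid_session:
        return "authenticate"
    # previously_consented is deliberately ignored: skipping login is a
    # UX win, but skipping consent removes the only visible cue that a
    # handshake is happening, so we never skip it.
    return "prompt_for_consent"

assert next_step(False, False) == "authenticate"
assert next_step(True, True) == "prompt_for_consent"   # no "invisible" handshake
```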

September 19th, 2013

Did Apple Just Kill the Password?

On the surface, Apple’s recent iPhone 5S announcement seemed just that: all surface, no substance. But as many reviewers have pointed out, the true star of the new model may not be its shimmering gold sheen but instead the finger sensor built into its home button.

Using a fingerprint to prove you are who you claim to be is not new. But building it into a phone is. And as your mobile phone becomes your carrier of content (like photos), currency (like digital wallet) and identity (like keychain) as well as your route to all manner of digital services, proving who you are will become essential for mobile everything.

Before mobile, Web security rooted itself in the username/password paradigm. Your username and password defined the identity you used to authenticate yourself to PayPal, Amazon, Google, Facebook and everything in between. There are stronger ways to secure access to Web sites but typed passwords predominate because they are personal and easy to enter on a PC, where all Web pursuits took place – until the arrival of the smartphone, that is.

The smartphone and its similarly keyboard-deprived cousin, the tablet, increasingly represent the jumping-off point for the Internet. Sometimes, it may start with a browser. Many times, it begins with an app. In either case, passwords are no fun when you move to a mobile device. They are cumbersome to type and annoying when you have to enter them repeatedly across multiple sites, services and apps. So, anything that diminishes the burden of typing passwords on a mobile device is a good thing.

Apple is not alone in identifying that end users want ways to eliminate passwords on mobile. Our company, CA Technologies, has a sizeable franchise in Single Sign-On (SSO) and strong authentication technologies, which – when applied to mobile – can significantly reduce the burden of recalling multiple passwords across different sites, apps and services. In fact, CA Layer 7 hosted a webinar on this very topic this morning. But what Apple has achieved is significant because it substitutes a highly-personalized biometric for a password. This has the power to streamline mobile commerce, mobile payments and every other kind of mobile-centered interaction or transaction.

Many commentators have rightfully pointed out that biometrics do not offer a panacea. If your fingerprint gets hacked, for instance, it’s hacked permanently. But there are easy ways of augmenting biometrics to make them stronger. Biometrics can be combined with over-the-air tokens like one-time passwords or supplemented with context-aware, server-side challenges that escalate requirements based on risk. But it’s what they achieve when compared with the alternative that makes fingerprint readers so powerful.

The 5S simplifies authentication for the average user, which encourages security use and acceptance. It also eliminates bad mobile habits like using short, easily memorable, easy-to-type passwords that scream insecurity. Apple is not the first vendor to realize consumers don’t like passwords on mobile devices. But by bringing an alternative to the mass market, it is helping to draw attention to the need and the opportunity: killing the password may open mobile to a whole host of novel security-dependent Internet services.

August 26th, 2013

Layer 7 Mobile Access Gateway 2.0

Today, Layer 7 introduced version 2.0 of the Mobile Access Gateway, the company’s top-of-the-line API Gateway. The Mobile Access Gateway is designed to help enterprises solve the critical mobile-specific identity, security, adaptation, optimization and integration challenges they face while developing mobile apps or opening APIs to app developers. In the new version, we have added enhancements for implementing Single Sign-On (SSO) to native enterprise apps via a Mobile SDK for Android and iOS.

Too many times, we have seen the effect of bad security practices. My colleague Matt McLarty eloquently discusses the gulf between developers on one hand and enterprise security teams on the other in this Wired article on Tumblr’s security woes. Because these two groups have different objectives, it becomes hard to get a common understanding of how you can secure the enterprise while enabling app developers to build new productivity-enhancing apps. While nobody really wants to be the fall guy who lets a flaw take down a business, we can be sure Tumblr isn’t the last stumble we are going to see.

To prevent you from being the next Stumblr, we have taken a closer look at the technologies and practices for authenticating users and apps. None of these seemed adequate alone and – while acknowledging the value of leveraging existing technologies – we realized that a new approach was needed.

For mobile app security, there are three important entities that need to be addressed: users, apps and devices. Devices are the focus of the MDM solutions many enterprises are adopting and, although these solutions are good at securing data at rest, they fail to address the other two entities adequately.

Because today’s enterprise apps use APIs to consume data and application functionality that is located behind the company firewall or in the cloud, API security is vital to the success of any enterprise-level Mobile Access program. Therefore, APIs must be adequately secured and access to API-based resources must be controlled via fine-grained policies that can be implemented at the user, app or device level. To achieve this, the organization must be able to deal with all three entities.
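As a rough sketch (the entities and policy fields are invented for illustration), a fine-grained access decision that accounts for all three entities might look like this:

```python
# Hypothetical sketch of a fine-grained API access decision that
# considers all three entities named above: user, app and device.
def allow_api_call(user, app, device, resource):
    if not device["registered"]:
        return False                          # device-level policy
    if resource not in app["permitted_resources"]:
        return False                          # app-level policy
    return resource in user["entitlements"]   # user-level policy

user = {"entitlements": {"/photos"}}
app = {"permitted_resources": {"/photos", "/contacts"}}
device = {"registered": True}

assert allow_api_call(user, app, device, "/photos")
assert not allow_api_call(user, app, device, "/contacts")  # user lacks entitlement
```

The point is not the specific rules but that all three checks happen in one place, so a gap in any single dimension cannot be exploited on its own.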

Based on this, we have proposed a new protocol that leverages existing technologies: PKI is used to identify devices through certificates, OAuth 2.0 is used to grant apps access tokens and OpenID Connect is used to grant user tokens. This new approach, described in our white paper Identity in Mobile Security, provides SSO for native apps and makes sure the handshake is done with a purpose – to set up mutual SSL for secure API consumption.

Furthermore, this framework is adaptable to changing requirements because new modules can replace or add to existing protocols. For example, when an organization has used an MDM solution to provision devices, the protocol could leverage this instead of generating new certificates. Equally, in some high-security environments, the protocol should be able to leverage certificates embedded in third-party hardware.

To simplify the job for app developers, the Mobile Access Gateway now ships with a Mobile SDK featuring libraries that implement the client side of the handshake. The developer will only have to call a single API on the device with a URL path for the resource as its parameter. If the device is not yet registered or there are no valid tokens, the client will do the necessary handshake to get these artifacts in place. This way, app developers can leverage cryptographic security in an easy-to-use manner, giving users and security architects peace of mind.
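The actual SDK interface is not shown in this post; as a purely hypothetical sketch of the behaviour described – one call per resource, with registration and token handshakes handled lazily inside the library – consider:

```python
# Invented illustration (not the real Mobile SDK API) of a client that
# performs the registration/token handshake transparently on first use.
class MobileClient:
    def __init__(self):
        self.device_registered = False
        self.token = None

    def _handshake(self):
        # Stand-in for the real PKI / OAuth 2.0 / OpenID Connect exchanges
        self.device_registered = True
        self.token = "token-from-handshake"

    def get(self, path):
        if not self.device_registered or self.token is None:
            self._handshake()   # transparent to the app developer
        return f"GET {path} with {self.token}"

client = MobileClient()
# The developer only ever sees the single call with a resource path:
assert client.get("/sales/q3") == "GET /sales/q3 with token-from-handshake"
```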

July 24th, 2013

IoT: The Weighting Game


This must have been a scary few moments. On April 23, the main Associated Press Twitter account tweeted about explosions at the White House and President Obama being hurt. Guess what happened next? The Dow went down by over 100 points within minutes of the tweet.

So why did this happen? Regardless of whether the trades were executed by an algorithm or a human, both were treating all tweets from that AP feed as equal. They traded based on the content of a single tweet – and the resulting feedback loop caused the drop in the stock market.

Fast forward to IoT and imagine that each Twitter account is a sensor (for instance, a smart meter) and the tweets are the sensor readings. Further imagine that the stock market is the grid manager balancing electricity supply and demand. If we were to attach the same weight to each data point from each smart meter, a potential attack on the smart meters could easily be used to manipulate the electrical grid and – for instance – cause the local transformer to blow up or trigger a regional blackout via a feedback loop.

Yet strangely enough – when talking about the IoT – the trustworthiness of sensor data does not appear to be of concern. All data are created equal, or so the assumption seems to be. But data have an inherent quality or weight inferred from the characteristics of the endpoint and how much it is trusted. Any algorithm using sensor data would need to not only take the data points into account as such but also weight each data point based on the sensor’s actual capabilities, its identity and the trust relationship established with that sensor.
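To make the idea concrete, here is a toy sketch (the trust scores are invented) in which each reading is weighted by how much the endpoint is trusted, so a barely-trusted outlier barely moves the result:

```python
# Illustrative sketch: instead of treating all readings as equal, weight
# each one by a trust score derived from the endpoint's characteristics.
def weighted_average(readings):
    # readings: list of (value, trust_weight) pairs, weights in [0, 1]
    total_weight = sum(w for _, w in readings)
    if total_weight == 0:
        raise ValueError("no trusted data")
    return sum(v * w for v, w in readings) / total_weight

# Three well-trusted meters agree; a fourth, barely trusted one reports
# a wild outlier (e.g. a compromised smart meter).
readings = [(50.1, 0.9), (49.8, 0.9), (50.0, 0.8), (500.0, 0.05)]
avg = weighted_average(readings)
assert 49 < avg < 60   # the outlier barely moves the result
```

An unweighted mean of the same readings would land above 160 – exactly the kind of distortion a grid-balancing algorithm must not act on.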

I tried to capture this relationship in the picture below.

Endpoint Security in IoT

How can we account for the risk that not all data are created equal?

Credit card companies provide a good object lesson in the way they have embraced inherent insecurity. They decided to forgo stronger security at the endpoint (the credit card) in order to lower the bar for use and increase market adoption. But to limit the risk of fraudulent use, every credit card transaction is evaluated in the context of the most recent transactions.
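A toy sketch of that idea – the threshold and rule are invented for illustration, real fraud scoring is far richer – might flag a transaction that spikes far above the recent history:

```python
# Hedged sketch of evaluating each transaction in the context of the
# most recent ones, rather than trusting the endpoint in isolation.
def looks_fraudulent(amount, recent_amounts, spike_factor=10):
    if not recent_amounts:
        return False   # no history yet to compare against
    typical = sum(recent_amounts) / len(recent_amounts)
    return amount > spike_factor * typical   # sudden spike vs. recent history

history = [25.0, 40.0, 31.0]
assert not looks_fraudulent(60.0, history)     # within normal range
assert looks_fraudulent(2500.0, history)       # flagged for review
```

The same pattern transfers to IoT: judge each sensor reading against context rather than assuming the endpoint itself is secure.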

A similar approach will be required for IoT. Instead of chasing impossible endpoint security, we should embrace the management of (data) risk in the decision-making process. An advanced, high-performing API Gateway like Layer 7’s can be used to perform data classification at the edge of the enterprise and attach labels to the data flowing through the Gateway and into the control processes.

I’d be curious to learn if and how you would deal with the data risk. Do you assume that all data are created equal? Or does the above picture resonate with your experiences?