February 26th, 2014

What We Should Learn from the Apple SSL Bug


Two years ago, a paper appeared with the provocative title “The Most Dangerous Code in the World.” Its subject? SSL, the foundation for secure e-commerce. The world’s most dangerous software, it turns out, is a technology we all use on a more-or-less daily basis.

The problem the paper described wasn’t an issue with the SSL protocol, which is a solid and mature technology, but with the client libraries developers use to start a session. SSL is easy to use but you must be careful to set it up properly. The authors found that many developers aren’t so careful, leaving the protocol open to exploit. Most of these mistakes are elementary, such as not fully validating server certificates and trust chains.
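To make this concrete, here is a minimal sketch in Python’s standard ssl module (the function names are mine, not from the paper) of the validation steps the authors found so many clients skipping:

```python
import socket
import ssl

def make_strict_context() -> ssl.SSLContext:
    """A TLS context that fully validates the server certificate chain
    and hostname, using the system trust store."""
    # create_default_context() enables CERT_REQUIRED and hostname
    # checking by default; the careless pattern the paper describes
    # amounts to setting check_hostname = False and
    # verify_mode = ssl.CERT_NONE, which accepts any certificate.
    return ssl.create_default_context()

def open_verified_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that refuses unverified servers."""
    sock = socket.create_connection((host, port))
    return make_strict_context().wrap_socket(sock, server_hostname=host)
```

The point is how little code correct validation takes; the mistakes the paper catalogues come from deliberately switching these checks off, usually to silence certificate errors during development.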

Another dramatic example of the pitfalls of SSL emerged this last weekend, as Apple issued a warning about an issue discovered in its own SSL libraries on iOS. The problem seems to come from a spurious “goto fail” statement that crept into the source code, likely the result of a bad copy/paste. Ironically, fail is exactly what this extra code did. Clients using the library failed to completely validate server certificates, leaving them vulnerable to exploit.

The problem should have been caught in QA; obviously, it wasn’t. The lesson to take away here is not that Apple is bad — it responded quickly and efficiently, the way it should — but that even the best of the best sometimes make mistakes. Security is just hard.

So, if security is this hard and people will always make mistakes, how should we protect ourselves? The answer is to simplify. Complexity is the enemy of good security because complexity masks problems. We need to build our security architectures on basic principles that promote peer-reviewed validation of configuration as well as continuous audit of operation.

Despite this very public failure, it is safe to rely on SSL as a security solution, but only if you configure it correctly. SSL is a mature technology and it is unusual for problems to appear in its libraries. But this weekend’s events do highlight the uncomfortable line of trust we necessarily extend to third-party code. Obviously, we need to invest our trust carefully. But we must also recognize that bugs happen and that the real test is how effectively we respond when exploits appear and patches become available. Simple architectures work in our favor when the zero-day clock starts ticking.

On Monday at the RSA Conference, CA Technologies announced the general availability of the new Layer 7 SDK for securing mobile transactions. We designed this SDK with one goal: to make API security simpler for mobile developers. We do this by automating the process of authentication and setting up secure connections with API servers. If developers are freed up from tedious security programming, they are less likely to do something wrong — however simple the configuration may appear. In this way, developers can focus on building great apps, instead of worrying about security minutia.

In addition to offering secure authentication and communications, the SDK also provides secure Single Sign-On (SSO) across mobile apps. Use the term “SSO” and most people instinctively picture one browser authenticating across many Web servers. This common use case defined the term. But SSO can also be applied to the client apps on a mobile device. Apps are very independent in iOS and Android, and sharing information between them, such as in an authentication context, is challenging. Our SDK does this automatically and securely, providing a VPN-like experience for apps without the very negative user experience of mobile VPNs.

Let me assure you that this is not yet another opaque, proprietary security solution. Peel back the layers of this onion and you will find a standards-based OAuth and OpenID Connect implementation. We built this solution on top of the Layer 7 Gateway’s underlying PKI system and we leveraged this to provide increased levels of trust.

If you see me in the halls of the RSA Conference, don’t hesitate to stop me and ask for a demo. Or drop by the CA Technologies booth where we can show you this exciting new technology in action.

January 3rd, 2014

Snapchat Snafu!

When the folks at Snapchat recently turned down an acquisition offer of three billion dollars, I have to admit I was shocked by their incredibly high estimation of their own importance. After all, half of their “secret sauce” is an easily-reproducible photo sharing app; the other half is the fact that their users’ parents haven’t discovered it yet. I’ll admit to a bit of jealousy; the fact that my age starts with “3” probably makes me demographically incapable of understanding the app’s appeal. However, what I do understand is that a frightening disregard for API security might have jeopardized the entire company’s value. Loss of user trust is a fate worse than being co-opted by grandparents sharing cat pictures.

While Snapchat does not expose its API publicly, this API can easily be reverse engineered, documented and exploited. Such exploits were recently published by three students at Gibson Security and used by at least one hacker organization that collected the usernames and phone numbers of 4.6 million Snapchat users. Worse, the company has been aware of these weaknesses since August and has taken only cursory measures to curtail malicious activity.

Before we talk about what went wrong, let me first state that the actual security employed by Snapchat could be worse. Some basic security requirements have clearly been considered and simple measures such as SSL, token hashing and elementary encryption have been used to protect against the laziest of hackers. However, this security posture is incomplete at best and irresponsible at worst because it provides a veneer of safety while still exposing user data to major breaches.

There are a few obvious problems with the security on Snapchat’s API. Its “find friends” operation allows unlimited bulk calls tying phone numbers to account information; when combined with a simple number sequencer, every possible phone number can be looked up and compromised. Snapchat’s account registration can also be called in bulk, presenting opportunities for user fraud, spam and so on. And finally, the encryption Snapchat uses for the most personal information it processes – your pictures – is weak enough to be called obfuscation rather than true encryption, especially since its shared secret key was hard-coded as a simple string constant in the app itself.

These vulnerabilities could be minimized or eliminated with some incredibly basic API Management functionality: rate limiting, better encryption, more dynamic hashing mechanisms and so on. However, APIs are always going to be a potential attack vector and you can’t just focus on weaknesses discovered and reported by white hat hackers. No security – especially reactive (rather than proactive) security – is foolproof but your customers’ personal data should be sacrosanct. You need the ability to protect this personally-identifiable information, to detect when someone is trying to access or “exfiltrate” that data and to enable developers to write standards-based application code that implements the required security without undermining it at the same time. You need a comprehensive, end-to-end solution that can protect both the edge and the data itself – and that has the intelligence to guard against unanticipated misuse.
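To show just how basic that first countermeasure is, here is a minimal in-memory token-bucket rate limiter in Python. It is a sketch of the general technique, not of any particular API Management product, and the caller key is deliberately simplified:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal rate limiter: each caller gets `rate` requests per
    `per` seconds; excess calls are rejected until tokens refill."""

    def __init__(self, rate: int = 10, per: float = 60.0):
        self.rate, self.per = rate, per
        self.allowance = defaultdict(lambda: float(rate))
        self.last_check = defaultdict(time.monotonic)

    def allow(self, caller: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_check[caller]
        self.last_check[caller] = now
        # Refill tokens in proportion to elapsed time, capped at `rate`.
        self.allowance[caller] = min(
            self.rate,
            self.allowance[caller] + elapsed * (self.rate / self.per))
        if self.allowance[caller] < 1.0:
            return False  # bulk enumeration gets throttled here
        self.allowance[caller] -= 1.0
        return True
```

A real gateway would persist these counters and key them on API key, device fingerprint or account rather than a raw caller string, but even this much would have made sequencing through every phone number impractical.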

While our enterprise customers often look to the startup world for lessons on what to do around developer experience and dynamic development, these environments sometimes also provide lessons in what not to do when it comes to security. The exploits in question happened to divulge only user telephone and username data but large-scale breaches of Snapchat images might not be far behind. When talking about an API exposed by an enterprise or governmental agency, the affected data might be detailed financial information, personal health records or classified intelligence information. The potential loss of Snapchat’s $3 billion payday is serious to its founders but lax enterprise API security could be worse for everyone else.

CA’s line of API security products – centered around the Layer 7 API Management & Security Suite for runtime enforcement of identity management, data protection, threat prevention and access control policies – can help you confidently expose enterprise-class APIs to enable your business while preventing the type of breach experienced by Snapchat, among others.

November 5th, 2013

Thoughts on Trends in IoT & Mobile Security


Recently, I read an article about predicted growth in the Internet of Things (IoT). Extrapolating from a previous Cisco estimate, Morgan Stanley is predicting there will be 75 billion connected devices by 2020. This sort of math exercise is entertaining and has a real “wow” factor but the real question is: What does this mean for consumers and enterprises?

In recent years, consumer electronics manufacturers have started to see the usefulness of building Internet connectivity into their appliances. This enables the post-sales delivery of service upgrades and enhanced features. It also allows mobile apps to control home appliances remotely. This is nothing radical per se; a decade ago, I observed a sauna in the Nokia Research Center’s lab being controlled by voice and WML. But this was still a simple one-off integration. As the number of device form factors increases, the complexity of integrating devices grows. The term “anytime, anywhere computing” is usually used to describe this scenario but it isn’t entirely adequate. As a consumer, I don’t just want device-independent access to a service – I want my various devices and appliances to work with each other, so that smarter interactions can be achieved.

Today, we already see a plethora of connected devices with more-or-less crude connectivity and integration options. Smartphones can sync and connect with tablets, TVs and laptops. Mostly, these are very basic integrations, such as your various devices “knowing” about the last page you read in an eBook, regardless of which device you used. But the number and complexity of these integrations will increase greatly in the coming years.

The Coming Age of Connectivity
One of the main reasons the iPhone revolutionized mobile computing was Apple’s focus on user experience. Since then, mobile vendors have battled to see who can provide the best experience within the device. The next battle will be over cross-device experiences within the broader ecosystem, as users roam from device to device. And in this battle, the big players will keep adding their own proprietary components (software and hardware). The sheer size of these ecosystems will make the opportunity large enough to attract even more mindshare. If you are making money, who cares about proprietary protocols and connectors?

But how does this relate to IoT, you may ask – isn’t this just a subset of IoT’s promise? The answer is “yes” – but that is how this revolution will work: more like an evolution, with the consumer-driven use cases implemented first. Yes, there are other enterprise use cases and we can see many protocols and frameworks that claim to address these requirements. In the end though, I believe most of these platforms will struggle with developer uptake, since most of the developer mindshare is found in the big mobile ecosystems. As with mobile, the successful approaches will be the platforms that can offer developers familiar tools and a roadmap to revenue.

It’s clear the big players in mobile, like Samsung and Apple, see a huge opportunity in connected devices. As we move on, we will see more devices get included in each of the mobile ecosystems’ spheres. Increased integration between mobile devices and cars is already in the works. Similarly, among the many notable events at last week’s Samsung DevCon (an excellent show, by the way), several SDKs were launched with the aim of solving specific consumer needs around media consumption in the home. But the impact of increasing connectivity will go beyond these relatively well-understood use cases to encompass home automation, smart grid, healthcare and much more.

Alternative Authentication Methods for the Connected World
In this multi-device, multi-service world, conventional username/password login methods will not be convenient. Advances in the biometric space (such as Nymi or Apple Touch ID) will be relevant here. I suspect that, just as we have seen a bring-your-own-device trend grow in enterprise mobile, we will see a bring-your-own-authentication paradigm develop. As a larger set of authentication methods develops in the consumer space, enterprise IT systems will need to support these methods and often be required to adopt a multi-layered approach.

Ensuring Big Data Privacy in the Age of IoT
Another set of challenges will be created by the enormous amounts of data generated by IoT. Increasingly, connected devices are able to collect and transmit contextual data on users. This information can be highly useful for vendors and users alike. But what happens if data is used for purposes other than those first intended or agreed to? Who owns the raw data and the generated insights? And how is the rightful owner in control of this? Today, there is no general standard available nor are the mobile ecosystems providing adequate privacy protection. Sometimes one gets the feeling that users don’t care but they will probably start caring if and when data leakage starts to make an impact on their wallets.

Meanwhile, Layer 7 will continue to innovate and work on solutions that address the challenges created by IoT, multi-device authentication and Big Data. Oh, and by the way: I believe Morgan Stanley underestimated the number; I think it will be double that. You heard it here first…

October 31st, 2013

Security in the Frenetic Age


Happy Halloween, everyone!

There has been a lot of talk about data leaks and data privacy lately, not naming any names. The articles and blog entries on this topic are filled with outrage, written with jaws dropped. I have to admit that the only shock I experience on this subject is at how shocked people are. As divisive as these issues are, fundamental questions remain. How much privacy should we expect? How many times a week are you prompted to accept a long block of terms and conditions in order to access online services? How many times do you read them? Isn’t that the scary part?

The mobile revolution has brought us into the Frenetic Age. Hear two bars of a song you like? Buy it on iTunes. Order a tasty looking burrito? Instagram it.  Overcome by wit?  Facebook, Twitter, Tumblr…  Digitizing our social lives — and our lives in general — leaves a trail of data.  Eric Schmidt claims that we now create as much information every two days as we did up until 2003. “If you aren’t paying for the product, you are the product” goes the current mantra.  Should we accept this as easily as we accept those terms and conditions?

In this frenetic age, how can we protect our privacy? I believe data protection and access control will become increasingly vital topics for all of us. Being a responsible company that protects its consumers’ privacy will become a competitive advantage.  Providing safe harbour for third-party data will provide similar opportunities for companies in the next decade, as collecting private data did for social networks in the last decade. At Layer 7, we feel that our Data Lens solution offers a good starting point for companies who want to expose their data, their partners’ data and their customers’ data in an acceptable way.

October 4th, 2013

Can Your API be BREACHed?

TLS and SSL form the foundations of security on the Web. Everything from card payments to OAuth bearer tokens depends on the confidentiality and integrity a secure TLS connection can provide. So when a team of clever engineers unveiled a new attack on SSL/TLS – called BREACH – at July’s Black Hat conference, more than a few eyebrows were raised. Now that it’s Cyber Security Awareness Month, it seems like a good time to examine the BREACH threat.

There have already been a number of articles in the technology press identifying threats BREACH poses to traditional Web sites and suggesting ways to mitigate the risks but it is important for us to examine this attack vector from an API perspective. API designers need to understand what the attack is, what risks there are to Web-based APIs and what can be done to mitigate the risks.

The BREACH attack is actually an iteration of a TLS attack named CRIME, which emerged last year. Both attacks are able to retrieve encrypted data from a TLS connection by taking advantage of the way data compression works in order to guess the value of a confidential token in the payload. While CRIME relied specifically on exploiting TLS-level compression, the BREACH exploit can target messages sent with compression enabled at the HTTP level, which may be much more widely enabled in the field.

HTTP compression is based on two compression strategies for data reduction: Huffman coding and LZ77. The LZ77 algorithm accomplishes its goal of data compression by identifying and removing duplicate pieces of data from an uncompressed message. In other words, LZ77 makes a message smaller by finding duplicate occurrences of data in the text to be compressed and replacing them with smaller references to their locations.

A side effect of this algorithm is that the compressed data size is indicative of the amount of duplicate data in the payload. The BREACH attack exploits this side effect of LZ77 by using the size of a message as a way of guessing the contents of a confidential token. It is similar in nature to continually guessing a user’s credentials on a system that provides you with unlimited guesses.
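This size side channel is easy to demonstrate with Python’s zlib module, which implements DEFLATE (the LZ77-plus-Huffman engine inside gzip). The page layout and token below are invented for illustration:

```python
import zlib

# Invented secret for illustration only.
SECRET = b"token=d2a372efa35aab29028c49d71f56789"

def response_size(guess: bytes) -> int:
    """Size of a compressed response that reflects the attacker's
    guess alongside the secret: the only signal BREACH needs."""
    body = b"<html>" + guess + b" ... " + SECRET + b"</html>"
    return len(zlib.compress(body))  # DEFLATE, as used by gzip

# A guess that duplicates the secret is folded into a short LZ77
# back-reference, so the compressed response comes out smaller than
# one carrying a same-length but unrelated guess.
right = response_size(b"token=d2a372efa35aab29028c49d71f56789")
wrong = response_size(b"nonce=9f8e7d6c5b4a39281706f5e4d3c2b1a")
```

Here `right` is measurably smaller than `wrong`; in the real attack the adversary extends a known prefix one character at a time, watching which candidate keeps the response small.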

While the premise is scary, the good news is that the BREACH attack doesn’t give an attacker unfettered access to the encrypted TLS payload. Instead, it is a targeted attack that attempts to retrieve a confidential token through repeated and iterative guesses. In fact, the attack isn’t an exploit of the TLS protocol at all; rather, it is an attack that can be applied to any messaging system that uses the gzip compression algorithm (which combines LZ77 with Huffman coding).

On top of this, BREACH is not an easy attack to pull off. A would-be BREACHer must:

  1. Identify an HTTPS message that has compressed data, a static secret token and a property that can be manipulated
  2. Trigger the application or server to generate many such messages in order to have a large enough sample size to iteratively guess the token
  3. Intercept all of these messages in order to analyze their sizes

Each of these requirements is non-trivial. When combined, they greatly reduce the attack surface for a BREACH attack in the API space. While API messages certainly contain data that may be manipulated and while many APIs do provide compressed response data, very few of those API messages also contain confidential tokens.

But designers of APIs shouldn’t dismiss the possibility of being BREACHed. There are at least two scenarios that might make an API susceptible to this attack vector.

Scenario 1 – Authentication & CSRF Tokens in Payloads:
Many APIs return an authentication token or CSRF token within successful responses.  For example, a search API might provide the following response message:

<SearchResponse>
    <AuthToken>d2a372efa35aab29028c49d71f56789</AuthToken>
    <Terms>…</Terms>
    <Results>…</Results>
</SearchResponse>

If this response message were compressed and the attacker were able to coerce a victim into sending many requests with specific search terms, it would only be a matter of time before the AuthToken credential was retrieved.

Scenario 2 – Three-Legged OAuth:
APIs that support the OAuth 2 framework for delegated authorization often implement CSRF tokens, as recommended in the OAuth 2 RFC. The purpose of the token is to protect client applications from an attack vector in which a client can be tricked into unknowingly acting upon someone else’s resources (Stephen Sclafani provides a better explanation of the CSRF threat here.) Because CSRF tokens are reflected back by the server, the three-legged OAuth dance becomes a possible attack surface for BREACH.

For example, an attacker could coerce a victim to send repeated OAuth 2 authorization requests and utilize the state parameter to guess the value of the authorization token. Of course, all of this comes with the caveat that the OAuth server must be compressing responses to become a target. The fact that a new authorization code is generated for each authorization attempt would make this attack less practical but still theoretically possible.
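One mitigation on the client side is to deny the attacker a static target: generate a fresh, unguessable state value for every authorization request and accept it only once. The following Python sketch is an illustration of that idea, with a deliberately simplified in-memory store:

```python
import hmac
import secrets

# session_id -> pending state; an in-memory sketch, not production storage.
_pending = {}

def new_state(session_id: str) -> str:
    """Issue one fresh, unguessable state value per authorization
    request; reused or predictable state gives BREACH a static target."""
    state = secrets.token_urlsafe(32)
    _pending[session_id] = state
    return state

def check_state(session_id: str, returned: str) -> bool:
    """Validate and consume the state: constant-time compare, single use."""
    expected = _pending.pop(session_id, "")
    return hmac.compare_digest(expected, returned)
```

Because each value is random and consumed on first use, repeated coerced authorization requests never reflect the same secret twice, which is exactly the sample the attack depends on.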

Ultimately, the simplest way to mitigate the BREACH attack is to turn off compression for all messages. It isn’t hard to do and it will stop BREACH dead in its tracks. However, in some cases, designers may need to support compression for large data responses containing non-critical information or because they are supporting platforms with low bandwidth capabilities. In these instances, it makes sense to implement a selective compression policy on an API Gateway.
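A selective compression policy of this kind can be sketched in a few lines of Python. The token pattern and size threshold below are hypothetical; a real Gateway policy would be configuration-driven:

```python
import gzip
import re

# Hypothetical marker for responses that reflect confidential tokens.
SECRET_PATTERN = re.compile(rb"(AuthToken|csrf|state)", re.IGNORECASE)

def maybe_compress(body: bytes, min_size: int = 1024) -> tuple[bytes, bool]:
    """Return (payload, was_compressed). Skip compression for small
    bodies and for any body that carries a secret BREACH could target."""
    if len(body) < min_size or SECRET_PATTERN.search(body):
        return body, False
    return gzip.compress(body), True
```

The effect is that large, non-sensitive responses still get the bandwidth savings while anything echoing a token travels uncompressed, removing the size oracle.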

While disabling compression will certainly negate the impact of the BREACH attack, a more general solution is to impose smart rate limiting on API requests. This not only denies a BREACH attacker the sample size needed to guess data, it will also stop other side-channel attacks that don’t rely solely on compression. In addition, log analysis and analytics will make it easier to spot any attempt at an attack of this kind.

An API Gateway is the key component for this type of security mitigation in the API space. A Gateway can provide the level of abstraction needed to enforce configurable compression and rate limiting policies that server-side developers may not have the security background to implement effectively. In addition, the Gateway acts as a central enforcement point for security policy – particularly useful in larger federated organizations.

TLS is core to most of the security implementations that have evolved on the Web, including the OAuth 2 framework. This latest published attack does not render the world’s TLS implementations useless but it does introduce an interesting attack vector that is worth protecting against in the API domain. Remember, API rate limiting and usage monitoring are useful for much more than just monetizing an API!