December 10th, 2010

WikiLeaks – How to Fix a Leak with Better Plumbing

Category Uncategorized
 

The 9/11 Commission Report cited "pervasive problems of managing and sharing information across a large and unwieldy government that had been built in a different era to confront different dangers". Since 9/11, governments around the world have considerably adjusted their stance on information-sharing to allow more adequate and timely sharing of information. Unfortunately, the need to share information quickly has, in many situations, taken priority over the need to protect it, leaving security policies, certification and accreditation practices, and existing security controls behind.

WikiLeaks may jeopardize all we've worked towards to enhance information sharing, and impede efforts to make information-sharing more effective. Or it may serve as a wake-up call that our current policies, processes and solutions are not adequate in today's world, where information must be collected, fused, discovered, shared and protected at network speed.

Here at Layer 7, we've been working with government agencies worldwide to support their need to share information more quickly, while introducing a more robust set of access and security controls so that only those with need-to-know clearance can access privileged information. In the following paragraphs, I'm going to discuss how Layer 7 Technologies helps break down information-sharing silos while maintaining a high degree of information protection, control and tracking.

There are multiple efforts underway across government agencies to use digital policy to control who gets access to what information, and when, as opposed to relying on a written policy. Layer 7's policy-oriented controls allow digital policy to be defined and enforced across distributed information silos. Whether inside an enterprise or in the cloud, government agencies and commercial entities can use Layer 7 to define and enforce rules for information discovery, retrieval and dissemination across a variety of security realms and boundaries. With the right kind of policy controls, companies can avoid a WikiLeaks incident of their own.

Layer 7 provides information plumbing for the new IT reality. Using Layer 7 products, organizations can address:

Data Exfiltration – The WikiLeaks scandal broke because of a single user's ability to discover, collect and exfiltrate massive quantities of information, much of which was not needed for the user's day-to-day activities. With Layer 7, digital policies can be defined and enforced that limit how often a single user can retrieve a given type of data, or combinations of data types that, when aggregated, could indicate malicious intent. If the user exceeds this administratively imposed limit, Layer 7 can either allow the operation while notifying administrative or security personnel of the potential issue, or disallow access altogether pending remediation.
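To make the idea concrete, here is a minimal sketch in Python of the kind of per-user retrieval quota such a digital policy might express. The data-type names, limits and notify/deny handling are hypothetical illustrations, not Layer 7's actual policy language.

    import time
    from collections import defaultdict, deque

    # Hypothetical per-data-type retrieval limits over a sliding 24-hour window.
    LIMITS = {"cable": 50, "incident_report": 200}
    WINDOW_SECONDS = 24 * 60 * 60

    _history = defaultdict(deque)  # (user, data_type) -> retrieval timestamps

    def check_retrieval(user, data_type, notify, enforce_deny=False):
        """Return True if the retrieval may proceed under the quota policy."""
        key = (user, data_type)
        now = time.time()
        window = _history[key]
        # Discard retrievals that have fallen outside the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= LIMITS.get(data_type, 100):
            # Over the limit: alert security staff and allow, or deny outright.
            notify(f"{user} exceeded retrieval limit for {data_type}")
            if enforce_deny:
                return False
        window.append(now)
        return True

In a real gateway the counters would live inside the policy enforcement point, and the thresholds would be set administratively per data type and user community.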

Access Control – The heart of any information system is its ability to grant access to people who meet the "need to know" requirement for the information contained within. The reality in government organizations is that many information systems rely on the user's level of clearance, the network he is using, or coarse-grained information like the branch of service he belongs to, in order to grant or deny access to an information-sharing system in its entirety. For those going beyond the norm with Role Based Access Control (RBAC), the burden of administering hundreds or thousands of users, based on groups, is formidable and limits the effectiveness of the system; it increases the likelihood that the system has authorized users who no longer have a "need to know" for the information.

Layer 7 policy enforcement and decision making allows users to be authorized through either Attribute Based Access Control (ABAC) or Policy Based Access Control (PBAC). These authorization models use policy to correlate attributes about the user, the resource and the environment in order to allow or deny access. Attributes can be collected from local identity repositories or from enterprise attribute services.

In addition, enterprise attribute services can be federated so that attributes can be shared across organizations, minimizing the burden of managing attributes for users from other organizations. An often-overlooked aspect of authorization is the need to tie typical authorization policy languages like XACML (is user X allowed to access resource Y?) to policies around data exfiltration, data sanitization and transformation, and audit. This is where Layer 7 stands out: not only can we authorize the user, we can also enforce a wide variety of policy controls that are integrated with access control.
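As an illustration of the ABAC/PBAC idea, the sketch below evaluates a request by correlating attributes about the subject, resource and environment against policy rules. The attribute names and rules are invented for the example; a real deployment would express them in a policy language such as XACML.

    # A minimal ABAC sketch: each rule is a predicate over subject, resource,
    # and environment attributes. Attribute names here are illustrative only.
    RULES = [
        lambda s, r, e: s["clearance"] >= r["classification"],
        lambda s, r, e: r["compartment"] in s["compartments"],
        lambda s, r, e: e["network"] in ("SIPRNet", "enterprise"),
    ]

    def authorize(subject, resource, environment):
        """Permit only if every rule holds (deny-overrides combining)."""
        return all(rule(subject, resource, environment) for rule in RULES)

    subject = {"clearance": 3, "compartments": {"ops", "intel"}}
    resource = {"classification": 2, "compartment": "intel"}
    environment = {"network": "enterprise"}
    print(authorize(subject, resource, environment))  # True

Requiring every rule to pass mirrors the conservative deny-overrides default most real policy sets adopt.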

The following blog posts by Anil John, a colleague who specializes in the identity space, provide good information about the benefits of moving from roles to policy and attributes, and the community's needs in doing so: Policy Based Access Control (PBAC) and Federated Attribute Services


Monitoring, Visibility & Tracking – Even when controls are in place that help mitigate the issue of "need to know," there will always be a risk of authorized users collecting information within the norms of their current job and role. To limit this type of event, visibility into usage, both for the individual IT system owner and across enterprise systems, is key. Layer 7 allows monitoring data to be federated, so information about data accesses can be shared with the organizations monitoring the network or enterprise. This allows authentication attempts and valid authorizations to be tracked, and distributed data-retrieval trends to be analyzed on a per-user basis across the extended enterprise.
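As a rough sketch of the per-user trend analysis this federation enables, the following assumes audit records in a hypothetical format have been collected from every participating system, and flags users whose retrieval volume sits far above the norm:

    from collections import Counter
    from statistics import mean, stdev

    def flag_anomalous_users(audit_records, sigma=3.0):
        """audit_records: iterable of dicts like {"user": ..., "op": "retrieve"}
        federated from every participating system. Flags users whose retrieval
        counts sit more than `sigma` standard deviations above the mean."""
        counts = Counter(r["user"] for r in audit_records if r["op"] == "retrieve")
        if len(counts) < 2:
            return []
        mu, s = mean(counts.values()), stdev(counts.values())
        return [u for u, c in counts.items() if s and (c - mu) / s > sigma]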

Leakage of privileged information to unauthorized users can never be prevented with 100% certainty. However, with the straightforward implementation of a policy-based information control like Layer 7, access to confidential information can be restricted and tracked.


November 2nd, 2010

Creating Robust Net-Centric Services through Policy

Category Uncategorized
 

Next Tuesday at the TMForum Management World Americas conference in Orlando, I'll be co-presenting with Sriram Chakrapani (Chief, Integration Engineering Division, DISA) a session titled Policy Enabled Net-Centric Information Sharing. Between that and a whitepaper I'm putting the final touches on, titled "Robust Net-Centric Services", I thought it would be an opportune time to write a post discussing the value of policy in defining robust net-centric services.

As integration frameworks, Web Services and RESTful applications adequately address how applications are exposed and how they exchange information with one another (via SOAP/XML, for example) in a platform-agnostic way. In real-world applications, however, security, reliability, routing, bandwidth conservation, versioning and other requirements still have to be dealt with, and these in turn severely impact the loosely coupled nature of net-centric services.

For tactical edge deployments, as well as enterprise deployments that are disadvantaged in one way or another, these requirements are vital, as web services and consumers face challenges and need to operate in a constantly changing environment. Fluctuations in bandwidth and connection state, among other things, require web services to have situational awareness so they can adapt to a changing scenario. A simple example: a consumer and service are operating in a connected state to DISA Net-Centric Enterprise Services (NCES) and then become disconnected due to a kinetic or cyber attack. In this disconnected state, the information exchange must continue to operate seamlessly by moving to a fall-back set of requirements (security, transport, reliability, etc.), locally deployed core enterprise services (machine-to-machine messaging), and potentially a cached business service, all without impacting the user.

The presentation and paper propose the concept of "Robust Net-Centric Services": "net-centric services with a high degree of resilience even when faced with a comprehensive array of faults and/or challenges and inherently capable of reacting gracefully to both internal application changes as well as external environmental changes, all without impacting information exchange".

Given the distributed and federated nature of robust net-centric services, especially those supporting tactical edge communications, the ability to define robustness requirements using policies that are understandable and interoperable across a variety of implementations, enforced in a distributed fashion and easily changed, is key to achieving complete information superiority.

The paper and presentation will highlight the four primary challenges to creating robustness. For the sake of brevity, I'm only going to list the four categories in this blog post. Each will be detailed in the paper when it is released.

  1. The availability and robustness of a network
  2. The availability of resources to execute a particular function
  3. Information Assurance (IA)
  4. User Interface (UI)

In order to accommodate the challenges above, we must look back to a fundamental principle of software engineering: flexible systems are achieved by decoupling the variable parts of the implementation from the invariant parts. The variable layer can then be managed without affecting the system invariants. Within it, conflicting constraints and capabilities can be reconciled, managed and constantly monitored. For example, performance and response-time requirements can be weighed against security, confidentiality and privacy requirements.

Robust net-centric services employ a deployed, policy-driven, intelligent run-time capability to provide a Policy Layer, so that applications can be built against their respective business requirements and deployed without prior knowledge of the requirements they might face during certification, deployment or operation.

The Policy Layer provides a lightweight, federated on-ramp to the enterprise and to the particular enterprise services on which the application depends, and facilitates a policy-oriented approach to connectivity and integration with locally deployed resources as well as those available on the enterprise network. Architecturally, this layer is made up of two fundamental concepts: a Policy Enforcement Point (PEP) and a Policy Application Point (PAP). The following diagram illustrates how policy, together with a run-time policy enforcement and application capability, could be deployed to allow for robustness in the face of a comprehensive array of requirements and/or situational challenges.
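Alongside the diagram, here is a minimal sketch of what a PEP might do at run time: route each exchange according to the currently applied policy, falling back from the enterprise endpoint to a locally deployed service when the reach-back link is unavailable. The endpoints and policy structure are hypothetical; in practice a PAP would update them as conditions change.

    import urllib.request

    # Illustrative policy: ordered endpoints, from enterprise reach-back to
    # locally deployed fall-backs. A PAP would push updates to this structure.
    POLICY = {
        "messaging": [
            "https://nces.example.mil/messaging",        # enterprise core service
            "https://local-node.example.mil/messaging",  # local fall-back
        ]
    }

    def invoke(service, payload):
        """Enforce routing policy: try endpoints in order, degrade gracefully."""
        for endpoint in POLICY[service]:
            try:
                req = urllib.request.Request(endpoint, data=payload, method="POST")
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return resp.read()
            except OSError:
                continue  # link down or node unreachable: try the next endpoint
        raise RuntimeError(f"no reachable endpoint for {service}")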

Through policy enablement, operators can create and modify integration, caching, access control, privacy, confidentiality, audit logging and other such policies around the business services, without interfering with the development of the services themselves. This is the first step towards a real-world implementation of loosely coupled SOA and a necessary step in preparation for robustness.

Email me if you would like to receive the paper on robust net-centric services when it is completed, or if you have unique challenges/situations that you would like to see addressed in it. If you would like to learn more about how Layer 7 products support the vision of robust net-centric services today, contact your local government sales representative. I hope to see some of you in Orlando!

October 2nd, 2010

RESTful Web services and signatures

Category OAuth, REST, Security
 

A common question relating to REST security is whether or not one can achieve message-level integrity in the context of a RESTful web service exchange. Security at the message level (as opposed to transport-level security such as HTTPS) presents a number of advantages and is essential for achieving a number of advanced security-related goals.

When faced with the question of how to achieve message-level integrity in REST, the typical reaction of an architect with a WS-* background is to incorporate an XML digital signature in the payload. Technically, including an XML DSig inside a REST payload is certainly possible; after all, XML DSig can be used independently of WS-Security. However, there are a number of reasons why this approach is awkward. First, REST is not bound to XML: XML signatures only sign XML, not JSON or the other content types popular with RESTful web services. Second, it is often practical to keep signatures separate from the payload; this is why WS-Security defines signatures located in SOAP headers rather than using enveloped signatures. Most importantly, a REST 'payload' by itself has limited meaning without its associated network-level entities, such as the HTTP verb and the HTTP URI. This is a fundamental difference between REST and WS-*; let me explain further.

Below, I illustrate a REST message and a WS-* (SOAP) message. Notice how the SOAP message has its own SOAP headers in addition to transport-level headers such as HTTP headers.

The reason is simple: WS-* specifications go out of their way to be transport independent. You can take a SOAP message and send it over HTTP, FTP, SMTP, JMS, whatever. The 'W' in WS-* does stand for 'Web', but this etymology does not reflect today's reality.

In WS-*, the SOAP envelope can be isolated; all the necessary information is in there, including the action. In REST, you cannot separate the payload from the HTTP verb, because the verb is what defines the action. You can't separate the payload from the HTTP URI either, because the URI defines the resource being acted upon.

Any signature-based integrity mechanism for REST therefore needs the signature to cover not only the payload but also the HTTP URI and HTTP verb. And since you can't separate the payload from those HTTP entities, you might as well include the signature in the HTTP headers.

This is what a number of proprietary authentication schemes achieve today. For example, Amazon S3 REST authentication and the Windows Azure Platform both use HMAC-based signatures located in the HTTP Authorization header. Those signatures cover the payload as well as the verb, the URI and other key headers.
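The sketch below shows the general shape of such a scheme; it is not Amazon's or Microsoft's exact format. An HMAC is computed over the verb, the URI, the date and a digest of the payload, and carried in the Authorization header:

    import base64, hashlib, hmac

    def sign_request(secret_key, key_id, verb, uri, date, payload=b""):
        """Compute an HMAC-SHA256 over the request elements that give a REST
        call its meaning: verb, URI, date, and a digest of the payload."""
        payload_digest = hashlib.sha256(payload).hexdigest()
        string_to_sign = "\n".join([verb, uri, date, payload_digest])
        mac = hmac.new(secret_key, string_to_sign.encode(), hashlib.sha256)
        signature = base64.b64encode(mac.digest()).decode()
        return {"Authorization": f"HMAC {key_id}:{signature}", "Date": date}

    headers = sign_request(b"s3cret", "client-42", "PUT",
                           "/orders/17", "Sat, 02 Oct 2010 12:00:00 GMT",
                           b'{"qty": 3}')

Because the verb and URI are inside the signed string, a tampered request (say, a GET changed to a DELETE) fails verification even if the payload is untouched.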

OAuth v1 also defined a standard signature-based token which does just this: it covers the verb, the URI, the payload and other crucial headers. This is an elegant way to achieve integrity for REST. Unfortunately, OAuth v2 dropped this signature component of the specification. Bearer-type tokens are also useful but, as explained by Eran Hammer-Lahav in this post, dropping payload signatures completely from OAuth is very unfortunate.


September 17th, 2010

Enterprise SaaS integration using REST and OAuth

Category Uncategorized
 

The current trend of moving enterprise applications to SaaS-style public cloud solutions is raising a number of concerns regarding security and governance. What about integration, though? In the now-legacy enterprise, various applications are deployed within the same trusted network under a single security domain, which facilitates integration between these applications.

How do you integrate these applications going forward, when they are separated across a number of public cloud providers independent of each other? If you thought it was hard enough to integrate applications from different vendors inside your own domain, imagine what this will turn into once different solution providers host those applications. As a consumer of such services, you need to demand and favor solutions that provide adequate integration mechanisms; this is a critical selection factor. On the web, an elegant solution for integrating various services on behalf of users is gaining popularity: OAuth.

OAuth standardizes the process by which the owner of a resource authorizes an application to access that resource at the resource provider. OAuth is very 'resource-oriented', and as such is well suited to enabling authorization between two entities communicating through a RESTful web service interaction. This pattern, involving OAuth and REST, is ideal for integrating two SaaS providers acting on behalf of their common enterprise subscriber, as illustrated below.

In this case, two SaaS (or PaaS) solutions, which are otherwise independent, can share data as coordinated by the enterprise subscriber. This interaction substitutes for the integration that would traditionally occur on-premise between two applications managed by the enterprise itself, and provides the basis for restoring integration in the cloud.
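As a rough sketch of the resulting three-legged flow (the endpoints, keys and API paths are hypothetical), using the requests-oauthlib Python library: one SaaS provider, acting as the OAuth consumer, obtains a token the enterprise subscriber has authorized and then makes signed REST calls against the other provider's resources.

    from requests_oauthlib import OAuth1Session

    # Hypothetical endpoints exposed by SaaS provider B (the resource provider).
    REQUEST_TOKEN_URL = "https://saas-b.example.com/oauth/request_token"
    AUTHORIZE_URL = "https://saas-b.example.com/oauth/authorize"
    ACCESS_TOKEN_URL = "https://saas-b.example.com/oauth/access_token"

    # SaaS provider A acts as the OAuth consumer on the subscriber's behalf.
    oauth = OAuth1Session("consumer-key", client_secret="consumer-secret")
    request_token = oauth.fetch_request_token(REQUEST_TOKEN_URL)

    # The enterprise subscriber approves the delegation at this URL, and the
    # provider hands back a verifier code.
    print(oauth.authorization_url(AUTHORIZE_URL))
    verifier = input("verifier: ")

    oauth = OAuth1Session("consumer-key", client_secret="consumer-secret",
                          resource_owner_key=request_token["oauth_token"],
                          resource_owner_secret=request_token["oauth_token_secret"],
                          verifier=verifier)
    oauth.fetch_access_token(ACCESS_TOKEN_URL)

    # Signed REST call against provider B's resource, on the subscriber's behalf.
    response = oauth.get("https://saas-b.example.com/api/records")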

Of course, SaaS/PaaS adoption by the enterprise is only partial, and many IT assets remain on-premise. The enterprise therefore requires the same level of integration between externally hosted SaaS and the resources within the enterprise itself. It is logical that the enterprise support the very integration mechanism it demands from its external providers. This pattern is known as the 'cloud call-back' and is enabled by a specialized perimeter gateway that facilitates enterprise cloud adoption, such as CloudConnect.

To learn more about such patterns, or to find out how Layer 7 Technologies can help your enterprise integrate with the cloud securely, I invite you to visit us at the SOA/Cloud Symposium, October 5-6, 2010 in Berlin. I will be presenting on the topic of Enterprise Security Patterns for RESTful Web Services.


September 15th, 2010

Hacking as a Service (HaaS)

Category Uncategorized
 
On Monday this week there was a very interesting post by Andy Greenberg, a blog writer for Forbes.com, introducing a botnet herd standing by for payment and targeting instructions to launch a powerful Distributed Denial of Service (DDoS) attack. Based on his research, the botherd, called "I'm DDOS" and available at "IMDDOS.org", is supposedly meant for testing purposes; however, it is not clear how the company running the service would or could validate that a target actually belongs to the attacker. You can see from the user interface (UI) that the service looks fairly easy to use, making it a likely attack tool for anyone with minimal computer skills and a grudge.

As with pioneers in computer infrastructure as a service, such as Salesforce and Amazon's EC2 cloud, cyber arms dealers have begun asking customers, "Why buy when you can rent?" Renting cyber attack capabilities allows a political activist, terrorist group or nation state to launch an attack on an online application on demand. Those familiar with cloud computing and Software as a Service should recognize this as the malicious equivalent: "hacking as a service".

It is clear that the "as a Service" model is going to be popular both for people wanting to bring their products to market quickly and for those who want to see results with minimal up-front capital costs.