Friday, July 22, 2011

Accenture Cloud Security POV

It took me forever to write, review and pull together all the great contributions from my colleagues.

The white paper is published below--


Taming the Wild West: How to Build a Strong Cloud Security Strategy

Saturday, May 28, 2011

Security Incident Response: Not If but When

First, some dismal headline news:
We know the volume of direct attacks on networks, smartphones and applications is increasing -- SPAM, phishing scams, malware, mobile devices, DDoS, advanced persistent threats, etc. Unfortunately, some companies still do not have a clue what to do right after they've been the victim of an incident.

There is no official word from Lockheed on the precise nature of the cyber attack it encountered; the company is relatively mum on the subject. Yet the reports dribbling out do paint a picture of Lockheed at the helm as it mitigates a "significant and tenacious" threat.


Lockheed's operations team is in the throes of a well-defined security incident response process. Here are snippets of Lockheed's actions (announced or reported) in the context of a common incident-handling framework:
  • Declaration, Triage and Investigation: When an event has been reported by employees or detected by automated security controls, the first stage for the incident response team is to understand how bad the situation is, assign a severity and set the priority for dealing with the incident. From the announcements, we know that Lockheed immediately began an investigation to determine the category of the attack (internal or external), the assets affected by the incident and the criticality of those assets.
  • Containment: A containment strategy buys the incident response team time for proper investigation and determination of the incident's root cause. It is reported that Lockheed shut down its virtual private network after determining that the SecurID tokens were used to gain access to its network.
  • Analysis: Determine what happened and identify the root cause of the incident. We've read that work is well under way to preserve "electronic DNA" that may have been left by the attackers. Chris Ortman, US Homeland Security spokesman, said that his agency and the Pentagon are working with Lockheed to "provide recommendations to mitigate further risk".
  • Recovery: Once the incident is understood, we move into the recovery stage, which means implementing the necessary fixes to ensure this type of incident cannot happen again. It is reported that Lockheed has moved ahead with some sort of upgrade to its existing SecurID tokens, incorporated additional security for remote logins, reset employee passwords and switched to eight-digit access codes from the four-digit codes generated by the tokens.
You can find various activity models at the CERT Coordination Center, the Forum of Incident Response and Security Teams (FIRST), the National Institute of Standards and Technology (Computer Security Incident Handling Guide) and the ISO/IEC 27000 series.
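The stages above can be sketched as a minimal state machine. This is purely an illustrative sketch: the class names, the 1-5 asset criticality scale and the severity mapping are my own assumptions, not part of any of the frameworks listed.

```python
from enum import Enum

class Stage(Enum):
    DECLARED = 1
    CONTAINED = 2
    ANALYZED = 3
    RECOVERED = 4

class Incident:
    """Minimal incident record moving through the handling stages."""
    SEVERITIES = ("low", "medium", "high", "critical")

    def __init__(self, description, internal, assets):
        self.description = description
        self.internal = internal    # internal vs. external source
        self.assets = assets        # list of (name, criticality 1-5)
        self.stage = Stage.DECLARED
        self.severity = self.triage()

    def triage(self):
        # Severity is driven by the most critical affected asset.
        worst = max(criticality for _, criticality in self.assets)
        return self.SEVERITIES[min(worst - 1, 3)]

    def contain(self, action):
        # e.g. shutting down the VPN buys time for investigation
        self.stage = Stage.CONTAINED
        return f"containment: {action}"

incident = Incident("token-based remote access abuse",
                    internal=False,
                    assets=[("vpn", 4), ("workstation", 2)])
print(incident.severity)                    # "critical"
print(incident.contain("shut down the VPN"))
```

In practice the triage logic would weigh many more inputs (data sensitivity, spread, attacker persistence), but the shape -- declare, triage, contain, analyze, recover -- is the same.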


If it's just a matter of time, then your organization's capability to properly handle incidents should be a first-class citizen. Sony is reported to have had seven security incidents in two months, and they are not alone as targets. It's very likely your organization is under reconnaissance or even low-and-slow attack right now. While economic times are tight, you will have no choice but to invest in technology and processes that improve response.


And the future of incident response is "just-in-time" solutions to an ongoing situation. These configurable "courses of action" will be represented by remediation workflows and decision-making loops. You don't want to initiate a fix to a problem in one system, only to cause a loss of function in another system.
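One way to picture such a decision loop: before applying a fix to one system, walk a dependency map to see which other systems would feel it. A hedged sketch, where the system names and the dependency map are entirely made up:

```python
# Hypothetical dependency map: each system lists what it depends on.
DEPENDS_ON = {
    "billing": ["auth-server"],
    "vpn": ["auth-server"],
    "auth-server": [],
}

def impacted_by(system):
    """Return every system that (transitively) depends on `system`."""
    hit = set()
    for other, deps in DEPENDS_ON.items():
        if system in deps:
            hit.add(other)
            hit |= impacted_by(other)
    return hit

def plan_remediation(system, fix):
    """Decision loop: only proceed once downstream impact is known."""
    downstream = impacted_by(system)
    return {
        "target": system,
        "fix": fix,
        "notify_first": sorted(downstream),  # avoid surprise outages
    }

plan = plan_remediation("auth-server", "rotate token seeds")
print(plan["notify_first"])   # ['billing', 'vpn']
```

A real remediation engine would also encode rollback steps and approval gates, but the core idea is the same: the workflow reasons about side effects before it acts.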


As cyber attacks become more complex, the recovery and restoration workflows will become more diverse. They will be codified to behave in accordance with the variable inputs and outputs of each of the incident investigation, analysis and containment activities. Security incident response will be optimized based upon resource availability and risk factors. To move beyond the sound bites and theory, send me a note to discuss this topic.


How best to end this particular blog entry? With a clichéd, yet accurate quote: "by failing to prepare, you are preparing to fail" - Benjamin Franklin.






Wednesday, April 6, 2011

In The Cloud: Sharing is Caring

At Accenture we are refreshing our Cloud Security & Data Privacy point of view. It's been two years since we counseled more caution (and less action) in public cloud computing.

Today, we are more optimistic and more realistic about the road ahead. 

As a co-author of both, here are some observations on what has changed the sentiment:
  • We've moved away from a lot of the red-herring topics that can distract from the more significant issues 
  • Cloud providers have done a good job plowing the field and helping organizations get a good "feeling" about security and privacy, in particular the SaaS providers
  • Cloud providers are now willing to change standard contracting, acknowledging that data owners remain responsible for the acts and omissions of their service providers.
  • We are seeing a move away from a take-it-or-leave-it approach to security and compliance on the part of cloud provider offerings
Many companies will no doubt worry about theft, loss, or legal noncompliance if they put data in the public cloud. But waiting on the sidelines isn't a good option, either. In the refreshed point of view we talk about five steps for crafting a strong cloud security strategy -- now.

One of those steps is knowing how to share responsibility and risk. Clarifying the roles of the data owner, cloud provider (and system integrator, if applicable) in delivering legally compliant solutions is crucial. From a legal perspective, there is no clear division of labor between the cloud provider, an application manager (or system integrator), and the data owner. The law only cares that certain things get done and makes the data owner responsible for causing them to be done; it does not care who actually does them.

Unfortunately, many data owners and cloud providers have misperceptions of their responsibilities that hinder the evolution of a secure and compliant cloud solution. That division of labor varies by the cloud service model. Some requirements will be in the span of the cloud providers’ control, others in the tenant’s control. For example, perhaps there is business continuity or disaster recovery capability that does not ship “standard”, but can be designed-in as a separate data center or a dedicated backup tape solution.  The irony is that plenty of security and compliance capabilities exist today, but cloud providers have not considered how to use these capabilities to meet customer needs.

Cloud providers now acknowledge their role in supporting their clients' legal compliance and are agreeing to "sign" contracts that allow their clients to meet their obligations.

Tuesday, February 22, 2011

Security Architecture: Getting the foundation right

Words of wisdom: avoid downloading software that is of unknown origin, connect to public wireless points at your own risk and change your passwords every couple of months. 

These are reasonable rules of thumb for online safety.  Of course most people would snicker at such tedious tips. When was the last time you changed your Twitter password? 

On the other hand, when "connected to work" we are held to a higher standard. Our computer activities are watched by our employer's device and network firewalls. Business-owned applications are guarded by strong passwords. Security monitoring is reinforced by a bevy of analytic algorithms that try to spot negative trends in user behavior against a baseline "normal".

Everything seems to be under control in a Fortune 1000. Right?

So why is malicious software still seeping into corporate networks? Why is there a mad dash by organizations to prevent "data loss"? And why are mobile devices the new hotbed for cyber insurgency?

Part of the answer lies in so-called advanced persistent threats, or in shorthand: APT. The sources ("bad guys") are geographically dispersed. The actors: state and non-state. The attacks are professional and deliberate. The result is a far superior adversary that makes it hard for us (the good guys) to predict the planning, execution and escape, as well as the consequences of these unmitigated attacks.


Richard Bejtlich, director of incident response for General Electric, qualifies APT by the perpetrators' skills: "They do quality control on their code and have deep teams rich in expertise who are patient and determined to exfiltrate intellectual property and trade secrets. Despite this level of organization, their means of initial compromise are sometimes less sophisticated." For example, the mission could be to leave a memory stick lying around and wait for a naive employee to pick it up and use it at work. The memory stick would release its exploit code onto the laptop and call home.

No matter the security controls in place, it's a sure thing that an APT will find a way in. Want to read about a gnarly and subversive APT? Look no further than the recently leaked reports about a vendor called HBGary and its plans for spy software that cannot be identified because it has no file name, no process and no computer-readable structure that can be detected by scanning alone. It also has a made-for-Hollywood name: the 12 Monkeys rootkit.


Not all is lost. If we look at history we can see examples where, regardless of the passage of time or the introduction of new types of threats, a security paradigm was found (more or less) to keep the peace. Let's take a look at mobile phones. With over 4 billion mobile users worldwide, cell phone hijacking is pretty much a thing of the past -- at least in the GSM industry, which developed an international standard to positively identify each cell phone: the SIM card. Cable boxes are another example that posed a new attack surface. Today, cable boxes have a unique serial number that helps eliminate pirate services.
 
In both these examples, the security paradigm incorporated "base protection". The idea is to create a highly defended perimeter for sensitive hardware and "must work" software functions. 

The trusted base is designed to reduce the attack surface to a point where you can literally measure the confidence you have that certain functions will work as advertised. A primary trait of an APT is to piggy-back on vulnerabilities that no one really knows about. Even if a 0-day exploit were lying around, it would not be possible to take advantage of the situation because any local changes are forbidden.

Organizations can then decide to allow only "known" computers and software to connect to a sensitive network. There would be a means to ascertain that nothing in the trusted base has been tampered with, be it the chip sets, network cards, operating system or applications. Akin to deny-by-default, trusted bases are to a degree 'unchangeable'-by-default. Any exploit software that tries to implant itself on the target computer would not be tolerated -- not unlike an immune system, the trusted base would react violently.

If you could ensure the integrity of the foundation, then an organization could put in place credible process isolation strategies for sensitive applications. Think of a house where you know "for sure" a thief cannot tunnel underneath. Security administrators can deploy compartmentalization policies on an activity-by-activity basis. For example, web browsing would be compartmentalized from all business work. All downloaded software is trapped in a quarantined area or sandbox. Any attempt to move software that has not been a-priori classified as "white-listed" would sound an alarm from the trusted computing base. Users can of course still be conned by fraudulent web sites that remotely capture credentials. Let's just say trusted computing is not a panacea.

The principles and techniques behind trusted computing have been talked about for decades and encompass a variety of techniques such as hardware security modules, integrity measurements, white-listing software and code obfuscation. The term Trusted Computing Base also has a formal definition: "The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and software components that are critical to its security. Bugs occurring inside the TCB might jeopardize the security properties of the entire system." Incidentally, the smaller the TCB, the better -- although there are means to stretch the TCB far and wide to cover an entire system.
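The "integrity measurement plus white-listing" idea can be sketched in a few lines: hash each component and compare against a known-good baseline. Real trusted computing anchors the baseline in hardware (such as a TPM); in this illustrative sketch the baseline is just an in-memory dict and the component contents are made up.

```python
import hashlib

def measure(data: bytes) -> str:
    """An integrity 'measurement' is just a cryptographic hash."""
    return hashlib.sha256(data).hexdigest()

# Known-good measurements (the "white-list"); contents are invented.
BASELINE = {
    "bootloader": measure(b"bootloader v1.0"),
    "kernel": measure(b"kernel v2.6"),
}

def verify(component: str, data: bytes) -> bool:
    """Deny-by-default: unknown or altered components fail the check."""
    expected = BASELINE.get(component)
    return expected is not None and measure(data) == expected

print(verify("kernel", b"kernel v2.6"))            # True: matches baseline
print(verify("kernel", b"kernel v2.6 + rootkit"))  # False: tampered
print(verify("implant", b"anything"))              # False: not white-listed
```

Note how the implant fails not because it was recognized as malicious, but simply because it was never measured into the baseline -- that is the 'unchangeable-by-default' posture described above.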

These techniques are not yet mainstream in computer and security architecture. Take for example the Trusted Platform Module (TPM), which can play a role in checking the integrity of an operating system or can hold sensitive material such as passwords. There are in fact more than 350 million computers with these devices. They remain under-utilized because, so far, they have not been worth the hassle.

It's going to be up to computer and security architects to incorporate key elements of trusted computing into next-generation product and system designs. The challenge will be to truly make it harder for the attacker to break in and walk out without a trace (and without punishment). These solution architectures should take into account rich capabilities to:

Lower the value to the would-be "doer":
  • Prevention: the probability that critical assets would be made unavailable, or the act would be uncovered (and therefore prevented) by intelligence during the planning stages 
  • Interdiction: the probability that the adversary is discovered and caught while carrying out the act
  • Mitigation: the degree to which damage is reduced by improved response action
  • Deterrence: prevent an enemy from conducting future attacks by changing their minds, by attacking their technology, or by more palpable means.
Raise the cost to the would-be "doer" / Raise the bar:  
  • Attribution: the probability that the owner of the asset would be able to identify the adversary, their supporters or suppliers
  • Retribution: the probability that the owner of the asset, given proper attribution of the act, could deliver the desired degree of retribution in terms that strike the heart of the adversary's value structure (source: Defense Science Board 2005 Summary Study)

Thursday, February 17, 2011

Are you closer to getting "owned"?

Owned is a slang word that originated in the 1990s hacker subculture and refers to the acquisition of administrative control over someone else's computer. More than twenty years later, getting "owned" is getting easier and easier:
  • New Capabilities = New Vulnerabilities: Think about it. Add a window to your house and you create space under the window sill for bugs to crawl in during the summer. iPhones, Netflix, Xboxes and iPads are just a few of the dizzying number of ways for us to get online. Each of these new devices creates its own space for "bugs" -- the software and hardware type.
  • Unfair Advantage: Speaking of software bugs, they have a very "long tail", i.e. it takes a long time to find all of them. In one case, a Microsoft Internet Explorer bug lingered for 17 years before being closed out just last year. That's an unfair advantage for the bad guy -- one of many. Some others: the ability for someone to disguise themselves, "act at a distance" and influence without bodily presence. Remember, there is no caller ID for the Internet. It was not designed to trace back the precise location of a person that "clicks" buy.

Are you owned? 

Monday, January 24, 2011

Cloud Shopping. Signing on the dotted line and then some.


Sometimes the blindingly obvious takes a long time to -- well -- become obvious. 

Not so long ago we were promised cloud services would be a one-click shopping experience. Simple contracts or no contracts at all. That seemed far-fetched. And it is.

The business and legal contract is alive and well. And by all accounts there remains a gulf in cogent understanding about all of the compulsory commercial steps (legal/contract, risk management, solution scope) before you (as a customer) can claim victory and turn that sometimes intangible "cloud service" on. The contract is just one component of the due-diligence and commercial process.
  
A cloud computing "purchasing" guide will not differ much from the run-of-the-mill due diligence found in many IT procurement guides. The detail is a re-frame of what we know to be true but for some reason have not applied. Maybe humans abandon what is blindingly obvious until we make a mistake, or until we realize the short-cuts have led us astray.

Whatever the psychology, here are the activities that we've come up with:
  • Perform preliminary/legal due diligence
  • Define compliance requirements
  • Conduct risk assessments
  • Define security controls, including security architecture components provided as standard service elements and controls that may need to be negotiated to meet your requirements. 
  • Define and implement contract monitoring requirements to measure and correct deviations from the service agreement
Nothing here is rocket science. If you are part of an "enterprise" (enter company size here), then the scale of your business means the following is mandatory:

1.    Legal due diligence with the cloud provider
  • A legal contract (document) must be in place to clarify the roles and responsibilities in accordance with a sub-set of regulatory sources, e.g. European Union, Spain, healthcare. As an initial step, the customer of the cloud provider would need to identify the regulatory requirements/frameworks to which they are subject. At the end of the day you will be responsible for meeting those requirements, even those you have not initially identified as relevant in the contract.
  • It's obvious that the regulated entity will be ultimately responsible for compliance. This fact will likely be highlighted in clear contract language to reinforce each party's rights, obligations and intentions. The use of a cloud solution will not in and of itself make a client organization compliant with *all* laws and regulations.
  • Depending on the risks (and the complexity of the relationship), it will be necessary to enumerate all the assumptions related to the ability of all parties involved to deliver on their commitments, be they the data owner, processor or custodian. Here a common interpretation of the law is necessary.
  • A minimum set of terms and conditions is included to raise trust and to help get both the customer and the cloud vendor on common ground, e.g. confidentiality, intellectual property, warranty, payments, termination, limitation of liability. The list of clauses will grow and shrink depending on whether we are dealing with a regulated entity, a private company or a mom-and-pop shop.
  • A cloud vendor will be expected to state in the contract their ability to "support" data protection acts or law. 
  • Let's use an analogy to bring all the points above together. The contract builds trust between two parties. Trust is like a bank account: you add to it and withdraw from it. Clear and unambiguous clauses add to the trust balance. If you start with a "click-through" agreement, you may be looking down the barrel of a gun -- or holding the short end of the stick.
2.    Compliance Audit of the cloud provider 
  • There will be a point where a vendor's assertions ("I do this or that") simply have to be demonstrated and proven. An audit will have to be conducted against some "standard" criteria. You can decide to believe what they say -- at your own risk.
  • An audit can be a cross-check against an industry standard (e.g. PCI DSS), a government standard (e.g. NIST SP800-53) or self-asserted statements (e.g. a SAS 70 report). It is important to specify which compliance or regulatory framework(s) must be met. Other common frameworks: FIPS, the ISO 27000 series, COBIT, COSO, and HITRUST (the Health Information Trust Alliance framework, which includes elements of HIPAA, ISO, PCI, and NIST SP800-53). The Cloud Security Alliance has a number of fantastic check-lists and guidelines.
  • There will be compliance regimes that mandate some sort of certification (testing and evaluation) and a separate accreditation step (a formal audit by an independent third party) after the contract is signed and for specific scenarios. Regardless, the contract language will need to be very clear on which parties must be compliant, accredited or audited, if that is a requirement. The customer or the service provider may need to be compliant/audited/accredited, or perhaps the total solution, spanning services and components of both the customer and the service provider. It all depends.
  • Another practical matter: who is responsible for the costs and management of any audit or accreditation review and certification for each of the scenarios described above? A customer may want a comprehensive, automated capability for system monitoring but be unwilling to pay for it, expecting that the service provider will supply those capabilities at no added cost.
3.    Security Due Diligence and Risk Assessment
  • A customer may want to (re)confirm assumptions that could have a material effect on the cost of the solution and "true up" the cost and the solution against a use case or expectation. The degree of security due diligence depends on the level of trust that has been achieved and that is desired. Security due diligence can take place before the contract is signed, or after -- or (unlikely) never.
  • A list of questions may be asked of the cloud provider, above and beyond compliance. For example: "Do you have controls in place to prevent data leakage or intentional/accidental compromise?" An industry-standard assessment will help in understanding any potential vulnerabilities. The SIG shared assessment framework was developed by several banks, which publish a public-domain tool available on their site at www.sharedassessments.org. They also have a white paper on applying the SIG assessment methodology to cloud computing (available from the Resources page at www.sharedassessment.org/value/resources.html). HITRUST also publishes an assessment tool for their Common Security Framework.
  • Intellectual property and confidentiality will be mentioned in the contract. However, that's not the end of the road. A customer will still need to "design in" the right procedures for things like encryption and restrictions on data access. Key point: it is critical to identify the specific expectations of the customer and service provider. As described elsewhere in this list, a customer may ask or be told that a solution will support a particular compliance regime; however, there is often a wide range of possible methods to "support" or "meet" compliance requirements. As an example, consider the HIPAA requirement to monitor system activity; that could be met several ways, each of which would differ in deployment and operational costs: logging system/application activity and manually reviewing logs, IDS, SIEM, GRC, etc. There is as yet no widely agreed "standard of practice" for minimal levels of protection, so a service provider should specify what capabilities will be provided.
  • A security subject matter expert will be expected to dig deeper into vulnerabilities depending on the scope of the solution. Adequate security controls will then be proposed.
  • Security due-diligence of the cloud provider (the "host") ideally should be wrapped up before the contract is signed. 
  • A security risk assessment of the custom system, application or data elements may need to be completed after the contract is signed and ahead of the lights being turned on.
4.    Security Methods and Quality of the delivered solution 
  • Security controls will not be implemented by magic. The necessary resources and tools will need to be put into action by someone, for both the initial implementation and the ongoing management/maintenance; see my previous comment.
  • Someone has to accept the risk decision before the lights are turned on and the “go" or "no go" decision should be based on the circumstances of that moment. In other words risk tolerance may have changed since the contract was signed. 
  • This procedural step argues for a robust system security review and acceptance process, similar to the Federal "Authority to Operate" procedure (although, hopefully, not as time consuming or costly). FedRAMP is another example of a repeatable, portable certification and accreditation process.
5.    Security Operations and Continuous Monitoring: 
  • We need to consider "run-time" and operational processes in addition to the technical and procedural security controls. The goal: continual validation of the physical and logical environment and its status. What level of reporting or visibility will the service provider make available to the client? Will the service provider track incidents and report them (or all that exceed an agreed-to threshold) to the client? Or will the service provider maintain that they will manage the system and treat it as a "black box" from the customer's perspective? (This argues that reporting requirements, reporting processes and frequency, and operational responsibilities should be explicitly specified in the contract.)
  • You can't just audit your systems on a snapshot basis; things may drift over time, and it's best to be "on top of things".
  • Adherence to regulatory compliance is increasingly about near-real-time reporting of changes, the status of the operating environment and any priority milestones.
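The continuous-monitoring idea above can be sketched as a simple drift check: diff the current state of controls against the agreed baseline and report deviations past a threshold. The control names, baseline values and threshold here are illustrative assumptions, not any particular compliance framework.

```python
# Agreed-to baseline of control settings (invented for illustration).
BASELINE = {
    "encryption_at_rest": True,
    "mfa_enabled": True,
    "log_retention_days": 365,
}

def detect_drift(current):
    """Return controls whose current state deviates from the baseline."""
    return {name: {"expected": expected, "actual": current.get(name)}
            for name, expected in BASELINE.items()
            if current.get(name) != expected}

def should_report(drift, threshold=1):
    # Report to the client once deviations reach the agreed-to threshold.
    return len(drift) >= threshold

snapshot = {"encryption_at_rest": True,
            "mfa_enabled": False,
            "log_retention_days": 90}

drift = detect_drift(snapshot)
print(sorted(drift))          # ['log_retention_days', 'mfa_enabled']
print(should_report(drift))   # True
```

Run on a schedule (or on every configuration change) rather than at annual-audit time, this is the difference between snapshot auditing and being "on top of things".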