Thursday, April 12, 2012

Leaks R US

Been a while. This caught my eye: http://vimeo.com/40128698

Whenever I have to do a double take on what it all means -- well, then the tech world is moving very, very fast.

Walid

Friday, July 22, 2011

Accenture Cloud Security POV

It took me forever to write, review and pull together all the great contributions from my colleagues.

The white paper is published below--


Taming the Wild West: How to Build a Strong Cloud Security Strategy

Saturday, May 28, 2011

Security Incident Response: Not If but When

First some dismal headline news:
We know the volume of direct attacks on networks, smart phones and applications is increasing -- spam, phishing scams, malware, mobile devices, DDoS, advanced persistent threats and so on. Unfortunately, some companies still do not have a clue what to do right after they've been the victim of an incident.

There is no official word from Lockheed on the precise nature of the cyber attack it encountered, and the company is relatively mum on the subject. Yet the reports dribbling out do paint a picture of Lockheed at the helm as it mitigates a "significant and tenacious" threat.


Lockheed's operations team is in the throes of a well-defined security incident response process. Here are snippets of Lockheed's actions (announced or reported) in the context of a common incident-handling framework:
  • Declaration, Triage and Investigation: When an event has been reported by employees (or detected by automated security controls), the first stage carried out by the incident response team should be to understand how bad the situation is, establish the severity and set the priority for how to deal with the incident. From the announcements and the company's conviction we know that Lockheed immediately began an investigation to determine the category of the attack -- whether it is internal or external, the assets affected by the incident and the criticality of those assets.
  • Containment: A containment strategy buys the incident response team time for proper investigation and determination of the incident's root cause. It is reported that Lockheed shut down its virtual private network after determining that SecurID tokens were used to gain access to its network.
  • Analysis: Determine what happened and try to identify the root cause of the incident. We've read that work is well under way to preserve "electronic DNA" that may have been left by the attackers. Chris Ortman, US Homeland Security spokesman, said that his agency and the Pentagon are working with Lockheed to "provide recommendations to mitigate further risk".
  • Recovery: Once the incident is understood, we move into the recovery stage, which means implementing the necessary fix to ensure this type of incident cannot happen again. It is reported that Lockheed has moved ahead with some sort of upgrade to its existing SecurID tokens, incorporated additional security for remote logins, reset employee passwords and switched from four-digit to eight-digit access codes generated by the tokens.
You can find various activity models at the CERT Coordination Center, the Forum of Incident Response and Security Teams (FIRST), the National Institute of Standards and Technology (Computer Security Incident Handling Guide) or the ISO/IEC 27000 series.
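To make those stages a bit more concrete, here is a minimal sketch in Python of an incident record moving through triage, containment, analysis and recovery. The class names, severity rules and findings are purely illustrative assumptions, not Lockheed's actual process or any particular framework's required fields.

```python
# Illustrative sketch only: a toy incident record walked through the common
# stages described above. Severity rules and findings are made up.

from dataclasses import dataclass, field

@dataclass
class Incident:
    source: str                      # e.g. "employee report" or "automated control"
    category: str                    # "internal" or "external"
    affected_assets: list = field(default_factory=list)
    asset_criticality: int = 1       # 1 (low) to 5 (crown jewels)
    severity: str = "unknown"
    contained: bool = False
    root_cause: str = "unknown"

def triage(incident):
    """Declaration and triage: set severity and priority from what is known so far."""
    incident.severity = "high" if incident.asset_criticality >= 4 else "medium"
    return incident

def contain(incident):
    """Containment: buy time for investigation, e.g. by cutting off the affected access path."""
    incident.contained = True
    return incident

def analyze(incident):
    """Analysis: preserve evidence and work toward a root cause."""
    incident.root_cause = "stolen remote-access credentials"   # placeholder finding
    return incident

def recover(incident):
    """Recovery: the fixes intended to keep this class of incident from recurring."""
    return ["rotate credentials", "strengthen remote logins", "reset passwords"]

incident = triage(Incident(source="automated control", category="external",
                           affected_assets=["vpn"], asset_criticality=5))
incident = analyze(contain(incident))
print(incident.severity, incident.root_cause, recover(incident))
```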


If it's just a matter of time, then your organization's capability to properly handle incidents should be a first-class citizen. Sony is reported to have had seven security incidents in two months, and they are not alone as targets. It's very likely your organization is under reconnaissance or even low-and-slow attack right now. While economic times are tight, you will have no choice but to invest in technology and processes that improve response.


And the future of incident response is "just in time" solutions to an ongoing situation. These configurable "courses of action" will be represented by remediation workflows and decision-making loops. You don't want to initiate a fix to a problem in one system, only to cause a loss of function in another system.
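As a rough illustration of that last point, a codified course of action might first check whether the system it is about to isolate still has live dependents. The dependency map and system names below are hypothetical:

```python
# Hypothetical sketch: refuse to run a remediation step that would knock out
# a system something else still depends on. Dependency data is made up.

DEPENDS_ON = {
    "order-entry": ["vpn-gateway"],   # order entry relies on the VPN gateway
    "payroll": ["ldap"],
}

def safe_to_remediate(target_system, systems_kept_online):
    """True only if no system that must stay online depends on the target we want to isolate."""
    dependents = [name for name, deps in DEPENDS_ON.items()
                  if target_system in deps and name in systems_kept_online]
    return not dependents

# Isolating the VPN gateway is blocked while order-entry must stay up.
print(safe_to_remediate("vpn-gateway", {"order-entry", "payroll"}))   # False
print(safe_to_remediate("vpn-gateway", {"payroll"}))                  # True
```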


As cyber attacks become more complex, the recovery and restoration workflows will be more diverse. They will be codified to behave in accordance with the variable inputs and outputs of each of the incident investigation, analysis and containment activities. Security incident response will be optimized based upon resource availability and risk factors. To move beyond the sound bites and theory, send me a note to discuss this topic.


How best to end this particular blog entry? With a clichéd yet accurate quote: "By failing to prepare, you are preparing to fail" - Benjamin Franklin.






Wednesday, April 6, 2011

In The Cloud: Sharing is Caring

At Accenture we are refreshing our Cloud Security & Data Privacy point of view. It's been two years since we counseled more caution (and less action) on public cloud computing.

Today, we are more optimistic and more realistic about the road ahead. 

As a co-author of both, here are some observations on what has changed the sentiment:
  • We've moved away from a lot of the red-herring topics that can distract from the more significant issues 
  • Cloud providers have done a good job plowing the field and helping organizations get a good "feeling" about security and privacy -- in particular the SaaS providers
  • Cloud providers are now willing to change standard contracting and to acknowledge that data owners remain responsible for the acts and omissions of their service providers.
  • We are seeing a move away from a take-it-or-leave-it approach to security and compliance on the part of cloud provider offerings
Many companies will no doubt worry about theft, loss, or legal noncompliance if they put data in the public cloud. But waiting on the sidelines isn't a good option, either. In the refreshed point of view we talk about five steps for crafting a strong cloud security strategy -- now.

One of those steps is knowing how to share responsibility and risk. Clarifying the roles of the data owner, the cloud provider (and the system integrator, if applicable) in delivering legally compliant solutions is crucial. From a legal perspective, there is no clear division of labor between the cloud provider, an application manager (or system integrator), and the data owner. The law only cares that certain things get done and makes the data owner responsible for causing them to be done -- it does not care who actually does them.

Unfortunately, many data owners and cloud providers have misperceptions of their responsibilities that hinder the evolution of a secure and compliant cloud solution. That division of labor varies by the cloud service model. Some requirements will be in the span of the cloud provider's control, others in the tenant's control. For example, perhaps there is business continuity or disaster recovery capability that does not ship "standard", but can be designed in as a separate data center or a dedicated backup tape solution. The irony is that plenty of security and compliance capabilities exist today, but cloud providers have not considered how to use these capabilities to meet customer needs.

Cloud providers now acknowledge their role in supporting their clients' legal compliance and are agreeing to "sign" contracts that allow their clients to meet their obligations.

Tuesday, February 22, 2011

Security Architecture: Getting the foundation right

Words of wisdom: avoid downloading software that is of unknown origin, connect to public wireless points at your own risk and change your passwords every couple of months. 

These are reasonable rules of thumb for online safety.  Of course most people would snicker at such tedious tips. When was the last time you changed your Twitter password? 

On the other hand, when "connected to work" we are held to a higher standard. Our computer activities are watched by our employer's device and network firewalls. Business-owned applications are guarded by strong passwords. Security monitoring is reinforced by a bevy of analytic algorithms that try to spot negative trends in user behavior against a baseline "normal".
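A toy version of that kind of baseline monitoring might flag activity that drifts several standard deviations away from a user's historical "normal". The numbers and threshold below are illustrative assumptions, not any monitoring product's actual logic:

```python
# Toy baseline check: flag behavior far outside a user's historical norm.
# Data and the 3-sigma threshold are illustrative only.

from statistics import mean, stdev

def is_anomalous(history, todays_value, threshold=3.0):
    """True if today's value sits more than `threshold` standard deviations from the baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return todays_value != baseline
    return abs(todays_value - baseline) / spread > threshold

logins_per_day = [3, 4, 2, 5, 3, 4, 3]      # a quiet week for one user
print(is_anomalous(logins_per_day, 4))      # False: within the normal range
print(is_anomalous(logins_per_day, 40))     # True: worth a second look
```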

Everything seems to be under control in a Fortune 1000 company. Right?

So why is malicious software still seeping into corporate networks? Why is there a mad dash by organizations to prevent "data loss"? And why are mobile devices the new hotbed for cyber insurgency?

Part of the answer lies in so-called advanced persistent threats, or in shorthand: APT. The source (the "bad guy") is geographically dispersed. The actors: state and non-state. The attacks are professional and deliberate. The result is a far superior adversary that makes it hard for us (the good guys) to predict the planning, execution and escape, as well as the consequences of these unmitigated attacks.


Richard Bejtlich, director of incident response for General Electric, qualifies APT by the perpetrators' skills: "They do quality control on their code and have deep teams rich in expertise who are patient and determined to exfiltrate intellectual property and trade secrets. Despite this level of organization, their means of initial compromise are sometimes less sophisticated." For example, the mission could be to leave a memory stick lying around and wait for a naive employee to pick it up and use it at work. The memory stick would release its exploit code onto the laptop and call home...

No matter the security controls in place, it's a sure thing that an APT will find a way in. Want to read about a gnarly and subversive APT? Look no further than the recently leaked reports about a vendor called HBGary and its plans for spy software that cannot be identified because it has no file name, no process or computer-readable structure that can be detected by scanning alone. It also has a made-for-Hollywood name: the 12 Monkeys rootkit.


Not all is lost. If we look at history, we can see examples where, regardless of the passage of time or the introduction of new types of threats, a security paradigm was found (more or less) to keep the peace. Let's take a look at mobile phones. With over 4 billion mobile users worldwide, cell phone hijacking is pretty much a thing of the past -- at least in the GSM industry, which developed an international standard to positively identify each cell phone: the SIM card. Cable boxes are another example that posed a new attack surface. Today, cable boxes have a unique serial number that eliminates pirate services.
 
In both these examples, the security paradigm incorporated "base protection". The idea is to create a highly defended perimeter for sensitive hardware and "must work" software functions. 

The trusted base is designed to reduce the attack surface to a point where you can literally measure the confidence you have that certain functions will work as advertised. A primary trait of an APT is to piggy-back on vulnerabilities that no one really knows about. Even if a 0-day exploit were lying around, it would not be possible to take advantage of the situation because any local changes are forbidden.

Organizations can then decide to allow only "known" computers and software to connect to a sensitive network. There would be a means to ascertain that nothing in the trusted base has been tampered with, be it the chip sets, network cards, operating system or applications. Akin to deny-by-default, trusted bases are to a degree 'unchangeable'-by-default. Any trace of exploit software that tries to implant itself on the target computer would not be tolerated -- not unlike an immune system, the trusted base would react violently.
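One way to picture that "measured and unchangeable-by-default" behavior is a simple integrity check against a manifest of known-good hashes. Real trusted computing anchors these measurements in hardware and firmware (e.g. a TPM); the paths and manifest values below are hypothetical:

```python
# Hypothetical sketch of integrity measurement: compare components against a
# manifest of known-good SHA-256 digests and flag anything that has drifted.
# A real trusted computing base roots this in hardware, not in a script.

import hashlib

KNOWN_GOOD = {
    # path -> expected digest (placeholder value for illustration)
    "/boot/loader": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def measure(path):
    """Return the SHA-256 digest of a file's contents, or None if it is missing."""
    try:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    except OSError:
        return None

def verify_trusted_base(manifest):
    """Return the components whose current measurement no longer matches the manifest."""
    return [path for path, expected in manifest.items() if measure(path) != expected]

tampered = verify_trusted_base(KNOWN_GOOD)
if tampered:
    print("Refusing to proceed; unexpected change in:", tampered)
```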

If you can ensure the integrity of the foundation, then an organization can put in place credible process isolation strategies for sensitive applications. Think of a house where you know "for sure" a thief cannot burrow underneath. Security administrators can deploy compartmentalization policies on an activity-by-activity basis. For example, web browsing would be compartmentalized from all business work. All software that is downloaded is trapped in a quarantined area or sandbox. Any attempt to move software that has not been a priori classified as "white-listed" would trigger an alarm from the trusted computing base. Users can of course still be conned by fraudulent web sites that remotely capture credentials. Let's just say trusted computing is not a panacea.

The principles and techniques behind trusted computing have been talked about for decades and encompass a variety of techniques such as hardware security modules, integrity measurements, white-listing software and code obfuscation. The term Trusted Computing Base also has a formal definition: "The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and software components that are critical to its security. Bugs occurring inside the TCB might jeopardize the security properties of the entire system". Incidentally, the smaller the TCB, the better -- although there are means to stretch the TCB far and wide to cover an entire system.

The techniques are not yet mainstream in computer and security architecture. Take for example the Trusted Platform Module (TPM), which can play a role in checking the integrity of an operating system or holding sensitive material such as passwords. There are in fact more than 350 million computers with these devices. They are under-utilized because, for many organizations, they are not yet worth the hassle.

It's going to be up to computer and security architects to incorporate key elements of trusted computing into next-generation product and system designs. The challenge will be to truly make it harder for the attacker to break in and walk out without a trace (and without punishment). These solution architectures should take into account rich capabilities to:

Lower the value to the would-be "doer":
  • Prevention: the probability that critical assets would be made unavailable, or the act would be uncovered (and therefore prevented) by intelligence during the planning stages 
  • Interdiction: the probability that the adversary is discovered and caught while carrying out the act
  • Mitigation: the degree to which damage is reduced by improved response action
  • Deterrence: prevent an enemy from conducting future attacks by changing their minds, by attacking their technology, or by more palpable means.
Raise the cost to the would-be "doer" / Raise the bar:  
  • Attribution: the probability that the owner of the asset would be able to identify the adversary, their supporters or suppliers
  • Retribution: the probability that the owner of the asset, given proper attribution of the act, could deliver the desired degree of retribution in terms that strike the heart of the adversary's value structure (source: Defense Science Board 2005 Summary Study)

Thursday, February 17, 2011

Are you closer to getting "owned"?

Owned is a slang word that originated among the 1990s hacker sub-culture and refers to the acquisition of administrative control over someone else's computer. More than twenty years later, getting "owned" is getting easier and easier:
  • New Capabilities = New Vulnerabilities: Think about it. Add a window to your house and you create gaps around the window sill for bugs to crawl through in the summertime. iPhones, Netflix, Xboxes and iPads are just a few of the dizzying number of ways for us to get online. Each of these new devices creates its own space for "bugs" -- the software and hardware type.
  • Unfair Advantage: Speaking of software bugs, they have a very "long tail", i.e. it takes a long time to find all of them. In one case, a Microsoft Internet Explorer bug lingered for 17 years before it was closed out just last year. That's an unfair advantage for the bad guy. One of many unfair advantages. Some others: the ability for someone to disguise themselves, "act at a distance" and influence without bodily presence. Remember, there is no caller ID for the Internet. It was not designed to trace back the precise location of a person who "clicks" buy.

Are you owned?