Tuesday, February 22, 2011

Security Architecture: Getting the foundation right

Words of wisdom: avoid downloading software of unknown origin, connect to public wireless points at your own risk, and change your passwords every couple of months.

These are reasonable rules of thumb for online safety. Of course, most people would snicker at such tedious tips. When was the last time you changed your Twitter password?

On the other hand, when "connected to work" we are held to a higher standard. Our computer activities are watched by our employer's device and network firewalls. Business-owned applications are guarded by strong passwords. Security monitoring is reinforced by a bevy of analytic algorithms that try to spot negative trends in user behavior against a baseline "normal".

Everything seems to be under control in a Fortune 1000 company. Right?

So why is malicious software still seeping into corporate networks? Why is there a mad dash by organizations to prevent "data loss"? And why are mobile devices the new hotbed for cyber insurgency?

Part of the answer lies in so-called Advanced Persistent Threats, or APT for short. The sources ("bad guys") are geographically dispersed. The actors: state and non-state. The attacks are professional and deliberate. The result is a far superior adversary that makes it hard for us (the good guys) to predict the planning, execution, and escape, as well as the consequences, of these unmitigated attacks.


Richard Bejtlich, director of incident response for General Electric, qualifies APT by the perpetrator's skills: "They do quality control on their code and have deep teams rich in expertise who are patient and determined to exfiltrate intellectual property and trade secrets. Despite this level of organization, their means of initial compromise are sometimes less sophisticated." For example, the mission could be to leave a memory stick lying around and wait for a naive employee to pick it up and use it at work. The memory stick would release its exploit code onto the laptop and call home.

No matter the security controls in place, it’s a sure thing that an APT will find a way in. Want to read about a gnarly and subversive APT? Look no further than the latest WikiLeaks-related reports about a vendor called HBGary and its plans for spy software that cannot be identified because it has no file name, no process, and no computer-readable structure that can be detected by scanning alone. It also has a made-for-Hollywood name: the 12 Monkeys rootkit.


Not all is lost. If we look at history, we can see examples where, regardless of the passage of time or the introduction of new types of threats, a security paradigm was found that (more or less) keeps the peace. Take cell phones. With over 4 billion mobile users worldwide, cell phone hijacking is pretty much a thing of the past -- at least in the GSM industry, which developed an international standard to positively identify each cell phone: the SIM card. Cable boxes are another example of a new attack surface. Today, cable boxes carry a unique serial number that has largely shut down pirate services.
 
In both these examples, the security paradigm incorporated "base protection". The idea is to create a highly defended perimeter for sensitive hardware and "must work" software functions. 

The trusted base is designed to reduce the attack surface to a point where you can literally measure the confidence you have that certain functions will work as advertised. A primary trait of an APT is to piggy-back on vulnerabilities that no one really knows about. Even if a 0-day exploit were lying around, it could not be used to gain a foothold, because any local change to the trusted base is forbidden.

Organizations can then decide to allow only "known" computers and software to connect to a sensitive network. There would be a means to ascertain that nothing in the trusted base has been tampered with, be it the chip sets, network cards, operating system, or applications. Akin to deny-by-default, trusted bases are to a degree 'unchangeable'-by-default. Any exploit that tries to implant itself on the target computer would not be tolerated -- not unlike an immune system, the trusted base would react violently.
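The tamper check described above can be sketched as a deny-by-default allowlist of cryptographic measurements. This is an illustrative sketch only, not any particular product's implementation; the "image" contents and allowlist here are hypothetical stand-ins for real firmware and application binaries.

```python
import hashlib

# Hypothetical "known good" software image; a real trusted base would
# measure actual firmware, OS, and application binaries.
KNOWN_GOOD_IMAGE = b"approved application build 1.0"

# The allowlist holds SHA-256 measurements of everything permitted to run.
ALLOWLIST = {hashlib.sha256(KNOWN_GOOD_IMAGE).hexdigest()}

def measure(image: bytes) -> str:
    """Measure a software image: a cryptographic hash of its contents."""
    return hashlib.sha256(image).hexdigest()

def admit(image: bytes) -> bool:
    """Deny-by-default: run only images whose measurement is allowlisted.
    Even a one-byte implant changes the hash, so tampered code is refused."""
    return measure(image) in ALLOWLIST

print(admit(KNOWN_GOOD_IMAGE))                 # True: measurement matches
print(admit(KNOWN_GOOD_IMAGE + b"<implant>"))  # False: any change is rejected
```

The "immune system" reaction is just the else-branch of `admit`: anything not positively measured and recognized never runs.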

If an organization can ensure the integrity of the foundation, it can put in place credible process-isolation strategies for sensitive applications. Think of a house where you know "for sure" a thief cannot tunnel underneath. Security administrators can deploy compartmentalization policies on an activity-by-activity basis. For example, web browsing would be compartmentalized from all business work. All downloaded software is trapped in a quarantined area, or sandbox. Any attempt to move software that has not been classified a priori as "white-listed" would sound an alarm from the trusted computing base. Users can of course still be conned by fraudulent web sites that remotely capture credentials. Let's just say trusted computing is not a panacea.

The principles and techniques behind trusted computing have been talked about for decades and encompass a variety of approaches, such as hardware security modules, integrity measurements, software white-listing, and code obfuscation. The term Trusted Computing Base also has a formal definition: "The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and software components that are critical to its security. Bugs occurring inside the TCB might jeopardize the security properties of the entire system." Incidentally, the smaller the TCB, the better -- although there are means to stretch the TCB far and wide to cover an entire system.

These techniques have not yet reached mainstream computer and security architecture. Take, for example, Trusted Platform Modules (TPMs), which can play a role in checking the integrity of an operating system or holding sensitive material such as passwords. There are in fact more than 350 million computers with these devices, yet they remain under-utilized because, for most organizations, they are not worth the hassle.
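For a flavor of how a TPM checks operating system integrity, here is a minimal simulation of the TPM 1.2 PCR "extend" operation, where the new register value is the SHA-1 of the old value concatenated with the measurement digest. The boot stages below are made-up placeholders, not real measured components.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new_pcr = SHA-1(old_pcr || SHA-1(measurement)).
    PCRs can only be extended, never set directly, so the final value
    commits to every stage measured since boot, in order."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20  # PCRs reset to all-zero at power-on

# Hypothetical boot chain: each stage is measured before it runs.
for stage in (b"bootloader", b"kernel", b"init"):
    pcr = pcr_extend(pcr, stage)

print(pcr.hex())  # changing any stage, even by one bit, changes this value
```

A verifier that knows the expected final PCR value can tell whether the exact same boot chain ran, which is the basis for admitting only "known" computers to a sensitive network.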

It is going to be up to computer and security architects to incorporate key elements of trusted computing into next-generation product and system designs. The challenge will be to truly make it harder for the attacker to break in and walk out without a trace (and without punishment). These solution architectures should take into account rich capabilities to:

Lower the value to the would-be "doer":
  • Prevention: the probability that critical assets would be made unavailable, or the act would be uncovered (and therefore prevented) by intelligence during the planning stages 
  • Interdiction: the probability that the adversary is discovered and caught while carrying out the act
  • Mitigation: the degree to which damage is reduced by improved response action
  • Deterrence: prevent an enemy from conducting future attacks by changing their minds, by attacking their technology, or by more palpable means.
Raise the cost to the would-be "doer" / Raise the bar:  
  • Attribution: the probability that the owner of the asset would be able to identify the adversary, their supporters, or their suppliers
  • Retribution: the probability that the owner of the asset, given proper attribution of the act, could deliver the desired degree of retribution in terms that strike the heart of the adversary’s value structure (source: Defense Science Board 2005 Summary Study)

Thursday, February 17, 2011

Are you closer to getting "owned"?

Owned is a slang word that originated in the 1990s hacker sub-culture and refers to the acquisition of administrative control over someone else's computer. Two decades later, getting "owned" is getting easier and easier:
  • New Capabilities = New Vulnerabilities: Think about it. Add a window to your house and you create gaps around the window sill for bugs to crawl through in the summer time. iPhones, Netflix, Xboxes, and iPads are just a few of the dizzying number of ways for us to get online. Each of these new devices creates its own space for "bugs" -- the software and hardware type.
  • Unfair Advantage: Speaking of software bugs, they have a very "long tail", i.e. it takes a long time to find all of them. In one case, a Microsoft Internet Explorer bug lingered for 17 years before being closed out just last year. That's an unfair advantage for the bad guy -- one of many. Others include the ability to disguise oneself, "act at a distance", and influence without bodily presence. Remember, there is no caller ID for the Internet. It was not designed to trace back the precise location of a person who "clicks" buy.

Are you owned?