Sunday, December 12, 2010

A Model for Credible and Responsive Security Operations

The focal point for keeping tabs on security and compliance activities has been the Security Operations Center (SOC): a physical location on the front line that handles incident reports, reviews system logs and constantly monitors the environment - 24x7.

Take the latest WikiLeaks extravaganza (details left for another blog post), which pits the security operations of some of the world’s most IT-savvy companies (Visa, MasterCard, PayPal and Amazon.com) against relatively unorganized “hacktivists”.

The Wall Street Journal online gives a picture of what a SOC looks and feels like: “in PayPal's network operation center, charts showing total payments processed per minute and total traffic to the site, along with other data, are projected on a large, curved wall in front of around 20 workstations, each holding three to five computer monitors.” Add security events and blinking lights for threat alerts and you get a SOC.

According to the reports, PayPal did not suffer any downtime, and neither did Amazon.com. MasterCard and Visa didn’t fare so well. The article speculated that MasterCard and Visa simply did not invest in their security operations to “gird for attacks from a more-sophisticated cyber army”.

That sort of after-the-fact (let’s investigate what happened and then make things better) approach to security operations can be very costly. A survey of 45 organizations by the Ponemon Institute found that, on average, a cybercrime incident takes four weeks to investigate and that cybercrime costs each organization an average of $3.8 million per year in financial loss and response and remediation costs.

Security operations also serve another omnipresent master: regulatory compliance mandates and legal enforcement. Hundreds of laws are introduced every year at the US state level that affect the collection, use, handling, and disclosure of personal data in one way or another. These laws may be introduced as “privacy” laws, or may be attached to financial services, health care, employment, children’s services or other legislation.

And so the conventional model of security operations is to pick and choose from a menu of out-of-the-box technologies and tools, with no overarching strategy or long-tail capabilities roadmap. By the latter I mean a lack of investment in niche and customizable applications, such as those for detecting illicit insider threats or dealing with persistent threats. The standard functions are sourced and acquired either in-house or as a managed service: infrastructure security, device monitoring and management, security incident and event management, security incident response and forensics, and threat research and vulnerability management.

If we are not getting any safer and compliance demands continue unabated, is doing the same thing over and over again not insanity? The thought is a new working model that helps guide an organization toward a better-quality operational picture, one that is responsive rather than reactive. I am not suggesting new technology or standards, but rather a framework to orient the enterprise in sourcing, acquiring and deploying the arsenal.

The model starts with an abstract layer of requirements to help evaluate the maturity of an organization, not unlike the common principles (integration, automation, service-oriented architectures) that apply to CRM, ERP, or SCM, a domain that is being reframed by cloud computing.

Here are some of the competencies or master specifications that could apply to “next generation” security operations:

Process Automated Responses
  • Operator activity should run as automated processes that accommodate human-in-the-loop workflows for decision making and that can be optimized over time. The aspiration here is processes that shuttle information to policy makers, engineers, the C-suite and application developers. If data encryption is ramping up, then security operations would be prepared to deal with the ramifications of additional reporting. An anti-virus clean-up task would trigger notifications to engineers and analysts that spell out countermeasures.
  • Strategic plans and courses of action would govern pre- and post-incident activity and be designed to avoid disruption to the mission or business: basically some sort of “rules engine” that makes the system hum (a rough sketch follows this list). EINSTEIN 3 is a system to be deployed by US government agencies that “will have the ability to automatically detect and respond appropriately to cyber threats before harm is done, providing an intrusion-prevention system supporting dynamic defense”.
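To make the “rules engine” idea a bit more concrete, here is a minimal Python sketch of an automated response flow with a human-in-the-loop gate. Everything in it (the Event and CourseOfAction names, the playbook entries, the severity scale) is an illustrative assumption, not a reference to any particular SOC product or to EINSTEIN 3.

```python
# Minimal sketch of a rules-engine style response flow with a human-in-the-loop
# gate. All names and rules here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    source: str          # e.g. "antivirus", "ids", "firewall"
    severity: int        # 1 (low) .. 5 (critical)
    description: str

@dataclass
class CourseOfAction:
    name: str
    matches: Callable[[Event], bool]     # rule predicate
    respond: Callable[[Event], None]     # countermeasure or notification
    needs_approval: bool                 # human-in-the-loop gate

def notify_engineers(event: Event) -> None:
    print(f"[notify] engineers/analysts: {event.description}")

def quarantine_host(event: Event) -> None:
    print(f"[respond] quarantining host involved in: {event.description}")

PLAYBOOK: List[CourseOfAction] = [
    CourseOfAction("av-cleanup-followup",
                   matches=lambda e: e.source == "antivirus",
                   respond=notify_engineers,
                   needs_approval=False),
    CourseOfAction("critical-intrusion",
                   matches=lambda e: e.severity >= 4,
                   respond=quarantine_host,
                   needs_approval=True),
]

def handle(event: Event, approver: Callable[[Event], bool]) -> None:
    """Run the first matching course of action; escalate to a human when required."""
    for coa in PLAYBOOK:
        if coa.matches(event):
            if coa.needs_approval and not approver(event):
                print(f"[hold] {coa.name} awaiting operator decision")
                return
            coa.respond(event)
            return
    print("[log] no matching rule; event recorded for later review")

if __name__ == "__main__":
    # An IDS alert severe enough to require an operator's sign-off before acting
    handle(Event("ids", 5, "suspected lateral movement"), approver=lambda e: True)
```

The point of the sketch is the shape, not the rules themselves: responses are declared as data, low-impact actions run automatically, and high-impact ones pause for an operator decision.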
Enablement of Virtual Resources
  • The ability to rapidly source high-powered computing resources to process sudden or unplanned volumes of traffic as well as to test countermeasures (a rough sketch of elastic sandbox capacity follows this list). For example, a bandwidth-based attack like a denial-of-service would be met with defenses that shield parts of the network from collateral damage. The current trend is also to break apart malware, study it for intent and origin, and then design countermeasures. There are vendors that offer tools to simulate the behavior of malware in safe, sanctioned test-bed environments. Support for integration and interoperability is key.
  • Private and public partnerships are used to analyze indicators of attacks early in the adversary’s planning phases. Imagine, if you will, the National Security Agency (in the US) working with private firms to alert them of possible cyber strikes, a slippery slope of government intervention. Google is working with the NSA to help sort through the Chinese hack of its computers.
  • A virtual team of experts and analysts will make up the diverse mix of users and consumers. Decades of research and technology in IT security have shown us that absolute security is a fallacy. Now it’s also clear that we have not invested in nurturing the right crop of professionals. A recent study by the Center for Strategic and International Studies hammers home the point for the US: we simply don’t have the talent to stay steps ahead. So outside collaboration, and the means to do so, will be critical.
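As a thought experiment on rapidly sourcing virtual resources, here is a minimal Python sketch of sizing a pool of isolated sandbox VMs against a backlog of suspicious samples. The Provisioner interface and its request_vm/release_vm calls are hypothetical placeholders for whatever cloud or virtualization API an organization actually uses; the per-VM capacity and ceiling are made-up numbers.

```python
# Minimal sketch: grow or shrink a pool of isolated analysis VMs to match the
# backlog of suspicious samples or traffic captures. Provisioner is a
# hypothetical stand-in for a real cloud/virtualization API.
from typing import List, Protocol

class Provisioner(Protocol):
    def request_vm(self, profile: str) -> str: ...   # returns a new VM id
    def release_vm(self, vm_id: str) -> None: ...

def scale_sandbox_pool(backlog: int,
                       active_vms: List[str],
                       provisioner: Provisioner,
                       samples_per_vm: int = 25,
                       max_vms: int = 20) -> List[str]:
    """Size the pool so each VM handles roughly samples_per_vm items."""
    desired = min(max_vms, max(1, -(-backlog // samples_per_vm)))  # ceiling division
    while len(active_vms) < desired:
        active_vms.append(provisioner.request_vm(profile="isolated-sandbox"))
    while len(active_vms) > desired:
        provisioner.release_vm(active_vms.pop())
    return active_vms
```

The same elastic pattern applies to absorbing a bandwidth-based attack: extra capacity is requested while the surge lasts and released once it passes.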
Analytics-Driven Security
  • We know that fusion of all sorts of data is pivotal to creating higher-fidelity alerts about intruders and to spotting subtle yet suspicious activity. The idea is to monitor ingress and egress points and pull data from human resources, firewall logs and even law enforcement. The trouble is that today’s intrusion detection systems are short-sighted and often blind-sided. The US government’s EINSTEIN 2 system is meant to up the ante and is installed at all connections between government computer systems and the public Internet; it gathers and sketches out threat signatures from foreign intelligence and DoD information assurance missions.
  • Absorbing and monitoring Internet traffic and human, software and computer activity is one thing; trying to make sense of it all is quite another. The mathematics behind statistical analysis and analogous techniques in data mining offer powerful aids for understanding past and present events. Forecasting models can be used to project the probability that a threat scenario will come to pass.
  • Privacy-enhancing mechanisms will be needed to limit the collection and retention of personally identifiable information. With any surveillance recommendation, the temptation to over-reach authority will be there, and the risk of privacy violations is all too real. Basically, the principle here should be to redact or obfuscate sensitive content that is not pertinent to downstream threat analysis and won’t advance compliance (a rough sketch combining anomaly scoring and redaction follows this list).
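To illustrate the analytics and privacy points together, here is a minimal Python sketch under simplifying assumptions: a z-score over a recent window stands in for the statistical baseline, and a small redaction routine masks fields that downstream analysts do not need. The field names, window and thresholds are illustrative, not drawn from any particular SIEM or intrusion detection product.

```python
# Minimal sketch of (1) scoring an activity count against a statistical
# baseline and (2) redacting personally identifiable fields before an event
# is shared downstream. All field names and numbers are illustrative.
import re
import statistics
from typing import Dict, List

def anomaly_score(history: List[int], current: int) -> float:
    """Z-score of the current count against the recent baseline."""
    if len(history) < 2:
        return 0.0
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return (current - mean) / stdev

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(event: Dict[str, str],
           sensitive_fields=("username", "employee_id")) -> Dict[str, str]:
    """Drop or mask fields that are not needed for downstream threat analysis."""
    cleaned = {k: v for k, v in event.items() if k not in sensitive_fields}
    return {k: EMAIL.sub("[redacted-email]", v) for k, v in cleaned.items()}

# Example: counts fused from firewall, VPN and badge logs for one user segment
history = [42, 39, 45, 40, 44, 41]
print(anomaly_score(history, current=97))          # far above baseline -> alert
print(redact({"username": "jdoe", "action": "login from jdoe@example.com"}))
```

A real deployment would obviously use richer models and policy-driven redaction rules, but the ordering matters: score and enrich first, strip what is not pertinent before the event leaves the SOC.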
Do these themes make sense? Send me a note at walid.negm@accenture.com. I’d like to hear your thoughts on whether they are relevant to CIOs, CTOs, CISOs and business executives struggling to get their heads around where to prioritize their security investments.
