Wednesday, June 30, 2010

The Impact of Scale


The topic of scale is best illustrated through an analogy. 

If you live in a mega-city, city planning and transportation come to mind. With so many people living in one place, responsible governments must deal with public safety and create livable structures. Individual buildings, transportation routes and water supply systems are all parts of the city. Over time we got clever and started to architect and build vertically to cope with scarce physical space. As the number of cars on the streets zooms upwards, we study traffic patterns and congestion and invest in alternative modes of transportation. All told, these "parts" of the city form a system.

The dimensions we use to judge whether a "system" is large include the number of elements (people, hardware, "things"), tasks, relationships, policies, domains of interest, enforcement points and so on.

If we are investing in smart buildings, smart cities, smart transportation, smart grids, healthcare infrastructures and the like, we are talking large scale. We are talking about "systems of systems" that are complex and, in fact, ultra-large.

The theme of ultra-large-scale IT systems is explored in "Ultra-Large-Scale Systems: The Software Challenge of the Future", a report published in 2006.

Ask the statisticians and they would agree that big is in: 
  • Number of Cell Phones Worldwide Hits 4.6B in 2010
  • 4,000: the number of lines of code in MS-DOS 1.0, Microsoft's first operating system
  • 50 million: the number of lines of code estimated to be in Microsoft Vista
  • Data storage requirements for smart meters are expected to grow at a rate resembling a natural logarithmic curve
  • The 2009 movie Avatar is reported to have taken over one petabyte of local storage for the rendering of the 3D CGI effects
  • The US DoD has 3 million desktops, with just one data center housing 18 terabytes of storage
Of course, big things are broken down into manageable parts in an attempt to make sense of the pieces: cars, routes, traffic control, toll plazas and so on. 

However, we are still left with menacing problems:
  • Emergent properties: in plain English, things will happen that we cannot cope with because they are brand new and have never happened before (e.g. the impact of mobile phones on driving behavior)
  • We cannot completely define, let alone measure, the properties of all components a priori; some things are simply outside our view
  • It is impossible to update all elements of the system (as each changes) without leaving some window of vulnerability or ambiguity
  • Our surrounding environment evolves continuously, with unknowable outcomes and inconsistent (changing) states (e.g. new devices, new personalities, new enemies)
  • There are no clear lines of ownership, possession or boundaries (e.g. cloud computing)
The Internet and the typical IT infrastructure of an enterprise or government agency are very complex. We think about external networks, web sites, data flows, applications and users, insiders, hardware and software, transactions and supply chains. The Internet, and everything in and connected to it, is very much an ecosystem: a dynamic community of interdependent and competing organizations in a complex and changing environment. Its elements show intrinsically adaptive behavior, and we can measure the ecosystem's health and sense troublesome indications. 

To deal with that sort of large-scale complexity, rich research is needed (and under way) to:
  • Understand the biological metaphor and its applicability to tech-centric views
  • Use heuristics and learn how to better predict behaviors 
  • Learn how to respond to external stimuli with keener intelligence gathering 
  • Assure the software and hardware components that are used to build systems 
  • Survive hostile intentions and operate in a continuously changing system

Thursday, June 17, 2010

It's a Messy World

We live in a world that is rather messy. 


Russell Ackoff defined "a mess" as "interacting problems or issues that are not easy to appreciate as a whole" (Flood & Carson, 1993). You are in a mess if you can't put any structure to the situation.


Organizations are not closed systems. You can't measure everything because things change so quickly. The Internet is a vexing source of "unknown unknowns". We depend on software that, if you look closer, is made up of piece parts whose provenance is ambiguous at best. It's "Office 2.0" for most employees, with no definite way for the employer to tell what's good or bad behavior. 


It is safe to say that when it comes to IT we are dealing with ... an unstructured situation. 


The growth in volume, speed and diversity of data, devices and threats is non-linear, so our thinking about IT security must also be non-linear. 
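To see why linear thinking fails against non-linear growth, here is a minimal sketch (not from the original post; all numbers are hypothetical) comparing storage provisioned on a straight-line trend against demand that compounds each year:

```python
# Hypothetical illustration: capacity planned linearly vs. demand that
# compounds ~60% per year. The base figures and growth rate are invented.

def linear_capacity(year, base=100, add_per_year=100):
    """Storage we provision if we assume linear growth (in TB)."""
    return base + add_per_year * year

def nonlinear_demand(year, base=100, growth=1.6):
    """Actual demand if data compounds 60% per year (in TB)."""
    return base * growth ** year

for year in range(6):
    cap = linear_capacity(year)
    demand = nonlinear_demand(year)
    print(f"year {year}: provisioned {cap:6.0f} TB, "
          f"needed {demand:7.1f} TB, "
          f"shortfall {max(0.0, demand - cap):7.1f} TB")
```

Within a few iterations the compounding curve overtakes any straight line, which is the post's point: plans (and security thinking) calibrated to yesterday's growth rate fall behind.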


For example, the correlation analysis used to flag a security incident or track an impending threat is very much divorced from cause-and-effect accuracy. We are left with false emergencies, lots of noise and paralysis on priorities. We are still building "naive" applications instead of making them "street smart": engineered with the knowledge that they will be broken into and constantly attacked. Finally, there is a tolerance for living with broken links. Yet to make informed security decisions we need to connect the rationale and outcomes of our policy decisions, cultural expectations, counter-intelligence activities and the lessons learnt from incidents. 
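The "divorced from cause and effect" point can be made concrete with a toy sketch of the kind of naive correlation rule in question: fire an alert whenever enough failure events land in one time window, with no model of why they occurred. The event format and thresholds here are invented for illustration.

```python
# Illustrative sketch of naive windowed event correlation (assumed rule,
# not any specific product): >= threshold "fail" events within `window`
# seconds triggers an alert, regardless of cause.

from collections import deque

def correlate(events, window=60, threshold=3):
    """Return timestamps at which an alert would fire.

    events: iterable of (timestamp_seconds, kind) tuples.
    """
    recent = deque()
    alerts = []
    for ts, kind in sorted(events):
        if kind != "fail":
            continue
        recent.append(ts)
        # Drop events that have aged out of the correlation window.
        while recent and ts - recent[0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            alerts.append(ts)
    return alerts

# A user mistyping a password three times is indistinguishable from an
# attack under this rule, so the alert fires: a false emergency.
noisy = [(0, "fail"), (20, "fail"), (40, "fail")]
print(correlate(noisy))
```

The rule cannot tell benign noise from hostile intent because it correlates symptoms, not causes, which is exactly why such systems drown analysts in false emergencies.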


We have to shift from securing an environment to surviving an ever changing ecosystem.