Friday, November 20, 2009

Random Walk Down Cloud Street

An interview with a federal government journal:

1) We have heard many definitions of 'cloud computing'. How do you define it?
We are headed towards a pay-per-view model for most (not all) of IT. So with that set-up: cloud computing is any "IT" service that can be sold pay-as-you-go over the Internet. Ideally a cloud service should be available immediately. Click-to-buy. But also click-to-exit. Minimal hassle or contractual obligations.

Software-As-A-Service accounts for the lion's share of the market (e-commerce suites, cloud storage and on-demand business software). But that's only part of the story. The other part is sourcing key aspects of your business processes that span both talent (or labor) and software.

NIST (National Institute of Standards and Technology) has an air-tight definition.

My take on definitions is that it's OK to bend a definition but not break it. For example: if a cloud provider charges you by the day and bills you by the month, that is OK. It's still metered, if not by the hour or minute. If the cloud does not switch on the power in a split second, that's OK if they give you a better service-level agreement.

Note: Cloud Types
  • Not all cloud providers can compete on a low price point, so there will continue to be differentiation on value. In the future you will see software, infrastructure and platform cloud services start to blur.
  • A cloud provider can create sub-divisions where it dedicates a pool of resources to one customer. That structure is in opposition to multi-tenancy, where more than one customer shares the infrastructure or application. The cost to operate such a cloud will be higher. And the more 1-1 relationships, the more we are bending the definition of cloud computing.
  • It's not all vanilla; there will be chocolate chip and mint. Organizations are actively looking at hybrid on-premise/off-premise hosting platforms.

2) Why, in your opinion, is the cloud getting so much attention right now? (It seems that no one was talking about it two years ago. Is this accurate?)
Every industry is rethinking how it gets things done against a backdrop of scarce resources: talent, budgets and energy. The stars happened to align over the last two years for cloud computing. And the spotlight and vendor value proposition are squarely on Small/Medium Businesses, and now government agencies and the enterprise.

The appealing value is hard to ignore:

• Operating cost reduction (maintenance/support/upgrades)
• Better utilization of software licenses
• Any savings from replacing infrastructure CAPEX with subscription fee OPEX

Keep in mind that Salesforce.com was founded in 1999. And depending on how far back you go in history, we have been doing some sort of outsourced and time-shared computing for decades, especially in the science community.

3) One of the challenges to cloud computing is security. What are the biggest things CIOs/IT managers should be wary of?
First, cloud computing introduces change. And change is the arch enemy of security. You add a window, you create a new way for someone to get in. So you need to understand all the things that change when moving an application or your data into the cloud. If you get these two things right, you are on track: 1. can I tell what I own in the cloud, and 2. can I tell when something changes in the cloud?

Second is your trust relationship. If a cloud provider won't let you see behind "their firewall", or won't give you an audit of facilities (e.g. how they perform software upgrades, background checks for personnel), then you should look elsewhere. Mischief is inevitable. And you don't know until you test, and you test because you want to verify, and you verify because you don't trust.

Third, you will have to ask yourself: how am I measured and what am I trying to protect? FISMA (Federal Information Security Management Act) makes it expensive for public cloud providers to meet Certification & Accreditation requirements. Enter the 1-1 cloud scenarios (e.g. a future Google Federal Cloud). Additionally, there are multiple Federal and State level regulatory requirements, including HIPAA, GLBA, SOX, FFIEC, SEC, and PCI. Compliance is not security. You will need to keep your eye on the real issues: cyber threats. Malware that morphs every 35 seconds, bot-nets that phone home and the active underground economy of cyber crime.

4) Another challenge sometimes has to do with a mindset. Some are worried about the cloud because . . . Well . . . It’s a new way of thinking. What do you suggest for CIOs/IT managers who might be timid about the cloud?
When a fixed enterprise mindset and the Internet collide, time and time again the Internet has won. This is no exception. Virtualization and cloud computing are here to stay. So why don't you try before you buy? Get to know the technology, understand the Return on Investment and what you are giving up. There will be hidden costs. So it's important to do your due diligence and develop a business case for cloud services.

Wednesday, November 18, 2009

It's not rocket science to pick a cloud provider: Cut to the Chase.

Most of the vocal public cloud providers are "like a 5-year-old, they run away" or start to babble when you open up a discussion on risk, compliance and security.

If a cloud provider won't let you model the network or get a full audit of servers (e.g. patching, virtual machine provisioning, console activity), then one should choose another. If an organization can't understand the attack surface of the target operational environment nor identify vulnerabilities, then it will likely be unable to assess and accept the risk to operate (and meet fiduciary obligations).

Each industry will have its own set of impeding factors. The FDA imposes a set of criteria for a validated platform for healthcare/biotech. Federal agencies must deal with FISMA and the guidelines by NIST, which I have experience with. Government agencies start to ask tough questions when a provider builds their own hardware, let alone relies upon foreign-supplied COTS components.

There are cloud providers that are making claims, but security concerns will remain until there is an audit and verification. I would go with cloud providers that have had experience with enterprises and are able to offer managed service components. The solution should be complemented with the right people who can speak to the issues.

A trustworthy cloud provider will be more transparent than not. They have to be willing to speak with you about their controls. In some cases, you need to exclude a provider if they can not provide a fully-managed offering with physical server separation. In other cases you may need a NOC with staff that has been through background checks. You will then have to assess and select a cloud provider with, say, a separate NOC that is FISMA compliant. Some "cloud" providers are actually able to drop in a separate node for large Financial Services clients. But they still have to think about the economic costs, and it is unlikely to be "click-to-buy-to-provision". There is significant investment in the networking gear, patch panels, service management and capacity allocation when a public resource pool is adequately cut off for private consumption. There would be contractual obligations to reserve and purchase resources, e.g. 1000's of virtual/physical servers.

A check-list compliance approach with service providers will be necessary. Accenture has an assessment methodology and Cloud RFI survey that we've used with a number of cloud providers. The results would be verified with a site visit. Is the machine room isolated from other functions? Are there cameras on the perimeter of the building? Do they harden "their" operating system before installing other applications? What is their process for applying patches/updates? Accenture is experienced in coordinating and supporting external security audits and can provide recommendations and guidance for security improvements and corrections.

Best practices carry over with cloud computing, especially with the concentration of high-value assets and the unknown threats of multi-tenancy and virtualization. Everything gets more fractured and the operating picture (your understanding of cyber risks) changes, e.g. email traffic, user logins/behaviour, remote access traffic, building access, time reporting, etc. If all types of customers (enterprise, small business and regulated) are using the same ingress and egress interfaces, that may simply be unacceptable to some customers.

It's vital to understand the attack surface of the cloud and use an enterprise risk management framework to select security controls. Are there cloud providers and candidates for Financial Services that make sense in a cloud? It depends on your definition of a cloud and what they are providing. It will depend on what you are willing to give up. This starts with a risk assessment.

Friday, November 13, 2009

Risk Analysis versus Risk Assessment

I would like to distinguish a project impact analysis (PIA) from a risk assessment of the business solution under debate. The former is a business case justification. The latter allows the stakeholders (e.g. CISO, CIO, CEO etc.) to identify potential threats, prioritize those threats into risks and identify the controls that can reduce the risks to acceptable levels.

A due diligence exercise should examine capital outlay, development costs and long-term costs such as continued operations and maintenance. The cloud option (definitions aside) and whether it is a sound business case will be dependent on the cloud provider. Certainly issues such as regulatory compliance, process safety, validated platform can be show-stoppers. However the as-is system and the target cloud provider must be taken into consideration.

In my opinion a risk assessment does not need to be a long, drawn-out process. It can be completed in a matter of days. It is the only way to provide management with the tools needed to perform their fiduciary responsibility of protecting the assets of the enterprise in a reasonable and prudent manner. For example, multi-tenancy is likely a regulatory concern, and on the surface Amazon Web Services appears to fail this test. Dig a little deeper and it turns out that Amazon Web Services allows a customer to avoid virtual machine co-residency. Now the probability that a "cross-channel attack" will result in data loss is questionable. The purpose of a risk assessment is to quantitatively or qualitatively make that risk decision and grant approval to operate.
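To make that concrete, here is a minimal sketch of a qualitative scoring pass. The threats, likelihoods and impacts below are illustrative placeholders, not findings from any actual assessment:

```python
# A minimal sketch of qualitative risk scoring (illustrative values only --
# real likelihood/impact ratings come from your own assessment).

RATINGS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, impact):
    """Score a threat as likelihood x impact, on a 1-9 scale."""
    return RATINGS[likelihood] * RATINGS[impact]

threats = [
    # (threat, likelihood, impact) -- hypothetical entries
    ("VM co-residency cross-channel attack", "low", "high"),
    ("Credential theft via phishing", "medium", "high"),
    ("Provider outage", "medium", "medium"),
]

# Prioritize: highest scores first, then decide accept / mitigate / avoid.
for name, likelihood, impact in sorted(
        threats, key=lambda t: risk_score(t[1], t[2]), reverse=True):
    print(f"{risk_score(likelihood, impact)}  {name}")
```

The point is not the arithmetic but the discipline: an explicit, ranked list that management can sign off on as the acceptance (or rejection) of risk.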

Get to know your network -- where ever it is

Skybox, RedSeal and Cauldron are examples of Enterprise Risk Modeling (ERM) vendors. The tools filter noise, prioritize actions and put the attention on relevant exposures. Here is a vulnerability reduction use-case:

Overlay the vulnerability results for a subnet or a set of host machines with a network scan. Then visualize the network topology instead of using Visio diagrams. It is then easier to zoom in, group zones and classify hot-spots. You can track a SQL injection vulnerability to inform remediation decisions such as applying a software patch. Cartography of the network is akin to a Google Map. You can spot quick wins such as an expansion of vulnerability scanning coverage. Another type of improvement can be to reduce or avoid false positives. You can look at high vulnerability scores and determine whether they will cascade into a worse problem. Finally, visualization is a powerful way to present and communicate data in a meaningful way to the right audience. A picture speaks a thousand words.

What you can do with these tools, depend on what you feed it. You can automate firewall and network access compliance. You can inventory assets. You can grab vulnerability data.
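As a rough illustration of the overlay idea, here is a sketch that joins a made-up host inventory with made-up vulnerability findings to flag hot-spots. Real ERM tools do this (and far more) against live scan data; the host names, CVE ids and scores here are invented:

```python
# Sketch: overlay vulnerability findings onto a host inventory to flag
# hot-spots. All hosts, CVEs and scores are hypothetical examples.

hosts = {
    "web-01": {"zone": "dmz"},
    "db-01":  {"zone": "internal"},
}

vulns = [
    {"host": "web-01", "cve": "CVE-2009-1234", "score": 9.0},
    {"host": "web-01", "cve": "CVE-2009-2222", "score": 4.3},
    {"host": "db-01",  "cve": "CVE-2009-3333", "score": 7.5},
]

def hot_spots(hosts, vulns, threshold=7.0):
    """Group high-severity findings by host so remediation can be prioritized."""
    spots = {}
    for v in vulns:
        if v["score"] >= threshold and v["host"] in hosts:
            spots.setdefault(v["host"], []).append(v["cve"])
    return spots

print(hot_spots(hosts, vulns))
```

Swap the host dictionary for real scan output and the same join tells you which zones (e.g. the DMZ) deserve attention first.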

    Thursday, November 12, 2009

    Regulated Industry and Cloud Computing. Just A Note.

    I’ve had some experience with regulated industry and cloud computing. It’s important to start by defining the issue. I’ve come across a variety of significant concerns, from contractual arrangements, trans-national transactions, co-location of virtual machines and placement of data to transparency. There will be plenty of government rules and compliance check-lists that are at loggerheads with the inherent set-up of a shared infrastructure, or with personnel that have not passed a minimum background check.

    The prominent cloud providers (e.g. Google, Amazon Web Services) are already making architectural and infrastructure changes. Google has announced its GovCloud SaaS offering with Google hosting that is solely dedicated to US Federal, State and Local Government – bounded within North America. It’s a work in progress.

    Federal government agencies are mandated by law (Federal Information Security Management Act) to implement security protection commensurate with risk. The mandate is to develop and maintain minimum controls and to ensure independent testing and evaluation of those controls.

    Those public cloud providers (IaaS, PaaS, SaaS) that offer a communal IT model are opening up a new threat profile: more than one organization hosted on the same physical server, saving data in the same storage device, and co-mingled traffic passing across the same interconnects and network edge. They will make compliance claims but not supply policy and security documentation. Not all will be hack-proof, and the burden of proof of compliance will fall on the system owner.

    It may seem blindingly obvious, but there are certain industry segments that fall into the hard, medium and simple categories for cloud computing. No export control? Easy. Healthcare is medium/hard depending on the application. Legal counsel is a stakeholder and advisor.

    The classification and sensitivity of data will dictate the acceptance of risk and the choice of a cloud provider. But if we are talking about PII/PHI and high-impact systems then certainly government agencies are going the route of a private cloud model. Now there are ways to create a secure virtual environment. The security must encompass the physical machine and attendants of those machines. Any regulated industry client will (or has) performed their own PII risk assessment (e.g. NIST 800-122). A risk assessment from a regulatory point of view to inform whether it makes sense to move to the cloud is also necessary.

    If the contract is inflexible and non-negotiable then one simply has to walk away and look for another cloud provider that is willing to negotiate. And if the cloud provider claims to be more secure than a regular old data-center – then it should be OK for them to prove it to the customer's satisfaction.

    Sunday, October 25, 2009

    Amazon Web Services: file this under growing pains...

    About a week ago (~Oct 14th), Amazon Web Services (AWS) EC2 servers attempting to deliver business-critical emails were blocked or fatally rejected because AWS IP addresses were added to a blacklist by Spamhaus.org. Problem resolved.

    Not very pleasant for companies providing business-class mail server hosting on AWS.

    On Oct 15th, AWS worked with Spamhaus to remove all EC2 ranges from their PBLs.

    The latest from Amazon Oct 21st:

    “It is our intention to make it easy to reliably send email from the EC2 environment. As a result of our experience last week, we have released some changes to improve the ability of valid users to send email from EC2. We have started a new thread with the details of the improvement we have made: http://developer.amazonwebservices.com/connect/thread.jspa?threadID=37650. Please let me know if you have any further issues or questions”

    Saturday, October 24, 2009

    The workhorse technology behind cloud computing is virtualization. Get to know it well.

    The magic pixie dust that makes a cloud a cloud is virtualization technology. The trick is to decouple the physical world of fixed hardware where one computer can behave as though it were many. Where your workspace is in the cloud and all you need is a Netbook (maybe an exaggeration).

    One of the more curious aspects of virtualization is the “virtual machine”. It is most affiliated with data-center server virtualization. A virtual machine is nothing more than a file that represents its physical counterpart. No hardware to purchase. No shipping fees. No wires to plug in. (For those readers that are experts on virtualization, please forgive the oversimplification.)

    Hundreds of virtual machines are likely working in earnest inside your own organization. And yes, you are likely your very own cloud provider.

    All those virtual machines are important to your business. They can run your email system, your expense reporting application or your customer portal.

    So let’s briefly look at some of the ways that the virtual world of servers is vastly different than the physical one.

    We are familiar with our laptops going to sleep (and waking up with a hang-over). How about if 10, 20 or 30 virtual machines go to sleep and wake up at varying times? Will all occurrences of a virus be identified across running, suspended and shutdown virtual machines? Not likely a big deal. But it's worth thinking about the implications of appropriately configuring the virus scan.

    Relocating a physical server is back-breaking work. You pick it up, twist your neck and fall down. A virtual machine (after all, it’s a file) can be made to zip across a network. Let’s think about that for a moment. What if it gets intercepted and lands in the wrong hands? A physical machine has to be carried into a facility. Is it easier for a virtual machine file that is not legit to find its way into your network? Not if you have policies in place to keep master or gold copies.
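    One way to enforce a gold-copy policy is a simple integrity check before a virtual machine file is admitted onto the network. This is just a sketch, and the file paths and digests are hypothetical:

```python
import hashlib

# Sketch: verify a VM image file against a recorded "gold copy" checksum
# before it is allowed into the environment. Paths here are hypothetical.

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so multi-gigabyte images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_legit(image_path, gold_digest):
    """Admit a VM image only if it matches the gold copy digest."""
    return sha256_of(image_path) == gold_digest
```

    The digest of each approved master image would live in a registry the provisioning process consults; anything that doesn't match gets quarantined.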

    Another interesting property in the virtual world is time. A virtual machine has to keep time, if for nothing else than to remind you of mum’s birthday. Time is important. It is used to time-stamp transactions. However, timestamps written in log files can also be stomped upon by a perpetrator to mask their activities.
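    A crude illustration of spotting stomped timestamps: in an append-only log, time should never move backwards, so a backward jump is worth investigating. The epoch values below are made up:

```python
# Sketch: a crude tamper check. Timestamps in an append-only log should
# never move backwards; a regression can flag clock drift or log stomping.

def backward_jumps(timestamps):
    """Return the indices where an entry's timestamp precedes the previous one."""
    return [i for i in range(1, len(timestamps))
            if timestamps[i] < timestamps[i - 1]]

# Epoch seconds; the third entry has been "stomped" back in time.
log_times = [1257033600, 1257033660, 1257030000, 1257033720]
print(backward_jumps(log_times))  # [2]
```

    Real detection would also cross-check against a trusted time source (e.g. NTP) rather than trusting the guest clock alone.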

    There are plenty of best practices to implement a safe and sound virtual infrastructure. Take a look at your policies and procedures to make certain they are available and executable. Some examples:

    · Continue to protect the physical environment.
    · Control who creates virtual machines
    · Quality control must include real-time configuration management
    · Consider encryption as an extra layer of protection for high-risk assets
    · Get to know your virtualization technology and how it can be exposed

    You can’t get into the virtual world without stepping through the physical world. However, things that happen in the virtual world are not a direct reflection of the physical world. Get savvy.

    Wednesday, October 7, 2009

    Google Apps: Here I Am

    At Tech Labs we are constantly working to get to know all the major Cloud Computing providers and their virtual wares. Microsoft, Salesforce.com and of course Google.

    And Google is well on its way to building a reputation and trust that an enterprise can live with. The Google Apps web site already claims more than 1 million businesses running on the platform.

    I sat down with one of our consultants to understand some of the details behind Google Apps and what it takes to properly implement the product for an enterprise.

    Some of our conversation:

    1. What is Google Apps - in your words?
    Google Apps is a suite of products. You get Gmail, Talk, Calendar, Docs, and Sites - all of which are part of the $50/user/year licensing fee. Storage allocation is 25GB per user. The first foray for most clients is likely Gmail and Calendar, and it's not unusual to see "silent rollouts" of Google Docs and Google Sites as collaboration tools.

    2. Security is one of the benefits touted for Google Apps? Explain.
    Gartner estimates over 20,000 to 30,000 samples of potential malware are sent for analysis each day. And more than 5 million U.S. consumers lost money to phishing attacks during the 12 months ending in September 2008, a 39.8% increase over the number of victims a year earlier.

    Gmail is likely to stay more up-to-date with email filters that can spot malicious file attachments, and URL filters that inspect for exploits are vital. However, even that line of defense will suffer from the delay in finding and blocking zero-day attacks. Other cyber security capabilities will be needed.

    More than half of employees who left their companies in 2008 took some sensitive corporate data with them. Nearly 80% of these employees said that they knew it was against company policy to take the data, but they did it anyway (source: Ponemon Institute & Symantec). One source of data leakage is email messages that are used to exchange files loaded with hyper-sensitive information.

    Google Apps stores documents in the 'Cloud' and instead passes around hyperlinks, which point to documents that can only be shared with those to whom you previously granted permissions. Google Message Discovery and Google Message Security offer security and archival features that advance compliance requirements.

    Still, questions abound, such as government and regulatory compliance and service levels.

    3. Where do you think Google Apps is headed in the enterprise?
    Google Apps' lineage is of course consumer-focused; however, it is evolving rapidly with each major release.

    At the same time, it is still not as feature-rich as existing offerings by mainstay vendors such as Microsoft.

    Microsoft's Business Productivity Online Suite (Microsoft BPOS) is appealing because it is available in both a pure SaaS model and a dedicated version. The advantages include custom security, adherence to compliance mandates and the ability to tailor features.

    Google Apps is advertised as a SaaS offering, ideally to avoid one-off deployments. Users only have the option to get the same release. A pure SaaS offering has to carefully balance the desire to quickly mobilize new features and get them safely deployed into production.

    Finally, a key success factor in the roll out of Google Apps within an enterprise is to have a solid training and communications plan and strategy to allow for smooth user adoption.

    Thanks Jonathan Hsu!

    Thursday, September 3, 2009

    Cloud: Finding True North

    I recently presented a workshop on cloud computing to a fairly large pharmaceutical company.
    The discussion rolled and swayed across all ports. IT is still relevant. Cloud computing is a component of the business service management strategy. Virtualization and IT automation are stepping stones. We shared our insights from working with many large enterprises.

    Towards the end of the session, you could tell the audience was eager to start searching for their own "true north" when it came to Cloud Computing. What's the best way to get oriented with all the pundits, research and facts?

    Joe Tobolski (Global Lead of Infrastructure at Accenture Technology Labs) hit the spot with these closing remarks and guiding principles:

    1. Cloud Computing Strategy is one component of your IT’s Business Service Management strategy -- they are not separate and distinct.
    2. There is no single approach to Cloud Computing – the market will remain highly fragmented.
    3. Carefully evaluate candidate applications and IT services that can take advantage of cloud computing. Applications that don’t horizontally scale internally will not give you cost savings if hosted externally on a cloud.
    4. There is an “asymmetrical cost” to go into a cloud and then transition out. Still, carefully plan your exit strategy.
    5. Security and compliance are not portable across clouds and internal IT – but don’t let that slow your approach to Cloud Computing. Pick a suitable application and get going.

    Walid

    Thursday, August 20, 2009

    Hey Cloud - That's Mine, Now Give it Back


    The specter of vendor lock-in by cloud service providers is clear as the driven snow: modified programming languages, proprietary APIs and non-portable cloud services.


    A recent speech by Vinton Cerf:
    “…Each cloud is a system unto itself. There is no way to express the idea of exchanging information between distinct computing clouds because there is no way to express the idea of “another cloud.” Nor is there any way to describe the information that is to be exchanged. Moreover, if the information contained in one computing cloud is protected from access by any but authorized users, there is no way to express how that protection is provided and how information about it should be propagated to another cloud when the data is transferred.”


    Cleanly extracting oneself from the clutches of a cloud service provider varies in pain. User lock-in becomes a more compelling force as application functionality, code idioms, APIs, and aspects of the information system start to increasingly depend on cloud-provider-specific services, such as transaction management, non-standard messaging and proprietary storage data formats.


    Exit strategies will depend on the type of cloud service provider, and the underlying technologies used to provide those services. For infrastructure clouds, application code and configurations are self-provided; these applications are the property of the consumer. In some IaaS implementations, virtual machine portability provides migration of running application loads from Cloud provider to internal resources or to another Cloud provider.



    The question of application portability becomes murkier as PaaS or SaaS offerings are used. In these cases, the cloud service provides the application’s basic architectural framework, which is usually tightly coupled to the underlying technical and operations infrastructure. De-coupling those applications is a difficult proposition, and may not be possible given intellectual property considerations. It's going to be tough to imagine these vendors agreeing to expose their black-box software generator with a standard programming model and interface. It will be important to consider data and process portability when utilizing PaaS and SaaS providers, and to allow for re-platforming or re-hosting if a transition is needed.

    The steps to reclaim a system that has been designed, coded, tested and deployed using one or more cloud services will depend on where you plant your system components.

    Here are some things to keep in mind as you seek a flexible relationship:

    • Typically, Cloud providers do not own Intellectual Property for artifacts developed or hosted within their IaaS platform; however, clear delineation of ownership is required to avoid potential future litigation.

    • Cloud services, by definition, should be loosely coupled

    • To reduce vendor dependency explore a hybrid approach whereby internal and external resources are implemented to fulfill a business requirement for mission critical or even secondary applications

    • Encapsulate provider specific integration points into core management and provisioning systems to isolate changes introduced by altering the sourcing model

    • Deploy run-time applications in a manner abstracted from underlying infrastructure and machine image

    • Build custom, standards based images capable of running on variety of standard platforms

    • Backups should also be machine independent

    • Carefully design application architecture and development techniques to minimize lock-in

    • Compliance to government mandates is not portable, and neither are system certifications

    • Promote open standards

    • Put together a transition migration plan and test your cloud migration theories ...
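    To illustrate the encapsulation point above, here is a sketch of hiding provider-specific integration points behind a neutral interface, so a change of sourcing model touches one adapter instead of the whole application. The class and method names are invented for illustration, not any vendor's API:

```python
# Sketch: encapsulate provider-specific integration points behind a
# neutral interface. All names here are hypothetical.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    """What the application depends on -- not any provider's SDK."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Internal/on-premise stand-in; a cloud adapter would wrap the vendor SDK."""
    def __init__(self):
        self._data = {}
    def put(self, key, data):
        self._data[key] = data
    def get(self, key):
        return self._data[key]

def archive_report(store: BlobStore, name: str, body: bytes):
    # Application code stays portable: swap the adapter, not the app.
    store.put(f"reports/{name}", body)
```

    The design choice is the loose coupling the list calls for: the exit cost of leaving a provider shrinks to rewriting one adapter class.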


    Monday, July 20, 2009

    Cloud Computing: The New Normal

    Let's get the awkward moment out of the way and define cloud computing for the rest of our time together. Until it changes, of course. Here goes. The top 3 styles of cloud "computing that provides on-demand access to a shared set of highly scalable services" (skip the drum roll) will be:

    • Software-As-A-Service, which gives you and me ready-made (fully finished) applications, on-line social networks and unified collaboration tools without the restriction of firewalls. It's what I call: power to the people.
    • Platform-As-A-Service means to shield the developer from installing anything and deploying nothing, because the programming platform is out-of-sight and out-of-mind. If you've ever written code, then this paradigm's mantra is: power to the developer.
    • Infrastructure-As-A-Service (Infrastructure Cloud) outsources servers, networks and ultimately the data-center, and the icing on top is next-generation technology for mass web-application delivery. In other words: power to the enterprise.
    Accenture's point of view on cloud computing adds a fourth and necessary stratum that bakes in the billion-dollar business of business process outsourcing (BPO).

    In this blog entry I wanted to talk a bit about Infrastructure Clouds. These providers are at the bottom of the IT rung and have gotten a lot of attention in FY09. A CIO organization that I am intimately familiar with estimates more than $400,000 per year in savings if they were to relocate a single seasonal application to an unnamed pay-per-use cloud provider whose name starts with the letter 'a'. Savings were calculated based on the server landscape and server sizes (development and test, staging and production).

    Infrastructure Clouds will happen incrementally and will eventually be the new normal. Dig deeper into the feasibility of moving an infrastructure capability outside of the enterprise and a lot is revealed in terms of pros & cons, pitfalls, issues & considerations. Substantial savings are possible and you can minimize your dependency on a static infrastructure. Amazon Web Services brings with it a pioneering pricing model that flexes with your variable capacity.

    However ... opting for such an external Infrastructure Cloud route is more than a mentality - it's about a business case and buy-in from stake-holders. Here is the list of 8 (they were 10 at some point) issues & considerations as you explore and assess your infrastructure computing play:

    1. Calculating the "hosting costs" of infrastructure clouds is the understatement of the year. A detailed cost of ownership is needed, one that honestly reveals the cost-benefit. Switching to anything new introduces complexity. And ongoing run/operational costs don't go away. After all, "Linux (or Windows) boxes don't manage themselves". You must break down the costs bottom-up.
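    As a sketch of such a bottom-up break-down, here is a toy comparison of amortized on-premise cost against pay-per-use. Every number below is a placeholder, not a quote from any provider:

```python
# Toy bottom-up cost comparison. All figures are illustrative placeholders.

def on_premise_annual(server_capex, servers, amortize_years, ops_per_server):
    """Amortized hardware plus the run/operate costs that don't go away."""
    return servers * (server_capex / amortize_years + ops_per_server)

def cloud_annual(hourly_rate, hours_per_year, instances, ops_per_instance):
    """Pay-per-use compute plus the (still present) management overhead."""
    return instances * (hourly_rate * hours_per_year + ops_per_instance)

# A seasonal app that only needs its instances running 2,000 hours a year:
on_prem = on_premise_annual(server_capex=8000, servers=10,
                            amortize_years=3, ops_per_server=2500)
cloud = cloud_annual(hourly_rate=0.40, hours_per_year=2000,
                     instances=10, ops_per_instance=1500)
print(on_prem, cloud)
```

    The structure matters more than the numbers: seasonal, bursty workloads are exactly where the pay-per-use side of the comparison wins, and steady 24/7 loads are where it often doesn't.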

    2. Hidden constraints and technical cloud architectures. Memory-hogging applications will more than frown upon standard virtual machine settings. Databases may require expensive (in effort and cost) "re-partitioning" to fit the cloud provider's paradigm. There are new data management formulas in town that may be better suited - NimbusDB and Hadoop, to name a few. To truly and honestly capture the benefits of elasticity (scale-at-will), you will also have to re-architect or "re-factor" your application for parallelism.

    3. Cloudy performance. Testing and benchmarking are needed to validate that your applications will run without loss of characteristics and won't drift into no-man's-land over time. We have conducted internal evaluations of elastic load-balancing capabilities and I will share the results in a separate blog entry.

    4. Understand the liability of outsourcing the handling and processing of data. Uncertainty, fear and the veritable "not-invented-here" syndrome can very swiftly put a wrench in any public cloud adventure. Early input from the security teams and legal organizations is a must. Do you know what your requirements are in the first place? Maybe it's something from NIST or an ISO standard. Do your risk & security models fit in a virtual, multi-tenant and massively scalable setting? If not, a net-new risk assessment is the only sure-fire method to reveal show-stoppers and understand strategies to overcome concerns about storing and handling Personally Identifiable Information, your compliance obligations and on-going status. Lots more here in another blog post.

    5. Bridge over troubled waters. Sorting out your options for talking back to the enterprise is dizzying. Licensing issues do not vanish with a public cloud. Appliance-based Virtual Private Networks, software-based virtual private clouds and other secure or dedicated network solutions are viable options. In another blog I will address some of the findings and observations vis-a-vis software and hardware Cloud VPNs.

    6. In Cloud provider we "maybe" trust. One quickly realizes that a short-duration, non-sensitive application won't get anybody fired if it's loaded onto an infrastructure cloud. Start-up companies are running their businesses on clouds every day. But not every organization is in risk-tolerant start-up mode, and most likely you are the author, owner, controller or processor of intellectual property or private customer data. In any relationship one will have to determine what is gained and what is lost. Trust is about confidence on either side of that equation. You form and determine your degree of trust in a variety of ways, including: the credentials or identity of the provider, behavior (e.g. transparency, qualifications), reputation in the marketplace, history and track record in the business, and binding commitments in the Service Level Agreement and terms of use.

    7. Control & Governance. Sandboxes, code promotion, deployment, service provisioning, service commissioning/de-commissioning, service monitoring, backup, disaster recovery, charge-backs - just a smattering of the capabilities, services and tasks, in no particular order.

    8. Security, Privacy and Compliance. Suffice to say, security will have a strong voice of its own in the decision to move to a Cloud. Willingness to trust the vendor will get you to the starting gate. Standards and requirements will factor into your risk tolerance and dictate whether safeguards limit your exposure. New cyber-threats see a larger attack surface (lots of computers in one place). The attacks themselves are more sophisticated, and looking into the rear-view mirror is not sufficient. You will have to be more like the new E-Class: anticipatory to the situation at hand. Cloud computing is neither brand new, nor a leap of faith.

    Accenture believes the future IT infrastructure will be a combination of traditionally managed infrastructure and services sourced from IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS) cloud providers. Bridging the gap between existing environments and operating models and differing Cloud offerings will require the enterprise to support a highly virtual, dynamic environment conducive to cloud computing, while enabling automated provisioning and orchestration. An internal Cloud is not imaginary (more on that later), and its benefits can be extended to external Cloud providers to obtain capacity, services, and data to reduce cost, increase speed to market, and address capacity demands.

    Our experience shows that embracing a cloud model is substantially different from executing traditional Data Center models, from planning and acquisition through management and operations. It is generally counterproductive to re-engineer cloud infrastructure models to directly mirror traditional Data Center structures. While executing and integrating traditional management elements is essential, sourcing an external cloud computing model means ceding some architecture and infrastructure control. However, significant financial and flexibility benefits can be achieved by effectively harnessing and integrating the strongest aspects of traditional and cloud operating models.

    Until we meet again....Walid Negm

    Tuesday, July 14, 2009

    A Social and Technical Phenomenon

    Cloud computing is collaboration without the constraint of the firewall. It’s frictionless communication a la Twitter. Cloud computing is start-up experimentation at extremely low cost. First and foremost, an organization will need to determine the degree of trust it wants to place in a cloud provider. Will the cloud provider run the latest anti-virus software definitions? Do their routers and firewalls stand up to an onslaught of denial-of-service attacks and network intrusions? Are personnel given just the right administrative privileges, and are their actions logged and audited?

    CIOs and CISOs will have to grapple with the age-old dilemma of access versus accessibility.

    Cloud Insecurity

    One of the appealing draws of cloud computing is the ability, “at the flip of a dime,” to rent a set of servers or quickly subscribe to a practical Web service. The problem: attackers are looking for just this sort of business exception. Any temporary business need translates into an opportunity to search for holes in unprotected laptops, documents or anything else.

    Cyber attackers are particularly interested in assets that appear briefly, such as test or staging servers or data depots, where they can get a foothold and set up their command and control.

    And so the story will go that cloud computing may very well have a bull’s-eye on it, and the bad guys are looking to get in, stay put and get what they want.

    Certainly the appeal of lower monthly or hourly billing will get my attention. I've been trying to outsmart Vonage, T-Mobile@Home, AT&T Cable and T-Mobile Wireless, but to no avail. I still feel I have waste in my spending.

    And so medium, small and large enterprises see the winds of change in how they spend their money and are keen to understand their own on-line/all-the-time destiny.

    To me, the different flavors of clouds flesh out into something like this:

    • SaaS – Power to the people: Twitter, portals, apps-on-tap
    • PaaS – Power to the developer: programming in the cloud, Cloud-based systems integration
    • IaaS – Power to IT: Hardware and low-level plumbing paid-by-the-clock

    SaaS is already a conventional choice for small, medium and large organizations alike. Its cousin, Platform-as-a-Service (Google AppEngine or Microsoft Azure), is set to usher in an era of productivity where feature-rich web sites and complex systems are modeled, built and operated semi-automatically - without complex (and costly) software programming. PaaS may very well be that once-in-a-generation platform birth that will bring with it imagination-led possibilities for the enterprise. And IaaS offers fully outsourced data-center facilities where more intricate workflows and workloads can run without loss of fidelity, and the organization is left to focus on the line of business.