Security in the Cloud


Security is one of the top concerns, if not the top concern, that companies and users have with cloud computing. The issue of cloud security, however, is much more complex than simply asking “is the cloud secure or not?”. A cloud-based application can be hosted in a secure environment, with properly encrypted data, and an attacker can still get access to your information through social engineering. On the other hand, you can have the most secure password policies in the world, but if the hosting environment gets hacked, you are still going to lose your data.
Any solution that tries to address today’s cloud security issues must take into account the three sides of the problem: technology, processes, and responsibility. It must also recognize that the details and the relative importance of each of these change according to where in the cloud stack we are. Building secure cloud software is very different from securing a cloud platform, which in turn is different from securing the underlying infrastructure.

Technology

The first step is to employ the proper technology to secure applications and data. “Proper technology” varies widely depending on what layer of the cloud we are talking about. For cloud applications, security can be as simple as deploying proper security certificates and encryption. All sensitive information needs to be properly encrypted, so that even if an attacker gains access to your systems, any stolen data still has to be decrypted before it is of any use. And it’s not enough to simply encrypt passwords: if you know that people commonly use their birthdays as passwords, encrypt that as well. As much as possible, technology should protect users from themselves without inconveniencing them.
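To make the password point concrete, here is a minimal sketch, in Python and with purely illustrative values, of storing credentials as salted, slowly computed hashes (using the standard library’s PBKDF2 routine) so that a stolen database does not directly reveal anyone’s password:

```python
# Minimal sketch: salted, slow password hashing (illustrative only).
import os
import hmac
import hashlib

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 digest; store the salt and digest, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("Pass1234", salt, digest))                      # False
```

Fields the application still needs to read back, such as a birth date, would be encrypted rather than hashed; the next example sketches that pattern.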
A very interesting solution in this space is Porticor’s Virtual Private Data. It’s basically an encryption layer that sits transparently on top of any cloud data store, performing dynamic data encryption/decryption as data gets accessed. I recommend that anyone interested in securing cloud applications take a look at their solution.
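Porticor’s actual implementation is not shown here, but the general pattern the paragraph describes can be sketched in a few lines: a thin layer that encrypts values on the way in and decrypts them on the way out, so the underlying cloud store only ever holds ciphertext. The class name and the dictionary standing in for the data store are hypothetical, and the sketch assumes the Python cryptography package is available:

```python
# A rough sketch of transparent at-rest encryption (not any vendor's actual product).
from cryptography.fernet import Fernet

class EncryptedStore:
    def __init__(self, backend, key):
        self._backend = backend          # stand-in for any cloud data store
        self._fernet = Fernet(key)       # key stays on the application side

    def put(self, name, value):
        self._backend[name] = self._fernet.encrypt(value)

    def get(self, name):
        return self._fernet.decrypt(self._backend[name])

store = EncryptedStore(backend={}, key=Fernet.generate_key())
store.put("ssn", b"123-45-6789")
print(store._backend["ssn"][:10])  # ciphertext is what sits at rest
print(store.get("ssn"))            # b"123-45-6789" only when accessed
```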
On the lower layers of the cloud stack, security is much the same as it was before the cloud. Cloud platforms need to be secured just as operating systems are, preventing malicious code from taking over other execution sessions or stealing data, and so on. In the infrastructure layer, security is both about maintaining a secure virtualization environment and about physical security. Fortunately, most top-tier cloud infrastructure providers are already very security-minded, reducing risks on this side.

Process

All the technology in the world can’t save you if an attacker can call your receptionist and get her to install malware on your corporate network using her network administrator password. This is as true for the cloud as it is for private networks, and while something like this probably wouldn’t happen at a large enterprise, there is a surprisingly large number of small- and medium-size businesses where it just might.
If a company deploys a Windows cloud server from Rackspace, for instance, it will come with a fairly complex password, automatic updates enabled, the firewall activated, and so on. Many times, though, the first thing people do is change the password to something easier to remember – usually “password”, or “Pass1234” because a secure password must always include capital letters and numbers – and open an unprotected FTP connection to that server, “just to copy a few things”. What started as a reasonably secure server is now a security breach waiting to happen. It’s not enough to have the proper security tools. Companies need to build processes that actually put those tools to use.
Companies also underestimate the power of having proper information security policies communicated to all employees. When everyone in the company is security conscious, proper security comes much more easily. The process side of security doesn’t start with technical processes, but with people, so proper and constant communication is fundamental.

Responsibility

So far, the two aspects we explored are pretty standard. While cloud applications need to be much more security conscious than traditional in-house applications, the technology needed to deploy the extra security is pretty standard. The same thing goes for securing cloud servers. The greatest differences between cloud security and traditional security lie in the matter of responsibility.
When a company deploys traditional software, IT knows its responsibilities. The software is inside the data centers it operates and controls, and anything that happens – data being stolen, servers being hacked, and so on – is their responsibility. Since IT has full control over the environment, they are comfortable with taking on the burdens that come with this control.
When things move to the cloud, however, IT departments lose control over the environment. It is understandable, then, that they are unwilling to take responsibility for problems that might happen. Having clearly separated responsibilities helps: hosting providers need to ensure the security of the underlying platform (virtualization layer, physical security, and so on), while the rest falls to the customer. But it is not enough. Providers need to offer guarantees in case something happens, and understand where internal IT departments are coming from, to improve relations and reduce their concerns.

All together

These three perspectives need to be taken into account together, or we run the risk of creating an even more complex environment than what already exists. In some ways, the cloud has the potential to make things more secure, by providing incentives for or automating the management of common security tasks that many small businesses forget about. On the other hand, the concentration of data in the hands of a few service providers makes for very attractive targets, increasing the responsibility of those companies. No technology, process, or contract can, alone, remove the security concerns over the cloud; everyone who has concerns about the cloud should look at the whole security package, not at technology or processes alone.




What Is Flame Malware and What Can We Do About It?

Known by the names Flame, Flamer, and sKyWIper, the malware is significantly more complex than either Stuxnet or Duqu — and it appears to be targeting the same part of the world, namely the Middle East.
Preliminary reports from various security researchers indicate that Flame likely is a cyberwarfare weapon designed by a nation-state to conduct highly targeted espionage. Using a modular architecture, the malware is capable of performing a wide variety of malicious functions — including spying on users’ keystrokes, documents, and spoken conversations.
Vikram Thakur, principal research manager at Symantec Security Response, told eSecurity Planet that his firm was tipped off to the existence of Flamer by Hungarian research group CrySys (Laboratory of Cryptography and System Security). As it turned out, Symantec already had the Flamer malware (known to Symantec as W32.Flamer) in their database as it had been detected using a generic anti-virus signature. “Our telemetry tracked it back at least two years,” Thakur said. “We’re still digging in to see if similar files existed even prior to 2010.”
Dave Marcus, Director of Security Research for McAfee Labs, told eSecurity Planet that Flamer shows the characteristics of a targeted attack.
“With targeted attacks like Flamer, they are by nature not prevalent and not spreading out in the field,” Marcus said. “It’s not spreading like spam, it’s very targeted, so we’ve only seen a handful of detections globally.”
While the bulk of all infections are in the Middle East, Marcus noted that he has seen command-and-control activity in other areas of the world. Generally speaking, malware command and control servers are rarely located in the same geographical region where the malware outbreaks are occurring, Marcus noted.
The indication that Flamer may have escaped detection for several years is a cause for concern for many security experts.
“To me, the idea that this might have been around for some years is the most alarming aspect of the whole thing,” Roger Thompson, chief emerging threats researcher at ICSA Labs, told eSecurity Planet. “The worst hack is the one you don’t know about. In the fullness of time, it may turn out that this is just a honking great banking Trojan, but it’s incredibly dangerous to have any malicious code running around in your system, because it’s no longer your system — it’s theirs.”
Complex and Scalable Code
Although it is still early days in the full analysis of Flamer, one thing is clear: the codebase is massive.
“Flamer is the largest piece of malware that we’ve ever analyzed,” said Symantec’s Thakur. “It could take weeks if not months to actually go through the whole thing.”
McAfee’s Marcus noted that most of the malware he encounters is in the 1 MB to 3 MB range, whereas Flamer is 30 MB or more.
“You’re literally talking about an order of complexity that is far greater than anything we have run into in a while,” Marcus said.
Flamer has an architecture that implies the original design intent was to ensure modular scalability, noted Thakur: “They used a lot of different types of encryption and coding techniques and they also have a local database built in.”
With its local database, Flamer could potentially store information taken from devices not connected to the Internet.
“If the worm is able to make it onto a device that is not on the Internet, it can store all the data in the database which can then be transferred to a portable device and then moved off to a command and control server at some point in the future,” Thakur said.
Portions of Flamer are written in the open-source Lua programming language, which Thakur notes is interesting in that Lua is very portable and could potentially run on a mobile phone. Flamer also uses SSH for secure communications with its command-and-control infrastructure.
Thakur noted that Symantec’s research team is trying to trace Flamer back to its origin, but cautioned that it will be a long analytical process. Symantec researchers will dig through all of their databases in an attempt to find any piece of evidence that may be linked to any of the threats exposed by Flamer.
“It’s a very difficult job and it’s not an exact science,” Thakur said.

Evaluating the Enterprise Risk
While Flamer is an immense piece of malware, the risk to most enterprise organizations appears to be moderate. McAfee’s Marcus stressed that chances of a U.S.-based enterprise IT shop encountering Flamer aren’t all that high.
“In an attack that is as specific to a geography as Flamer looks to be, there is very little chance of this particular variant hitting a wide number of people,” Marcus said.
There is, however, a more sinister side effect that may come as a result of the discovery of Flamer. Marcus stressed that one thing malware writers do exceptionally well is learn from other malware writers.
“We can expect in the future for someone to learn from Flamer and use it in a future malware variant,” Marcus said.
On a positive note, security researchers for the “good guys” can also learn from Flamer to help protect enterprises and consumers from similar and future threats.
“You take the things the enemy gives you and you learn what you can,” Marcus said. “That’s not to say that malware is ever a good thing, but we try and learn from it.”

Art of Entrepreneurship: Who to Listen to and Why

The art of entrepreneurship and the science of customer development is not just getting out of the building and listening to prospective customers. It’s understanding who to listen to and why.
I got a call last week from Satish, one of my ex-students. He got my attention when he said, “Following your customer development stuff is making my company fail.” The rest of the conversation was too confusing to figure out over the phone, so I invited him out to the ranch to chat.

When he arrived, Satish sounded like he’d had five cups of coffee. Normally when I have students over, we sit in the house and look out at the fields, trying to catch a glimpse of a bobcat hunting.
But in this case, I suggested we take a hike out to Potato Patch pond.

Potato Patch Pond

We took the trail behind the house down the hill, through the forest, and emerged into the bright sun in the lower valley. (Like many parts of the ranch this valley has its own micro-climate and today was one of those days when it was ten degrees warmer than up at the house.)
As we walked up the valley Satish kept up a running dialog, catching me up on six years of family, classmates and how he started his consumer web company. It had recently rained, and about every 50 feet we’d see another 3-inch salamander ambling across the trail. When the valley dead-ended in the canyon, we climbed 30 feet up a set of stairs and emerged looking at the water. A “hanging pond” is always a surprise to visitors. All of a sudden Satish’s stream of words slowed to a trickle and just stopped. He stood at the end of the small dock for a while taking it all in. I dragged him away and we followed the trail through the woods, around the pond, through the shadows of the trees.
As we circled the pond I tried to keep my eyes on the dirt trail while glancing sideways for pond turtles and red-legged frogs. When I’m out here alone it’s quiet enough to hear the wind through the trees, and after a while the sound of your own heartbeat. We sat on the bench staring across the water, the only noise coming from ducks tracing patterns on the flat water. Sitting there, Satish described his experience.

We Did Everything Customers Asked For

“We did everything you said. We got out of the building and talked to potential customers. We surveyed a ton of them online, ran A/B tests, and brought a segment of those who used the product in-house for face-to-face meetings.” Yep, sounds good.
“Next, we built a minimum viable product.” OK, still sounds good.
“And then we built everything our prospective customers asked for.” That took me aback.

Everything? I asked. “Yes, we added all their feature requests and we priced the product just like they requested. We had a ton of people come to our website and a healthy number actually activated.” “That’s great,” I said, “but what’s your pricing model?” “Freemium,” came the reply. Oh, oh. I bet I knew the answer to the next question, but I asked it anyway. “So, what’s the problem?” “Well, everyone uses the product for a while, but no one is upgrading to our paid product. We spent all this time building what customers asked for. And now most of the early users have stopped coming back.”
I looked hard at Satish, trying to remember where he had sat in my class. Then I asked, “Satish, what’s your business model?”

What’s Your Business Model?

“Business model? I guess I was just trying to get as many people to my site as I could and make them happy. Then I thought I could charge them for something later and sell advertising based on the users I had.”
I pushed a bit harder.
“Your strategy counted on a freemium-to-paid upgrade path. What experiments did you run that convinced you that this was the right pricing tactic? Your attrition numbers mean users weren’t engaged with the product. What did you do about it?
“Did you think you were trying to get large networks of engaged users that can disrupt big markets? ‘Large’ is usually measured in millions of users. What experiments did you run that convinced you could get to that scale?”

I realized by the look in his eyes that none of this was making sense. “Well, I got out of the building and listened to customers.” The wind was picking up over the pond, so I suggested we start walking.
We stopped at the overlook at the top of the waterfall; after the recent rain I had to shout over the noise of the rushing water. I offered that it sounded like he had done a great job listening to customers. Better still, he had translated what he had heard into experiments and tests to acquire more users and get a higher percentage of those to activate.
But he was missing the bigger picture. The point of the tests he ran wasn’t just to get data – it was to get insight. All of those activities (talking to customers, A/B testing and so on) needed to fit into his business model: how his company would find a repeatable and scalable business model and ultimately make money. And this is the step he had missed.

Customer Development = The Pursuit of Customer Understanding

Part of customer development is understanding which customers make sense for your business. The goal of listening to customers is not to please every one of them. It’s to figure out which customer segment served his needs, both short and long term. And giving your product away, as he was discovering, is often a going-out-of-business strategy.
The work he had done acquiring and activating customers was just one part of the entire business model.
As we started the long climb up the driveway, I suggested his fix might be simpler than he thought. He needed to start thinking about what a repeatable and scalable business model looked like.

I offered that acquiring users and then making money by finding payers assumed a multi-sided market (users/payers). But a freemium model assumes a single-sided market, one where the users become the payers. He really needed to think through his revenue model (the strategy his company uses to generate cash from each customer segment) and how he was going to use pricing (the tactics of what he charged in each customer segment) to achieve that revenue model. Freemium was just one of many tactics. Single or multi-sided market? And which customers did he want to help him get there?
My guess was that he was going to end up firing a bunch of his customers – and that was OK.
As we sat back in the living room, I gave him a copy of The Startup Owner’s Manual and we watched a bobcat catch a gopher.

Lessons Learned

  • Getting out of the building is a great first step
  • Listening to potential customers is even better
  • Getting users to visit your site and try your product feels great
  • Your job is not to make every possible customer happy
  • Pick the customer segments and pricing tactics that drive your business model

Enterprise risk management strategies for the Chief Information Officer (CIO)

Risk management is critical for any enterprise embarking on new IT projects and plans. There’s the risk of offshore outsourcing: how do you ensure your data is safe in the hands of a worker in another country? There are also risks in managing compliance efforts, especially in offshore business operations; these include closing down your company or losing your position if the job isn’t done correctly. How do CIOs calculate and manage risk? Take a look at the enterprise risk management strategies in this CIO Briefing for insight and advice on this important topic.
This CIO Briefing is part of a series designed to give IT leaders strategic guidance and advice that addresses the management and decision-making aspects of timely topics.
Managing operational risk
The familiar news headlines keep coming: systems failures, data breaches, project delays, troubled products, trading failures, money laundering through mobile networks. These are just some of the sinkholes in operational-risk land related to information technology. The question is, why? Why do they keep coming despite efforts to prevent them?
“Why can’t I just get a single view of risk to the business, especially for a particular business activity or process? What makes this so difficult?” one exasperated CIO asked at an executive briefing held by a chapter of the ISACA IT security organization, after I discussed IT-related business risk.
“One bad business-IT decision killed our company!” Grim reality, right?
Analyzing IT-related risk in silos leaves gaps and frustrates business leaders. Responding to IT risk in silos increases cost, creates prioritization errors and unleashes other gremlins. Silos can lead to both fundamental errors (such as thinking that IT security equals IT risk management, or that IT compliance equals IT risk management) and more complex errors (such as missing the ways risks in a shared infrastructure affect business processes).
Every organization should be able to articulate how IT threats can harm the business. A five-step risk management strategy, based on a risk management standard such as ISO 31000, makes it easier to explain how IT threats become business threats.
How risk management standards can work for enterprise IT
IT security and risk professionals have historically had a hard time articulating how IT threats might negatively impact the business. That needs to change. Attacks on government sites, substantial fraud, and massive privacy breaches continue to expose to the world the high level of risk connected to our corporate and national IT infrastructure. Executives and managers will need to rely more on IT security data and analysis in order to better protect their corporate interests.
As internal and external pressure intensifies, IT professionals must adopt more sophisticated risk management practices so they can better articulate risks, mitigation plans and overall exposure. This means combining both security and risk mentalities, which can be difficult to translate into practical tools and processes.
Rather than start from scratch, security professionals should utilize the standards and guidance available in the enterprise risk management (ERM) domain. The fundamental risk management processes that should be applied to IT risk management are drawn from the streamlined risk management standard from the International Organization for Standardization (ISO): ISO 31000. The following five steps provide guidance for building a formal, ISO 31000-based IT risk management program that communicates well with, and adds value to, the rest of the organization:

Step 1: Establish the context

This step may seem esoteric or even irrelevant, but without clear definitions, there will be organizational confusion and arguments over responsibilities later on. Begin by identifying individuals with risk experience (internally or externally) to help formalize tools and methods for identifying, measuring, and analyzing risk. Once formal roles have been established, risk professionals should document the IT organization’s core objectives and define the ways in which IT risk management supports them.
Establishing risk appetites and tolerance during this first stage will help prioritize risk mitigation efforts later on. Conversations with risk management clients have indicated that most organizations initially choose to rank certain categories of risk for which they have less tolerance, rather than trying to develop quantifiable risk appetites. This is a good first step, but these organizations will eventually need more granular criteria to make informed decisions about which specific risks to focus on.
Step 2: Identify the risks
Risk managers will need to tap into their creativity to create a comprehensive list of potential risks. Risks not identified at this stage will not be analyzed or evaluated later on, so having an overly exhaustive list is preferable to one that is overly limited. Start by conducting workshops with relevant stakeholders, identifying the broad range of issues that could impair their objectives, processes and assets.
Forrester clients that have been using IT control frameworks, such as Control Objectives for Information and related Technology (COBIT) or ISO 27002, often find them to be useful guides for categorizing their risks. Note that risks should be specific to your organization, not a generic list. Plan to reexamine your full list of risks at least on an annual basis to identify any new or emerging risks.
Step 3: Analyze the risks
Security professionals typically have a good understanding of events and issues that might undermine IT processes; however, it’s often harder for them to determine what the impact will be to the IT department or the organization as a whole. Work closely with business stakeholders to understand criticality and impact. It may even be possible to leverage the business impact analysis work done by the business continuity team to fill in some of the gaps.
Many organizations have found it helpful to create a scale by which to approximate the level of likelihood and impact. For example, some companies create a matrix to measure the likelihood of risks based on characteristics such as exposure or attractiveness of target, and impact based on characteristics such as potential financial costs or reputation damage. The result is a “heat map” that helps prioritize mitigation efforts on the set of risks with the highest combined likelihood and impact ratings.
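As a rough illustration of how such a heat map might be computed, the sketch below scores a handful of hypothetical risks on 1-to-5 likelihood and impact scales and ranks them by the combined rating; the risks, scales and numbers are invented for the example:

```python
# Illustrative likelihood x impact ranking; all names and scores are hypothetical.
risks = [
    {"name": "Unpatched public web server",  "likelihood": 4, "impact": 3},
    {"name": "Loss of customer database",    "likelihood": 2, "impact": 5},
    {"name": "Shared-infrastructure outage", "likelihood": 3, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]   # combined rating

# Highest combined scores come first: these get mitigation attention first.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["name"]}')
```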
Step 4: Evaluate the risks
Levels of risk after controls have been accounted for (i.e., residual risk) that fall outside of the organization’s risk tolerance will require treatment decisions. The risk appetite and thresholds previously defined will provide guidelines for when to avoid, accept, share, transfer, or mitigate risks. The decisions themselves should be made by individuals who are granted authority or accountability to manage each risk, with input from others who may be positively or negatively affected.
For some risks, the initial analysis may only allow you to determine that your exposure is potentially high enough to warrant further investigation. Make sure to conduct further analysis when necessary.
Step 5: Treat the risks
If the treatment decision involves the mitigation of risk, organizations need to design and implement controls to reduce threats to the organization’s achievement of objectives. Many risks will require more than one control (e.g., policies, training, prevention measures) to decrease their expected likelihood and/or impact. Conversely, some controls may mitigate more than one risk. It’s a good idea to consider multiple reevaluations during implementation.
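The many-to-many relationship between risks and controls is easy to represent explicitly, which also makes it easy to spot risks that have no treatment yet. The sketch below uses invented risk and control names purely for illustration:

```python
# Hypothetical sketch of the many-to-many mapping between risks and controls.
controls_for_risk = {
    "phishing":       ["awareness training", "mail filtering", "MFA"],
    "stolen laptop":  ["disk encryption", "MFA"],
    "rogue SaaS use": [],   # identified, but not yet treated
}

# Invert the mapping to see which risks each control helps mitigate.
risks_for_control = {}
for risk, controls in controls_for_risk.items():
    for control in controls:
        risks_for_control.setdefault(control, []).append(risk)

print(risks_for_control["MFA"])   # one control mitigating several risks
untreated = [r for r, c in controls_for_risk.items() if not c]
print("needs treatment:", untreated)
```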
Look out for peripheral effects of risk treatments that introduce new risks and/or opportunities. For example, the decision to transfer risk to a business partner may increase the risk of that partner becoming disloyal.
Very few organizations have fully adopted risk management standards in any aspect of their business, and IT departments are no exception. Forrester recommends providing common guidance for all risk groups, collaborating with peers in functions such as audit and compliance, and settling on policies and procedures before turning to risk management technologies. These steps should help IT risk management programs improve their ability to work closely with the business and achieve a level of commitment in line with the level of risk they’re expected to address.
Strategic risk management includes a risk-based approach to compliance
What is strategic risk management for compliance? The answer will depend on who’s talking, but the gist is this: rather than allowing the ever-multiplying regulatory mandates to determine a compliance program, an organization focuses on the threats that really matter to its business — operational, financial, environmental and so on — and implements the controls and processes required to protect against them.
Focusing on protecting the business will result in a strategic risk management program that, in theory, will answer compliance regulations but in some cases go well beyond the mandate. A risk management approach, say advocates, also saves money by reducing the redundant controls and disparate processes that result when companies take an ad hoc approach.
The scope of protection against threats and degree of compliance depends on an organization’s risk appetite. The appetite for risk can wax and wane, depending on externalities such as a data breach, a global economic crisis or an angry mob of customers outraged by executive pay packages. When companies are making big profits, they can spend their way out of a compliance disaster. In financially rocky times, however, there is much less margin for error.
IT pros like Alexander and a variety of experts suggest that while a risk-based approach to compliance might be the right thing to do, it is also difficult, requiring that the organization:
• Define its risk appetite.
• Inventory the compliance obligations it faces.
• Understand the threats that put the various aspects of the business at risk.
• Identify vulnerabilities.
• Implement the controls and processes that mitigate those threats.
• Measure the residual risk against the organization’s risk appetite.
• Recalibrate its risk appetite to reflect internal and external changes in the threat landscape.
A risk-based approach to compliance requires a certain level of organizational maturity and, some experts hasten to add, is ill-advised for young companies.
Strategic risk management for compliance can be handled manually or with Excel spreadsheets, but vendors promise that sophisticated governance, risk and compliance (GRC) technology platforms will ease the pain. Meanwhile, those baseline compliance regulations still need to be met to an auditor’s satisfaction.
Do you know what level of risk your organization can tolerate?
The assumption in a risk management approach to compliance is that the business knows best about the risk level it can tolerate.
When it comes to risk management, getting your head around a tolerance level is extremely difficult.
Then there’s the dirty little secret of every organization: for hundreds of years, businesses have been managing risk intuitively. We perceive there to be a risk, therefore we build a control. But most controls are built to a perception of the risk and a perception of its scope, without really stopping to consider what the real risk is and whether this is the right control.
By not doing the cost-benefit analysis, companies get the controls wrong. Spending $1 million on a control that mitigates a $100,000 risk makes no sense at all.
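One common way to make that comparison explicit is an annualized loss expectancy (ALE) calculation: multiply the expected cost of a single incident by how often it is expected per year, and compare the reduction in expected loss with the annual cost of the control. The figures below are purely illustrative:

```python
# Back-of-the-envelope cost-benefit check (all figures are illustrative).
# ALE = single loss expectancy (SLE) x annual rate of occurrence (ARO).
sle = 100_000              # expected cost if the event happens once
aro = 1.0                  # expected occurrences per year
ale_before = sle * aro     # expected annual loss with no control

control_cost = 1_000_000   # annual cost of the proposed control
ale_after = 10_000         # residual expected loss with the control in place

net_benefit = (ale_before - ale_after) - control_cost
print(net_benefit)   # -910000: the control costs far more than the risk it removes
```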

The short end of the cost-benefit analysis

Back in the 1970s, Ford Motor Co. was sued for allegedly making the callous calculation that it was cheaper to settle with the families of Pinto owners burnt in rear-end collisions than to redesign the gas tank. The case against Ford, as it turns out, was not so cut and dried, but the Pinto lives on in infamy as an example of a company applying a cost-benefit analysis and opting against the public’s welfare.
Regulations introduce externalities that risk management by itself would not have brought to bear, and they make addressing those externalities a cost of doing business.
A recent example concerns new laws governing data privacy. For many years in the U.S., companies that collected personally identifiable information owned that data. In the past, losing that information didn’t hurt the collector much but could cause great harm to the consumer,  hence the regulations.  But the degree to which a business decides to meet the regulation varies, depending — once again — on its tolerance for risk. Organizations must decide whether they want to follow the letter of the law to get a checkmark from the auditor, Henry said, or more fully embrace the spirit of the law.
Is your philosophy as an organization minimal or maximal? If it is minimal, you may decide that it is worth risking a small regulatory fine rather than complying.
Indeed, businesses now are cutting costs so narrowly that some know their controls are inadequate and are choosing not to spend that $1 million to put the processes, people and infrastructure in place to avoid a $100,000 fine. They calculate that they’re still $900,000 ahead, but don’t expect a business to own up to that. They never let that cat out of the bag.
Sarbanes-Oxley drives risk management strategy
Compliance is expensive. It is hardly surprising that companies are looking for ways to reduce the cost of regulatory compliance or, better yet, use compliance to competitive advantage. According to Boston-based AMR Research Inc.’s 2008 survey of more than 400 business and IT executives, GRC spending totaled more than $32 billion in 2008, a 7.4% increase from the prior year.
The year-over-year growth was actually less than the 8.5% growth from 2006 to 2007, but the data shows that spending among companies is shifting from specific GRC projects to a broad-based support of risk. In addition to risk and regulatory compliance, respondents told AMR they are using GRC budgets to streamline business processes, get better visibility to operations, improve quality and secure the environment.
In prior years, compliance as well as risk of noncompliance was the primary driving force behind investments in GRC technology and services. GRC has emerged as the new compliance.
Folding regulatory mandates into the organization’s holistic risk management strategy gained momentum in the wake of the Sarbanes-Oxley Act of 2002 (SOX), one of the most expensive regulations imposed on companies. SOX was passed as protection for investors after the financial fraud perpetrated by Enron Corp. and other publicly held companies, but it was quickly condemned by critics as a yoke on American business, costing billions of dollars more than projected and handicapping U.S. companies in the global marketplace.
Indeed, the law’s initial lack of guidance on the infamous Section 404 prompted many companies to err on the (expensive) side of caution, treating the law as a laundry list of controls. In 2007, under fire from business groups, the Securities and Exchange Commission and Public Company Accounting Oversight Board issued a new set of rules encouraging a more top-down approach to SOX.
There are certain mandated areas you wouldn’t want to meddle with — they are legal requirements, no exceptions — but instead of checking every little box, companies were advised to take a more risk-based approach.
Risk management frameworks and automated controls
Risk management frameworks are not new, and neither, really, is a risk-based approach to compliance. But the strategy has been gaining ground, driven in large part by IT as well as by IT best practices frameworks such as COBIT and the IT Infrastructure Library.
Fifteen years ago at any well-managed organization, 75% of controls were manual. Today, the industry benchmark is the other way around: IT drives about 90% of the controls and 10% are manual. The end goal is to move that remaining 10% of manual controls to automated controls.
Two fundamental building blocks are essential to adopting a risk-based approach to compliance: stable systems and processes, and a strong business ethos. If a company has highly diverse processes, it is not a good candidate; for those organizations it’s more like crisis management than risk management — compliance Whack-a-Mole.
Formulating a strategic risk management strategy also requires a clear definition of the values and principles that drive the organization’s business — in other words, a certain level of maturity. If the ethos is loosely defined, then it is not safe to take a holistic approach to compliance.
Companies that make the grade, that give consistent guidance to investors, and indeed any that operate successfully in the SOX arena, are probably ready for a risk-based approach.
Navigating social media risks
Developing corporate social media policies is an ongoing experiment akin to the struggle enterprises endured when the Internet and email were introduced as business tools. Enterprises should not assume, however, that the policies they developed over many years for Internet and email use are a perfect fit for social media.
Companies are making a mistake when they say social media is the same as email and chat. There’s enough that is different about social media that you need to be blunt and state the [rules of behavior] again, even if they’re the same words [used for older e-communications policies] — which I doubt they will be.
For starters, e-discovery polices will change, given the free-for-all nature of social networking, according to Stew Sutton, principal scientist for knowledge management at The Aerospace Corp., a federally funded research and development center in El Segundo, Calif. His organization has no limits on email retention, but with “social conversations, wikis, blogs and tweet streams, the mass of data sitting out there becomes a problem,” he said. The issues can make e-discovery “extremely costly.”
CIOs weigh use of social media against security concerns
One medical center (BMC), a private hospital affiliated with a U.S. university, blocks access to all social media websites using security software from Websense Inc. Users who attempt to visit sites such as Facebook, YouTube or Twitter are shown a page indicating that their destination is off-limits. Nevertheless, the debate about whether to open up access to such sites or to keep blocking them remains contentious.
In fact, the discussion comes up “practically on a daily basis,” said Brad Blake, director of IT at BMC. “As you can imagine, we have a lot of users who want access to these sites, but for a variety of reasons we do not feel comfortable opening them.”
If BMC created a Facebook account and asked its patients to be friends, that could constitute a security breach, so senior management has felt it easier just to block these sites rather than trying to police and manage them.
CIOs faced with the use of social media as a business tool are hard-pressed to balance that business need against security concerns. Some are so hard-pressed, in fact, that they begged off being interviewed for this story, asserting they are too new to the game to speak knowledgeably about security tools for social media. Other CIOs were pressured by their public relations people not to broadcast their thinking, for security reasons. Even those who agreed to describe their strategy for securing social media were hesitant about providing details about their IT tools. And others were in a position similar to Blake: As their companies wrestled with how the business should use social media, the default position was to simply block access.
We are finding that a lot of these policies are disallowing use of social media, even when there is a business need. Companies have people bringing in social media and using it faster than the policies and the security groups can keep up with.
Not so long ago, the notion seemed absurd that employees would use a social media website like YouTube for business purposes. Now, many marketing departments are putting videos on YouTube, as well as tracking videos that competitors post. But protecting the business from the risks of social media while facilitating a legitimate business need — at least on a proactive basis — remains outside the grasp of many businesses.
People are not there yet. A lot of the tools — access controls being one — are coarse and crude. Implementing nuanced, automated rules that, for example, allow a marketing department to use YouTube as long as it takes up only so much bandwidth, or is used only during a certain time, is very difficult.
Companies need to monitor their networks and desktops, as well as their social networks, to find out what employees and outsiders are saying about the company. In such situations, however, often the best that can be done with existing technology is to detect problems after the fact.
Most security professionals encourage CIOs to track company information that shows up on social media sites. There are numerous analytic tools for Twitter, including TweetStats, Twitter Grader and Hootsuite. Web and content filtering tools such as Websense’s SurfControl cover the Internet and email. Indeed, internal tools for monitoring employees’ Internet use have been in place for a long time. Most good firewalls will spit out variances — a red light alerting that this person is uploading 2 GB of data.
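A toy version of that kind of variance check, with invented users and numbers, might look like this: flag anyone whose outbound transfers cross a fixed threshold.

```python
# Rough sketch of a transfer-volume variance check (all data is hypothetical).
THRESHOLD_BYTES = 2 * 1024**3    # 2 GB

transfer_log = [                 # (user, bytes uploaded today)
    ("alice", 150 * 1024**2),
    ("bob",   3 * 1024**3),
    ("carol", 40 * 1024**2),
]

for user, uploaded in transfer_log:
    if uploaded > THRESHOLD_BYTES:
        print(f"ALERT: {user} uploaded {uploaded / 1024**3:.1f} GB today")
```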
Security tools aren’t that smart, however. “Intrusion prevention systems aren’t smart enough to shut off connections based on the content or syntax of something that people are posting,” Baumgarten said. A clear policy on the use of social media is still the first line of defense against social media threats.
Avoiding cloud computing risks
Following the recent downtime and data breaches at top-tier cloud service providers including Amazon Web Services LLC, Sony Corp. and Epsilon Data Management LLC, the risk deck has been shuffled at enterprises looking to move to hybrid cloud computing. Two risks that lurked in the middle of our top 10 list — liability and identity management — have floated to the top.
Once again, enterprise executives are talking about the need for cloud insurance, or at least a discussion about who is responsible when the cloud goes down. Presently, public clouds offer standardized service-level agreements, or SLAs, that offer remuneration for time — but not for potential business — lost during the downtime. Recent events could be opportunities for providers and CIOs to negotiate premium availability services, according to experts.
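Some rough arithmetic shows why standardized SLA credits rarely cover the real exposure. Assuming, purely for illustration, a 99.95% monthly uptime commitment, a $10,000 monthly bill, a 10% service credit, and an eight-hour outage that costs the business $25,000 an hour:

```python
# Illustrative arithmetic only; SLA terms and figures are hypothetical.
minutes_per_month = 30 * 24 * 60           # 43,200
sla = 0.9995                               # 99.95% uptime commitment
allowed_downtime = minutes_per_month * (1 - sla)
print(round(allowed_downtime, 1))          # ~21.6 minutes per month

monthly_bill = 10_000
outage_hours = 8
credit = 0.10 * monthly_bill               # a 10% service credit
lost_business = 25_000 * outage_hours      # what the outage actually cost
print(credit, lost_business)               # 1,000 vs 200,000
```

Under these made-up numbers, the credit covers about half a percent of the business impact, which is why negotiating premium availability terms comes up after every major outage.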
Why is cloud computing so hard to understand? It would be an equally fair question to ask why today’s Information Technology is so hard to understand. The answer would be because it covers the entire range of business requirements, from back-office enterprise systems to various ways such systems can be implemented. Cloud computing covers an equal breadth of both technology and, equally important, business requirements. Therefore, many different definitions are acceptable and fall within the overall topic.
But why use the term “cloud computing” at all? It originates from the work to develop easy-to-use consumer IT (Web 2.0) and its differences from existing difficult-to-use enterprise IT systems.
A Web 2.0 site allows its users to interact with other users or to change content, in contrast to non-interactive Web 1.0 sites where users are limited to the passive viewing of information. Although the term Web 2.0 suggests a new version of the World Wide Web, it does not refer to new technology but rather to cumulative changes in the ways software developers and end-users use the Web.
World Wide Web inventor Tim Berners-Lee clarifies, “I think Web 2.0 is, of course, a piece of jargon; nobody even knows what it means. If Web 2.0 for you is blogs and wikis, then that is ‘people to people.’ But that was what the Web was supposed to be all along. The Web was designed to be a collaborative space where people can interact.”
In short, Web 2.0 isn’t new technology; it’s an emerging usage pattern. Ditto for cloud computing; it’s an emerging usage pattern that draws on existing forms of IT resources. Extending Berners-Lee’s definition of Web 2.0, the companion to this book, Dot Cloud: The 21st Century Business Platform, helps clarify that cloud computing isn’t a new technology:
“The cloud is the ‘real Internet’ or what the Internet was really meant to be in the first place, an endless computer made up of networks of networks of computers.”
“For geeks,” it continues, “cloud computing has been used to mean grid computing, utility computing, Software as a Service, virtualization, Internet-based applications, autonomic computing, peer-to-peer computing and remote processing — and various combinations of these terms. For non-geeks, cloud computing is simply a platform where individuals and companies use the Internet to access endless hardware, software and data resources for most of their computing needs and people-to-people interactions, leaving the mess to third-party suppliers.”

Cloud’s birth in the new world

Again, cloud computing isn’t new technology; it’s a newly evolved delivery model. The key point is that cloud computing focuses on the end users and their abilities to do what they want to do, singularly or in communities, without the need for specialized IT support. The technology layer is abstracted, or hidden, and is simply represented by a drawing of a “cloud.” This same principle has been used in the past for certain technologies, such as the Internet itself. At the same time, as the Web 2.0 technologists were perfecting their approach to people-centric collaboration, interactions, use of search and so on, traditional IT technologists were working to improve the flexibility and usability of existing IT.
This was the path that led to virtualization, the ability to share computational resources and reduce the barriers of costs and overhead of system administration. Flexibility in computational resources was in fact exactly what was needed to support the Web 2.0 environment. Whereas IT was largely based on a known and limited number of users working on a known and limited number of applications, Web 2.0 is based on any number of users deploying any number of services, as and when required in a totally random dynamic demand model.
The trend toward improving the cost and flexibility of current in-house IT capabilities by using virtualization can be said to be a part of cloud computing as much as shifting to Web-based applications supplied as services from a specialist online provider. Thus it is helpful to define cloud computing in terms of usage patterns or “use cases” for internal cost savings or external human collaboration more than defining the technical aspects.
There are differences in regional emphases on what is driving the adoption of cloud computing. The North American market is more heavily focused on a new wave of IT system upgrades; the European market is more focused on the delivery of new marketplaces and services; and the Asian market is more focused on the ability to jump past on-premise IT and go straight to remote service centers.

How the cloud shift affects front-office activities

There is a real shift in business requirements that is driving the “use” as a defining issue. IT has done its work of automating back office business processes and improving enterprise efficiency very well, so well that studies show the percentage of an office worker’s time spent on processes has dropped steadily. Put another way, the routine elements of operations have been identified and optimized. But now it’s the front office activities of interacting with customers, suppliers and trading partners that make up the majority of the work.
Traditional IT has done little to address this, as its core technologies and methodologies of tightly-coupled, data-centric applications simply aren’t suitable for the user-driven flexibility that is required in the front office. The needed technology shift can be summarized as one from “supply push” to “demand pull” of data, information and services.
Business requirements are increasingly being focused on the front office around improving revenues, margins, market share and customer services. To address these requirements, a change in the core technologies is needed in order to deliver diversity around the edge of the business where differentiation and real revenue value are created. Web 2.0 user-centric capabilities are seen as a significant part of the answer.
The technology model of flexible combinations of “services” instead of monolithic applications, combined with user-driven orchestration of those services, supports this shifting front office emphasis on the use of technology in business. It’s not even just a technology and requirement match; it’s also a match on the supply side. These new Web 2.0 requirements delivered through the cloud offer fast, even instantaneous, implementations with no capital cost or provisioning time.
This contrasts with the yearly budget and cost recovery models of traditional back office IT. In fact, many cloud-based front office services may have a life of only a few weeks or months as business needs continually change to suit the increasingly dynamic nature of global markets. Thus the supply of pay-as-you-go, instantly provisioned resources is a core driver in the adoption of cloud computing. This funding model of direct cost attribution to the business user is in stark contrast to the traditional overhead recovery IT model.
While cloud computing can reduce the cost and complexity of provisioning computational capabilities, it also can be used to build new shared service centers operating with greater effectiveness “at the edge” of the business where there’s money to be made. Front office requirements focus on people, expertise and collaboration in any-to-any combinations.
According to Dot Cloud, “There will be many ways in which the cloud will change businesses and the economy, most of them hard to predict, but one theme is already emerging. Businesses are becoming more like the technology itself: more adaptable, more interwoven and more specialized. These developments may not be new, but the advent of cloud computing will speed them up.”
There are many benefits to the various cloud computing models. But for each benefit, such as cost savings, speed to market and scalability, there are just as many risks and gaps in the cloud computing model.
The on-demand computing model in itself is a dilemma. With the on-demand utility model, enterprises often gain a self-service interface so users can self-provision an application or extra storage from an Infrastructure as a Service provider. This empowers users and speeds up projects.
The flip side: such services may be too easy to consume. Burton Group Inc. analyst Drue Reeves, speaking at the firm’s Catalyst show last week, shared a story of a CIO receiving bills for 25 different people in his company with 25 different accounts with cloud services providers. Is finance aware of this, or will it be in for sticker shock?
Lack of governance can thus be a problem. The finance department may have to address users simply putting services on a credit card, and there’s also the issue of signing up for services without following corporate-mandated procedures and policies for security and data privacy. Does the information being put in the cloud by these rogue users contain sensitive data? Does the cloud provider have any regulatory compliance responsibility, and if not, then is it your problem?
There are several other big what-ifs regarding providers. For example, do they have service-level agreements (SLAs)? Can you get an SLA that covers security parameters, data privacy, reliability/availability and uptime, data and infrastructure transparency?
The main issue is that you can’t see behind the [cloud providers’] service interface, so you don’t know what their storage capabilities really are, or what their infrastructure really is … so how can you make SLA guarantees [to users]?
Furthermore, would the provider be able to respond to an e-discovery request? Is that on the SLA, and is that information classified, easily accessible and protected?
For some companies, a lack of an SLA is not an issue. For CNS Response Inc., a psychopharmacology lab service that provides a test to help doctors match the appropriate drug to a behavioral problem, not having an SLA with Salesforce.com Inc. was a moot point.
But is this good enough for a large enterprise? That question remains, and experts said it will be up to customers to push vendors to provide appropriate SLAs.
In fact, a big message at the show was pushing vendors to do such things as:
Have open application programming interfaces (APIs). There is an inability to monitor and manage APIs on many levels. Customers cannot see where their data resides at their cloud provider, and more importantly, there is no application or service management layer to gain visibility into the performance and management of the application.
There has to be a management layer so customers can see what and where their assets are for the cloud, what systems are used by which applications. Just think of the cloud as your own data center.
Create fair licensing schemes. Enterprises should be pushing cloud providers to move away from licensing based on physical hardware and compute resources to licenses based on virtual CPUs, managed or installed instances and user seats.
Which brings up another significant what-if: What happens to your data in a legal entanglement?
What if you miss paying a bill, or decide not to pay a bill for various reasons, like dissatisfaction with the service? Do you lose your data? Is access to your data put on hold?
There are a lot of questions as to who ultimately owns the data for e-discovery purposes, or if you decide to switch providers. Will you have to start all over if you didn’t put the code in escrow, for example?
Cloud computing touts many benefits, but Burton experts at the show said enterprises need to be aware of the what-ifs: What does this really mean for my bottom line, how do I govern this, who really has access to my data and what do the cloud computing providers really have to offer?

Introduction to Security as a Service

The mission statement of the Cloud Security Alliance is “… a non-profit organization formed to promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing.” In order to provide greater focus on the second part of our mission statement, the CSA is embarking on a new research project to provide greater clarity on the area of Security as a Service. A whitepaper will be produced as a result of this research, which will also be considered to be a candidate new domain for version 3 of the CSA guidance.
Numerous security vendors are now leveraging cloud-based models to deliver security solutions. This shift has occurred for a variety of reasons, including greater economies of scale and streamlined delivery mechanisms. Regardless of the motivations for offering such services, consumers are now faced with evaluating security solutions that do not run on premises. Consumers need to understand the unique nature of cloud-delivered security offerings so that they are in a position to evaluate the offerings and understand whether they will meet their needs.
The purpose of this research will be to identify consensus definitions of what Security as a Service means, to categorize the different types of Security as a Service and to provide guidance to organizations on reasonable implementation practices. Other research purposes will be identified by the working group.
