12 December, 2007

ROSI

The folks at Intel posted a very interesting paper on ROSI (Return on Security Investment). As I mentioned in my blog on metrics, determining the value of your security investment is very difficult; some may argue it's not even possible. What is so interesting about the Intel report is that it's both easy to read and based on an actual past implementation. In other words, they talk about what they did. I think this paper is a wonderful start. However, I'm not sure just how effective the method would be. Overall, their approach is: measure all the incidents that happened in the past (they used two years of data), estimate the average cost per incident, and multiply to get a total cost. Then implement your security controls and count how many incidents you have afterward. The delta is your savings (I sketch the arithmetic in code after the questions below). While a very effective starting point, it leaves me with two questions I can't figure out.
  1. What happens with this method when your new security program mitigates incidents you never detected in the first place? For example, let's say you counted 400 incidents in your organization last year, but there were really 500. When you implement your new security measures, your incidents drop to zero. Your delta is off by 100 incidents. I'm nitpicking here, but your security program actually has a far greater ROI than the method reports. The reason I'm concerned about this is that a good security program mitigates threats and vulnerabilities you did not know about.
  2. What I'm even more concerned about is that good security includes good detection. What happens if you start with 400 incidents, then implement security controls that include good detection? All of a sudden you are detecting many more incidents than you ever would have detected before. Even though total incidents may have gone down, your improved detection capabilities make management perceive they have gone up.
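
Here is a minimal sketch of the delta arithmetic as I read it from the paper. The incident counts, the avg_cost figure, and the program cost are all made up for illustration; they are not Intel's numbers.

```python
# Hedged sketch of the ROSI delta method described above.
# All figures are hypothetical; Intel's paper supplies its own data.

def rosi_savings(incidents_before, incidents_after, avg_cost_per_incident):
    """Savings = (incidents avoided) * (average cost per incident)."""
    return (incidents_before - incidents_after) * avg_cost_per_incident

def rosi(savings, program_cost):
    """Classic ROI ratio: net benefit divided by the investment."""
    return (savings - program_cost) / program_cost

baseline = 400            # incidents counted over the measurement period
after_controls = 50       # incidents counted after the program is in place
avg_cost = 10_000         # hypothetical average cost per incident, in dollars
program_cost = 1_000_000  # hypothetical cost of the security program

savings = rosi_savings(baseline, after_controls, avg_cost)
print(f"Savings: ${savings:,}")                    # Savings: $3,500,000
print(f"ROSI: {rosi(savings, program_cost):.0%}")  # ROSI: 250%
```

Note that if the true baseline were 500 rather than 400 (my first question above), the savings line would understate the benefit by 100 * avg_cost, i.e. $1,000,000 in this toy example.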

I commend Intel for what appears to be a great start. However, I just can't believe ROSI is as simple as counting incidents and measuring deltas. Just look at TJX: they have had just one security incident in the past three years (that we know about).

1 comment:

Anonymous said...

Lance,

Thanks for taking the time to read the whitepaper and provide comments. In response to your questions:

The method is not intended to be hyper-precise. In fact, it was designed and applied to achieve the accuracy necessary for better business decisions. In your example, you posit 100 incidents that were never detected. Taking a pragmatic view, it is probably safe to assume the collective impact of the unnoticed incidents was not material; if it were material, the incidents would have been noticed at some level. So, given that we are trying to evaluate the value of something attempting to effect change, the 100 incidents carry little weight in the big picture.

Additionally, this method is purposely conservative and defensible. You cannot measure what you don't know. A 'good' program is relative to the impact it has versus the cost to deliver it. How can a program be 'good' if it has no measurable effect on the environment? The effect of a program on hidden or immaterial incidents will not be taken into account by the audience, as they are unaware of any value impact anyway.

I too believe detection is a critical piece of any defense-in-depth strategy. We have successfully embraced such a posture as it applies to the technical as well as the behavioral aspects of cyber security. I have a blog on defense-in-depth here - http://communities.intel.com/openport/blogs/it/2007/10/29/defense-in-depth-information-security-strategy or you can listen to Malcolm Harkins, General Manager of Intel Risk and Security, talk about it here - http://blogs.intel.com/it/2007/09/intels_layered_approach_to_inf.php.

One limitation of this method is that it applies only to programs which reduce the occurrence of incidents, not to programs which reduce the effects of those incidents. Detection only halfway falls into this category: detection is worthless if the downstream capabilities cannot use the information to reduce the losses caused by the incidents. Investment in better detection is only worthwhile if you can act on the information it provides. If your investment makes that happen, then either the number of successful incidents will fall, or the average loss per incident will be recalculated downward based on the better data. Either way, the value becomes evident in the ROSI methodology.
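
To illustrate the two paths to value described above, here is a small extension of the earlier sketch. Again, every figure is hypothetical and only shows the shape of the calculation, not Intel's actual data.

```python
# Two ways a detection investment can surface in the ROSI delta,
# per the comment above. All numbers are hypothetical.

def savings(n_before, n_after, loss_before, loss_after):
    """Savings from fewer incidents and/or lower loss per incident."""
    return n_before * loss_before - n_after * loss_after

# Path 1: acting on detections prevents incidents outright.
print(savings(400, 300, 10_000, 10_000))  # 1000000

# Path 2: incident count is flat, but earlier detection and response
# cut the average loss per incident (recalculated from better data).
print(savings(400, 400, 10_000, 6_000))   # 1600000
```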

In the end, this is but one methodology, and it does not fit all (or even all that many) situations, given its heavy data requirements and the overall effort to produce it. The industry has a long way to go, and I can only hope this is one modest step in the right direction.

Cheers,

Matthew Rosenquist