Channel: SecureITExpert » Compliance

Compliance: Measuring Risk Under PCI 3.0 (Part II)


If you missed Part I of this series, you can read it here.


I talked a LOT about 'problems' in the first part of this series. Now I want to talk about solutions - or at least the concept that I hinted at before: "The basic idea is to leverage the Verizon DBIR as a data source, VERIS as a taxonomy for identifying threat scenarios, the SANS Top 20 Critical Controls as a common control library, Binary Risk Analysis as the analysis method itself, and a properly framed risk register as the output. I’m also considering various visualization techniques that can help drive the process, and make it palatable on the business side."

Here's a deeper explanation...

1. We Need Good Data:

Real risk measurement is a highly data-driven process. Crap in equals crap out. So how do we avoid the crap, and find the data we need to fulfill our common PCI-related risk assessment needs? One answer is the annual Verizon Data Breach Investigation Report (DBIR). http://www.verizonenterprise.com/DBIR/2013/

Whether or not you ‘like’ Verizon is pretty much irrelevant. They’ve been at the breach statistics game for a while now. They’ve been able to mature their processes, penetrate the veil of breach reporting, and build a solid reputation as an industry leader in this particular area. The list of organizations supporting their efforts grows longer and longer each year, their pool of data grows larger and larger each year, and their comparative analysis capabilities get better and better each year.

Best of all, they rely on hard data, on real security incidents and breach reports, on what’s actually happening day in and day out on the front lines. They aren’t measuring perceptions and making a bunch of wild-ass guesses. They’re doing solid data analytics work. Like it or not, ignoring what they have to offer could turn out to be a pretty piss-poor career move.

2. We Need a Common Taxonomy:

When it comes to discussing the relationship between PCI requirements and the types of security breaches that impact cardholder data, I stand firm in my conviction that the Vocabulary for Event Recording and Incident Sharing (VERIS) taxonomy is a powerful tool. It doesn’t hurt that VERIS is at the heart of the DBIR as well, offering up some pretty nice symmetry. http://www.veriscommunity.net/doku.php

I’ll provide more details a bit later on, but for now I want to focus on the value of the VERIS schema and why I think it’s such an important part of the risk assessment framework I’m suggesting. As discussed in all of their literature, there are at least four critical data points that MUST be captured in order to perform proper data analysis against a multitude of breach reports:

• Actors: Whose actions affected the asset?
• Actions: What actions affected the asset?
• Assets: Which assets were affected?
• Attributes: How was the asset affected?
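To make the four A’s concrete, they map naturally onto a simple record structure. Here’s a minimal sketch in Python – the field names and example values are my own illustration, not the exact VERIS JSON schema:

```python
# A minimal sketch of the VERIS "four A's" as a record structure.
# Field names and values are illustrative, not the official VERIS schema.
from dataclasses import dataclass

@dataclass
class Incident:
    actor: str       # Whose actions affected the asset? (external, internal, partner)
    action: str      # What actions affected the asset? (malware, hacking, misuse, ...)
    asset: str       # Which assets were affected? (server, user device, media, ...)
    attribute: str   # How was the asset affected? (confidentiality, integrity, availability)

# One hypothetical breach record, framed the VERIS way:
incident = Incident(
    actor="external",
    action="hacking",
    asset="POS server",
    attribute="confidentiality",
)
```

Capture those four data points consistently across many incidents and you have something you can actually aggregate and analyze – which is exactly what the DBIR does at scale.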

Here I’ll draw from NIST SP 800-30 (Rev 1) for a bit of an added explanation. It’s a simple, but highly accurate portrayal of how risks manifest within an information ecosystem.

[Image: risk model diagram from NIST SP 800-30 Rev 1]

For any risk assessment methodology to work, we need it to include these elements. Since they’re already present within the VERIS taxonomy (and since VERIS is an openly shared, community-driven, widely adopted standard), it makes more than just a little sense that we adopt part of it to help us frame up our risks. Now we just need a way to represent our current defensive posture (i.e. control strength).

Let’s also return briefly to our conversation about the importance of a data-driven assessment model. We’ll be using the Verizon DBIR, matched up with VERIS in order to establish our threat scenarios (I’ll get to that soon enough). When it comes to controls though – our focus needs to be slightly different. The data we’re looking for here is all about control efficacy, but this one is a tough nut to crack.

Most control mechanisms in use today are reflective of ‘best practices’ within the industry. And there are many, many different options to choose from. For the most part, they are far too broad to be of any real use at a granular, breach-oriented level. Many of them are also based on perceptions, not data – and adhere to a ‘one-size-fits-all’ consensus approach.

To continue with our theme so far, we need to choose a control library that is, itself, based on actual data – not perceptions. In other words, if we were to look deeply at the threat data in the DBIR and map our threat scenarios to a control framework that was actually built using the same data (or data similar to it), we’d be right on target.

As of right now, the most prescriptive, data-driven control library available to us (and one granular enough to meet our needs at a risk analysis level) is the SANS ‘Critical Controls for Effective Cyber Defense’ (version 4.1 being the most current). http://www.sans.org/critical-security-controls/

Why the Critical Controls above all other potential resources? Because, as stated in their document:

“The strength of the Critical Controls is that they reflect the combined knowledge of actual attacks and effective defenses of experts in the many organizations that have exclusive and deep knowledge about current threats... Top experts from all these organizations pooled their extensive first-hand knowledge of actual cyber attacks and developed a consensus list of the best defensive techniques to stop them. This has ensured that the Critical Controls are the most effective and specific set of technical measures available to detect, prevent, and mitigate damage from the most common and damaging of those attacks.”

Not to mention what the Verizon DBIR team had to say about it. Just read the section on conclusions and recommendations from the 2013 report. They’ve come to the realization that these top 20 critical controls are in direct alignment with the types of steps an organization must take in order to defend against attackers – again, based on real data! They even include a high-level map:

[Image: DBIR high-level mapping of threat categories to the Critical Controls]
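In code, that kind of map is just a lookup from VERIS-style threat actions to candidate controls. The pairings below are my own illustrative guesses, not the official DBIR or SANS mapping – verify any control numbers against the v4.1 document before relying on them:

```python
# Hypothetical mapping from VERIS threat-action categories to SANS
# Critical Controls (v4.1 numbering). These pairings are illustrative
# guesses for demonstration, not an official DBIR or SANS mapping.
THREAT_TO_CONTROLS = {
    "malware": [2, 5],    # software inventory, malware defenses
    "hacking": [4, 20],   # vulnerability assessment, penetration testing
    "misuse":  [12, 14],  # admin-privilege control, audit-log monitoring
}

def controls_for(action):
    """Return the candidate Critical Controls for a VERIS action category."""
    return sorted(THREAT_TO_CONTROLS.get(action, []))
```

The point isn’t the specific pairings – it’s that once both the threat data and the control library speak the same data-driven language, connecting them becomes a mechanical lookup instead of a judgment call.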

Personally, I expect to see the 2014 DBIR demonstrate even tighter ties with the critical controls approach. Again, there’s a relationship between all of these different components – the taxonomies and the tools – that can be leveraged to simplify a complex problem. Now let’s get to the really fun part.

3. We Need a Simple Methodology

I’ve already alluded to a few problems with some of the commonly adopted risk assessment methodologies in use today. Chief among them are the resource overhead that’s often attached to the process, and the limited value of the output that results from it. Your mileage will vary of course – depending on several factors. But most seasoned security professionals will agree with the sentiment that risk assessments can be a serious pain in the ass, bordering on being a complete waste of time.

We need speed, accuracy, and simplicity if we’re going to make risk assessments work in the real world. And there’s no single risk assessment/analysis technique better suited to our particular task than the Binary Risk Analysis method offered up by Mr. Ben Sapiro. While not perfect (no assessment methodology is), it hits on every key factor that we’ve identified so far. It gets us where we need to go in a timely, effective, and efficient manner. What more could you ask for?

Ben has even boiled the entire risk analysis process down to the point where all of its steps fit on a single sheet of paper. Seriously, you should check it out: https://binary.protect.io/
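To give a feel for the binary question-and-answer spirit of the method, here’s a toy scorer in Python. These are NOT Ben Sapiro’s actual questions or scoring rules – see the one-page worksheet at binary.protect.io for the real thing – just an illustration of how yes/no answers can roll up into a coarse rating:

```python
# Toy illustration of binary-style risk scoring. The questions and the
# scoring thresholds here are invented for demonstration; they are not
# the actual Binary Risk Analysis worksheet.
def binary_risk(answers):
    """Combine yes/no answers into a coarse likelihood x impact rating."""
    likelihood = sum(answers[q] for q in ("common_skills", "common_resources", "weak_controls"))
    impact = sum(answers[q] for q in ("critical_asset", "high_business_impact"))
    score = likelihood * impact
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

rating = binary_risk({
    "common_skills": True,         # attack needs only commonly available skills?
    "common_resources": True,      # attack needs only commonly available resources?
    "weak_controls": False,        # are current defenses insufficient?
    "critical_asset": True,        # is the affected asset critical?
    "high_business_impact": True,  # would a breach significantly harm the business?
})
# likelihood = 2, impact = 2, score = 4 -> "high"
```

Every input is a yes or a no – no agonizing over whether something is a 3.5 or a 4 on somebody’s arbitrary scale – which is exactly what makes this kind of approach fast enough to run against dozens of threat scenarios.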

The only two things we really need are a well-defined threat scenario and an understanding of our defensive posture as it relates to that threat scenario. So far we’ve baked in the Verizon DBIR, VERIS, and the Critical Controls to make this possible. Now all we need to do is push the data down the chute and see what comes out on the other end.

Part III of this series will actually go through the process and demonstrate how everything fits together – so stay tuned for more info.

4. We Need an Actionable Outcome

This is where the rubber meets the road. All too often THIS is where risk assessments fail! Within our industry, we’re used to seeing a final ‘report’ at the end of an assessment that provides ‘findings and recommendations’. Sometimes there’s a prioritized list of actions that should be taken – sometimes not. MOST times, what we end up with though is a laundry list of ‘to do’ items ranked via some sort of prioritization voodoo.

So what happens to most risk reports? They end up on someone’s desk, usually in a pile of other action items they’ll never have the time, resources, or budget to get to. Think I’m lying or painting a ‘darker than truth’ picture here? Okay, go back through your annual risk assessments and count up the number of items that show up again and again year after year. You may have several, or you may just have one or two that everyone ‘acknowledges’, but no one has a clue how to fix. Yes, we ALL have them!

So instead of killing trees, ratcheting up shareholder value with the paper companies, and rewarding assessors who ‘charge by the pound’, how do we go from where we are to where we should be? Well first of all, the process itself is largely to blame – but we’ve already outlined some of the steps we’re going to take to get past that little hurdle.

The way we build and deliver the report itself is what I think causes most problems. Not only do you have to read through 30 to 300 pages of background material, but the ‘prioritized remediation plan’ that may or may not be included in the back is often enough to make anyone’s head spin. It’s overwhelming and unrealistic to achieve. The whole thing is largely arbitrary (depending on who did the assessment, what methodology they used, and what experience/biases they brought with them into the process).

When I’m doing risk assessments myself, and especially when I am the recipient of one, I want to see clear, concise, meaningful, and actionable tasks that I can assign and track against. I want them to be simple, measurable, achievable, realistic, time-based, and include ways to evaluate and reevaluate progress throughout the entire process. Yep, I want it to be SMART – SMARTER than SMART even!

This sounds like it has all of the hallmarks of a ‘project’. So who better to consult than project managers? If you talk to a project manager, they’ll probably give you a whole lot of great advice based on the PMBOK or PRINCE2 methodologies, but again, going simple, I want to zero in on an action tool. And that’s what we’re going to find present in what’s called a ‘risk register’.

A risk register is a simple tool used to explain and track project risks. It’s not limited to projects though – and can be easily adapted to meet our needs. If we realign the standard format to include our threat scenario, applicable controls, risk rating criteria from the BRA, and a few other basic odds and ends, we can actually dump the traditional ‘report’ altogether and go straight to action!
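As a sketch of what a realigned register row might look like, here’s one possible structure in Python. The field names and the example entry are my own suggestion, not a formal standard:

```python
# A sketch of a risk-register row realigned for this framework.
# Field names and the sample values are illustrative suggestions.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    threat_scenario: str       # VERIS-framed scenario drawn from DBIR-style data
    applicable_controls: list  # relevant SANS Critical Controls
    likelihood: str            # from the Binary Risk Analysis
    impact: str                # from the Binary Risk Analysis
    risk_rating: str           # combined rating
    owner: str                 # who is accountable for the action
    action: str                # the SMART remediation task
    due_date: str              # time-bound target
    status: str = "open"       # tracked and re-evaluated over time

entry = RiskRegisterEntry(
    threat_scenario="External actor uses stolen credentials against POS systems",
    applicable_controls=["CSC 16: Account Monitoring and Control"],
    likelihood="high",
    impact="high",
    risk_rating="high",
    owner="IT Operations",
    action="Enforce two-factor authentication on all remote POS access",
    due_date="2014-06-30",
)
```

Each row is a self-contained, assignable, trackable unit of work – no 300-page report required before anyone can act.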

We’ll cover the risk register and how it works in more depth in Part III.

5. We Need a Self-Correcting Model

Not just a self-correcting model, but a self-sustaining one that grows and matures with us as we get better at risk management. This is only possible if we contribute something back into the fold. If you add up everything we’ve discussed so far, there’s a clear path we’ve been laying out. We use breach data and a breach-related taxonomy to initiate the entire process. We then feed those things into our risk assessment methodology to generate our risk register and prioritized remediation plans.

A process like this can only mature if there’s a continuing influx of new data. In other words, if you aren’t contributing your own incident and breach data as part of the annual DBIR process, then you’re doing yourself, and the security community as a whole, a huge disservice. New data helps us identify new threats, vulnerabilities, and risks; new trends and new challenges. If our data becomes stagnant – or falls below the threshold of being ‘statistically significant’ then the whole process begins to degrade and collapses in on itself.

In essence, we get out of it what we put into it. By working together and sharing our data, we’re also leaping over another huge hurdle that we’ve faced for years (though things have been getting a bit better recently) – all the guesswork that goes into deciding things like ‘likelihood’ and ‘impact’. You need data to make these kinds of statistical guesses. The more data you have to draw from (relevant data that is), the better your guesses will be. Obviously working in isolation defeats this process – so take part and add to the conversation!


...Well that about covers things for now (which really means that I have a lot more to say but this article is already too long ;-).

In Part III of this series I’ll actually walk you through the process step by step and help shed some light on how this whole thing works.

In Part IV I’ll add some additional tidbits regarding other benefits that can be derived from this approach. I’ll also talk about other tools and techniques that can be added if you want to take a deeper dive – or if you need to wrap a business case around addressing a particular risk.

Stay tuned….
 
