
Compliance: Measuring Risk Under PCI 3.0 (Part IIIc)


After working on this series of articles for a while now, I've decided that my original Part III is simply too lengthy for a single post. It goes into a lot more detail – background, explanations, etc. With that in mind, I'm breaking each step of the process out into its own separate post.

If you've missed the first few parts, you can find them here:

Part I - Overview
Part II - Tools and Techniques
Part IIIa - Getting Your Priorities in Order
Part IIIb - Defining Your Threat Scenario(s)


Step 3. Estimate the Strength of Your Controls.

This part might get a little hairy for some people.

Depending on how far down the rabbit hole you want to go, a conversation about controls can get pretty in depth. What we really want is a quick and easy way to determine if the security controls we have in place are adequate to the task.

Sadly my research hasn’t turned up a good mechanism for a quick reference control framework – so… I’ve cobbled one together here. It requires some explanation though, so bear with me for a moment.

Continuing with the theme of using existing tools and techniques (and wanting to incorporate the SANS Top 20 Critical Controls for Effective Cyber Defense), this control framework is an initial attempt to insert the Critical Controls into a VERIS-based taxonomy.

I won’t be laying out the entire list of controls here. I’ll just be providing a conceptual approach for those that are interested. Perhaps at some point in the future I (or someone else) will build on this idea and produce a complete reference list, for all 20 Critical Controls, in language similar to the structure of VERIS.

Towards that end, there’s already a tool that may be of some use. SANS has provided a mapping worksheet that cross references each threat identified under VERIS (as part of the annual DBIR) with the controls that would best mitigate that threat.

More: http://www.sans.org/critical-security-controls/critical_security_controls_v4.0.pdf

It’s worth taking a look at – even just as a basic point of reference. As an added bonus, there’s also a relational mapping made between the various threats and how they relate to the ‘Human Risks’ that SANS has identified as part of its Securing the Human project.

More: http://www.securingthehuman.org/

For the purpose of our discussion here, we’ll continue to focus on spyware – building off of the threat scenario we defined in step 2. If we look at the Critical Controls map mentioned above, there are several applicable controls (and sub-controls).

[Image: applicable Critical Controls (and sub-controls) mapped to the spyware threat scenario]

This is where things can get hairy and complicated; mainly because there’s a serious problem that arises when we fail to delineate between direct and indirect control systems. If we were to include all of the critical controls that are mapped to spyware, we’d start to get bogged down and mired in unimportant details.

Personally I have no interest in managing a spreadsheet (or even a database) that goes to this level of detail:

[Image: spreadsheet mapping every sub-control to a single threat scenario in full detail]

Sure, it’s fine when you’re talking about a single threat scenario, but what about 100, 500, or 1000 different permutations of action-->asset-->vector? It’s just not sustainable.

Instead what we need is a SEPARATE mechanism for identifying and evaluating the strength of our controls. One that we can manage at a detailed level, but draw composite scores from for our analysis process.

If you actually model out all of the critical controls and their constituent parts, evaluate each sub-control as a separate action item, and assign it a ‘strength value’, you can then use the resulting resource as a key part of the process outlined in this series.

So then the question becomes “What should we use as our rating criteria?” And “What rating scale should we use to measure it?” The first question is answered by looking at the sub-controls under each Critical Control. They’re pretty well defined (which is one of the reasons I’ve chosen to include the Critical Controls as a core component of this process).

The second question can also be answered pretty easily – though you may find that more appropriate measurement tools already exist within your organization. If so, you should probably use them – especially if they are already well socialized within the organization. If not, there’s an alternative I’ve been using for a while now that works quite well.

As a part of this overall framework, and for the sake of simplification, I’d actually suggest an evaluation of each sub-control under each Critical Control using the ISO 15504 rating scale (modified slightly for consistency in language):

  • N (Not achieved)—There is little or no evidence that the primary objective of the sub-control has been met (0 to 15 percent achievement).
  • P (Partially achieved)—There is some evidence of an approach to, and some achievement of, the primary objective of the sub-control. Some aspects of the sub-control, as currently implemented, may produce unpredictable or unreliable results (15 to 50 percent achievement).
  • L (Largely achieved)—There is evidence of a systematic approach to, and significant achievement of, the primary objective of the sub-control. Some weaknesses may still be present in how the sub-control has been implemented, but they are not material to the overall effectiveness of the sub-control itself (50 to 85 percent achievement).
  • F (Fully achieved)—There is evidence of a complete and systematic approach to, and full achievement of, the primary objective of the sub-control. There are no known weaknesses associated with the implementation of this sub-control at this time. (85 to 100 percent achievement)
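As a rough sketch, the bands above translate into a simple lookup. Note that the boundary handling (which band the exact 15, 50, and 85 percent marks fall into) is my own assumption here – the scale itself doesn't dictate it:

```python
def iso15504_rating(pct_achieved):
    """Map a percent-achievement estimate (0-100) to an ISO 15504-style
    rating letter plus a numeric score (1-4) for later composite math.
    Exact boundary values (15, 50, 85) fall into the lower band here."""
    if pct_achieved > 85:
        return ("F", 4)  # Fully achieved
    if pct_achieved > 50:
        return ("L", 3)  # Largely achieved
    if pct_achieved > 15:
        return ("P", 2)  # Partially achieved
    return ("N", 1)      # Not achieved

print(iso15504_rating(60))  # ('L', 3)
```

The numeric score (1 through 4) is what we'll lean on later when rolling sub-controls up into a composite.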

When we get to our discussion regarding the Binary Risk Assessment Process, you’ll see why I’ve chosen this particular rating scale. Now let’s jump into an example – harkening back to our discussion about malware.

If we’ve sat down and built out our rated control library, then we should have a pretty good idea of where we’re at with each sub-control and each of the main controls as well. Since Critical Control 5 deals with Malware, let’s grab one of its sub-controls at random and see what this might look like.

  • CSC 5.4: Configure systems so that they conduct an automated anti-malware scan of removable media when it is inserted.

This one is pretty simple, but there ARE some sub-controls that have several parts to them. It’s up to you whether you want to break the more complicated sub-controls down even further in order to increase focus and accuracy, but that’s largely a personal preference issue (though I’m assuming most of us are dealing with pretty substantial time and resource constraints).

Assuming we’ve fulfilled all of the prerequisite sub-controls (yes, there are parallel and sequential steps present within all of the Critical Controls) – mainly that we have actually installed an anti-virus solution – then it’s usually a pretty simple question of whether or not this feature is turned on.

Let’s think a bit bigger though – and incorporate the standard security paradigm of people, process, and technology into our equation. Is there a policy statement that requires this? Is there a procedure for enabling and maintaining it? Is it a feature that the end-user could disable? If it were disabled, would an alert be generated? If a virus was detected, and the end-user was notified, would they know what to do?

It’s a lot to consider – and you should be thinking holistically about each sub-control, but you don’t need to make a science of it.

What you’re really trying to assess is whether or not the primary objective of the sub-control has been met. Let’s assume that it has, and we’ve rated this sub-control as ‘Fully Achieved’.
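To make that concrete, here's one hedged way to turn the people/process/technology questions above into a rough achievement estimate for CSC 5.4. The check names below are hypothetical – substitute whatever your own evaluation actually covers:

```python
# Hypothetical yes/no checks for CSC 5.4, drawn from the questions above.
csc_5_4_checks = {
    "policy_requires_scanning": True,   # Is there a policy statement requiring it?
    "procedure_exists": True,           # Is there a procedure to enable/maintain it?
    "user_cannot_disable": True,        # Is the end-user prevented from disabling it?
    "alert_on_disable": True,           # Would disabling it generate an alert?
    "user_knows_response": True,        # Would a notified user know what to do?
}

# Crude percent-achievement estimate: the share of checks that pass.
pct_achieved = 100 * sum(csc_5_4_checks.values()) / len(csc_5_4_checks)
print(pct_achieved)  # 100.0 -> 'Fully achieved' on the rating scale
```

Treating each question as equally weighted is a simplification; the point is just to force a holistic look before assigning the rating.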

Now what about the rest of our applicable sub-controls? What state are they currently in? What’s the target state and deadline for completion? Is there an existing workplan to reference for details? Etc.

At this point we may have a spreadsheet that looks a bit like this:

[Image: rated control library spreadsheet showing current state, target state, and deadlines per sub-control]

If we look at the control-set that applies to our well-defined threat scenario (as identified in the mapping document we used as a reference) and establish a composite score for it, we may end up with a final value that indicates an overall control rating of ‘3’. That means, taken as a whole, we have ‘Largely Achieved’ what’s necessary in order to prevent spyware from being directly installed on a database server.

Is this indicator good enough? For our immediate purposes, probably so – but personally I’d strive for more accuracy in my own model. I’d probably include a weighting factor to use as a multiplier for scoring each sub-control.

For instance, I may look at the data from the DBIR, my own organizational needs, the risk appetite of the company, and other considerations, and add a weighting factor of some sort, like:

1. Optional; nice to have.
2. Desired; should have.
3. Required; must have.

Maybe there’s another rating for ‘critical’ that you add into the mix, but a multiplier of 1 to 3 should be sufficient for most folks.

This way you’re not just getting a general average score that fails to take your unique organizational context into account. You’ve prioritized your controls a bit – perhaps as the result of a previous risk assessment; leading you to place a certain level of emphasis on key control objectives.
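A weighted composite along those lines might look like the sketch below. The sub-control IDs, ratings (1=N through 4=F), and weights (1=optional through 3=required) are invented for illustration:

```python
# Hypothetical rated sub-controls for the spyware scenario.
sub_controls = {
    "CSC 5.1": {"rating": 4, "weight": 3},  # required; fully achieved
    "CSC 5.4": {"rating": 4, "weight": 2},  # desired; fully achieved
    "CSC 5.7": {"rating": 2, "weight": 3},  # required; only partially achieved
}

def composite_score(controls):
    """Weight each sub-control rating by its multiplier, then average
    and round to the nearest whole rating (1-4)."""
    total_weight = sum(c["weight"] for c in controls.values())
    weighted_sum = sum(c["rating"] * c["weight"] for c in controls.values())
    return round(weighted_sum / total_weight)

print(composite_score(sub_controls))  # 3 -> 'Largely achieved'
```

Notice how the heavily weighted, partially achieved CSC 5.7 drags the composite down to a ‘3’ even though two of the three sub-controls are fully achieved – which is exactly the organizational context a plain average would miss.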

I'd also look to incorporate a few other rating systems as well. If I put the time and effort into building a rated control library that I can use as an authoritative source for risk measurement, I'd want the thing to be useful in other ways too.

Maybe you include a CMM-type rating. Maybe you include a measure of actual control strength (like low, medium, high) or use more quantitative metrics. Maybe you add a Reliability Rating, a Criticality Rating, a Business Impact Rating, etc.

Maybe you insert elements of the CIA Triad or the Parkerian Hexad in order to map to the underlying protective quality of the control.  Maybe you include a mapping aspect that lets you tie each control back to a specific PCI requirement, or some other internal or external security expectation.    

The list goes on...  It really all depends - and you can add anything you want.  Just note that you have a good resource here that you may want to use more broadly than our current prescribed scenario. 

After this step has been completed, across all of the Critical Controls, you’ll need to make sure you maintain it. I’d suggest quarterly reviews (it’s just a simple spreadsheet after all). But, at the very least, I’d make sure there was an annual update cycle that takes place BEFORE my risk assessment needs to be performed. Otherwise your rated control library could become a blocker.

The last thing I want to cover here is adapting the critical controls to resemble our VERIS taxonomy.

Using enumerated values to represent our controls and control states lends itself well to the goal of a common framework. It’s also a way that we can start discussing and automating the measurement of control states in a consistent manner. As I said earlier, I’m not going to map them all out, but here’s the structure I’m using:

1. Identify the control:
    • control.malware.media-scanning
2. Determine its ‘state’:
    • control.rating.fully-achieved
3. Determine its ‘strength’:
    • control.weakness.not-present

This is the minimum amount of information that you’ll want to associate with a particular sub-control/Critical Control. It’s pretty simple stuff. You may want to add some numeric variables as well, but I haven’t found them to be all that useful in this context.

As a side note, that last one (‘control strength’) will become clear very shortly. Just to set the stage for our next topic, it’s representative of a binary condition:

    • control.weakness.present
    • control.weakness.not-present
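Pulling the three attributes together, a minimal record might look like the sketch below. Deriving the binary weakness state from the rating – treating anything short of ‘fully achieved’ as a weakness being present – is my own assumption here, pending the Binary Risk Assessment discussion:

```python
def weakness_state(rating):
    """Binary condition: any rating below 'fully-achieved' is treated
    as a weakness being present (an assumption, not settled yet)."""
    if rating == "control.rating.fully-achieved":
        return "control.weakness.not-present"
    return "control.weakness.present"

# Minimal record for one sub-control, using the enumerations above.
record = {
    "control": "control.malware.media-scanning",
    "rating": "control.rating.fully-achieved",
}
record["weakness"] = weakness_state(record["rating"])
print(record["weakness"])  # control.weakness.not-present
```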

With that said, let’s talk about the Binary Risk Assessment methodology and what we’re going to do with all of this data we’ve been collecting.

'Till next time...

