Measuring Risk Under PCI 3.0 (Part I)
Release time for PCI DSS 3.0 is just around the corner, and by all accounts it really looks like the PCI Security Standards Council (SSC) has come to realize the importance of a consistent, living process versus a ‘snapshot in time’ approach to risk management.
According to Troy Leach (CTO of the PCI SSC), "We have incorporated policy and ongoing risk assessment throughout the standard. There is no requirement for more reports than an annual validation, but that's just a snapshot in time. What we're hoping with this is that, through the process, there is more regularity of checking by the merchant as the environment changes."
From the sound of things, we’re about to get a shot in the arm and a huge push towards lifecycle compliance. I’m excited by this prospect, but it also brings up a number of questions. For one, what does a ‘risk assessment’ look like in this new 3.0 vision for PCI compliance? Will we be seeing more in-depth guidance? How will validation occur? And so on. I’m definitely curious...
Risk assessments are currently covered by requirement 12.1.2, which is kind of nebulous and open to interpretation. The requirement calls for a policy and process for risk assessments, and a review of documentation to ensure that the risk assessment process is performed at least annually. That’s all well and good, but other than a minor blurb about suggested options (“Examples of risk assessment methodologies include but are not limited to OCTAVE, ISO 27005 and NIST SP 800-30”), there’s not much more to it.
There’s also another problem here. The topic of risk management and risk assessment practices is a hotly debated one. There’s no real industry consensus – and there are literally dozens of different ways to go through the process. Some are just way too complex; some are way too simple. Some are plain silly, while others are arbitrary and poorly thought out. How then will the PCI Council sort out this mess and establish a meaningful expectation for measuring risk?
Realizing that things like organizational culture and environmental context play a huge role in deciding which direction to move in would be a pretty good start. Then again, that’s probably one of the reasons they left the whole thing so nebulous to begin with. As it reads right now, they really don’t care WHAT you do – just that you do SOMETHING. It’s up to your assessor to decide whether or not you meet the bar for ‘due diligence and due care’.
Perhaps what we really need is a sensible framework for risk assessments that can be dropped into nearly any organization. Something that is quick, accurate, and easy; a methodology that strips out the nonsense and gets right to the heart of the matter. Here at SecureITExperts, we place tremendous emphasis on four key criteria that underlie everything we do – the security guidance we offer has to be Clear, Concise, Meaningful, and Actionable. The same four criteria need to be applied to risk assessments as well.
We should also be working to adopt methods that are transparent, community-based, and data-driven. There’s no need to reinvent the wheel. There’s no need for a ‘cool new methodology’ that someone can brand as being ‘unique’. A common taxonomy based on open standards and shared ideas can greatly simplify the process of conducting risk assessments – and can also help enable better conversations with business leaders, key stakeholders, and even amongst ourselves.
Sure, OCTAVE, ISO 27005, and NIST SP 800-30 are commonly accepted best practices within the industry for conducting risk assessments, but do they meet the bar for being SMART security practices? For being Clear? For being Concise? For being Meaningful? That all depends….
Actionable? Maybe if the outcome you’re looking for is a laundry list of best guesses and hopeful finger-crossing.
Let me be clear on one important point though – the standards themselves are, for the most part, perfectly fine. It’s just that in most cases they’re overly burdensome and require far more time, effort, energy, and resources than your security department probably has available to burn. The result? A well-meaning, but half-hearted attempt to list the problems you already know you have.
Now it’d be pretty easy for me to just sit here and chastise the PCI council for its lack of guidance, the industry for its lack of consensus, and security professionals for our lack of insight. It’s really easy to be a backseat driver, or an armchair quarterback. In other words, it’s really really easy to be a dick! Being a dick doesn’t really help anyone though – it just calls attention to known issues without offering a solution or an idea for consideration.
So let’s shift the conversation, throw open the kimono, and talk about our options. What I’m going to propose here is a very simple idea – one that cobbles together a few other people’s even better ideas. It meets all of the goals and objectives I previously stated, and it’s a pragmatic approach to addressing a complex problem. It creates a framework that others can build upon, enhance, and mold to best fit their needs. No, it’s not a golden unicorn or a silver bullet, but perhaps it can help move us forward a bit.
The basic idea is to leverage the Verizon DBIR as a data source, VERIS as a taxonomy for identifying threat scenarios, the SANS Top 20 Critical Controls as a common control library, Binary Risk Analysis as the analysis method itself, and a properly framed risk register as the output. I’m also considering various visualization techniques that can help drive the process, and make it palatable on the business side.
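To make that a little more concrete, here’s a minimal sketch of what a risk-register entry built on those pieces could look like. The field names, the rating lookup, and the example scenario are my own illustrative assumptions – this is not the official VERIS schema, nor the actual Binary Risk Analysis question set, just the general shape: a VERIS-style threat scenario (actor, action, asset, attribute), a likelihood/impact result from the analysis step, and a mapped control from the control library.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatScenario:
    """A VERIS-style 'A4' scenario: Actor, Action, Asset, Attribute."""
    actor: str      # who caused the event (e.g. "external / organized crime")
    action: str     # what they did (e.g. "hacking / SQL injection")
    asset: str      # what was affected (e.g. "cardholder database")
    attribute: str  # how it was affected (e.g. "confidentiality")

@dataclass
class RiskRegisterEntry:
    scenario: ThreatScenario
    likelihood: str  # hypothetically, the output of a Binary Risk Analysis session
    impact: str
    controls: list = field(default_factory=list)  # mapped control-library IDs

    def rating(self) -> str:
        """Toy lookup table combining likelihood and impact into one rating.

        The real analysis method would define its own combination rules;
        this mapping is purely illustrative.
        """
        scale = {"low": 0, "medium": 1, "high": 2}
        score = scale[self.likelihood] + scale[self.impact]
        return ["low", "low", "medium", "high", "high"][score]

# Example entry for a common DBIR-style breach pattern (illustrative values).
entry = RiskRegisterEntry(
    scenario=ThreatScenario(
        actor="external / organized crime",
        action="hacking / SQL injection",
        asset="cardholder database",
        attribute="confidentiality",
    ),
    likelihood="high",
    impact="high",
    controls=["CSC-6 Application Software Security"],  # hypothetical mapping
)
print(entry.rating())  # -> high
```

The point of keeping the structure this plain is that each field traces back to an open, shared source – the scenario vocabulary, the control library, the analysis method – so two different organizations filling out the same register can actually compare notes.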
Now that I have your attention, stay tuned for Part II.