Friday, December 8, 2017

How about CIP-013? Is that Auditable?

My last two posts have been about plan-based CIP requirements and standards. This isn’t an academic question, since it seems clear that these are the wave of the future. Every important new standard or requirement developed since CIP version 5 has been plan-based (meaning: The entity isn’t required to perform certain actions or to achieve a certain objective, but to develop and implement a plan to achieve the objective). I doubt this will change anytime soon.

My last post was about how, and even whether, a plan-based CIP requirement or standard can be audited. Although I didn’t state this specifically, my answer essentially was that it depends on how the requirement is written. The requirement for a plan needs to include criteria for topics that need to be in the plan; the auditor can examine the plan to see if it has addressed those topics. An entity that doesn’t include one of the topics would potentially be non-compliant.

But the fly in the ointment here is the level of detail that is provided in the criteria. For example, if a criterion just reads “user authentication”, there are lots of ways that could be addressed, including just requiring an unchanging four-character password. So the criterion might be “strong user authentication”, and there might be a separate guidance document providing examples of what this would be. With this, it would be very hard for an entity to argue that a four-character password is passable.

Of course, at the same time you don’t want to provide so much detail in the criteria that they become prescriptive requirements of their own. If you say the plan must cover “strong authentication using passwords that contain letters, numerals and special characters and are changed every 30 days”, you have definitely just done that. And as my original post in this series discussed, once you provide that level of specificity in the criteria, you’re virtually inviting the auditors to require that you document every instance of when you have complied with those criteria. And then you lose the whole benefit of having the requirement be plan-based in the first place.
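To see how quickly a detailed criterion turns into a mechanical check, here is a minimal sketch (the function name and rules below are my own, purely for illustration; they aren’t drawn from any CIP requirement). The prescriptive wording above reduces directly to per-password tests that an auditor could demand instance-by-instance evidence for:

```python
import re
from datetime import datetime, timedelta

# Hypothetical sketch: the prescriptive criterion translates directly
# into mechanically checkable rules - which is exactly what makes it
# prescriptive rather than objectives-based.
def meets_prescriptive_criterion(password: str, last_changed: datetime) -> bool:
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_numeral = re.search(r"[0-9]", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    changed_recently = datetime.now() - last_changed <= timedelta(days=30)
    return has_letter and has_numeral and has_special and changed_recently

# The unchanging four-character password fails the prescriptive version,
# but the vague criterion "user authentication" would have allowed it.
print(meets_prescriptive_criterion("abc1", datetime.now()))         # fails: no special character
print(meets_prescriptive_criterion("S3cure!Pass", datetime.now()))  # passes all four rules
```

Note that once the criterion is this checkable, the auditor has every reason to ask for a record of every password and every change date — the paperwork burden the plan-based approach was meant to avoid.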

I concluded my last post by looking at CIP-014, which is definitely a plan-based standard. I went through each of the requirements and said whether I thought it was auditable – and they all were. The requirement for the actual plan (called the physical security plan, since CIP-014 is a standard for physical security of certain key substations) is R5, which lists four criteria that must be addressed in the plan:

5.1. Resiliency or security measures designed collectively to deter, detect, delay, assess, communicate, and respond to potential physical threats and vulnerabilities identified during the evaluation conducted in Requirement R4. 
5.2. Law enforcement contact and coordination information.
5.3. A timeline for executing the physical security enhancements and modifications specified in the physical security plan. 
5.4. Provisions to evaluate evolving physical threats, and their corresponding security measures, to the Transmission station(s), Transmission substation(s), or primary control center(s).

I like these criteria because they are quite comprehensive – you could say they’re mini-plans in themselves. 5.1 essentially says that the physical security plan needs to tell how the entity will mitigate all of the threats and vulnerabilities identified in the assessment required by requirement R4. 5.2 says the plan needs to cover law enforcement coordination. 5.3 says the plan must include a timeline for executing all this. 5.4 says there must be provisions in the plan to evaluate new or changing physical security threats, as well as new or changing security measures that are available to address those threats.

I feel these criteria provide enough specificity to be auditable, but certainly not enough so that these would suddenly become four prescriptive requirements that would need to be audited as such – with documentation of every particular instance, available for the auditors on demand.

But this post is really about CIP-013. Is that auditable? Actually, this is a question I asked in a previous post. Now I want to ask the question again, in light of the framework for answering the question that I laid out in my last post.

CIP-013 R1 is the requirement to develop a supply chain cyber security risk management plan. Here is the entire requirement:

R1. Each Responsible Entity shall develop one or more documented supply chain cyber security risk management plan(s) for high and medium impact BES Cyber Systems. The plan(s) shall include: 

1.1. One or more process(es) used in planning for the procurement of BES Cyber Systems to identify and assess cyber security risk(s) to the Bulk Electric System from vendor products or services resulting from: (i) procuring and installing vendor equipment and software; and (ii) transitions from one vendor(s) to another vendor(s).

1.2. One or more process(es) used in procuring BES Cyber Systems that address the following, as applicable:

1.2.1. Notification by the vendor of vendor-identified incidents related to the products or services provided to the Responsible Entity that pose cyber security risk to the Responsible Entity;
1.2.2. Coordination of responses to vendor-identified incidents related to the products or services provided to the Responsible Entity that pose cyber security risk to the Responsible Entity;
1.2.3. Notification by vendors when remote or onsite access should no longer be granted to vendor representatives;
1.2.4. Disclosure by vendors of known vulnerabilities related to the products or services provided to the Responsible Entity; 
1.2.5. Verification of software integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System; and
1.2.6. Coordination of controls for (i) vendor-initiated Interactive Remote Access, and (ii) system-to-system remote access with a vendor(s).

Let’s look at this. R1 itself simply says the entity has to develop the plan. R1.1 describes the objective of the plan. It is to identify and assess[i] three types of risks:

  • Risks from procuring vendor equipment and software;
  • Risks from installing vendor equipment and software; and
  • Risks from transitions between vendors.

Of course, it’s important to state the objective of the plan, but these aren’t criteria for what should be included in the plan.

R1.2, however, lists six criteria that have to be in the plan. These are exactly the specific items that FERC ordered in Order 829 in 2016. They are certainly specific, and in my opinion they should all be auditable. However, does this mean that R1 itself is auditable? No, I don’t think so. This is because the three objectives of the plan (from R1.1) are much more comprehensive than just these six items. In fact, the six items all relate to just one of the three plan objectives: risks from procuring vendor equipment and software.

So is CIP-013 R1 auditable, meaning that it provides specific criteria that need to be addressed in the plan? Let’s look at each of the three plan objectives in R1:

  • For procurement risks, there are six specific criteria. These are auditable, but there are many other procurement risks which also should be addressed[ii] in the plan. For example, there’s the risk that the vendor product, when shipped to the entity, will contain malware right out of the box. There’s the risk that information about the configuration of devices which have been installed on your site will be stolen from the vendor (which has happened in North America, more than once). There’s the risk that a vendor employee will inadvertently disclose information about your equipment, or its configuration, on social media. These should all be at least considered in the entity’s plan.
  • For installation risks, there are no criteria.
  • For risks of transitions between vendors, there are no criteria.

Now we can ask the question whether CIP-013 R1 is auditable. Out of three risk areas that are supposed to be addressed in the plan, there are specific criteria provided for only one of those areas, and not even comprehensively for that area. Overall, I would say R1 isn’t auditable.

How about CIP-013 R2? That requirement reads (in its entirety) “Each Responsible Entity shall implement its supply chain cyber security risk management plan(s) specified in Requirement R1.” What would be required to make R2 auditable? Let’s go back to the last post, where I discussed CIP-014. Requirement 5 of that standard mandates that the entity “develop and implement” the physical security plan. Moreover, it says that the plan should be “executed according to the timeline specified in the physical security plan(s).” In other words, if in CIP-013 the supply chain cyber security risk management plan from R1 provides a specific timeline, then the entity’s implementation of that plan in R2 should be auditable.

But there is a big difference between CIP-014 and CIP-013 in this regard. One of the criteria for what should be covered in the plan in CIP-014 is that there should be “A timeline for executing the physical security enhancements and modifications specified in the physical security plan.” (CIP-014 R5.3) So it’s almost certain that the entity will have a timeline in their plan. In CIP-013, that criterion isn’t there. One would hope the supply chain plan from R1 would include a timeline, but it might not.

More importantly, since there aren’t criteria covering most of what should be in the supply chain plan, the plan itself for the most part isn’t auditable, as we’ve just said. It’s not at all clear to me that the implementation of the plan would be auditable if the plan itself wasn’t. So I’m going to say I’m undecided on whether CIP-013 R2 is auditable or not.

How about R3? It reads “Each Responsible Entity shall review and obtain CIP Senior Manager or delegate approval of its supply chain cyber security risk management plan(s) specified in Requirement R1 at least once every 15 calendar months.” Is this auditable? Not in any meaningful sense. Yes, the auditor can verify that the right person signed off on the plan (whether revised or not) after 15 months. But that surely isn’t why this requirement is here.

I wrote a post about this requirement this summer. I pointed out there that, while the requirement itself seems close to meaningless (it merely requires a review of the plan, without providing any criteria for that review or requiring the entity to revise the plan if they find something needs to be added or changed), it was actually different in the first draft of CIP-013, where it read (at the time it was R2, not R3):

R2. Each Responsible Entity shall review and update, as necessary, its supply chain cyber security risk management plan(s) specified in Requirement R1 at least once every 15 calendar months, which shall include:

2.1. Evaluation of revisions, if any, to address applicable new supply chain security risks and mitigation measures; and
2.2. Obtaining CIP Senior Manager or delegate approval.

Here you see there are actual criteria for the review of the plan. Specifically, the entity needs to address new supply chain security risks, as well as new mitigation measures for those risks (in fact, this language is similar to the language in CIP-014 R5.4, which I quoted above. In CIP-014’s case, this language became one of the criteria for what needs to be in the plan, while in CIP-013’s case – at least in the first draft – it appeared as a separate requirement).

But the fact is that this language wasn’t in the final version of CIP-013-1, so it doesn’t officially have any status. Nor is this mitigated by the fact that the Implementation Guidance prepared by the SDT provides some good criteria for review of the plan (page 9); as we all know by now, guidance of any sort isn’t auditable.

So how auditable is CIP-013? Let’s look at the box score:

  1. R1 is only partially auditable.
  2. It isn’t clear whether R2 is auditable or not.
  3. R3 isn’t auditable.

So CIP-013 isn’t very auditable, although it isn’t completely un-auditable (the only part of CIP-013 that is definitely auditable is R1.2). I could stop here and leave you with the impression that I don’t think very much of CIP-013. However, I have said that I think CIP-013 is a good standard. In fact, I’ve said that it comes closest to what I would call the ideal of any of the CIP standards. So what gives? Am I just being inconsistent?

I’m not being inconsistent because I no longer think that auditability is how the CIP standards should be judged – at least, not how the CIP standards and requirements that are based on plans should be judged. And why do I think this? This post has already gotten too long. I’ll discuss this in the next one.


The views and opinions expressed here are my own, and do not reflect those of any organization I work with. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

[i] I just noticed for the first time that R1 just says the plan has to have processes to identify and assess risks, but not to mitigate them! R2 says the entity has to implement the plan, but if the plan is just to identify and assess risks, that means that’s all the entity has to do when they implement it! Obviously, I don’t think the SDT would have bothered to list the risks in R1 but only require the entity to identify and assess those risks, not mitigate them. If anybody has an idea about this – maybe I’m missing something important – I’d appreciate your letting me know.

[ii] When I say addressed, I’m not necessarily saying the entity will have to implement policies, procedures or technologies to mitigate all – or even most of - these risks. Since CIP-013 is risk-based, the entity needs to rank all the risks it faces (to be more exact, all of the threats it faces, ranked by the degree of risk of each), and determine the amount of resources it will devote to each one (with the understanding that the highest risks receive the most resources). The lower-risk threats will receive little or no attention. But the entity is required to at least look at all supply chain risks initially.
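The ranking exercise described in this note can be pictured with a small sketch (the threat names and scores below are invented for illustration; nothing here comes from CIP-013 itself). The point is simply that every threat is at least considered, while mitigation resources flow to the highest-risk ones:

```python
# Illustrative sketch only - threat names and risk scores are hypothetical.
supply_chain_threats = {
    "malware pre-installed in shipped product": 0.9,
    "vendor breach exposing device configurations": 0.8,
    "vendor employee disclosure on social media": 0.3,
}

# Rank threats from highest to lowest risk, as the entity's plan would.
ranked = sorted(supply_chain_threats.items(), key=lambda kv: kv[1], reverse=True)

# Devote mitigation resources roughly in proportion to risk; the lowest-risk
# threats may receive little or no attention, but all were looked at.
total = sum(score for _, score in ranked)
for threat, score in ranked:
    print(f"{threat}: {score / total:.0%} of mitigation effort")
```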

Thursday, December 7, 2017

How do you Audit a “Plan”?


My previous post tried to explain why almost every new CIP standard or requirement since CIP v5 has been based on a “plan”. Clearly, these standards/requirements are the wave of the future with NERC CIP. In this post, I will discuss how – or rather, if – such standards and requirements can be audited by NERC.

For clarity, I’m going to start with a simple hypothetical example. Let’s say we develop a new standard for cyber security of programmable thermostats. We want it to be objectives-based (non-prescriptive) and we’re purists, so we just write two requirements:

R1: Develop a plan for securing your house’s programmable thermostat.
R2: Implement that plan.

Now suppose that, several years later, an auditor shows up to audit your compliance with this standard. Starting with R1, he or she asks to see your plan, and you show it to them. The first thing they will look for is whether you actually have a plan, as opposed to say a bunch of random words generated to fill up a few pages. If you have the latter, the auditor will most likely issue you a Potential Non-Compliance finding on the spot.

Now the hard part comes in. Suppose the auditor reads through your plan and decides it really isn’t a good one? For example, you may have completely omitted the topic of how you will prevent unauthorized individuals (either in the house or coming in through the Internet) from accessing the thermostat. The auditor believes (with good reason) that this is a very serious omission, perhaps a fatal one. But what are the auditor’s options? The requirement just says you have to have a plan, not a good one; it doesn’t say anything about what should be in the plan. In this case, I would say that the auditor has no option but to pass you. They might also offer you some friendly advice (which translates into an Area of Concern in NERC-land) about correcting this omission. They will also recommend that you correct it before the next audit; however, since this is only an Area of Concern, you can’t be held in violation even if the next audit shows you still haven’t corrected it.

The same consideration will hold true when the auditor considers R2. If you have implemented what you said you would implement in your plan, they have to pass you. It doesn’t matter that your thermostat may still be riddled with security holes; as long as you have done what you said you would do, you have to pass. I think you’ll agree with me that the concept of “audit”, in the case of a standard like the one I just described, is close to meaningless; the standard provides no criteria by which to judge a good from a bad plan. Whoever drafted this standard may have felt that there shouldn’t be any constraints placed on your plan; but in doing this, they made the standard effectively un-auditable.

So let’s say a new drafting team drafts version 2 of this standard. As part of their preparation, they ask auditors about their experience auditing version 1. The auditors point out that they really can’t audit it in any meaningful sense; the requirement must list some sort of criteria that need to be met by a good plan (not just a separate guidance document, since those are never binding on the auditor or the entity, even if they happen to be called Implementation Guidance).

So now version 2 of the standard reads:

R1: Develop a plan for securing your house’s programmable thermostat. The plan must include:
  • Controls on physical access to the programming interface;
  • Controls on remote access to the programming interface; and
  • Etc.
R2: Implement that plan.

When the auditor comes back the next time, he or she is much more likely to feel good about their job. They will no longer be providing a meaningless “pass” to everybody they audit who has even the most minimal plan. Instead, they will comb through the plan to determine whether it contains each of the bulleted items listed in R1. If it is missing one or more of these, the auditor will probably issue a PNC. They may also order a mitigation plan (enforceable this time), so that you will have to include these in your plan within say one year.

Clearly, there need to be criteria for what should be in the plan. Even more importantly, these criteria need to be listed in the requirement or standard itself, not in a separate guidance document. This is why CIP-010 R4 Attachment 1 (discussed in the previous post) goes into so much detail on what should be included in the plan for TCAs and Removable Media. Each of the items listed needs to be in the plan, or the entity risks a PNC.

But there’s a limit to this. Obviously, general criteria like “Authorize use of Removable Media by both user and location” leave a lot of room for how the entity interprets them. Suppose the entity decides that anybody in the entire IT department should have authorization to use removable media in the Control Center, and the auditor thinks that is far too broad. Just as was the case in version 1, the auditor will have to swallow his or her objection and give the entity a pass, while most likely issuing an Area of Concern. There’s nothing in “Authorize use of Removable Media by both user and location” that would rule out what the entity has done.

For the next version of the standard, the auditor might advise the drafting team that it would be better to add the words “based on need”, or something like that. And maybe another auditor would decide the next version should really include a provision for removing access after the person has left the company, perhaps within 24 hours or even instantaneously. As a result, this standard could really become a set of backdoor prescriptive requirements. There always needs to be a balance struck between auditability and prescriptivity.

But what if these auditors decide they don’t want to wait for the SDT to change the requirement, and they simply start acting as if these criteria were in it now? What if an auditor starts issuing PNCs for entities that don’t within 24 hours revoke use of Removable Media by users who have changed jobs or left the company? Clearly, at that point they have overstepped their authority. They shouldn’t be allowed to do that.

How does CIP-014 fit into this?
A recent post described the experiences of two entities with CIP-014 audits. How do those experiences relate to the imaginary standard I’ve just described? Both of these entities fell afoul of CIP-014 auditors by making a similar mistake: They noted that CIP-014 R1, R4 and R5 are completely focused on threats to, and vulnerabilities of, the substation as a whole. These requirements never mention threats to individual elements like transformers.

Both of these entities built their threat and vulnerability assessment (required by R4) on the assumption that the only threats and vulnerabilities they needed to consider were those to the whole substation. And the one entity that had developed a physical security plan (required by R5) had built that on the same assumption. However, both of them were reprimanded by their auditors for not considering threats to their individual transformers. Both were also initially threatened with PNCs for not doing this. In one of the cases, the entity did receive a PNC.

As I pointed out in the post, I think the auditors were right in saying that the entities should consider threats to individual transformers. But this is clearly not in any of the requirements. If the auditors want to admonish the entities, they should issue an Area of Concern and ask the entity to fix the omission in their plan. I would be very surprised if the PNC that was issued survives to become an actual Violation.

Aside from this, how auditable is CIP-014? Let’s look at the individual requirements. R1 requires a risk assessment, which establishes whether or not a particular substation is in scope for this requirement. The criterion for deciding this is relatively clear: The substation has to be such that “if rendered inoperable or damaged could result in instability, uncontrolled separation, or Cascading within an Interconnection.” I believe the risk assessment is auditable.

I’ll skip over R2 and R3, both of which seem completely auditable to me. Let’s look at R4. As I’ve just said, R4 requires an assessment of threats and vulnerabilities to the substation. As I already pointed out, the requirement just talks about threats and vulnerabilities to the whole substation, so any attempt by an auditor to issue a PNC to an entity for not including transformers (or other elements like buses) in their assessment should be met with a firm “Please show me where in the requirement it says this.”

But setting this aside, how auditable is this requirement? Fortunately, the requirement lists three criteria that must be considered in the assessment: the unique characteristics of the substation, history of attacks on similar facilities, and intelligence received from the E-ISAC and other organizations. So the assessment can be audited to make sure these three criteria were considered. All three are fairly broad, so a lot of considerations would fall into them. But any other criteria that the auditor might like to see in the assessment would have to be dealt with as an Area of Concern.

R5 requires the entity to develop and implement a physical security plan to mitigate the threats and vulnerabilities. There are four specific (yet comprehensive) criteria for what needs to be in the plan, including a timeline for completing it. This seems to me to make the plan auditable.

How about implementation of the plan? Is that auditable? The requirement says that the plan should be “executed according to the timeline specified in the physical security plan.” In my opinion, this makes implementation auditable, although as I noted in this post, one region’s auditors were trying to go beyond that and make up their own criteria to judge how well the entity was making progress on implementing its plan. Of course, it’s certainly their prerogative to look at other criteria than simply the timeline that the entity listed in their plan[i], but anything beyond that timeline should strictly be a matter for an Area of Concern.

Overall, I think CIP-014 is quite auditable, in the sense that there are specific criteria that make it possible for auditors to make specific determinations of whether the entity has complied with the requirements. Were this one of the prescriptive CIP standards, that’s all that would matter. However, with objectives-based standards like CIP-014 and CIP-013 (and CIP-012, now in development), I think there is a bigger consideration than auditability: it’s partnership. How can NERC and the regions partner with the entities to develop and implement good plans, rather than be locked in an eternal hands-off stance that benefits nobody, especially not those of us who rely on the electric grid for our daily existence? I will return to this theme in a later post.

My next post will discuss whether CIP-013 is auditable. I suspect that answer may be different from what it was for CIP-014.

Note from Tom: The follow-on to this post is here. In starting to write the follow-on, I realized my discussion of CIP-014 in this post was incomplete, so I continued it in that post.




[i] One lesson from this is that, if an entity is falling behind the implementation schedule listed in its physical security plan, it should amend the plan to reflect the new schedule, and document why this change was made.

Wednesday, December 6, 2017

What’s the Deal with all these “Plans”, anyway?


You may have noticed that lately I have been writing a lot about how CIP-013 and CIP-014 have been and will be audited. And guess what? I see problems ahead for both standards. The two standards differ in important ways, but there is one common element to both of them: They require that the entity develop and implement a “plan” to achieve the objective of the standard.

These two standards are the only CIP standards that require a plan. However, there are two CIP requirements that also mandate the entity to develop and implement a plan. CIP-003 R2 requires that an entity that owns Low impact BES Cyber Systems shall “implement one or more documented cyber security plan(s) for its low impact BES Cyber Systems that include the sections in Attachment 1.” And CIP-010 R4 says an entity with High or Medium impact BCS “shall implement, except under CIP Exceptional Circumstances, one or more documented plan(s) for Transient Cyber Assets and Removable Media that include the sections in Attachment 1.”

Also, CIP-011 R1 mandates that “Each Responsible Entity shall implement one or more documented information protection program(s) that collectively includes each of the applicable requirement parts in CIP-011-2 Table R1 – Information Protection.” I don’t see much difference between how the word “program” is used in this requirement and how “plan” is used in the other two requirements, as well as how it is used in CIP-013 and -014. So I’m also going to anoint CIP-011 R1 a “plan” requirement.

Why has there been this proliferation of “plan” requirements? And what is common to them all? The most important common trait of all of these standards and requirements is that they’re objectives-based (I used to call them “non-prescriptive”. However, I think a non-prescriptive requirement inherently has to be objectives-based, and vice versa. So I will use the terms interchangeably, even though their dictionary definitions would be different).

Just to make sure everybody understands what I’m talking about, an objectives-based requirement is one that simply states an objective and allows the entity complying with it to determine the best means to achieve that objective. The entity is then audited on whether or not they have achieved the objective. This contrasts with a prescriptive requirement, which prescribes a certain set of steps that must be taken (often within a certain timeframe). The entity is audited on whether they have executed that set of steps by the required times; they aren’t judged on whether they have achieved some specific objective (in fact, IMHO the “objective” of a prescriptive requirement can only be truthfully described as the set of steps prescribed in the requirement, even though the requirement may well have a stated objective).

Why does CIP sometimes require a plan?
But where does “plan” come in with all this? If you want to write an objectives-based requirement or standard, do you always have to write it so that it requires a plan? For that matter, do all objectives-based requirements require a plan?

I can confidently answer no to both of these questions. In fact, there are currently objectives-based requirements in the CIP standards that don’t require a plan. For example, CIP-007 R3, anti-malware, is often my poster child for an objectives-based requirement; it simply requires that the entity “Deploy method(s) to deter, detect, or prevent malicious code.” This is about as non-prescriptive as you can get. But CIP-007 R3 doesn’t mandate a “plan” or even a “program”.[i]

Given that mere logic doesn’t require that an objectives-based requirement be based on a plan, why do these five standards or requirements mandate a plan? Was it that a particular Standards Drafting Team went “plan”-crazy and decided to make everything a plan? That definitely isn’t the case, since these five standards or requirements were drafted by four different teams![ii] There must be some reason why these four different groups of people all thought that it was preferable for their objectives-based requirements (or standards) to mandate that the entity have a plan, not just that they should achieve a particular objective.

I attended some meetings of all four of the drafting teams in question, and I can’t remember any discussion about this question (although I’m sure there was one in each case). But I think I can guess what may have been the reason why they all did this: Each of the objectives of these five standards or requirements is pretty broad: respectively, physical protection of key substations (CIP-014), supply chain security (CIP-013), electronic and physical access control for Low impact assets (CIP-003 R2), mitigation of the threat posed by Transient Cyber Assets and Removable Media (CIP-010 R4), and BCS information protection (CIP-011 R1).

When you’re drafting a requirement to achieve a broad objective, you want to provide some guidance on what the entity should do to achieve that objective, without listing prescriptive requirements which then become the objective themselves. The best example of this is CIP-010 R4, which requires a plan for securely managing Transient Cyber Assets and Removable Media used with Medium and High impact BCS. However, it doesn’t just require a plan; it also lists in Attachment 1 what needs to be in that plan.

Attachment 1 lists three types of devices that must be in scope for the plan, as well as between two and five “topics” (my term) that must be addressed for each of the three types. For example, under the Removable Media device type, there are two topics: Removable Media Authorization and Malicious Code Mitigation. Each topic lists two criteria that must be included in the plan. For example, under Removable Media Authorization, the entity is required to authorize use of Removable Media (thumb drives) by both user and location. Under Malicious Code Mitigation, the user is required to scan the device for malicious code before using it with a Medium or High impact BES Cyber System.
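As a rough illustration of the “authorize by user and location” topic (the names and table below are hypothetical, not from the standard), the plan effectively pairs each authorized user with an authorized location:

```python
# Hypothetical sketch of the "authorize by user and location" topic from
# CIP-010 R4 Attachment 1. The users and locations are invented; the point
# is that authorization is granted per (user, location) pair, not per user.
authorized_media_use = {
    ("jsmith", "Control Center"),
    ("adoe", "Substation 12"),
}

def removable_media_allowed(user: str, location: str) -> bool:
    return (user, location) in authorized_media_use

print(removable_media_allowed("jsmith", "Control Center"))  # authorized pair
print(removable_media_allowed("jsmith", "Substation 12"))   # same user, unauthorized location
```

The plan only has to describe and maintain this authorization scheme; it is when “authorize by user and location” becomes a direct requirement that the entity must also log every individual use of removable media as compliance evidence.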

Note that I use the word “required” here, because that is the case. You might ask “Since these are requirements anyway, why does it matter whether or not there’s a plan involved?” However, in the NERC world there is a very important difference between requiring that certain elements be in a plan (as is the case with this requirement) and actually requiring the entity to perform those activities. If the requirement is for a plan, then the auditors will have to audit the plan itself (and also how well it has been implemented). But if the plan elements discussed above (such as “authorize by user and location”) are direct requirements in themselves, then the entity has to maintain records of every single use of removable media with High or Medium impact BCS, to be able to document that they were compliant with these two requirements. That would be a huge paperwork nightmare, and would do very little to advance the cause of cyber security.

The moral of this story is that it seems clear (to me anyway) that “plans” are here to stay in CIP. If you look at the major standards and requirements that have been developed since CIP v5 (including the two standards and three requirements listed above and in end note ii below), all of them require the entity to develop a plan to achieve a particular objective, rather than to achieve the objective itself. And this isn’t an accident, since in the latter case there would be a huge paperwork burden placed on the entity for maintaining compliance evidence.

In my next post, I will delve into the question of how “plan”-type requirements can be audited.


The views and opinions expressed here are my own, and do not reflect those of any organization I work with. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.


[i] The “preamble” to CIP-007 R3 reads “Each Responsible Entity shall implement one or more documented process(es) that collectively include each of the applicable requirement parts in CIP-007-6 Table R3 – Malicious Code Prevention.” You might ask – and if you might not, I’ll ask it for you! – “How would substituting the word ‘program’ or ‘plan’ for ‘processes’ change the meaning of this sentence? If it wouldn’t change the meaning, then you should include this as one of your ‘plan’ requirements, right?”

You’re absolutely right when you say this, but I do think there is a real difference between a process on one hand and a plan or program on the other. A plan or program is inherently a document that lays out an objective and describes how that objective will be achieved. The document can describe different means of achieving the objective, and perhaps conditions that would dictate when one or the other means would be preferred. But a process, in my mind, is a listing of a set of required steps. The process itself doesn’t prescribe a goal, or discuss alternate means of achieving that goal.

Now that I think of it, CIP-007 R3 would probably be better off if it did require a plan or program. The objective of malicious code prevention can be achieved in a number of ways; I doubt there’s any fixed methodology (even with a whole lot of “If…Then…Else” branches) that would be able to encompass all those ways. In any case, in CIP-007 R3 the entity won’t be audited on the basis of a plan or program, but on whether they’ve achieved the objective of mitigating the threat of malicious code. And this post is about how a plan/program can be audited.

[ii] CIP-014 was developed by the team expressly convened to address FERC’s order for substation physical security. CIP-013’s team was assembled to address FERC’s supply chain security management order. CIP-003-6 R2 and CIP-010 R4 were developed by the “CIP v6” drafting team, and CIP-011 R1 was developed by the “CIP v5” team.

Friday, December 1, 2017

A Third Lesson from CIP-014


I recently wrote two posts – here and here – on lessons that can be learned from CIP-014, the standard for physical security of critical substations, which came into effect two years ago. I’m interested in CIP-014 because it was the first objectives-based CIP standard; it was followed by two other such standards, CIP-013 and now CIP-012. There are also at least three objectives-based requirements: CIP-003 R2, CIP-007 R3 and CIP-010 R4. In fact, all of the standards and requirements that NERC has developed since CIP version 5 have been objectives-based (mostly because FERC has made clear in its orders for new standards that it wants them to be objectives-based; I am quite sure FERC will require the same of all new standards going forward). Taking account of these lessons is very helpful, not only for entities that have to comply with CIP-014, but also for all entities (a much larger group) that have to comply with CIP-013, the new supply chain security standard.

In the previous two posts, I discussed two things I learned from talking with a CIP physical security compliance specialist at a large utility, both based on experiences they had while getting ready to comply with CIP-014. However, I recently attended a meeting of one of the NERC regional entities and talked with two entities that had already been audited on CIP-014 and had some interesting experiences in their audits. In this post I’ll discuss an important fact I learned from those two conversations (without identifying the two entities, of course). And I’ll draw on other things I learned from these conversations in an upcoming post on the question of how standards and requirements that are based on the entity’s developing and implementing a plan can be audited (actually, on whether they can be audited in any meaningful sense. You’ll have to stay in suspense on this point until I write the post).

You can download CIP-014 and read about the standard, which has six requirements. What’s most important for this post are requirements 4-6, which seem to be where a number of NERC entities are running into trouble. Requirements 1-3 are about determining which substations (and control centers) are in scope for the standard, but R4-R6 cover:

  • R4: For the substations and control centers that are in scope, conduct an assessment of those facilities’ “potential threats and vulnerabilities” to physical attack;
  • R5: For each facility in scope, develop and implement a physical security plan that “covers” the substations and control centers in scope; and
  • R6: Have a qualified third party validate both the assessment conducted in R4 and the plan developed in R5. The third party may recommend changes in either document; the entity must change the plan to reflect those recommendations, or document why it did not. And since the plan has to be implemented, these changes will also need to be implemented.

Both entities reported a serious issue with the auditors based on the following:

  1. CIP-014 R1 requires the entity to conduct a risk assessment of all of their “Medium impact” substations and perform an analysis to determine which of those “if rendered inoperable or damaged could result in instability, uncontrolled separation, or cascading within an Interconnection.” Note that this is a holistic criterion: It looks at what will happen if the entire substation is taken out, not if any particular Facility within the substation is rendered inoperable (e.g. individual transformers or buses).
  2. In R4, the entity is required to perform an “evaluation of the potential threats and vulnerabilities of a physical attack to each of their respective Transmission substations…” Note again that this is a holistic criterion. The attacks in question are ones on the substation itself, not on any individual Facilities in the substation.
  3. R5 requires the entity to develop and implement “documented physical security plan(s) that covers their respective Transmission stations…” Once again, this is a holistic requirement; the plan is for the whole substation, not any of the Facilities found at it.

You may suspect where I’m going with this. It seems pretty clear that the requirements as written are only addressing the substation as a whole. Specifically, the evaluation of threats and vulnerabilities and the physical security plan seem to only apply to the entire substation. But guess what the auditors are looking for? You’re right – they’re looking for the security plan to address protection of the Facilities at the substation, not just the substation as a whole!

In fact, both the entities I talked to said they were specifically called out on the fact that their R4 threat and vulnerability assessments and their R5 physical security plans didn’t address the transformers and other Facilities (like buses) at the substations. One entity received an Area of Concern because of this, while the other received an actual Potential Non-Compliance finding. Yet, as you’ve just seen, nowhere in the CIP-014 requirements is there any mention of anything but the substation as a whole!

Now, I’m certainly ready to admit that the auditors weren’t unreasonable in telling both entities that they need to address this issue. After all, the snipers who carried out the Metcalf substation attack, which prompted FERC to order this standard, didn’t destroy the substation; they just attacked a number of the transformers. But they were almost able to have a major impact on the grid in that area. So it’s not unreasonable to expect that NERC entities that own critical substations should take some steps to protect the individual Facilities there.

It may not be unreasonable, but it’s also not required by the language of CIP-014! So this is one area where the auditors need to simply issue an Area of Concern and stress that this is something the entity should strongly consider doing for grid security, even though it’s not in any way required.

And why isn’t it required? I think the drafting team must have simply made a mistake.[i] After all, FERC only gave NERC 90 days to draft and approve the standard – a mere blink of an eye in NERC-dom. 


The views and opinions expressed here are my own, and do not reflect those of any organization I work with. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

[i] I also heard about another mistake the SDT made. The risk assessment required in R1 is specifically mandated (in R1.1) to be re-performed every 30 months. However, the threat and vulnerability assessment required in R4 doesn’t have to be re-performed at all! Yet I’ve heard auditors are asking utilities whether they will re-perform that assessment every 30 months. In this case also, the auditors should make clear that, while it isn’t required by the wording of the standard, it certainly makes sense to re-perform both assessments every 30 months, not just one of them.

Monday, November 27, 2017

Breaking the New Threat Logjam


My previous post, as well as a post from September, pointed to probably the biggest problem with the NERC CIP standards today: To address a new cyber threat through CIP, NERC has to go through its standards development process. And the time from when a new standard or requirement is requested (usually by FERC) to when it comes into effect is almost always measured in years, and often many years (in the example I used in the previous post, it was between 5 ½ and 7 ½ years, depending on how you measure it).

There are two primary consequences of this:

  1. There are a number of important cyber threats – phishing, ransomware, “not-Petya”-type attacks, cloud-based threats, etc. – that aren’t currently addressed in CIP at all; moreover, there is no serious effort now to incorporate these into CIP.
  2. A great weariness with the process of developing new CIP standards, and trying to interpret them once developed, seems to have settled on the NERC membership since the CIP version 5 implementation experience. It is highly unlikely that any new cyber threats will be addressed in CIP going forward, unless ordered by FERC.

Of course, NERC entities are, for the most part, still investing a lot of resources in addressing new cyber threats outside of the CIP compliance process. But as I’ve pointed out multiple times, including in my last post, the fact that some threats must be addressed in order to comply with NERC CIP and are subject to potentially huge fines (this includes threats like malware, firewall misconfiguration, lack of proper network segmentation, etc.), while others are strictly optional, means there is inevitably a tendency to overfund controls against threats that are part of CIP, and underfund controls against threats that aren’t part of CIP.[i] And this discrepancy will only get much larger, since new threats are appearing more rapidly all the time.

Yet, as I’ve also pointed out, the industry needs mandatory cyber security standards, since it is only by having those in place that cyber security efforts will be well funded. How do we break this logjam, in which the current CIP standards suck up a greatly – and increasingly – disproportionate share of the resources available for cyber security, while still having mandatory standards?

The answer to this question flows almost directly from what I’ve just said: A new CIP standards framework that will address this problem would need to replicate, as closely as possible, the process the entity would naturally follow on its own if it a) didn’t have any mandatory cyber standards to comply with, but b) still had the same budget for cyber security that it has in the presence of the mandatory CIP standards.

And what would that process be? It would be one in which the entity

  1. Ranks all of the cyber threats it faces by their degree and probability of impact – in other words, by the degree of risk that each threat poses;
  2. Determines approximately what steps are required to mitigate each threat;
  3. Determines the degree of mitigation that would be achieved by taking those steps;[ii] and
  4. Allocates its cyber security budget so that every threat above a certain minimum risk level is mitigated to some degree, and the riskier the threat, the more it is mitigated.
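As a rough illustration of these four steps, here is a sketch in Python. Everything in it is invented for the example – the threat names, the impact and probability figures, and the minimum risk level – and scoring risk as impact times probability is just one simple way to do step 1:

```python
# Sketch of the four-step process: score each threat's risk as
# impact x probability, drop threats below a minimum risk level, and
# divide the cyber security budget in proportion to the remaining risks.
# All names and numbers here are invented for illustration.

def allocate_budget(threats, budget, min_risk=1.0):
    """threats maps name -> (impact, probability); returns name -> dollars."""
    risks = {name: impact * prob for name, (impact, prob) in threats.items()}
    in_scope = {name: r for name, r in risks.items() if r >= min_risk}
    total_risk = sum(in_scope.values())
    return {name: budget * r / total_risk for name, r in in_scope.items()}

threats = {
    "phishing": (8, 0.9),                  # high impact, very likely
    "ransomware": (9, 0.5),
    "removable media malware": (6, 0.4),
    "exotic hardware implant": (9, 0.05),  # falls below the risk floor
}
allocation = allocate_budget(threats, budget=1_000_000)
```

The riskier the threat, the larger its share of the budget, while threats below the floor get nothing – which is exactly what step 4 describes.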

What kind of standard would be required to implement this process? I can tell you right now that the current CIP standards won’t work! The problem is that some of the current CIP requirements are excessively prescriptive. And even though a small number of the requirements aren’t prescriptive (and I consider objectives-based requirements like CIP-007 R3 to be the opposite of prescriptive requirements like CIP-007 R2), the NERC compliance and enforcement process (embodied in CMEP and the Rules of Procedure) is itself very prescriptive. Both the CIP standards and the compliance/enforcement process will ultimately need to be changed in order for what I’ll outline below to work.

But let’s say I were given the power tomorrow to put in place what I think is needed; what would I do? I’m very glad you asked that question. First, I would scrap the existing CIP standards and put in place what is in effect a single requirement[iii]: “On a risk-adjusted basis, address the cyber security threats on the current list.” And where does this “current list” come from? I’m also very glad you asked that question. When this new standard is drafted, the drafting team will draw up an initial list of what they consider the most important threats.

However, this list would have to be maintained on an ongoing basis. There will need to be some group designated to meet regularly (I would think quarterly would be appropriate) and do the following:

  1. Review current cyber threats and determine which ones should be added to the list.
  2. Decide if any threats currently on the list should be removed.
  3. For each threat on the list, determine a set of “criteria” that should be addressed in the plan the entity develops. I hope to have a post out very soon on what a “plan” is and how it could be audited in my desired scheme of things, but for the moment I’ll just point out that CIP-003 R2, CIP-010 R4, CIP-013 and CIP-014 all speak of a plan. The criteria are topics that must be addressed in the plan, regarding each threat. For example, for the threat of malware infection from transient electronic devices, the criteria could include items such as “The plan must address devices owned by third parties as well as by the entity”; “The plan must address how access to transient electronic devices will be managed”; etc.
  4. Develop guidance on how each threat can be mitigated, and update it in the light of real-world experience addressing these threats (and not just experience of the electric power industry, but of other industries as well. After all, almost none of the threats on the list will be unique to electric power). This is probably the most important task that this group will be faced with, and it is certainly the one that will take the most effort.
  5. Develop written materials that will enable smaller, less-sophisticated entities to determine whether and how a particular threat applies to them, and how much of a risk it actually poses. This is necessary in order to prevent such entities from investing a lot of time and resources toward addressing threats that probably pose very little risk to them.[iv]

Who would comprise the members of this group? It will need to be a diverse group, representing the different types of organizations subject to CIP: investor-owned utilities, Independent Power Producers, Generation and Transmission coops, distribution-only coops, large municipals, small municipals, ISO/RTO’s, US government agencies, etc. And it will need to include representatives of the E-ISAC, since it is their business to constantly identify and evaluate new threats to the electric power industry.[v]

Who would run this group? I’ll say right off the bat that it shouldn’t be run by NERC itself, since this might be perceived as a conflict with NERC’s role as the regulator. Obviously, NERC will continue to be in charge of the CIP standards, but it shouldn’t be in charge of the committee that identifies threats; if it were, the threat list might be seen as somehow the equivalent of a new standard, which it certainly is not.

I could see this group perhaps being organized by the trade associations: EEI, NRECA, APPA and EPSA. Or maybe the Transmission Forum and Generation Forum would get together to organize this group from among their members. I could also see the NERC CIPC doing this, although it would be a big expansion of their mandate and would thus require a large additional time commitment from a significant number of its members.

So why is it important to have this group, and to rewrite CIP so that it simply refers to the current threat list, rather than addressing particular threats directly, as it does now? Because that is what it will take – as far as I can see – to remove the identification of new threats from the standards development process. Instead of taking somewhere between three and eight years to address a new threat in CIP (as is currently the case, given the cumbersomeness of the standards development process), CIP will potentially “address” new threats within a few months of their being identified by this group.

Before I go, I want to point out that I’ve raised this issue before, although in a different context. In this post from August, I brought up the issue (first raised in the previous post) of compliance with CIP-013 R3. That requirement mandates that each NERC entity that is subject to this standard, once every 15 months, review their supply chain cyber security risk management plan to determine whether it adequately accounts for the current supply chain cyber risks, as well as whether it takes account of new developments in mitigation techniques for those risks.

In the previous post, I had wondered whether some new body could be constituted to review new supply chain threats and mitigations, since a lot of NERC entities wouldn’t have the in-house resources to do this review themselves. I suggested that a committee of industry representatives could do this on behalf of the whole industry, although individual entities would be free to remove or add particular threats when they drew up their own list of risks, based on their own unique circumstances. I had concluded that this would never be allowed by the current NERC CMEP.

In the post last August (referenced above), I discussed an email conversation I’d had with an auditor, who said that he didn’t see any obstacle to such a body being put together; it wouldn’t involve any conflict with the current wording of CIP-013 or with CMEP. So I think such a body should be put together. It isn’t technically needed until a year after CIP-013 comes into effect, which probably means around the end of 2020, but I really think this body would be helpful even now, completely divorced from any particular CIP purpose – but simply for the general purpose of raising awareness of current cyber threats among NERC entities. As CIP-013 comes into effect, and as CIP is rewritten in accordance with my suggestions (and I’m absolutely sure this will happen, of course!), then this body could segue into these two roles, as discussed above.


The views and opinions expressed here are my own, and do not reflect those of any organization I work with. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

[i] For a fairly long discussion of why this is the case, see this post.

[ii] This is without a doubt very hard to determine in any sort of scientific way. For example, if you are going to mitigate the threat posed by phishing and you decide that training – including sending out phishing-type emails to see who clicks on them - is the best mitigating step you can take, how can you know how successful it will be in reducing the number of malicious phishing attempts that succeed in getting someone to click on them? Well, you might put this program in place for six months or a year, and monitor statistics like number of outside phishing emails that get clicked on, number of test emails that get clicked on, etc. At that point you would be able to decide whether just continuing the current program will provide enough mitigation long-term; whether it needs to be augmented with an automated anti-phishing tool or some other mitigation method; or whether it’s been totally ineffective and you need to drop it and try something else.

In general, it will be very hard to determine up front how much mitigation a particular control might provide for a particular threat; it will usually have to be an educated guess, which can later be updated as experience (both the entity’s experience and that of its industry peers) allows.
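The before-and-after measurement described in this note reduces to simple arithmetic. The click counts below are invented purely for illustration:

```python
# Sketch of measuring a phishing-training program's effectiveness:
# compare the click rate on test phishing emails before training with
# the rate some months after. All figures are invented.

def click_rate(clicked, sent):
    return clicked / sent

def relative_reduction(before, after):
    """Fraction by which the click rate dropped; 0.5 means cut in half."""
    return (before - after) / before

baseline = click_rate(clicked=120, sent=1000)  # 12% clicked before training
followup = click_rate(clicked=45, sent=1000)   # 4.5% clicked six months later
reduction = relative_reduction(baseline, followup)
```

A number like this is exactly the kind of educated guess, later updated, that the entity can feed back into its mitigation planning: if the reduction plateaus too low, the training program gets augmented or replaced.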

[iii] It isn’t really a single requirement, and there will be more to each requirement than just one sentence. But in principle, what I am proposing isn’t too far from this single sentence. By the way, as I’ve said before, I am working on a book, with two co-authors, that will discuss this idea in much more detail – as well as justify it much more thoroughly – than I ever could in this blog. But the book is still a long way from appearing in print (or electrons), so at the moment this explanation, as well as others that are scattered around my posts from the past year or so, will have to suffice.

[iv] I’m assuming that the larger entities will have the necessary expertise on staff to determine whether particular threats apply to them or not, and also to estimate the risk that each of these poses. But it’s possible that larger entities would need some of this help as well.

[v] However, it’s important to remember that the E-ISAC, at least as currently constituted, only addresses what I would call technical threats. This includes new varieties of malware, new attack vectors, etc. The E-ISAC doesn’t address threats that can only be addressed through procedural means, such as the threat of malware being introduced from transient cyber assets and removable media. Those threats are sometimes addressed in the CIP standards, but increasingly are not, for reasons already discussed in this post.

Sunday, November 26, 2017

FERC’s New NOPR, Part V: The Big Problem


This is the fifth and last in a series of posts on FERC’s October NOPR. The NOPR was issued in response to NERC’s filing of two changes to CIP-003-6 R2, regarding Low impact assets; these changes were both included in CIP-003-7, which was approved by the NERC ballot body and the NERC Board and submitted to FERC early this year.

While saying they intended to approve CIP-003-7, FERC in the NOPR proposed to order two further changes to it; these changes will be incorporated in a new version of CIP-003, most likely called CIP-003-8. I discussed the first of these changes in my last post. This post discusses the second change that FERC is proposing to order, which deals with transient cyber assets and removable media used at Low impact assets. Unlike the first change, which took a long time for me to explain, the second proposed change is fairly easy to explain.

The new requirement for Transient Cyber Assets and removable media used at Low impact assets is found in Section 5 of Attachment 1 to CIP-003-7 R2. There are three parts to Section 5. Section 5.1 lays out the requirement for TCAs that are “managed by the Responsible Entity”. FERC doesn’t seem to have any problem with this part.

However, FERC does have a problem with Section 5.2. This section applies to TCAs “managed by a party other than the Responsible Entity”. It lists a set of actions that the Responsible Entity can take prior to allowing a third party to connect a TCA to a Low impact BES Cyber System (which, of course, also includes simply putting the TCA – typically a laptop – on the same network as the BCS. It doesn’t have to be physically attached to a BCS). The first five of these all start with the word Review – “Review of antivirus update level”, etc. The sixth simply says “Other method(s) to mitigate the introduction of malicious code.”

In Section 39 of the NOPR (pages 24-25), FERC points out that Section 5.2 of Attachment 1, while requiring review of third-party procedures, never requires the Responsible Entity to take any particular action if their review determines that the third party’s procedures aren’t up to snuff. In FERC’s words from Section 39, “Specifically, as noted above, proposed Reliability Standard CIP-003-7 does not explicitly require mitigation of the introduction of malicious code from third-party managed Transient Cyber Assets, even if the responsible entity determines that the third-party’s policies and procedures are inadequate.” They propose to order a change to CIP-003-7 to require mitigation of the risk of malicious code posed by third-party TCAs, not just “review” of it.[i]

I don’t have a problem with what FERC is proposing here. My concern is with the timeline in which all of this is taking place. What do I mean by that? Let’s look at the bigger picture of what’s going on:

  1. FERC ordered NERC to develop a requirement to address a particular threat: the introduction of malicious code to Low impact BES Cyber Systems caused by Transient Cyber Assets that have somehow become infected with malware, most likely due to defective practices of the organization that operates them. FERC intended that this new requirement would adequately address all TCAs used at Low impact assets, whether owned by the Responsible Entity or by a third party.
  2. NERC developed a requirement part that adequately addresses this threat as it applies to TCAs owned by the Responsible Entity. However, FERC doesn’t believe that NERC’s proposed remedy for TCAs owned by a third party is fully adequate. Therefore, they have ordered them to develop an improved requirement, while at the same time proposing to approve the requirement as it stands in CIP-003-7.
  3. FERC mandated that NERC develop a requirement for TCAs used at Low impact assets in January 2016 in Order 822. NERC’s response to that mandate was CIP-003-7, which FERC is likely to approve in 3-6 months (my guess). That will then set off the 18-month implementation period, meaning it will probably be late 2019 before CIP-003-7 comes into effect; this is a little less than four years after Order 822.[ii]
  4. But as I’ve just discussed, FERC felt that, while CIP-003-7 does address what you might call the “sub-threat” of malicious code introduced to Low impact BCS by Transient Cyber Assets operated by the Responsible Entity, it needs further work before it adequately addresses the related sub-threat of malicious code introduced by TCAs operated by a third party.
  5. To address that sub-threat, FERC proposes to order NERC to develop a new version of this requirement. What’s the timeline for that? Assuming that FERC will issue their order in 3-6 months (it would likely be the same order that approves CIP-003-7), the first step will be for NERC to write a Standards Authorization Request (SAR) for this revised requirement (the SAR will most likely also include a revised requirement for Low impact electronic access control, which I discussed at length in my previous post).
  6. Rather than putting together a new Standards Drafting Team to address these two items (and perhaps others that FERC might order), my guess is NERC will simply add them to the SAR of the existing CIP Modifications SDT, which drafted CIP-003-7. Let’s say that SDT starts work on this in the third quarter of 2018 and takes 15 months to develop and ballot a new requirement and drop it on FERC’s desk (which is approximately what the SDT took to develop CIP-003-7, although they were under a deadline from FERC when they did that). That means FERC will have the NERC-approved requirement at the end of 2019 or early 2020.
  7. Let’s say they then take six months to approve the new requirement (perhaps first issuing a NOPR); this means they’ll approve it around mid-year 2020. Let’s also assume the SDT gets aggressive and develops only a one-year implementation schedule (vs. 18 months for CIP-003-7). This means the new requirement (part of CIP-003-8, presumably), will come into effect in the middle of 2021.
  8. The middle of 2021 is 5 ½ years after Order 822, when FERC originally ordered this. And – as I noted in end note ii below – if you make the case that FERC actually ordered a requirement to address the threat of malware introduced by transient devices used with Low (and Medium and High) BCS in Order 791 in November 2013, this means the requirement that addresses this threat took 7 ½ years to develop!

To make a long story short, FERC identified a serious threat and ordered NERC to develop a CIP requirement to address it, but it won’t come into effect until between 5 ½ and 7 ½ years after FERC’s order. And this isn’t very different from the experience with other new standards or requirements that FERC has ordered. Developing a new NERC standard or requirement, going through the process of new ballots and revisions, waiting for FERC to approve the standard once it’s been submitted to them, then following whatever implementation plan was developed and approved with the standard – all of this is at best a multi-year process, and can often take longer.

Now, I assume all of my readers know that things move very quickly in the cyber security field. A new threat can appear one day, and if an organization doesn’t deploy defenses against it within say two weeks, they are putting themselves at serious risk (think of the Apache Struts threat and Equifax). So let’s look at the threat of malware from transient devices. This threat had already been around for probably ten years when FERC ordered that NERC develop a requirement that would apply to Lows. Adding five years from FERC’s order until implementation of the new requirement (and of course it was actually more than that), this means this particular cyber threat took about fifteen years to be addressed in the CIP standards.

And this is a case where a threat has been included in CIP (or will be, anyway). As I pointed out in a post in September, there are a number of important current cyber threats – phishing being the best example, but also including ransomware, NotPetya, cloud- and virtualization-based threats, etc. – that aren’t addressed in CIP at all. Moreover, there is no movement now to include them in CIP.

And the reason for this (as I discussed in the September post) is simple: As I’ve just shown, the process for incorporating a new threat into the CIP standards takes somewhere between three and eight years, and that’s once NERC (or someone else) decides a new standard is needed and writes a SAR for it. And because the process is so long and convoluted, it seems there are now no further proposals for addressing threats not already addressed by CIP[iii]. The only exception to that is FERC orders for new standards (like CIP-013), which NERC will always comply with. But even FERC isn’t trying to make NERC address every new threat that comes along; as I said above, the threat posed by transient cyber assets had been around for ten years before FERC got up the nerve to order NERC to address it. FERC obviously knows they can’t push the NERC engine too fast, for fear it will simply freeze up.

So the CIP standards framework is simply unable to respond with any degree of alacrity to new cyber threats. Of course, it’s true that most players in the power industry do a good job of protecting themselves against new cyber threats, outside of CIP compliance. And it’s also true that NERC can put out NERC Alerts for serious threats; these don’t mandate any particular actions, although they do require entities to report to NERC on what steps they are taking to address these threats.

But as I discussed in the September post, this system of having some threats addressed with mandatory requirements, while others are addressed strictly on a voluntary basis, isn’t a good one. For one thing, it is a very inefficient way of addressing threats. More importantly, it severely distorts how entities spend their resources addressing cyber threats, since NERC entities are strongly incentivized to lavish resources on threats that happen to be addressed in the CIP standards (and thus carry substantial potential penalties for non-compliance) vs. those that aren’t, regardless of the level of risk each of these threats might actually pose on its own.

What’s to be done about this? My next post will discuss how I would fix this problem, were I given power to change both the CIP standards and the NERC compliance regime embodied in CMEP and the Rules of Procedure. Of course, once I lay this out, I expect the phone to be ringing off the hook (to use a quaint expression, since I don’t know any phone nowadays that still has a hook!) with requests from FERC and NERC to actually implement this. Just you wait!


The views and opinions expressed here are my own, and do not reflect those of any organization I work with. If you would like to comment on what you have read here, I would love to hear from you. Please email me at tom@tomalrich.com.

[i] FERC doesn’t seem to have a problem with Section 5.3 of Attachment 1 of CIP-003-7 R2, which deals with Removable Media managed either by the Responsible Entity or by a third party (although it never mentions who manages the Removable Media). This is most likely because 5.3 requires both detection and mitigation of any malicious code found on those Removable Media.

[ii] Actually, you could make the case that FERC mandated a requirement for TCAs used at Lows in November 2013, in Order 791; using this as the start date, it will have taken seven and a half years for the CIP requirement developed in response to this mandate to come into effect. I say this because, in Order 791, FERC first mandated that NERC develop a requirement for transient devices used with BES Cyber Systems. They didn’t say they wanted a requirement that would apply just to Medium and High impact BCS, but that is how the “CIP v6” drafting team interpreted this mandate (when they produced CIP-010-2 R4). FERC probably intended that the requirement apply to Lows as well, which may be why they ordered NERC to develop a Low impact transient device requirement in Order 822.

[iii] I believe there’s another reason for the fact that nobody in the NERC community has much enthusiasm for extending CIP to address new threats: I think the community has been put through the wringer on CIP version 5. With CIP v5, there were many ambiguities in the requirements, and NERC tried using various means to clarify those ambiguities, only to have all of its efforts founder on the fact that NERC isn’t allowed to provide any real guidance on the meaning of an ambiguously-worded requirement. The result is that the ambiguities haven’t been addressed yet, and probably never will be. Entities are responsible for coming up with and documenting their own resolutions of each area of ambiguity, as I first realized when I wrote this post in 2014. I know that the last thing most entities want is to fight any more bruising battles over the meaning of new requirements, when these are likely to prove as inconclusive as they did with CIP v5.