The DNP UG recently published a statement regarding the rash of DNP3 advisories from ICS-CERT.  Generally, I agree with their statements. There is nothing wrong with the specification in the perfect world of specifications.  In theory, a developer should be able to write a flawless implementation of the protocol.  In practice, however, something quite different has been demonstrated. What factors account for this disparity, and how can these pitfalls be avoided going forward?

“Simplicity is prerequisite for reliability.” -Edsger W. Dijkstra

There are many ways to express this simple design principle.  In software engineering, this is a well-studied phenomenon.  Software bugs are proportional to the number of lines of code (LOC).

Bugs ∝ LOC

There are many factors that affect the constant of proportionality.  A short list includes:

  • Amount, quality, and type of testing (unit, functional, integration, negative, etc)
  • Performing static analysis
  • Peer review of code
  • The type of application (i.e., multi-threaded apps are prone to additional types of bugs)
  • Code-base maturity and # of users (users find bugs)
  • Appropriate architecture, modularity, and encapsulation (good design principles)
  • Good internal reusability (no copy/paste coding)
  • Developer skill level
  • Language of implementation

The point is that with all of these other factors being equal, the number of bugs increases proportionally to the number of lines of code.  How can we relate this rubric to protocol specifications?

LOC ∝  Size of Specification

The larger a specification is, the more code you have to write to implement it. Proportionality is transitive, which allows us to write the following.

Bugs ∝  Size of Specification

Now we have something illuminating. Specification size and complexity are directly proportional to bugs in implementations.  Let’s put DNP3 in perspective with other protocols.

Specification         Document Size (pages)   Ratio
Modbus V1.1b3         50                      1
DNP3 IEEE-1815-2012   821                     16.4
IEC61850 + MMS        1800+                   36+

When you look at it this way, the risk is obvious. The predictions for a full implementation of IEC61850 are rather dire to say the least.
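The ratio column above is simple arithmetic, normalizing each page count to the Modbus baseline. A minimal sketch (page counts taken directly from the table; the `spec_pages` dictionary is just an illustrative container):

```python
# Relative specification size, normalized to Modbus (page counts from the table above).
spec_pages = {
    "Modbus V1.1b3": 50,
    "DNP3 IEEE-1815-2012": 821,
    "IEC61850 + MMS": 1800,  # lower bound; the actual total is higher
}

baseline = spec_pages["Modbus V1.1b3"]
ratios = {name: pages / baseline for name, pages in spec_pages.items()}

for name, ratio in ratios.items():
    print(f"{name}: {ratio:.1f}x")
```

If bugs track LOC and LOC tracks specification size, these ratios are a rough first-order estimate of relative bug exposure, all other factors being equal.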

Is there something wrong with the DNP3 specification?

I don’t think the answer is black and white.  The Achilles’ heel of DNP3 is size and complexity.  What standards bodies have to realize is that functionality and robustness are usually competing design concerns.  DNP3 has a lot of stuff in it that most users don’t need.

  • If you’re on IP infrastructure, there is almost always a better alternative to DNP3 file transfer.
  • How many deployments actually use virtual terminal objects or custom data-set extensions (besides WITS)?
  • Do we really need 32-bit addressing?
  • Should level 2 include Common Time of Occurrence and related events?
  • Should level 2 include broadcast?

These are just some examples I think should be discussed in the case of DNP3.  Even if an end user doesn’t use something, bad code stubs may still be lurking in their vendor’s implementation.  How well have they tested it?  Have these seldom-used features been exhaustively put through their paces using a protocol fuzzer?
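To make the fuzzing point concrete, here is a toy sketch of what link-layer negative testing could look like. The CRC-16/DNP parameters (polynomial 0x3D65, reflected, XOR-out 0xFFFF) and the 0x05 0x64 start octets come from the published DNP3 spec; everything else, including the function names and the mutation strategy, is invented for illustration:

```python
import random

def crc16_dnp(data: bytes) -> int:
    """CRC-16/DNP: reflected polynomial 0x3D65 (0xA6BC reversed),
    init 0x0000, final XOR 0xFFFF."""
    crc = 0x0000
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def fuzz_link_frame(dest: int, src: int, rng: random.Random) -> bytes:
    """Build a DNP3 link-layer header with random LEN and control octets
    but a *valid* CRC, so the frame gets past the checksum check and
    exercises the receiver's length- and control-handling code."""
    header = bytes([
        0x05, 0x64,              # start octets
        rng.randrange(256),      # LEN: frequently nonsensical, on purpose
        rng.randrange(256),      # control octet
        dest & 0xFF, dest >> 8,  # destination address (little-endian)
        src & 0xFF, src >> 8,    # source address (little-endian)
    ])
    crc = crc16_dnp(header)
    return header + bytes([crc & 0xFF, crc >> 8])
```

A real fuzzer would also mutate the transport and application layers and monitor the target for crashes or hangs; the point here is only that seldom-used code paths are cheap to reach once the checksums are computed correctly.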

If we’re going to demand functionality, we must also demand adequate testing.  We can lower the constant of proportionality between bugs and specification size in a number of ways.  There are things that standards bodies can do to help with this issue:

  • Highlight specific areas of the standard where mistakes have been made in the past or are likely to be made in the future.
  • Produce protocol subsets that distinguish core functionality from use-at-your-own-risk features.
  • Consider developing or supporting open reference implementations of standards to provide direct feedback to specification changes or additions.
  • Consider making testing recommendations that include other regimes besides conformance: e.g., unit, functional, soak, and negative testing.  These can be purely high-level recommendations, not detailed specifications as is the case with DNP3 conformance tests.

Those implementing standards have additional responsibilities and pitfalls to navigate:

  • Use appropriate technology.
  • Design safe APIs so that protocol libraries are difficult to misuse.
  • Use good design principles that encapsulate complexity and promote modularity.
  • Decide when you have enough of each class of testing; this is not easy.  Code coverage helps, but it isn't everything.
  • Provide feedback to standards bodies on hard-to-implement, dangerous, or extraneous features.
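One concrete reading of "design safe APIs": make invalid protocol states unconstructible, so callers cannot misuse the library by accident. A hedged Python sketch (the class and field names are invented; the only fact taken from the protocol is that DNP3 link addresses are 16-bit values):

```python
class LinkAddress:
    """A DNP3 link address validated at construction, so downstream
    framing code never has to re-check the range (illustrative only)."""
    def __init__(self, value: int):
        if not 0 <= value <= 0xFFFF:
            raise ValueError(f"link address out of 16-bit range: {value}")
        self.value = value

class OutstationConfig:
    """Bundles both addresses so a caller cannot omit one by accident."""
    def __init__(self, local: LinkAddress, remote: LinkAddress):
        self.local = local
        self.remote = remote

# Misuse fails loudly at construction, not deep inside the frame encoder.
cfg = OutstationConfig(LinkAddress(1024), LinkAddress(1))
```

The design choice is to push validation to the narrowest constructor, so every function that accepts a `LinkAddress` can trust it without defensive re-checks.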

Controlling, managing, and mitigating complexity are the fundamental challenges of software engineering.  Part of managing complexity is controlling its growth and asking if you really need a feature.  Every standards body needs someone to play the role of Mordac!


Systems that find the right balance between features and security are not built easily.  Mordac can’t always win, but you need to listen to what he has to say.