The Aurora vulnerability, or how to exploit knowledge of physical processes

Trying to raise awareness of cybersecurity issues among my fellow process and control engineers is a challenging task. We've talked about it before, making it clear how the lack of basic notions about ICT environments and procedures makes the risks and mechanisms of attack almost inconceivable to these engineers. I mean 'inconceivable' sensu stricto: not something with a very low assigned probability, but something you cannot even think about, because you lack the cultural background and experience to do so.

The most common response is denial, built on several fallacies that feed this sense of security. One is confidence in the mechanisms put in place to provide physical protection of equipment: that is, safety interlocks implemented by mechanical or electrical devices that operate autonomously, without processing or communication capabilities, and are therefore regarded as cyberattack-proof. Somehow, in a control engineer's state of mind (my own included), these systems are the last line of defense: absolutely isolated from and independent of any malfunction of processor-based systems (even when those processors are human), and put in place to prevent damage to physical equipment caused by improper process operation.

In my own experience, the design of control systems has always relied on a two-fold strategy:

  • Deployment of a higher control level based on electronic instrumentation and processing algorithms which, by their very nature, allow for finer tuning and higher efficiency. This is a processor-based level.
  • Deployment of a lower level based on relays and electrical and mechanical actuators that enable system operation in case of a control system crash or severe malfunction. This level is not processor-based and, as stated above, prevents the physical system from operating under improper conditions. It relies on built-in, hard-wired electromechanical equipment (a toy sketch of this two-level idea follows the list).
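To make the separation concrete, here is a minimal Python sketch of the two-level idea. It is purely illustrative: the signal, the names and the thresholds are all hypothetical, and the whole point of the lower level is precisely that in a real plant it is a relay or a mechanical device, not code.

```python
TRIP_TEMP_C = 95.0  # hypothetical hard-wired trip threshold

def processor_level(temp_c: float) -> float:
    """Upper level: fine-grained control action computed in software
    (here, a toy proportional controller around a hypothetical setpoint)."""
    setpoint, gain = 80.0, 0.5
    return gain * (setpoint - temp_c)

def hardwired_interlock(temp_c: float) -> bool:
    """Lower level: stands in for an electromechanical trip. In a real
    plant this is a relay or a mechanical device, not code."""
    return temp_c >= TRIP_TEMP_C

for temp in (78.0, 92.0, 97.0):
    print(f"T={temp:5.1f} C  control output={processor_level(temp):+.2f}"
          f"  hard-wired trip={hardwired_interlock(temp)}")
```

The trouble described in the first point below begins exactly when something like hardwired_interlock ends up re-implemented in software and networks.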

This second level supports the claim that it is virtually impossible for physical equipment to suffer severe damage, even if a malicious individual or organization takes control of the system. However, two facts undermine this security paradigm:

  • I have noticed that in many brand-new control systems, safety interlocks are implemented through digital instrumentation readings, communication networks and control-network PLCs. The aim is twofold: first, to lower costs in wiring and in devices regarded as redundant; and second, to leverage the greater accuracy and adaptability of digital systems. I know of some epic failures caused by this practice, with damage running into the tens to hundreds of thousands of Euros.
  • Interlocks and protection systems are designed to prevent damage if the process runs beyond the allowable operating conditions. But since physical systems are not described in terms of ones and zeros (there is a continuum of intermediate states), one must always allow a regulation deadband to prevent annoying tripping of the protection devices and to account for normal measurement variability. This is achieved by setting deadbands, hysteresis loops, tripping delays, etc. (see the sketch after this list).
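For readers on the ICT side of the fence, here is a minimal sketch of those two mechanisms, with entirely hypothetical thresholds and timings: a hysteresis band around the trip level, plus a trip delay that requires the violation to persist for several samples before the protection acts.

```python
class ProtectionRelay:
    """Toy protection relay: hysteresis plus an intentional trip delay."""

    def __init__(self, trip_level=105.0, reset_level=100.0, delay_steps=3):
        self.trip_level = trip_level    # picks up above this value...
        self.reset_level = reset_level  # ...drops out below this one (hysteresis)
        self.delay_steps = delay_steps  # violation must persist this many samples
        self.picked_up = False
        self.violation_count = 0

    def step(self, value: float) -> bool:
        """Feed one measurement sample; return True when the relay trips."""
        if self.picked_up and value < self.reset_level:
            self.picked_up = False
        elif not self.picked_up and value > self.trip_level:
            self.picked_up = True
        self.violation_count = self.violation_count + 1 if self.picked_up else 0
        return self.violation_count >= self.delay_steps

relay = ProtectionRelay()
# A brief excursion above the trip level is tolerated; a sustained one trips.
for v in [98, 106, 99, 107, 107, 107]:
    print(v, "TRIP" if relay.step(v) else "ok")
```

The brief spike to 106 is forgiven; only the sustained violation trips the relay. That built-in tolerance is legitimate and necessary, and it is exactly what the attack described below exploits.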

In the first case, physical protection devices are seriously compromised by being software- and network-dependent. But even in the second case it is possible, in principle, to mount an attack that deliberately exploits this design logic and forces working conditions that damage physical systems. Too complicated? Vain speculation? Not really. There is at least one documented case in which this strategy was used, with spectacular results: the so-called Aurora vulnerability.

This was an experiment conducted in 2007 at the INL (Idaho National Laboratory) and, as far as I can see, it has fallen into that limbo that lies between professionals involved in control systems and those engaged in the security of information and communication technologies: after all, to fully understand the attack one must have, so to speak, a foot in each half of the field. This could explain why news of the experiment went almost unnoticed (beyond a video broadcast by CNN that, possibly because of its spectacular nature, triggered the typical denial reaction in those who might be directly concerned). The veracity of the footage has even been intensely questioned, with suggestions that pyrotechnic devices were used to enhance the visual effect!

What is Aurora all about? To put it simply: Aurora is an attack designed specifically to cause damage to an electric power generator. It goes like this: all generating units are (or should be) protected against out-of-synchronism connection to the power grid. This is achieved by checking that the generated waveform matches that of the grid (within certain limits); to do so, voltage, frequency and phase are monitored. Why? Because connecting a generator to the grid out of synchronism will cause it to synchronize almost instantaneously, producing an extraordinary mechanical torque on the generator shaft, a stress the machine is not designed to bear. Repetition of this anomalous operating condition will cause the equipment to fail. Imagine someone trying to jump aboard a moving train: we can picture him running along the tracks, matching the train's speed, and then jumping inside. If he is lucky enough, he gets a soft landing on the wagon floor. An alternative, but not advisable, method is to stand beside the tracks and grab the ladder handrail as it passes right in front of you. It is easy to see that the resulting pull is something you don't want to experience.
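As a rough illustration of what such a sync-check does, the sketch below compares the generator's voltage, frequency and phase against the grid's. The limits are hypothetical, chosen only to show the shape of the check, not taken from any real relay setting.

```python
V_LIMIT_PCT = 5.0       # max voltage difference, percent (hypothetical)
F_LIMIT_HZ = 0.1        # max frequency slip, Hz (hypothetical)
PHASE_LIMIT_DEG = 10.0  # max phase-angle difference, degrees (hypothetical)

def in_sync(gen_v, gen_f, gen_ph, grid_v, grid_f, grid_ph) -> bool:
    """True only if all three quantities are within their windows."""
    dv = abs(gen_v - grid_v) / grid_v * 100.0
    df = abs(gen_f - grid_f)
    dph = abs((gen_ph - grid_ph + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return dv <= V_LIMIT_PCT and df <= F_LIMIT_HZ and dph <= PHASE_LIMIT_DEG

# Nearly matched: closing the breaker is safe.
print(in_sync(13.75e3, 50.02, 3.0, 13.8e3, 50.0, 0.0))   # True
# 120 degrees out of phase: closing now means the brutal pull described above.
print(in_sync(13.8e3, 50.0, 120.0, 13.8e3, 50.0, 0.0))   # False
```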

However, protective relays allow for a certain delay between recognition of the out-of-synchronism condition and the action of the protection devices, a delay set precisely to avoid nuisance tripping. This offers a window of opportunity to force undesirable mechanical stress on the generator without being disconnected from the grid. Detailed technical analyses of the attack, and of possible mitigating measures, have been published.
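Schematically, and with entirely hypothetical timings, the attack logic looks like this: open the generator breaker, let the machine drift out of phase, and reclose before the relay's intentional delay has elapsed, so the protection never gets a chance to act.

```python
RELAY_DELAY_MS = 250  # hypothetical intentional trip delay

def attack_cycle(open_to_reclose_ms: int) -> str:
    """One open/reclose cycle against a relay with an intentional delay."""
    if open_to_reclose_ms < RELAY_DELAY_MS:
        return "reclosed out of phase BEFORE the relay acts: torque hit, no trip"
    return "relay delay elapsed: protection isolates the generator"

for t_ms in (120, 180, 400):
    print(f"{t_ms:4d} ms -> {attack_cycle(t_ms)}")
```

Repeat the fast cycles a few times and the shaft accumulates exactly the kind of mechanical damage the INL test made visible.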

True, for an attack of this kind to succeed, a number of preconditions must be met: knowledge of the physical system, remote access to a series of devices, certain operating conditions of the electrical system, knowledge of the existing protections and their settings… These are the arguments that will arise in the denial phase. But that's not the point.

The point is this: given the degree of exposure of industrial control systems to cyberattacks (for reasons that are historical, cultural, organizational and technical), the only thing needed to wreak havoc upon them is knowledge of the physical systems and their control devices. The Aurora vulnerability is a very specific case, but it should be enough to show that confidence in the physical protection of equipment has its limits, limits waiting to be discovered. Regarding that protection as our only line of defense is a risk no one can afford.

Can we?

By the way, the original Aurora vulnerability video can be seen below: