Is your plant controlled by Russia? Are you sure?

A report issued by the US Department of Homeland Security - reported today in the New York Times here: https://nyti.ms/2Dv2gXY - states that there has been a years-long program of cyberattacks by Russian operatives that has extended into the control rooms of US utility plants. The report implies that these attacks have been successful, to the point of the attackers gaining access to operational control systems, with the possible capability of shutting down or even damaging (potentially catastrophically) the plants.

This is pretty shocking given the high profile that cybersecurity has been given for well over a decade by the ISA and other industry groups. Is everyone asleep at the switch?
 
There is nothing better than Fake News to frighten people...

'Is everyone asleep at the switch?'
Hardly - even the movie Die Hard 4, now 9 years old, gave the public an idea of what could happen.
But is a power station going to shut down because a computer command says it has to?

Nowadays many big companies have teams of personnel dedicated to cybersecurity, and seminars worldwide are educating managers and chief engineers about the risks involved. The breaches reported over the past few years have generally been proportional to the complacency and ignorance of those involved.

Those of us who work as control engineers had isolated control systems in years past, but we are now very much aware that even an exposed USB port on a SCADA system or engineering laptop can be a target for attack.
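
As one small, concrete example of closing that door: the sketch below is a minimal, read-only Python check (Windows only, standard library) of whether the USB mass-storage driver has been disabled on an engineering workstation via the standard USBSTOR service entry. It is an audit illustration only - it reports and changes nothing; what to actually do about the result is site policy.

    # Read-only audit: is the USB mass-storage driver disabled on this
    # Windows engineering workstation?  Start = 4 means the USBSTOR
    # service will not load; 3 is the default "load on demand".
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        start_value, _ = winreg.QueryValueEx(key, "Start")

    if start_value == 4:
        print("USB mass storage is disabled on this machine.")
    else:
        print("USB mass storage is ENABLED (Start=%d) - review site policy." % start_value)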
 
The suggestion that a plant could be remotely operated on an ongoing basis without detection is ludicrous. Stuxnet, by far the largest and most sophisticated attack on industrial control systems to date, was only able to accomplish this for very specific equipment, for a short period of time, in order to damage Iran's nuclear centrifuges. More importantly, it spoofed the VFD feedback only to avoid tripping equipment protection limits. Although this also meant the real values were hidden from the operators, Stuxnet modified the frequency only from time to time, because doing it continually would have been noticed.

"Experts believe that Stuxnet required the largest and costliest development effort in malware history.[36] Developing its many capabilities would have required a team of highly capable programmers, in-depth knowledge of industrial processes, and an interest in attacking industrial infrastructure.[13][18] Eric Byres, who has years of experience maintaining and troubleshooting Siemens systems, told Wired that writing the code would have taken many man-months, if not years.[51] Symantec estimates that the group developing Stuxnet would have consisted of anywhere from five to thirty people, and would have taken six months to prepare.[76][36] The Guardian, the BBC and The New York Times all claimed that (unnamed) experts studying Stuxnet believe the complexity of the code indicates that only a nation-state would have the capabilities to produce it."

Screwing something up so it breaks in an acute attack? Sure. But I can't imagine that spoofing operators for prolonged periods could be accomplished in industries where malfunctions have more visible indications.
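
To make that concrete, here is a rough Python sketch of the kind of cross-check that defeats this sort of spoofing: compare the commanded VFD frequency against an independently measured value (a separate tachometer, say) and alarm on sustained disagreement. Every name and threshold here is invented purely for illustration; the real point is that a signal path the malware doesn't control is what makes prolonged spoofing so hard.

    # Hypothetical plausibility check: compare the commanded VFD frequency
    # with an independently measured value and flag sustained disagreement
    # that the (possibly spoofed) normal feedback would never show.
    TOLERANCE_HZ = 2.0      # allowable drift between command and measurement
    MAX_BAD_SAMPLES = 10    # consecutive bad samples before alarming

    def check_feedback(samples):
        """samples: iterable of (commanded_hz, independent_hz) pairs."""
        bad_run = 0
        for commanded_hz, independent_hz in samples:
            if abs(commanded_hz - independent_hz) > TOLERANCE_HZ:
                bad_run += 1
                if bad_run >= MAX_BAD_SAMPLES:
                    return "ALARM: command and independent measurement disagree"
            else:
                bad_run = 0
        return "OK"

    # Example: the HMI 'sees' 50 Hz, but an independent sensor reads ~80 Hz.
    history = [(50.0, 50.1)] * 5 + [(50.0, 79.8)] * 12
    print(check_feedback(history))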
 
C

Curt Wuollet

We certainly hope not, but people should listen to the canary.

Let's look at our security situation:

The systems that run on embedded processors are not that great an opportunity: they are diverse and tend not to be connected to easy-to-abuse networks. It's the layers above that could hardly present a better vector for takeover. Everything above the lowest layers is totally dependent on a monoculture - totally dependent on the most frequently penetrated operating systems known to man. To make matters worse, the complexity and sheer difficulty of changing anything means it is quite common for these systems to lag years behind on security updates. And the "switchmen" often have so little interaction with the software underneath that they are unlikely to notice anything that doesn't stop production. In other words, we have been asleep at the switch and keep hitting the snooze button, since the alternative might be harder and security just hasn't been a criterion.

"Just getting things to work has been hard enough". The military got the email, they are diversifying their systems, slowly, but depending on compromised systems was getting people killed. Diversity isn't the whole solution, but at least at least _considering_ security when selecting platforms would help. It would make it much harder to break in and once you have platforms with the means, application can be redesigned for enhanced security. Yes, it would require effort, but perhaps the very real possibility of terrorism will provide the motivation. A determined, state sponsored, effort by the right people probably can't be entirely denied, but we can make it non-trivial. If you don't think it's trivial in your present situation, hire a white hat to show you. I'm not optimistic that anything will change as long as Redmond is ruling the roost and Android, while ascendant, suffers from some of the same problems, for some of the same reasons. Simple and secure tend not to go together. And "anyone can do it" does not bode well for security.

Regards
cww
 
I don't think this is fake news, though I agree that this is unlikely to represent ongoing control over plants. But when the attackers can provide screenshots from (presumably) SCADA systems controlling a plant, it must at the very least be a call to action. Maybe they only had read access, maybe something greater.

The most likely use of such a threat would be as leverage to get the US to back off on sanctions. Given the current size of our military budget, perhaps some funds could be spared to harden our infrastructure.
 