Novel Algorithm Speeds Discovery of Electric Power Grid Vulnerabilities
May 18, 2008
If you were in the northeastern United States on August 14, 2003, you might remember things being a little different that day: no getting home from work and flicking on the TV, no microwave dinner, no A/C to take the edge off the hot summer evening. Not even a few minutes clicking through your bookmarked news sites or checking e-mail.
August 14 marked the start of the 2003 Northeast blackout, a massive power outage that spread throughout the northeastern U.S. and into the Midwest and Canada, affecting an estimated fifty million people in and around the major cities alone. While power was restored to some areas in a matter of hours, others had limited or no power until as late as August 16, and some services, such as public transit, remained suspended or limited until the 18th.
In all, the Northeast blackout claimed at least 11 lives, left millions of people stranded from Toronto to New York, and caused billions of dollars in damages. And yet the cause of the outage that led to such widespread havoc across the Northeast was determined to be disproportionately simple: three power lines and three overgrown trees.
Of course, a variety of complications contributed to the eventual severity of the blackout: An alarm that could have alerted system operators to the problem failed; clues that the alarm was not operational went unnoticed; out-of-date information was unknowingly used to make decisions about the system, leading to courses of action that would almost certainly have been different had the correct information been used.
Yet it can all be boiled down to three trees---or, more importantly, three power lines that, while operating within their limits, sagged into the trees and short-circuited. Clearly, the power company's vegetation-management plans were not adequate in this case---not terribly surprising, if you consider the millions of trees that probably sit on transmission-line rights-of-way at any given moment, waiting their turn to be trimmed. But what if the power company had known in advance that the loss of those three lines would be enough to essentially cripple such a massive area?
Those rights-of-way almost certainly would have been moved to the top of the vegetation-management priority list, and August 14 would have been just another summer day; at worst, the system operators would have known immediately where to look to fix the problem when it occurred. For the long run, of course, the system could be changed or upgraded to remove the vulnerability entirely. Given that goal, is there some way to determine the smallest combinations of lines in an electrical transmission system that would be enough to cause a failure if they dropped out of service?
If you think that sounds suspiciously like a network inhibition problem, you're on the same page as Ali Pinar of Lawrence Berkeley National Laboratory.
"We started talking about this after the Northeast blackout," explains Pinar, a scientist in LBNL's High Performance Computing Research Department (and the recently elected secretary of the SIAG on Supercomputing). "It was just talk at the beginning. Then we found people in our [Environmental Energy Technologies] division who were interested, and we proceeded." The work gained four more collaborators thanks to MSRI-UP, the undergraduate research program of the Mathematical Sciences Research Institute.
Pinar and his colleagues wanted to develop an algorithm that would predict system vulnerabilities in the power grid: specifically, identify the smallest set of links on a graph whose removal would maximally inhibit the system's ability to deliver power.
The difficulty was in combining the discrete problem with that of nonlinear power flows, Pinar says. "Each one was a challenge by itself, and we wanted to handle both at the same time," he recalls. Fortunately, the researchers were able to exploit a special structure for a simpler solution.
The group proceeded by examining graph bisections---partitions of the nodes of a graph into two distinct groups or, in this case, of the power grid into two distinct sections.
Bisecting a graph requires breaking enough of its links to divide it into two completely separate sections. A collection of such breaks is called a "cut." The "cut size," the number of links broken, played a part in the researchers' algorithm development: Logic would suggest that a larger cut size would be more likely to lead to a blackout. But logic would also suggest that, at any given moment, loss of a large number of lines is less likely than loss of just one or two. Finding small cuts that could result in extensive damage to the system---that is, small collections of lines whose operation is critical to the functioning of the system as a whole---can help predict system vulnerabilities.
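The idea of hunting for small, critical cuts can be pictured with a toy example. The sketch below is not the researchers' algorithm---it is a brute-force illustration on an invented six-node grid---but it shows what a "smallest cut" means: it tries cut sets of increasing size until it finds the fewest lines whose removal splits the grid in two.

```python
# Brute-force illustration (invented toy grid, not the authors' method):
# find the smallest set of lines whose removal disconnects the network.
from itertools import combinations

# Hypothetical six-node grid; each edge represents a transmission line.
edges = [(0, 1), (1, 2), (0, 3), (1, 4), (2, 5), (3, 4), (4, 5)]
nodes = list(range(6))

def is_connected(edge_set):
    """Depth-first search from node 0; True if every node is reachable."""
    adj = {n: [] for n in nodes}
    for u, v in edge_set:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

# Try cuts of increasing size until one disconnects the grid.
for k in range(1, len(edges) + 1):
    cuts = [c for c in combinations(edges, k)
            if not is_connected(set(edges) - set(c))]
    if cuts:
        print("smallest cut size:", k)
        print("example cut:", cuts[0])
        break
```

On this toy grid no single line is critical, but removing the two lines attached to node 0 islands it---a "small cut" in the article's sense. Exhaustive search like this blows up combinatorially on real grids, which is exactly why the researchers needed the min-cut reformulation described next.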
By approximating the problem with a bisection, the researchers were able to transform their system vulnerability problem into a more general minimum cut problem, which can be solved quickly. With the decomposition of the grid system into two sections, the researchers could use the mismatch between power "consumed" (the load) and power generated to approximate the severity of the resulting blackout: The larger the difference between the load and the power generated, the more severe the blackout.
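That severity measure can be made concrete with hypothetical numbers (all generation and load figures below are invented for illustration): once a cut splits the grid into two islands, the load that an island's own generators can no longer cover approximates how severe the resulting blackout would be.

```python
# Illustrative sketch with assumed numbers: approximate blackout severity
# as the unserved load after a cut splits a toy grid into two islands.

# Hypothetical per-node generation and load, in megawatts.
generation = {0: 500, 1: 0, 2: 0, 3: 0, 4: 300, 5: 0}
load       = {0: 50, 1: 200, 2: 150, 3: 100, 4: 50, 5: 250}

def unserved_load(island):
    """Demand in an island that exceeds the island's own generation."""
    gen = sum(generation[n] for n in island)
    dem = sum(load[n] for n in island)
    return max(0, dem - gen)

# Suppose a cut isolates nodes {0, 3} from {1, 2, 4, 5}.
island_a, island_b = {0, 3}, {1, 2, 4, 5}
severity = unserved_load(island_a) + unserved_load(island_b)
print("unserved load (MW):", severity)
```

Here the second island demands 650 MW but generates only 300 MW, so roughly 350 MW of load goes unserved: the larger that mismatch, the more severe the blackout.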
One of the most impressive things about the algorithm is the speed at which it runs. The researchers tested it on a sample grid with 17,000 lines, getting results in just a few seconds on a desktop computer.
The entire U.S. power grid consists of almost 100,000 lines, Pinar says. "We haven't done that yet, but we predict it would take a few seconds."
If and when such an algorithm is used to examine the U.S. power grid, how would the results be used? If that algorithm had identified the three infamous downed lines that resulted in the Northeast blackout, keeping a closer eye on those lines would seem to be an obvious recommendation, for a start.
"That's [just] one way this may be used," Pinar says, but the truly interesting question is, "How do I prevent blackouts?" More than just predicting a potential outage, Pinar and his collaborators want to be able to suggest steps that would remove the vulnerabilities completely.
"There are two ways to do it," Pinar explains. "One is, given the grid, if I can't add lines or generators, how do I make a decision about where the power will be generated?" In the second case, lines or generators can be added; knowing the trends about how electricity is used, "how would I add generators to keep up with the demand?"
"We can't solve this immediately," he continues. "The first stage for us is, how do we find the vulnerabilities? But you don't want to keep telling people, 'Oh, there's a blackout, here's the problem. Oh, there's a blackout, here's the problem' . . . they'll hate you!" he laughs. "The second stage is, how do you handle generation so that there is no vulnerability?"
Pinar and his colleagues point out that this research is also relevant to other, similar networks and systems, such as transit systems and gas and water networks---and even medical research.
"We were presenting this in the lab to the division directors, and somebody just raised his hand and said, 'We're working on the same problem in cancer networks,'" he recalls.
"What they have is very similar---it starts with process A, and A affects B and B affects C, so it's like a dependency graph," he says, drawing a parallel to the power grid. "But drugs can inhibit signals in the cell, and thus affect the graph and the dependencies. It's like a blackout in the cell, so something doesn't happen in that cell."
Meanwhile, Pinar and his colleagues will continue to try to "cause" blackouts---with the eventual goal of preventing them.
Michelle Sipics is a contributing editor at SIAM News.