The U.S. electrical grid has hardly changed since the War of Currents, the 1880s fight that saw George Westinghouse’s alternating current triumph over Thomas Edison’s direct current as the preferred method for generating and transmitting electricity. It remains a network of long-distance transmission and distribution lines designed to move electricity in one direction: from giant, lumbering fossil fuel plants to faraway households and businesses. This system has endured because it has proved safe, reliable, effective, and affordable.
Until now, that is. Although the grid is still safe, its reliability, effectiveness, and affordability are increasingly being called into question. The culprits are twofold. First, recent natural disasters, such as Hurricane Sandy, have revealed just how vulnerable the grid is, increasing the political pressure to invest in making it more resilient. Second, the rise of distributed sources of renewable power is adding new stresses to the inflexible grid. Solar power is wreaking particular havoc, as utilities struggle to forecast and react to customers’ demand for grid electricity, which falls when the sun is shining and swells when it is not.
But help is on the way. Utilities are investing in a smarter grid that will provide unprecedented insight into electricity distribution and consumption. The resulting technological improvements will enable grid operators to better manage the rise of renewable power, resulting in a cleaner, cheaper, more reliable grid than Edison and Westinghouse could have ever imagined.
Much work remains to be done, however. The challenge is to convince regulators that these investments not only are necessary to make the grid more reliable and open to renewable power but also can be cost effective. The struggle stems from a lack of forward thinking by both utilities and regulators. In many cases, the regulators responsible for authorizing these investments are content to maintain the status quo rather than improve the grid’s infrastructure, even though in some areas 80 percent of it has not been upgraded since the Kennedy administration. If the battle can be won, the people running the grid will have the data to manage it in a smarter, cheaper, more reliable, and cleaner way.
THE GERIATRIC GRID
It’s no secret that much of the U.S. power grid is decades old. The average age of the country’s large power transformers—critical pieces of equipment that transfer electricity between circuits—is around 40 years, and many have been operating for more than 70 years.
But the grid does not owe its vulnerability to age. Indeed, although the risk that a piece of equipment will fail increases with age, utilities counteract this problem by regularly monitoring equipment and replacing it when there are signs of degradation. Instead, the vulnerability of the grid has to do with other factors. Of transformer outages from 1991 to 2010 in which the cause was known, 67 percent were attributed to external causes (such as lightning, overload, or foreign objects) rather than age-related ones (such as moisture penetration or failing insulation). None of the largest outages in the last decade in the United States occurred solely due to an age-related equipment failure.
What really keeps operators up at night is natural disasters. Hurricane Irene in 2011 and Hurricane Sandy in 2012 blew trees onto power lines and flooded power stations, leaving millions of customers without electricity for days. These storms were unusual in their size and strength, but they accounted for just two of the roughly 5,800 electrical outages that occurred in those years, according to the power company Eaton. Natural disasters are the single largest cause of blackouts in the United States, responsible for 32 percent of all unplanned outages in 2013. (The rest were caused by faulty equipment, human error, car crashes, and animals that damaged equipment.)
These storm clouds did have a silver lining: the hurricanes generated the political will to invest in strengthening the grid. Utilities in the Northeast suffered $3.3 billion worth of damages from Hurricane Sandy alone. After they restored power and cleaned up the debris, the utilities asked state regulators to approve passing on the cost of any needed repairs and improvements to their customers, as is typically done for utility expenditures. Con Edison, the utility serving most of New York City and Westchester County, received approval for nearly $1 billion; the Public Service Electric and Gas Company (PSE&G), which serves much of New Jersey, put forth a $3.9 billion plan, of which $1.2 billion was approved in 2014.
Both plans focus nearly 90 percent of the funds on hardening the grid by installing stronger power poles and lines, building flood-proof barriers, and burying cables underground. The other ten percent, some $200 million combined, is the exciting part. It will go toward placing digital sensors throughout the grid to detect risks and monitor power flows. Connected by software, these devices will enable the utilities to pinpoint outages, reroute electricity around them, and alert customers about future repairs. Once these upgrades are in place, operators will have real-time data about the status of the grid. And unlike grid-hardening measures, these technologies will pay dividends during everyday operations, since the resulting data will make it easier to match supply and demand.
While natural disasters gave a much-needed boost to smart-grid spending, even more funding came from the 2009 stimulus bill, which provided $3.4 billion for such investments and accounts for a full 15 percent of all the money invested in smart-grid projects since 2009. That spending paid off years later, providing the foundation on which many of the post-Sandy grid improvements will take shape. As a result, many smart-grid technologies are no longer considered science projects; they are just business as usual.
THE PITFALLS OF PEAK
A more recent threat to the grid has come from renewable sources of electricity—chiefly, solar power. The price of solar panels has plummeted in recent years, making rooftop panels a cheaper energy source than the grid in many regions. Over the past decade in the United States, the total capacity of residential solar power has grown by a factor of nearly 18, to 3.8 gigawatts. It is expected to reach nine gigawatts by 2017.
Even though rooftop solar panels may give the illusion of independence, they in fact rely heavily on the grid, which acts as a battery and spare power plant. Customers still count on grid electricity to fill in the gaps in their supply—for example, when a cloud passes over their house. With enough solar panels, a house can sell enough power back to the grid such that it zeroes out its electricity bill over the course of a year, becoming what is known as a “net zero” household. But even a system of that size is nowhere near large enough to allow a house to go completely off-grid, because the house still needs power when the sky is dark. Disconnecting entirely from the grid would require four times as many panels as a net-zero household has and tens of thousands of dollars in batteries. As a result, many solar users get a free ride by taking advantage of the flexibility the grid provides without paying anything additional in return.
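To see why, consider some rough arithmetic. The Python sketch below is purely illustrative: the household consumption, panel yield, days of battery autonomy, and battery cost are all assumed values chosen for demonstration, not figures from utility data. It shows why matching annual consumption (net zero) takes far less hardware than covering every dark hour (off-grid).

```python
# Back-of-envelope sizing: net-zero vs. fully off-grid household.
# All numeric inputs are illustrative assumptions.

DAILY_USE_KWH = 30.0          # assumed household consumption per day
PANEL_YIELD_KWH_PER_KW = 4.0  # assumed average daily output per kW of panels

# Net zero: annual production only has to match annual consumption,
# because the grid absorbs midday surpluses and fills nighttime deficits.
net_zero_kw = DAILY_USE_KWH / PANEL_YIELD_KWH_PER_KW  # 7.5 kW of panels

# Off-grid: the system must ride through cloudy stretches on its own,
# so the array is oversized and batteries hold several days of use.
OVERSIZE_FACTOR = 4           # the "four times as many panels" above
DAYS_OF_AUTONOMY = 3          # assumed days of storage for sunless weather
BATTERY_COST_PER_KWH = 500.0  # assumed installed cost, dollars per kWh

off_grid_kw = net_zero_kw * OVERSIZE_FACTOR
battery_kwh = DAILY_USE_KWH * DAYS_OF_AUTONOMY
battery_cost = battery_kwh * BATTERY_COST_PER_KWH

print(f"Net-zero array: {net_zero_kw:.1f} kW")
print(f"Off-grid array: {off_grid_kw:.1f} kW")
print(f"Battery bank: {battery_kwh:.0f} kWh, roughly ${battery_cost:,.0f}")
```

Under these assumptions, the battery bank alone costs $45,000, consistent with the tens of thousands of dollars cited above.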
Solar power complicates utilities’ operations in another way: it makes the demand for electricity more variable. In the United States, a major government push for energy efficiency has succeeded in keeping average electricity demand from ballooning over the past decade. But the gap between the hours of maximum energy use—peak demand—and those of lower energy use has been growing. In 1990, peak demand in the United States was about 55 percent higher than average demand; today, that figure has risen to 75 percent, and by 2030, it is expected to reach 90 percent. A number of trends explain the rise in “peakiness.” First, the decline of the industrial sector and the growth of the residential sector have replaced the steady electricity demand of factories with the more variable demand of houses. Second, the increasing popularity of air conditioning has raised peak demand during the summer. And third, climate change has made the weather more volatile, and electricity demand is higher on extremely hot and cold days.
In regions with abundant solar power, the peakiness problem is amplified, albeit in a more predictable pattern: solar panels generate the most electricity during the middle of the day, when the sun is highest, whereas customer demand for electricity is highest at the end of the workday, when people come home and turn on lights and appliances. In places where solar panels are widespread, such as parts of Germany, California, and Hawaii, utilities are faced with massive spikes in demand as solar supply drops with the setting sun.
The problem with peakiness is that it makes for an inefficient grid. Electrical infrastructure is sized for the few hours a year when electricity demand is forecast to be at its absolute highest—those weekday afternoons in the summer when every air conditioner is on. As a result, on an average day, less than 50 percent of the U.S. grid’s capacity is used. It is the equivalent of designing highways wide enough to avoid traffic jams on even the busiest driving days of the year. The oversized grid requires more capital to build and maintain, and these costs are borne by all customers, not just those who use the most electricity at peak times. To return to the highway analogy, the pricing system is like charging car owners monthly minimums for gasoline even if they never drive their cars.
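The under-50-percent figure follows from simple arithmetic. Here is a minimal sketch using the 75 percent peak-to-average gap cited above; the 15 percent reserve margin (extra capacity built beyond forecast peak) is an assumed, illustrative value.

```python
# How a peaky load profile translates into low average grid utilization.
# The 1.75 peak-to-average ratio comes from the text; the reserve margin
# is an assumed, illustrative value.

peak_over_average = 1.75  # peak demand is about 75% above average demand
reserve_margin = 0.15     # assumed extra capacity built beyond forecast peak

# Capacity is sized for peak plus reserve; average use stays the same, so:
utilization = 1.0 / (peak_over_average * (1.0 + reserve_margin))
print(f"Average utilization: {utilization:.1%}")  # about 49.7%
```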
When demand moves from average to peak, utilities tend to fill the gap in an inefficient way. Managing peak demand requires control over supply, demand, or both. With little control over customer demand and no control over the supply from rooftop solar panels, utilities are forced to fill the gap with the one resource at their disposal: centralized fossil fuel plants called “peakers,” which run just a few hundred hours per year. In fact, a few hours before the system is expected to reach peak demand, utilities ramp up peakers so that they can immediately be connected to the grid when needed, meaning that during those hours, the plants are burning costly fuel and generating emissions yet powering nothing. Because their output is spread over so few hours, electricity from peakers can cost twice as much per kilowatt-hour as that from the bigger, more efficient plants that generate base-load power—and it comes with 40 percent more emissions per kilowatt-hour.
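The cost penalty is a direct consequence of utilization: a plant’s fixed costs are spread over however many hours it actually runs. The sketch below makes the point with assumed, illustrative cost figures (none are from the article); only the logic matters.

```python
# Why a rarely run peaker costs more per kWh than a base-load plant.
# All cost and hours figures are assumed, illustrative values.

HOURS_PER_YEAR = 8760

def cost_per_kwh(fixed_per_kw_year: float, fuel_per_kwh: float,
                 hours_run: float) -> float:
    """Fixed costs amortize over output; fewer hours means a higher cost."""
    return fixed_per_kw_year / hours_run + fuel_per_kwh

# Base-load plant: assumed to run about 80 percent of the year.
base = cost_per_kwh(fixed_per_kw_year=150.0, fuel_per_kwh=0.03,
                    hours_run=0.80 * HOURS_PER_YEAR)

# Peaker: cheaper to build but assumed to run only ~600 hours a year,
# on pricier fuel at lower efficiency.
peaker = cost_per_kwh(fixed_per_kw_year=50.0, fuel_per_kwh=0.045,
                      hours_run=600.0)

print(f"Base-load: ${base:.3f}/kWh  Peaker: ${peaker:.3f}/kWh")
# Under these assumptions, the peaker is roughly 2.5 times as expensive.
```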
DEMAND AND CONTROL
New technologies are rapidly changing the way utilities react to peak demand. Grid operators are embracing what is known as “demand response” to manage the amount of electricity their customers draw from the grid. Utilities are able to reduce demand directly by remotely controlling large loads such as industrial equipment, pool pumps, water heaters, and communicating thermostats. They can also reduce demand indirectly, by offering pricing incentives for customers to use less electricity at certain times. Several utilities have implemented large-scale pilot programs that pay residential customers to cut their electricity use during peak periods, offering rebates of around $10 per day. Customers can use smartphone apps to control lights, locks, thermostats, and security systems even when away from home—and often in response to a text message or an e-mail from the utility alerting them that peak demand is occurring. But around the world, these programs are few and far between. Demand-response programs have been successful at managing the large electrical loads of commercial and industrial customers but for the most part have not tapped into the residential sector, which is what drives peak demand in most regions.
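In code, the direct-control side of demand response is conceptually simple. The sketch below is hypothetical: the class, field names, and three-degree offset are inventions for illustration, not any utility’s actual protocol. It shows the idea of a utility nudging a fleet of enrolled thermostats during a peak event.

```python
# Hypothetical sketch of direct load control for communicating thermostats.
# Names and the setpoint offset are illustrative, not a real utility API.

from dataclasses import dataclass

@dataclass
class Thermostat:
    home_id: str
    cooling_setpoint_f: float  # degrees Fahrenheit

def call_peak_event(fleet: list[Thermostat], offset_f: float = 3.0) -> None:
    """Relax every enrolled cooling setpoint, trimming aggregate demand."""
    for t in fleet:
        t.cooling_setpoint_f += offset_f

fleet = [Thermostat("home-001", 72.0), Thermostat("home-002", 74.0)]
call_peak_event(fleet)
print([t.cooling_setpoint_f for t in fleet])  # [75.0, 77.0]
```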
An alternative approach to managing peak demand involves charging customers “time-of-use” rates. Most households pay the same regional price per kilowatt-hour for electricity regardless of the time of day it is consumed. Time-of-use rates set higher rates for using electricity at peak times and lower rates for off-peak periods. These pricing schemes encourage people to change their behavior—for example, to set the dishwasher to run in the middle of the night or turn the air conditioner down a notch. So far, however, only Ontario and Italy mandate time-of-use rates. (Spain and California are set to join them in 2018.) These programs have demonstrably reduced peak demand, but the difference between peak and off-peak prices has been relatively small, sufficient to shift demand by only a few percentage points.
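A quick sketch shows how the incentive works for a single appliance. The flat, peak, and off-peak rates below are assumed, illustrative values, as is the two-kilowatt-hour dishwasher cycle.

```python
# How time-of-use (TOU) rates reward shifting a load off-peak.
# All rates and the dishwasher load are assumed, illustrative values.

FLAT_RATE = 0.15      # $/kWh under a conventional flat tariff
TOU_PEAK = 0.30       # $/kWh during peak hours (e.g., late afternoon)
TOU_OFF_PEAK = 0.08   # $/kWh overnight

dishwasher_kwh = 2.0  # assumed energy used by one cycle

print(f"Flat rate, any time:    ${dishwasher_kwh * FLAT_RATE:.2f}")
print(f"TOU, run at 6 PM:       ${dishwasher_kwh * TOU_PEAK:.2f}")
print(f"TOU, run past midnight: ${dishwasher_kwh * TOU_OFF_PEAK:.2f}")
```

The per-cycle saving is modest, which helps explain why small peak-to-off-peak spreads have shifted demand by only a few percentage points.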
Putting time-of-use pricing in place in the United States won’t be easy. Two utilities in Arizona, Arizona Public Service and the Salt River Project, have stood out by getting a large group of their customers to sign up voluntarily for time-of-use rates. But to make such rates mandatory, utilities will have to cut through a web of red tape. According to the Edison Electric Institute, an industry association, the regulatory process in the United States for changing residential electricity rates takes an average of ten months and sometimes over two years. As a case in point, California began the process of allowing utilities to mandate residential time-of-use rates in 2013, but the change won’t take effect until 2018.
One of the most proactive approaches to managing demand has emerged in New York. Through the 2014 initiative Reforming the Energy Vision, the state is encouraging the integration of solar and other renewables using demand management and smart-grid technologies. Although the state has yet to define the particular mechanism, the goal is to bridge the gap between the costs that utilities pay for the grid and the price customers pay for electricity.
TIME TO INVEST
Perhaps the most politically practical way to modernize the grid is to ride the wave of infrastructure investment, especially by drawing on funds earmarked for storm recovery. In terms of their scope and innovation, few of the projects under way rival the billion-dollar plans of Con Edison and PSE&G, but there is still an opportunity for other utilities to commit funds to smart-grid projects to improve reliability.
In part, utilities are motivated by revenue. Most regulators specify the profit that utilities are allowed to earn on investments, calculated as a return on equity—usually around ten percent annually. Once a proposed investment is approved, regulators allow the utility to add these costs to its rate base and recover them from customers’ bills over a number of years. In this way, building new infrastructure offers a major avenue for utilities to grow their revenue streams, which is especially crucial for utilities that are losing money as efficiency gains and renewable power eat into their overall sales. In 2014, to modernize the grid and relieve congestion, U.S. electric and gas utilities invested more than $31 billion—up by nearly seven percent from 2013.
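The revenue mechanics can be made concrete. In the sketch below, only the roughly ten percent return on equity comes from the text; the investment size, equity share, and recovery period are assumed, illustrative values.

```python
# How an approved capital investment feeds a utility's earnings.
# Only the ~10% return on equity is from the text; the rest is assumed.

investment = 1_000_000_000  # assumed approved capital spending, in dollars
equity_share = 0.50         # assumed fraction financed with equity
allowed_roe = 0.10          # allowed annual return on equity
recovery_years = 30         # assumed depreciation/recovery period

equity_base = investment * equity_share
first_year_profit = equity_base * allowed_roe  # shrinks as assets depreciate
annual_recovery = investment / recovery_years  # collected from customers

print(f"First-year allowed profit: ${first_year_profit:,.0f}")  # $50,000,000
print(f"Annual capital recovery:   ${annual_recovery:,.0f}")    # $33,333,333
```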
Regulators have proved willing to approve investments that promise higher reliability, although they are not giving utilities a blank check. After all, the New Jersey Board of Public Utilities cut PSE&G’s original post-Sandy plan by almost 70 percent. Nonetheless, in future dealings with regulators, other utilities would do well to point to the investments that the regulators of PSE&G and Con Edison green-lighted as they build their own cases for smart-grid projects aimed at improving reliability. Regulators, for their part, can look to replicate the successes of stimulus-funded projects and other recent investments rather than stick with the conventional approach of hardening the grid.
But not all investments in the grid need to come from utilities; state governments can also play a role. Over the last two years, state-level investments in projects aimed at making the grid more reliable have taken off. Most have taken the form of microgrids, which are buildings or campuses that are often connected to the main grid yet have enough on-site power generation to disconnect (or “island”) from it in the event of an outage. Since Sandy hit the Northeast, Connecticut, Massachusetts, New York, New Jersey, and Maryland have together offered over $100 million in microgrid grants to provide reliable safe havens when the next storm hits. And in 2014, California announced $26.5 million in grants for microgrid projects that use renewable energy.
TRUE GRID
The electrical grid itself has evolved little since its inception, yet what has changed dramatically is the way people use it. Electricity has come to affect every aspect of modern life, and society today is more reliant on it than ever. That makes its absence all the more painful. The White House estimated in 2013 that grid outages cost the U.S. economy $18 billion to $33 billion a year. While it’s hard to put a precise value on reliability, whatever the number is, it’s big. A blackout can do more than just delay a nighttime football game; it can ruin a shift at a factory or put a patient on life support at risk.
That’s why a more nuanced approach to upgrading the grid is needed. Investments in reliability, although they should take advantage of the political will generated by recent natural disasters, must also focus on the day-to-day benefits of a modernized grid. And policies on changing electricity demand must focus not just on the benefits of energy efficiency but also on the subtler impact of peak demand. Likewise, incentives for installing solar panels should be accompanied by standards—and fees, if necessary—aimed at ensuring that each new system does not save its owner money at the expense of the neighborhood.
Continued improvements by utilities and policymakers could prove transformational. California’s governor recently proposed mandating that 50 percent of the state’s electricity come from renewables by 2030, a goal almost certainly unachievable without first modernizing the grid. The modern grid operator will have foresight about supply and demand throughout the grid. The result will be improved reliability, increased efficiency, and the seamless integration of renewable power—not to mention more stable prices and lower emissions. The technology to modernize the grid exists, but historical inertia is limiting its adoption.
Electricity now faces a modern-day War of Currents. This time, however, the battle is not between one technology and another. Rather, it is pitting the electrical architecture that was designed 100 years ago against a vision of a redesigned and modernized grid. It is a vision that promises to at last make the U.S. grid, in a word, current.