SCE’s Jim Kelly on the Challenges of Offering a ‘Smarter Grid’ in Place of Today’s ‘Dumber’ Grid

Jim Kelly, senior vice president of Regulatory and Environmental Policy for Southern California Edison, delivered the following keynote address to the May 2011 CalTech/MIT Enterprise Forum. In the keynote, Kelly details the fun and frustration of building out a smart electricity grid for California electricity consumers. In the end, Kelly concludes that public policy is the major impediment to a successful build-out of a smarter grid system—not the widgets and infrastructure investments and innovations that the lay public might suspect.

Jim Kelly (SCE): There are a couple of problems with the notion of a smart grid. The first is the presumption, when we say smart grid, that we must have a dumb grid now. Far from it. The current electric grid is one of the most complex integrated machines on Earth. It is way smart, but it can be way smarter. That’s why I like the title “A Smarter Grid.” The second thing is that when we talk about a smart grid, it tends to be one of those Lewis Carroll things. If you ever read Alice in Wonderland, one of the characters says, “that’s what I like about words, they can mean anything I want them to.”

When you ask, what is a smart grid...the basic answer you hear from a lot of policy makers is, “I don’t really know what a smart grid is, but I want one. And I want mine to be smarter than yours.” That’s human nature.

We hear a lot of responses from vendors about smart grid. “Well, a smart grid means my product.” A smart grid means smart meters, which are kind of a misnomer, right? A smart meter is really the fact that we’re giving every customer a small computer and communication device on the wall of their home that, among its many other functions, measures electricity. We’re going to put smart meters in, and that’s the smart grid. Or smart grid means electric vehicles. Or smart grid means distributed generation. That’s a trendy one right now. Or smart grid means energy storage. Or smart grid means—I like this one better—embedded network sensing: everything talking to everything, everywhere, all the time, and new control systems to more efficiently monitor or manage the transmission system and, particularly, to integrate renewable energy.

One of the great challenges for California, because we’re on the leading edge (or lunatic fringe, depending on your viewpoint) with our commitment to 33 percent renewable energy, is that it will mean solar and wind, which are fundamentally stochastic resources that we have to somehow blend to meet the demand for power.
Remember, the physics of our business are fairly unique. Think about a business where we must simultaneously match supply and demand—24 hours a day, 365 days a year. That’s already pretty daunting, but you could say a lot of businesses do that. We have to simultaneously match supply and demand every minute of every day with a product that cannot be stored—not cost effectively, not at scale. You have batteries and so forth, but you don’t have cost effective energy storage.
That means that every time you flip your light switch on, I make it to order for you. Always served hot, right? I can’t inventory the product. Imagine that constant matching of supply and demand in real time over—in our case, a 50,000-square-mile grid serving 14.5 million people. And now you interject one-third fundamentally stochastic and very difficult to predict resources. And you will not be very tolerant of me if you flip the switch on your wall and my response is “at some point, the light will come on.” We take for granted that somehow, instantaneously, I make it for you. Not as easy, of course, as most people think.
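The balancing problem Kelly describes can be sketched numerically. In this toy model (every figure is illustrative, not SCE data), a stochastic renewable share is subtracted from hourly demand, and with no storage available, dispatchable generation must instantly cover whatever remains, every interval:

```python
import random

random.seed(42)  # reproducible toy data

HOURS = 24
# Hypothetical hourly demand profile in MW, with an evening peak.
demand = [10_000 + (2_000 if 17 <= h <= 21 else 0) for h in range(HOURS)]

# Stochastic wind and solar: the hour-to-hour fraction they supply is hard
# to predict (the percentages here are illustrative assumptions).
renewable = [d * random.uniform(0.10, 0.55) for d in demand]

# With no cost-effective storage, supply must equal demand in every single
# interval, so dispatchable plants absorb the entire remaining swing.
dispatch = [d - r for d, r in zip(demand, renewable)]

print(f"dispatchable range: {min(dispatch):,.0f} to {max(dispatch):,.0f} MW")
```

The point of the sketch is the spread of `dispatch`: the controllable fleet must ramp over thousands of megawatts simply because the renewable fraction wanders, even though demand itself barely moves.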

The smart grid is about taking this complex machine, with all these new variables and new public policy factors, and figuring out how to make it do them better. I like to say that smart grid really started about 20 years ago for most of us who have been in this space because we were working on the things that people now consider smart grid: highly automated digital relaying devices, embedded network sensing (a little crude back then, but we had it), and synchronous phasor measurement to measure the state of the entire grid at any given time—sophisticated engineering tools.

We did them on the blackboard first. Then we did them on the test bench, and they were cool. But we couldn’t give them to you. Why couldn’t we give them to you? What suddenly happened? Did somebody have a Eureka! moment and say, “I know! We need a smart grid!”
[Image: A smart meter in the home (Source: Digitpedia.com)]

No. The reason we couldn’t give a smart grid to you is because to do so requires a three-legged stool; we needed a convergence of three things. First, the development of sophisticated power electronics—really the province of electrical engineering guys. We had most of that. Second, the emergence of ubiquitous, high-speed, low-cost communication: I had to be able to get everything to talk to everything. At one time, the cost of doing that vastly overwhelmed the benefits. I looked at a project not long ago that required $20 million worth of equipment to put phasor measurement units out around the grid. Tremendously high potential. If they had had them in the Northeast, there would have been no blackout. That’s how good it is.

To put $20 million worth of these high-tech units out there that worked, I needed $300 million worth of fiber optic cable to connect them. Nonstarter. How am I going to connect them instead? Over the existing public networks, both wireless and wired; I needed this ubiquitous, high-bandwidth, low-cost communication. The third leg of the stool was that I needed the true emergence of low-cost, readily available, extremely high-power computing, because to make the grid smarter, you have to receive massive, massive amounts of data and be able to act on it rapidly.

Depending on what your field of science is, the timeframes that I’ll now discuss seem either incredibly short or maybe incredibly long if you’re a physicist. But typically, to do the stuff I have to do on the grid to prevent the grid from doing bad things (bad things usually manifest by your lights going out, right?), I’ve got to go through a protocol or a sequence. We’ve been simplifying this for about ten or 15 years.

Ideally, a smart grid that can integrate the renewable resources and do the distributed generation and the storage and allow customer choice and have vehicle-to-grid and allow you to react to pricing signals and pick your own carbon footprint and all those things you want to do, has to be able to do the following sequence. When something happens on the grid, good or bad, at a very large number of sensing locations (for Southern California Edison’s system, my dream would be to have 150 million embedded network sensors), they are being polled, or pushing to me actively, for most of them, 30 times per second. Do the math...very, very large numbers. Exponents, gigantic.
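The arithmetic Kelly invites is quick to do. The sensor count and polling rate come from the talk; the bytes-per-reading figure is an assumption added here for illustration:

```python
# Back-of-the-envelope estimate of the telemetry firehose.
sensors = 150_000_000        # embedded network sensors (Kelly's "dream" figure)
polls_per_second = 30        # each sensor reports roughly 30 times per second
bytes_per_reading = 16       # assumed payload: timestamp + sensor id + value

readings_per_second = sensors * polls_per_second          # 4.5 billion per second
bytes_per_second = readings_per_second * bytes_per_reading

print(f"{readings_per_second:,} readings per second")     # 4,500,000,000
print(f"{bytes_per_second / 1e9:.0f} GB/s of raw telemetry")  # 72 GB/s
```

Even with a tiny assumed payload, that is on the order of 4.5 billion readings and tens of gigabytes of raw data arriving every second.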

This data, and that’s all it is, is flowing in. I’ve got to be able to take that incoming data and do these steps. I have to identify anomalies. I have to analyze them. I have to isolate them. That’s a big deal, right? The Northeast blackout was a little tiny deal and it cascaded because each successive piece of the system did exactly what it was supposed to do. “Oops, I sense a fault, open the circuit.” And of course the bigger circuit upstream said, “Oh my god, I sense four faults, open that circuit.” Pretty soon, we had the entire Northeast blacked out. Can’t have that. I’ve got to identify, analyze, isolate, box off, remediate (because I want the grid to be self-healing, so re-route the power flows so that the minimum possible number of people are impacted), and then report to humans what I’ve done. That’s the last step, not the first.

The reason is, typically to avoid cascading outages and big problems on the grid I have to do those five steps in four cycles, one fifteenth of a second, over what could be 600 miles. An incredible amount of high-speed processing has to take place to turn vast volumes of data into actionable control information.

Now I can afford to do that, because computer-processing power is cheap, and I can put it everywhere, distributed and centralized all over and everywhere in between. I couldn’t do that a few years ago. The smart grid really is the convergence of those things—the power electronics, the high power, high speed computing, and the ubiquitous high-speed communication coming together to enable me to apply digital technologies across this vast physical grid. What is so hard about that? If you’re a technologist, you say, “Well, you know, I guess that’s pretty tough, but we kind of know how to do all that stuff.” What are the other impediments? Why can’t we do more of it?

The other impediments go to some of the interesting policy considerations that we face over time. We have a surprisingly balkanized U.S. electric grid. The easiest thing in the world is building a grid in a green space—to walk into a country that has nothing and build a grid, because you can make it just what you want. You can make it really cool for the money you invest.

In the United States, we’ve done an amazing job on the electric grid (make no mistake, when a group of historians got together not too long ago and looked at all of the achievements of modern man, they said that the single greatest achievement of the 20th century was widespread electrification, because without that: no computers, no refrigeration, no night shifts, no big industry, and none of the other things that have led to human progress).

But we did it in little, balkanized segments, where everyone got to design their own little grid, and then we connected them all. There’s virtually no uniformity across the grid. Hundreds and hundreds and hundreds of providers were allowed to set their own standards. Believe it or not, because you all think voltage is only what you get out of that plug in the wall, there are hundreds of different voltage levels used around the country. There are hundreds of different circuit designs—the distribution system design in Downtown Los Angeles is entirely different from the circuit design in Pasadena or Arcadia. They are fundamentally different, laid out in a whole different way by engineers who had a different concept. You have these hundreds of balkanized pieces that all have to somehow interconnect.

The fundamental premise in the utility space—why do you have, for example, regulated public utilities? Why don’t we just say, “Ok, competition, everybody who wants to can go out and provide electricity to whoever they want!” There are some of you who probably say, “Well yeah, we ought to do that!” But remember, the reason that electricity is considered a regulated monopoly function or a public good was because of the vast amount of infrastructure that had to be installed to enable it. It’s easy to come afterward and ride on that infrastructure, as we saw in the phone space. The first mover has to have some way, if you invest billions, hundreds of billions, and in the United States trillions of dollars, to recover that investment in getting the infrastructure in place.

The fundamental regulatory compact is that we say, utilities get to do this, and we pay them back over the life of the asset. It’s not a profit margin. They get paid back over the life of the asset as if they’d invested in a bank. That’s the fundamental premise of how utility ratemaking works.
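That compact can be illustrated with a toy revenue calculation: straight-line return of the capital over the asset’s life, plus a bank-like return on the not-yet-recovered balance. Every figure below is hypothetical, chosen only to show the shape of the mechanism:

```python
investment = 300_000_000   # hypothetical capital cost of a long-lived asset
life_years = 40            # cost recovery spread over the asset's life
allowed_return = 0.07      # hypothetical bank-like rate of return

depreciation = investment / life_years   # return OF capital each year
for year in (1, 2, 40):
    undepreciated = investment - depreciation * (year - 1)
    revenue = depreciation + undepreciated * allowed_return  # plus return ON it
    print(f"Year {year:2d}: ${revenue:,.0f}")
```

Notice that the payments stretch over four decades: the model only works if the asset actually stays in service that long, which is exactly the assumption digital equipment breaks.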

So utilities put in very long-lived, very durable, principally electromechanical, old school stuff that would run forever. Kind of sounds like the old phone business with copper wire, right? That stuff a) has to be paid for and b) is incredibly expensive and messy to replace once it’s been installed.

We’ve got to take these two paradigms that are crashing head-on. I like to say that this is when Ohm’s law meets Moore’s law...you have billions and billions of dollars of embedded network. To replace it, am I going to tear all of your streets up and let you be without power for days, weeks, months, while I change things out? Or, am I going to do it incrementally?

When I do it incrementally, am I going to replace very reliable, long-lived electro-mechanical machines with very short-lived but very cool digital machines? In my business, it is not uncommon to run a piece of machinery, with very little exception, for 75 years. I have a transformer at Bailey substation right now that has been running continuously for 77 years. I’ve had a spare for when it fails sitting there for 30 years. 30 years! The spare is old! That’s the kind of equipment we put in and the kind of reliability we sought. It wasn’t super efficient, and it wasn’t real smart, although we’ve retrofitted it with some goodies. But it lasted a really long time.

If you look at that kind of a paradigm that says to make those investments and recover your costs over the life of the asset, think how interesting it is for U.S. customers to suddenly think of a world where the electric system is driven principally by computers. Let me ask you this question: How soon after you bought it was your computer obsolete? Two seconds, a year, somewhere in there, right? A pretty narrow band. If I said to you, “In the interests of keeping with the deal we have and conserving costs and being prudent and good stewards, you’re going to have to use your current computer for probably about 20 years, okay?” Why would you laugh, though? I agree with you, it’s laughable, but why? Why is it funny? Because it’s not cool. But more than that, because you know that in about four or five years, nothing will work on it anymore, right? Think about that notion in a world of things that last 50, 75, or in the case of my hydroelectric system, over 100 years. Contrast that with the world of incredibly high performance and obsolescence. How am I going to mesh those?

One of the big takeaways I wanted to give you is if you think about that sort of jarring juxtaposition between the old and the new, it tells us that the real impediments to a smarter grid are not going to be whether or not the widget you’re working on works. You’ll make it work or somebody else will. It’s going to be these policy-driven questions, and questions that are driven by the dynamics of developing digital technologies. How do we enable a grid to get smarter when its fundamental cost-recovery concept is based on very large, very long-lived capital investment? Would you pay, given what you know about the grid wherever you live, if you said you can have a way smarter grid, but it will cost you twice as much on your monthly bill? How smart does it need to be? Your lights just generally come on when you want them to come on, right? Let’s face it, reliability is very high, outages are very infrequent. They are usually caused by external events: car ran into pole or fire burnt down stuff. Most of the time, you don’t think about it; it just works.

If I said that I can give you way smarter, but you have to pay twice as much, most of you would say, “Well, I don’t need that.” See the problem?

The second thing is that in this balkanized grid across the United States, where there were no consistent requirements, whatever you make, your latest widget, you’re going to have to worry about backward compatibility. It needs to plug and play with stuff that’s years old...What are you going to do? Because I can’t replace it all. That’s a 20- or 30-year venture. You need to be backward compatible with everything, forward compatible with everything to come, last a really long time, and if you want to really change the smart grid, get involved with the bodies that are setting standards for protocols and communication between these devices...If we don’t get the standards right, we’ll never make the smart grid work. Certainly not cost effectively.

I’ve been told, “Well, that’ll never come to pass. Too many companies, too many ideas”...Standardization, backward compatibility, and plug and play—if we don’t get that, we can’t apply all the cool stuff that all of you are working on to the grid in a cost-effective way. •••