
WHAT CAN BE DONE TO REDUCE OUR CARBON-INTENSIVE ENERGY ECONOMY?

Can anything be done to avoid such a drastic increase in CO2? The World Energy Outlook 2009 report presents an alternative scenario to the reference scenario described above; this alternative is intended to reduce CO2 emissions to 26.4 Gt, which would give an atmospheric concentration of 450 ppm by 2030. This ambitious scenario involves reducing coal usage below 2008 levels, a slight rise in oil, and a substantial increase in natural gas. The difference is made up by large increases in biomass, nuclear, and other renewable energy sources (Figure 2.5). The largest reduction in CO2, though, comes from increased efficiency in how energy is produced and used (17). This scenario is expected to cost many trillions of dollars, and keep in mind that 450 ppm CO2 may well be too high a target. As described in Chapter 1, Jim Hansen and others are convinced that we should be aiming for 350 ppm CO2.

In March 2008 the US National Academy of Sciences held a summit on America’s energy future that involved wide-ranging discussions on the problem of global warming and what can be done to reduce our emissions of CO2 (4). One approach to reduce CO2 emissions to pre-1990 levels by the year 2030 is based on an analysis done by the Electric Power Research Institute (EPRI), an independent, nonprofit organization that conducts research on the generation, delivery, and use of electricity. Greater efficiency would limit the increase in electricity consumption to 0.75% per year instead of the 1% increase projected by the EIA 2008 reference scenario (Figure 2.6). Renewable energy would increase to 100 GWe, while nuclear power would increase by an additional 64 GWe from its current 100 GWe. Coal continues to be a dominant source, but efficiency improvements of coal power plants, both existing and new, reduce demand for coal, and carbon capture and storage technology would be widely deployed. There would be a shift to plug-in hybrid electric vehicles (one-third of new cars by 2030), and distributed energy resources, such as solar panels on houses, would contribute 5% of the base electricity load (23).

An alternative analysis of abatement of greenhouse gas (GHG) emissions worldwide was done by McKinsey & Company, a business consultancy firm that provides critical analyses of a variety of issues. Their 2009 report highlights a large number of steps that can be taken to reduce GHGs and estimates the cost of providing each part of that reduction (24). An updated report to account for the reduction in energy demand due to the global recession was published in 2010 (25). The authors examined more than 200 different options in 10 sectors and 21 world regions for reducing GHGs and then calculated the potential GHG abatement and the cost for each category. The result is a widely reproduced chart that illustrates the different options and their cost-effectiveness per ton of GHG abated (Figure 2.7). This analysis shows the potential to reduce GHGs by 35% below 1990 levels by 2030, or 70% below the levels expected in 2030 under business as usual. Capturing the full abatement potential should hold the expected global temperature increase to 2°C, which may be a critical threshold temperature (see Chapter 1).

What are the categories that provide the biggest bang for the buck? The total world annual GHG emissions are projected to be 66 Gt CO2e (carbon dioxide equivalents) by 2030 with business as usual. This is larger than the numbers given above because it includes other greenhouse gases, such as methane and nitrous oxide, and it also includes emissions from agriculture and loss of forests and grasslands. Efficiency steps are relatively cheap but provide relatively small reductions individually. In aggregate they provide a 14 Gt reduction in GHG emissions. Energy production accounts for about one-quarter of the world total of GHGs, almost entirely as CO2. Changing energy production from carbon-intensive sources to renewable and nuclear provides a reduction of 12 Gt CO2e. Nuclear power is more cost-effective in reducing CO2 than either wind or solar power, though both play an important role. The report assumes that carbon capture and storage technology will exist and will make a big impact in reducing CO2 from coal. Other reductions in the report are associated with agricultural practices and deforestation and will not be discussed further in this book.


Global GHG abatement cost curve beyond business-as-usual (v2.1) — 2030

Note: The curve presents an estimate of the maximum potential of all technical GHG abatement measures below €80 per tCO2e if each lever was pursued aggressively. It is not a forecast of what role different abatement measures and technologies will play.

Source: McKinsey & Company: The impact of the financial crisis on carbon economics — Version 2.1 of the Global Greenhouse Gas Abatement Cost Curve

Figure 2.7 Strategies to reduce greenhouse gases and the relative cost for each strategy. The width of each bar represents the amount of GHG abatement, and the vertical height is the cost per ton of CO2 equivalent.

source: Reproduced by permission from McKinsey & Company 2010 (Impact of the Financial Crisis on Carbon Economics: Version 2.1 of the Global Greenhouse Gas Abatement Cost Curve).


In my view, the biggest problem with the scenario for electric power generation is with the carbon capture and storage technology, which will be discussed in the next chapter. This is the largest single factor in reducing CO2 in the EPRI analysis. If that does not occur, there would have to be very large reductions in demand for electricity with essentially no increase for 40 years, and natural gas would have to be used to replace the coal that would be phased out. An alternative would be to increase the use of nuclear power to an even greater degree so that coal can be phased out. Is this feasible for an industrialized country? France gets 75% of its electricity from nuclear power and has the third lowest per capita CO2 emissions of any Western European country (after Sweden and Switzerland) (16). Both Sweden and Switzerland are slightly lower because they get 40% of their electricity from nuclear power and nearly all the rest from hydropower (26, 27). All three countries emit about the same amount of CO2 per capita as China and about one-third as much as the United States.

Our discussion thus far has provided some background: where our energy comes from, what it is used for, future projections if we don’t make major changes in energy production and usage, and the major possibilities for reducing the production of CO2 to minimize global climate change. In the following chapters we will explore the various issues associated with each major source of energy, especially coal, natural gas, wind, solar, and nuclear.

NOTES

1. Remarkably, recent events seem to follow history. On April 20, 2010, a mile-deep oil well in the Gulf of Mexico suffered a gas explosion and fire that killed eleven men and led to an environmental disaster.

2. Power is given in units of watts, kilowatts (kW), megawatts (MW), and gigawatts (GW). Energy is the amount of power produced multiplied by the time over which it is produced. For example, power is in kW and energy is in kilowatt hours (kWh). A given amount of coal or other energy source has a given amount of energy that can be converted into power. Only about one-third of the energy in coal can be converted into electrical power, with the rest going into heat. Power plants are usually rated by the amount of electric power they produce, given in MWe or GWe (for electric). See Appendix B for more details.

3. Nuclear fusion has the potential of even greater energy density, but is not available and is unlikely to be a source of usable energy for the next 50-100 years, if ever (11).

4. BTU (British thermal units) is a unit of energy. 1 quad (quadrillion BTU) equals 10^15 BTU, which equals 2.93 x 10^8 (293 million) MWh. See Appendix B for more information about energy units.
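The conversion is easy to check; the only outside number in this sketch is the standard factor of about 2.93 x 10^-4 kWh per BTU:

```python
# Check the quad-to-MWh conversion quoted in this note.
BTU_PER_QUAD = 1e15
KWH_PER_BTU = 2.93e-4          # standard conversion factor, kWh per BTU

mwh_per_quad = BTU_PER_QUAD * KWH_PER_BTU / 1000   # kWh -> MWh
print(f"1 quad = {mwh_per_quad:.3g} MWh")          # about 2.93e8 MWh
```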

5. The amount of CO2 produced is 3.7 times as much as the carbon burned because of the addition of two oxygen atoms for each carbon atom. Since the coal used in power plants (bituminous and sub-bituminous) is only 35-85% carbon, the amount of CO2 produced by burning a ton of coal ranges from 1.3 to 3.1 tons.
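The arithmetic behind this note is just the 44-to-12 mass ratio of CO2 to carbon; a minimal sketch:

```python
# Tons of CO2 released per ton of coal, given the coal's carbon fraction.
CO2_PER_TON_CARBON = 44.0 / 12.0      # molar mass of CO2 / molar mass of C, about 3.7

for carbon_fraction in (0.35, 0.85):  # the range quoted for bituminous/sub-bituminous coal
    print(f"{carbon_fraction:.0%} carbon: "
          f"{carbon_fraction * CO2_PER_TON_CARBON:.1f} tons of CO2 per ton of coal")
```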

6. Energy intensity is the primary energy use per dollar of GDP (gross domestic product). It is essentially a measure of energy efficiency. As energy is used more efficiently, the energy intensity becomes smaller.

7. The Organisation for Economic Co-operation and Development consists of 34 countries devoted to democracy and the free market and includes mostly western European countries but also the United States, Canada, Australia, New Zealand, Japan, South Korea, Chile, and Mexico.

BLACK BODY RADIATION: THE QUANTUM

Max Planck began the revolution, though he never really believed in the consequences of his discovery and, in fact, he was not directly studying atoms. He was instead interested in a problem known as the black body radiation problem, one of the great unsolved problems of physics at the end of the nineteenth century. The problem concerns the spectrum of light emitted from a heated object. We are all familiar with the idea that heated objects give off a color that varies with the temperature. If you have ever stared mesmerized at a campfire as the coals burn down, you are aware of the glowing embers that vary from yellow to red to blue. A black wood stove with coals of a fire burning in it is a crude example of a black body. As described in the theory of global warming in Chapter 1, the earth is also a black body (roughly speaking). The questions for physicists were: What is the spectrum of light emitted from such a black body held at a particular temperature? How much light is emitted at a particular frequency or color?

According to the classical equations developed in the 1860s by James Clerk Maxwell, a Scottish physicist, light is an electromagnetic wave with a wavelength (distance between peaks), frequency (oscillations per second), and velocity. The color of light depends on its frequency (f) and its wavelength (λ), but the two are connected through a constant, namely the speed of light c. The product of the frequency and the wavelength is always equal to the speed of light, which is 3.0 x 10^8 (300,000,000) meters per second, or 186,000 miles per second. In equation form,

f x λ = fλ = c

so there is an inverse relationship between frequency and wavelength. Blue light has a higher frequency and shorter wavelength than red light, for example. A spectrum is the amount of light emitted at each frequency or wavelength. Physicists had already measured the light spectrum and knew that it was independent of the size or shape of the black body. It depended only on the temperature of the black body.
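A quick numerical illustration of the inverse relationship; the wavelengths below are just representative values for red and blue light, not numbers from the text:

```python
# Frequency from wavelength via f = c / wavelength.
C = 3.0e8    # speed of light, meters per second

for color, wavelength_nm in (("red", 700), ("blue", 450)):   # representative wavelengths
    frequency = C / (wavelength_nm * 1e-9)
    print(f"{color}: {wavelength_nm} nm -> {frequency:.2e} Hz")
# The shorter (blue) wavelength comes out at the higher frequency.
```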

According to classical physics, the amount of light (the intensity) emitted at a particular frequency from a black body gets smaller and smaller at low frequencies but should approach infinity at high frequencies, leading to what was known as the “ultraviolet catastrophe” (2). This would imply that a black body should emit an infinite amount of energy, and that was clearly absurd. If it were true, you would be fried if you stood in front of your black body wood stove. At the rather old age—for a theoretical physicist—of 42, Max Planck delivered a bombshell in a lecture at the German Physical Society in Berlin on December 14, 1900 (9). He developed an equation that completely described the spectrum of radiation being emitted from a black body based on the statistical thermodynamic distribution of energy in oscillators (what we now call atoms) in the black body. But he made a huge conceptual jump. To avoid the problem of the ultraviolet catastrophe, he postulated that the oscillators could not have all of the infinite possible values of energy allowed by classical physics, but instead the energy had to be a multiple of a new constant he defined as h, which is now known as Planck’s constant. Specifically, the energy of the oscillators could only be multiples of h times f, or hf, where h is Planck’s constant and f is the frequency. This amount of energy is a quantum of energy.

Perhaps this does not seem revolutionary, but it goes against everything that most people think they know about the world. If you made a pendulum by suspending a ball on the end of a string and moved it to one side and then released it, you would expect that it would swing back and forth with a particular frequency (known as resonance), and if you moved it farther to one side, it would swing farther, though still with the same frequency. By moving it aside and up, you are giving it gravitational energy, which is converted to kinetic energy as it swings. But you assume that you can move it anywhere you want before you release it. According to Planck’s theory, however, that is not strictly true. You can only give it energy that is a multiple of hf. In the macroscopic world we live in, you don’t notice the quantum effect because it is extremely small. The value of h is 6.626 x 10^-34 joule-seconds, where a joule is a unit of energy.3 But in the atomic world, Planck’s constant rules what can happen.
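To see just how small the graininess is at human scales, compare the energy quantum hf of a slow pendulum with that of a visible-light photon; the pendulum frequency and photon frequency below are illustrative values I have chosen, not figures from the text:

```python
# Size of one energy quantum, E = h * f, at macroscopic and atomic frequencies.
H = 6.626e-34            # Planck's constant, joule-seconds

pendulum_f = 1.0         # assume a pendulum swinging about once per second
photon_f = 5.0e14        # roughly the frequency of visible light, Hz

print(f"pendulum quantum: {H * pendulum_f:.1e} J")   # ~7e-34 J, far too small to notice
print(f"photon energy:    {H * photon_f:.1e} J")     # ~3e-19 J, decisive on the atomic scale
```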

A much younger fellow German scientist, Albert Einstein, made the critical connection that proved the reality of quantum effects in the atomic world. In his miracle year of 1905, when he was 26 years old, Einstein published five papers that changed the world of physics. One of the papers proved the existence of atoms and molecules based on the Brownian motion of very small particles in a liquid; one gave a calculation of the size of a molecule; one set out the theory of special relativity concerning space and time; a follow-up paper on the theory of relativity set out his most famous equation E = mc^2, postulating that mass is just another form of energy; and one proposed the quantum nature of light, for which he received the Nobel Prize (3). That is the one he considered the most revolutionary, and it is directly related to the work of Planck.

Einstein posed the question of why there was an apparent difference in the nature of the material world and light. The natural world of matter was considered to be made up of discrete atoms and molecules, while light was considered to be a continuous wave with a particular frequency and wavelength that are infinitely divisible. Einstein strongly believed in a fundamental beauty of the natural world and thought there should not be these differences between discrete atoms and continuous waves of light. He was aware of Planck’s work on black body radiation, and he derived similar equations for the energy spectrum of black body radiation. But he went further conceptually by considering the light in a black body to be similar to a gas of particles, showing that mathematically they follow the same thermodynamic rules. He concluded that light can be considered as a collection of quantum particles each with energy of hf (9).

Einstein went even further by testing his conclusions against experimental results. A Hungarian physicist, Philipp Lenard, had discovered an utterly baffling phenomenon known as the photoelectric effect. He found that if he shone ultraviolet light on a metal, it would emit electrons. This could actually be explained by the wave nature of light. What was especially baffling, though, is that if he increased the intensity of light, it did not increase the energy of the emitted electrons, as it should if light were acting as waves. But if he increased the frequency of the light, the emitted electrons had more energy. Einstein explained the results by postulating that the light consisted not of waves but of quantum particles (later called photons) that had energy hf. The energy of a photon could be given up in a collision with an electron in the atoms of the metal. Increasing the intensity of the light would only produce more electrons of the same energy, but a higher frequency photon would knock out electrons with a higher energy, in agreement with Lenard’s experiments. Thus, Einstein showed that light had properties of a particle, in addition to its well-known wave properties. In fact, light sometimes acts as a particle and sometimes as a continuous wave, but never both at the same time. And even more mysteriously, what we think of as purely matter, such as electrons, sometimes acts as waves. This has come to be known as the wave-particle duality. Welcome to the quantum world, an Alice in Wonderland world full of surprises! We will come back to the photoelectric effect when we consider how radiation causes damage to cells.
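Einstein’s relation for the effect is simple: the electron’s energy is hf minus the energy W needed to pull it out of the metal (the work function). The value of W below is an assumed, typical one, chosen only to illustrate the threshold behavior:

```python
# Photoelectric effect: electron energy = h*f - W (zero if h*f is below W).
H_EV = 4.136e-15       # Planck's constant in electron-volt seconds
W = 4.3                # assumed work function of the metal, eV (typical order of magnitude)

for f in (8.0e14, 1.2e15, 1.6e15):           # increasing light frequency, Hz
    energy = max(0.0, H_EV * f - W)
    print(f"f = {f:.1e} Hz -> electron energy {energy:.2f} eV")
# Brighter light of the same frequency just ejects more electrons at the same energy;
# only a higher frequency gives each electron more energy, as Lenard observed.
```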

The insights of Planck and Einstein led to the development of a branch of physics called quantum mechanics, which was unlike anything physicists had contemplated before. Planck discovered the concept of a quantum, but he did not accept that it had a physical reality. It was more of a mathematical convenience to solve a problem. Einstein took this concept and showed that it had a physical reality, namely, that light itself was a quantum particle. But neither Planck nor Einstein could ever fully come to terms with the revolution in physics that they started (3). When quantum mechanics was developed, it described a physical world that Einstein could never believe in. So it was left to others to fully develop the concepts of a quantum atom.

YUCCA MOUNTAIN

To understand the politics, you have to look at the history of the nuclear industry. In the early years of the nuclear industry, it was thought that spent nuclear fuel would be reprocessed to remove the plutonium and uranium. But President Gerald Ford halted the reprocessing of commercial reactor fuel in 1976, and President Jimmy Carter shut down the construction of the reprocessing plant in Barnwell, South Carolina, in 1977 because of fears that reprocessed plutonium would be used in nuclear weapons (8, 12). The Nuclear Waste Policy Act of 1982 (13) specified that the federal government was responsible for finding a suitable site for disposal of high level waste from spent nuclear fuel and authorized the US Department of Energy (DOE) to evaluate sites and recommend three. It gave the Nuclear Regulatory Commission the authority to set regulations on the construction and operation of the facility and gave the EPA the authority to set standards for radiation exposure to the public. Thus, three different government bureaucracies would be responsible for nuclear waste storage. Furthermore, the act required that nuclear power utilities pay into a Nuclear Waste Fund (NWF) at the rate of one-tenth of a cent per kWh of electrical energy produced to pay for the evaluation and development of a waste disposal site, which was to be opened by 1998. According to an official audit of the NWF, as of September 30, 2012, the total nuclear utility payments plus interest totaled $43.4 billion with expenditures of $11.4 billion. The value of Treasury securities held by the NWF totaled $38.7 billion (14). Utilities are suing the DOE, arguing that because it has not provided a facility to store nuclear waste as required by the Nuclear Waste Policy Act, it should not continue to collect the fee (15).
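To get a sense of the scale of that one-tenth-of-a-cent fee, here is a rough calculation; the reactor size and capacity factor are typical values I have assumed, not figures from the text:

```python
# Annual Nuclear Waste Fund fee for one large reactor at 0.1 cent per kWh.
FEE_DOLLARS_PER_KWH = 0.001

plant_mwe = 1000            # assume a 1,000 MWe reactor
capacity_factor = 0.90      # assume it generates at 90% of capacity over the year
kwh_per_year = plant_mwe * 1000 * capacity_factor * 8760   # kW * hours in a year

print(f"annual generation: {kwh_per_year:.2e} kWh")
print(f"annual NWF fee:    ${kwh_per_year * FEE_DOLLARS_PER_KWH / 1e6:.1f} million")
# roughly $8 million per year for a single large reactor
```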

The DOE began exploring various geological sites for long-term storage of spent nuclear fuel, including sites in Texas, Washington, and Nevada. The decision for the best site was not made by scientists, however, but by “the best geologists in the U.S. Senate” (16), who chose Yucca Mountain in southern Nevada. It happened that the Speaker of the House was Jim Wright from Texas, and the vice president of the United States, George H. W. Bush, was also from Texas, so Texas was not chosen. Tom Foley, from the state of Washington, was the House Majority Leader, so Washington was not chosen. It seemed that nobody wanted a nuclear waste repository in his state. But this left Nevada, and Harry Reid—the first-term senator from Nevada—did not have the political power to oppose it. So Yucca Mountain was chosen by Congress in 1987 as the sole long-term site for commercial spent nuclear fuel disposal based on political considerations, not on what was the best geological site. In effect, the decision was rammed down the throat of Nevada, and Nevadans did not exactly take it lying down.

Harry Reid subsequently became the Majority Leader of the Senate, which put him in a position to block the site, and he has vigorously worked to do so (16). He has, in fact, accomplished that, at least temporarily. Since President Obama needed Senator Reid’s support to accomplish other objectives, he could not support Yucca Mountain while Reid was so opposed to it. In June 2008 the DOE formally submitted a license application for Yucca Mountain to the NRC, which it subsequently withdrew because President Obama provided no funding. Instead, in 2010 President Obama directed the Secretary of Energy, Steven Chu, to establish a Blue Ribbon Commission on America’s Nuclear Future to study the issues of spent nuclear fuel disposal (17). Ultimately, the courts will decide whether work on Yucca Mountain should go forward.

So much for the politics. What about the scientific and engineering considerations of Yucca Mountain or other long-term waste disposal sites? The major consideration is for a site to have a stable geology in a very dry region. As humans, most of us have little sense of the time involved in geological processes. Agriculture dawned only about 10,000 years ago, so nearly the entire lifetime of human societies is encompassed in the time frame for the decay of spent nuclear fuel to the level of the uranium ore it originally came from. It is natural to think that we cannot possibly predict what will happen in the next ten or hundred thousand years in human society. A million years is but a blink of an eye in geological processes, however, so it is not so difficult to imagine that very stable geological formations can be found that are adequate to store nuclear waste. After all, the uranium that is mined to make nuclear fuel has been around since the earth formed 4.5 billion years ago.

Nature has already provided a clear demonstration that nuclear wastes can be contained for millions of years in geological formations. A natural deposit of uranium (the Oklo deposit) with about 3% 235U formed in Gabon, Africa, about 2 billion years ago. This concentration of 235U is similar to the concentration in nuclear power reactors and could undergo sustained fission under the right conditions, which existed in the uranium deposit. More than a dozen sites existed in the uranium deposits where controlled fission reactions occurred for hundreds of thousands of years, producing about 15 gigawatt-years of energy (18). But how could we possibly know that nuclear fission occurred 2 billion years ago? As discussed earlier, when 235U undergoes fission, it produces fission products and transuranics, new elements that were not there previously. Also, the 235U gets used up when it fissions, so the percentage of 235U in the uranium ore will be lower than it should be. Both of these were discovered in the uranium ore from Oklo. The natural reactor existed so long ago that all of the plutonium has long since decayed away, as well as the short-lived fission products such as 137Cs and 90Sr, but other reactor-specific, very long-lived isotopes still exist. The long-term waste from the reactor was in the same geological formation as the uranium, and this formation was sufficiently stable to contain the fission products for 2 billion years.

Yucca Mountain is a 6-mile-long ridge located in the Nevada Test Site where nuclear weapons were tested during the Cold War, approximately 100 miles northwest of Las Vegas and about 30 miles northeast of Death Valley. Eruptions of a caldera volcano millions of years ago produced ash and rock, which melted and fused to become layers of volcanic tuff. Subsequent tilting along fracture lines formed the ridge that is now called Yucca Mountain (19). The site is in an unpopulated arid desert of rabbitbrush, cacti, grasses, and a few yuccas. About $9 billion has been spent on research and development of the site, making it the most intensively studied geology on earth. The mountain is made up of alternating layers of hard, fractured tuff and soft, porous tuff with few fractures, giving it a complex geology.

There are three main reasons that Yucca Mountain would be a good burial site. One is that the region is very dry, with only about 6 inches of rainfall a year, which mostly evaporates or is taken up by plants. A second reason is that the water table is very low, so the repository would be about 1,000 feet below the mountain ridge yet still about 1,000 feet above the water table. The third reason is that the layers of tuff contain minerals called zeolites and clay that serve to trap radioisotopes that might eventually get dissolved in water and migrate through the mountain (20). Even if radioisotopes could eventually get into the water table, Yucca Mountain is in a hydrologic basin that drains into Death Valley. On the way, the water would flow under the Amargosa Valley, the desert valley about 15 miles away from Yucca Mountain that has a population of about 1,500 people.

What do we have to show for the $9 billion? The DOE excavated a 25-foot-diameter borehole that slopes down into the mountain for about a mile, then turns and, after about 3 miles, reemerges from the ridge. Rooms and side tunnels have been created to do research on the geology and water infiltration, and sophisticated computer models have been created to model radionuclide movement over time. A fully developed site would have about 40 miles of tunnels to store casks containing the spent nuclear fuel from reactors and other high level waste. Waste would be stored in double-walled, corrosion-resistant cylinders 2 meters (about 6.6 feet) in diameter and 6 meters long. The cylinders would be covered with a ceramic coating and a drip shield to further protect against water and then backfilled with a clay soil that would absorb radioisotopes in the spent nuclear fuel (20). Yucca Mountain is designed for retrievable storage of nuclear waste according to the governing law, although it would eventually be permanently sealed.

There are several plutonium isotopes and other transuranics that are produced by neutron capture during burn-up of nuclear fuel. Anti-nuclear activists like Helen Caldicott cite the long lifetimes of 239Pu and other transuranics to fuel fears about spent nuclear fuel. Since the half-life of 239Pu is 24,100 years, surely it is going to be a big problem, right? Actually 239Pu is not the real problem because it will be adsorbed by the clay and zeolite in the rock and also is not readily soluble in water. After all, the half-life of 235U is 700 million years and the half-life of 238U is 4.5 billion years, the age of the earth, and they are in geologically stable formations! To see what the real problem is, we have to dig a little deeper into the nuclear transformations in the waste. I mentioned earlier that 241Pu was the most serious problem, but why is that? After all, the half-life of 241Pu is only 14.7 years, so there will be almost none left in 150 years. When a radioisotope decays, something else is created. In the case of 241Pu, it β-decays to form americium (241Am) with a half-life of 432 years, which hangs around a lot longer. But 241Am α-decays into neptunium (237Np), which has a half-life of 2.1 million years. And that is the real problem. It turns out that neptunium is about 500 times more soluble in water than plutonium, even though neither one is absorbed very well by the human digestive system. So the real radiation concern at Yucca Mountain is not plutonium but neptunium. That is why the main study of radiation from Yucca Mountain is concerned with modeling the transport of neptunium, not plutonium. Scientists from Los Alamos National Laboratory have concluded that the various levels of containment at Yucca Mountain will contain the neptunium for more than 10,000 years. In fact, they concluded that it would take at least 100,000 years for the radiation level to reach 20 mrem/yr (0.2 mSv/yr) (20).
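The buildup of neptunium can be followed with the standard textbook solution for a decay chain (the Bateman equations). The sketch below uses the half-lives quoted above and an arbitrary starting amount of 241Pu, and it ignores every other isotope, so it is only an illustration of the timing, not a model of the actual waste:

```python
import math

def bateman_last_member(n0, decay_constants, t):
    """Atoms of the last member of a decay chain at time t, starting from
    n0 atoms of the first member (standard Bateman solution, distinct constants)."""
    total = 0.0
    for i, lam_i in enumerate(decay_constants):
        denom = 1.0
        for j, lam_j in enumerate(decay_constants):
            if j != i:
                denom *= (lam_j - lam_i)
        total += math.exp(-lam_i * t) / denom
    prefactor = n0
    for lam in decay_constants[:-1]:
        prefactor *= lam
    return prefactor * total

# Half-lives in years, as quoted in the text: Pu-241 -> Am-241 -> Np-237
half_lives = [14.7, 432.0, 2.1e6]
lams = [math.log(2) / h for h in half_lives]

n0 = 1.0e6   # arbitrary starting number of Pu-241 atoms
for years in (150, 5_000, 100_000):
    np237 = bateman_last_member(n0, lams, years)
    print(f"after {years:>7,} years: {np237:,.0f} Np-237 atoms per {n0:,.0f} Pu-241")
# The Pu-241 and Am-241 are gone within a few thousand years, but nearly all of the
# original atoms end up as long-lived Np-237, which is why neptunium transport matters.
```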

The US Environmental Protection Agency (EPA) was given responsibility for setting the radiation standards to be used for Yucca Mountain, and it issued standards in June 2001. However, the EPA was sued and the US District Court of Appeals ruled that the EPA’s standards did not adequately take into account a 1995 National Academy of Sciences report (21), so EPA revised its radiation standards. The current EPA rules require that groundwater near Yucca Mountain can have no more radiation than is allowed by current groundwater standards nationwide, which is a maximum dose of 4 mrem per year (0.04 mSv/yr) to an individual drinking the water. The external dose is limited to 15 mrem/yr (about the equivalent of a chest X-ray) for the next 10,000 years to an individual who might live in the area. Because of the court ruling, the EPA then required that the dose to an individual be no more than 100 mrem/yr (1 mSv/yr) from 10,000 to one million years (22).

The DOE believes that the multiple barriers it has designed for the containers, the geology, and the low water infiltration at the site will be able to meet these extremely stringent standards. But what is the worst that could happen? Recall that after about 15,000 years the toxicity of the spent nuclear fuel is reduced to that of the uranium ore from which it originally came. So, in effect, the waste storage site has become a radiation deposit not much different from a natural deposit of uranium ore. One of the concerns about Yucca Mountain is that we have no idea whether human society will exist in the area in 10,000 years, let alone one million years. So let’s suppose that society continues for the next 10,000 years. Because of global warming (if we haven’t solved the problem), it is likely that the area will be much drier than now, so there will probably be no agriculture and little chance that radiation would enter the groundwater. But what if it is actually a wetter climate? If the society living then is more advanced than we are now, they will be well aware of the effects of radiation and will be able to minimize any effects on humans in the area. If we have bombed ourselves back to the Stone Age, then the primitive people will not be able to build and operate wells that would get water from hundreds of feet below the valley, so they would not be exposed to groundwater in any case. So society would either be advanced enough to deal with a little extra radiation or too primitive to be exposed to it.

There are numerous sites in the United States where groundwater exceeds the EPA standards because of natural uranium and radium in the soil, so it would not be a catastrophe if radiation from Yucca Mountain actually got into groundwater and exceeded current EPA standards. What about the standard of not exceeding 1 mSv/yr for the next million years? The natural exposure to radiation in the sparsely populated Amargosa Valley is 1.30 mSv/yr, which is less than the US average (23). Recall that the background radiation from natural sources for US citizens is about 3.20 mSv/yr, but some of us are exposed to a lot more radiation than that. The average for Colorado, where I live, is about 4.5 mSv/yr because Colorado is at a high elevation, causing increased exposure to cosmic radiation, and there is a lot of uranium and radium in the granite of our mountains. There are communities at particularly high elevations in Colorado, such as Leadville, where the radiation level is much higher than the Colorado average (about 5.5 mSv/yr). Just from that fact alone, the concern about an additional 1 mSv to people living near Yucca Mountain in tens or hundreds of thousands of years becomes trivial. Their total dose would still be less than the average dose to other US citizens and about half the dose that millions of Coloradans get every year! And Colorado has the fourth lowest death rate in the United States from cancer (24).

But that is not the end of the story either. The average exposure of US citizens to radiation from medical procedures is an additional 3.0 mSv/yr, an amount that has increased five-fold over the last two decades. If the people in the Amargosa Valley in a few thousand years are in a primitive society, they will probably not be getting a lot of CT scans, so their radiation doses will be much lower than those of US citizens now. And finally, it is highly likely that research will continue in the prevention and treatment of cancer so that it will be a much more treatable disease.

So the enhanced radiation exposure from storing spent nuclear fuel in a stable geological site such as Yucca Mountain is trivial compared to the existing exposures of millions of people, and the enormous public concern is really just a tempest in a teapot. As I said earlier, the problem of long-term storage of nuclear waste is a political problem, not a scientific or engineering problem. We simply lack the political will to make intelligent decisions and instead get caught up in outlandish “what-ifs.” And we waste billions of dollars studying and litigating a problem to death instead of just taking care of it.

RADIATIVE FORCING

Radiative forcing is a measure of the influence that a particular factor has in altering the earth’s energy balance and is specified by energy rate (power) per area (W/m2). Some factors cause atmospheric warming (greenhouse gases) while some factors cause cooling (aerosols, ice cover, and clouds). The amount of forcing from the most important factors is shown in Figure A.2. Carbon dioxide is the largest positive radiative forcing agent, contributing 1.66 W/m2, while aerosols are the largest negative forcing agent, though they have a large uncertainty. Solar irradiance contributes only a small radiative forcing. Overall, the net radiative forcing is 1.6 (0.6 to 2.4) W/m2, which is why global warming is occurring. The large degree of uncertainty is caused by the uncertainty in the aerosol effects.
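To turn a forcing in W/m2 into an eventual temperature change, climate scientists multiply by a climate sensitivity parameter. The value used below (about 0.8 °C per W/m2) is an assumption I have added for illustration, roughly consistent with commonly cited sensitivities, and is not a number from this appendix:

```python
# Rough equilibrium warming estimate: delta_T = sensitivity * net_forcing.
SENSITIVITY = 0.8     # assumed climate sensitivity, degrees C per (W/m^2)

for forcing in (0.6, 1.6, 2.4):   # low, central, and high net forcing from the text, W/m^2
    print(f"net forcing {forcing:.1f} W/m^2 -> roughly {SENSITIVITY * forcing:.1f} C of eventual warming")
```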


Figure A.2 Radiative forcing (RF) in watts per square meter for various anthropogenic and natural climate factors between the years of 1750 and 2005. Factors that are positive contribute to warming the earth; factors that are negative cool the earth. The bars represent 90% confidence intervals. LOSU is a designation for the level of scientific understanding of a particular factor.

source: Reproduced by permission from Climate Change 2007: The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Figure TS.5. Cambridge: Cambridge University Press, 2007.


The Siren Song of Renewable Energy

Renewable energy from the sun—which includes solar, wind, and water energy—can meet all of our energy needs and will allow us to eliminate our dependence on fossil fuels for electricity production. At least, that is the “Siren song” that seduces many people. Amory Lovins, the head of the Rocky Mountain Institute, has been one of the strongest proponents of getting all of our energy from renewable sources (what he calls “soft energy paths”) (1) and one of the most vociferous opponents of nuclear power. A recent article in Scientific American proposes that the entire world’s needs for power can be supplied by wind, solar, and water (2). Is this truly the nirvana of unlimited and pollution-free energy? Can we have our cake and eat it, too? Let’s take a critical look at the issues surrounding solar and wind power.

SOLAR

Photovoltaic (PV) Solar Power

Let me be clear that I am a proponent of solar energy. I built a mountain cabin a few years ago that is entirely off the grid. All of the electricity comes from solar photovoltaic (PV) panels with battery storage. The 24 volt DC is converted to AC with an inverter and is fed into a conventional electrical panel. It provides enough energy to power the lights, run a 240 volt, three-quarter horsepower water pump 320 feet deep in the well, and electrical appliances such as a coffee pot, toaster, and vacuum cleaner. But I am not implying that all of my energy needs come from solar. The big energy hogs—kitchen range, hot water heater, and a stove in the bedroom—are all powered with propane. Solar is not adequate to power these appliances. In 2010 I also had a 2.5 kW solar PV system installed on my house that ties into the utility grid. When the sun is shining, I use the electricity from the solar panels, and if I use less than I generate, it goes out on the grid to other users. If it does not produce enough for my needs, then I buy electricity from the grid. When I am generating more than I use, the utility buys electricity from me. Because my house is entirely electrical, including heating, I generate only enough electricity to provide about 20% of my total electrical power needs. I am a proponent of solar and I live in an area where it makes sense to use solar power. But there is more to the story than meets the eye.

The most important thing about solar is that it is a diffuse, intermittent energy source that is highly variable both geographically and temporally. It is just common sense that there is more solar energy available in hot, sunny areas such as the southwestern United States and less in cloudy areas such as the northwestern and eastern United States. We also know that even in sunny areas the clouds sometimes block the sun, the intensity of the sun varies from dawn to dusk and, of course, it does not shine at night. Even in sunny areas, there are only about 9-10 hours of useful sunlight in the summer—depending on latitude—which does not cover the early morning and late afternoon and evening hours when people use the most electricity. And, of course, in the winter there are fewer hours of sunlight and the intensity of the sun is less during the daylight hours compared to the summer. Since people still want to use electricity when the sun is not shining, solar power cannot be relied upon as a primary source of energy unless there is adequate storage of energy—such as in batteries, as I do in my solar cabin—or it is backed up with the grid that gets a stable baseload source of power from a conventional power plant, as in a grid-tie system like the one I use for my house.

The National Renewable Energy Laboratory (NREL) in Golden, Colorado, produces a solar map that shows the solar availability for the United States (Figure 4.1). A glance at this map shows that the best solar resources by far are in the Southwest, especially states such as Arizona, Nevada, New Mexico, and southern California, and are good over much of the western United States. Solar energy is a less attractive option in the eastern half of the nation. Except for southern California, the best solar resources are in areas where there is a low population density.

The most common use of solar power is to produce electricity using photovoltaic (PV) cells. These work by the photoelectric effect,1 but in this case, the photons from the sun do not have enough energy to ionize atoms. PV (solar) cells are made of layers of silicon crystals with small amounts of other elements added, very similar to the material used in semiconductors for computer chips. Electrons in the silicon are normally bound to atoms in the material, but when they absorb a photon they get enough energy to jump into a conduction band in the silicon material, creating a current that is collected through wires. Thus, PV cells directly convert photons from sunlight into electricity, but the process is quite inefficient. Solar cells are individually small but can be put together into modular solar panels that are 1 or 2 meters on a side. Besides the silicon that is at the heart of solar cells, they are covered with an antireflection coating consisting of silver, titanium, palladium, and silicon layers. Most solar panels have an efficiency of only 15-16%, with the best in research labs at about 23% for single-crystal cells that are very expensive (3). Of course, this is an active area of research, and new materials and processes may increase the efficiency somewhat.


Figure 4.1 Solar resource map of the US and Germany.

source: Reproduced by permission from the National Renewable Energy Laboratory.


Let’s crunch some numbers to put things in perspective, starting with my grid-tie system (here comes the math!). I have 12 solar panels, each of which is rated at 210 watts and has a surface area of 1.61 square meters. In order to standardize things, let’s think about the area to produce 1 kW of power, which is 7.67 square meters or about 5 panels. A very important point about solar panels is the rated power output, which is the maximum power the panel is capable of producing. In reality, it will only produce this at the peak time of the day when there are no clouds. Suppose that my system could generate the full kW for every hour of the day. The amount of energy produced in kWh (kilowatt hours) would be 24 kWh, which is about what an average Coloradan uses each day. But, of course, it can’t actually do that, since it is cloudy sometimes during the day, the intensity of sun varies during the day, and it doesn’t shine at night. So how can you determine what to expect?
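The panel arithmetic in that paragraph looks like this; the numbers are the ones given above, and the 24 kWh figure is the idealized case of a full kilowatt around the clock:

```python
# Rated output of the grid-tie array and the idealized daily energy from 1 kW.
panels = 12
watts_per_panel = 210         # rated DC output per panel
area_per_panel = 1.61         # square meters per panel

total_rated_kw = panels * watts_per_panel / 1000                 # 2.52 kW for the whole array
area_per_rated_kw = area_per_panel / (watts_per_panel / 1000)    # ~7.67 m^2, about 5 panels

print(f"array rating: {total_rated_kw:.2f} kW on {panels * area_per_panel:.1f} m^2 of panels")
print(f"area per rated kW: {area_per_rated_kw:.2f} m^2")
print(f"1 kW running 24 hours would give {1 * 24} kWh per day")
```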

An excellent tool for analyzing solar power is a program developed by the NREL called PVWatts.2 This program gives the amount of average daily solar energy falling on a square meter of land at many locations in the United States and around the world. It then calculates the amount of AC energy you can expect to generate for a given size of solar system; it will also calculate how much money that will save you. The yearly average solar radiation in Boulder, Colorado (the closest location to my home in Fort Collins) is 5.56 kWh per square meter per day. Of course, it is less in the winter (4.43 kWh/m2/day in January) and more in the summer (6.24 kWh/m2/day in August). Five of my solar panels, with an area of 7.67 square meters producing 1 kW maximum DC power, receive about 42.6 kWh of solar energy per day on average, but over two years of actual use, I generated an average of 4.57 kWh per day for an overall efficiency of 10.7%. Over a year, PVWatts predicted that I should generate 1,459 kWh, but I actually generated 1,650 kWh. The predicted energy generation takes into account average or typical actual weather on a daily basis throughout the year, which reduces the total energy generated compared to a theoretical value based solely on the solar irradiance. Because of vagaries of weather, the predictions of PVWatts are expected to be within 10% for annual predictions and 30% for monthly predictions.
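A minimal sketch of that efficiency calculation, using only the numbers in this paragraph:

```python
# Overall system efficiency: AC energy delivered versus solar energy landing on the panels.
solar_resource = 5.56       # average kWh per square meter per day in Boulder, CO
panel_area = 7.67           # square meters (the ~5 panels rated at 1 kW DC)
actual_output = 4.57        # measured average kWh per day from that 1 kW block

incident = solar_resource * panel_area        # ~42.6 kWh/day of sunlight on the panels
efficiency = actual_output / incident
print(f"incident solar energy: {incident:.1f} kWh/day")
print(f"delivered AC energy:   {actual_output} kWh/day")
print(f"overall efficiency:    {efficiency:.1%}")      # about 10.7%
```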

Why am I able to utilize only about 11% of the available solar energy? Do I have a particularly bad solar system? In fact, my system and its various components are new and efficient for commercially available systems. My solar panels are rated at 15% efficiency, but they become less efficient when they get dirty or are covered with snow, amounting to about a 5% loss. The inverter that converts the DC coming from the solar panels into AC that can be used in my house is about 96% efficient, so there is another 4% loss. There are also small losses in the wiring and other electronic components. Overall, PVWatts estimates a derating factor of 0.77 in converting the DC power to AC power, giving an efficiency of about 12% overall for the system. As solar panels age, they also get less efficient, losing efficiency by about 1% per year, according to the NREL and the solar panel warranty. That may not sound like much, but it means that after 20 years they produce only 80%
as much power as when they were new. That is why the typical useful lifetime (and warranty) is 20 years.

Colorado State University (CSU), where I teach, is one of the top universities in the country for using solar power. CSU has developed a 30-acre site filled with high-efficiency solar panels—about one-third of which track the sun, improving the overall efficiency. The 23,049 solar panels produce 230 W DC (maximum) per panel, for a total capacity of 5.3 MW (megawatts). The annual energy output is about 8.5 million kWh, which works out to be slightly better than the efficiency of my system—about 12%. This amount of energy is sufficient for about 1,000 homes using 8,500 kWh per year, the average for Colorado but much less than the overall US average of 10,900 kWh per year. Average energy output is not the most important information, however. What really matters is what is available when users need it, and this varies greatly by hour, day, and month. If it is cloudy or stormy and the system is not generating electricity, the grid has to have sufficient power to make up the difference.

An electrical grid system has to have sufficient baseload power to handle constant needs 24 hours a day and other sources of power that can be switched on or off to meet the intermediate and peak loads that vary during the day (Figure 4.2). Intermittent sources such as solar can only contribute toward the intermediate loads during the day, but there still has to be sufficient capacity from other sources to make up for the loss of power when the sun isn’t shining. Peak loads can vary rapidly and require a source of energy that can be quickly ramped up and down, typically gas-fired power plants. Peak loads in the summer tend to occur during the late afternoon and evening when people get home from work and turn on the air conditioning. In the winter, peak loads occur in the morning when people get up and turn on lights, heat their houses, and cook, and there is another peak in the late afternoon and evening (4).

source: Reprinted by permission from Enerdynamics Corp (Understanding Today’s Electricity Business, 2012).

Table 4.1  Solar radiation and expected energy production for a 1 kW DC system

Location             Solar Radiation (kWh/m2/day)    Energy (kWh/mo)    Energy (kWh)
                     Max     Min     Average         Max     Min        Yearly Total
Boulder, CO          6.24    4.29    5.56            129     103        1,459
Tampa, FL            6.52    4.24    5.37            136      95        1,364
New York, NY         5.78    2.85    4.56            121      69        1,218
Seattle, WA          5.88    1.26    3.76            127      26          970
Los Angeles, CA      6.68    4.44    5.63            146     100        1,470
Phoenix, AZ          7.54    4.88    6.57            153     109        1,617
Chicago, IL          5.92    2.27    4.42            124      55        1,176
Frankfurt, Germany   5.04    0.86    3.13            107      17          802
Madrid, Spain        6.95    2.35    5.05            143      52        1,285
Aswan, Egypt         7.41    5.67    6.82            147     128        1,701

notes: Data calculated using PVWatts from NREL.


Fort Collins is a pretty good location for solar power, but there are better and much worse places. Table 4.1 lists different locations in the United States and other countries with their solar resources. It gives the monthly maximum, minimum, and average solar radiation in kWh/m2/day and the monthly maximum and minimum—as well as yearly—expected energy production in kWh for a 1 kW DC system.

Existing utility-scale PV installations in the United States are mostly in the Southwest, with a few scattered in New England, Florida, and other states. These plants are relatively small, mostly below 20 MW. The total PV solar installations in the United States by the end of 2012 had a rated DC value of 5.9 GW (5), but remember that is only the peak value, and the actual average amount of power generated is less than 15% of that. Solar power currently accounts for only 0.04% of the electricity generated in the United States (see Chapter 2). There are about 125 pre-operational PV installations under development in the United States, concentrated in California, Nevada, and Arizona, and a few of these plants are up to several hundred megawatts (6). Even so, solar electricity production is projected by the EIA to be just 5% of the non-hydropower renewable energy portfolio by 2035 in the United States (7).

One of the worst places for solar energy is Germany, but it is one of the leaders in solar energy production in the world, thanks to large subsidies to the industry and to users (8). The map in Figure 4.1 shows that Germany’s solar resources are worse than Alaska’s—about half as good as they are where I live. The subsidy arrangement in Germany is the feed-in tariff—a guaranteed price that utilities have to pay for solar power—that makes solar panel installations profitable at the expense of electricity consumers who pay high prices (9). While renewable energy rose from 6% of German electricity production in 2000 to 16% in 2009, PV solar contributed only 6.2% of that (for a total of 1% of electricity generation), yet it was subsidized at a rate of 43 euro cents ($0.57) per kWh in 2009. That amounts to a subsidy that is about 9 times the cost of electricity produced by the power exchange (4.78 euro cents [$0.06] per kWh in 2009). Altogether, the subsidy for PV solar amounted to €35 billion ($46 billion) by 2008 and was projected to be €53 billion ($70 billion) by the end of 2010 (10). Germany installed 4,300 MW of solar in the first half of 2012 before a 30-40% reduction in government subsidies in July, bringing the contribution of solar to 5.3% of its total electric power (11). The cost for abating carbon dioxide (CO2) by using solar energy for electricity is over €700 ($928) per metric ton of abated CO2, an incredibly high cost (10). The subsidies amounted to about 2 cents/kWh in 2012 and added about $5 billion annually to German consumers’ electricity bills (11). Clearly, the extremely high subsidies for PV solar in Germany make no economic sense and simply take away from other less expensive options. Germany is not the model to follow for solar power.

Charged Particle Interactions

Charged particles that are of radiobiological concern are electrons (β particles), protons, and α particles. Neutrons are uncharged, of course, but they mostly interact by colliding with protons, and the protons then become energetic charged particles. All charged particles interact with the cloud of electrons circulating around the nucleus in atoms by colliding with them and exciting or ionizing the atom, creating what is called an ion pair. The end result of charged particle interactions is to ionize atoms and give energy to electrons, which go on to cause further interactions. This is similar to the process shown in Figure 7.1, except that it is due to a charged particle instead of a photon, and the charged particle continues on its way with slightly less energy.

Each time a charged particle ionizes an atom, the charged particle loses about 34 eV of energy. Frequently a cluster of about three ionizations occurs in a very small volume, so on average an interaction of a charged particle with matter results in the particle giving up about 100 eV of energy to the matter (3). As a result, the charged particle slows down. This transfer of energy from the charged particle to the matter is called the “Stopping Power” (S) or the “Linear Energy Transfer” (LET) of the particle. Specifically, the Stopping Power or LET is the rate of energy loss of a charged particle per unit of distance traveled. Hans Bethe developed an equation to describe the Stopping Power mathematically (4). This equation says that the Stopping Power is proportional to the square of the charge (Z) of the charged particle and inversely proportional to the square of its velocity (v). Mathematically,

S ∝ Z^2 / v^2

The implication of the Bethe-Bloch equation, as it is usually called, is that heavy charged particles, such as protons or α particles, move in a straight line through matter, losing energy to electrons and gradually slowing down. As they slow down, they give up energy at a faster rate, ending up in a very dense cluster of energy deposition called the Bragg peak. Also, because the Z of an α particle is 2 while the Z of a proton is 1, an α particle gives up its energy 4 times faster than a proton. As a result of this pattern of energy deposition, heavy charged particles move a definite distance in matter and then come to a complete stop. This is called the range of the particle. Beyond that range, no energy can be given to the matter, so there are no further effects from the radiation. This property of heavy charged particle interactions is the basis for cancer therapies using heavy ions such as protons or carbon nuclei. The radiation penetrates into the tumor but cannot pass completely through it into surrounding normal tissue because its range is very well defined. This allows a very high dose to be given to the tumor but little to the normal tissue.
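The factor of 4 follows directly from the Z^2/v^2 proportionality; a short sketch (the velocity value is arbitrary, since only the ratio at equal speed matters, and all the other terms of the full Bethe formula are ignored):

```python
# Relative stopping power from the simple proportionality S ~ Z^2 / v^2.
def relative_stopping_power(charge_z, velocity):
    return charge_z**2 / velocity**2

v = 1.0e7   # same assumed speed for both particles (m/s, illustrative only)
proton = relative_stopping_power(1, v)
alpha = relative_stopping_power(2, v)
print(f"alpha particle / proton stopping power at equal speed: {alpha / proton:.0f}x")   # 4x
```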

Electrons have charged particle interactions and also follow the Bethe-Bloch equation, but there is a critical difference. Since they are very light, when they knock an electron out of its orbit to ionize an atom, they are deflected like pool balls colliding. As a result, electrons have a zig-zag path as they move through matter and scatter their energy over a broader path than a heavy charged particle. They also do not have a well-defined range but gradually taper off as they move deeper into matter.

These interactions of photons and charged particles explain why different types of radiation have different abilities to penetrate into matter. Photons such as γ and X-rays are the most penetrating, being able to go through several centimeters or meters of tissue, depending on their energy. Electrons can penetrate a few microns (millionths of a meter) up to a centimeter in tissue, depending on their energy. Alpha particles can only penetrate a few microns in tissue and are easily stopped by a piece of paper or the outer layer of skin cells.

Neutron Interactions

There is one other type of radiation that we have not yet considered, namely neutrons. Since many neutrons are produced in a nuclear reactor and are fundamental to its operation, it is important to also understand their interactions with matter. Neutrons do not have a charge so they cannot directly interact with electrons. Instead, neutrons crash into the nucleus and can bounce off while giving up part of their energy in what are known as elastic collisions. When a neutron hits a hydrogen nucleus, which is just a single proton, it bounces off, just like pool ball collisions, and gives energy to the proton. When it hits a heavier nucleus, it is more like a pool ball hitting a bowling ball, so the neutron bounces off but the heavy nucleus does not move much and the neutron does not lose much energy. The best way to slow down neutrons is to use material that has a lot of hydrogen nuclei. This is the reason that Fermi got a high rate of fission when he put paraffin between his source of neutrons and uranium, since paraffin has a lot of hydrogen to slow neutrons down, and slow neutrons are essential for fission to occur (see Chapter 6). It is also why water is used as a moderator in most nuclear reactors, since water (H2O) also has a lot of hydrogen. As the neutron gives up energy to a proton, the proton then becomes an energetic charged particle and gives up its energy according to the Bethe-Bloch equation, as discussed above. Since our cells are about 85% water (5) and our bodies are about 60% water, we are also good absorbers of neutrons, so shielding of neutrons is extremely important around nuclear reactors.
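The pool-ball picture can be made quantitative: for an elastic, head-on collision, the largest fraction of its energy a neutron can hand to a nucleus of mass number A is 4A/(A + 1)^2, a standard kinematics result sketched below:

```python
# Maximum fraction of a neutron's energy transferred in one elastic collision
# with a nucleus of mass number A: 4A / (A + 1)^2 (head-on collision).
def max_energy_transfer_fraction(mass_number):
    return 4 * mass_number / (mass_number + 1) ** 2

for name, a in (("hydrogen", 1), ("carbon", 12), ("uranium", 238)):
    print(f"{name:8s} (A = {a:3d}): up to {max_energy_transfer_fraction(a):.1%} per collision")
# hydrogen ~100%, carbon ~28%, uranium ~1.7%: hydrogen-rich materials like water
# or paraffin slow neutrons down far more effectively than heavy nuclei do.
```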

Neutrons can interact by another method known as inelastic collisions. Since the neutron has no charge, it can actually penetrate the nucleus and is sometimes captured by the nucleus to form a new isotope. Hydrogen can capture a slow neutron to become deuterium (hydrogen with 1 proton and 1 neutron, or 2H), and deuterium can capture a slow neutron to become tritium (hydrogen with 1 proton and 2 neutrons, or 3H). This is also the process that produces transuranic elements when uranium captures a neutron and, instead of fissioning, becomes a new element, as described in Chapter 6. Sometimes neutrons can knock a proton out of a nucleus and change it into a new element. The carbon isotope 14C that is important for radiocarbon dating is produced in the upper atmosphere by neutrons from outer space that crash into nitrogen (14N) and are absorbed while knocking out a proton. And, of course, under the right circumstances, neutrons can be absorbed by uranium or plutonium isotopes and cause fission.

FUKUSHIMA, MARCH 11, 2011

How the Accident Happened

A huge natural disaster hit Japan at 2:46 p.m. local time on March 11, when a magnitude 9.0 earthquake occurred at sea, 95 miles from the Daiichi nuclear power plant near Fukushima. There were six reactors at the Fukushima plant. Units 1-3 were operating; after the earthquake disrupted their electrical power, they immediately shut down and switched to emergency cooling powered by diesel generators, as they were supposed to do. Unit 4 was undergoing a fuel change, so it had no nuclear fuel in the core. Units 5 and 6 were in cold shutdown and were not operating.

The huge earthquake, the largest ever to strike Japan, caused two enormous tsunamis that devastated coastal cities in northern Honshu about an hour later. By June, the toll of dead and missing was above 24,000; the final count a year later was about 19,000 dead and missing, with 27,000 injured. Four hundred thousand buildings were destroyed and nearly 700,000 more were damaged (36). The tsunami sent a wall of water about 45 feet high surging over the reactors at Fukushima, topping the 20-foot wall that was designed to protect them. The flood submerged the diesel generators and they quit working. Battery backup power was activated to run an additional emergency cooling system, but the batteries failed after a few hours, and there was no more electric power to run the pumps that cool the reactors. As a result, a nuclear accident developed on top of the enormous devastation of the earthquake and tsunami (37-39).

The reactors at Fukushima are boiling water reactors, a different design from the pressurized water reactor at Three Mile Island but the same general type as those at Chernobyl; unlike the Chernobyl reactors, however, they have no graphite moderator and are both cooled and moderated by water. The reactor core is contained in a steel reactor vessel, which sits within a concrete primary containment structure. A torus surrounds the reactor and is connected to the core (Figure 10.3); it is designed to provide emergency cooling even in the event of total loss of external power, but it can only do so until the water reaches the boiling point. With the total devastation caused by the tsunami, once the water in the cores began to boil, the nuclear crisis began.

The water in the cores of reactors 1-3 began to turn to steam and, as pressure built up, relief valves opened to vent the pressure. The water level dropped in the reactor cores, as had happened at TMI but even worse. As about three-fourths of the cores became exposed, temperatures rose to over 1,200°C, the Zircaloy cladding started to oxidize and fall apart, fission products were released, and hydrogen was produced from the reaction of zirconium with steam. Most of the core in unit 1 melted within 16 hours of the tsunami. It was later learned that the melted core had burned through the bottom of the steel reactor pressure vessel (RPV) and eaten its way about 70 centimeters into the concrete primary containment structure. Much of the fuel in units 2 and 3 also melted a day or two later but did not melt through the steel RPV (40).
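
The hydrogen came from the zirconium-steam reaction, which becomes rapid at these cladding temperatures and releases additional heat:

Zr + 2 H2O → ZrO2 + 2 H2

Every zirconium atom that oxidizes liberates two hydrogen molecules, which is why so much explosive gas accumulated in the reactor buildings.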

Hydrogen and volatilized fission products such as xenon, iodine, and cesium built up excessive pressure in the reactor primary containment structures and had to be vented into the secondary containment buildings. These buildings have walls only about 1 inch thick and could not contain much pressure, so they too were vented, releasing hydrogen and some radioactivity. In spite of this, hydrogen explosions occurred in reactor units 1 and 3 on March 12 and 14, blowing apart much of the secondary containment buildings, though the reactor primary containment structures remained intact. Reactor 3 burned reprocessed MOX fuel, so it had a higher proportion of plutonium, but the plutonium and uranium remained in the reactor cores. Reactor 2 had damage to the torus, and this led to an uncontrolled leakage of fission products to the environment (38, 41, 42). By October, a cover had been constructed over reactor building 1 to contain any further releases of radiation, and by December 16, all three reactors were in cold shutdown with temperatures around 70°C. Construction of covers for buildings 3 and 4 began in 2012 (40, 42-44).

Reactor 4 was shut down before the tsunami and did not have any fuel in its core, so there was no problem with a core meltdown. However, the reactor buildings housed the cooling pools that contained the spent fuel rods from the reactors, and the cooling pool for reactor 4 was unusually full, holding a full core load of fuel and thus generating a lot of heat. Cooling pools have to be cooled with circulating water, and the tsunami also shut down those pumps. As water boiled out of cooling pool 4, part of the fuel rods became exposed and became hot enough to partially melt and release hydrogen gas and fission products. Building 4 also suffered a hydrogen explosion, the gas apparently coming from shared vents with reactor 3. Water was pumped into the fuel pools after the accident to keep them from boiling dry. At the present time, the cooling pools are being cooled to normal temperature with a closed-loop water circulation system, and the fuel pool 4 building has been reinforced (40).

Not all of the reactors caught in the earthquake and tsunami had nuclear accidents. Reactors 5 and 6 were in cold shutdown and did not have any problems. Three of the four reactors at the Daini nuclear power plant, about 7 miles south of Daiichi, were running at full power; their diesel generators were also shut down by the tsunami, but they were able to go into a full cold shutdown without experiencing a meltdown or release of radioactivity. Seventy-five miles north of Daiichi and even closer to the epicenter of the earthquake, three reactors were operating at the Onagawa nuclear power plant, which was built to withstand 9-meter tsunamis. They went into a normal cold shutdown and did not release any radioactivity (42, 45).

In contrast to the other nuclear power accidents, which were caused by operator error and design problems, the Fukushima accident was the result of an enormous natural disaster that the reactors were not designed to withstand. The entire infrastructure of a huge area of northern Japan was destroyed by the earthquake and tsunami—roads destroyed, power out, buildings shattered, equipment trashed, boats sitting far inland, nearly 20,000 people dead. This made it extremely difficult to bring in equipment to deal with the accident, and workers had to manage a nuclear crisis even though many of them had lost their houses and perhaps family members. It was truly a heroic effort on the part of these workers to bring the reactor crisis under control under such horrific conditions.

Back to the Future: Nuclear Power

With climate change, those who know the most are the most frightened. With nuclear power, those who know the most are the least frightened.

—Variously attributed

Nuclear power is considered by many to be an old technology locked in the past—they say the future is with solar and wind. Commercial nuclear power began in 1954 when Russia built the first civilian nuclear power reactor, followed by the British in 1956 and the Americans in 1957 (1). In the 1960s and 1970s, nuclear power plants blossomed all over the world. There were 42 reactors in the United States in 1973; by 1990 there were 112. Some of these were closed, so by 1998 there were 104 operating nuclear reactors (the same number operating at the end of 2012), providing about 100 GWe (gigawatts electric) to the grid. Worldwide, there were 432 operating nuclear reactors as of mid-2013. Nuclear reactors have been providing about 20% of the electricity in the United States for over 20 years, with no emissions of carbon dioxide (CO2) (2). France gets nearly 75% of its electricity from nuclear power, the highest proportion of any nation. Germany and Japan each got more than 25% of their electricity from nuclear power in 2010, though after the accident at Fukushima, Japan, in 2011, Germany shut down about half of its reactors, Japan temporarily shut down all of its reactors, and both countries are considering permanently closing them (3). So nuclear power has been providing electricity for over 50 years and plays a major role in the energy mix for a number of countries.

But nuclear power is also critically important for an energy future that will meet our electrical power needs with minimal production of greenhouse gases and benign effects on the environment. We must go back to the future if we want to make serious inroads into reducing greenhouse gases and global warming. To see why nuclear power is critical for the future, let’s begin our journey by touring a nuclear power plant.

Hereditary Effects of Radiation

Cancer is the result of mutations and alterations in the genes in somatic cells (all cells but the gonadal cells) that affect the individual who has them. Hereditary effects, also frequently (and somewhat confusingly) known as genetic effects, result from mutations in the gonadal germ cells and affect following generations. The ability of radiation to cause hereditary mutations in genes was first discovered and studied in detail in the fruit fly, Drosophila melanogaster, by Hermann Muller in 1927. He showed that the number of mutations increased linearly with dose. Clearly, humans are more complex and very different from fruit flies, so a huge study known as the Megamouse Project was done at Oak Ridge National Laboratory in the 1950s by the husband and wife team of William and Liane Russell. They demonstrated that mutations in mice also increased linearly with dose, but that the effect depended greatly on the dose rate, so that it was much smaller at a low dose rate (7). The BEIR committees have used these results to calculate what is known as the "doubling dose"—the dose that would cause an additional number of mutations in a human population equal to the number normally present, so the total number would be doubled. They conclude that the doubling dose is 1 Gy (29, 36).
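
As a purely illustrative calculation of how the doubling-dose concept is used (the baseline mutation rate below is a made-up number chosen only for the arithmetic, not a measured value):

# Illustrative doubling-dose arithmetic; the baseline rate is hypothetical.
baseline_rate = 0.01      # hypothetical spontaneous mutation incidence
doubling_dose_gy = 1.0    # doubling dose concluded by the BEIR committees

def induced_rate(dose_gy):
    # Linear model: induced mutations scale with dose / doubling dose.
    return baseline_rate * dose_gy / doubling_dose_gy

for dose in (0.1, 0.5, 1.0):
    print(dose, baseline_rate + induced_rate(dose))

# At a dose of 1 Gy the induced incidence equals the spontaneous incidence,
# so the total is doubled; at 0.1 Gy it adds only one-tenth of the baseline.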

You might ask, why use mouse results when we really care more about humans? Once again, the Japanese atomic bomb survivors constitute the largest human population carefully studied for hereditary effects, and there simply are no hereditary effects that can be attributed to the radiation. These negative results lead to the likelihood that the doubling dose for humans is actually between 1.5 and 2 Gy. The overall conclusion is that there have been no measurable hereditary effects from exposure of human populations to radiation. The main concern when a human population is exposed to radiation is the risk of cancer, not the hereditary risk.

Is There Enough Uranium for a Nuclear Renaissance?

At the current usage rate of 69,000 tonnes of uranium per year—and assuming all of it would come from mining—there is an 80-year supply based on the Red Book recoverable resources at a cost of less than $130 per kilogram, and that increases to a 100-year supply at double the cost. But what would happen if the number of reactors increased from the current 443 nuclear reactors to 1,000 reactors? Could they be supplied with uranium fuel? The Red Book estimates that there are undiscovered uranium resources of about 10.4 million tonnes, more than doubling the available uranium resources. And that does not include several major producers with large identified uranium resources that do not report estimates of undiscovered resources. Uranium resources are likely to follow the pattern of other metals that have been mined historically, for which predictions of depletion have not been borne out in reality (45). The MIT study on the future of nuclear power used a model to predict a tenfold increase in resources at a doubling of price, due to increased exploration and the reclassification of resources as economically recoverable. The study concludes that "the world-wide supply of uranium ore is sufficient to fuel the deployment of 1,000 reactors over the next half century and to maintain this level of deployment over a 40 year lifetime of this fleet" (52, 53).
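
A back-of-the-envelope check of these numbers is sketched below. The resource figures are taken or derived from the text; scaling demand in proportion to the number of reactors is an assumption made only for illustration:

# Rough uranium supply arithmetic using the figures quoted above.
annual_use_tonnes = 69_000                    # current world usage per year
identified_tonnes = 80 * annual_use_tonnes    # 80-year supply at < $130/kg
undiscovered_tonnes = 10_400_000              # Red Book undiscovered estimate

total_tonnes = identified_tonnes + undiscovered_tonnes
print("identified (million tonnes):", identified_tonnes / 1e6)
print("identified + undiscovered  :", total_tonnes / 1e6)

# Hypothetical 1,000-reactor fleet, assuming usage scales with reactor count.
scaled_use = annual_use_tonnes * 1000 / 443
print("years of supply for 1,000 reactors:", round(total_tonnes / scaled_use))

This works out to roughly a century of fuel for a 1,000-reactor fleet, consistent with the MIT study's conclusion quoted above.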

But that is not the end of the story. Most of the world’s—and all of the United States’—nuclear fleet operates in a once-through mode known as an open fuel cycle. In an open fuel cycle, uranium is mined, enriched, made into fuel, burned in a reactor, and the spent nuclear fuel has to be stored until it decays. But the spent nuclear fuel contains uranium and plutonium that can be used for new nuclear fuel. In a closed fuel cycle, the spent nuclear fuel is recycled to extract the plutonium and uranium, providing a new resource to fuel reactors and reducing the nuclear waste storage problem. This is not only feasible but is currently being done in France and a few other countries, as discussed in Chapter 9. Reactors can have a maximum of about 30% of their fuel supply provided by recycled MOX fuel. If both the plutonium and uranium were recycled into new fuel, that could increase the available fuel resources by about 25%.
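
As a simple illustration of that last figure (treating the 25% extension as applying uniformly to the once-through supply, which is a simplification):

# If recycling plutonium and uranium stretches fuel resources by ~25%,
# an 80-year once-through supply becomes roughly a 100-year supply.
once_through_years = 80
print(once_through_years * 1.25)   # 100.0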