Why We Need Nuclear Power

The Good, Bad, and Ugly of Coal and Gas

COAL

About ten miles north of where I live in northern Colorado, a smokestack rises 500 feet in the air alongside a stair-step series of buildings. On a summer day, nothing appears to be coming from the smokestack, as though it is a ghostly relic; in the winter, a white plume rises. On closer approach, a lake teeming with ducks, geese, pelicans, and other waterfowl sits in the foreground. A herd of American bison roams over 4,000 acres of grasslands surrounding the smokestack. This apparently benign plant, called Rawhide Energy Station, is actually a 280 MWe coal-fired power plant that provides about one-quarter of the electricity for four nearby communities—Fort Collins, Loveland, Longmont, and Estes Park. It is a public utility owned by the four communities and is near state-of-the-art for a coal-fired power plant, being one of the most efficient in the western United States and among the top ten in lowest emissions (1).

Anatomy of a Coal-Fired Plant

I drove up to the Rawhide Energy Station and identified myself over an intercom box so the guard could open the security gate for me to enter. After I drove across the edge of the lake, an armed guard directed me to the visitor center. I met Jon Little, the knowledgeable and friendly tour guide, along with a group of bicyclists from a local environmentally conscious brewery who were also taking the tour. We put on hard hats and headphones with a radio set for the tour.

The first and largest building houses the boiler and the generators. The coal arrives by train in five- to six-inch lumps, which are broken down into one-inch lumps before being fed by conveyor to grinders that convert the coal into a powder finer than facial powder. This powder is then mixed with air and blown into the 16-story boiler from four directions, where it burns efficiently at a hellish 2,800°F. Three hundred gallons per minute of purified water circulate in tubes through the boiler, producing steam at 1,000°F and 1,900 pounds per square inch to drive the turbines. Water from the lake is fed through a condenser that turns the steam back to water so it can recirculate through the boiler. Enough heat is absorbed by the lake water to keep the 5-billion-gallon lake at 70°F even in the winter, which the ducks and geese like. The turbines drive the generator, which produces electricity at 24,000 volts (24 kV). The turbine and generator room is extremely noisy, necessitating the headphones and radio set to hear Jon’s explanations. The electricity from the generator then goes through a step-up transformer station to 235 kV and is fed into the transmission lines so people can turn on their lights, air condition their houses, and cook their dinners.

After leaving the clean turbine room, we went to the scrubber room, where the noise level was much more tolerable but there was a lot more coal dust around. The gases and fly ash from the boiler come to this room, where the main contaminants, sulfur oxides, are removed in huge conical scrubber tanks. A mixture of lime and water is injected into the scrubber tanks and mixes with the heated air to react chemically with the sulfur, precipitating out calcium sulfites and sulfates. The Rawhide plant is very efficient in this process, producing only about one-tenth of the sulfur dioxide in its exhaust compared to an average coal plant. Finally, the air goes into the bag room, another building filled with huge bag filters where the fly ash is removed, and then up the smokestack. The opacity of the exhaust air is continuously monitored and is less than one-tenth of what is allowed by state and federal law. So when you look at the smokestack you don’t see smoke or anything else coming out, except in the winter—the white plume is condensed vapor from the water produced when the coal burns.

THE NUCLEAR ATOM

J. J. Thomson, director of the famed Cavendish Laboratory in Cambridge, England, had discovered the electron in 1897, as noted above. At the time, there was no clear picture of the structure of an atom, but if it had electrons that had negative charge, then by the laws of electricity it had to have positive charges that would neutralize them, since there was no net charge in atoms. Thomson postulated a model of the atom that was called the “plum pudding” model, which consisted of electrons (the raisins) scattered throughout a sea of positive charge (the pudding) so that all of the charges would cancel out.

Ernest Rutherford grew up in the rough frontier of New Zealand and began his scientific career studying the new field of radio waves at the University of New Zealand. In 1895 he accepted a scholarship to work under J. J. Thomson in the Cavendish Laboratory, where he wanted to commercialize his work with radio waves. But the discoveries of X-rays by Röntgen and radioactivity by Becquerel and the Curies changed the focus of Thomson’s lab to these strange new radiations. Thomson subsequently discovered the electron, and Rutherford left the Cavendish lab in 1899 to start his own lab at McGill University in Canada, where he began his work on seminal discoveries that led to an understanding of the nucleus of the atom (9, 10).

Rutherford analyzed the radiation emitted by the radioactive elements uranium, radium, and thorium and discovered that there were two types of radiation particles emitted, which he named alpha (α) and beta (β). Alpha radiation was readily absorbed by a sheet of paper, while β radiation could penetrate through solid objects. He subsequently determined that α particles were identical to the helium atomic nucleus and were positively charged, while β particles were identical to the electron discovered by Thomson and were negatively charged. A French physicist, Paul Villard, subsequently discovered a third type of radiation that was similar to the X-rays discovered by Röntgen, called gamma (γ) radiation (10).

While studying the radioactivity given off by thorium, Rutherford discovered that a radioactive gas was formed. He collared a colleague at McGill, the chemist Frederick Soddy, to help him analyze the gas, and they determined that it was radon (11). The only possible conclusion was that the radioactive element, thorium, was slowly turning itself into radon by emitting an α particle from the nucleus, a process called transmutation. They studied numerous radioactive elements and determined that different ones decayed into new elements at different rates by emitting α and β particles. Each radioactive element lost half of its radioactivity in a specific time that varied greatly among the elements. They called this the half-life of the radioactive element, and it became a way to distinguish different radioactive decays. There were different variations of a radioactive element that had different half-lives, and they called these different variations isotopes (10). Exactly what an isotope is will become clear later.

Rutherford moved back to England to take up a position at the University of Manchester in the spring of 1907. He was fascinated by α-particle radiation, and he worked with the German physicist Hans Geiger to develop detectors that could measure a single α particle. Geiger subsequently developed the modern Geiger counter, which is extremely useful in detecting radiation. Rutherford thought he could study the structure of atoms by bombarding them with α particles emitted from a radioactive source such as thorium. He made thin foils of heavy elements such as gold and measured the scattering (deflection) of α particles as they moved through the foil. On a whim, he told an undergraduate student, Ernest Marsden, to measure the scatter of α particles in a backward direction. Rutherford was never sure why he assigned Marsden this “damn fool experiment,” but it was another one of those serendipitous moments in science (7). To everyone’s complete surprise, Marsden actually measured α particles scattering backward from the gold foil. Rutherford was astonished: “It was quite the most incredible event that had ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you” (10). He concluded that the positive charges of an atom must be clustered in a very small volume with electrons circulating around them, and that the scattered α particles occasionally hit a nucleus nearly head-on and bounced back, as if you rolled a pool ball into a bowling ball. When he published this finding in 1911, it ended the idea of the “plum pudding” model for an atom and led to the modern concept of the nuclear atom with its mass clustered in a tiny nucleus with electrons circulating around it (9).

But there were a couple of big problems with this nuclear model of the atom. According to classical physics, with all of the positive charge concentrated in the nucleus, the orbiting electrons would emit electromagnetic radiation, lose energy, and eventually fall into the nucleus, much like a satellite circling the earth eventually slows and falls to earth. Furthermore, as Rutherford later demonstrated, the positive charge in the nucleus is made up of particles called protons. How could a stable atom exist with all of its positive charge in the nucleus? The positive charges on the protons would push them apart, and the negative electrons should fall into the nucleus. The idea of the quantum once again intruded into classical physics.

WASTE ISOLATION PILOT PLANT (WIPP)

In reality, we already know how to store nuclear waste long-term; in fact, we are already storing military nuclear waste at a site 26 miles southeast of Carlsbad, New Mexico, called WIPP. What makes WIPP, unlike Yucca Mountain, a desirable repository? The WIPP site is located in the Chihuahuan Desert, the largest desert in North America, but 250 million years ago the area was a shallow inland sea known as the Permian Sea. Over millions of years the sea subsided and the water evaporated, leaving a 2,000-foot-thick bed of salt known as the Salado Formation. The salt bed lies on an impermeable rock layer and is covered by an impermeable layer of rock called caliche that prevents water from entering from the surface (8).

There are a number of advantages of a salt bed formation for permanently isolating radioactive waste. In contrast to the complex geology of Yucca Mountain, a rock salt formation is much simpler. It is geologically stable and not subject to fracturing from earthquakes; flowing water has not been present for 250 million years or the salt would have dissolved away; and rock salt is a crystalline rock with plasticity that slowly moves to fill in voids (25). Some concerns have been raised about the presence of karst, a type of topography in which there are sinkholes and large voids such as caves that could lead to flowing water at the site. However, a detailed review of scientific publications and reports does not support the presence of karst at WIPP (26). The EPA has also evaluated this possibility and has concluded that the WIPP site does not show any evidence of karst (27).

The history of WIPP begins in the era of World War II, when the national laboratory at Los Alamos, New Mexico, was developing nuclear weapons. A number of other national laboratories and sites were developed under the auspices of the now-extinct Atomic Energy Commission, which morphed into the current Department of Energy (DOE). These include the Idaho National Engineering and Environmental Laboratory (INEEL), Rocky Flats Environmental Technology Site (Colorado), Savannah River Site (South Carolina), Hanford Site (Washington), Argonne National Laboratory (Illinois), Nevada Test Site, and Lawrence Livermore National Laboratory (California). All of these sites were, and some still are, involved in various ways with research on uranium and plutonium and the development or maintenance of nuclear weapons.

An unavoidable result of working with uranium and plutonium is that transuranic (TRU) waste is produced. Recall that transuranics are radioisotopes with a higher atomic number than uranium. TRU waste consists of contaminated clothing, plastics, soil, sludge, tools, and other items used in producing or working with TRU. Most of the TRU waste is plutonium, but it can also include americium, neptunium, and other transuranics. As the volume of waste built up at these sites, the National Academy of Sciences concluded in 1957 that an underground repository in salt beds would be the best method of disposal. Until that could be done, however, the TRU waste generated at Los Alamos National Laboratory was stored in thousands of barrels under plastic tents out in the open. In 2000 a severe forest fire came within 500 yards of the barrels (8). If these barrels had burned and the radionuclides had become airborne in the fire, it would have been a major environmental disaster. Clearly, on-site storage under poorly designed conditions was not a good way to deal with TRU waste!

Congress authorized WIPP in 1979 as a research and development storage site for radioactive TRU waste from defense activities, which is not regulated by the Nuclear Regulatory Commission (NRC). The DOE was given responsibility for research and development of the site, and the EPA was to establish the radioactive waste disposal regulations for it. Lawsuits were subsequently brought by the state of New Mexico and by various environmental groups to stop WIPP, but in 1999 these lawsuits were resolved and the site began receiving TRU waste. By 2005 a total of 12 federal sites were delivering their TRU waste to WIPP, including the last of the shipments from Rocky Flats in Colorado, where plutonium triggers were produced, allowing this hazardous site to close a year ahead of schedule (28).

WIPP was designated by the WIPP Land Withdrawal Act of 1992 to store only low-level TRU waste that could be contact-handled, meaning that the storage containers shield the waste sufficiently that it can be handled by workers without further shielding. But there was also a need to store waste with higher levels of radioactivity, known as remote-handled TRU waste, at WIPP. This waste requires additional lead shielding and special remote handling. The EPA approved a DOE plan for storage of remote-handled TRU in 2004, and the state of New Mexico gave its approval in 2006, allowing the first shipment of this type of waste (28). Current law allows 96% of the TRU waste stored at WIPP to be contact-handled and up to 4% to be remote-handled (29). The law requires that WIPP be recertified by the EPA every five years; it received its second recertification in 2010, indicating that WIPP complies with federal disposal regulations to safely contain TRU waste for the regulatory period of 10,000 years (30).

Low-level contact-handled TRU waste is transported from around the country in special containers carried by flatbed trucks that are monitored by satellite. Seven 55-gallon barrels fit into a specially designed cylindrical cask 8 feet in diameter by 10 feet tall called a TRUPACT-II. These casks were approved by the NRC after tests showed that they can survive severe crashes and punctures followed by fires or immersion in water. More than 10,000 shipments of this waste had been sent safely to WIPP from sites all over the United States by the end of 2011 (31). In reality, the public is in much greater danger from the enormous volume of highly toxic chemicals that routinely travel through our cities on trucks and trains than from the shipments of TRU waste. The remote-handled TRU waste requires a different kind of container since it is more radioactive. The NRC has certified two different containers for shipping remote-handled TRU; these have more rigorous requirements and are heavily shielded with steel and lead. Once the trucks arrive at WIPP with the waste, the casks are opened and the drums of waste are removed and stored in the WIPP site (32).

The WIPP site has four shafts sunk 2,150 feet into the Salado Formation. At the base of the shafts there will eventually be eight panels, each divided into seven chambers 33 feet wide by 13 feet high. Thousands of barrels of contact-handled TRU are stored in columns in the chambers, while the remote-handled TRU waste is stored in shielded casks in boreholes carved into the chamber walls. Two of the panels have already been filled and sealed off to let the slow but inevitable creep of the salt enfold the barrels and compact them to about a third of their present size (8). Thus immobilized, the TRU waste will be safely isolated for millions of years, though after 250,000 years (ten half-lives of 239Pu), a blink in the lifetime of a salt mine, it will no longer be dangerous.

Why is there so much controversy and study of Yucca Mountain if there is already an approved repository for radioactive waste? Why not just store the spent nuclear fuel at WIPP? There are several factors to consider in answering these questions. The first factor gets back to politics. The laws that authorized WIPP specifically allowed only TRU waste from defense-related installations to be stored there, and separate legislation mandated that only Yucca Mountain be studied for spent nuclear fuel waste from commercial reactors. So the current law does not allow WIPP to be used for disposal of spent nuclear fuel. Of course, in principle, the law could be changed, so the question is whether WIPP could handle the waste from nuclear reactors. According to D. Richard Anderson (“Rip”), the scientist who was in charge of the risk assessment analysis for WIPP, “WIPP could safely hold all the nuclear waste in the world. Six million cubic feet—585,000 barrels—is the limit by regulation here. In practice, the mine, or another mine next door, could take millions” (8). So we really do have a solution to long-term storage of radioactive waste.

But there is one more factor to consider and that is whether it is desirable to permanently store the waste from spent nuclear fuel. The Yucca Mountain site is specifically designed so that stored waste could be removed if desired before permanently sealing the disposal site. But the plasticity of the rock salt ensures that the waste stored there can never be retrieved. At this point, you are probably thinking that I have gone off the deep end. Isn’t permanent storage the Holy Grail of nuclear waste management?

THE IPCC SPECIAL REPORT ON EMISSIONS SCENARIOS (SRES)

In 2000, the IPCC published a special report on various scenarios of world economic growth, population, and technology development to serve as a basis for making predictions about global warming (3). These scenarios have been used in subsequent IPCC reports to establish a range of potential global warming predictions. The scenarios are given verbatim here.

A1. The A1 storyline and scenario family describes a future world of very rapid economic growth, global population that peaks in mid-century and declines thereafter, and the rapid introduction of new and more efficient technologies. Major underlying themes are convergence among regions, capacity building and increased cultural and social interactions, with a substantial reduction in regional differences in per capita income. The A1 scenario family develops into three groups that describe alternative directions of technological change in the energy system. The three A1 groups are distinguished by their technological emphasis: fossil-intensive (A1FI), non-fossil energy sources (A1T) or a balance across all sources (A1B) (where balanced is defined as not relying too heavily on one particular energy source, on the assumption that similar improvement rates apply to all energy supply and end use technologies).

A2. The A2 storyline and scenario family describes a very heterogeneous world. The underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge very slowly, which results in continuously increasing population. Economic development is primarily regionally oriented and per capita economic growth and technological change more fragmented and slower than other storylines.

B1. The B1 storyline and scenario family describes a convergent world with the same global population, that peaks in mid-century and declines thereafter, as in the A1 storyline, but with rapid change in economic structures toward a service and information economy, with reductions in material intensity and the introduction of clean and resource-efficient technologies. The emphasis is on global solutions to economic, social and environmental sustainability, including improved equity, but without additional climate initiatives.

B2. The B2 storyline and scenario family describes a world in which the emphasis is on local solutions to economic, social and environmental sustainability. It is a world with continuously increasing global population, at a rate lower than A2, intermediate levels of economic development, and less rapid and more diverse technological change than in the B1 and A1 storylines. While the scenario is also oriented towards environmental protection and social equity, it focuses on local and regional levels.

An illustrative scenario was chosen for each of the six scenario groups A1B, A1FI, A1T, A2, B1, and B2. All should be considered equally sound. The SRES scenarios do not include additional climate initiatives, which means that no scenarios are included that explicitly assume implementation of the United Nations Framework Convention on Climate Change or the emissions targets of the Kyoto Protocol.


Figure A.3 Multimodel averages and assessed ranges for surface warming for the various scenarios.

source: Reproduced by permission from Climate Change 2007:The Physical Science Basis. Working Group I Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Figure SPM.5. Cambridge: Cambridge University Press, 2007.

The IPCC 2007 report made predictions about the amount of global surface warming from 2000 to 2100 for the various scenarios, based on averages from multiple climate models (Figure A.3). Scenario B1 is the only one likely to lead to global surface warming of less than 2°C.

Concentrated Solar Power (CSP)

An alternative to the direct conversion of sunlight into electricity is to concentrate the sun’s energy to produce heat. CSP plants use mirrors to concentrate sunlight, similar to the way kids use a magnifying glass to focus the sun and burn wood—or other things! These are utility-scale thermal plants that in principle are no different from a coal-fired plant or a nuclear plant, but they use the concentrated heat of the sun to produce high-pressure steam, which drives a turbine and a generator to produce electricity. Because of this, they use large amounts of water—twice as much per unit of electricity produced as coal-fired power plants (12)—which can be a severe problem since the best locations for solar power are in deserts. Several variations exist to concentrate the sun.

Parabolic trough reflectors use expensive parabolic mirrors 300 to 450 feet long and 15 to 20 feet tall to concentrate the sun onto a receiver tube that runs down the middle, carrying a high-temperature heat transfer fluid that heats up to 700°F. The fluid then runs through a heat exchanger to produce steam from water. Many parallel rows of the parabolic mirrors constitute a power plant. A less expensive variation known as the Compact Linear Fresnel Reflector uses long rows of flat mirrors to focus the sun on tubes containing water to produce steam directly (13, 14). The world’s largest parabolic trough solar power plant, with a capacity of 350 MWac, has been running for more than 20 years in the Mojave Desert at Kramer Junction, California. A 64 MWac parabolic trough plant was built in Nevada in 2007 (15). By far the world’s largest parabolic trough plant—two phases of 500 MW each—was planned at Blythe, California, but the high cost of these plants led its developer to convert it to a PV plant (16).

A solar dish uses concave mirrors about 40 feet in diameter to focus the sun on a small area, resulting in a very high concentration of sunlight—typically by a factor of over 2,000—to heat a working fluid to about 1,300°F. These systems have a Stirling engine that converts the heat of the working fluid to mechanical energy, which then drives a generator. Each dish generates 5-30 kW, depending on the system (14). This technology is lagging behind the other CSP approaches because of the high cost of building the dishes. At the time of writing, there are no large-scale dish Stirling projects in operation and few, if any, in the pipeline (17). This technology is not ready for prime time.

The most efficient CSP plants use a circular array of flat mirrors that follow the sun and focus its energy on a central tower—the power tower. A transfer fluid is heated to temperatures of 800-1,000°F, which then produces steam to drive a turbine (14). There is only one operating power tower in the United States—in Antelope Valley, California—and it has a 5 MW capacity. Two experimental 10 MW power towers were operated by Sandia National Laboratory in the Mojave Desert for several years before being closed down (18). Spain has been the world leader in solar tower technology, thanks to a feed-in tariff system similar to that in Germany. At least Spain has good solar resources, similar to Florida’s (see Table 4.1). The Gemasolar plant near Seville, Spain, has a capacity of 17 MW. It uses thermal storage to even out the variation in the sun’s intensity and to generate power even after the sun has set. The concentrated solar energy heats a molten salt to a temperature of 565°C (1,050°F), which then goes through a heat exchanger to turn water to steam to drive a turbine and generator. The hot salt can store the heat for up to 16 hours (19).

The United States will soon take the lead in concentrated solar power, though, doubling the global capacity with a single plant—Ivanpah—at the edge of the Mojave Desert in California, about 40 miles southwest of Las Vegas. The world’s largest solar power tower plant is scheduled to be finished in 2013 and will provide a nominal capacity of 377 MW. It is huge. It has three 459-foot-tall towers surrounded by about 3,500 acres of reflecting mirrors (heliostats) that follow the sun. The mirrors focus the sun on the solar receivers at the top of the towers, where it boils water to make steam to drive a turbine to make electricity (20, 21). It uses air cooling to condense the steam, so it uses about 95% less water than wet-cooled solar thermal plants (21). Its overall efficiency for electricity production is about 29%, much higher than that of PV electricity production (22).

Another large solar power tower project is scheduled for completion in 2013—the Crescent Dunes Solar Energy Project near Tonopah, Nevada. It is rated at 110 MW and covers 1,600 acres of desert land. The 540-foot-tall central tower is more than a hundred feet taller than the towers at Ivanpah. It uses molten salt as the heat transfer fluid, similar to the Gemasolar plant in Spain. This will allow it to store solar energy for up to 10 hours (23).

WHAT IS A DOSE OF RADIATION?

The net result of any type of radiation moving through matter, including our cells and tissues, is that it deposits energy by ionizing atoms, according to the interactions described in the previous section. In order to make quantitative statements about the hazards of radiation, it is essential to define what is meant by a dose of radiation. A dose is a certain amount of energy deposited in a certain mass of material. However, there are different ways to specify the dose, depending on the type of radiation and the type of cells or tissues that are in its path.

The most basic definition of dose is called absorbed dose (D), which is given in units of gray (Gy) or rad. The official unit for absorbed dose is the gray, named after the British physicist Louis Harold Gray, who established a famous radiation laboratory in Oxford, England, now known as the Gray Institute for Radiation Oncology and Biology. One gray is 1 joule of energy from ionizing radiation absorbed in 1 kilogram of matter. An older unit for absorbed dose is the rad (radiation absorbed dose)—which is still frequently used in the United States—defined as 100 ergs of energy absorbed in 1 gram of matter. One gray is equal to 100 rads. An absorbed dose of 1 Gy is independent of the type of radiation, so 1 Gy of γ rays is equal to 1 Gy of protons is equal to 1 Gy of electrons is equal to 1 Gy of α particles, since it is always the same amount of energy absorbed in a given mass of matter.
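The unit relationships above are simple enough to verify with a few lines of arithmetic. Here is a minimal sketch in Python; the function names are my own, and only the unit definitions come from the text.

```python
# Absorbed-dose unit conversions: 1 Gy = 1 J/kg, 1 rad = 100 erg/g = 0.01 Gy.

def gray_to_rad(gy):
    """Convert absorbed dose in gray to rad (1 Gy = 100 rad)."""
    return gy * 100.0

def rad_to_gray(rad):
    """Convert absorbed dose in rad to gray (1 rad = 0.01 Gy)."""
    return rad * 0.01

# Check that 100 erg/g really is 0.01 J/kg (1 erg = 1e-7 J, 1 g = 1e-3 kg):
erg_per_g_in_j_per_kg = 100 * 1e-7 / 1e-3
assert abs(erg_per_g_in_j_per_kg - 0.01) < 1e-15

print(gray_to_rad(1.0))    # 100.0
print(rad_to_gray(250.0))  # 2.5
```

The assertion confirms the two definitions are consistent: 100 ergs per gram is exactly one-hundredth of a joule per kilogram.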

The absorbed dose is not sufficient to understand the biological effects of radiation, however, since different types of radiation can produce different levels of damage in cells. The principal effect of radiation on cells is to cause damage to the DNA. Gamma rays and electrons are not nearly as efficient in doing this as α particles and neutrons, for example. In order to compare the biological effects of different types of radiation, radiation biologists irradiate cells and determine what dose of protons or neutrons or α particles causes the same amount of damage as a given dose of γ or X-rays. The results from these experiments determine the relative biological effectiveness (RBE) of different types of radiation compared to X-rays. These values of RBE for different radiations are assessed by national and international scientific agencies such as the National Council on Radiation Protection and Measurements (NCRP) in the United States and the International Commission on Radiological Protection (ICRP) and are formulated as radiation weighting factors (WR) for different kinds of radiation. A new measure of dose, known as the equivalent dose (H), is then defined as the absorbed dose (D) times the radiation weighting factor for a particular type of radiation. The equivalent dose, H, is still given as absorbed energy in a given mass but varies by the type of radiation. The official unit is the sievert (Sv), which is the dose in Gy times the weighting factor, named after the Swedish physicist Rolf Sievert, who invented the ionization chamber to measure doses of radiation (6). The older but still-used unit is the rem (roentgen equivalent man), which is the dose in rads times the weighting factor (7). One Sv equals 100 rem. The various units for dose are given in Table 7.1 and the radiation weighting factors are given in Table 7.2.

Because of the radiation weighting factors, one Sv is equal to one Gy for γ radiation or electrons, but for neutrons or α particles, one Gy produces about 20 Sv worth of biological damage. For this reason, doses are normally specified in sievert or rem so that the biological effects are independent of the type of radiation. Then you can predict that the biological effects of 1 Sv of γ radiation are the same as 1 Sv of α particles or 1 Sv of electrons or 1 Sv of neutrons.
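The equivalent-dose calculation, H = D × WR, can be sketched in a few lines of Python. The weighting factors for photons, electrons, and α particles mirror the values quoted in the text; the proton and neutron values are the standard ICRP figures and should be treated as illustrative.

```python
# Equivalent dose H (Sv) = absorbed dose D (Gy) x radiation weighting factor W_R.

RADIATION_WEIGHTING = {
    "photon": 1.0,    # X and gamma rays
    "electron": 1.0,  # beta particles
    "proton": 2.0,    # ICRP value, illustrative
    "alpha": 20.0,
    "neutron": 20.0,  # upper end of the energy-dependent range
}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    """Equivalent dose in sievert: absorbed dose in gray times W_R."""
    return absorbed_dose_gy * RADIATION_WEIGHTING[radiation]

print(equivalent_dose_sv(1.0, "photon"))  # 1.0
print(equivalent_dose_sv(1.0, "alpha"))   # 20.0
```

This makes the point in the paragraph above concrete: the same absorbed energy (1 Gy) carries a twentyfold difference in biological weight depending on the type of radiation.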

There is one other factor to consider when a person is exposed to radiation, and that is the specific parts of the body that are exposed. For example, radia­tion workers may have a higher exposure to their hands or feet than the rest of

Подпись: Table 7.1 VARIOUS METHODS TO SPECIFY DOSE Dose Symbol Unit Value Relationship Absorbed dose D gray (Gy) 1 Joule/kg 100 rad Absorbed dose D rad 100 erg/gm 0.01 Gy Equivalent dose H sievert (Sv) millisievert (mSv) D(Gy) X WR 0.001 Sv 100 rem Equivalent dose H rem millirem (mrem) D(rad) X WR 0.001 rem 0.01 Sv

Table 7.2 Radiation Weighting Factors (ICRP)

Type of Radiation                                WR
Photons (X and γ)                                1
Electrons (β)                                    1
Protons                                          2
α particles, fission fragments, heavy nuclei     20
Neutrons                                         2.5–20 (depending on energy)

Table 7.3 Tissue Weighting Factors (ICRP)

Tissue                                               WT     Total WT
Bone marrow, breast, colon, lung, stomach            0.12   0.60
Bladder, esophagus, gonads, liver, thyroid           0.05   0.25
Bone surface, brain, kidneys, salivary glands, skin  0.01   0.05
Remaining tissues                                    0.10   0.10

their body. People who breathe in radon are exposing the lungs. As it turns out, there are differences in the radiation sensitivity of different tissues, which leads to another factor, known as the tissue weighting factor (WT), to compare the various tissues (Table 7.3). Tissues with larger values of WT are more sensitive than those with smaller values of WT. There is a big difference in sensitivity between the skin, for example, and the lungs or breast. Total WT is the summation of individual tissue weighting factors for each of the tissues. For the total human body, the total weighting factor is one, which just means that the sensitivity of the entire body is the sum of the sensitivities of the individual tissues. The effective dose is the equivalent dose times the tissue weighting factor.
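As a sketch of how effective dose combines the pieces, here is a minimal Python example using the ICRP tissue weighting factors from Table 7.3; the function and dictionary names are illustrative, and the uniform-exposure case checks that the factors sum to one, as the text states.

```python
# Effective dose = sum over tissues of (equivalent dose to tissue x WT).
# Tissue weighting factors per ICRP (Table 7.3).
TISSUE_WT = {
    "bone_marrow": 0.12, "breast": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "bladder": 0.05, "esophagus": 0.05, "gonads": 0.05, "liver": 0.05, "thyroid": 0.05,
    "bone_surface": 0.01, "brain": 0.01, "kidneys": 0.01, "salivary_glands": 0.01, "skin": 0.01,
    "remainder": 0.10,
}

def effective_dose_sv(tissue_doses_sv):
    """Sum equivalent doses (Sv) to each tissue, weighted by tissue sensitivity."""
    return sum(TISSUE_WT[t] * h for t, h in tissue_doses_sv.items())

# Uniform whole-body exposure of 1 Sv: the weights sum to 1,
# so the effective dose is also 1 Sv.
uniform = {t: 1.0 for t in TISSUE_WT}
print(round(effective_dose_sv(uniform), 2))  # 1.0

# Exposure concentrated in the lungs (e.g., inhaled radon): 10 mSv to lung only
# contributes 0.12 x 10 mSv = 1.2 mSv of effective dose.
print(round(effective_dose_sv({"lung": 0.010}) * 1000, 2))  # 1.2
```

This is why a dose confined to a radioresistant tissue like skin contributes far less effective dose than the same dose to the lungs or breast.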

Health and Environmental Consequences

In spite of the meltdown of reactor cores in three of the reactors, this was no Chernobyl because the primary reactor containment structures were not destroyed, though they may have been damaged, and the release of radioactivity was limited to three major spikes in the days after the tsunami. The amount of radioactivity released to the air from the three damaged reactors at Fukushima was about 18% of the 131I-equivalent radioactivity5 released from Chernobyl, including 500 PBq of 131I and 10 PBq of 137Cs (compared to 1,760 and 85 PBq, respectively, at Chernobyl) (40). 90Sr was not released from the accident, probably because the core temperatures were not high enough to volatilize it (46). There were initial fears that water supplies in Tokyo to the south would be in danger, with 131I testing above Japan’s stringent standards for infants, which are 10 times lower than international standards. However, the level receded below the standard by the next day, so restrictions were lifted (47).

Much of the radioactive cloud was deposited locally around the plant, giving the highest dose levels, but some was also carried by winds to the northwest of the plant, where it fell in spotty locations from rain, similar to the pattern at Chernobyl. A 20-kilometer (12 mile) mandatory evacuation zone was established, with evacuations also from a few communities beyond the 20-kilometer evacuation zone, known as the “deliberate evacuation zone.” The highest levels outside the 20-kilometer evacuation zone were at the village of Iitate, 30 kilometers northwest, which had about 6,000 residents. About 100,000 people were evacuated and will not be able to return for at least six months. None of the people evacuated were exposed to high enough levels of radiation to suffer health consequences. Much of the radioactivity was due to 131I, which decayed away within a couple of months, but soil contamination with 137Cs will remain for years and will have to be cleaned up, and some areas may remain as long-term exclusion zones. However, the highly contaminated areas are much smaller than those near Chernobyl (48). The Japanese government has set a target of reducing the radiation from the accident to below 20 mSv/yr in the evacuation zone and less than 1 mSv/yr in areas such as schools frequented by children (49). As of the end of 2012, about half of the original evacuated area is now accessible without protective gear or monitoring (dose rate less than 20 mSv/yr), but overnight stays are not allowed (40).

How many people will die from Fukushima? There were 3 deaths among nuclear plant workers from the earthquake and tsunami. In addition, there were about 15 workers who were injured in the explosions, but none of the injuries was life-threatening. There was much hysteria about the condition of the workers who were responding to the accident under the terrible conditions left by the tsunami and earthquake. Lurid headlines proclaimed that the workers were “suicide workers” who would die from the high doses of radiation they were getting. “ ‘I don’t know any other way to say it, but this is like suicide fighters in a war,’ said Keiichi Nakagawa, associate professor in the Department of Radiology at the University of Tokyo Hospital” (50). But this is sheer nonsense. Two workers got radioactive water into their boots while they were working at the plant and were taken to the hospital with great fanfare. They suffered erythema (skin reddening similar to sunburn) from the localized radiation to their feet. They received 2–3 Sv to their feet, which caused the skin reddening (42). To put this into context, the standard radiation treatment for cancer is 2 Sv fractions given five times a week for six weeks. Thus, they received the equivalent of about one radiotherapy treatment to their feet, and the feet are a very radioresistant part of the body, with an allowable dose of 500 mSv per year for radiation workers in the United States.

By the end of 2011, 167 workers had received doses of over 100 mSv, with 135 of them getting between 100 and 150 mSv, 23 getting between 150 and 200 mSv, 3 getting between 200 and 250 mSv, and 6 getting over 250 mSv (51). The normal international allowable dose limit for radiation workers is 50 mSv per year but in an emergency is 500 mSv; under the circumstances, the dose limit was set to 250 mSv for the Japanese workers (52). For adult workers, the risk of dying of cancer is 4% per Sv, so the workers who got 250 mSv would have about a 1% chance of dying of cancer from the radiation. Of course, their normal risk of cancer death is much higher, about 25% (53). These were hardly “suicide workers.” In truth, the number of workers exposed to significant levels of radiation is so small that there is likely to be at most a single death from cancer.
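The risk arithmetic here is a one-line linear extrapolation, sketched below in Python; the 4% per Sv coefficient for adult workers is the one cited in the text, and the function name is my own.

```python
# Lifetime fatal-cancer risk for adult workers, using the 4%-per-sievert
# coefficient cited in the text (linear extrapolation to low doses).
RISK_PER_SV_ADULT = 0.04

def excess_cancer_risk(dose_msv):
    """Excess lifetime risk of fatal cancer from a dose given in mSv."""
    return RISK_PER_SV_ADULT * (dose_msv / 1000.0)

# A worker at the emergency limit of 250 mSv:
risk = excess_cancer_risk(250)
print(f"{risk:.0%}")  # 1% added risk, against a ~25% baseline cancer mortality
```

Note that this small increment rides on top of a baseline lifetime cancer mortality of roughly 25%, which is why no excess deaths are expected to be detectable.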

A year after the accident, the health effects have come into clearer focus as studies on the exposed population were reported. A press briefing in Washington, D.C., on March 2, 2012, by the Health Physics Society reported that about 20,000 people died from the earthquake and tsunami but none has died from radiation effects. Of 10,000 people nearest the reactors, nearly 60% had doses of less than 1 mSv and 40% had doses of between 1 and 10 mSv. Seventy-one received doses between 10 and 20 mSv, and 2 received doses between 20 and 23 mSv. Recall that the annual average dose for Americans is 6.2 mSv per year (Chapter 8). The doses to the Japanese public from the reactor accident are so low that the increase in cancer incidence is estimated to be about 0.001%. This is so low that there will never be any epidemiological studies that could detect any excess cancer risk (54). The health risks come almost entirely from the stress associated with the widespread destruction of homes and towns from the earthquake and tsunami, with fears of radiation on top of that. In short, people’s lives have been scrambled, and the ongoing stress contributes to depression and heart disease, the greatest health consequences from the disaster (55).

The World Health Organization (WHO) released a report in early 2013 assessing the risks of getting cancer (not dying of cancer) for people in the deliberate evacuation zone. The highest doses were to people in Namie Town, which had about 21,000 people before the accident (56), and Iitate with about 6,000 residents. Using very conservative assumptions that are very likely to overestimate the doses, the average dose to residents of Namie Town was about 25 mSv, and Iitate Village residents had about half that (15 mSv). Doses included both external doses and internal doses from eating food, which was assumed to be grown in the same neighborhood, an unlikely assumption. The report also assumed that the DDREF was 1 (see Chapter 7), which likely overestimates the risk. The report concluded that male infants in the highest exposed region (Namie Town) could have an additional 7% increase in lifetime risk of leukemia compared to the normal baseline rate, an increase of 6% in breast cancer over baseline for female infants, a 4% increase for all solid cancers in female infants, and a 70% increase in thyroid cancer for female infants. Since the normal thyroid cancer incidence is very low in Japan (0.75%), this represents an absolute increase of only about 0.5 percentage points. And, of course, thyroid cancer is rarely fatal. Expected cancer risks for older children and adults are lower (46). The report does not specify the number of people in the various age groups.

Let’s think about these estimates to see if they are realistic. Remember that there has been no increase in leukemia or breast cancer in people exposed to higher doses after the Chernobyl accident, so it is likely that the WHO has overestimated the cancer risk. If you use the ICRP risk estimate of 5% per Sv (for low-dose radiation) for a population that includes children (see Chapter 7), you would expect about 26 additional cancers among the 21,000 people in Namie who were exposed on average to 25 mSv and about 4 additional cancers among the 6,000 people in Iitate. That would compare to a normal expectation of 25% cancer risk, or 5,250 cancers in Namie and 1,500 in Iitate. In other words, the additional cancers expected are such a small number that they will never be measurable.
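The back-of-the-envelope estimate in this paragraph is simple enough to check: multiply population by average dose to get person-sieverts of collective dose, then apply the 5% per Sv ICRP coefficient cited in the text. The function name below is my own.

```python
# Expected excess cancers = population x average dose (Sv) x risk per Sv.
# Uses the ICRP low-dose coefficient of 5% per Sv cited in the text.
RISK_PER_SV = 0.05

def expected_excess_cancers(population, avg_dose_msv):
    collective_dose_person_sv = population * avg_dose_msv / 1000.0
    return collective_dose_person_sv * RISK_PER_SV

print(expected_excess_cancers(21000, 25))  # 26.25 -> about 26 in Namie
print(expected_excess_cancers(6000, 15))   # 4.5  -> about 4 in Iitate
# Baseline expectation at 25% lifetime cancer incidence:
print(int(21000 * 0.25), int(6000 * 0.25))  # 5250 1500
```

The contrast between ~26 excess cancers and ~5,250 baseline cancers in Namie is the whole argument: the signal is far too small to detect epidemiologically.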

Both the Chernobyl and the Fukushima nuclear accidents were rated as a 7 (major accident) on the International Nuclear and Radiological Event Scale (INES), a logarithmic scale similar to the Richter scale used for earthquakes (57). However, even though both Chernobyl and Fukushima were major accidents, there are huge differences between them. At Chernobyl, the accident was caused by operator error and faulty reactor design; there were 28 deaths from radiation exposure, 15 deaths from thyroid cancer, 19 deaths from uncertain causes, and a lifetime expectation of 4,000 additional cancer deaths; and there was widespread environmental contamination necessitating the evacuation of 336,000 people. At Fukushima, the accident was not due to operator fault or reactor design but was the result of an unprecedented earthquake and tsunami that killed around 20,000 people; about 18% as much radiation was released as at Chernobyl; there were no deaths from radiation exposure, and at most a single cancer death is expected among the workers and possibly 25-30 among people in the path of the fallout; there was fairly widespread contamination necessitating the temporary evacuation of 100,000 people, but the long-term effects should be confined to a much smaller area. Both were bad accidents, but the consequences of Fukushima are far less severe.

ANATOMY OF A REACTOR

The Wolf Creek nuclear power plant sits on the flat plains of Kansas about 60 miles south of Topeka and 4 miles from Burlington, about 200 miles east of the wheat fields I farmed as a kid. A 5,090-acre lake filled with crappie, walleye, largemouth and smallmouth bass, and other game fish provides cooling water for the reactor and also provides a fishing mecca for Kansans. The 10,500-acre site, including the reactor complex and the lake, has about 1,500 acres of wildlife habitat, and about one-third is leased to area farmers and ranchers. The plant itself takes up less than half a square mile. The lake provides habitat for waterfowl, as well as for bald eagles and osprey. It is hard to imagine that electricity for 800,000 people is generated in this pristine area of farmland and nature preserve.

Security was tight when I visited the plant. At the security checkpoint my car was scrutinized with mirrors to look underneath and scanners to detect anything hazardous being brought onto the site. After passing through the gate and an active vehicle barrier with a guard presiding, I drove to the Security Building. After meeting Tom Moreau, my tour guide, who is a radiation specialist originally trained in the Navy for nuclear submarines, I gave my identification and went through the security entrance, similar to that for an airport, with an air puffer to detect any chemical residue from explosives, picked up my visitor identification badge, and was let through a locked security gate. I was then in the plant and was not allowed to ever leave the side of my guide. This entire area is surrounded by razor wire and is constantly observed by cameras and guards with machine guns in guard towers.

Earplugs were handed out before we entered the turbine building, and I soon found out why. This building houses the enormous feedwater pumps, condensers, high- and low-pressure turbines, and the main generator, which produces 1,200 megawatts at 25,000 volts that goes to step-down transformers and into the electrical grid. It is very clean but very noisy, hence the earplugs. This building is essentially the same as one would find in a coal-fired plant or a gas-fired plant (see Chapter 3); the only essential difference is the source of heat to make the steam to drive the turbines and the generator.

Entering the Radiation Control Area required activating doors using both my badge and Tom’s badge. This area is the heart of the reactor plant, and all who enter must have documentation and prior approval. Computer screens in this area observe radiation monitors throughout the reactor containment and auxiliary building. We were given radiation badges and took off our earplugs, then went through other secure doors to the containment building, which consists of reinforced concrete 3 feet thick, lined with a leak-tight carbon steel barrier (see Figure 5.1). This building, 208 feet high by 140 feet in diameter, houses the reactor vessel, which is 44 feet high and 14 feet in diameter with special alloy steel walls 5.4 to 6.8 inches thick.

Inside the reactor vessel is the actual reactor with the fuel elements and control rods. The fuel elements are uranium fuel pellets the size of a pencil eraser stacked into 12-foot-long fuel rods, which are bundled into fuel assemblies containing 264 fuel rods. The reactor contains 193 fuel assemblies.

[Figure: Typical pressurized-water reactor]

The water circulating through the reactor is heated to 585°F at 2,200 pounds per square inch. Because the high pressure prevents the water from boiling, this type of reactor is known as a pressurized water reactor, the most common type of reactor in the United States and the world. This water in the primary loop circulates through the four steam generators, which form the secondary loop, all within the containment building. No water in the primary loop is in contact with the water in the secondary loop, so no radiation can be picked up by the steam that drives the turbines. The steam is then transferred to the turbine building, where it produces electricity.2 This area by the containment building is completely quiet, in contrast to the turbine building.

The reactor had just been refueled about a month before I was there, and I was curious to see the used fuel assemblies, which are highly radioactive and physically hot. After going through additional security doors, we entered the spent fuel pool area, which is a concrete tank filled with borated water to absorb neutrons. The pool is surprisingly small, only about the size of a ranch house. The blue water
is extremely clear, and it is impossible to sense the radiation emanating from the spent fuel. If you are lucky, you can see the blue Cherenkov radiation emitted by electrons traveling faster than the speed of light in water, but I was unlucky. You can see a matrix of the long, slender fuel assemblies under the water.

After leaving the reactor spent fuel pool, Tom and I had to be checked for any radiation picked up on our tour. When I put my hand in the sleeve of the radiation monitor and stood there, the red light blinked, indicating that I had too much radiation. It turned out, as it usually does, that the hardhat—which is plastic—gets static electricity and attracts charged daughter products from radon decay, which is emanating from the spent fuel pool. After removing my hardhat, I got a green light, indicating that I was free of radiation. My radiation badge indicated that I got zero mrem3 after touring the reactor plant. It is no more dangerous to be in a nuclear power plant than in any other type of power plant. Workers in the plant are allowed a maximum of 500 mrem (5 mSv) per year, but it is extremely rare for any worker to approach that level. The average exposure of workers in nuclear power plants who received a measurable dose of radiation was only 100 mrem (1 mSv), less than a third of average natural background radiation (4).

HOW BAD IS PLUTONIUM?

Sensational assertions have been made that plutonium is the most hazardous substance known to mankind. But is that right? In fact, it is the greatest myth perpetuated about radiation. Plutonium is an α emitter, no different from radon or radium or any other radionuclide that undergoes α decay. However, plutonium is not nearly as radioactive as an equivalent mass of radon because it has a much longer half-life (24,100 years vs. 3.8 days) and radioactivity is inversely related to half-life. We are all exposed to varying levels of radon in our homes, and some homes have high enough levels to perhaps require mitigation when it is greater than 150 becquerels (radioactive decays per second) per cubic meter (see Chapter 8). Plutonium is primarily a hazard if it is attached to microscopic particles and is trapped in the lungs. Certainly it is true that plutonium could cause lung cancer if it were in a high enough concentration. This is exceedingly unlikely, though, even after the Chernobyl nuclear accident. So the claim of Caldicott that the plutonium released after Chernobyl could have killed everyone on earth is simply not credible. This does not mean that we should not respect the risks of exposure to radiation. It just means “watch out for publicity-seeking sensationalists!”
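The claim that activity scales inversely with half-life can be made concrete. This sketch computes specific activity (becquerels per gram) from A = λN, where λ = ln 2 / t½ and N is the number of atoms per gram; the half-lives are the standard values given in the text, and the function name is my own.

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
SECONDS_PER_YEAR = 3.156e7

def specific_activity_bq_per_g(half_life_s, atomic_mass):
    """A = lambda * N, with lambda = ln2 / half-life and N = atoms per gram."""
    decay_constant = math.log(2) / half_life_s   # per second
    atoms_per_gram = AVOGADRO / atomic_mass
    return decay_constant * atoms_per_gram

pu239 = specific_activity_bq_per_g(24_100 * SECONDS_PER_YEAR, 239)  # ~2.3e9 Bq/g
rn222 = specific_activity_bq_per_g(3.8 * 86_400, 222)               # ~5.7e15 Bq/g

print(f"Pu-239: {pu239:.1e} Bq/g")
print(f"Rn-222: {rn222:.1e} Bq/g")
print(f"Radon is roughly {rn222 / pu239:.0e} times more active per gram")
```

Gram for gram, radon is millions of times more radioactive than plutonium, which is the quantitative content of the sentence above about half-lives.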

The other route for plutonium exposure is from consumption through food or water, which could possibly be a hazard from nuclear waste disposal (see Chapter 9). As it turns out, though, plutonium is not readily absorbed by the gastrointestinal tract; only about 0.05% of ingested plutonium is absorbed (37). The plutonium that is absorbed acts similarly to a toxic metal, depositing about equally between the liver and the skeleton.

We actually do know something about the health risks of plutonium from workers at Los Alamos National Laboratory who worked on developing the atomic bomb and received higher accidental exposures than any other group of people. These people formed a group known as the UPPU (you pee plutonium) Club consisting of 26 members! George Voelz, who was later the director of the Los Alamos Health Division, was interviewed in 1995 about the group. According to him, of the original 26 members with high doses of plutonium measured in their urine in 1945, only 7 had died. “One was a lung cancer death, and two died of other causes but had lung cancer at the time of death. All three were heavy smokers. In fact 17 of the original 26 were smokers at the time. . . . There were three other deaths due to heart disease and one due to a car accident. According to the national mortality rate, one would have expected 16 deaths in this group by this time, so the mortality rate for the group is about 50% lower than the national average” (38). So much for the myth that plutonium is the most toxic substance on earth!

SUMMARY

The fundamental mechanisms of how radiation causes damage are well understood. X-rays and γ rays interact through the photoelectric effect, the Compton effect, and pair production to knock electrons out of atoms and accelerate them. These electrons, as well as other charged particles such as protons and α particles, cause further ionizations of atoms in their path. Ionization of atoms can break chemical bonds, leading to damage to cells. The most critical target for damage is DNA.

Radiation causes damage to DNA in the form of single and double strand breaks, base damage, and DNA-protein crosslinks that depend on the dose given. And yet, most of this damage neither kills the cell nor causes mutations that lead to cancer, because of a variety of DNA repair systems that repair the damage and the large excess of non-coding DNA. The most important form of DNA damage is the formation of double strand breaks, which can lead to chromosomal aberrations such as deletions and reciprocal translocations. These types of damage do not generally kill the cell, but they can lead to the initiation of a cancer by activating cellular oncogenes or deleting tumor suppressor genes in a cell. This initiating event is by no means a guarantee that an actual cancer will form. That depends on a number of further genetic changes in a cell, such as activation of more oncogenes, inactivation of additional tumor suppressor genes, mutations in DNA repair genes, and induction of genetic instability, all of which are improbable events. These many necessary molecular changes are the reason there is a latency period of many years from the time of exposure to radiation until a cancer forms—if it forms at all, and in most cases it does not.

Furthermore, we know a great deal about the probability that cancer will develop after a given dose of radiation based on various human exposures to radiation, particularly the Japanese bomb survivors. Indeed, we probably know more about the carcinogenic effects of radiation than of any other physical or chemical agent. Based on this information, we can confidently predict the risk of getting cancer from a particular dose of radiation.

So it is a myth that exposure to radiation will inevitably cause cancer, though this myth is widely propagated by anti-nuclear activists and is probably what most people believe. What really matters is the dose that you are exposed to. And we are all exposed to radiation as an unavoidable consequence of living on earth. The next chapter will explore the doses of radiation that we are exposed to from natural sources and from medical procedures.

Breeder Reactors

And that is not the end of the story either. Current and planned reactors are nearly all based on using uranium enriched with 235U as a fuel. But there are reactors that can use 238U to make new nuclear fuel. These are known as breeder reactors or fast neutron reactors. Recall from Chapter 6 that 238U does not fission when it is bombarded with neutrons—it is the 235U that absorbs the slow neutrons and undergoes fission. But 238U does have a very interesting property. It can absorb a fast neutron to turn into 239U, which rapidly β-decays into neptunium (239Np), which in turn rapidly β-decays to form 239Pu—a fissionable isotope. In fact, part of the power from a standard light water reactor2 comes from fission of 239Pu after it has been created. Since more than 99% of natural uranium is 238U, there is a very large amount of potentially usable uranium that can be converted to plutonium and then burned in a reactor. Use of breeder reactors in a nuclear fuel cycle would extend the supply of usable fuel by a factor of about 60 (54).
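The breeding chain described above can be written out explicitly. The half-lives (about 23 minutes for 239U and about 2.4 days for 239Np) are standard values added here for context; they are not given in the text.

```latex
% Conversion of fertile 238-U into fissile 239-Pu by fast-neutron capture:
\[
^{238}\mathrm{U} + n \;\longrightarrow\; {}^{239}\mathrm{U}
\;\xrightarrow[t_{1/2}\,\approx\,23\ \text{min}]{\beta^-}\; {}^{239}\mathrm{Np}
\;\xrightarrow[t_{1/2}\,\approx\,2.4\ \text{d}]{\beta^-}\; {}^{239}\mathrm{Pu}
\]
```

Both β decays are fast on a reactor timescale, which is why plutonium accumulates in any reactor containing 238U.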

The design of a fast neutron reactor is quite different from a standard pressurized water reactor or a boiling water reactor (see Chapter 5). Water is used as a moderator in a standard reactor to slow down neutrons so that they can efficiently cause fission in 235U, but a fast neutron reactor requires fast neutrons, so no moderator can be used. Plutonium can be used as the fuel because it fissions after absorbing a fast neutron and it releases more neutrons on average than 235U. Uranium enriched to 20% or 30% 235U can also be used—it can fission with fast neutrons, but much less efficiently than with slow neutrons, so a higher concentration of 235U has to be used. A blanket of 238U is wrapped around the core of the reactor, and some of the fast neutrons that escape from the fission of plutonium are absorbed by 238U to create more 239Pu. The net result is that more plutonium is produced than is burned in the core—hence the name “breeder” reactor. The blanket must be recycled to extract the new plutonium fuel, which can then be made into fuel pellets to be burned in a reactor to generate power and more fuel (54). Fast neutron reactors can also be built in a different design configuration to burn up plutonium isotopes and other transuranics that pose a problem for nuclear waste. In this case, the fast neutron reactors are called “burners” (55, 56).

There are some issues to consider with a fast neutron breeder reactor. Water can’t be used for heat transfer from the reactor to the turbine because it slows down neutrons. The most common alternative is liquid sodium, which does not interact with neutrons and has good heat transfer properties. The one downside is that it is highly flammable when exposed to air or water. Standard light water reactors operate at high pressures and temperatures, while a sodium breeder reactor operates at a high temperature but low pressure, since sodium remains liquid up to a temperature of 1,621°F. That makes it easier to prevent sodium leaks that might contact water or air. Another liquid metal that can be used for cooling and heat transfer is liquid lead, but it is highly corrosive. The design requirements and the need for recycling plants to recycle the fuel mean that a breeder reactor program is an expensive option. It will not be economically viable as long as there is an adequate supply of uranium for conventional reactors (54). A final consideration is that breeder reactors use plutonium for fuel and generate more plutonium. This raises the risk of diversion of plutonium to terrorists or rogue countries who might try to make a bomb. Clearly, the breeder reactor fuel cycle would have to be tightly controlled. On the flip side, fast neutron reactors can be designed to burn up plutonium isotopes and other transuranics, greatly reducing the problem of long-term storage of spent nuclear fuel.

Is this technology just a dream for the future? In fact, there are over 400 reactor-years of experience with breeder reactors. The first breeder reactor was developed in the United States in Idaho at what is now known as the Idaho National Laboratory (INL). INL has a long and storied history as the lead national laboratory for nuclear energy research and development. On December 20, 1951, the world’s first breeder reactor—the Experimental Breeder Reactor-I (EBR-I)—began operation at INL. It produced about 100 kW of electricity and ran until 1964, proving that a reactor could be used to generate more fuel than it used. A second-generation breeder—the EBR-II—was also built at INL and operated for 30 years until 1994, producing 20 MWe and demonstrating that a breeder reactor could run for decades with no corrosion from the liquid sodium coolant at high temperatures of over 850°F. The EBR-II was also a research reactor that led to the integral fast reactor (IFR) (40).

The IFR was designed as a complete facility that included the reactor, a special kind of reprocessing (pyroprocessing), and fuel fabrication. It was intrinsically safe because the reactor core could not melt down. Even if there was a complete loss of power to operate pumps to circulate sodium, the core would heat up and cause expansion of the sodium, causing convection currents to circulate sodium. And since liquid sodium doesn’t boil until a very high temperature (883°C or 1,621°F), there is no risk of a steam explosion. Numerous tests showed that the reactor would just go subcritical and fission would halt. The IFR would have led to a full-scale commercial breeder reactor but it was killed by Congress in 1994, three years before it was to come online (57). As a result, the United States has no current breeder reactors in operation. However, GE-Hitachi has followed up on the IFR design and is developing the Prism fast neutron reactor that has two modules of 311 MWe each. It is not designed as a breeder but rather as a burner reactor that will burn up plutonium and other transuranics, reducing the problem of spent nuclear fuel (54).

France also has years of experience with breeder reactors in its Phenix program. The Phenix reactor was a sodium-cooled breeder reactor that had a capacity of 250 MWe and ran from 1973 to 2009, when it was shut down. A much larger 1.2 GWe breeder reactor—the SuperPhenix—was built from 1974 to 1981 but did not begin generating power until 1985 and then seldom operated at full capacity. It sparked enormous protests from environmentalists, including a march of 60,000 protesters in 1977 during construction and a rocket-propelled grenade (RPG) attack in 1982 attributed to the international terrorist Carlos the Jackal. During its checkered history, it was shut down more than it was operating, partly due to fixing technical problems but more often due to political and management issues. Power production was halted in December 1997 after 11 years of part-time operation and was not restarted due to a court ruling and a political decision by Prime Minister Lionel Jospin to close down the reactor (58). As part of the Generation IV nuclear reactor program, France is working on the design of the Astrid (advanced sodium technological reactor for industrial demonstration), and the French government is supporting its development. A final decision on construction is expected by 2017 (54).

A few other countries also have experience with fast breeder reactors. The former USSR built the BN-600 sodium-cooled breeder reactor, which began operating in 1980 and continues to the present. It produces a nominal 600 MWe and has one of the best operating records of any Russian reactor. A smaller version, the BN-350, was operated in Kazakhstan from 1972 to 1999, primarily to run a desalination plant. A new version, the BN-800, is under construction. Russia has sold two BN-800 breeder reactors that are under construction in China. Japan built a breeder reactor, the Monju, in 1994, but a sodium leak led to its shutdown for 15 years until it was restarted in 2010. Germany and England also have some experience with fast neutron reactors (54).

So, the possibility of breeder reactors is not just a fairy tale; they have a substantial history and are being worked on in numerous countries, both to extend the supply of nuclear fuel and to burn up plutonium and other actinides. These reactors are part of the international plan for Generation IV reactors that are intrinsically safe. Uranium may not be a renewable energy source, but it is a sustainable one over a very long time horizon.