
About Those Accidents

THE SCARE, MARCH 16, 1979

A nuclear power plant is undergoing an emergency shutdown procedure known as a “scram” when there is an unusual vibration and the coolant level drops precipitously. Subsequent investigation by a shift supervisor reveals that X-rays of welds have been falsified and other problems exist with the plant that could potentially cause a core meltdown that would breach the containment building and cause an explosion. However, the results of the investigation are squelched and the plant is brought up to full power. The shift supervisor takes the control room hostage but is then shot by a SWAT team as the reactor is scrammed. A meltdown does not actually occur.

No, this did not really happen, but these events—portrayed in the movie The China Syndrome—evoked a scenario in which a nuclear core meltdown could melt its way to China and contaminate an area the size of Pennsylvania. It also exposed a nuclear power culture that covered up safety issues rather than fixing them. It made for a compelling anti-nuclear story that scared a lot of people.

And then a real core meltdown happened, 12 days later.

THREE MILE ISLAND, MARCH 28, 1979

How the Accident Happened

The worst commercial nuclear power reactor accident in US history began on Three Mile Island, an island in the Susquehanna River three miles downstream from Middletown, Pennsylvania (hence its name). Two nuclear reactors were built on this island, but one of them (TMI-1) was shut down for refueling while the other one (TMI-2) was running at full power, rated at 786 MWe. At 4:00 a.m., what should have been a minor glitch in the secondary cooling loop began a series of events that led to a true core meltdown, but no China syndrome occurred and there was little contamination outside the plant. Nevertheless, it caused panic, roused anti-nuclear sentiment in the country, and shut down the construction of new nuclear power plants in the United States for decades.

The nuclear reactors at Three Mile Island were pressurized water reactors (PWR), the type of reactor that Admiral Rickover had designed for power plants in US Navy nuclear submarines (1). About two-thirds of the 104 nuclear reactors in the United States are of this design. The fuel rods containing the enriched uranium are in the 4- to 8-inch thick high-tensile steel reactor vessel. When the control rods are removed, fission begins and the core heats up.

Water circulates through the reactor core and serves both as a moderator to slow down the neutrons and as the heat-transfer medium. The water heats up to about 585°F at a pressure of 2,200 pounds per square inch but it does not boil. This high-pressure, hot water is pumped through a heat exchanger that generates steam in a secondary cooling loop and cools the high-pressure water in the primary loop. The high-pressure steam in the secondary loop then goes through the turbines, turning the generator to produce electricity. A condenser converts the steam back to water that recirculates through the steam generator. The main feedwater pumps are critical to maintaining this flow of water to extract heat from the reactor core. In principle, a coal-fired power plant and a nuclear power plant are similar in that they both make steam to turn turbines and a generator. The big difference is that you can turn off a coal-fired power plant and it cools down right away. With a nuclear power plant, it takes a long time to cool down after you shut it down by inserting the control rods (see Chapters 6 and 9 for more details about fission).
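Why the slow cool-down? Even after the control rods stop the chain reaction, the accumulated fission products keep releasing energy as they decay. Here is a minimal sketch of that decay heat in Python, using the textbook Way-Wigner approximation; the formula and its constants are standard reactor-physics rules of thumb, not numbers from this book.

```python
# Decay heat after shutdown, Way-Wigner approximation:
#   P(t)/P0 = 0.066 * (t**-0.2 - (t + T)**-0.2)
# where t = seconds since shutdown, T = seconds of prior operation.

def decay_heat_fraction(t, T=365 * 24 * 3600):
    """Fraction of full power still produced t seconds after a scram,
    for a core that ran for T seconds at full power (default: one year)."""
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

for label, t in [("1 second", 1), ("1 minute", 60), ("1 hour", 3600),
                 ("1 day", 86400), ("1 month", 30 * 86400)]:
    print(f"{label:>8}: {100 * decay_heat_fraction(t):5.2f}% of full power")
```

The output tells the story: roughly 6% of full power one second after shutdown and still about half a percent a day later. For a core producing about 3,000 MW of heat, half a percent is around 15 MW that the cooling water must carry away long after fission has stopped.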

The reactor core and steam generator are contained within a primary containment vessel that has steel-lined walls of concrete 3 feet thick. The water that circulates through the reactor core never goes outside the primary containment structure, and the water in the secondary loop is not directly exposed to the reactor core. The most critical component of a nuclear power plant is the cooling water, because if the cooling water stops flowing, the reactor can heat up so much that it melts the Zircaloy (zirconium alloy) cladding of the fuel rods and causes a meltdown of the fuel.

At 4:00 a.m. the graveyard shift at TMI-2 was monitoring the normal reactor operation when a pump in the secondary cooling system shut down, triggering the turbine to shut down and the reactor core to scram—the control rods are rapidly and automatically inserted into the core and fission halts. “At this point, as has been said many times before, if the operating staff had accidentally locked itself out of the control room, the TMI accident would never have happened” (1). Instead, because of poorly designed controls and warning lights, malfunctioning valves and indicators, and the chaos of clanging alarms and flashing lights, the operators made faulty decisions that led to a partial core meltdown.

As soon as the main feedwater pumps stopped working, the reactor core was no longer being cooled and pressure built up, triggering a relief valve in the pressurizer to open and relieve the pressure (see Figure 5.1 in Chapter 5). The relief valve should have then automatically closed but it did not, and the operators were not aware that it was still open. As the pressure dropped, emergency cooling water pumps turned on to flood the core with water. There was no indicator of the actual water level in the core, and the operators had no signal to tell them that the valve was stuck open. Even worse, a light on the control panel falsely indicated that the relief valve was closed when it was actually still open. The operators thought there was too much water in the core and in the pressurizer, which can be a serious problem, so they turned off the emergency cooling pumps—a fatal mistake. Water poured out of the stuck-open valve onto the floor of the primary containment building, and the core began to heat up for lack of cooling water. The operators were still unaware of the problem. At 6:00 a.m., a new shift arrived and shut off the venting of cooling water through the relief valve, but it was too late and 32,000 gallons of radioactive water had already spilled into the primary containment building. Ten minutes later the Zircaloy cladding of the fuel rods ruptured and the fuel began to melt. Eventually, it was determined that about half of the core had melted. Radiation monitors began going off and the operators finally realized they had a loss of coolant accident on their hands. They turned the emergency cooling pumps back on and appeared to have things under control by the end of the day (1-5).

The crisis was not yet over, however. Water on the floor of the primary containment building was inadvertently pumped into the auxiliary building, and on the morning of March 30, about 13 million curies of noble gases, mostly isotopes of xenon, with traces of iodine (17 curies of 131I), were released into the atmosphere (1, 6). The gases rapidly dispersed, but because of this release and confusion about the plant’s condition, Governor Richard Thornburgh advised pregnant women and preschool-age children within a 5-mile radius to leave the area, causing panic. Another concern developed during the day. When Zircaloy cladding on the fuel rods gets hot enough, it reacts with water and produces hydrogen gas. Irradiation of water also produces hydrogen gas. The hydrogen gas formed a large bubble in the pressure vessel, and there was concern that it might burn or even explode. This concern was heightened throughout Saturday, March 31, but ended on Sunday, April 1, when experts determined that it could not explode because of a lack of oxygen. In spite of the operator mistakes and faulty signals and valves, the containment building design was robust enough to contain the core meltdown without contaminating “an area the size of Pennsylvania” as dramatized in The China Syndrome.

It took another month for the reactor to achieve cold shutdown status, meaning that the reactor core had a temperature of less than 100°C, the boiling temperature of water. The crisis was over, but the consequences were not. The reactor was destroyed and had to be cleaned up and mothballed at a cost of about $975 million (1). The other reactor on the site, TMI-1, continued to operate and is licensed to run through 2034.

INTERMITTENCY

Intermittency is a problem for the baseload electricity that an electric utility must provide to its customers. When it is cloudy, rainy, or snowy, my home solar system does not give me electricity and I depend on the grid. Furthermore, I get far less solar power in the short days of winter than in the summer. Utilities have to be able to count on baseload electricity production, so they depend primarily on coal, nuclear, and hydropower plants to provide it (see Figure 4.2). Residential demand is high early in the morning as people get up and get ready for their day, then drops down but builds up through the afternoon and peaks in the evening (26). Solar is not able to meet that demand because solar energy is very limited in the early morning and evening. However, commercial demand rises during the day as people go to work and use a lot of electricity, which solar can help to meet. Cities especially need large amounts of dependable electricity to meet demand even during the night. Solar can help to meet the intermediate load during the day, especially in the Southwest where air conditioning is a big demand factor. Southern California has excellent solar resources and can get a larger proportion of its energy from solar than other parts of the country. The large concentrations of people in the East and Southeast are not so lucky. Whether solar power makes sense depends mostly on where you live.

How Does Radiation Cause Cancer?

Radiation has at its core a fundamental dichotomy in its effects on humans—on the one hand, it is frequently and successfully used to treat cancer; on the other hand, it can cause cancer. What explains this dichotomy? It all comes down to whether the dose of radiation is large enough to kill cells, or whether it is small enough to leave the cell alive with some damage that does not get repaired. Cancer therapy is all about killing cancer cells by causing so much damage that the cell cannot repair it all and dies from lethal chromosomal aberrations. A standard course of radiotherapy consists of 2 Gy fractions of radiation delivered 5 days a week for about 6 weeks, for a total dose of about 60 Gy. This regimen gives the maximum probability of killing the cancer cells while not causing excessive damage to the normal tissues that also get irradiated.

The carcinogenic (cancer-causing) effects of radiation are due to the presence of unrepaired or misrepaired DNA damage that can lead to mutations or alterations in the ability of some genes to be expressed. Radiation is not very effective at causing small mutations in individual bases of DNA—most of these get repaired. But it is very good at causing double-strand breaks (DSBs) that lead to deletions of whole sections of a chromosome, and sometimes these deleted sections may contain important genes (14). Chromosomal aberrations are a form of radiation damage that underlies many of radiation’s biological effects (15). To understand why that is important, we have to explore the molecular underpinnings of cancer.

Cancer is the result of an evolutionary process in which a normal cell undergoes genetic changes that eventually transform it into a malignant cell that can develop into cancer in a human. A single mutation is not capable of transforming a normal cell into a malignant cell. Instead, a number of individual genetic changes occur in a process known as multistep carcinogenesis. Cancer is sometimes considered to develop in three stages—initiation, promotion, and progression. Ionizing radiation, along with many toxic chemicals, is known as an initiator because it causes mutations that begin the long process of cancer. Other genetic changes occur during the process of promotion to push the cell closer to the edge of becoming fully malignant. Progression is the continued uncontrolled growth of the altered cells to form a malignant tumor. For most solid tumors, there is a latency period of 10 to 60 years from the time of exposure to radiation (initiation) to the development of a malignant tumor. It is notably shorter for leukemia, taking only 5 years or so, but the risk is over by about 15 years (7). After the Chernobyl nuclear accident (see Chapter 10), it was discovered that thyroid cancer also develops within a few years, a latency period similar to that of leukemia. Both of these cancers occur primarily in children, so the very rapid proliferation of cells in children may account for the shorter latency period. But what are these genetic changes that transform a normal cell into a malignant cell?

The Quest for Uranium

MINING FOR URANIUM

Shinkolobwe

The name rises as a phantom from the heart of the Congo. The dawn of the nuclear age began there, though no one knew it at the time. King Leopold II of Belgium claimed the Congo as his colony during the surge of European colonization in the 1870s, promising to run the country for the benefit of the native population. Instead, he turned it into a giant slave camp as he raped the country of its riches. Leopold didn’t care much about mineral wealth, preferring the easy riches of rubber, but after he died in 1909, the Belgian mining company Union Minière discovered ample resources of copper, bismuth, cobalt, tin, and zinc in southern Congo. The history-changing find, though, was high-grade uranium ore at Shinkolobwe in 1915. The real interest at the time was not in uranium—it had no particular use—but in radium, the element the Curies discovered and made famous. It was being used as a miracle treatment for cancer and was the most valuable substance on earth—30,000 times the price of gold (1). Radium is produced from the decay of uranium after several intermediates (see Figure 8.3 in Chapter 8), so radium and uranium are inevitably found together. The true value of the uranium would not become apparent until the advent of the Manhattan Project to build the atomic bomb during World War II.

Edgar Sengier, the director of Union Minière, which owned the mine at Shinkolobwe, hated the Nazis and was afraid—correctly, as it turned out—that they would invade Belgium. In 1939, as Europe was sliding into war, Sengier learned that uranium could possibly be used to build a bomb. He secretly arranged to transfer 1,250 tons of the uranium ore out of the Congo to a warehouse in New York City. There it sat until 1942, when General Leslie Groves, the man whom President Roosevelt put in charge of the Manhattan Project, found out about it and arranged to purchase it. Throughout the war, shipments of uranium continued from Shinkolobwe to the United States to be used in building the atomic bomb. The veins of uranium ore at Shinkolobwe were the purest ever found on earth—up to 63% uranium oxide, more than 200 times as rich as most uranium ores. Even the mine tailings could have up to 20% uranium. Without this incredibly rich uranium ore, it is doubtful that the United States could ever have developed the atomic bomb, at least not in time to end World War II in Japan. The mine at Shinkolobwe provided two-thirds of the uranium and most of the plutonium (derived from neutron bombardment of uranium) for the two bombs dropped on Japan (1).

The only other source of uranium known at the time was at St. Joachimstal in the Krušné hory (Cruel Mountains) in the former Czechoslovakia. This is where Marie Curie got the pitchblende that led to her isolation and discovery of polonium and radium in 1898 (see Chapter 2). Hitler annexed the Sudetenland in Czechoslovakia in September 1938 and thereby gained the uranium ore there. Hitler also captured some uranium ore from Shinkolobwe that was on the docks in Belgium when he invaded in May 1940. There was great fear in the United States and among its European allies that Germany was trying to build an atomic bomb, and indeed it was working on bomb technology under the direction of Werner Heisenberg, but it never came close to developing one. The closest it came was an incomplete reactor (2).

In one of his final acts before committing suicide, Hitler shipped 1,235 pounds of uranium oxide by U-boat to Japan to help the Japanese build an atomic bomb. The uranium was left over from Germany’s nuclear fission research and presumably came from the tailings dumps at St. Joachimstal. But the war in Europe ended before the submarine arrived in Japan, and the commander headed to North America to surrender to the US Navy. In a historical irony, the uranium most likely ended up in one of the bombs dropped on Japan (1).

General Groves thought that uranium was an extremely rare element and wanted to corner the world market for the United States, but he was wrong. Uranium turned out to be one of the more common elements in the earth’s crust, about 40 times as abundant as silver and more abundant than tin (3). Uranium was formed in the cataclysmic explosions of massive stars known as supernovas and spread throughout the universe. As the earth coalesced from the rock, dust, and gas surrounding the sun, the uranium became part of the rocky interior of the earth. The heat from its natural radioactivity was, and still is, largely responsible for the molten center of the earth, helping to make the earth a livable planet. Through the long geological history of the earth, magma flows, plate tectonics, and other physical and chemical processes led to deposits of uranium in 14 different categories of rock structures, including rich veins such as those at Shinkolobwe and St. Joachimstal, unconformities, sandstones, and conglomerates (4).

Shiprock

The reddish spire rises like a ghostly ship from the high desert lands of the Colorado Plateau in the Four Corners region where Colorado, Utah, Arizona, and New Mexico meet. To the Navajo Nation that encompasses Shiprock and the surrounding areas, it is a sacred place. But the environs of Shiprock are also the home of Leetso, the powerful yellow monster of Navajo (Diné) lore, known to the rest of us as uranium (5). In 1948 the US Atomic Energy Commission announced that it would pay a guaranteed high price for all the uranium that could be found in the United States to reduce its dependence on the Congo for uranium to build its nuclear arsenal. This led to a mining boom in the United States, centered on the Four Corners area, much of it in the Navajo lands. Over two decades, peaking in 1955-1956, uranium was mined from this area, producing the necessary ingredients for the plutonium bombs in the arms race with the Soviet Union, as well as the fuel for the nascent nuclear power industry. According to an Environmental Protection Agency (EPA) study done in 2007, there are 520 abandoned uranium mines in the Navajo Nation resulting from that flurry of mining activity, and they are now being cleaned up as Superfund sites (6).

The mining took a toll in human lives, though it was far worse at Shinkolobwe and at St. Joachimstal—which fell into Russian hands after the war, when Stalin needed uranium for his bomb development—than in the Four Corners area. The Navajo were eager for the jobs provided by the uranium mines but were not given safe working conditions, even though it was known at the time that radon in uranium mines could cause lung cancer. The simple expedient of ventilating mines to reduce radon levels made them much safer, but this was not routinely done until about 1960.

Underground miners who smoke have a far larger risk of lung cancer than those who do not (see Chapter 8). The Navajo workers smoked far less than the white workers and had far lower rates of lung cancer. Age-adjusted annual mortality rates for lung cancer in Native Americans from New Mexico (Navajo, Pueblo, and Apache) rose from 5.3 (per 100,000) in 1958-1962 to 10.8 in 1978-1982. The white population had a far higher rate, however, going from 38.5 to 70.4 during the same periods because of higher smoking rates (7). The rate for Navajo people may be even lower than for Native Americans in general, since the age-adjusted lung-cancer mortality rate for Navajos was 4.8 in 1991-1993 (8). These are general lung cancer mortality rates, not specifically for miners. Navajo who were uranium miners had far higher rates of lung cancer. Among 94 Navajo who were diagnosed with lung cancer between 1969 and 1993, 63 had been uranium miners. The relative risk for the miners was 28.6 compared to non-mining control populations of Navajo (9).

The US Congress passed the Radiation Exposure Compensation Act (RECA) in 1990 to compensate former miners who developed lung cancer or other lung diseases attributable to uranium mining. The RECA was amended in 2000 to reduce the radon exposure limits required to qualify for compensation and to increase the compensation to $150,000 (10). As of September 2013, 5,893 claims for compensation from uranium miners had been accepted and paid (11).

The traditional form of mining done in the Navajo Nation and other places was vertical or horizontal shaft mining, where miners dug into the earth and extracted the uranium ore. It is worth putting the danger to uranium miners in context compared to underground coal mining at the time. Mining accidents killed 450 coal miners annually in the United States in the 1950s, and black lung disease killed over 2,000 miners annually from 1970 to 1990 (see Chapter 3). While the incidence of lung cancer in Navajo and white miners was unnecessary—and could have (and should have) been avoided with proper ventilation of the mines—the hazards of uranium mining pale compared to the hazards of coal mining. About 5,900 miners developed lung cancer or other lung diseases (though many survived) over roughly 20 years of uranium mining, compared to about 2,500 annual coal miner deaths during this time. This mining was done before the days of the EPA and the MSHA (Mine Safety and Health Administration), which now make all kinds of mining much safer for the workers and for the environment.

Location and Footprint

What is true about the desirability of real estate is also true of nuclear power—location, location, location. A map of the nuclear reactors in the United States (Figure 5.2) can be overlaid with the picture of lights at night in the United States (Figure 4.3), and it is pretty clear that reactors are where most of the people are—the eastern part of the country. This greatly reduces the need for costly and environmentally damaging new transmission lines. These are also regions of the country that have relatively poor solar and wind resources.

[Figure 5.2 U.S. Operating Commercial Nuclear Power Reactors.]

Nuclear reactors use the most concentrated form of energy known—the energy released by splitting a nucleus of uranium—to provide huge amounts of power in a single power plant. The footprint of a nuclear power plant is less than half a square mile to produce one GW or more of electricity. This small footprint contrasts greatly with the approximately 50 square miles of solar panels or nearly 500 square miles of wind turbines needed to provide a similar average amount of power (see Chapter 4). Thus a nuclear reactor can be located near cities where the power is needed without making a big impact on the landscape or the environment. Two or more reactors can be put at a single site, doubling or tripling the amount of power without increasing the footprint appreciably. The 104 reactors operating in 2012 occupied only 65 locations around the US, with about half of the sites containing two reactors and several sites containing three (Figure 5.2) (4).
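To put those footprints on a common scale, here is a quick back-of-the-envelope sketch; the 1 GW output and the 0.5, 50, and 500 square mile figures come from the text, while the unit conversion and the watts-per-square-meter results are arithmetic I have added.

```python
# Average power density implied by the footprints quoted in the text,
# for plants producing about 1 GW of average electrical power.
SQ_MILE_TO_M2 = 2.59e6  # square meters in one square mile

footprints_sq_mi = {"nuclear": 0.5, "solar": 50.0, "wind": 500.0}

for source, area in footprints_sq_mi.items():
    density = 1e9 / (area * SQ_MILE_TO_M2)  # watts per square meter
    print(f"{source:>8}: {density:8.1f} W/m^2")
```

That works out to roughly 770 W/m² for nuclear, 8 W/m² for solar, and 0.8 W/m² for wind, which is just another way of expressing the factor-of-100 and factor-of-1,000 differences in land use.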

Nuclear reactors last a long time. Reactors in the US were originally licensed for 40 years, and half of the reactors are approaching that lifetime. While a few will be decommissioned, 71 of them have received license renewals to operate for a total of 60 years, and others will most likely do so as they approach the 40-year license deadline. To do this, the reactor owners have to go through an extensive process with the Nuclear Regulatory Commission (NRC), the US organization that is charged with ensuring the safety of nuclear reactors. The Department of Energy (DOE) is even studying the possibility of extending the life of existing reactors to 80 years after careful research on the effects of long-term irradiation and heat on reactor materials (7). Solar panels have a typical warranty lifetime of 20 years, though their efficiency goes down by 1% a year, so they will have lost 20% of their output by 20 years, and wind turbines have lifetimes of 15 to 20 years. The wind and solar farms will need to be rebuilt two to three times during the lifetime of a nuclear reactor, a factor that is seldom taken into account in discussions about these forms of power plants.

Cosmic Radiation

Cosmic “rays” are extremely high energy particles that come from outer space. The majority of cosmic rays come from our galaxy, though some of the highest energy particles come from outside our own galaxy, and some come from the solar wind generated by magnetic storms in the sun (2). About 90% of the cosmic rays are protons, 9% are helium nuclei (α particles), and 1% are electrons (3). The energies of cosmic rays are much higher than anything mankind can create—up to 10²⁰ eV! To put that in perspective, the Large Hadron Collider—the most powerful particle accelerator on earth, which discovered the Higgs boson in 2012—can attain proton energies of about 7 TeV (7×10¹² eV). As the energetic cosmic rays crash into the upper atmosphere, they create other particles such as pions, muons, neutrons, γ rays, and electrons (2, 3). The muons are similar to electrons, but they are about 200 times heavier, and they are the main source of cosmic radiation exposure on the earth’s surface.

Fortunately, we live on a planet that is surrounded by an atmosphere that absorbs much of the cosmic radiation. At lower elevations, the atmosphere is dense and much of the radiation is absorbed, but as you get to higher elevations, there is less atmosphere for the charged particles to interact with, so the dose is higher. That is one of the reasons that the dose is higher in Fort Collins than in Florida, for example. The dose from cosmic rays increases as a second-order polynomial with altitude (4) and can easily be modeled. Fort Collins is at an elevation of 5,000 feet, a little lower than Denver, and the cosmic ray dose is 0.38 mSv per year, compared to a US average of 0.33 mSv and a dose in Florida of 0.24 mSv (Figure 8.2). People living in Leadville, a small town high in the mountains in Colorado at 10,152 feet, get an annual dose of 0.85 mSv from cosmic radiation. I have a cabin in the mountains at about 8,200 feet elevation, about the same as Estes Park, and the dose there is 0.6 mSv/yr. The actual dose rate depends on how much time a person spends outdoors compared to indoors, since the house provides some shielding against the cosmic rays (5). When you fly in an airplane at about 10,000 m (33,000 ft), the dose rate is much higher and depends on the exact flight altitude and path. On average, the dose rate is about 5 to 8 μSv/hr (2), so if you flew from New York City to London, about an 8-hour flight, you would get a dose of 0.04 to 0.06 mSv, about half the dose you would get from a chest X-ray. However, airline pilots and crew average about 500 hours of flight time at high altitude annually (2) and would get doses of about 2.5 to 4.0 mSv/yr.

[Figure 8.2 Variation in cosmic ray dose rate with altitude.]
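Since the text says the altitude dependence “can easily be modeled” with a second-order polynomial, here is a minimal sketch of such a fit. The altitude-dose pairs are read off the numbers quoted above; treating Florida as roughly sea level is my assumption, not the book's.

```python
import numpy as np

# (altitude in feet, annual cosmic ray dose in mSv), from the text.
data = [(0, 0.24), (5000, 0.38), (8200, 0.60), (10152, 0.85)]
alt, dose = map(np.array, zip(*data))

# Fit the second-order polynomial the text describes.
model = np.poly1d(np.polyfit(alt, dose, 2))

for feet in (0, 5000, 8200, 10152):
    print(f"{feet:>6} ft: fitted {model(feet):.2f} mSv/yr")

# Flight crew dose, straight from the quoted hourly rates:
# 500 hr/yr at 5 to 8 microsieverts per hour.
print("crew:", 500 * 0.005, "to", 500 * 0.008, "mSv/yr")  # 2.5 to 4.0
```

The same arithmetic reproduces the flight numbers in the text: 8 hours at 5 to 8 μSv/hr is 0.04 to 0.06 mSv for a New York to London flight.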

The earth also has a magnetic field that extends far into space—the magnetosphere. The magnetic field is important because the charged particles in the cosmic rays are deflected by it, just as an electric current is deflected by a magnet. Since the earth’s magnetic field is roughly aligned with the North and South Poles, the cosmic rays are deflected to the northern and southern regions of the earth. In times of severe magnetic storms on the sun, this leads to the phenomenon of the aurora borealis, or northern lights (and the aurora australis, or southern lights, in the Southern Hemisphere). As a result of the magnetic field, there is a variation in cosmic ray exposure with latitude on earth, being greater at higher latitudes, though this is a relatively small factor, adding only about 0.01 mSv/yr to the dose in Colorado as compared to Florida.

There is one other interesting phenomenon associated with cosmic rays. The high energy protons create neutrons, which sometimes interact with nitrogen in the atmosphere. As described in Chapter 5, neutrons can sometimes be captured by nuclei. In this case, 14N captures a neutron but then spits out a proton, and the resulting nucleus becomes 14C, a radioactive form of carbon that has a half-life of 5,730 years. Since the cosmic ray flux is relatively constant over time, the production rate of 14C is constant and it becomes a constant fraction of the total carbon in the atmosphere. When plants take in carbon dioxide to photosynthesize carbohydrates, a certain fraction consists of 14C while the remainder is the normal 12C (and a little 13C). Once plants die, the ratio of 14C to 12C begins changing as 14C β-decays back to 14N. This changing ratio allows scientists to estimate when the plant died, and this forms the basis for radioactive carbon dating of archaeological sites. And, of course, we get a little 14C when we eat plants or animals that feed on plants.
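The dating arithmetic follows directly from the half-life. Here is a minimal worked example; the 5,730-year half-life is from the text, while the 25% figure is a made-up measurement for illustration.

```python
import math

HALF_LIFE_C14 = 5730.0  # years, from the text

def age_from_c14(fraction_remaining):
    """Years since death, given the fraction of the original
    14C/12C ratio still present in a sample."""
    # N(t) = N0 * (1/2)**(t / half_life)  =>  t = half_life * log2(N0 / N)
    return HALF_LIFE_C14 * math.log2(1.0 / fraction_remaining)

# A hypothetical sample retaining 25% of its original 14C died
# two half-lives ago:
print(age_from_c14(0.25))  # 11460.0 years
```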

MYTH 1: RADIATION IS EXTREMELY DANGEROUS AND WE DON’T UNDERSTAND IT

Many critics of nuclear power have very little understanding of radiation and its dangers, but that doesn’t mean it is not well understood by experts. Scientists understand a great deal about radiation, and there is an enormous scientific literature on radiation and its effects. Radiation is not something outside human experience—rather, it was present as life evolved, and our cells developed repair enzymes to fix the DNA damage it causes. We are constantly exposed to radiation from space (cosmic rays), from the earth (primordial radioisotopes such as uranium, thorium, and radon), and from the food we eat (40K in bananas and other foods). Many people live in areas with high levels of natural radiation—like those of us who live in Colorado—but there is no correlation between these higher background levels of radiation and the risk of cancer. And many people get as much or more exposure to radiation from medical procedures as they do from natural sources.

The risks of radiation have been better studied than those of probably any other toxic agent. It is, of course, true that radiation can cause cancer, but it is a significant risk only at high doses. The risks are well understood and can be strictly related to dose. Nuclear critics virtually never mention dose, and they imply or state that radiation from a nuclear power plant or from nuclear waste is somehow especially evil. The truth is that there is no difference between an α, β, or γ from natural sources and one that comes from a nuclear reactor. All that matters is the dose. We also know that the relative biological effects depend on the specific kind of radiation and the sensitivity of specific tissues. So we actually understand radiation and its effects very well, and we can make specific predictions about the probability that exposure to a particular dose of radiation will cause a cancer.

It is often said that plutonium is the most toxic compound on earth. In fact, it is not particularly toxic and is hardly absorbed by the body at all. Its danger arises primarily if it is inhaled and remains trapped in the lungs. It is an α emitter and, like radon, can cause lung cancer if a sufficient dose is received. The people with the most exposure to plutonium were workers at Los Alamos who built bombs—the infamous UPPU club—and they had lower cancer rates than the general public. In fact, there is no known population of people who have gotten cancer from plutonium. It is time to put this myth to rest.

Mining and Health Hazards

Mining coal is also a hazardous enterprise, both for humans and for the environment. Underground mines are the most hazardous for humans because of cave-ins and methane-gas explosions. Deaths from coal mining in the United States averaged about 1,000 per year during the 1930s and 1940s, fell to about 450 annually in the 1950s, and dropped to 140 in the 1970s. Death rates continued to fall to 45 per year during the 1990s, with an all-time low of 23 in 2005 (17). More recently, there was a jump to 47 deaths in 2006 and 48 deaths in 2010, with an average of about 35 in the first decade of the twenty-first century. This toll is just from mining accidents, but that is only part of the story. Black lung (pneumoconiosis) from breathing coal dust killed over 2,000 coal workers annually from 1970 to 1990. The number of deaths has dropped since then, to about 1,000 in the year 2000 and about 700 in 2005 (18).

A recent report states that black lung caused or contributed to 75,000 deaths in miners between 1968 and 2007. After a 1969 federal law required mining companies to control coal dust in the mines, black lung declined substantially. Since the late 1990s, however, the rate has gone up, and the rate of the most severe form of the disease is nearly back to the levels of the 1970s. Furthermore, it is occurring in younger miners. This is due to companies failing to follow the dust-control law, and likely also to the increased problem of silica dust produced in modern mining (19).

It is hard to imagine this degree of mortality being allowed in any other industry in the United States without a huge public outrage. But, of course, miners choose to do this work (though they may not have many other job options), and we do not protest too much since we greatly enjoy the electricity that coal brings to our homes!

Most mining for coal is not underground, however. The Powder River Basin in eastern Wyoming, the major source of low-sulfur coal in the United States, has seams of coal 100 feet thick near the surface. The overlying ground is removed and the coal is excavated by monstrous machines, creating huge open pits (Figure 3.1). Besides the obvious visual impact of these mines, effects on groundwater are the major environmental concern. In many western states, the coal seams are also a shallow aquifer, and the water has to be pumped out to allow mining. This can reduce groundwater in nearby wells. But, of course, there are not a lot of people who live in this area to complain about the view, dust, or water problems, and there is good money to be made!

[Figure 3.1 Pit mine in Wyoming. Photo courtesy of Doris Rupp, Dailyville.]

In the Appalachian Basin, another major source of coal, mountaintop removal with valley fill has become the mining method of choice. Vast areas of mountaintop forests are clear-cut, then the mountains are blasted, the debris is removed with gigantic draglines, and the valleys are filled with the debris from the removal of overburden (Figure 3.2). Eventually, the mountains are reduced to flat land as the coal seams are removed (13). The visual impact of this is far greater than that of the coal mines of Wyoming. There are also much more severe problems with water quality, especially acidic drainage, in eastern mining. The EPA estimates that acidic drainage has polluted about 11,000 miles of streams in Appalachia (3). Mining companies are required to reclaim the land but cannot replace the mountains and valleys. Furthermore, reclaimed areas have little regrowth of trees, headwater streams are lost, and biodiversity is reduced (20).

Transporting all of that coal poses dangers also. Coal accounted for 44% of rail tonnage and 24% of train carloads in 2007. Not only does it take a lot of fuel to run the trains, thereby generating CO2, but there are direct fatalities and injuries from train accidents. There were 578 fatalities and 4,867 nonfatal injuries from train accidents in 2008 (16).

FISSION

Once the neutron was discovered, a new era began in studying the nucleus. Rutherford had bombarded nuclei with α particles to discover the nucleus. In ground-breaking work, the Joliot-Curies bombarded aluminum with α particles and discovered that the aluminum was converted into a radioactive form of phosphorus, an artificial transmutation (converting one element into another) and the first proof of artificial radioactivity. The α particle was absorbed by the nucleus of aluminum (Z = 13) and converted it to phosphorus (Z = 15).

Enrico Fermi recognized that there was a huge advantage in using neutrons to study artificial transmutation: because they have no charge, they are not repelled by the protons in the nucleus. He began a systematic series of experiments irradiating all of the elements with neutrons produced by the same kind of neutron source that Chadwick had used to discover the neutron, namely α particle irradiation of beryllium. He and his colleagues irradiated 60 elements and found that 40 of them became radioactive (10). When they irradiated uranium, the heaviest element known, they found that the neutron was absorbed by the nucleus and subsequently negative β particles were emitted with a half-life of 13 minutes. According to the rules of β decay, this means that a neutron changed into a proton, changing the element from uranium with a Z of 92 to a new element with a Z of 93. If true, this would mean they had created an entirely unknown element, since uranium is the highest atomic number element that occurs naturally. These artificially made elements are known as transuranics. Unfortunately, Fermi got a little ahead of himself in interpreting the experiment as the creation of a new transuranic element (it wasn’t), though it is now known that transuranics can indeed be produced by capturing neutrons.

Lise Meitner and Otto Hahn, working at the Kaiser Wilhelm Institute in Berlin, began bombarding uranium with neutrons, and they found a veritable zoo of different radioactive species decaying by β decay with half-lives ranging from 10 seconds to 23 minutes. Meitner, an Austrian of Jewish ancestry, had to leave Germany in 1938 to avoid being arrested by the Nazis. She was able to escape to Denmark with the help of Niels Bohr—who helped many physicists escape Germany at this time—and subsequently went to Stockholm, Sweden. In the meantime, Hahn—who was probably the world’s best radiochemist—and Fritz Strassmann continued trying to identify the radioactive elements that were producing the varying lifetimes after neutron bombardment of uranium. They precipitated different elements out of the radioactive material and finally concluded that one of the radioactive species had the same chemical properties as barium (9, 10). But this seemed impossible! Uranium has a Z of 92 while barium has a Z of 56. This could not possibly happen by β decay. What was going on? Hahn was a chemist, not a physicist, so he wrote to Meitner in Sweden to ask what physics could explain these results.

Meitner had a nephew, Otto Frisch, who was working with Bohr in Copenhagen. For Christmas 1938 they were both invited to stay in Kungälv, Sweden, with a friend of Meitner’s. They were walking in the snow, talking about Hahn’s unbelievable results, when Meitner had a revelation! She recalled that Bohr had explained the nucleus as a liquid drop, held together by the short-range strong force that acted similarly to the surface tension holding a drop of water together. If a neutron entered the nucleus, it would be like perturbing a drop of water, causing it to jiggle. As the nucleus jiggled, if it became elongated, the repulsive force due to the charge of the protons would be able to overcome the strong force holding it together, distorting it into a dumbbell shape and finally splitting it into two separate pieces. This would explain Hahn’s results because barium (Z = 56) would be one piece, and the other piece would have to have an atomic number of 36, which would be krypton.

Meitner and Frisch did some quick calculations and determined that it was indeed possible for a large nucleus to split into two pieces, and that the pieces would carry away about 200 million electron volts (MeV) of energy. They showed that the energy came from the decrease in mass of the two pieces of the nucleus compared to the original uranium nucleus, according to Einstein’s prescription that E = mc² (12). When Frisch went back to Copenhagen and told Bohr, he immediately understood and exclaimed “Oh, what idiots we have been! Oh, but this is wonderful! This is just as it must be!” (10). Frisch asked a biologist friend what he called the division of bacteria and he said “binary fission”; Frisch shortened it to just “fission,” and that became the new name for the splitting of the nucleus.
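Their estimate is easy to reproduce with modern mass values. In this sketch the split into barium-141, krypton-92, and three neutrons is one common fission channel chosen for illustration, and the atomic masses are standard published values, not numbers from this book.

```python
# Energy released in one fission, from the mass defect and E = mc^2.
# Masses in atomic mass units (u); 1 u of mass = 931.494 MeV of energy.
U235, NEUTRON = 235.043930, 1.008665
BA141, KR92 = 140.914406, 91.926153  # one possible fragment pair

mass_before = U235 + NEUTRON              # nucleus plus incoming neutron
mass_after = BA141 + KR92 + 3 * NEUTRON   # fragments plus 3 new neutrons

q_mev = (mass_before - mass_after) * 931.494
print(f"{q_mev:.0f} MeV")  # ~173 MeV released promptly
```

This particular channel releases about 173 MeV immediately; the subsequent decays of the fission products bring the total to roughly the 200 MeV that Meitner and Frisch calculated.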

Everything suddenly became clear. The different lifetimes that Fermi and Hahn and others found when they bombarded uranium with neutrons came from the different pieces of the split nucleus. A nucleus of uranium could split in many different ways, as long as the number of nucleons and the total energy were conserved. The different pieces, called fission products, would then undergo β decay to convert excess neutrons into protons to make a more stable nucleus. Each of the many pieces would have a different decay half-life. Fermi had not discovered transuranic elements after all—he had discovered fission but hadn’t recognized it.

A new world began. Soon after Frisch told him the news about fission, Bohr traveled to the United States, and the word spread like wildfire among physicists, many of whom had emigrated from Europe to the United States because of the war. A Hungarian physicist, Leo Szilard, was in New York at the time, and he heard about the discovery. He immediately realized that the fission fragments would have excess neutrons, and that if enough neutrons were emitted by each fissioning uranium nucleus, it would be possible for those neutrons to cause fission in other uranium nuclei, leading to a chain reaction. And if a chain reaction could occur, there was the possibility that a bomb could be made. But no one really knew if that was possible. Because of fears that Germany would develop a bomb if it were possible, the United States soon began a crash program to determine whether an atomic bomb was feasible and, if so, to build it before Germany could. The history of that effort, the Manhattan Project, is compellingly told by Richard Rhodes in his book The Making of the Atomic Bomb (10).

This book is about nuclear power, not atomic bombs. There are fundamental differences between the two processes, but the physics for nuclear power had to be developed before atomic bombs could be built. Enrico Fermi was driven out of Italy by the rise of Mussolini and the fascists who allied with Hitler in Germany. Fermi came to the United States and became a professor at the University of Chicago, where he joined the secret effort to build the bomb. Fermi had made a critical discovery when he was bombarding various elements with neutrons back in Italy. His lab had measured different intensities of radioactivity when they irradiated samples on a marble table (only in Italy could this happen!) or a wooden table. What could cause this? Fermi, for some reason, decided to put paraffin in front of his neutron source before irradiating a sample and found that the radioactivity increased dramatically. He realized that the paraffin, a rich source of hydrogen, was slowing down the neutrons so they could more easily be captured by the target nuclei. The wooden table, having more hydrogen than marble, also slowed neutrons down more than the marble table did, so the radioactivity was greater. The slow neutrons were more effective in causing fission, though he did not know that at the time.

It was later discovered that natural uranium is actually a mixture of two different isotopes: 238U accounts for 99.3% and 235U accounts for 0.7%. Bohr once again had the critical insight. He realized that 238U has a high probability, called a cross section, of capturing slow neutrons of about 25 eV without fissioning. This makes a new isotope of uranium, 239U, which β-decays to the transuranic element neptunium-239 (239Np). This is what Fermi thought he was observing in his experiments on neutron bombardment of uranium. Neptunium-239 rapidly undergoes β decay to form another transuranic element, plutonium-239 (239Pu). 235U, on the other hand, has a high probability of fissioning when it captures very slow neutrons called “thermal” neutrons. 238U requires very high energy neutrons to fission, but neutrons quickly lose their energy in collisions, so they cannot sustain a chain reaction in 238U. So the first requirement for a chain reaction in uranium is to slow down the neutrons with a material called a moderator. The other critical requirement is that the fission of a 235U nucleus must produce more than one neutron on average that can be captured by another 235U nucleus and cause it to fission. This makes a self-sustaining controlled fission reaction possible.
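How effective a moderator is can be estimated with a standard reactor-physics formula for the average logarithmic energy loss per elastic collision. This sketch, including the 2 MeV to 0.025 eV energy range, uses textbook values rather than anything computed in this book.

```python
import math

def collisions_to_thermalize(A, e_start=2.0e6, e_end=0.025):
    """Average number of elastic collisions off a nucleus of mass
    number A to slow a neutron from e_start to e_end (energies in eV)."""
    if A == 1:
        xi = 1.0  # hydrogen: limiting value of the formula below
    else:
        alpha = ((A - 1) / (A + 1)) ** 2
        xi = 1.0 + alpha * math.log(alpha) / (1.0 - alpha)
    return math.log(e_start / e_end) / xi

print(f"hydrogen: {collisions_to_thermalize(1):.0f} collisions")   # ~18
print(f"carbon:   {collisions_to_thermalize(12):.0f} collisions")  # ~115
```

Light nuclei absorb the most energy per collision, which is why hydrogen-rich water and paraffin slow neutrons so effectively, and why Fermi's carbon pile, described below, had to be so large.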

Fermi had to build a reactor to prove that a fission chain reaction really was feasible, since no one knew whether it could be done. The reactor was built under top secret conditions on a rackets court under the stands of Stagg Field at the University of Chicago. The first decision was what to use to slow down the neutrons. Water is a good moderator, but it absorbs some neutrons. Since they were going to use natural uranium, it was essential that neutrons not be absorbed by the moderator. They decided to use carbon, which would slow down the neutrons with little absorption and could be made into graphite blocks to construct the reactor. The next decision was how to arrange the uranium in the graphite moderator. It takes a critical mass of uranium packed within a critical volume to undergo a chain reaction. If there is not enough uranium, or it is spread over too large a volume, then the neutrons that are produced do not cause a self-sustaining fission reaction. Fermi decided to make graphite blocks with recesses in them to hold cylinders of pressed uranium oxide. The blocks with the fuel were arranged in a slightly oval pile with blocks of pure graphite interspersed. In fact, the first name for a reactor was a “pile.” As the pile was built up by adding blocks of graphite and blocks with uranium, there came a point where it was approaching a critical mass.

There is an obvious problem here: How do you shut it down when it goes critical? Cadmium is an element that strongly absorbs slow neutrons, so cadmium sheets were nailed to flat wooden strips, which could be inserted into the pile through slots. When these control rods were inserted, the reactor could not go critical. On December 2, 1942, in freezing cold in the unheated rackets court, the final construction of the pile was completed, and the control rods were slowly withdrawn while detectors measured the neutron intensity. At 3:49 p.m. the neutron counters began clicking with increasing speed—the first chain reaction created by humans had begun! After four and a half minutes, Fermi shut down the reactor by lowering the control rods (10). Thus began the era of nuclear power. The principles that Fermi and others used in that reactor, Chicago Pile #1 or CP-1, are still the principles used in every nuclear reactor in the world, though most reactors now use water as the moderator and use uranium that is enriched to about 3-4% 235U so that it fissions more efficiently and has a smaller critical mass.

To make a bomb, at least two neutrons would need to be produced by each fissioning uranium nucleus and captured by other uranium nuclei to get a geometric progression of fissioning nuclei (one fission causes 2 fissions, which cause 4 fissions, which cause 8 fissions, and so on). It turns out that the fission of 235U produces on average about two and a half neutrons, so there are plenty to create a chain reaction. However, efficiently capturing these excess neutrons and achieving an extremely fast chain reaction requires that the natural uranium be highly enriched, to over 90% 235U. It is physically impossible for a power reactor to ever become a fission bomb, because a power reactor does not have the highly enriched 235U that is necessary for a bomb.
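The difference between a controlled reactor and a bomb comes down to the multiplication factor, the average number of neutrons from one fission that go on to cause another fission. Here is a minimal sketch of the resulting geometric progression; the k values and the 80-generation horizon are my illustrative choices, not numbers from this book.

```python
# Fissions in generation n grow as k**n, where k is the average number
# of fission neutrons that go on to cause another fission.
def fissions_in_generation(k, n):
    return k ** n

for k in (0.99, 1.0, 2.0):  # subcritical, critical, bomb-like
    print(f"k = {k}: generation 80 has "
          f"{fissions_in_generation(k, 80):.3g} fissions")
```

With k slightly below 1 the chain dies away, at exactly 1 it holds steady (a controlled reactor), and at k = 2 the progression reaches about 1.2 x 10^24 fissions by generation 80, on the order of the number of atoms in a kilogram of 235U.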

Consequences of TMI

What about the health effects? How many people got cancer or mutations from the radiation that was released? The answer is possibly one cancer death over a lifetime and 1 or 2 hereditary mutations, according to an expert group, the Ad Hoc Population Assessment Group (7). To put this in perspective, about 450,000 people would be expected to die of cancer from causes unrelated to the accident in this population of 2 million people living within a 50-mile radius of the plant (6). Again, it all comes down to dose. It is estimated that the average dose to the 2 million people in the area was about 1 mrem (0.01 mSv) and the maximum dose to a person at the boundary of the plant was about 100 mrem (1 mSv) (2). Recall that you would get a dose of about 0.05 mSv flying from New York to London, or about five times the dose for the average person around TMI. The average annual natural background radiation is about 1 to 1.25 mSv in Pennsylvania, but remember that in Colorado the average natural background radiation is about 4.5 mSv and Coloradans have among the lowest cancer rates in the nation. Without even doing a study, you can predict that there would be no observable cancers resulting from the radiation released from TMI. The highest exposed person got a dose less than one-fourth of the annual dose to an average Coloradan!
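All of the dose comparisons in that paragraph can be checked in a few lines; every dose below is taken from the text, and only the ratio arithmetic is added.

```python
# Doses in mSv, as quoted in the text.
doses = {
    "TMI average (2 million neighbors)": 0.01,
    "New York to London flight": 0.05,
    "TMI maximum (plant boundary)": 1.0,
    "Pennsylvania background, annual": 1.25,
    "Colorado background, annual": 4.5,
}

baseline = doses["TMI average (2 million neighbors)"]
for label, msv in sorted(doses.items(), key=lambda kv: kv[1]):
    print(f"{label:<34} {msv:5.2f} mSv  ({msv / baseline:4.0f}x TMI average)")
```

The printout makes the point at a glance: the average Coloradan receives 450 times the average TMI dose every year from natural background alone.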

But, of course, many studies were done—by the NRC, the DOE, the EPA, the Pennsylvania Health Department, and independent researchers. These studies show that there were no effects on pregnancy outcome, spontaneous abortions, fetal and infant mortalities, or cancer (6). Actually, there were fewer cancer deaths than expected among residents within 20 miles of the plant in the following five years. Thyroid cancer and leukemia would be the only cancers likely to crop up within five years, so these were of special concern, but no excess of either was found. In fact, thyroid cancer was the main worry, because the only radioisotope of biological significance in the release was 131I. Iodine, including radioactive iodine, is rapidly transported from grass to cows’ milk to the children who drink the milk, and it concentrates in the thyroid. There was no measurable release of 137Cs or 90Sr (6, 7) (the next section has more details about these isotopes). The only health effect caused by the accident was a high level of stress in people in the surrounding area.

Nevertheless, TMI was a huge wake-up call to the Nuclear Regulatory Commission (NRC), the governmental agency that regulates nuclear power, and to the nuclear industry. The NRC made major changes in its regulations and oversight of the nuclear industry—plant design and equipment requirements were upgraded and strengthened; human factors were taken into account in redesigning control panels and instrumentation to avoid confusion; training was greatly improved and increased; emergency preparedness was increased; the resident inspector program was expanded so that two NRC inspectors live nearby and work exclusively at each plant; and numerous other changes were made (2). Two of the most important changes were the creation of the Institute of Nuclear Power Operations (INPO) and the National Academy for Nuclear Training—industry-run organizations that promote safety and excellence in the nuclear power industry—following the realization that major problems at a single nuclear reactor in the United States would affect public acceptance of every other plant (4).

Coming on the heels of the release of The China Syndrome, the TMI accident sparked a resurgence in the anti-nuclear environmental movement, with authors such as Helen Caldicott, Amory Lovins, and John Gofman transforming their anti-nuclear-weapons stance into an anti-nuclear-power stance, claiming that nuclear power was too dangerous to be used and calling for a shutdown of the nuclear power industry (8-10). Protesters tried to block the completion of reactors under construction. Public hearings extended plant licensing for years and made the cost of construction prohibitive. The public view of nuclear power was very negative, and people were scared of radiation. Demonstrations took place at nuclear power plant sites (my sister and brother-in-law demonstrated at the nuclear power plant in Kansas). And utilities became afraid of the liabilities they might face in a nuclear accident. Of the 129 nuclear power plants scheduled to be built at the time of the accident, only 53 were completed (1). Despite all of that, existing reactors and newly constructed reactors became safer and more efficient, and nuclear power provided about 20% of the electricity in the United States over the coming decades.

When I visited the Wolf Creek Nuclear Plant in my birth state of Kansas, I had the opportunity to see firsthand the result of these changes. Every nuclear reactor in the United States has a training control room that is identical to the one used in the actual reactor. Dials, gauges, and controls are grouped together so that operators can easily see and control reactor operations, unlike at TMI. Operators train 10 weeks every year in the training control room, and the controls work with feedback, so they function exactly like the real controls in the actual control room. Supervisors running the simulator can create all kinds of accident scenarios for training purposes. While I was there, the supervisor simulated a loss of electrical power to the pumps, and it is pretty scary to see all of the lights flashing and horns blaring. But it gives the operators a chance to work through all kinds of scenarios before they need to deal with a real-life situation.

These changes in nuclear power plant design, operation, and training have obviously been effective, because there have been no accidents in the United States since Three Mile Island. The United States has over 3,600 reactor-years of experience with commercial nuclear power without the loss of a single life from a nuclear accident (11). And sailors have been living right next to pressurized water reactors on nuclear submarines for nearly 60 years, with nary a loss of life or exposure to harmful doses of radiation. Despite decades of warnings from anti-nuclear people who imagine that the worst will soon happen, it just doesn’t happen. Of course, that does not mean it is impossible, but the risk is very small. There is always a risk from any energy source, but nuclear power has the best safety record of any major power industry in the United States.