Why We Need Nuclear Power

Footprint

The large footprint of solar arises because solar energy is very diffuse and its conversion to electricity is inefficient, no matter how you do it. Scaled up to the equivalent of a 1 GW nuclear power plant, utility-scale solar requires about 50 square miles of panels or mirrors to generate the same amount of electricity as an average nuclear power plant. A nuclear power plant sits on about a third of a square mile. This huge solar footprint is not entirely benign. Environmentalists are already up in arms about scraping away vast areas of the desert to build solar facilities, destroying fragile ecosystems and endangering desert tortoises and other animals and plants in the process (27, 28). The Ivanpah solar project required moving about 150 endangered desert tortoises at a cost of more than $50 million (20). And it isn’t just environmental damage that is of concern. Native Americans have sued to stop several large solar developments in the Western desert because they fear the huge facilities will damage sacred and culturally significant sites (29). Rooftop solar, of course, does not take up any extra space, but it cannot meet the full demand for electricity, even though it can supply some of it some of the time.

Cost

Solar power is expensive, and it is highly subsidized to encourage its use. Levelized cost is a method to measure the overall competitiveness of different technologies to generate electricity. It takes into account the capital cost, operating cost, and transmission investment over an assumed life cycle and duty cycle. According to the US Energy Information Administration (EIA), assuming a very optimistic capacity factor of 25%, the total average system levelized cost for solar PV is $153 per MWh (range is $119 to $248/MWh) and for CSP is $242 per MWh (range is $176 to $386/MWh). For comparison, the average system levelized cost for conventional coal is $98 per MWh, natural gas is $66 per MWh, and advanced nuclear is $111 per MWh (30).
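For reference, the levelized cost divides discounted lifetime costs by discounted lifetime generation. The EIA report is not quoted here with its formula, so this is the conventional form (symbols are mine):

\[
\mathrm{LCOE} = \frac{\sum_{t=1}^{n} (I_t + O_t + F_t)\,(1+r)^{-t}}{\sum_{t=1}^{n} E_t\,(1+r)^{-t}}
\]

where \(I_t\), \(O_t\), and \(F_t\) are the capital, operating, and fuel (plus transmission) expenditures in year \(t\), \(E_t\) is the electricity generated in year \(t\), \(r\) is the discount rate, and \(n\) is the assumed lifetime. A low capacity factor shrinks every \(E_t\), which is why the 25% assumption drives solar's levelized cost so high.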

The Ivanpah solar power tower will produce a nominal 377 MW at a cost of $2.2 billion, with a federal government-guaranteed loan for 80% of the cost ($1.6 billion). And investors were guaranteed contracts from California utilities, which will pay a premium for the electricity (20). Its capacity factor is just over one-third that of a nuclear power plant operating at 90% capacity. If you scaled it up to a 1 GW nuclear power plant equivalent, it would cost $16 billion! A comparable nuclear power plant costs about $7 billion (see Chapter 5).
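A back-of-the-envelope check of that scale-up, assuming a roughly 31% capacity factor for Ivanpah (my inference from “just over one-third” of a nuclear plant’s 90%):

```python
# Rough scale-up of Ivanpah's cost to the output of a 1 GW nuclear plant.
# Assumed inputs: 377 MW nominal, ~31% capacity factor, $2.2 billion cost.
ivanpah_mw, ivanpah_cf, ivanpah_cost = 377, 0.31, 2.2e9
nuclear_mw, nuclear_cf = 1000, 0.90

average_output = ivanpah_mw * ivanpah_cf    # ~117 MW delivered on average
target_output = nuclear_mw * nuclear_cf     # 900 MW delivered on average
scale = target_output / average_output      # ~7.7 Ivanpah-sized plants needed
print(f"Scaled cost: ${scale * ivanpah_cost / 1e9:.0f} billion")
# about $17 billion, in line with the ~$16 billion quoted above
```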

My grid-tie system cost $16,000, but my utility and the state of Colorado kicked in a combined $3 per watt for a total of $7,500 as a direct rebate. Why would a utility do this? Colorado demands that utilities get 30% of their power from renewable resources by 2020, so the utility gets Renewable Energy Credits for the electricity that I generate to help them meet the requirement. On top of the direct subsidy of nearly half of the cost, I also got a 30% tax credit from the IRS for the difference. Once I received that, my total outlay was just under $6,000. Altogether, my subsidy amounted to 63% of the cost. Not bad! And that is not the end of it. I also get a feed-in tariff from my utility for the electricity I produce at 100% of my electricity rate. But, of course, the rest of society pays that cost.
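The subsidy arithmetic is easy to reproduce (a minimal sketch of the numbers above):

```python
# Rooftop solar subsidy arithmetic from the example above.
system_cost = 16_000
rebate = 7_500                                   # utility + state rebate at $3/W
federal_credit = 0.30 * (system_cost - rebate)   # 30% IRS credit on the difference
out_of_pocket = system_cost - rebate - federal_credit
subsidy_share = (rebate + federal_credit) / system_cost

print(f"Out of pocket: ${out_of_pocket:,.0f}")   # $5,950: "just under $6,000"
print(f"Subsidy share: {subsidy_share:.0%}")     # 63%
```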

Similar subsidies occur in California and other states that are ramping up solar energy. There was an uproar in San Diego when the San Diego Gas and Electric utility filed a request with the California Public Utilities Commission to charge solar customers for the use of the grid when the utility buys back the solar energy. The utility says that its average solar power customer is subsidized to the tune of $1,100 per year by the other utility users (31).

California Valley Solar Ranch, a 250 MW solar project, received so much largesse from the federal and state governments that its cost of $1.6 billion was almost completely taken care of with total subsidies of $1.4 billion. The subsidies include federal loan guarantees, 30% of the cost up front as a cash grant, a favorable interest rate about half of the commercial rate, property taxes waived, depreciation tax breaks, and favorable guaranteed rates for the generated electricity that are about 50% higher than for a gas-powered plant (32).

You can argue that all forms of energy get subsidized, including fossil fuels and nuclear as well as renewable energy, and I don’t disagree with that. But let’s not pretend that solar power can stand on its own two feet. Without these large subsidies, solar will likely remain a very small component of the overall energy mix because it is so expensive compared to other alternatives.

As I was writing this chapter, a political scandal was developing that will likely have consequences for the subsidies for solar energy. Solyndra, a California company employing 1,100 people, developed a different technology for solar panels that did not depend on the silicon wafers that are used in most PV panels. The US Department of Energy gave them a $535 million loan guarantee to build a plant and produce the solar panels. The plant broke ground on September 4, 2009, and the company went bankrupt on September 6, 2011, losing half a billion dollars of taxpayer money in the process. Even closer to home, Abound Solar—a start-up company based on research at Colorado State University—developed a thin-film solar panel technology, but it also went bankrupt in 2012 (33). These solar technologies could not compete with the Chinese, who pumped $30 billion into their solar industry in 2010 alone and undercut solar panel manufacturers in the United States and elsewhere (34, 35).

Oncogenes

In 1909, a pathologist named Peyton Rous investigated a tumor growing on the back of a Plymouth Rock chicken that was brought to him by a farmer. Cancers are named for the type of cells in which they originate, and this tumor was a sarcoma, a tumor that grows in connective tissue and muscle. Rous found that he could transmit the tumor from chicken to chicken by injecting cancer cells into disease-free chickens. This was not surprising, but then he found a shocking result—he could grind up the cells and filter the cellular soup through very fine sieves until he had no cells to inject, but he could still transmit the cancer! This led him to conclude that a very small particle, called a virus, was transmitting the cancer, and this led to the idea that cancer was a viral disease. The virus was later named Rous sarcoma virus (RSV) and was the first virus known to cause cancer (16). However, RSV does not cause cancer in humans and, in fact, it is rare to find any viruses that can cause human cancer. The human papilloma virus (HPV) can cause cervical cancer, the Epstein-Barr virus (EBV) can cause a form of lymphoma in sub-Saharan Africa, hepatitis B virus is associated with liver cancer, and an adult form of leukemia is caused by the human T-cell leukemia virus, but that is about it. Fewer than 5% of human cancers are caused by viruses (17).

So, if RSV does not cause human cancer, why did I bring it up? RSV was later discovered to be a type of virus called a retrovirus, the class best known for the human immunodeficiency virus (HIV) that causes AIDS. A retrovirus has RNA (ribonucleic acid) for its genetic code instead of DNA (deoxyribonucleic acid). The “central dogma” of molecular biology was long thought to be that the flow of information goes from DNA to RNA to proteins, but retroviruses turn the process around. They infect cells and turn their RNA into DNA, which gets incorporated into the DNA of the host cell and then makes more viral particles. The critical point is that a retrovirus can sometimes capture a gene from the host, incorporate it into the viral genome, and then transmit it to other cells. That is exactly what RSV does; in particular, it picks up a gene called src (pronounced “sarc”) for sarcoma. Src is an example of a cancer-causing gene called an oncogene.

Many years later, in 1977, Ray Erikson discovered that src codes for a particular kind of protein called a kinase, whose function is to add a phosphate group to other proteins at certain critical sites (16). It turns out that the function of cells is strongly dependent on the specific phosphorylation of many different proteins, and this regulates processes of cell growth. When the Rous sarcoma virus picks up src in its genome and then infects a cell, the kinase encoded by src phosphorylates proteins and causes unregulated cell growth—a hallmark of cancer.

In paradigm-shifting experiments for which they received the Nobel Prize, Harold Varmus and Michael Bishop showed that the src gene is present in all kinds of cells as a normal gene that they called a proto-oncogene (16, 17). But if src is a normal gene in cells, how does it cause cancer when it is transmitted into other cells? The specific src gene transmitted by RSV actually had a deletion at the end of the gene that turned its product into a hyperactive kinase that phosphorylated proteins without normal regulation. That, then, is what made src a viral oncogene.

Since viruses are not a major cause of human cancer, are oncogenes important in causing cancer? To answer this question, scientists took DNA from human cancers and used new tools in molecular biology to break the DNA up into small pieces and insert them into cells. Sometimes the DNA that is inserted can cause a normal cell to transform into a malignant cell, and the specific gene that was inserted can be identified. These genes that could transform cells were also called oncogenes, but the difference is that they were genes that came from human cancers, not from viruses, so they were called cellular oncogenes. A large number (at least 70) of these cellular oncogenes have been identified in various cancers, and they play an important role in causing cells to transform from a normal cell into a malignant cell (18). These cellular oncogenes all come from normal genes (proto-oncogenes) that help regulate the complicated process of cell growth. In all cases, the cellular oncogenes are modified from their parental proto-oncogenes so that they are hyperactive in promoting cell growth. Normal cells go through a cell cycle; in the final stage, mitosis, each cell divides into two cells. Cells normally respond to a lot of signals and follow intricate biochemical pathways to decide whether to grow and divide, but cellular oncogenes short-circuit some of these pathways to drive the cell into division. If you imagine that a cell going through the cell cycle is like a car going around a racetrack, the oncogenes are like stepping on the gas so that the cell grows and multiplies rapidly.

There are several ways in which proto-oncogenes can become cellular oncogenes, and radiation can cause at least two of them. In some cases, a deletion of part of a gene can cause it to become activated without normal regulation, and since radiation is good at deleting sections of DNA, it can activate oncogenes in this way. Another way is by producing reciprocal translocations. Recall that these occur when two different chromosomes are broken but the wrong pieces are stuck back together by DNA repair (NHEJ), and the DNA is criss-crossed from the two chromosomes so that each chromosome contains part of the other chromosome. A famous example of this is known as the “Philadelphia chromosome,” which the cytogeneticist Janet Rowley discovered is a reciprocal translocation between chromosomes 22 and 9 so that a piece of chromosome 9 is on 22 and a piece of 22 is on 9 (19). This particular chromosomal rearrangement is commonly found in a cancer known as chronic myelogenous leukemia (CML). But how can that cause a problem, since the total amount of DNA and the genes are preserved? The problem is that this exact fusion of pieces of different chromosomes creates a chimeric gene known as bcr-abl that makes a new kinase, which phosphorylates proteins and causes cells to grow abnormally. Another cancer known as B-cell lymphoma is caused by the reciprocal translocation of chromosomes 14 and 18. In this case, the gene that is activated is a gene called bcl-2 that prevents lymphocytes (white blood cells) from dying in the process called apoptosis, so they keep multiplying and cause cancer (17).

MILLING

The net result of mining is ore containing the uranium. Like any other ore, it has to be processed to make it into a usable product. The next step is called milling. The uranium ore is hauled to a milling plant, where it is crushed into finer particles before being leached with sulfuric acid or alkaline solutions to dissolve the uranium from the ore. The leaching solution also extracts other heavy metals, such as vanadium, molybdenum, selenium, iron, lead, and arsenic, along with the uranium. The slurry is washed and clarified before heading to a solvent extraction area, where the uranium is removed from the solution by ion exchange columns. After another aqueous phase, a yellow slurry of uranium oxide (U3O8) is produced that is dried to make the final product—yellowcake—which is packed in 55-gallon steel drums and is ready for further processing to make uranium fuel pellets (12). The waste products from the milling plant include the fine-grained solids remaining after extracting the uranium and liquids from the slurry. These go into a mill tailings holding pond that contains toxic heavy metals that occur in the ore, as well as radium and radon, which are always associated with uranium. The radiation and heavy metals in mill tailings have to be carefully monitored and controlled under regulation by the US Nuclear Regulatory Commission (NRC) (13).

It hasn’t always been that way. Numerous mill tailing sites, left over from the mining binge of the 1950s to 1980s, were left in place, causing environmental problems. One particularly notorious site was the Climax uranium mill in Mesa County near Grand Junction, Colorado. The Climax Uranium Company let public and commercial interests have access to the mill tailings to be used as fill material and in concrete and mortar in housing construction. This led to excessive levels of radium and radon in more than 4,000 private and commercial properties in and around Grand Junction (14). However, the only scientific study done to determine whether radiation from the mill tailings in houses and commercial buildings in Mesa County caused cancer found no such effect. Although leukemia rates from 1970-1976 were twofold higher in Mesa County than in Colorado overall, there was no excess incidence of lung cancer, and the leukemia cases were unrelated to any excess exposure to radiation from mill tailings (15).

Congress passed the Uranium Mill Tailings Radiation Control Act in 1978, which authorized the US Department of Energy (DOE) under the Uranium Mill Tailings Remedial Action Project to clean up inactive uranium milling sites, including the one at Grand Junction. Houses and businesses with high levels of radiation were either demolished, with the radioactive materials hauled away to a proper disposal site, or remediated (14). A milling site at Shiprock in the Navajo Nation was also contaminated and is still undergoing remediation (16). Overall, 22 mill tailings sites have been cleaned up by the DOE.

It is worth asking whether the radiation emanating from these milling sites actually caused higher rates of cancer in the general population around them. The answer is no, with the exception of an increased risk of lung cancer for those men who actually worked in underground mines. Montrose County, Colorado, had more uranium mines and mills than anywhere else in Colorado—a total of 223 mines—but a 50-year study of people living in Montrose County found that they had the same mortality rate as the general Colorado population. The only exception was a nearly 20% increase in lung cancer among men, most likely due to working in underground mines and smoking (17). Another epidemiological study of people living in Uravan—a town in Montrose County named by combining uranium and vanadium—found that the overall rate of mortality was 10% less than the national average, and the average mortality from all cancers was equal to the national average. Again, the sole exception was an enhanced risk of lung cancer. The lung cancer mortality rate among underground miners was double the national average, but it was not elevated among Uravan residents who did not work in mines (18).

The era of cavalier attitudes and lax regulation of uranium mining and milling has ended. As a sign of the future, the first new uranium mill since the Cold War was recently approved in the Paradox Valley in Montrose County near Naturita, Colorado, about 50 miles south of Grand Junction, though not without controversy (19, 20). Naturita is the site of a legacy uranium mill that has been cleaned up by the DOE, but the new mill would be under strict regulatory control and should prove that milling sites can be operated with minimal environmental damage.

Subsidies for Nuclear and Renewables

The big problem for new nuclear power plants is that they are very expensive initially—$6 to $8 billion for each reactor—and utilities are hesitant to commit to such an expensive plant. The 2005 Energy Policy Act provided for $18.5 billion in construction loan guarantees for nuclear plants, so they can get financing at good rates. The first of these loan guarantees for $8.3 billion went to Southern Company in 2011 to add two reactors to its existing plant in Georgia. The loan guarantee is for 70% of the reactor cost, so the utility has substantial “skin in the game” (13). The DOE loan guarantees are similar to those for renewable energy, and they can’t exceed 80% of the cost of the project. Furthermore, the nuclear developer pays the cost of the loan guarantee and the full cost of administering the DOE loan program, which is not the case for wind and solar guarantees (14).

Since new nuclear reactors are being designed for a 60-year lifetime (and some even for 100 years), a good way to think about the huge initial investment is as a long-term mortgage on an expensive house. A very well-built house that will last for 100 years or more is going to cost more than an inexpensive house that may have to be rebuilt two or three times, but in the long run it is the better deal, even though you have to take out a bigger mortgage in the first place. After 30 years, though, the mortgage is paid off, and for the remaining years the cost is minimal—just upkeep and taxes. It is similar with a nuclear reactor compared to a solar or wind power project—the reactor will outlast several alternative energy projects but will cost far more upfront.

To be honest, nuclear power developers need these loan guarantees to commit to building the expensive power plants, especially until they prove they can build a few on time and at cost. This is one of Amory Lovins’s big complaints about nuclear power—he says it cannot attract private capital, in contrast to solar and wind (15). But this is disingenuous; solar and wind power projects do not attract private capital without subsidies either—witness the cries of alarm from the wind industry facing the possible end of the Production Tax Credit at the end of 2012. The market alone is unlikely to support either renewable energy projects or nuclear power projects because they are very expensive. But nuclear power alone has the potential to substantially reduce the CO2 emissions from coal used for baseload power, which neither solar nor wind can do. And, as I pointed out in Chapter 2, it is the most cost-effective of the alternative power sources per ton of avoided CO2. To me it seems like a good trade-off. And, of course, a loan guarantee does not mean that the money will be lost. Once a plant is built on time and at cost, the money is no longer at risk. Many nuclear power plants are owned by public utilities (that is the case with the Wolf Creek nuclear plant) rather than large businesses, and they are more responsive to public interests than those run by private industry, which are primarily responsible to shareholders. To my mind, it is a good thing for the government to be investing in its future needs. This is no different from long-term investments in highways, bridges, or even a space program.

One person’s subsidy is another person’s incentive. A comprehensive review of federal government incentives from 1950 through 2010 shows that incentives come in various forms: tax incentives, regulations, research and development (R&D), market activity, government services such as infrastructure, and direct disbursements. The total amount of energy-related incentives over 60 years was $837 billion. Tax incentives such as credits, exemptions, and deductions are the largest category, accounting for 47% of all incentives. Fossil fuels account for 70% of the total incentives, mostly through tax incentives, but coal also received a substantial amount for R&D. Hydropower accounts for 11% of the total, primarily through regulation of the electricity market. Nuclear power and renewable energy (mostly wind and solar) each accounted for 9% of the total incentives, and the remaining 1% went to geothermal. Most of the incentives for nuclear power were for R&D, with a much smaller amount for design regulation through the NRC and other governmental agencies, while most of the incentives for renewable energy were tax incentives, and about one-third were for R&D (16). The policy of the US government for decades has been to incentivize various forms of energy, and that is not likely to change. Reducing our emissions of CO2 is in the national and global interest, and incentivizing nuclear power and renewable energy is a valuable tool to help achieve that.

Primordial Terrestrial Radiation

Primordial terrestrial radiation accounts for a much larger share of the background radiation we are all exposed to. But what is primordial radiation and where does it come from? Primordial terrestrial radiation comes from extremely long-lived radioisotopes that were present when the earth was formed. The three principal primordial radioisotopes that contribute to terrestrial background radiation are uranium (238U), thorium (232Th), and potassium (40K) (2). 238U has a half-life of 4.5 billion years, 232Th has a half-life of 14 billion years, and 40K has a half-life of 1.3 billion years. Natural uranium also contains 0.7% 235U, which has a half-life of 0.7 billion years and, of course, is the isotope that is used in nuclear reactors. These elements are widespread in the earth’s crust but are much more prevalent in some areas than others. Uranium and thorium originally formed in stellar explosions known as supernovas and were spread throughout the universe. Potassium is formed in our sun and similar stars. When the earth coalesced out of interstellar matter, it contained these radioactive elements in its crust and in its core. The radiation from the primordial uranium and thorium accounts for much of the heat in the center of the earth that keeps it molten.
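The fractions of these isotopes surviving to the present follow from the exponential decay law; a minimal sketch using the half-lives quoted above:

```python
# Fraction of each primordial radioisotope remaining after the Earth's
# ~4.5-billion-year history: N(t)/N0 = (1/2) ** (t / half_life).
earth_age = 4.5e9  # years
half_lives = {"U-238": 4.5e9, "Th-232": 14e9, "K-40": 1.3e9, "U-235": 0.7e9}

for isotope, t_half in half_lives.items():
    remaining = 0.5 ** (earth_age / t_half)
    print(f"{isotope}: {remaining:.1%} of the original amount remains")
# U-238: 50.0%, Th-232: 80.0%, K-40: 9.1%, U-235: 1.2%
```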

Uranium and thorium are the beginning radioisotopes in a long series of decays that eventually end up as stable isotopes of lead. 238U and 232Th decay by emitting α particles, creating new radioisotopes that further decay by either α or β decay, along with emitting γ rays. The decay scheme of 238U is particularly interesting because it leads to the most important component of our background exposure—radon (Rn) (Figure 8.3). The arrows that move down and left represent α decay (4He nuclei), in which the atomic number changes by 2 and the atomic mass changes by 4. Arrows that move to the right represent β decay, in which a neutron is converted into a proton, changing the atomic number by 1 (see Chapter 5 for more details on radioactive decay processes). The half-lives are roughly represented by the size of the arrows. After several decays, 238U is converted into radium (226Ra), the radioisotope made famous by Marie and Pierre Curie. Radium α-decays to radon (222Rn), which has the unique distinction of being a gas, with a short half-life of 3.8 days.
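The radon-relevant branch of that series can be summarized compactly (a sketch; decay modes and approximate half-lives from standard references, with intermediate steps elided):

```python
# Simplified branch of the uranium-238 decay series leading to radon:
# (isotope, decay mode, approximate half-life).
chain = [
    ("U-238",  "alpha",  "4.5 billion years"),
    ("...",    "",       "several alpha/beta steps"),
    ("Ra-226", "alpha",  "1,600 years"),
    ("Rn-222", "alpha",  "3.8 days"),    # the radon gas that seeps into houses
    ("Po-218", "alpha",  "~3 minutes"),
    ("...",    "",       "beta-emitting daughters (minutes)"),
    ("Po-214", "alpha",  "fraction of a second"),
    ("Pb-210", "beta",   "22 years"),
    ("...",    "",       "two more decays"),
    ("Pb-206", "stable", "end of the series"),
]
for isotope, mode, half_life in chain:
    print(f"{isotope:7s} {mode:7s} {half_life}")
```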

Internal Primordial

These primordial radioisotopes are widespread in the earth’s crust. For example, granite in the mountains in Colorado can have relatively high levels of uranium and thorium. Other areas on earth can also have high levels of thorium, particularly the monazite sands in Brazil and India. Potassium is widespread around the earth, and it is essential for all living organisms. Elemental potassium consists of 93.3% 39K, 6.7% 41K, and 0.01% 40K, but only the 40K is radioactive. We get a dose from 40K primarily through the food we eat—bananas are particularly high in potassium, as are Brazil nuts and red meat. It is not too much to worry about, though. You would have to eat 600 bananas to get the same dose as a chest X-ray! Smaller internal doses come from uranium and thorium that can also be ingested from various foods. The average US dose from ingestion of radioisotopes, primarily 40K, is 0.29 mSv/yr.
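The banana comparison is easy to sanity-check with commonly cited approximate doses (assumed here: about 0.1 µSv per banana and about 0.06 mSv, i.e., 60 µSv, for a chest X-ray):

```python
# Order-of-magnitude check of the "600 bananas = one chest X-ray" claim.
dose_per_banana_usv = 0.1   # assumed ~0.1 microsievert per banana
chest_xray_usv = 60         # assumed ~0.06 mSv per chest X-ray

print(f"{chest_xray_usv / dose_per_banana_usv:.0f} bananas ~ one chest X-ray")
# 600
```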

External Primordial

The α particles from uranium and thorium decay do not contribute to an external dose because they cannot travel far in air and cannot penetrate through clothing or our skin. The principal dose comes from the γ rays that are emitted by radioisotopes in the decay series. The distribution of external background radiation varies widely throughout the United States and indeed the world. It is particularly high in mountainous regions of the West, and that is a major factor contributing to the higher dose in Colorado compared to coastal areas (Figure 8.4). For Colorado communities at elevations lower than 2,000 meters (6,600 ft.), the average terrestrial radiation dose is 0.79 mSv/yr, and for communities at elevations above 2,000 meters, the average terrestrial dose is 1.12 mSv/yr (4). The communities at higher elevations get a higher dose due to the closer association with granitic rock containing uranium and thorium. Of course, they also get a higher dose from cosmic rays. Shielding from houses will reduce this dose, so the actual dose depends on how much time you spend outdoors. The average actual dose for Colorado is 1.25 mSv/yr from cosmic rays and terrestrial radiation (5). This compares to 0.54 mSv/yr for an average US citizen (Figure 8.1). The dose in Colorado is adding up!

Figure 8.4 Background γ radiation from primordial isotopes: terrestrial γ-ray exposure at 1 m above ground. Source of data: US Geological Survey Digital Data Series DDS-9, 1993. source: Courtesy of the US Geological Survey.

Radon

We still have not gotten to the main source of background radiation that we are exposed to—radon. The average annual dose from radon is 2.28 mSv, but this also varies widely. Radon (222Rn) is formed from the α decay of radium (226Ra), but radon itself α-decays in 3.8 days to an isotope of polonium (218Po), which quickly α-decays through a series of β-emitting daughter products with half-lives of minutes to another isotope of polonium (214Po), which in turn α-decays in a fraction of a second to a longer-lived isotope of lead (210Pb) and finally to stable lead (206Pb) (Figure 8.3). The unique aspect of radon is that it is a neutral gas that is formed in rocks and soils from the uranium decay series. External exposure is not a problem with radon because the α particles cannot penetrate the skin. Instead, radon contributes an internal dose to the lungs. Radon itself is not particularly harmful because it is breathed in and out and doesn’t remain in the lungs. The danger from radon is actually the polonium daughter products, which are also α emitters and are charged atoms (ions). They can readily attach electrostatically to small particles of negatively charged dust that can get trapped in the lungs. The α particles emitted by the polonium isotopes then irradiate cells in the lung tissue, and damage to these cells can lead to lung cancer (6). Recall from Chapter 7 that α particles are 20 times more damaging per Gy than γ rays and that the lungs are among the most sensitive tissues in the body; thus α-particle irradiation of the lungs is particularly dangerous.

Another unique aspect of radon is that the danger is not primarily from exposure outdoors but rather in our houses. Radon is formed as a gas in the soil and can percolate up to the surface through cracks and fissures in the soil. That is not much of a problem outdoors because it dissipates in the atmosphere, but if it leaks into a house through basement cracks and openings, it can remain trapped in the house and build up to dangerous levels. Many communities now require radon measurements in houses before they can be sold.

The level of radon is measured by the concentration of radioactivity in a cubic meter of air (Bq/m3). It is not simple to convert this to a dose in mSv/yr, so it is usually just specified as Bq/m3. The average concentration of radon in houses in the United States is 45 Bq/m3; the median is 24 Bq/m3, but the distribution is not normal—there are many houses with low levels but a few with much higher levels of radon, which skews the distribution. Furthermore, levels can vary substantially from one house to another in the same community. The EPA recommends that action be taken to mitigate radon if it is above 150 Bq/m3 (4 pCi/L). This can be done by sealing cracks in basement areas and venting the soil to the atmosphere.

Figure 8.5 EPA map of radon zones in the US. Zone 1 is > 150 Bq/m3, Zone 2 is 75-150 Bq/m3, and Zone 3 is < 75 Bq/m3. source: Courtesy of the US Environmental Protection Agency.
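The parenthetical conversion checks out, using the definition 1 Ci = 3.7 × 10^10 Bq (a minimal sketch):

```python
# Converting the EPA radon action level from pCi/L to Bq/m3.
bq_per_pci = 0.037      # 1 pCi = 0.037 Bq
liters_per_m3 = 1000

action_pci_per_l = 4
print(f"{action_pci_per_l * bq_per_pci * liters_per_m3:.0f} Bq/m3")
# 148, rounded to 150 in the text
```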

As you might expect, radon concentration varies by region of the country and is generally higher in mountainous areas and lower in coastal areas (Figure 8.5). The overall average radon dose to a person in the United States is 2.28 mSv/yr. In Colorado the average radon dose is 2.87 mSv/yr (5) and in Fort Collins it is about 2.94 mSv/yr (7). In Leadville the average radon dose is 3.44 mSv/yr (5). Once again, we get a higher dose than elsewhere in the country, and people in Florida or Texas get a lower than average dose. According to the Texas State Health Department, the average radon dose in Texas is about 1 mSv/yr.

The US National Academy of Sciences did an extensive study of the health effects of exposure to radon (the BEIR VI report) in 1999. It analyzed 11 different epidemiological studies of 68,000 underground miners who were exposed to radon, with 2,700 deaths from lung cancer. One of the major difficulties with these studies is the confounding problem of smoking, which is quite prevalent in miners. Radon and smoking work in a synergistic way, with a much greater risk of getting lung cancer after exposure to both of these carcinogens than after just radon or just smoking. Using two different dose-risk models, the report estimates that 10-15% (15,400 to 21,800) of lung cancer cases annually in the United States are due to indoor radon exposure. However, the large uncertainties suggest that the number of cases could range from about 3,000 to 33,000. The report also estimates that if all houses above the action level of 150 Bq/m3 were mitigated, about one-third of the radon-attributable lung cancer cases would be avoided; that is a reduction of 4% of all lung cancer cases (6). Since about 95% of cases of lung cancer occur in past or present smokers, a far more effective approach to reducing lung cancer would be to convince people to quit smoking!
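The last step of that estimate is simple arithmetic (a quick sketch):

```python
# If radon causes 10-15% of lung cancers and mitigation avoids about
# one-third of the radon-attributable cases, the overall reduction is ~4%.
for radon_share in (0.10, 0.15):
    print(f"{radon_share:.0%} x 1/3 = {radon_share / 3:.1%}")
# 3.3% and 5.0%, i.e., roughly 4%
```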

The linear no-threshold (LNT) model is generally used to estimate the possibility of getting lung cancer from exposure to radon. But is it really the best model? That question engenders strong debate among radiation biologists. A recent assumption-free statistical analysis of 28 different scientific papers on lung cancer incidence from radon concluded that there is no evidence that radon causes lung cancer below a level of 838 Bq/m3, over five times the level at which the EPA recommends mitigation. The LNT model did not prove to be a good fit to the data (8). Thus, it is likely that the EPA recommendations for mitigation are extremely conservative.

MYTH 2: THERE IS NO SOLUTION TO THE NUCLEAR WASTE PRODUCED BY NUCLEAR POWER

Nuclear waste disposal is often considered to be the Achilles’ heel of nuclear power. According to the anti-nuclear crowd, geological storage of nuclear waste condemns future generations to high levels of radiation. In reality, there is no crisis for nuclear waste, but there is a need for action. The spent nuclear fuel (SNF) is currently being stored in cooling pools and in dry cask storage at existing reactors. Cooling pools were not really expected to contain the spent nuclear fuel for decades, because the United States was supposed to develop a long-term storage facility at Yucca Mountain to open in 1998—which, of course, did not happen. There is one upside to keeping SNF in cooling pools for years, though: over time, heat and radioactivity greatly decrease, making it easier and safer to handle. A very good intermediate solution is to move the SNF from cooling pools after several years and put it into dry cask storage, where it could remain for a century or so. There could be several consolidated dry cask storage facilities in the United States, or the casks could be maintained at each reactor. In fact, both are likely. The Blue Ribbon Commission appointed by President Obama to study nuclear waste disposal recommended that consolidated dry cask storage be implemented in the near future, especially for “stranded fuel” from closed nuclear power plants. There would still be a need to store SNF at the reactor sites for an interim period (20).

And yet a long-term nuclear disposal site is still necessary. Yucca Mountain was born out of political gun-slinging and was ultimately doomed for political reasons. But that doesn’t mean that it would not be a perfectly satisfactory and safe disposal site. After a few hundred years, the fission products will have decayed to the radioactive equivalent of the uranium ore the nuclear fuel originally came from. The longer-term concern is from plutonium isotopes and other transuranics. Plutonium is not really that much of a concern because it is readily adsorbed by clay and is essentially insoluble in water. It can be easily contained for thousands of years. The main problem would potentially be neptunium (see Chapter 9 for details), but that can also be contained for tens of thousands of years, according to modeling studies done by scientists studying Yucca Mountain.

The allowable dose limit of 0.15 mSv per year for the next 10,000 years and 1 mSv per year for the next million years is well below normal background levels of radiation. The radiation background in Amargosa Valley near Yucca Mountain is 1.3 mSv per year, about one-third of the average background in Colorado (4.5 mSv). So even if the radiation dose to the public more than doubled after 10,000 years, to 3 mSv per year, it would still be below the average dose in Colorado. And it is not as if the radiation would emanate from Yucca Mountain. There would only be exposure if people pumped water out of the ground that somehow became contaminated with radionuclides. In the end, the controversy about Yucca Mountain is really a tempest in a teapot, kept alive by political considerations.

There is another solution that minimizes the problems with long-term disposal of SNF and that is to reprocess it, as is done in France and other countries. By separating out the fission products—especially cesium and strontium—and vitrifying them, their disposal becomes much simpler and the radioactivity decays to background levels after a few hundred years. The plutonium and uranium can then be used in mixed oxide fuel and burned in existing reactors. This has the advantage of minimizing the waste storage problem and also getting about 25% more fuel out of the SNF. The United States has been opposed to this approach since Presidents Ford and Carter shut down the planned reprocessing plant in South Carolina, but it has been done successfully for decades in France. This is certainly an option for the future. Ultimately, the plutonium isotopes that build up even in the MOX fuel could be burned up in a fast neutron breeder reactor.

So the truth is that solutions for spent nuclear fuel are available and do not pose a risk to humans. The Blue Ribbon Commission appointed by President Obama lays out the issue very clearly:

The problem of nuclear waste may be unique in the sense that there is wide agreement about the outlines of the solution. Simply put, we know what we have to do, we know we have to do it, and we even know how to do it. Experience in the United States and abroad has shown that suitable sites for deep geologic repositories for nuclear waste can be identified and developed. The knowledge and experience we need are in hand and the necessary funds have been and are being collected. Rather the core difficulty remains what it has always been: finding a way to site these inherently controversial facilities and to conduct the waste management program in a manner that allows all stakeholders, but most especially host states, tribes and communities, to con­clude that their interests have been adequately protected and their well-being enhanced—not merely sacrificed or overridden by the interests of the coun­try as a whole (20).

How Much Is There?

Given all of these adverse health and environmental consequences of coal (the “bad” and the “ugly”), why do we still depend so much on coal? Is there any “good”? The good is that coal is the most abundant fossil fuel on earth, and it has a relatively high energy density. The United States has about 29% of the world’s reserves (the Saudi Arabia of coal), followed by Russia with 19%. China, India, and Australia also have large deposits, while the Middle East has very little coal (21). This wide distribution means that many major economies have ample resources without depending on imports. The World Coal Institute estimates that the earth’s coal reserves will last at least 120 years at the present rate of consumption. The abundance of coal makes it the cheapest form of energy for producing electricity. The other major reason that we depend so much on coal is that very powerful political interests from coal states have lobbied Congress for years to maintain a supportive environment for coal.

Figure 3.2 Mountaintop removal and valley fill in West Virginia. source: Photo by Kent Kessinger, courtesy of Appalachian Voices. Flight courtesy of SouthWings.

How Dangerous Is Radiation?

Many people think that radiation is extremely dangerous. Helen Caldicott, a long-time anti-nuclear activist, claims that “a single mutation in a single gene can be fatal,” meaning it could cause a fatal cancer (1). She is a physician and ought to know better. Plutonium is frequently described as the most dangerous element on earth. Helen Caldicott also says: “Plutonium is so carcinogenic that the half ton of plutonium released from the Chernobyl meltdown is theoretically enough to kill everyone on earth with lung cancer 1,100 times if it were to be distributed uniformly in the lung of every human being” (1). This is rather like saying that a man theoretically has enough sperm to impregnate every woman on earth. The big problem is distribution! Nobody actually died from plutonium released in the Chernobyl accident, and no man can impregnate all the women in the world!

These kinds of hypothetical scare-mongering statements get a lot of press, but they are far removed from the reality of the risks of radiation. So what are the actual risks?

CHERNOBYL, APRIL 26, 1986

How the Accident Happened

The worst nuclear power reactor disaster that the world has known began in the late evening of April 25, as poorly trained workers at the Chernobyl nuclear power plant in northern Ukraine began an unauthorized test while they were doing a scheduled shutdown of unit 4. They wanted to see how long the slowing turbine could provide power after the reactor was shut down, and they shut off the emergency core-cooling system since it would draw power. This was the first of many major safety violations. The next was to withdraw most of the control rods to increase power as it fell to dangerously low levels. The reactor was supposed to be running at 700-1,000 MW thermal (MWt) for the test, but it actually dropped to 30 MWt for unknown reasons. At 1:03 a.m. on April 26, they activated the cooling water pumps, but this, combined with the low power, required manual adjustments, so they turned off the emergency shutdown signals. At 1:22 a.m. they shut off the trip signal just as it was about to shut down the reactor. At 1:23 a.m. the test began, but the reactor was in a very unstable state, so any increase in power could cause a rapid surge due to a design characteristic of the reactor called the positive void coefficient. As the power rose, water turned to steam, reducing the absorption of neutrons and causing a further rapid increase in power.

The operators tried to insert control rods, but that actually increased the reactivity because the rods had graphite at their ends, which acted as a moderator to slow down neutrons and increased the rate of fission. Power surged to 100 times the operating capacity of the reactor; the uranium fuel disintegrated and caused a huge steam explosion that blew the 1,000-ton lid of the reactor aside. A second explosion a few seconds later, probably from hydrogen gas released by the Zircaloy cladding of the fuel rods, blew through the reactor walls and threw burning blocks of graphite and fuel into the compound. A plume of radioactive debris rose 10 kilometers into the atmosphere, and the reactor spewed radiation over the next 10 days as fires continued burning (12, 13). It is often stated that the pure graphite core itself burned, but that is somewhat controversial since “tests and calculations show that it is virtually impossible to burn high-purity nuclear-grade graphites” (13). However, in 1957 the Windscale graphite-moderated nuclear reactor in Sellafield, England, caught fire and released more radiation than any other accident before Chernobyl (1). The United States had one graphite-moderated, helium-cooled reactor in Fort St. Vrain, Colorado, but it was closed down in 1989 and converted to a natural gas plant (1), so no US reactor could have the kind of fire that happened at Chernobyl.

While the operators violated a number of safety procedures, the design of the reactor was also at fault. This Soviet-made reactor was of a type called RBMK, which was unique in the world. It was designed both to produce power and to produce plutonium for making bombs. These reactors have a graphite core that serves as the moderator to slow down neutrons, with channels for water to cool the core and produce steam. The reactor was of a general type called a boiling water reactor (BWR), as contrasted with the PWR used at Three Mile Island. There is only one cooling loop, so the water that goes through the reactor core also goes through the turbines to turn the generator. This combination of a graphite moderator and water cooling is dangerous. Water actually absorbs some of the neutrons and slows down the fission reaction, but if the water turns to steam, it cannot absorb as many neutrons, so the fission reaction proceeds more rapidly. This is called the “positive void coefficient” and was instrumental in causing the accident. When the reactor started to get out of control, it turned the water into steam, which increased the reactivity, which turned more water to steam, and so on, in a positive feedback loop. The graphite on the ends of the control rods also made things worse, since graphite is a moderator (it slows down neutrons rather than absorbing them). The initial effect of inserting the control rods was actually to increase reactivity before the control rods began to absorb the neutrons and shut down the reaction. As a result of these factors, the reactor quickly became uncontrollable. The RBMK reactors were the only ones in the world designed like this (14).
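To make the feedback loop concrete, here is a toy numerical sketch (not a reactor model; the coefficient values are arbitrary illustrative numbers) showing how the sign of the void coefficient decides whether a small power excursion dies out or runs away:

```python
# Toy feedback loop: the sign of the void coefficient determines whether
# a small power excursion damps out or runs away.
def evolve(power, void_coeff, steps=10):
    for _ in range(steps):
        excess_void = power - 1.0         # steam voids above equilibrium (toy)
        reactivity = void_coeff * excess_void
        power *= 1.0 + reactivity         # positive reactivity raises power
    return power

print(f"{evolve(1.05, +0.5):.2f}")  # positive coefficient (RBMK): ~94 and climbing
print(f"{evolve(1.05, -0.5):.2f}")  # negative coefficient: settles back to ~1.00
```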

Another major fault with the reactor design was that, in contrast with other BWR reactors, there was no massive concrete containment structure that could contain a core meltdown, such as at TMI. Instead, the reactor had a 1,000-ton lid that could be removed to change fuel while the reactor was actually operating. When the reactor went supercritical, the steam explosion blew off the lid and blew apart the relatively flimsy containment building. This was a steam explosion, not a nuclear explosion. It is impossible for a power reactor to explode like an atomic bomb because the concentration of 235U is not high enough for that to ever happen.

The fires continued for 10 days as the firefighters dumped sand, lead, clay, dolomite, and boron on the ruined reactor in quick bombing raids from helicopters and poured hundreds of tons of water per hour to quench the fires and radioactivity. People were evacuated from the nearby city of Pripyat, where 45,000 people lived, and from a 30-kilometer exclusion zone. By October, a temporary concrete sarcophagus was built to enclose the entire demolished reactor unit 4 so the other reactors (units 1-3) could continue to operate. Reactor 2 was shut down in 1991 after a fire, and reactors 1 and 3 were permanently shut down by the year 2000 on orders of Ukrainian president Leonid Kuchma. A new structure—the New Safe Confinement—is now being built to cover the reactor and the temporary sarcophagus. It is scheduled for completion in 2015 (12, 13).

WIND

I grew up on a farm in Kansas. One of my fondest memories is climbing up the rickety ladder on our windmill tower to the platform just below the gearbox and wind vane to view the fields spreading out like a huge checkerboard in every direction. The wind pumped water from underground to fill the stock tank to water the dairy cows and sheep that we raised. Indeed, the wind was critical for providing water so that Kansas and other states could be settled and farmed. Wind has been harnessed to do work for thousands of years. Who has not been charmed by the old Dutch windmills used to grind wheat into flour and do other work? Can wind now provide the clean electricity to drive a modern economy?

Once or twice a year, my wife and I drive from Colorado back to Kansas to see relatives and to visit her farm. Sprouting out of the prairie are enormous towers for wind generators that put the old windmills to shame (Figure 4.4). The most prominent and largest of these is the Smoky Hills Wind Farm about 20 miles west of Salina just north of Interstate 70. This wind farm has 155 turbines spread over an area of 26,000 acres and can produce 250 MW at peak capacity (36). The enormous blades on wind turbines are typically between 125 and 150 feet in length and the towers are 225 to 350 feet high, for a total height approaching 500 feet (37).
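A quick division gives the implied size of each turbine, consistent with the typical ratings mentioned below:

```python
# Implied average turbine size at the Smoky Hills Wind Farm.
print(f"{250 / 155:.1f} MW per turbine")   # ~1.6 MW, typical of modern turbines
```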

Figure 4.4 Wind farm in Kansas—the old and the new. source: Photo by author.

Wind energy ultimately comes from the sun because it depends on the temperature and pressure differences created by the sun shining on the oceans and land. The wind turns the blades, which drive a gearbox that runs a generator to produce electricity. The gearbox and generator sit in a bus-sized nacelle perched atop the tower. The energy of the wind varies with the cube of the wind speed, so the electrical power output increases dramatically as the wind speed rises toward the turbine’s rated wind speed, typically 11 to 14 meters per second (25 to 31 mph), and then remains flat until the cut-out speed. For example, if the wind speed doubled from 10 to 20 mph, the electrical power output would increase by eight times! Of course, the output also drops off rapidly as the wind falls below the rated wind speed. This makes wind power fluctuate rapidly as wind speeds constantly vary. Wind can also blow too hard: when wind speeds exceed about 60 mph (the cut-out speed), the turbines shut down to avoid permanent damage. The rated output of most wind turbines is 1.5-2.5 MW, the power they produce at the rated wind speed (38).
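The cube law is easy to verify numerically (a minimal sketch of the v^3 scaling):

```python
# Below the rated speed, wind power scales with the cube of wind speed.
def power_ratio(v1, v2):
    return (v2 / v1) ** 3

print(power_ratio(10, 20))   # doubling the speed: 8x the power
print(power_ratio(10, 5))    # halving it: 1/8 the power
```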

As with solar energy, wind energy is not equally distributed across the United States or the world. Wind is stronger and less variable as you get higher above the earth’s surface, so wind maps are usually based on the wind speed at 50 meters (160 ft.) above the ground. Even better wind resources are available at 75 or 100 meters above the earth, a height that wind blades reach, especially on newer towers. Wind power is classified in a range from 1 to 7 by the NREL, based on the range of wind speeds. Areas that have a wind power classification of 4 or higher are areas where it might make sense to use wind energy to produce electricity. The best wind resources are in a large band across the central plains of the United States and areas of Wyoming and Montana (Figure 4.5). As you might expect, 9 of the top 10 states in terms of wind energy as a percentage of total electricity portfolio are in this band, the sole exception being Oregon (39). Ridges of mountains are also good locations for wind power, especially in the Northeast. The West and the East have poor wind resources for onshore wind. Both coasts of the United States have excellent offshore wind power resources. The West Coast resources are not very amenable to development, however, because of the rapid drop-off of the continental shelf, which makes the water too deep for wind turbines. Since the best onshore wind resources are where the population density is low (compare the wind map to the map of night lights), the problem of long-distance transmission rears its ugly head, as it does for solar power. The plentiful wind resources offshore have their own issues that will be discussed later.

Installed capacity of wind energy has been growing rapidly in both the United States and the world in recent years. About 10 GW was installed in the United States in 2009, but installations dropped by over half in 2010 before picking up again in 2012. The total installed capacity by mid-2012 was 49.8 GW in the United States, compared to 67.8 GW in China—the world leader in wind energy—and 30 GW in third-place Germany (40). Even so, the fraction of electricity generated from wind was only 3.6% in 2012 in the United States (see Chapter 2) and 9.2% in Germany by mid-2012 (11). Denmark has the highest percentage of its electricity generated by wind power, currently about 19%, but wind actually satisfies only about 10% of its electrical demand (41).