Tumor Suppressor Genes

Oncogenes are not the end of the story of cancer, however. Simply activating cellular oncogenes does not inevitably lead to cancer. If oncogenes are stepping on the gas in the cellular automobile, are there brakes? Indeed there are. After the breakthrough in growing tumor cells in culture from a cervical cancer in Henrietta Lacks (20)—the famous HeLa cells—scientists were able to study characteristics of cancer cells more readily. One remarkable experiment was to take cancer cells and fuse them with normal cells, resulting in hybrid cells that would have characteristics of both types of cells. The important question was whether the cells would behave like cancer cells or like normal cells. Surprisingly, the hybrid cells were no longer malignant—something in the normal cells was able to override the genetic changes in the cancer cells that made them malignant. The unidentified factor in the normal cells was called a tumor suppressor gene (17), but the tools were not yet ready to discover what it was, though it could be traced to a location on normal human chromosome 13. Whatever it was, it acted like the brakes on the cellular automobile.

In 1969 a geneticist named Alfred Knudson moved to the MD Anderson Cancer Center in Houston, Texas, to study childhood cancer. He became interested in a rare tumor of the retinas of children known as retinoblastoma. Curiously, the tumors most frequently occurred in very young children under age one who were in families with a history of retinoblastoma, but sometimes they occurred in two- to four-year-old children who had no family history of the cancer. Knudson worked out the genetics of the disease and proposed that the infants who had the familial form of retinoblastoma inherited a mutant copy of a gene called Rb. If the infant then got a mutation in the other copy of Rb, it would develop retinoblastoma very early. Older children who got the sporadic form of retinoblastoma did not inherit a mutated copy of Rb, so they would have to get two separate mutations in Rb to get the retinoblastoma tumors, which is much less likely (16). The surprise was that a single normal copy of the Rb gene could suppress the tumor, so both copies would have to be deleted or mutated to allow the retinoblastomas to grow.
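Knudson's argument is essentially probabilistic: one rare event is far more likely than two independent rare events striking the same cell. Here is a minimal Python sketch of that logic; the mutation rate and cell count are invented illustrative assumptions, not measured values.

    # Toy version of the two-hit logic. The per-cell, per-year chance of
    # mutating one Rb copy (p_hit) and the number of at-risk retinal cells
    # are hypothetical numbers chosen only to show the scale difference.

    p_hit = 1e-6            # assumed chance of losing one Rb copy, per cell-year
    opportunities = 1e7 * 3 # assumed at-risk cells x early-childhood years

    # Familial case: one copy is already mutated at birth, so one hit suffices.
    p_familial = 1 - (1 - p_hit) ** opportunities

    # Sporadic case: the same cell must independently lose both copies.
    p_sporadic = 1 - (1 - p_hit ** 2) ** opportunities

    print(f"familial (one hit needed) : {p_familial:.3f}")   # near certainty
    print(f"sporadic (two hits needed): {p_sporadic:.1e}")   # far smaller

With these assumed numbers the familial risk is close to 1 while the sporadic risk is on the order of 1 in 30,000, which is why the familial form shows up in infancy and the sporadic form only rarely and later.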

This Rb gene turned out to be the same type of gene that could suppress the malignant characteristics of the hybrid cultured cells—a tumor suppressor gene. It does this by halting cells in the cell cycle before they enter the synthesis phase where DNA gets replicated, at a location known as a checkpoint. If the DNA of cells is damaged, the normal function of Rb is to block cells at the checkpoint until they can repair the damage. If there is no functional Rb, as in the case of the children with two bad copies of the gene, then the cells proceed through the cell cycle and divide, even though they contain DNA damage.

The retinoblastoma gene turned out to be the tip of the iceberg, as more than 20 tumor suppressor genes are now known (21). These tumor suppressor genes act as anti-oncogenes since they have the capability to halt the hyperactive signaling of the oncogenes. As molecular tools became available to analyze the molecular characteristics of cells from human tumors, it became clear that many different tumors had mutations in the Rb gene. Another tumor suppressor known as p53 became famous as the “guardian of the genome” because of its involvement in halting cell growth in the cell cycle, facilitating repair of DNA damage, and inducing a form of cell suicide if the damage could not be repaired (22). Mutations in the p53 gene (called TP53) are found in more than half of all human tumors. It became clear that tumor suppressor genes had very powerful influences on whether activated oncogenes could cause a cancer.

IN SITU RECOVERY

Underground mining in the Colorado Plateau had virtually ceased by the 1980s but mining has had a resurgence in recent years as the future for nuclear power has become rosier. In Wyoming, uranium mining was done in large open-pit mines but there are currently no operating open-pit mines in the US. A newer form of mining known as in situ recovery (ISR) or in situ leaching (ISL) is much less hazardous and environmentally damaging and has become the dominant form of mining in the United States. ISR accomplishes both the mining and the milling procedures in one step at the site of the uranium mine but does not create mill tailings.

How does ISR work? To answer that, we need to understand the geology of the uranium deposits. In the United States, most deposits of uranium occur in sandstone deposited by ancient meandering streams. Weathering of mountains containing uranium-bearing granite deposited the uranium in the sands of the streams that were later covered and formed into sandstone. If the sandstone was trapped between impermeable layers of shale, the conditions were set for trapping uranium in an underground aquifer. Uranium is highly soluble in water in its oxidized form. When rainwater with dissolved oxygen flowed into the sandstone aquifer containing uranium, it dissolved the uranium. As the water containing the oxidized uranium slowly percolated through the aquifer, it would accumulate in the presence of naturally occurring reducing agents such as sulfides (pyrite [FeS2], for example) or organic material. The uranium would then precipitate out to form what is known as a roll front—a convex surface containing a high concentration of uranium (Figure 11.1) (21).

ISR essentially reverses the process by which the uranium was deposited. The uranium is in an aquifer under chemical-reducing conditions. By pumping oxygenated water and sodium bicarbonate (baking soda) into the aquifer, the oxidized uranium will go back into solution and can be recovered. A typical mining configuration consists of a recovery well surrounded by four injection wells. Water containing the lixiviant (oxygenated water and sodium bicarbonate) is pumped into the injection wells and is drawn up through the recovery well (21).

Figure 11.1 The dark area shows a roll front of uranium ore.

source: Reproduced by permission from the Wyoming Mining Association: www.wma-minelife.com/uranium/uranium.html.

The water containing dissolved uranium is pumped into a central processing plant, where it passes through ion exchange columns that extract the uranium from the solution, and the water is reused in the injection wells. About 1% of the water is bled off and stored in a holding pond, so there is always more water being recovered than is being pumped into the injection wells. This creates a negative pressure in the uranium aquifer to prevent dissolved uranium from leaving the aquifer. The ion exchange columns are eluted with a solution that extracts the uranium into a yellow slurry that is dried to form yellowcake, exactly analogous to the process in a milling plant (22, 23). In some cases—depending on the size of the mining operation—the ion exchange columns loaded with uranium are not processed at the mine but are shipped to a central milling facility where the yellowcake is produced.
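The bleed-off is simple mass balance; here is a minimal sketch of why it keeps the ore zone at negative pressure. The ~1% bleed fraction comes from the text, but the circulation rate is an invented figure for illustration.

    # Toy water balance for the ISR loop described above. Because ~1% of
    # the recovered water is diverted to the holding pond, injection is
    # always slightly less than recovery, so groundwater flows inward
    # toward the well field rather than outward.

    recovered_lpm = 10_000      # assumed recovery flow, liters per minute
    bleed_fraction = 0.01       # ~1% bled off to the holding pond (from text)

    bleed_lpm = recovered_lpm * bleed_fraction
    injected_lpm = recovered_lpm - bleed_lpm

    print(f"recovered: {recovered_lpm:,.0f} L/min")
    print(f"bled off : {bleed_lpm:,.0f} L/min")
    print(f"injected : {injected_lpm:,.0f} L/min (net inward draw)")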

Just east of Fort Collins in the grasslands lies an underground formation that contains uranium in a roll front that is amenable to ISR. A Canadian uranium mining company, Powertech, has proposed to build an ISR facility to recover the uranium, though as of this writing they are not actively pursuing the mining option (24). The mining proposal raised a firestorm of concern among residents in the vicinity of the proposed mine, centered in the small town of Nunn. Making use of a caricature of radiation—that it makes you glow—they developed an anti-uranium-mining website called NunnGlow (www.nunnglow.com) and sprinkled the area with yellow signs saying “no to uranium mining in Colorado.” So here is the conundrum—people want to have clean energy but the NIMBY principle (Not In My BackYard) or, in this case, NIMS (Not In My State), means they don’t want it anywhere near where they live. Neighbors are opposed to a proposed wind farm north of where I live and to in situ uranium mining east of here. The huge Niobrara oil and gas field in eastern Colorado is being fracked to get natural gas, a fossil fuel that emits less carbon dioxide (CO2) than coal when it burns, but people along the Front Range of Colorado are opposed to it, too. Since nuclear power produces 20% of the electricity in the United States and contributes very little CO2, it is a clean energy source, yet if you can’t mine uranium, you can’t have nuclear power.

But do the citizens in the area have a right to be concerned? To be honest, few people, including myself, would be thrilled to have an ISR plant next to them because of the initial drilling of wells. No one likes industrial development in their neighborhood, but I would far rather have an ISR facility near me than a natural gas field that has far greater impact on roads from truck traffic and on water supplies. Once the ISR wells are drilled, there is very little disturbance of the area, only wellhead covers and a central facility containing the ion-exchange columns (Figure 11.2). The uranium extraction and recovery phases of an ISR mining operation usually take two to five years (25).

The main concern is that the mining would contaminate an aquifer used for agricultural or human use. However, the aquifer containing the uranium cannot be used for agricultural or human purposes anyway because it is naturally contaminated with uranium, radium, radon gas, and other elements such as vanadium and lead. The aquifer containing the uranium lies between layers of impermeable shale, so water can’t migrate into a lower aquifer. Monitoring wells are drilled outside the mined area, so any excursions of uranium into the outlying aquifer can be detected and dealt with simply by pumping more water out of the collection well. This creates a negative pressure on the surrounding aquifer and draws the contaminated water back into the well site. Also, there are thousands of existing water wells in the West and Midwest with uranium concentrations that exceed the EPA standard of 30 micrograms per liter (µg/L) because of the natural presence of uranium (26), yet these high levels have not caused apparent health problems. There are very few scientific studies—and no reliable ones—on this subject, however.

Figure 11.2 Cameco Corp ISR mine near Douglas, Wyoming.

source: Photo courtesy of Dr. Les Fraley.

The US EPA regulates underground injection wells that are used in ISR mining and will not issue a permit unless stringent requirements are met. If the mine operator is injecting into an underground source of drinking water (USDW), they must get an aquifer exemption. An exempted aquifer must not currently serve as a source of drinking water and must not be expected to serve as a source of drinking water in the future (27). Besides requiring an aquifer exemption, the EPA specifies the well construction materials and requires casing and cementing to prevent any flow of water between the uranium-containing aquifer and any USDW aquifer. Monitoring wells must be drilled in the aquifer and in aquifers above and below the ore zone. If an excursion of uranium in solution occurs in the region of the monitoring wells, the recovery wells in the vicinity can be pumped to create a lower pressure within the producing aquifer and draw water back into the well field zone and away from the excursion. Finally, the operators must properly plug the wells after mining is completed (28, 29).

The bleed-off water contains a low concentration of radium (226Ra) that goes into the holding pond and must be disposed of properly according to Nuclear Regulatory Commission (NRC) or state regulations—the same regulations that govern conventional mill tailings and the required removal of uranium and other radionuclides that are often found in public drinking water systems. In some cases, the water can be injected into deep wells in an aquifer that is unsuitable for human consumption. The other radionuclide of interest is radon (222Rn), which is dissolved in the leach water and is released into the atmosphere, where it dissipates harmlessly but may need to be vented from the processing building (21). Overall exposure to the general public must be less than 1 mSv/yr at the boundary of the ISR facility, a dose that has no health consequences and is less than a quarter of natural background radiation in Colorado (see Chapter 8).

The overall operation of ISR facilities is governed by the NRC. The majority of states, including Colorado, are Agreement States that have formal agreements with the NRC to regulate radioactive materials through state agencies. They must meet or exceed the NRC regulations. Colorado drafted such stringent conditions that it may be impossible to have any ISR facilities in the state, leading Powertech to sue the state and, in fact, to place the project on hold, at least for now (24, 30).

ISR mining has been done for over 30 years in Texas, but ISR facilities also exist in Wyoming and Nebraska and are licensed in New Mexico (31). The NRC studied ISR well fields in Wyoming and Nebraska that have been restored after recovery of uranium. While the levels of some water parameters such as alkalinity, magnesium, manganese, sodium, lead, and radium did not return to exact baseline levels in some of the well fields,

for the approved restorations, the impacts to groundwater in the exempted aquifer met all regulatory standards for the state or EPA UIC (underground injection control) program, met the quality designated for its class of use prior to ISR operations, have been shown to decrease in the future due to natural attenuation processes, and have been shown to meet drinking water standards at the perimeter of the exempted aquifer. Therefore, the impacts of the exempted aquifer for each of the approved restorations do not pose a threat to human health or the environment. (32)

The NRC also studied the frequency of excursions from the ore site and found that, while there was a small frequency of excursions, they were adequately controlled by the pumping and injection process. Finally, the study analyzed well integrity failures. These have occurred infrequently and have not posed any threat to the environment or to human health.

You don’t just have to take the word of the NRC for it, though. Scientific epidemiological studies have been done to determine whether ISR mining and milling operations cause cancer in populations surrounding them. The region in the United States with the longest experience with ISR is Karnes County, Texas, where 40 mines and 3 mills operated beginning in the 1960s. The principal radiation concerns from mining and milling come from uranium, radium, and radon. Radium accumulates in the hard bone surfaces and can cause bone cancer, but recall that the tissue weighting factor of bone is 0.01, so it is very insensitive to radiation-induced cancer (see Chapter 7). Uranium can be inhaled into the lungs and cause lung cancer, but it can also be taken up by body tissues, mostly bone, kidney, and liver. Radon is a gas and can cause lung cancer, as it did in some of the Navajo miners. A cancer mortality study covering 50 years of potential exposure to uranium, radium, and radon from mining and milling activities found that there was no increase in cancer mortality rates for lung, liver, kidney, or bone cancer (or any other cancers) in Karnes County compared to control counties (33). In fact, the cancer mortality rates in both Karnes County and the control counties were lower than the overall cancer mortality rate in the United States. This is consistent with the previously mentioned studies in Montrose County, Colorado, which also showed that milling and mining did not cause elevated cancer mortality among the population, with the exception that underground uranium miners who smoked had higher lung cancer rates (17, 18).

So, in spite of the concerns raised about ISR being a health risk or contaminating aquifers, there is no evidence to support any adverse health effects on the environment or on people in areas surrounding ISR facilities. This is a much more efficient method of obtaining uranium than conventional mining, it does not produce mill tailings that have to be carefully managed, and it uses far less energy (and hence produces less CO2) than conventional mining. The alternative is to depend on coal mining and fracking for natural gas, which can be far more damaging to the environment.

ADVANCED REACTOR TECHNOLOGY

The first power reactors (known as Generation I reactors) were low-power prototypes built in the 1950s and 1960s to prove the technology; not one of them is operating today in the United States. The reactors built in the 1970s and 1980s were of two main types—pressurized water reactors (PWR) and boiling water reactors (BWR)—and will be discussed in more detail in Chapter 10. These reactors are known as Generation II reactors (17). The basic design of PWRs was developed by Admiral Rickover for the US Navy’s nuclear reactor-powered submarines. The Wolf Creek Nuclear Plant is a PWR, as are 69 of the 104 reactors operating through the end of 2012. The other 35 reactors are BWRs that were originally designed at Argonne National Laboratory. Either type of reactor works perfectly well. The preference just depended on the company that built them—Westinghouse built PWRs and General Electric built BWRs. This continued a competition between Thomas Edison (the founder of General Electric) and George Westinghouse (the founder of Westinghouse) that began in the 1890s. Edison developed the electric light bulb, but he built his electrical system on direct current (DC). Westinghouse put his money and prestige on alternating current (AC), based on the patent of Nikola Tesla, a brilliant but eccentric Serbian-American electrical engineer and inventor. Westinghouse ultimately won the battle for the electrical network because AC could be stepped up to higher voltages and transmitted over long distances with less loss of power (18).

In spite of there being just two main types of reactors, though, there were actually 80 different designs for the specific reactors, so nothing was standardized (4). This is one of the main factors that led to cost overruns and delays in building the reactors. Another factor is that separate licenses were issued for construction and operation of a nuclear plant. As a result, construction could be finished but operation could be—and sometimes was—halted, leading to extremely costly reactors and giving the whole process a bad name. Much of the delay was caused by antinuclear individuals and environmental groups who were adamantly opposed to nuclear power, especially after the accident at Three Mile Island (19). The high inflation rates in the 1980s made delays extremely expensive, with some reactors costing up to $9 billion (20).

Reactor design has not stopped. The next Generation III and III+ reactors have simpler, standardized designs to expedite licensing and to reduce the time and cost of construction, as well as simpler and more stable operation. They are designed for a longer initial lifetime of 60 years, instead of the 40 years for Generation II reactors. They are also designed with passive safety measures to make them much more resistant to accidental core meltdown and release of radiation, as has occurred on three occasions around the world (see Chapter 10 for a detailed discussion of these accidents). Two standardized reactor designs have been approved by the NRC for new nuclear power plants in the United States, and 18 combined license applications have been received by the NRC for 28 reactors. A combined license (COL) reduces much of the time and cost for constructing reactors compared to the Generation II reactors because it authorizes the licensee to both construct and operate a nuclear power plant at a specific site (21).

The NRC has approved the Westinghouse AP1000 Generation III+ design for a 1,150 MWe PWR, two of which are currently under construction at the Vogtle plant in Georgia and two at the VC Summer plant in South Carolina. The first of these is to come online by 2017. The AP stands for “advanced passive.” It incorporates an 800,000-gallon water tank sitting directly above the reactor containment shell to provide emergency cooling passively—even if the electricity goes out completely—for three days. This is the most critical time for cooling, since the heat output of a reactor drops off very rapidly in the first few days. It is also modular in design, about one-quarter the size of current BWRs, with half the number of valves and one-third fewer pumps, and uses about one-fifth as much steel and concrete (17, 20, 22). Mitsubishi designed another advanced PWR, known as the US-APWR for the US version, that will produce 1,629 MWe but is not yet approved by the NRC (17).
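To see why those first three days are the critical window, here is a short sketch using the Way-Wigner rule of thumb for fission-product decay heat. This is a standard textbook approximation, not Westinghouse's design calculation, and the assumed year of prior operation is illustrative.

    # Way-Wigner approximation for decay heat after shutdown:
    #   P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2)
    # where t is seconds since shutdown and T is seconds of prior operation.

    def decay_heat_fraction(t_seconds, operating_seconds=3.15e7):
        """Fraction of full thermal power t_seconds after shutdown,
        assuming ~1 year (3.15e7 s) of steady prior operation."""
        return 0.0622 * (t_seconds ** -0.2
                         - (t_seconds + operating_seconds) ** -0.2)

    for hours in (1, 24, 72, 720):
        frac = decay_heat_fraction(hours * 3600)
        print(f"{hours:4d} h after shutdown: {100 * frac:.2f}% of full power")

With these assumptions the decay heat falls from roughly 1% of full power at one hour to a few tenths of a percent by day three, which is why a three-day passive water supply covers the steepest part of the curve.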

General Electric also continued work on advanced boiling water reactor (ABWR) designs in collaboration with Hitachi and Toshiba. Two GE-Hitachi and two GE-Toshiba reactors were operating in Japan, with two more under construction in Japan and two in Taiwan. The four operating reactors were built in 39 months, a huge reduction in construction time compared to existing reactors in the United States. Two are planned for construction in the United States. The ABWR produces about 1,400 MWe, substantially more power than current reactors. It is designed for a 60-year lifetime and incorporates passive safety features like the AP1000. It has also been approved by the NRC. An even newer and more economical version—the GE-Hitachi ESBWR (economic simplified boiling water reactor)—uses natural circulation for cooling, with fewer pumps and valves, and can run for six days without electricity (17). To improve security, critical components such as the control room and the used-fuel cooling pool are located underground (20).

AREVA, a French public corporation that specializes in nuclear energy in France and internationally, is building a Generation III+ pressurized water reactor known as the European PWR or EPR that is designed to produce 1,750 MWe. The US version is the US-EPR, though the “E” has been changed from “European” to “Evolutionary.” One unit is currently being built in Finland with large time and cost overruns, one in France, and two in China. It has four independent and redundant safety systems to minimize the risk of an accident (17, 20).

And this is not the end of new designs. An international group of 13 countries currently using nuclear power is collaboratively designing Generation IV reactors that are not just evolutionary improvements on existing designs but involve new technologies. These technologies allow higher thermal efficiencies, use different reactor physics to burn up nuclear waste, and can use uranium much more efficiently to extend supplies for hundreds of years. They are probably several decades away from commercial implementation, though (23, 24). These will be discussed in more detail in Chapter 11.

A significant new development is the design of smaller nuclear reactors that can be built as modular units in a factory (small modular reactors, or SMRs). These are intrinsically safe and are much smaller than conventional reactors—on the order of 50 to 300 MWe—though several of them might be clustered together. Numerous types of small reactors that incorporate novel features are being designed in the United States, Russia, China, South Korea, Japan, and France. None of these reactors has been submitted for licensing consideration to the NRC, though some are expected to be submitted in 2012 (25). NuScale Power has designed a modular 45 MWe reactor that would be built in a factory and then clustered in groups of 12 on site, for a total of 540 MWe, about half the size of a full-scale nuclear reactor. This might be useful for smaller cities, and NuScale claims it will be more cost-effective than large reactors—about $2.2 to $2.5 billion for the 540 MW—which would make funding considerably easier. That is not clear, though. David Crane, the CEO of NRG Energy, claims that large nuclear power plants are more economical because engineering and safety costs are spread out over many years (26, 27). A preliminary study of the economics of SMRs indicates that the cost of the reactors will fall substantially as the number of modular units built at factories increases (28). Only time will tell.
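A rough cost-per-kilowatt comparison shows why "that is not clear." The NuScale figures below come from the text; pairing the "up to $9 billion" Generation II cost quoted earlier with a twin-AP1000 output of 2,300 MWe is my own illustrative assumption, not a source figure.

    # Back-of-envelope $/kW for the NuScale cluster versus an assumed
    # large twin-unit plant. All division is by installed capacity in kW.

    for cost_dollars in (2.2e9, 2.5e9):          # quoted range for 540 MWe
        print(f"NuScale cluster: ${cost_dollars / 540_000:,.0f}/kW")

    # Hypothetical comparison: $9B spread over two 1,150 MWe AP1000s.
    print(f"assumed large twin-unit plant: ${9e9 / 2_300_000:,.0f}/kW")

Under these assumptions the two come out in the same ballpark (roughly $4,000 per kilowatt either way), which is consistent with the text's conclusion that only time will tell.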

The DOE started a cost-sharing grant program to facilitate the development and licensing of SMRs and awarded the first grant to a consortium of Babcock & Wilcox, Tennessee Valley Authority, and Bechtel International in 2012 (29). Babcock & Wilcox makes a 180 MW reactor that is scalable from 1 to 10 or more reactors at a single site and runs for four years before refueling. A second tranche of funding is available for other innovative SMR designs for 2013 (30).

MEDICAL EXPOSURE

All of these different routes of exposure to background radiation add up to 3.2 mSv/yr for the average US citizen. This has not changed, but what changed dramatically between the NCRP reports of 1987 and 2006 was a huge increase in dose from diagnostic medical tests (these procedures do not include radiotherapy to treat cancer). On average, these tests amount to 3.0 mSv/yr for the average US citizen. What are all of these procedures, and how much radiation do we get from them?

There are four general types of medical diagnostic procedures that contribute to the medical exposure: radiographs (X-rays), fluoroscopy, computed tomography (CT scans), and nuclear medicine. Other common diagnostic procedures such as ultrasound and magnetic resonance imaging (MRI) do not involve ionizing radiation and do not contribute to dose. Radiography includes such things as dental X-rays, skeletal X-rays, and mammograms. Doses from these procedures are generally fairly small—a chest X-ray is only about 0.1 mSv, but a lumbar spine radiograph can be 2-3 mSv (Table 8.1).

Doses for X-rays have not always been so small. Recent experiments done with the original Crookes tubes used by Heinrich Joseph Hoffmans in 1896 to generate X-rays demonstrated that the doses were dramatically higher than those from modern technology. In the modern re-creation of the experiments, the skin dose to image the bones in a hand was 74 mGy—about 1,500 times that for a modern machine—and the exposure time was about 90 minutes, compared to 20 msec now (9). That explains why many early radiologists lost fingers to cancer and developed leukemia; there was little shielding, and radiologists often determined dose by reddening of the skin—known as skin erythema (10). The minimum dose to cause skin erythema was about 200 rads, or 2 Gy (11).

Table 8.1 Doses from Common Medical Diagnostic Procedures

Diagnostic Procedure                       Dose (mSv)
Chest x-ray (1 film)                       0.1
Dental oral exam                           1.6
Mammogram                                  2.5
Lumbosacral spine                          3.2
PET                                        3.7
Bone (Tc-99m)                              4.4
Cardiac (Tc-99m)                           10
Cranial CT (MSAD)                          50
Barium contrast GI fluoroscopy (2 min)     85
Spiral CT, full body                       30-100

source: Data from DOE Ionizing Radiation Dose Ranges Chart, 2005.
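As a quick illustration of how these per-procedure doses combine, here is a toy tally in Python. The particular mix of procedures is hypothetical; effective doses are simply summed, which is adequate for rough bookkeeping.

    # Toy dose tally using Table 8.1 values (mSv) for a hypothetical
    # patient-year: one chest X-ray, one dental exam, one cardiac scan.

    doses_msv = {
        "chest x-ray": 0.1,
        "dental oral exam": 1.6,
        "cardiac Tc-99m scan": 10.0,
    }

    total = sum(doses_msv.values())
    print(f"total diagnostic dose: {total:.1f} mSv")             # 11.7 mSv
    print(f"multiple of 3.2 mSv/yr background: {total / 3.2:.1f}x")

Even this modest set of procedures comes to several times the average annual background dose, which is the point of the comparison in the text.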

Fluoroscopy is a real-time imaging medical procedure in which X-rays pass through the body and hit a fluorescent screen, where they are converted to a live image. Active processes such as a beating heart can be observed with this procedure. Frequently, an X-ray-absorbing material of high atomic number (Z) is injected into the bloodstream for a coronary angiogram or into the gastrointestinal tract to observe blockage in the GI system, for example. These procedures can result in high doses because the X-ray exposure rate is high (an average of 50 mGy/min) and they may continue for several minutes. A two-minute barium contrast GI scan gives a dose of about 85 mSv (Table 8.1). Conventional radiography and fluoroscopy constitute about 11% of medical radiation diagnostic procedures (1).

CT scans have become the most common medical diagnostic procedure involving exposure to ionizing radiation, with more than 60 million scans done annually in the United States (12), accounting for 49% of all exams (1). CT scans are X-ray procedures in which the body is exposed in slices through a full 360-degree range and the X-rays are measured by a large array of detectors. Sequences of slices provide high-resolution 3-D imaging of parts of the body and are an invaluable tool for diagnostic medicine. However, in some cases they are being promoted for screening purposes where there is no indication of disease or medical problems. The doses from CT scans are quite high, with typical doses of 10 to 20 mSv and as high as 80 mSv for a CT coronary angiography (12). Children are being exposed to CT scans at increasing frequencies, but the doses are not necessarily being adjusted to account for the smaller size and greater sensitivity of children. While CT scans are a very valuable diagnostic tool, they should not be done without a good medical reason, since the doses are high enough to entail a slight risk of getting cancer later on. This is not so much of a problem with older people, since their risk of getting cancer from a given dose of radiation is much less than it is for children (13), but it is important that the dose and number of procedures be minimized for pediatric scans and for young people to reduce the long-term cancer risk from the procedure. Increasingly, CT scans are being used for screening for colon polyps (virtual colonoscopy), early-stage lung cancer, cardiac disease, and full-body scans for a variety of diseases (12). Whether the risk from screening CT is greater than the benefit is not yet clear.

Nuclear medicine is a less well-known diagnostic (and therapeutic) procedure. It involves injecting radioisotopes to identify tumors or other physiological conditions. A variety of radiopharmaceuticals are used that localize in certain parts of the body. By far the most common radioisotope is technetium-99 metastable (99mTc), which has a 6-hour half-life, but many others are also used. A bone scan gives a dose of about 4.4 mSv, while a cardiac scan gives a dose of about 10 mSv (Table 8.1).

These medical diagnostic procedures have dramatically improved the practice of medicine, so the benefit-to-risk ratio is quite high. However, the doses are quite large in many cases, so the procedures should not be done frivolously. Together, these procedures account for nearly half of the annual exposure of the average US citizen, but of course there is a lot of variation. For many people there is no exposure at all, while others may have a large exposure.

MYTH 3: NUCLEAR POWER IS UNSAFE AND NUCLEAR ACCIDENTS HAVE KILLED HUNDREDS OF THOUSANDS OF PEOPLE

Fear is a powerful emotion, and it is easy to stoke that fear by making claims that nuclear accidents have killed huge numbers of people. The Doomsday Machine (18) claims that perhaps one million people have died from Chernobyl and ridicules the experts who project that about 4,000 people will eventually die from cancer. Helen Caldicott claims that enough plutonium was released from Chernobyl to kill every person on earth (17). But, of course, not a single person actually died from plutonium released at Chernobyl, and no plutonium was released from Three Mile Island or Fukushima. These fearmongers ignore the fact that radiation has been intensively studied for decades; we actually know a very great deal about its environmental behavior, human exposures, and biological effects. It is possible to make specific and reliable predictions if you know the dose and the type of radiation. When you consider the actual doses to which people were exposed from Chernobyl or from Fukushima, you can conclude with reasonable accuracy that about 4,000 people (not hundreds of thousands or a million) will ultimately die from Chernobyl and perhaps a couple of dozen people will die from the Fukushima accident (see Chapter 10). In the United States—the nation that has far more nuclear reactors than any other—there has not been a single death attributed to radiation from a nuclear power reactor.

Of the three major accidents, Three Mile Island and Chernobyl were caused by operator error and would not have happened if the operators had not shut off emergency cooling water pumps. Design problems also contributed to the accidents, especially at Chernobyl. These accidents were teaching moments, so that current nuclear reactors are much safer and operator training is much better. Fukushima was, of course, precipitated by the worst earthquake in Japan’s history and by huge tsunamis. There are very few if any nuclear reactors anywhere else on earth that are subject to that specific combination, and none in the United States. Of course, had the seawalls been higher and the diesel generators been placed on a higher level instead of in the basement, the nuclear meltdowns would probably not have occurred. It was the tsunamis that caused the accident, not the earthquake alone.

I certainly don’t mean to imply that these accidents were not significant, and I don’t want to minimize the potential cancer deaths of 4,000 people ultimately from Chernobyl and the dislocation of hundreds of thousands of people in Chernobyl and Fukushima. That is a tragedy. And yet, I think it is important to put it in perspective, considering the health and environmental hazards of the alternative that would have been used instead of nuclear power, namely, coal-fired power plants. The truth of the matter is that there are risks from any energy source.

As I described in Chapter 3, the dependence on coal for electricity leaves a trail of dead that goes back over a century. In the 1930s and 1940s, about a thousand miners were killed annually in the United States. Over time, that number declined to hundreds per year in the 1960s and 1970s, 45 per year in the 1990s, and about 35 per year in the first decade of the twenty-first century. But that is just one small part of the total deaths. The air pollution caused by burning coal leads to thousands of deaths annually. Over 2,000 people died from black lung disease annually from the 1970s to the 1990s, and several hundred still die every year. Even worse, the sulfur oxides and nitrogen oxides emitted by coal-fired power plants are estimated to kill over 10,000 people annually through respiratory disease. And hundreds of people die annually in wrecks with coal trains. In light of this carnage, it is remarkable that there seems to be more concern about the safety of nuclear power than about coal. Can you imagine a single event at a nuclear reactor in the United States that actually killed people, as a coal mining accident does fairly routinely? That would most likely lead to overwhelming pressure to halt nuclear power.

It is much worse in China. China gets 80% of its electricity from coal-fired power plants and leads the world in both production and consumption of coal (21). Thousands of coal miners die every year in China. In 2008 there were 3,215 coal mining deaths, down from 3,786 in 2007 (22). And that is just the deaths from mining. Air pollution is extremely bad in much of China, largely from inefficient factories getting their power mostly from coal. It is estimated that at least 300,000 deaths are attributable to air pollution each year, largely from burning coal (23).

The United States has over 3,500 reactor-years of experience operating nuclear reactors without a single death. It is of course impossible to say that there can never be a nuclear accident in the United States, but the regulatory procedures are much better than they were in 1979 when the Three Mile Island accident happened. New reactors that are scheduled to be built are of a new generation that is inherently safer, with passive cooling of the reactor core that greatly reduces the probability of a serious accident in case of loss of power.

There is no completely risk-free source of energy—a balance between risks and benefits must be struck. Accidents have happened. People have died. But it would have been much worse if nuclear power did not exist and coal had provided the electrical power that nuclear has provided over the years. And that really is the choice. Is it worth the risk to build more nuclear reactors to replace coal-fired power plants? In my opinion, it is no contest, and the faster the world builds them to replace coal-fired power plants, the better off we and the earth will be.

MYTH 4: URANIUM WILL RUN OUT TOO SOON AND MINING IT GENERATES SO MUCH CARBON DIOXIDE THAT IT LOSES ITS CARBON-FREE ADVANTAGE

Helen Caldicott and others say that there is no point in depending on nuclear power because uranium will be used up too soon, and what is available will be such low-grade ore that it will produce more CO2 to mine it than will be mitigated by the power produced from burning it (17, 24). It is true that the entire life cycle for uranium needs to be considered, as it does for every other source of energy. If ore grades are too low, then it does take too much energy to obtain the uranium, and that generates a lot of CO2. But that is really just an economic argument. Nobody is going to mine ore of such low grade that it would cost more to get it than the market value of burning it. There is plenty of uranium to fuel a nuclear renaissance for the next hundred years at ore grades that are economically (and energetically) feasible.

A case in point is Australia, which has by far the world’s greatest known uranium resources. The largest uranium mine in the world is Olympic Dam in Australia, where the ore quality is low, just 0.05%, but the uranium is a byproduct, with the principal products being copper, gold, and silver (25). This material would be mined anyway, so the extra cost in CO2 from obtaining the uranium is small.

Another factor that nuclear opponents seem to ignore is in situ recovery (ISR) mining, which now accounts for about one-third of uranium mining. ISR is intrinsically safer and less environmentally damaging and is also more efficient, so less CO2 is produced in getting the uranium from low-grade ores. It only applies to sandstone formations that contain uranium, so there is a limit to how much uranium can be mined by ISR, however.

The other important factor for the long-term supply of nuclear fuel is the potential for amplifying the nuclear fuel supply. This is already being done by several countries that recycle their SNF. This not only reduces the waste disposal problem but also increases the fuel supply by about 25%. The long-term (many centuries) future for nuclear power depends on building fast neutron breeder reactors that use the most common isotope of uranium—238U—to produce plutonium for fuel. And finally, it is possible that thorium reactors may become a major source of nuclear power in the future.

The truth is that there is plenty of uranium to fuel a nuclear renaissance and it will greatly reduce CO2 emissions by eliminating coal usage.

Carbon Capture and Storage

Of all the problems discussed in this chapter regarding the use of coal, the one that has brought serious attention to drastically reducing our appetite for coal is the hazard of global warming, caused principally by CO2 emitted from fossil fuels (see Chapter 1). The coal industry is touting “clean coal,” and indeed modern coal power plants such as Rawhide have low emissions of everything but CO2. Coal can never be clean without solving the CO2 problem. The buzzword of the year is “carbon capture and storage,” or CCS, as the solution to the problem. What does this mean, and can it really be a solution?

Carbon capture and storage or sequestration, as it is often called, is a process for absorbing or capturing the CO2 produced by a coal or gas power plant and storing it permanently (hopefully) in a geological formation. There are various strategies for removing CO2 from a power plant, but the only one with any significant experience is to absorb the CO2 in an aqueous amine solution like ammonia or MEA (monoethanolamine). The amine solution then has to be heated at 150°C (300°F) for several hours to remove the CO2, which requires a lot of energy. In fact, the separation process takes 25-40% of the energy produced by the power plant (22). Even with solvent and process improvements, the best that could ever be expected is a 20% loss of efficiency of the power plant (23). If this process were actually available and retrofitted to all existing coal power plants, their electrical output would drop by about one-third, so about 200 coal-fired power plants would have to be built just to tread water. And as I explained earlier, energy use is continuing to increase, so additional coal power plants would also have to be built with an inherent inefficiency built in. This, by the way, is in addition to the inefficiencies already built into a modern power plant by scrubbing the sulfur oxides and filtering the fly ash. All of this means that coal mining would also have to increase at a dramatic rate. This is the fundamental problem with CCS.
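The "tread water" number follows from simple arithmetic. In the sketch below, the one-third output penalty is the figure given above, but the roughly 600-plant size of the US coal fleet is an illustrative assumption of mine, not a source figure.

    # Back-of-envelope arithmetic behind the "tread water" claim.

    fleet_size = 600          # assumed number of US coal-fired plants
    output_penalty = 1 / 3    # net output lost to CO2 capture (from text)

    lost_capacity = fleet_size * output_penalty
    print(f"capacity lost to CCS: ~{lost_capacity:.0f} plant-equivalents")

    # If the replacement plants also carry CCS, each delivers only
    # two-thirds of a plant-equivalent, pushing the count higher still:
    replacements_with_ccs = lost_capacity / (1 - output_penalty)
    print(f"replacement CCS plants needed: ~{replacements_with_ccs:.0f}")

Under these assumptions about 200 plant-equivalents of output disappear, matching the figure in the text; if the replacements themselves run CCS, the count rises toward 300.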

But this is really the tip of the iceberg. Post-combustion capture of CO2 has numerous problems: “the equipment will be very large, comparable with the footprint size of a coal-fired power plant, large volumes of solvent are needed, heating to regenerate the solvent can produce toxic byproducts, emissions of solvents from recovery columns need to be scrubbed and eliminated, consumption of water needs to be reduced, and expired solvent needs to be disposed” (23). And capturing the CO2 is just the first step. After it is captured, it has to be compressed to 70 atmospheres of pressure to form a liquid before being transported through pipelines to a permanent geological storage site. There are already about 2,000 miles of pipeline in the United States that transport about 30 million tons (Mt) of CO2 per year. That is less than 2% of the 2 billion tons of CO2 produced by coal-fired power plants in the United States annually. Creating the pipelines needed to transport this volume of CO2 would cause tremendous environmental damage and expense.

The next problem is where to put the CO2. In order to minimize the transportation problem, the site needs to be close to the power plant. Available options are to store it in depleted oil and gas reservoirs or deep salt formations, or to use it for enhanced oil recovery (EOR) (24). EOR is already being used in Texas, where about 30 Mt of CO2 annually is injected into wells to recover oil that can’t easily be pumped out. However, there is not a large capacity for EOR. Another option is to inject it into deep formations that contain saltwater, where there is much larger capacity. The CO2 is relatively soluble in brine, which ultimately reduces the possibility of leakage, but the brine itself may migrate, and it can take hundreds of years for complete dissolution (24). Furthermore, even though there is apparently enough storage in saline deposits for hundreds of years of power plant operation, earlier estimates are being reduced to decades because only a small fraction of the available pore space seems to be accessible (22).

Major leakage from geological storage could also be catastrophic. This has already happened naturally at Lake Nyos, Cameroon, in 1986. Carbon dioxide from natural geological sources was dissolved in the cold water at the bottom of the lake but underwent a sudden inversion. The result was similar to shaking a soda can and opening it. Carbonated water shot over 200 feet into the air and released the CO2, which is heavier than air. It slowly settled in a valley and smothered 1,700 people (14). Leakage from undersea storage would also cause serious problems by producing carbonic acid and killing marine life in the neighborhood. This is the same problem as ocean acidification, which is currently happening as the CO2 in the atmosphere is absorbed by the ocean (see Chapter 1). Lest we think that accidents from deep undersea storage cannot happen, just remember the environmental disaster in the Gulf of Mexico in the summer of 2010, when an oil rig accident allowed nearly 5 million barrels of oil to pour into the Gulf.

Let’s take a reality check! There are a few CCS projects in operation around the world, but they are experimental and small, capturing a small percentage of the emissions from an actual coal-fired plant. The hope is to go from experimental to demonstration plants by 2014 and finally get to commercial plants by 2020 (22). Storage sites are also being studied and pipeline routes considered, especially in Europe for storage in the North Sea, but these sites are only capable of, at most, a few Mt of CO2 per year currently (22, 25). Steven Chu, the former US Secretary of Energy, points out that the world burns about 6 Gt of coal each year, producing 18 Gt of CO2. The United States is investing a few billion dollars in experimental CCS projects, but that is a drop in the bucket compared to the trillions of dollars that will be necessary to make this a reality. In The Quest, Daniel Yergin says:

If just 60% of the CO2 produced by today’s coal-fired power plants in the United States were captured and compressed into a liquid, transported, and injected into the storage site, the daily volume of liquids so handled would be about equal to the 19 million barrels of oil that the United States consumes every day. It is sobering to realize that 150 years and trillions of dollars were required to build that existing system for oil. (26)
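Secretary Chu's figures quoted just before the Yergin passage (6 Gt of coal burned, 18 Gt of CO2 produced) are internally consistent, as a little combustion stoichiometry shows. The carbon fractions below are assumed typical values; real coals vary widely.

    # Each carbon atom (12 g/mol) burns to CO2 (44 g/mol), a mass ratio
    # of about 3.67. Assuming coal is ~75-80% carbon by mass:

    CO2_PER_CARBON = 44 / 12     # ~3.67 tons of CO2 per ton of carbon

    for carbon_fraction in (0.75, 0.80):
        gt_co2 = 6 * carbon_fraction * CO2_PER_CARBON
        print(f"carbon fraction {carbon_fraction:.0%}: ~{gt_co2:.0f} Gt CO2")

Either assumption lands at roughly 17-18 Gt of CO2 from 6 Gt of coal, matching the figure in the text.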

And don’t forget the price to be paid in loss of efficiency of power plants. It is highly unlikely that the target of 2020 will be close to being met; in the meantime, coal power plants keep spewing out CO2. So is this really feasible, or are we just dreaming to avoid making difficult choices to get away from coal and find other sources of energy?

A very interesting answer to this question is to compare the cost and effectiveness of reducing CO2 using CCS with using the same resources to develop alternative energy sources such as wind or nuclear. This has been done in a recent paper based on the idea of “stabilization wedges,” each of which represents one-eighth of the increased emission of CO2 over a 50-year time period (27). That is, one wedge would eliminate one-eighth of the emissions from coal-fired power plants over this time period. This could be done by CCS or by alternative energy sources. The authors estimate that one wedge of CO2 reduction by CCS would cost $5.1 trillion.

If this same resource were instead put into building wind turbines, 1.9 wedges of CO2 would be eliminated and $9 trillion in electricity would be generated. Even better, if this same resource were put into nuclear power plants, 4.3 wedges of CO2 would be avoided and $22.3 trillion in electricity would be generated.
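Laying the three options side by side makes the comparison stark. All of the figures in this small sketch are the ones quoted above; the net column simply subtracts the $5.1 trillion spend from the electricity revenue.

    # Side-by-side comparison of spending one CCS-wedge's cost ($5.1T)
    # on CCS, wind, or nuclear, using the figures from the text.

    spend_trillion = 5.1    # cost of one CCS wedge

    options = {
        "CCS":     (1.0,  0.0),   # (wedges avoided, $T of electricity sold)
        "wind":    (1.9,  9.0),
        "nuclear": (4.3, 22.3),
    }

    for name, (wedges, revenue) in options.items():
        net = revenue - spend_trillion
        print(f"{name:8s}: {wedges:.1f} wedges, ${revenue:4.1f}T electricity, "
              f"net ${net:+.1f}T")

The same $5.1 trillion that buys one wedge of CCS, and nothing else, buys nearly two wedges plus a net gain with wind, or more than four wedges plus a large net gain with nuclear.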

So there you have it. The fundamental problem with CCS is that it does not produce any more power—in fact, it reduces power—at an enormous cost and with huge uncertainties in feasibility and environmental costs. It will require trillions of dollars in resources that will generate no additional electricity. Almost certainly, those resources will then not be available for alternative energies. Instead, we should forget about CCS entirely and get on with developing other energy sources and begin shutting down coal-fired power plants.

INTERACTIONS OF RADIATION WITH MATTER

To understand the risks of radiation, we must begin at the atomic level to see what radiation does, and we have to consider the different kinds of radiation and, most important, the dose. This is going to be a somewhat technical chapter, but I hope you will stick with it. If you do, you will have a much better understanding of what radiation actually does to cells and how different types of radiation have different consequences.

Let’s begin with the various kinds of radiation and how they interact with atoms. There are four kinds of radiation associated with nuclear power—α, β, γ, and neutrons—as discussed in Chapter 6. X-rays and γ rays are basically the same type of radiation, and they interact with matter in exactly the same way, so I will consider them as one type: electromagnetic radiation. Their interactions with matter depend on their particle nature as photons with energy hf. The rules governing electromagnetic radiation are different from the rules governing charged-particle radiation, such as α and β radiation. Neutrons are uncharged particles, so they interact in a different way from charged particles and electromagnetic radiation.

The most damaging thing that radiation can do to atoms is to ionize them—that is, to kick an electron out of an atomic orbit—leaving an atom with a positive charge: an ion. Ionized atoms can break chemical bonds or change the nature of a molecule. Photons such as X-rays or γ rays are energetic enough to ionize atoms and give the electron a lot of kinetic energy, so they are called ionizing radiation. Ultraviolet light, another form of electromagnetic radiation, does not have enough energy to ionize atoms, so it is called non-ionizing radiation—though it can still be hazardous, but for totally different reasons.
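The boundary between ionizing and non-ionizing radiation can be put in numbers with E = hf (equivalently hc/λ). The wavelengths below are illustrative textbook values, and hydrogen's 13.6 eV ionization energy serves as the reference point.

    # Photon energies across the spectrum, using E = hf = hc/lambda.
    # Ionizing a typical atom takes on the order of 10 eV or more
    # (hydrogen: 13.6 eV).

    H_EV_S = 4.136e-15    # Planck constant, eV*s
    C_M_S = 3.0e8         # speed of light, m/s

    for name, wavelength_m in (("near-UV (300 nm)", 300e-9),
                               ("soft X-ray (1 nm)", 1e-9),
                               ("gamma ray (0.001 nm)", 1e-12)):
        energy_ev = H_EV_S * C_M_S / wavelength_m
        print(f"{name:20s}: ~{energy_ev:,.0f} eV")

A near-UV photon carries only about 4 eV, below the ionization threshold, while X-ray and γ-ray photons carry thousands to millions of electron volts, which is why they ionize atoms and eject energetic electrons.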

The Hazardous Radioisotopes

The world became aware of the Chernobyl accident not from an announcement by the Soviet government but from a Swedish nuclear power plant worker who came to work on April 27 and set off radiation alarms because radioactive material from the accident had blown over Sweden and gotten on his clothes. The Soviet Union did not admit the disaster until April 28, two days after the accident. The plume of radiation resulting from the explosion and fires spread through the troposphere, with the heavier particles of debris falling locally and smaller radioactive particles and gases traveling over the continent of Europe. The deposition pattern from the radiation plume depended on wind direction and rains, so some areas in Belarus, Russia, and Ukraine, far from Chernobyl, received high levels of radiation, while others received very little. Many parts of Europe got some radiation from the accident, but far below the normal background levels, and some radiation was even measured in the United States (15).

As the uranium fuel in a reactor burns, hundreds of fission products are produced, as discussed in Chapter 9. The radioactivity is almost all from β and γ radiation, the least damaging types of radiation. The majority of these fission products are very short-lived, so the amount of radiation after a nuclear accident decays away rapidly over time. Table 9.1 in Chapter 9 listed some of the major isotopes that build up in nuclear fuel as it burns; three of them that are particularly important in a nuclear accident such as Chernobyl—iodine-131 (131I), cesium-137 (137Cs), and strontium-90 (90Sr)—were highlighted. Because they are so important in determining the radiation hazards after an accident, other critical physical, chemical, and biological properties of these isotopes are listed in Table 10.1.

131I is produced in high amounts in a reactor and it has unique properties that make it very hazardous. It has a short half-life of eight days, so it decays away quickly, but that actually means that a given mass of it is more radioactive than isotopes with a longer half-life. The good thing is that within a few months it is no longer a hazard. It boils at a low temperature, so it is readily volatilized in a loss of coolant accident such as at TMI or Chernobyl. It is readily assimilated and concentrates in the thyroid, since iodine is an essential element for the proper functioning of the thyroid. It has a biological half-life (the time it takes for half of it to be excreted from the body) that is longer than its physical half-life, so nearly all of it that is ingested will decay in the thyroid. Finally, there is a clear pathway for human consumption. It falls from the cloud of radioactivity and deposits on grass and other plants, cows eat the grass and rapidly incorporate it in their milk, then people—especially children—drink the milk. This whole cycle can occur within two days, and if that happens, the 131I concentrates in the thyroid, which is a radiosensitive tissue with a tissue weighting factor (WT) of 0.05.
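The point that a short half-life means a "hotter" gram can be made quantitative with the specific-activity formula A = (ln 2 / T½) × N. A minimal sketch, using the half-lives given in the text and approximate atomic masses:

    # Specific activity (decays per second per gram) of pure 131I vs 137Cs.

    import math

    AVOGADRO = 6.022e23

    def specific_activity(half_life_s, atomic_mass_g_mol):
        """Bq per gram of the pure isotope: A = (ln 2 / T_half) * N."""
        atoms_per_gram = AVOGADRO / atomic_mass_g_mol
        return math.log(2) / half_life_s * atoms_per_gram

    a_i131 = specific_activity(8 * 86400, 131)            # 8-day half-life
    a_cs137 = specific_activity(30 * 365.25 * 86400, 137) # 30-year half-life

    print(f"131I : {a_i131:.1e} Bq/g")    # ~4.6e15 Bq/g
    print(f"137Cs: {a_cs137:.1e} Bq/g")   # ~3.2e12 Bq/g
    print(f"131I is ~{a_i131 / a_cs137:,.0f}x more radioactive per gram")

Gram for gram, 131I is over a thousand times more radioactive than 137Cs, which is exactly the trade-off described above: intense early hazard, but gone within a few months.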

Fortunately, it is relatively easy to avoid the problem by not drinking contaminated milk and also by taking iodine tablets that prevent uptake of the radioactive iodine. Unfortunately, because the Soviet government was not forthcoming about the accident, they did not warn people or distribute iodine pills to enough people in time to prevent high doses in many children and young adults. The iodine distribution was very uneven, with the citizens of Pripyat—the nearest town to the reactor, where many workers lived—getting iodine pills immediately, but people in other towns not getting them early enough or at all (16).

Table 10.1 Properties of the Most Important Radioisotopes after a Nuclear Accident

Isotope   Physical Half-Life   Decay Mode   Boiling Point   Percent Assimilated   Main Site of Deposition   Biological Half-Life*
131I      8 days               β, γ         363°F           100                   Thyroid                   29 days
137Cs     30 years             β, γ         1,245°F         100                   Muscle                    110 days
90Sr      29 years             β            2,523°F         <30                   Bone                      200 days

note: *Biological half-lives are complex. They depend on the age of the individual and differ for different tissues in the body. These values are rough approximations. Data from ICRP 56 and ICRP 67.

The Chernobyl accident released about 1,760 PBq of 131I (15), which was the main health risk. This is about 3 million times as much as was released from TMI.

137Cs is the next most important radioisotope. It has a half-life of 30 years, so it can still be hazardous for about 300 years, depending on the concentration. It β-decays to 137Ba (barium), which is unstable and promptly emits a γ ray, so both β and γ radiation come from 137Cs decay. Its boiling point is much higher than that of iodine, but low enough that in a core meltdown it is also likely to volatilize. Also, 137Xe, a gaseous fission product, decays to produce 137Cs. Another isotope of cesium, 134Cs, is also produced in large quantities in reactors, but it has a half-life of 2.1 years, so it is not a long-term problem. Cesium mimics potassium, so it is readily taken up in muscle and other tissues throughout the body, which have an average WT of 0.10. It can fall on plants and be ingested, or it can get incorporated in plants that grow in soils that have high levels of cesium and then get into animals and humans that eat the plants. However, its biological half-life is only 110 days, so a single dose of 137Cs will be excreted in two or three years. Cesium tends to bind to clay soils, so it often becomes relatively immobilized in the soil, though this is strongly dependent on the soil type. About 85 PBq of 137Cs was released in the Chernobyl accident (15), and it is the main source of long-term problems with soil contamination and potential health effects.
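The "two or three years" figure follows from combining radioactive decay with biological excretion using the standard effective half-life relation, 1/Teff = 1/Tphys + 1/Tbio. A short sketch with the values from the text and Table 10.1:

    # Effective half-life of 137Cs in the body. The 30-year physical
    # half-life barely matters; the 110-day biological half-life dominates.

    def effective_half_life(t_phys_days, t_bio_days):
        return 1 / (1 / t_phys_days + 1 / t_bio_days)

    t_eff = effective_half_life(30 * 365.25, 110)
    print(f"137Cs effective half-life: ~{t_eff:.0f} days")

    # Ten effective half-lives reduce a single intake ~1000-fold:
    print(f"time to ~0.1% of a single intake: ~{10 * t_eff / 365.25:.1f} years")

The effective half-life works out to about 109 days, so ten half-lives, roughly three years, clear essentially all of a single intake, consistent with the statement above.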

90Sr is the other important radioisotope that was released from Chernobyl. Its half-life of 29 years is similar to that of 137Cs, but its boiling point is so high that it is very unlikely ever to volatilize in a reactor. The only reason that it was widespread after Chernobyl was the explosion and fire that continued to hurl small fuel particles into the air. Strontium is chemically similar to calcium, so it primarily concentrates in the bones, which are a radiation-resistant tissue with a WT of 0.01. Only about 30% of strontium intake is actually assimilated in the body, and 85% of that is assimilated in tissues that rapidly excrete it. About 6% goes to hard bone, which has a long biological half-life, so it remains in the body much longer than cesium. About two-thirds of it is removed from bone by 1,000 days (17). It can be ingested from eating contaminated plants and soil, but it is not highly mobile in most ecosystems. About 10 PBq of 90Sr was released in the Chernobyl accident (15), but it is not a major health concern.

Helen Caldicott said that there was enough plutonium released at Chernobyl to kill every person on earth (18). In reality, only 0.013 PBq of 239Pu was released (15), and it was from the explosion, since Pu has a boiling point of 5,842°F, so it does not volatilize. Because it is so dense, and because it was mostly associated with large fuel particles, nearly all of the plutonium that was released fell in a very localized region, so it was not a health hazard to anyone outside the exclusion zone.

Limitations of Wind Power

Wind has many of the same limitations as solar power—location, intermittency, relatively high cost, and footprint. The location of wind resources differs from that of solar resources, which means the two types of energy can potentially be complementary. Probably the best example of this is California, which has both good solar and good wind resources. For most of the densely populated parts of the country, however, wind resources—like solar resources—are not where the highest concentrations of people are. This requires an extensive system of high-voltage transmission lines to be built to carry the power from the Midwest to the population centers.

Offshore wind resources are excellent and the wind tends to be less variable, but for years concern over ruining the views from Cape Cod prevented the development of the Cape Wind Project, the first offshore wind power development being built in the United States. The opposition from politicians and many environmentalists was finally overcome, and the US Department of the Interior approved the project in 2010. It will cost at least $2 billion to provide 468 MW peak power, which could supply up to 75% of the electrical usage of Cape Cod,

Figure 4.5 Wind resources in the United States.

source: Reproduced by permission from the National Renewable Energy Laboratory.

Martha’s Vineyard, and Nantucket. Opponents say it could cost up to $10 billion to upgrade the grid and build transmission lines (42, 43).

INTERMITTENCY

Intermittency is a big problem for wind—the wind doesn't always blow, so there have to be backup sources of power that can adjust rapidly to make up the shortfall. The capacity factor for wind is the proportion of the total installed power that is actually available, on average, to the electrical grid when it is needed. Because of variations in wind on hourly, daily, and monthly time frames, the average capacity factor for US wind power in 2010 was 27% (44). In other words, of the 50 GW installed in the United States, only slightly over one-quarter is available on average, or about 13.5 GW. The actual capacity factor depends on the wind power classification, so wind farms in Class 3 areas might have a capacity factor of only 10%, while wind farms in Class 7 areas might be near 40% (see the wind map in Figure 4.5). As the best areas for wind get developed, the average capacity factor will inevitably go down. According to the EIA (Energy Information Administration), the capacity factor for wind turbines built in 2016 is expected to average 34% (45).
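The arithmetic behind these figures is simple but worth making explicit. A minimal Python sketch (the function name is mine, purely illustrative):

```python
def average_available_power(installed_gw, capacity_factor):
    # Average power actually delivered by a fleet, given its installed
    # capacity and its capacity factor (a fraction between 0 and 1).
    return installed_gw * capacity_factor

# US wind fleet, 2010: 50 GW installed at an average 27% capacity factor.
print(average_available_power(50, 0.27))   # 13.5 GW available on average
```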

Wind varies throughout the day, but in many areas it is stronger at night than during the day when the peak electrical demand occurs. Thus it is not well matched with the demand curve. This can be a particular problem when there is a large demand for air conditioning on a hot day but the wind is not blowing strongly enough. As an example, grid operators in Texas count on just 8.7 MW for each 100 MW of installed wind capacity to be available during the peak demand on hot days, a capacity factor of just 8.7% (46). This might be partly balanced with solar power that is highest during the day, but most states do not have both good wind and good solar resources.

The Rawhide Energy Station that powers Fort Collins is largely dependent on coal, as I discussed in Chapter 3. However, it also owns an 8.3 MW wind farm in Medicine Bow, Wyoming—one of the windiest places in the country. If you have any doubt about that, just drive across Wyoming pulling a trailer (or look at the wind map)! The area is rated Class 7 for wind resources, so it is about as good as a wind resource gets. If the wind turbines ran at full capacity 24 hours a day, the wind farm would generate 6.0 million kWh per month.

But what does it really generate? I have plotted the actual energy generated per month over a five-year period in Figure 4.6. It is immediately obvious that the wind blows much more strongly in the winter months than during the summer months, generating about three times as much energy in winter as in summer. This is fairly typical of wind resources, and it is a problem because much of the additional energy demand is for air conditioning during the summer. The graph also illustrates how variable wind energy is, even when averaged on a monthly basis. Given that the maximum generation capacity is 6.0 million kWh per month, the actual output reached even half of that in only 6 months out of 5 years! In the summer the capacity factor is frequently about 15%, while in the winter it averages closer to 40%. The overall monthly output averaged over 5 years is 1.8 million kWh, or a capacity factor of 30%, slightly better than the US average.
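A quick back-of-the-envelope check of those numbers (a sketch; the 30-day month is an approximation):

```python
installed_mw = 8.3
hours_per_month = 24 * 30            # ~720 hours in a month

# Maximum possible monthly generation at 100% capacity:
max_kwh = installed_mw * 1000 * hours_per_month
print(max_kwh)                       # 5,976,000 kWh, i.e. ~6.0 million

# Actual 5-year average monthly output quoted in the text:
actual_kwh = 1.8e6
print(actual_kwh / max_kwh)          # ~0.30, a 30% capacity factor
```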


Figure 4.6 Monthly power generation from a wind farm in Medicine Bow, Wyoming. Data from Platte River Power Authority.

Figure 4.7 Spring 2010 wind power in Minnesota. The bottom curve shows the wind output; the top curve shows demand. The middle dark curve shows the demand minus the wind generation.

source: Courtesy of the US Department of Energy.

The variation on an hourly or daily basis is even greater and does not match the actual demand for electricity very well. Actual output from an installed capacity of 1,500 MW in Minnesota over a two-week period shows how variable wind output is compared to the load demand (Figure 4.7). The peaks in the wind output curve frequently come during lulls in demand, because the wind often blows more strongly during the night, when it is not needed, than during the day (44).

Because of the issue of intermittency, it is not the total availability of wind that limits its use but how much of it an electrical network can effectively use, since an electrical grid has to deliver power whether the wind is blowing or not. Large-scale electricity is not easily stored, so the supply has to match the demand at any given time. An oversupply means that the tightly regulated voltage and frequency would rise. An undersupply would lead to a voltage dip—a brownout—and reduced frequency. Either of these would wreak havoc with electrical devices, since they require very accurate voltages and frequencies (41).
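To make the balancing problem concrete, here is a toy illustration (the numbers are invented for illustration, not actual grid data): whatever the wind happens to deliver, the dispatchable fleet must supply the rest, hour by hour.

```python
# Hourly system load and wind output in MW (invented illustrative values).
demand_mw = [900, 950, 1000, 980, 940]
wind_mw = [400, 120, 50, 300, 600]

for demand, wind in zip(demand_mw, wind_mw):
    # The "net load" is what dispatchable plants (gas, hydro, coal) must cover.
    net_load = demand - wind
    print(f"demand {demand} MW, wind {wind} MW -> dispatchable supply {net_load} MW")
```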

The Bonneville Power Administration has had problems with oversupply of wind power in the Pacific Northwest when big storms come through. Most of its power comes from hydropower sources, but there are strict limits on how much water can be diverted from the turbines to the spillways, because the spilled water absorbs nitrogen and poisons fish. They are trying to cope with the problem by raising water heater temperatures and heating ceramic blocks in the houses of volunteers to soak up the excess energy (47). But, you ask, why not just reduce the amount of wind power when it is in excess? That could be done, but it is uneconomical for the wind power generators. They have invested a lot of money in the wind turbines and want to get everything out of them that they can. This kind of issue will become increasingly common if utilities get too much power from wind sources, because wind is so unstable.

It is easy to get the impression from advocates of renewable energy that solar, wind, and hydropower can completely power the United States, and even the world (2). More realistic estimates put the fraction of electricity that can be generated by wind at a maximum of about 20%, a level that has never actually been reached by any country (38, 48-50). South Dakota, the state with the best overall wind resources in the country, generated 22.3% of its electricity from wind in 2011. Iowa came in second with 18.8% of its electricity coming from wind (39). The bulk of the electricity to meet demand comes from baseload power (mostly coal and nuclear), augmented with more variable sources such as natural gas and hydropower that can meet the rapid fluctuations in wind power.

The NREL modeled the effects of up to 30% wind and 5% solar on electrical grid operations in the West. Two months were modeled. During the windy month of April, with highly variable wind, it becomes very difficult for a grid operator to meet the net load requirements. In July—when winds are low, wind contributes only 10-15% of the power, and solar matches the load much better—it is much easier for the grid operator to match the load. Overall, a combined contribution of 23% solar and wind was feasible, but increasing that to 35% caused severe problems in matching the load because of the large fluctuations in wind power. Furthermore, the main effect of the wind and solar power is to displace the most efficient and least carbon-intensive fossil fuel plants—natural gas turbine and combined cycle plants—rather than coal-fired power plants, which are the biggest problem for CO2 and other emissions. And you still have to keep most of the natural gas power plant capacity available to meet the summer demand (51).

Denmark made a major commitment to wind energy beginning in the late 1980s and has the highest proportion of electricity generated by wind of any country in the world. Currently, the country of 5.5 million people has 5,500 wind turbines that provide 19% of its electrical demand. A detailed analysis of the electrical usage shows that Denmark actually has to sell a lot of that power to Norway and Sweden when the wind blows strongly. The reason is complicated and largely unique to Denmark. Denmark uses fossil fuel plants not only for electricity but also to heat homes. When the wind is blowing hard, the best thing to do would be to shut down some of the fossil fuel plants, but they can't do that because they need them to provide heat, and thus they can't use all of the wind-generated power. As a result, Denmark sells its very expensive wind power at (subsidized) cheap prices when it is in excess, because Norway and Sweden have ample cheap hydropower that they can quickly shut down while they use Denmark's excess wind power. But Denmark then buys back expensive electricity from Norway and Sweden when wind does not supply enough power. In effect, Denmark uses hydropower storage in Norway and Sweden to modulate the variability of its wind power. So in the end, the proportion of electricity produced by wind in Denmark that is actually used by the Danish people is about 10%. And for that, the Danes pay the highest cost of electricity in Europe (41).

Germany has also made a major commitment to wind power as well as solar power. By the end of 2010, Germany had installed wind capacity of 27 GW coming from 21,607 wind turbines. This is in a country slightly smaller than the state of Montana. In spite of this large number of turbines and installed capacity, wind generates only 6.2% of Germany's electricity (52). The problem for Germany is that its wind resources—like its solar resources—are not very good, with an average capacity factor of 15% for wind power, compared to 25% in Denmark, 30% in Britain, and 27% in the United States (53).

Can pumped storage4 solve the problem of the intermittency of wind power? Not really. The main problems are the volume of water necessary and the high energy cost of pumping water against gravity. Assuming an overall efficiency of 75% for the pumps to raise the water and the turbines to generate electricity, 5.4 tons of water would have to be pumped 100 meters high to store 1 kWh of electricity. To store the output from an 800 MW power plant for a week would require a reservoir of 25 square miles fluctuating in height by 10 meters (54). And, of course, it would take another reservoir to hold the water on the downhill side. Reservoirs are not highly popular among environmentalists, so it is hard to imagine this amount of water storage being acceptable to the public. Add to this the fact that much of the best wind resource in the United States is in the Midwest, which is relatively flat, so it would be difficult to find a high elevation for the upper reservoir. Other types of storage, such as vanadium batteries or compressed air energy storage, also have serious limitations (54). In general, the added cost of energy storage systems makes them uneconomical for utilities, so they are unlikely to make major contributions to the use of renewable energy (53).
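Those numbers follow directly from the physics of lifting water (E = mgh) and check out. A short Python verification (1 kWh = 3.6 million joules; the 75% round-trip efficiency is the figure assumed in the text):

```python
g = 9.81            # gravitational acceleration, m/s^2
height_m = 100      # pumping height
efficiency = 0.75   # round-trip efficiency of pumps plus turbines
kwh_in_joules = 3.6e6

# Mass of water that must be lifted 100 m to recover 1 kWh:
mass_kg = kwh_in_joules / (g * height_m * efficiency)
print(mass_kg)      # ~4,890 kg, roughly the 5.4 (short) tons quoted

# A week of output from an 800 MW plant:
energy_kwh = 800_000 * 24 * 7             # ~134 million kWh
volume_m3 = energy_kwh * mass_kg / 1000   # water is 1,000 kg per m^3
area_m2 = 25 * 2.59e6                     # 25 square miles in square meters
print(volume_m3 / area_m2)                # ~10 m fluctuation in reservoir depth
```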

Multistep Carcinogenesis

Bert Vogelstein at the Johns Hopkins Oncology Center analyzed changes that occur during the development of colon cancer from small polyps that can be easily and safely removed to full-blown colorectal cancer that is invasive and metastatic. He proposed a multistep model of cancer and showed that the evolution from a benign polyp to a malignant cancer involved activation of at least one oncogene and inactivation of at least three tumor suppressor genes (23). Subsequent studies showed that there are dozens of different mutations in most cancers, and on average about 13 different pathways are affected (16). These multiple, independent genetic mutations explain why there is such a long latency period for cancer to develop after an initiating event. The probability of a single cell getting four or more independent mutations is extremely small. In fact, it would appear to be vanishingly small, to the point that cancer would seem to be very unlikely ever to occur, and yet it does!
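To see just how small, here is a rough order-of-magnitude illustration (the per-gene mutation probability used here, about one in ten million per cell division, is a commonly cited ballpark figure, not a number from this chapter):

```python
# Assumed, commonly cited order of magnitude for the chance of mutating
# a given gene in one cell division; not a measured value from the text.
p_per_gene = 1e-7

# Probability that a single cell independently acquires mutations in four
# specific genes (with no genetic instability speeding things up):
p_four_mutations = p_per_gene ** 4
print(p_four_mutations)   # 1e-28 -- vanishingly small for any one cell
```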

One explanation for this conundrum is that some alterations can cause the chromosomes to become unstable and accumulate a large number of mutations and aberrations very rapidly, a condition known as genetic (or genomic) instability. This might occur, for example, if DNA repair genes are mutated so that DNA damage cannot be repaired effectively. There are many genetic syndromes such as Ataxia telangiectasia, Fanconi's anemia, Nijmegen breakage syndrome, Xeroderma pigmentosum, and Werner's syndrome that involve mutations in DNA repair genes and often cause genetic instability. Individuals with these syndromes have a high probability of getting various kinds of cancer.

Another mutation that can cause genetic instability is the deletion of the ends of chromosomes—the telomeres (24). Telomeres consist of a six-base sequence repeated hundreds of times that serves to preserve the ends of the chromosomes when cells divide. Every time cells copy their DNA in the synthesis phase, a short piece at the end of each chromosome is lost, but it doesn't matter because the ends consist of identical copies of the telomere sequence. But when the cells have divided a large number of times, the telomeres become short and the cells can no longer continue to divide—they become senescent. That is true for normal cells, but cancer cells often activate an enzyme called telomerase that adds more telomere units to the ends of chromosomes so they can continue to divide. Under certain conditions, the ends of telomeres can look like double-strand breaks (DSBs) to the cell's repair machinery and can be fused with actual DSBs, leading to chromosome fusions and causing genetic instability (25).