Why We Need Nuclear Power

ENRICHMENT

The yellowcake that is produced as a result of mining and milling cannot directly be made into fuel pellets to be burned in a reactor. The reason is that natural uranium comes in different isotopes—99.3% is 238U and 0.7% is 235U (and, to be completely accurate, a very small fraction is 234U)—but only 235U is useful for fissioning in a nuclear reactor (see Chapter 6). But it takes 3-4% 235U to allow fission to occur efficiently in the most common types of reactors. That is a problem, since the natural uranium in yellowcake is only 0.7% 235U. In order to get a sufficiently high concentration of 235U, the uranium has to be enriched in 235U compared to the 238U. How can that be done, since the chemical properties of all uranium isotopes are identical?
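
To get a sense of the quantities involved, a standard enrichment mass balance shows how much natural uranium feed is needed per kilogram of reactor-grade product. The sketch below is illustrative only; the 0.25% tails assay is an assumed value, not a figure from the text.

    # Illustrative enrichment mass balance (a sketch, not from the text).
    # Feed F at assay x_f splits into product P at x_p and tails W at x_w:
    #   F = P + W   and   F * x_f = P * x_p + W * x_w
    # so F = P * (x_p - x_w) / (x_f - x_w).
    x_f = 0.0071    # natural uranium, about 0.7% 235U (from the text)
    x_p = 0.035     # 3.5% reactor-grade product (text: 3-4%)
    x_w = 0.0025    # assumed tails assay of 0.25% (illustrative)

    P = 1.0                                   # kg of enriched product
    F = P * (x_p - x_w) / (x_f - x_w)         # kg of natural uranium feed
    W = F - P                                 # kg of depleted tails
    print(f"Feed needed: {F:.1f} kg of natural uranium per kg of product")  # ~7.1 kg
    print(f"Depleted tails left behind: {W:.1f} kg")                        # ~6.1 kg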

That was the big problem facing the scientists and General Groves of the Manhattan Project, who were trying to build an atomic bomb. The only physical difference between the two isotopes is a small difference in mass—238 versus 235 mass units—due to the 3 additional neutrons in 238U, and this turns out to be the key to enriching 235U. Several different approaches were used during the Manhattan Project, but the one that became the standard for uranium enrichment in the United States was gaseous diffusion. To accomplish the enrichment of enough 235U to build a bomb (which requires about 90% enriched 235U), the most automated and complex plant in the world at the time was built along the Clinch River west of Knoxville, Tennessee: Clinton Labs, now known as Oak Ridge National Laboratory. The building housing the gaseous diffusion enrichment columns was nearly half a mile long, one-fifth of a mile wide, and four stories high, covering 42.6 acres (2). The idea is that lighter molecules will diffuse through membranes with extremely tiny pores more rapidly than heavier molecules. Each enrichment column has a porous barrier that slightly enriches the lighter 235U isotope on one side and leaves depleted uranium on the other side. The slightly enriched uranium is then fed into another column where it is further enriched, and then another, and so on, for thousands of columns to get the desired enrichment.

Uranium first has to be turned into a gas before it can be separated by gaseous diffusion or other processes. Yellowcake (U3O8) is modified through a series of chemical reactions to form uranium hexafluoride (UF6), a very reactive substance with the desirable property that it is a solid white powder at room temperature but volatilizes into a gas at 56.5°C (134°F) (34). It is the gaseous phase that is used in the gaseous diffusion process. The slight difference (less than 1%) in mass between the two uranium hexafluoride molecules means that the lighter 235UF6 will diffuse slightly more rapidly through a porous barrier than the 238UF6. That makes separating the isotopes a very inefficient and energy-demanding process, and it is the main source of CO2 production from nuclear power, because coal-fired power plants are often used to power the gaseous diffusion plant. In France, nuclear power is used to drive the gaseous diffusion plant, so CO2 production is very small. The energy to run a gaseous diffusion facility accounts for about half of the cost of nuclear fuel and about 5% of the electricity generated from nuclear power plants (35). The United States has a single gaseous diffusion plant currently in operation in Paducah, Kentucky, which is licensed by the NRC to provide enriched uranium for nuclear power plants (36). It uses 1,760 enrichment stages in four buildings that cover 74 acres and requires about 3 GW of electrical capacity to power the plant (37).
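
The reason so many stages are needed follows from that small mass difference. Under Graham's law, the ideal single-stage separation factor is the square root of the ratio of the molecular masses; the short sketch below works it out for UF6 (this calculation is my illustration, not from the text).

    # Ideal separation per gaseous-diffusion stage (a Graham's-law sketch).
    # Lighter molecules effuse faster in proportion to sqrt(m_heavy / m_light).
    m_235uf6 = 235 + 6 * 19   # about 349 mass units
    m_238uf6 = 238 + 6 * 19   # about 352 mass units

    alpha = (m_238uf6 / m_235uf6) ** 0.5
    print(f"Ideal single-stage separation factor: {alpha:.4f}")   # ~1.0043
    # Each stage enriches by well under half a percent, which is why thousands
    # of stages in series are needed to go from 0.7% to 3-4% 235U.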

Gaseous diffusion was not the only enrichment method developed by scientists in the Manhattan Project, however. Another method that is also based on the mass difference in uranium hexafluoride isotopes is gas centrifuge enrichment. A large number of rotating cylinders (centrifuges) are connected in series, and gaseous UF6 is fed into them. The centrifuges rotate at extremely high velocities of 50,000-70,000 rpm and must be made to stringent tolerances, which is why it was not technically feasible to use them during the Manhattan Project. The heavier 238UF6 moves toward the walls by centrifugal force, while the lighter 235UF6 remains near the center and is bled off from one centrifuge and fed into the next centrifuge. Thousands of stages of centrifuges are used to get the desired enrichment. Gas centrifuge plants can be built in phases, with more centrifuges added to increase the capacity. The huge advantage of this technology is that it is far more energy efficient, using only about 2% as much energy as a gaseous diffusion plant, and therefore it generates only 2% as much CO2 and costs far less.
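
The roughly 2% energy figure can be made concrete using typical electricity requirements per separative work unit (SWU). The values below, about 2,500 kWh per SWU for gaseous diffusion and about 50 kWh per SWU for centrifuges, are commonly cited estimates rather than numbers from the text.

    # Rough energy comparison of the two enrichment routes (illustrative values,
    # not from the text): electricity per separative work unit (SWU).
    kwh_per_swu_diffusion = 2500    # commonly cited estimate for gaseous diffusion
    kwh_per_swu_centrifuge = 50     # commonly cited estimate for gas centrifuges

    ratio = kwh_per_swu_centrifuge / kwh_per_swu_diffusion
    print(f"Centrifuge energy use as a fraction of diffusion: {ratio:.0%}")   # ~2%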

This has been the enrichment technology of choice for Russia, Japan, and Europe (excluding France). There is one gas centrifuge plant in the United States in Eunice, New Mexico; two others were under construction but are currently on hold. Worldwide, gaseous diffusion accounted for 50% of uranium enrichment capacity in 2000 while gas centrifuge was 40%, with 10% from retiring nuclear weapons. By 2010, however, only 25% was done by gaseous diffusion while gas centrifuge accounted for 65%. By 2017 it is projected that nearly all enrichment will be done by gas centrifuge (35, 36).

It is this technology that is causing such heartburn for the Western countries because of the gas centrifuge plants built by Iran. Iran is already enriching uranium to the 3-4% necessary to provide fuel for a nuclear power plant as well as some to 20% for a medical research reactor (38). But there is nothing besides world pressure to stop the enrichment from going on to 90%, which then can be used in an atomic bomb. Even if Iran actually does enrich uranium sufficiently to make a bomb, it is likely to cause them more headaches than advantages. In his book Atomic Obsession, John Mueller argues forcefully that joining the semi-exclusive nuclear club does not give a country power over its neighbors but instead tends to focus the world’s attention on the country and shackle it with economic embargoes, as in the cases of North Korea and Iran. A country may bluster and posture, but it is actually constrained because it knows that if it actually used a nuclear bomb on a neighboring country, it would be obliterated by the United States. “It seems overwhelmingly likely that, if Iran does develop nuclear weapons and brandishes them to intimidate others or to get its way, it will find that those threatened, rather than capitulating to its blandishments, will ally with others (including conceivably Israel) to stand up to the intimidation.” (39)

A new generation of enrichment technology developed in Australia is known as global laser enrichment (GLE). It uses lasers to specifically ionize 235UF6 instead of 238UF6, creating a charged molecule that can then be separated electrostatically. This is the only separation technology that does not depend on the mass difference between the two isotopes but instead depends on molecular energy bands that are different between the two types of uranium hexafluoride molecules. The laser can be tuned to the proper frequency so that only the 235UF6 will be ionized and hence separated. General Electric-Hitachi submitted a license application to the NRC to build a plant in the United States, which, if approved, could be operational in 2014. This technology is more efficient than gas centrifuge enrichment, leading to lower energy inputs, lower CO2 emissions, and lower cost (35, 36).

CAN NUCLEAR REPLACE COAL?

I argued in Chapter 4 that renewable energy from wind and solar is not sufficient to make a big dent in our dependence on coal for baseload power. So, is nuclear power able to replace coal? Let’s look at some numbers (again!). Coal currently produces about 41% of electricity in the United States. According to the EIA, the approximately 600 coal-fired power plants produced 18 quads in 2011, while the 104 nuclear power plants produced 8 quads from an installed capacity of about 100 GWe (see Chapter 2). Simple math shows that it would take about 230 new nuclear reactors to replace coal if the reactors were the same average power as current ones. Since Generation III reactors are generally larger by 30-60%, the number could go down substantially to around 150-175. In addition, most of the current reactors would be at the end of their lifetimes in the next 20 years and would have to be rebuilt. Most of these reactors would probably be built at sites where reactors currently exist, minimizing the cost and construction time. There is nothing in principle that says this is impossible, but it would take a crash effort to do so.
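
The arithmetic behind those reactor counts is simple, as the sketch below shows using the EIA figures quoted above.

    # How many reactors would it take to replace coal? (Numbers from the text.)
    coal_quads = 18.0        # energy from the ~600 coal plants in 2011
    nuclear_quads = 8.0      # energy from the 104 nuclear reactors
    current_reactors = 104

    # Scale the existing fleet up until it covers coal's output.
    reactors_needed = current_reactors * coal_quads / nuclear_quads
    print(f"Reactors of today's average size: {reactors_needed:.0f}")    # ~234

    # Generation III designs are roughly 30-60% larger, so fewer are needed.
    for uprate in (1.3, 1.6):
        print(f"If each reactor is {uprate:.1f}x current size: {reactors_needed / uprate:.0f}")
    # prints ~180 and ~146, consistent with the 150-175 range above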

The most critical time for new power plants is the next 20-40 years because many coal plants will be retiring by then, anyway. The question is: What will replace them? Could 175 or so nuclear reactors be built in the United States in that time frame? One way to look at it is to recognize that the current 104 reactors were built in about 30 years in the late twentieth century. This certainly is not a good model, though, because the construction was fraught with delays and inefficiencies. The newer generation of more standardized reactor designs and modular components should cut the build time in half, as has already been done in Japan, for example, with the ABWR reactor. So, yes, it is possible, though it would take a huge national effort.
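
One way to judge the scale of that effort is to compare the required construction pace with the historical one; the sketch below uses the 175-reactor target and the 20-40 year window discussed above.

    # Required construction pace versus the historical pace (figures from the text).
    historical_reactors = 104
    historical_years = 30
    needed_reactors = 175

    print(f"Historical pace: {historical_reactors / historical_years:.1f} reactors per year")
    for years in (20, 40):
        print(f"Pace to build {needed_reactors} reactors in {years} years: "
              f"{needed_reactors / years:.1f} per year")
    # Historical pace was ~3.5 per year; the target needs roughly 4.4 to 8.8 per year.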

The 2003 MIT study mentioned above evaluated a nuclear growth scenario that would have 1,000 to 1,500 reactors worldwide by mid-century. For the United States, they projected a capacity of 300 GWe, or 200 new reactors, similar to my estimates above. If 1,000 of these reactors worldwide replaced gas-fired plants, they would reduce the production of CO2 by 800 million tons annually; if they replaced coal-fired plants, the reduction would be 1,600 million tons (1.6 Gt) (10). Actually, coal-fired plants in the United States currently produce about 2 Gt CO2, so if all coal plants were eliminated, the total reduction would be 2 Gt in the United States (see Chapter 2). Interestingly, the 300 GWe is about the same capac­ity that the DOE proposed to get 20% of electricity from wind, as discussed in Chapter 4. The big difference, of course, is that that amount of wind power would not reduce the consumption of coal by very much, while that capacity of nuclear power would virtually eliminate coal usage.

Another perspective is to look at what China is planning for nuclear power. China currently has 14 nuclear reactors, with 28 under construction and 53 others planned. Because China is even more dependent on coal than the United States (about 80% of its electricity comes from coal) and has surpassed the United States in generating CO2, it is planning major construction of nuclear power plants. It expects to have about 60 GWe from nuclear by 2020, 200 GWe by 2030, and 400 GWe by 2050. Most of the reactors planned would be Westinghouse AP1000s (31). So, if China can do it, can the United States? It is entirely possible, but it depends on the national will to do so.

A major problem is that the only places in the world with the capability to make the steel reactor pressure vessels are in Japan, China, and Russia (32). Clearly, the United States would need the capability to build reactor vessels here if it were to make an ambitious effort to build new reactors. To do that would require a large number of orders for nuclear reactors, and that will only happen if there is a carbon tax on coal so that it makes economic sense to build reactors.

It is highly unlikely that all coal plants will be retired and replaced by nuclear reactors. More likely, there will be a split between natural gas plants and nuclear plants, and of course renewable energy will make contributions to the energy mix. Various scenarios were discussed in Chapter 2, but they all involved a rather small increase in nuclear power. I believe that nuclear power should be increased by a much larger factor and that it could be done.

You might say that the cheapest energy is that which you don’t use—conservation is the biggest bang for the buck. I certainly don’t disagree with that, but it is naive to think that conservation can solve the problem and that new power plants will not be needed. The previous discussion on replacing coal plants considered only replacing the current coal plants, with no growth in total electrical energy. Even if we become more efficient, though, total electrical demand is expected to grow because of population growth and more electrical gadgets and electric cars (see Chapter 2). To the extent that conservation and efficiency can reduce the need for electrical power, though, the problems of reducing carbon dioxide become easier. Let’s be frank—there are no easy solutions. We will need to do everything—conservation, more nuclear plants, more wind and solar farms, and probably more natural gas plants—to reduce dependence on coal.

HOW DANGEROUS IS BACKGROUND RADIATION?

Large populations of people around the world live in areas that expose them to high levels of terrestrial background radiation. Several populations live in areas with monazite sands containing radioactive thorium: about 90,000 people in Yangjiang, China, are exposed to γ doses of about 4 mSv per year, and 100,000 people in Kerala, India, are exposed to median doses of 4 mSv/yr and up to 70 mSv/yr in some cases (2, 14). Long-term epidemiological studies of these populations have not shown any significant cancer risk from these higher background doses of radiation (14). The Guarapari coastal region of Brazil has monazite sands that expose some 30,000 people to dose rates of 5 mSv/yr. Around 7 million residents of the wine country of central France live in granitic areas and get annual doses of 1.8 to 3.5 mSv. A small population of about 2,000 people in Ramsar, Iran, get an average annual dose of 6 mSv, with a smaller number getting up to 20 mSv. The 2,740 residents of Leadville get 5.25 mSv/yr. In none of these cases is there evidence that the risk of cancer is increased from these high background doses of radiation (15).

When you add up all of the pieces, the students coming to Colorado State University end up with an average annual background dose of 4.2 mSv, nearly three times what it would be in Florida or Texas, excluding medical exposure (Figure 8.6).

Should they be worried? One way to answer the question is to see whether people in Colorado have a higher than average rate of cancer. In fact, Coloradans have the fourth lowest incidence of cancer of any state in the United States, in spite of the fact that we have the highest level of background radiation of any state in the country, and we have the third lowest incidence of lung cancer in spite of the high level of radon (16). Clearly, this high rate of background radiation is not causing a lot of excess cancers in Coloradans—quite the reverse. Of course, one reason for a lower incidence of cancer is the lifestyle of people in Colorado, who tend to be quite active and do not smoke a lot. Still, it is clear that people living in Colorado do not have to worry about the high background radiation they are exposed to. This is a very important factor when considering the exposure of people to radiation from storage of nuclear waste in a repository, for example, or after a nuclear accident. If the doses are less than background levels that people are exposed to naturally with no negative consequences, then it is not worth worrying about. Simply understanding this elemental fact should remove much of the worry that people have about exposure to radiation from nuclear reactors. We are all exposed to radiation—you can’t avoid it—but being exposed to radiation from nuclear waste storage or even a nuclear accident is no different from exposure to natural or medical radiation. It is all about the dose!

MYTH 5: NUCLEAR POWER IS SO EXPENSIVE IT CAN’T SURVIVE IN THE MARKETPLACE

This is becoming the new mantra of the anti-nuclear crowd and has been a long-standing argument by Amory Lovins. It plays a central role in the scathing denunciation of nuclear power in The Doomsday Machine. But is it true? Is nuclear power too expensive to be a good option for clean electrical power?

It is certainly true that a nuclear power plant costs a lot of money to build. The two new nuclear reactors being built in Georgia are expected to cost $14 billion combined. The capital costs are the largest expense of nuclear power, and they have to be committed for several years before any energy is being generated to pay for them. Building a new nuclear reactor on time and at cost is extremely important, but of course there is no modern record to depend on because all but two of our nuclear reactors were built over 20 years ago, and 60% were built over 30 years ago (26). Past history does not have to determine the future, however. The four new reactors being built in Georgia and South Carolina are being added to current sites where nuclear reactors already exist. There is a wealth of experience in running nuclear reactors at these sites, and they are accepted by the community.

This is a far cry from the 1970s and 1980s, when the existing reactors were built and there were huge protests at many of the sites. These protests and changes in reactor design led to construction times of up to 10 years or more in some cases, which greatly increased the cost. This was also a time when interest rates had skyrocketed because the OPEC oil cartel caused oil prices to soar, leading to very high inflation. This double whammy led to reactor costs that spiraled out of control. The escalating costs, combined with the fears from Three Mile Island, led to a number of reactors being canceled.

Even so, 104 of those reactors did get built, and they now provide cheap electricity. The capital costs are paid for out of current electrical rates but prorated over many years, so current electricity rates are cheap. The cost of providing nuclear power, including the cost of uranium fuel and operating and maintenance costs, averages about 2 cents per kWh in the United States (27). Even with the capital cost of existing nuclear power plants built into the electrical rates, they are still cheaper than rates from coal-fired power plants.

The current very low prices of natural gas, combined with the relatively small capital cost of combined cycle natural gas plants, make them the winner for the lowest levelized cost of a new power plant, about 6 to 7 cents per kWh (see Tables 5.1 and 5.2 in Chapter 5). Using a 30-year cost recovery period, the projected cost of nuclear power is about 11 to 12 cents per kWh, but with a 40-year cost recovery, the cost drops to 8 cents per kWh. The cost of electricity from new coal plants is about 11 cents per kWh but is nearly 14 cents per kWh if carbon capture and storage is used. Onshore wind power is about 10 cents per kWh, while offshore wind power is more than twice that, as is solar power. If the risk premium for nuclear power is removed by government loan guarantees, for example, then the cost of nuclear power plants is competitive with natural gas plants at about 7 cents per kWh. The cost of construction is the biggest factor for nuclear power, so if the Generation III plants can be built on time and at cost, with the current low interest rates, then nuclear power economics look good. Natural gas power plants, on the other hand, are not so expensive to build, but the cost of natural gas has historically been highly variable, so the operating costs can change rapidly, causing volatility in electrical rates.

States with regulated rates set by public utility commissions, such as those in the Southeast, are far more likely to build more nuclear plants than deregulated states where rates are highly uncertain (28). The vagaries of unregulated power were brought to light by the energy crisis in California in 2000-2001. This deba­cle was caused by several factors, including a lack of electrical generation capac­ity, deregulation that required utilities to buy electricity from the market at spot prices, and fraudulent activities by Enron that led to extremely high prices (29, 30). It was an object lesson in how not to deregulate markets. Regulated states, which are in the majority, are not subject to such extremes in market prices, and the rates that utilities charge for electricity are far more stable. This is a climate in which the long-term costs of nuclear power plants can be amortized, resulting in low, stable rates. Since new nuclear power plants are designed for a 60-year lifetime, they will provide cheap electricity in future years, just as current reactors that were built 20 or more years ago provide cheap electricity now. Investments in nuclear power are truly long-term infrastructure investments that will pay off over a long time.

One other factor that is often raised concerning the cost of nuclear power is the cost of dealing with nuclear waste and with decommissioning reactors after their useful lifetime is over. Utilities that operate nuclear reactors have been paying a tenth of a cent per kWh into the Nuclear Waste Fund, which has accumulated $38.7 billion in Treasury certificates, so that cost is already paid for in the cost of electricity. The Nuclear Regulatory Commission (NRC) regulates the decommissioning of reactors and requires utilities to set aside funds to pay for decommissioning, which are estimated at 0.1 to 0.2 cents per kWh (31, 32). In reality, the extra costs for long-term disposal of nuclear waste and decommissioning of reactors add only a few tenths of a cent per kWh to the cost of electricity, so they are not big factors in the cost equation.
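
As a rough check on the waste-fee figure, the sketch below estimates the annual contribution from the 0.1 cent-per-kWh fee. The assumed US nuclear generation of roughly 800 billion kWh per year is my assumption, not a number from the text.

    # Rough annual contribution to the Nuclear Waste Fund (a sketch).
    # The assumed US nuclear generation of ~800 billion kWh per year is my
    # assumption; the fee of a tenth of a cent per kWh is from the text.
    fee_dollars_per_kwh = 0.001
    annual_generation_kwh = 8e11

    annual_fee = fee_dollars_per_kwh * annual_generation_kwh
    print(f"Annual fee collected: ${annual_fee / 1e9:.1f} billion")   # ~$0.8 billion per year
    # A few decades of such payments, plus interest on Treasury certificates,
    # is consistent with the accumulated $38.7 billion cited above.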

The advent of small modular reactors may also change the cost equation. A utility could add a 200 MWe modular reactor in a much shorter time frame and at a much smaller cost. As needs grow, additional modular units could be added to meet the capacity needs, but the capital cost outlay would be much more manageable. Missouri is evaluating that option to add to its current Callaway nuclear reactor at Fulton (33). The DOE awarded the first development money for an SMR to a consortium of Babcock & Wilcox, the Tennessee Valley Authority, and Bechtel International. The cost-sharing award is to facilitate the building of a prototype SMR and enable licensing by the NRC (34).

The capital cost of nuclear power plants is a huge factor in the decision to build a nuclear reactor. The cost comparison becomes far clearer if the true cost of CO2 production is reflected through a carbon tax of some sort. In that case, coal and natural gas lose their advantage and nuclear becomes compelling. So, yes, it is true that the capital cost of nuclear reactors is a big factor in the decision, and it is unlikely that the market alone will be able to fund these plants. But the long-term cost is very favorable, so it requires stable utilities with a long horizon to make the decision to build nuclear power plants.

NATURAL GAS

So, if coal is the problem, is natural gas the solution? Natural gas already plays a major role as a backup power supply for peak demand at coal-fired power plants. The Rawhide Energy Station, for example, has five gas turbines that can provide 388 MWe on a hot summer day to provide electricity for air conditioning (yes, we do need air conditioning in Colorado—at least we think we do! We can thank global warming for that.). Natural gas provides 27% of the energy production for the United States and contributes 24% to electricity production (Chapter 2). In principle, burning natural gas produces about half as much CO2 as an equivalent amount of coal, so a greater reliance on natural gas would at least reduce the prob­lem of CO2 production. But even if all coal-fired power plants in the United States were replaced by plants burning natural gas, they would release about one billion tons of CO2 yearly, which is still a major problem.

Natural gas was produced in geological times by anaerobic decay of organic matter that was then compressed and heated over eons. Slightly different condi­tions created coal, oil, and natural gas. Conventional natural gas rises through pores in permeable rock strata to form domes above oil reservoirs, usually one or two miles deep in the earth, and pure methane can be found even deeper (28). To form a dome containing natural gas, the permeable reservoir rock has to be sealed by an impermeable rock layer that traps the natural gas. In recent years, the tech­nology to develop unconventional sources of natural gas from layers of shale has greatly increased the availability of natural gas. Gas-rich shales are thick layers of rock that are buried deep in the earth. Shale is a type of rock with very small pores, so it serves both as a source and reservoir of natural gas. However, the gas cannot be released from the shale without fracturing the shale. This has led to one of the controversies about natural gas—hydraulic fracturing, or fracking.

Natural gas from wells consists of about 75% methane, a potent greenhouse gas in its own right (see Appendix A), but also contains higher order hydrocar­bons such as ethane, butane, and propane. The natural gas supplied to homes for heating and cooking and to power plants for electricity production is pro­cessed to greater than 93% methane. Methane is also produced from landfills as methanogenic bacteria decompose the organic material, and this is increasingly being collected to power small electrical plants. The other major source of meth­ane is from food digestion in livestock (don’t smoke around farting or belching cattle!). And, yes, we humans generate it too!

Electromagnetic Radiation (Photon) Interactions

X-rays and γ photons give up their energy in three different ways, and the way in which they do it depends on the energy of the photon. One way is called the photoelectric interaction—this is precisely the interaction that Einstein explained and for which he received the Nobel Prize. A second way is known as the Compton interaction, named for Arthur Compton, the American scientist who discovered it and received a Nobel Prize for his discovery. The third way is called pair production, which involves the creation of positive and negative electrons (2).

Photoelectric Interaction

The photoelectric interaction involves a collision of an energetic photon with an inner-shell electron of an atom, as in the Bohr model of the atom (Figure 7.1). All of the energy of the photon is given to the electron, which is kicked completely out of the atom with the energy of the photon minus the energy that bound it to the nucleus, and the photon disappears. This electron can then cause damage on its own by charged particle interactions. Low energy X-rays are also given off when an electron jumps from a higher shell to fill the vacancy left by the electron kicked out of the atom.

The photoelectric interaction occurs primarily for low energy photons with energies below about 100 kiloelectron volts (keV), and its cross section—the probability of an interaction—goes up rapidly with the atomic number (Z) of the atom it interacts with. Mathematically speaking, the cross section is proportional to Z4/E3, where E is the energy of the photon. This has very important practical applications in diagnostic radiology, the science of using radiation to image bones and other body parts. Our tissues other than bone mostly consist of carbon (Z = 6), nitrogen (Z = 7), oxygen (Z = 8), and hydrogen (Z = 1). Bone, however, has a large amount of calcium with a Z of 20. Low energy photons such as X-rays with energies of about 75 keV will primarily interact by the photoelectric effect, and bone will absorb photons a lot better than the soft tissue. The reason an X-radiograph shows the bones so well is that they absorb most of the photons, leaving a dark image, compared to the soft tissue, which doesn’t absorb the photons very well. The photoelectric interaction is also the basis for using contrast solutions to image body parts such as the colon. A solution of barium (Z = 56) is drunk and, as it flows through the digestive system, low energy X-rays can image its location and determine if there are blockages.
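
Because the cross section scales as Z to the fourth power, a short calculation shows just how strongly bone outcompetes soft tissue for low-energy photons. The effective Z of about 7.5 used for soft tissue below is an assumed, commonly quoted value, not a number from the text.

    # Relative photoelectric absorption using the Z^4/E^3 scaling from the text.
    # The effective Z of about 7.5 for soft tissue is an assumed, commonly
    # quoted value, not a number from the text.
    def photoelectric_weight(z, energy_kev):
        """Relative interaction probability, proportional to Z**4 / E**3."""
        return z ** 4 / energy_kev ** 3

    energy_kev = 75.0      # a typical diagnostic X-ray energy (from the text)
    z_bone = 20            # calcium dominates bone's photoelectric absorption
    z_soft_tissue = 7.5    # assumed effective Z for soft tissue

    ratio = photoelectric_weight(z_bone, energy_kev) / photoelectric_weight(z_soft_tissue, energy_kev)
    print(f"Bone absorbs roughly {ratio:.0f} times more strongly than soft tissue")  # ~51x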

Health Consequences

The Chernobyl accident is the only commercial nuclear power reactor accident in the world (including the recent accident in Japan, to be discussed next) where people actually died from radiation. Two people died immediately in the accident from the explosion and a third was reported to have died from a heart attack. Firefighters and Chernobyl power plant employees were the group most at risk. A total of 237 first-responders were hospitalized and 134 of them were diagnosed with acute radiation syndrome (see Chapter 7) after receiving doses ranging from 0.8 Gy up to 17 Gy (15). Five firefighters died the first night, and an additional 23 died within a month from acute radiation syndrome (19). Bone marrow trans­plants were done on 13 of these people, but they all died from the hematopoietic syndrome. Recall that about half of the people will die from a single dose of 4 Gy. By 2004 an additional 19 people in the most-exposed group died, though not from bone marrow failure (13, 15). In total, 28 people died from acute radiation effects, 2 from blast effects, one from a heart attack, and 19 from uncertain causes.

Many studies have been done since the Chernobyl accident on the long-term health consequences to the people who were exposed. Because there was so little public health information available for the exposed population prior to 1986, it is difficult to make accurate assessments. The International Atomic Energy Agency (IAEA) established the Chernobyl Forum in 2003 to study the environmental and health consequences of the accident, and the most definitive report of the accident was published in 2006. The Chernobyl Forum included experts from seven United Nations organizations, including the IAEA, the World Health Organization (WHO), and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). It also included representatives from the governments of Belarus, Ukraine, and the Russian Federation (20). The full report on health effects was published by the WHO in 2006 (21). They project that eventually 4,000 people may die from cancer related to the Chernobyl accident. A report was done by the anti-nuclear organization Greenpeace claiming that there were up to 200,000 deaths from Chernobyl by 2004, but this study lacks credibility. It is strongly at variance with the bulk of published scientific reports on Chernobyl. Virtually all of the authors were from Ukraine and were possibly somewhat biased in evaluating the consequences of the accident (22). Another report—TORCH (The Other Report on Chernobyl)—was commissioned by the Greens/European Free Alliance (EFA) party in Europe to respond to the conclusions of the Chernobyl Forum (23). They estimate that 30,000 to 60,000 cancer deaths will result from Chernobyl, far higher than the Chernobyl Forum report. Who is right?

There were four different groups of people who were exposed to significant doses of radiation from Chernobyl. The highest exposed group consisted of the “liquidators,” the emergency and recovery operations people who were involved in containing and cleaning up the accident. This included 240,000 workers exposed to high radiation levels, and eventually up to 600,000 liquidators were registered, though most of the latter were exposed to much lower doses. The average effective dose to these individuals was estimated at 100 mSv. The second population consisted of 116,000 people who were evacuated in 1986 from the highly contaminated 30-kilometer exclusion zone (Figure 10.1). These people had average doses of 33 mSv. Another 220,000 people were evacuated from a larger area (the strict-control zone) during 1986-2005 with cumulative doses of over 50 mSv over those years. Finally, about 5 million people lived in the larger zones that received fallout from the accident and got doses of 10-20 mSv during 1986-2005 (20). Doses to anyone else in Europe were negligibly small.

Figure 10.1 Exclusion zone around the Chernobyl nuclear power plant. The circle is the original 30 km evacuation zone. The black line is the current 30 km zone in Ukraine and Belarus; the wide gray line is the 10 km zone; the narrow white line is the border between Ukraine and Belarus.

As a frame of reference, recall that the annual background radiation dose in Colorado is about 4.5 mSv per year, so over 20 years, Colorado residents get cumulative doses of about 90 mSv. The background dose in Kiev is 12 micro-Roentgen per hour, which works out to be just under 1 mSv per year, far lower than the natural background in Colorado (24). So the people in the larger zone who got some contamination received an extra 1 or 2 mSv per year on top of the natural background of 1 mSv/yr. This total dose is about half of what I get every year from natural background. It is about the same as the 3.2 mSv average background radiation for all US citizens and half of the total average of 6.2 mSv for US citizens when medical doses are taken into account. The very high estimates of excess cancer deaths from the Greenpeace and TORCH reports are simply not realistic.
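
The conversion from the quoted Kiev exposure rate to an annual dose is a one-liner; the air-kerma conversion of about 8.76 mGy per Roentgen, and treating mGy as roughly equal to mSv for gamma rays, are standard approximations rather than figures from the text.

    # Converting Kiev's quoted background exposure rate to an annual dose.
    rate_microR_per_hour = 12            # from the text
    hours_per_year = 24 * 365            # 8,760 hours
    mGy_per_R = 8.76                     # approximate air-kerma conversion (assumption)

    annual_R = rate_microR_per_hour * 1e-6 * hours_per_year
    annual_mSv = annual_R * mGy_per_R    # treating 1 mGy ~ 1 mSv for gamma rays
    print(f"Kiev background: ~{annual_mSv:.2f} mSv per year")   # just under 1 mSv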

Since 131I is the biologically significant isotope released in the highest amounts, you would expect that children who drank contaminated milk would get high doses to the thyroid; indeed, that is what happened. About 5,000 cases of thyroid cancer have been identified in children and adolescents.

There are some interesting factors related to the thyroid cancer cases. First, the incidence of thyroid cancer could have been minimized if iodine pills had been rapidly given out to the population, though workers and their families in Pripyat were given iodine pills within the first 30 hours. Second, there was a general iodine deficiency in this region, with an abnormally high prevalence of goiters in the population (25). This would lead to a vigorous uptake of radioactive iodine. Third, the incidence of thyroid cancer happened more rapidly than was expected based on the Japanese bomb survivors. Fourth, children are much more sensitive to getting thyroid cancer than adults (and usually drink more milk). And finally, thyroid cancer is rarely fatal, with a survival rate of over 95% in the United States. Out of the nearly 5,000 cases of thyroid cancer, only 15 people died from the can­cer up to 2002 (26, 27). It is clear that more cases will arise in the future because solid cancers can develop over a lifetime. Interestingly, the standard treatment for thyroid cancer is surgery followed by high doses of 131I to kill thyroid cancer cells (28). Patients must then take thyroid hormone pills for the rest of their lives.

How about other cancers? Leukemia is the first cancer that was found in the Japanese bomb survivors because it has such a short latency period and the risk is over by about 15 years after exposure. However, there is no solid evidence that leukemia incidence has increased in the exposed population (21, 29). Out of the 600,000 liquidators exposed to an average dose of 100 mSv, the Chernobyl Forum projected that there might be an eventual 4,000 people dying of cancer from the radiation. This is compared to about 100,000 people in that population who will die from cancer due to normal conditions (20). Even though 26 years have passed since the Chernobyl accident, there will continue to be studies of the population, similar to the ongoing studies of the Japanese bomb survivors, and the number of excess cancers will become more defined. It is extremely unlikely that it will far exceed the 4,000-person estimate, however. As I discussed in Chapter 7, the risk of fatal cancer for a population that does not include children is 4% per Sv when given at a low dose and low dose rate, as is the case for Chernobyl populations. So for 600,000 adults exposed to an average dose of 100 mSv (0.1 Sv), the expected lifetime number of fatalities is 2,400 people. Estimates of tens to hundreds of thousands are simply not scientifically credible.
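
The 2,400 figure follows directly from the risk coefficient, as the sketch below shows using the numbers quoted above.

    # Expected lifetime cancer fatalities among the liquidators (figures from the text).
    population = 600_000
    average_dose_sv = 0.1       # 100 mSv average effective dose
    risk_per_sv = 0.04          # 4% fatal-cancer risk per Sv at low dose and dose rate

    expected_fatalities = population * average_dose_sv * risk_per_sv
    print(f"Expected excess fatal cancers: {expected_fatalities:,.0f}")   # 2,400
    # Compare with the roughly 100,000 cancer deaths expected in this group
    # from causes unrelated to the accident.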

The largest public health problem resulting from Chernobyl has been the impact on mental health. People were moved out of their homes into communities where they did not feel comfortable or welcome. People had strongly negative attitudes about health and well-being and felt they had little control over their lives. They were convinced that they were doomed to a shorter life expectancy, and this fatal­ism led to a loss of initiative in getting jobs and taking care of their personal lives. Life expectancy in general has declined in the Russian Federation, quite apart from Chernobyl, and the average life expectancy in 2003 was just 59 years for men. There is also a sense of victimization by and dependency on the govern­ment. All of these factors have combined to create a huge public health problem as a legacy of Chernobyl (20, 30). Many of these problems might have been allevi­ated by government transparency, immediate action after the accident, and estab­lishing a legacy of trust that honest, accurate information was being disseminated.

These are issues that are applicable after any accident. Honesty and transparency by the government involved (and accurate reporting by the media) are essential in minimizing stress and psychological factors that were also the biggest problems after TMI and after the Japanese accident to be discussed later.

Cost

Assuming a capacity factor of 33% (substantially higher than the current 27%), the EIA estimates the levelized cost of new wind power in 2017 at $96 per MWh, with a range of $77 to $112 per MWh. While this is about 50% more expensive than power from natural gas plants, it is about the cheapest cost for renewable energy sources (30). Recall that the levelized cost of PV solar is about $153 per MWh. The actual cost will depend greatly on the need for building large trans­mission lines to get the electricity from the good wind locations to the population centers that need it. The cost is nearly all in the construction of the wind farm and the cost of wind turbines, since the ongoing maintenance costs are relatively small and the wind is free. However, wind turbines have a lifetime of only 15 to 20 years, after which a large capital input will again be necessary to replace the turbines and possibly even the towers (38, 48).

Many of the subsidies available for solar energy also apply to wind energy; they are necessary to make the technology competitive. In the past, new wind power capacity has dropped dramatically in years when federal tax credit subsidies were not available, and that is also likely to be the case in the future (48). The federal production tax credit (PTC)—a credit of 2.2 cents per kWh—has been available for wind power developers since 1992, but it was scheduled to expire by the end of 2012. Vestas, a major manufacturer of wind turbines that has a plant near Fort Collins, says the market for wind turbines “may fall off a cliff” if the PTC is not renewed (55). In the last-second budget deal early in the morning on New Year’s Day 2013, the PTC was renewed for another year.

According to the Congressional Research Service, renewable energy sources received $6.7 billion in federal subsidies in 2010 through production tax credits and direct grants (56). If the production tax credit had not been extended by Congress, wind power developers expected that there would be virtually no additional wind power developed in 2013 (57). Small household wind turbines can still qualify for a 30% federal tax credit until 2016. The American Recovery and Reinvestment Act of 2009 provided $21 billion for renewable energy generation by wind and solar (58).

The US Department of Energy (DOE) published a study in 2008 that proposed a plan for the United States to get 20% of its electricity from wind power by 2030, which they estimated would require 300 GW of wind power (48). However, wind currently provides only 3.6% of our electricity from about 60 GW installed (see Chapter 2). To increase this to 20% of current electricity usage would require 333 GW, but the EIA projects that electricity demand will grow by 0.9% annually, so 395 GW would be necessary by 2030. That is an additional 345 GW of installed capacity. The study expected that 50 GW would come from offshore projects. The EIA estimates that it costs $2.43 million per MW of installed onshore wind capacity and $5.97 million per MW for offshore wind capacity (59). Thus, it would cost about $717 billion for onshore wind capacity and an additional $300 billion for offshore wind capacity to meet this goal. The total cost exceeds $1 trillion, and it still would replace few gas plants and few, if any, coal plants because of the need for backup power. Furthermore, this analysis does not consider the cost of additional transmission capacity needed, which will add at least $60 billion more (48).
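
The trillion-dollar total comes from multiplying the added capacity by the EIA's per-megawatt costs, as the sketch below shows using the figures quoted above.

    # Cost of scaling US wind power to 20% of electricity by 2030 (figures from the text).
    added_capacity_gw = 345            # additional installed capacity needed
    offshore_gw = 50                   # expected offshore share
    onshore_gw = added_capacity_gw - offshore_gw    # ~295 GW onshore

    cost_per_mw_onshore = 2.43e6       # EIA estimate, dollars per installed MW
    cost_per_mw_offshore = 5.97e6

    onshore_cost = onshore_gw * 1000 * cost_per_mw_onshore
    offshore_cost = offshore_gw * 1000 * cost_per_mw_offshore
    print(f"Onshore:  ${onshore_cost / 1e9:.0f} billion")     # ~$717 billion
    print(f"Offshore: ${offshore_cost / 1e9:.0f} billion")    # about $300 billion
    print(f"Total:    ${(onshore_cost + offshore_cost) / 1e9:.0f} billion")  # over $1 trillion
    # Transmission upgrades would add at least another $60 billion (see text).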

After years of debate, the US Department of the Interior approved the first offshore wind farm in the United States off the coast of Cape Cod—the Cape Wind project—in 2010 (42). This will not be cheap energy, though. The EIA estimates the levelized cost of offshore wind power to be $244 per MWh, with a range of $187-350 per MWh, making it some of the most expensive energy available (45). While there are ample offshore wind resources, utilizing them will be extremely expensive; the projects will likely be heavily subsidized by taxpayers, and the power will be very expensive for customers. Cape Wind entered an agreement with National Grid to more than double the current rate of 9 cents/kWh to 20.7 cents/kWh in 2013. In addition, the rates would increase by 3.5% per year for the next 15 years after that, reaching a rate of 34.7 cents per kWh by the end of the contract. The estimated $2 billion to build the wind farm, which is likely an underestimate, will cost taxpayers about $600 million that will go to Cape Wind because of available federal tax credits (60). Wind energy is not cheap!
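
The 34.7-cent endpoint is simply the 2013 rate compounded at 3.5% per year for 15 years, as the sketch below confirms.

    # Cape Wind contract rate escalation (figures from the text).
    starting_rate_cents = 20.7    # cents per kWh in 2013
    escalation = 0.035            # 3.5% per year
    years = 15

    final_rate = starting_rate_cents * (1 + escalation) ** years
    print(f"Rate after {years} years: {final_rate:.1f} cents/kWh")   # ~34.7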

WHAT ARE THE RISKS?

Okay, all this theory and molecular biology is great—at least I think it is—but what you really want to know is, what are the chances you will get cancer if you are exposed to radiation, right? And how do we know what the chances are?

Scientists have developed methods to study the transformation of normal cells growing in culture into cells that have a malignant pattern of cell growth—known as transformed cells—but the problem with these experiments is that they depend on mouse cell lines. This is because it is very hard to transform human cells into cancerous cells. These kinds of experiments can be used to compare the relative ability of different physical and chemical agents to transform cells, however. Certain polycyclic aromatic hydrocarbons, for example, turn out to be much better than radiation at transforming cells (26). Even so, the results of these transformation experiments cannot be used to determine the quantitative, dose-dependent carcinogenic potential of radiation for humans. Mice are also useful to study certain aspects of carcinogenesis, but mouse studies are also not capable of determining the risk of a certain dose of radiation for humans.

The most reliable animal studies for determining human risk of cancer are on dogs. Scientists at Colorado State University did a life-span study of beagle dogs to determine the risk of cancer when dogs were irradiated in utero at various stages of development or at different ages after birth. Relatively large doses of 16 or 83 rads of γ rays were given to the dogs, and they were followed through their lifetimes. Young dogs developed a variety of cancers after irradiation in late fetal and neonatal stages. Dogs irradiated as juveniles also developed thyroid cancers (27, 28). While these studies show that dogs are a good model for certain human cancers, they are not sufficient to establish the risk of cancer to humans from a certain dose of radiation.

So how do we know the risk? The only way to know is to actually study human populations, but of course it is unethical to irradiate humans to see if they get cancer. However, various groups of people have been exposed to radiation for different reasons: people who have been treated with radiation for various conditions over the years; many Japanese who survived the bombing with atomic weapons in World War II; people who were in the vicinity of atmospheric testing of nuclear weapons in the 1950s and 1960s; people who were exposed from the Chernobyl nuclear accident; and uranium miners. There are many other groups exposed either accidentally or during the course of their work. The studies of cancer incidence in these populations have been reviewed recently in a report by the National Academy of Sciences National Research Council (BEIR VII) (29).

In the 1930s and 1940s, thousands of patients in Great Britain with ankylosing spondylitis, a painful disease of the spine, were given radiation to reduce the pain, and some of them later developed leukemia. Thousands of women in the United States and Canada had high doses of radiation to the chest from fluoroscopy procedures, and later some of them developed breast cancer. Up until the 1950s, children emigrating from North Africa to Israel were treated for ringworm by irradiating the scalp, and some of these children later developed leukemia or thyroid cancer. Uranium miners breathing in radon—along with smoking, a confounding factor—developed lung cancer. Some Marshall Islanders developed thyroid cancer after exposure to 131I from thermonuclear bomb testing in 1954, and thousands of kids developed thyroid cancer after the Chernobyl nuclear power accident in 1986 (7). But by far the most useful data come from an extremely careful long-term epidemiological study of the survivors of the atomic bombs dropped on Hiroshima and Nagasaki, Japan, in 1945.

FUEL FABRICATION

Once the uranium has been enriched to 3-4% 235U—depending on the specific requirements of a reactor—it is converted to uranium oxide (UO2) and then has to be made into fuel pellets that fit in the reactor. This process is identical to that used at the Melox facility in France to make MOX fuel (see Chapter 9). The only difference is that the MOX fuel pellets are made of reprocessed plutonium oxide mixed with the uranium oxide, instead of solely uranium oxide as in normal nuclear fuel pellets. Basically, the uranium oxide powder is compressed under extremely high pressure to form pellets about the size of a pencil eraser. The pellets are heated to a high temperature to drive off any water and oils, ground to extremely uniform sizes, cleaned, and inserted into zirconium alloy tubes 12 to 14 feet long. The tubes are filled with helium gas, sealed, tested for integrity, and are now fuel rods. The fuel rods are assembled into a matrix of 14 x 14 or up to 17 x 17, depending on the reactor, and ultimately about 200 fuel assemblies go into the reactor core. One-third of the fuel assemblies are used up and switched out every 18 months or so.

The energy density of a nuclear fuel pellet is remarkable. Each pellet weighs about 7 grams and contains the same energy as 140 gallons of oil, 157 gallons of regular gasoline, 17,000 cubic feet of natural gas, or 1,780 pounds of coal (40). A couple of handfuls of pellets are equivalent to a train car full of coal. This energy density is what makes nuclear reactors so efficient. The entire fleet of 104 US reactors, each producing about one GW of power, uses only 19,000 tons of uranium in a year (41). But the United States produces only about 1,500 tons of uranium per year, so where does the rest come from?
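
A couple of quick divisions put those fuel figures in perspective; the sketch below uses the numbers quoted above.

    # Uranium use by the US reactor fleet (figures from the text).
    fleet_tons_per_year = 19_000      # uranium used annually by all 104 reactors
    reactors = 104
    us_production_tons = 1_500        # annual US uranium production

    per_reactor = fleet_tons_per_year / reactors
    domestic_share = us_production_tons / fleet_tons_per_year
    print(f"Uranium per reactor: ~{per_reactor:.0f} tons/year")          # ~183 tons
    print(f"Domestic production covers ~{domestic_share:.0%} of demand") # ~8%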