All of the exciting developments describing the electron orbiting around the nucleus did not mean that the nucleus was forgotten. The three types of radiation—α, β, and γ—had been discovered as particles that were emitted from the nucleus of radioactive elements. But there were big problems with the nucleus. Rutherford had shown that the nucleus was a compact center of the atom filled with protons. Since protons have a positive charge and repel each other more strongly as they come close together, something must be holding them together, some new force different from electromagnetism. Not only that, but the masses of nuclei other than hydrogen were at least twice as large as the mass of the protons in the nucleus. For example, the α particles that Rutherford had shown to be helium nuclei had two protons but a mass equal to four protons. Rutherford, in fact, proposed in a lecture in 1920 that there must be a neutral particle in the nucleus composed of a proton and an electron with no net charge to account for the extra mass—an idea that was wrong in the details but right in principle (10).
And new questions were being raised about the radiation coming from radioactive elements. Marie and Pierre Curie’s daughter, Irene Curie, and her husband, Frederic Joliot, were continuing the work of Marie Curie at the Radium Institute in Paris. They were studying the element beryllium by bombarding it with α particles from polonium and found that the beryllium emitted radiation they thought was γ rays. James Chadwick, a colleague and former student of Rutherford, did not believe that they were γ rays. In a 10-day burst of frenetic activity in 1932, he bombarded different elements with the radiation coming out of beryllium and found that protons were knocked out of the nucleus of the different elements. In fact, he found that the energies of the protons exceeded the energies of the supposed γ rays. But if the beryllium radiation was actually neutral particles with a mass nearly identical to protons, then the results were simple to explain. The effect is similar to a pool cue ball hitting a group of pool balls and knocking one of them out. Chadwick named the particle a neutron and changed the understanding of the nucleus (10). The new model of the nucleus consisted of protons and neutrons of nearly identical mass that must have a force holding them together, later identified as the strong nuclear force. Because the strong force does not depend on charge, it acts equally on neutrons and protons, which are often referred to as nucleons.
Separating out the plutonium and uranium from the fission products is not the end of the story. The plutonium oxide that was sealed in canisters is shipped once or twice a week by gendarme-escorted truck from La Hague to a plant in the Provence region of France just north of Avignon. The drive from Avignon to the
Melox plant is a trip through Van Gogh land filled with vineyards and cypress trees swaying in the wind and the French Alps in the distant east.
I met my host, Joe Faldowski, at the security office. We went to a meeting room and met Jean-Pierre Barriteau, the director of International Projects for AREVA. He told me that France gets over 10% of its electricity from mixed-oxide (MOX) fuel, which is derived from the plutonium recycled at La Hague and made into new fuel pellets at Melox. In France, 21 of 58 light water reactors use MOX for fuel, while Germany has 10 reactors that burn MOX. The United States has used four MOX fuel assemblies in a reactor as part of a demonstration program, but that has now ended, and no US reactors currently burn MOX fuel. In a refueling operation, about 30% of the fuel assemblies can be from MOX while the rest are conventional uranium fuel assemblies.
Joe took me on a tour of the plant to see how the pellets are made and processed into the final assemblies. The plutonium coming from La Hague is first combined with depleted uranium¹⁰ in a grinder machine, making a primary blend of about 30% plutonium. About 60% of the plutonium is fissile 239Pu and 241Pu, and the rest is 238Pu, 240Pu, and 242Pu. This is later diluted to the final plutonium concentration specified by the customer’s contract, usually about 8%. Because the concentration of plutonium in spent nuclear fuel pellets is about 1%, it takes 8 recycled fuel pellets to make one MOX fuel pellet containing 8% plutonium. The powder is poured into a press that compresses it into pellets. The pellets are heated to about 1,700°C, which eliminates cavities and water for more efficient fission and reduces them to about the size of a pencil eraser; they are then ground to tolerances of 10 micrometers. Very stringent quality control measures assure that all pellets meet specifications.
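The blending arithmetic above is easy to check. Here is a minimal Python sketch; the 30% primary blend and 8% target are the figures quoted above, while the dilution mass is an illustrative number chosen to hit that target, not a plant parameter.

```python
# Dilution arithmetic for MOX blending (illustrative sketch only).

def final_pu_fraction(primary_fraction: float, primary_mass_kg: float,
                      depleted_u_mass_kg: float) -> float:
    """Plutonium mass fraction after diluting a primary blend with depleted uranium."""
    pu_mass = primary_fraction * primary_mass_kg
    return pu_mass / (primary_mass_kg + depleted_u_mass_kg)

# Diluting 1 kg of 30% primary blend with 2.75 kg of depleted uranium
# gives the ~8% plutonium concentration typical of a customer contract:
print(f"{final_pu_fraction(0.30, 1.0, 2.75):.1%}")  # 8.0%
```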
The pellets are cleaned, inserted into 12-foot-long zirconium alloy (ZIRCALOY) tubes (about 300 per tube) with a spring added at the end to compress them, and helium is added before the tubes are welded shut to make the final fuel rods. The helium is added to improve heat transfer and reduce the operating temperature of the fuel; a void space is left to accommodate gaseous fission products that are produced when the fuel is burned. The rods are tested for integrity and straightness, and a leak test in a vacuum chamber checks for any helium escaping from them. If all is well, they are then built into a fuel assembly whose layout depends on the specific reactor requirements, but is typically a matrix of 17 by 17 rods. They are then ready to be sent to the country that ordered them, to be put into a nuclear reactor to generate electricity.
The Melox plant is very impressive and the quality control measures are extreme. Radiation is not nearly as much of a problem here as at La Hague because it is only plutonium and uranium being used instead of all of the highly radioactive fission products. Plutonium and uranium are α particle emitters, except for 241Pu, which is a β emitter. Little shielding is necessary for α particles, since they can travel only a few centimeters in air and can be stopped by a piece of paper. However, there is also some emission of γ rays and neutrons, so the MOX fuel pellets are more dangerous than the usual uranium oxide fuel pellets. Security is tight and it would be very difficult to divert material from here. Both the International Atomic Energy
Agency (IAEA) and Euratom constantly monitor the Melox plant as well as the La Hague reprocessing plant to make sure the plutonium inventory is accounted for.
At another great French lunch with Joe and Jean-Pierre, I asked whether the plutonium could be diverted by a terrorist group and made into a bomb. They said that the plant is very secure and terrorists would have a very difficult time getting in and getting any material. There is a potential problem in the fact that all of the plutonium is transported from La Hague to Melox by road, but the military escorts the trucks, and it would be very difficult to steal the material. Even more important, the mixture of isotopes in the plutonium would make it almost impossible to make a bomb. Certainly it could not be done by a terrorist organization. The reason is that the mixture of plutonium isotopes, especially the 240Pu, which undergoes spontaneous fission and emits neutrons, makes it extremely difficult to build a plutonium bomb that does not fizzle in a premature explosion (35).
Plutonium used in nuclear weapons is made in a specially designed reactor and the fuel is taken out very quickly, after about 100 days rather than 3 or 4 years in a power reactor, so that it is nearly all 239Pu, with less than 7% of the contaminating isotope 240Pu. This is called weapons-grade plutonium, as contrasted to reactor-grade plutonium (1). That is how the five officially acknowledged nuclear weapons countries (United States, Russian Federation, France, United Kingdom, and China) and rogue countries such as North Korea actually build nuclear bombs. They do not take plutonium from reprocessed fuel to make the bombs because it is only about 60% 239Pu, making it impossible to make a bomb that does not fizzle with a greatly reduced yield, perhaps equivalent to one or two kilotons of TNT. Furthermore, the high radioactivity of 240Pu and 238Pu makes the reactor-grade plutonium thermally hot and dangerous, making it very difficult to work with (36). Because 240Pu contamination causes pre-ignition, a plutonium bomb cannot be built in a gun design where one sub-critical piece is shot into another to reach criticality, but has to be made as an implosion device. This is not easily done and takes the resources of a nation to accomplish. Richard Rhodes tells the story of how the greatest scientists in the world developed the technology to do this, and it wasn’t easy (35)! So the biggest concern about reprocessing—and the reason that President Carter canceled the US reprocessing program—is based on a faulty notion that terrorists could readily divert plutonium from a reprocessing plant and make plutonium bombs.
It is apparently true that the US weapons labs were able to build and explode a plutonium bomb in 1962 using reprocessed reactor fuel from a British reactor, but the 239Pu concentration is not public information (1). However, it is interesting that the plutonium came from a British reactor. Only three reactors existed then, and they produced a lower level of the contaminating 240Pu than US power reactors. These reactors were of a type called Magnox and were designed for dual use, either for power or for producing weapons-grade plutonium. Thus, it is likely that the plutonium for the bomb was closer to weapons grade than normal reactor grade. Furthermore, reactors of this type are all retired now (37). Thus, it is an oft-repeated red herring to say that plutonium taken from reprocessed fuel of a power reactor could be used in a nuclear weapon by a terrorist group. It is just not feasible because of physics!
AREVA is helping to build a plant at the Savannah River Site in South Carolina to convert plutonium from nuclear weapons into MOX fuel. Work on the Savannah River plant began in 2007 and is expected to be finished in 2016. In contrast, Melox was built in five years (1990-1995). Joe said that US utilities are very conservative and reluctant to change their fuel to MOX, so the new plant will give utilities experience with burning MOX fuel. It makes a lot of sense to extract usable fuel from spent nuclear fuel and from nuclear weapons. This is truly turning swords into plowshares.
I asked about Princeton professor Frank von Hippel, who is opposed to reprocessing and who claims that burning MOX requires fast neutron reactors (7). Jean-Pierre said that it clearly is not true since they and other countries are using MOX in conventional reactors. Used MOX fuel can be reprocessed again, but it is not economical at present to do so and it does become degraded with a higher proportion of non-fissile (but fissionable) isotopes of plutonium, such as 240Pu, 242Pu, and 238Pu. However, it is true that the used fuel after MOX does need to be burned in a fast neutron reactor (see Chapter 11) such as the Phenix reactor that operated in France for 30 years and is now being decommissioned. Its larger successor, the SuperPhenix fast neutron reactor, was shut down by the government in a political deal with coalition partners, not because of problems with the reactor.
Michael McMahon told me, “In France the used MOX is never considered to be a ‘waste’ and there are no plans to dispose of used MOX in a geological repository. The used MOX is considered to be a strategic energy resource. Current plans in France are to have a next generation Fast Reactor prototype (called ASTRID) operational in 2020.” In other words, France is planning for the long-term use of the uranium as well as the plutonium in nuclear fuel to get the maximum output of energy while minimizing nuclear waste. Shouldn’t we do the same?
Now that we have journeyed through the land of nuclear waste disposal, what can we conclude? Is nuclear waste disposal truly the Achilles’ heel of nuclear power, or can it be managed so that nuclear power can grow and supplant much of the coal used to produce electricity? Are we really consigning future generations to a high risk of cancer?
First, let’s recognize that there is no immediately pressing problem with spent nuclear fuel. It is being managed quite well at nuclear power plants by storage for a few years in cooling pools to let much of the heat and radiation decay away. Moving the waste into dry cask storage on-site or in regional sites is the next step and it is widely agreed by industry experts, by the Nuclear Regulatory Commission, by scientists, and by the National Academy of Sciences that this can be done for the next century if needed. There are significant advantages of storing the spent nuclear fuel this way because it is safe from terrorists and it becomes easier to store long-term as the heat and radiation decay. This interim storage solution is strongly supported by a recent MIT report on the fuel cycle (10) and by the report of the Blue Ribbon Commission (31).
The next question is whether to recycle or to simply permanently store the waste after dry-cask storage. France and other countries have shown that recycling is indeed feasible and as a result, their waste problem is greatly diminished. The vitrified fission products can be safely stored in a geological repository such as Yucca Mountain or WIPP (if the law were changed) for a few thousand years until the radiation has decayed away to safe levels. There would be no danger whatsoever to any current or future population from doing this. Also, this extends the supply of nuclear fuel enormously because the unused 235U and the 239Pu that is created from 238U can be used in reactors again. In the long run, fast-neutron reactors can be built to burn up nearly all of the plutonium isotopes and other actinides produced in the reactor. In a world that will run out of fossil fuels eventually and will heat up to truly dangerous levels if we actually burn them all up, it will be increasingly necessary to look at spent nuclear fuel as a resource that can be reprocessed and provide greenhouse-gas-free energy.
The United States currently has no capability for reprocessing spent nuclear fuel, and there are many experts who think it is not a good strategy (7, 38). The two major concerns expressed by opponents of reprocessing are that it is too costly and that it could lead to proliferation of nuclear weapons. A 1996 National Academy of Sciences study estimated that reprocessing and building new reactors to use up the plutonium and other transuranics could cost between $50 billion and $100 billion and raise the cost of generating electricity by 2-7% (39). AREVA estimates the difference in cost between an open cycle and recycling at just 2% (4). At the present time, it is not necessary to do this because spent nuclear fuel can be safely stored in dry cask storage. For the future, though, as other fuels become more expensive, this may well be a relatively inexpensive option. Furthermore, countries such as France, England, and Japan have already built these systems, and it has not apparently been a huge burden to them. In fact, France has some of the lowest electricity prices in Europe. The United States is contracting with AREVA and US partners to build a plant in South Carolina to reprocess plutonium from US nuclear weapons to make MOX fuel that can be burned in current reactors. This is not the same as recycling spent nuclear fuel, but that technology could be developed in the future after the United States has more experience with using MOX fuel in reactors.
The other issue frequently cited by those opposed to recycling is that it would lead to the proliferation of nuclear weapons. But the nuclear genie is already out of the bottle! Recycling of spent nuclear fuel is already occurring in several countries, so it would not be a big change if the United States started recycling as well. Five countries are officially acknowledged as nuclear weapons states by the Nuclear Non-Proliferation Treaty (NPT), while Israel, India, and Pakistan have not signed the NPT but have nuclear weapons. North Korea originally was a signatory of the NPT but withdrew in 2003 and has tested nuclear weapons (40). The possibility that terrorists could steal reprocessed plutonium from nuclear power reactors and make weapons is very small because of the complex mixture of plutonium isotopes present. We already live in a nuclear world and have since 1945. That is not going to change.
My own view is that a sensible strategy for the United States is to plan on dry cask storage for the next 50 to 100 years while developing the capability to recycle spent nuclear fuel, build additional reactors to burn the MOX fuel, and provide electricity while reducing CO2 emissions. Yucca Mountain could then be redesigned as a permanent storage for vitrified waste of the fission products. As I said at the beginning of the chapter, it is primarily a political problem, not a scientific or engineering problem, to make the decisions to have a sensible policy in place. It is time to move forward with a sensible and long-term strategy for the future.
Location matters because solar exposure is best in the Southwest, where fewer people live, but not very good in the Northeast, where the population density is high. A good way to see the problem is to compare the map for solar exposure in Figure 4.1 to the map of lights in the United States at night (Figure 4.3), which is representative of population density. For solar to play a large role in meeting the nation’s electricity demand, large transmission lines would need to be built to transport it from the Southwest to more populated areas. Large losses occur in transmitting electricity over thousands of miles because of heat in the wires, so it is inefficient to transport electricity over long distances. The losses depend on wire diameter, voltage, and even the weather because of ionization of air along the
wire.
Figure 4.3 Lights at night in the continental United States give a good indication of the population density. Source: Photo courtesy of NASA.
For 765 kV high tension braided power lines, the losses can be 6-7% per 1,000 kilometers (600 miles) (24). Transporting electricity over a couple of thousand miles would cause losses of 20% or so, which would dramatically reduce the already inefficient solar power available to the end user. These losses can be minimized by using high voltage DC transmission lines, but they are more expensive and there are currently very few available in the United States.
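To see how those losses compound with distance, here is a small Python sketch. It simply treats the quoted 6-7% per 1,000 km (using a 6.5% midpoint) as a fixed rate; real losses also vary with load, conductor, and weather, as noted above.

```python
# Compounding transmission losses at ~6.5% per 1,000 km (midpoint of the quoted range).

def delivered_fraction(distance_km: float, loss_per_1000_km: float = 0.065) -> float:
    """Fraction of sent power that survives transmission over the given distance."""
    return (1.0 - loss_per_1000_km) ** (distance_km / 1000.0)

for miles in (600, 1000, 2000):
    km = miles * 1.609
    print(f"{miles:>5} miles: ~{1.0 - delivered_fraction(km):.0%} lost")
# 2,000 miles comes out to roughly 20% lost, matching the estimate in the text.
```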
There is considerable opposition from affected landowners to the huge towers and multiple lines necessary to transport high voltage electricity over long, or even relatively short, distances. A big controversy has arisen in Colorado over Xcel Energy’s plan to build a 146-mile, 235 kV transmission line with 150-foot tall towers from the best solar resources in Colorado—the San Luis Valley—to the populated Front Range. The transmission line is opposed by the owner of a 171,000-acre ranch that it would cross, based on environmental and viewshed concerns (25). Similar concerns arise wherever large transmission lines are proposed.
The earth formed about 4.65 billion years ago, and life began to evolve on earth about a billion years after the earth formed—the earliest forms of bacteria are found in rocks that are 3.5 billion years old (10). The earth was far more radioactive at that time, and there was much more radiation coming from space because there was less atmosphere to block it. Therefore, life evolved in an environment of high levels of radiation. In light of this fact, it should not be surprising that bacteria that developed repair mechanisms to cope with the damage to DNA caused by the radiation would be selected for over time by evolution. Eukaryotic cells, including fungi such as yeast and higher eukaryotes such as mammalian cells, evolved later with even more sophisticated processes to repair DNA. As a result, we are actually quite resistant to radiation because our cells have been coping with it for a long time!
Let us briefly consider the types of repair pathways that exist to repair DNA damage. There are three basic types of repair—mismatch repair, excision repair, and double strand break repair (11). Mismatch repair is a very important proofreading type of repair that makes sure that the DNA gets replicated correctly. The four bases—adenine (A), cytosine (C), guanine (G), and thymine (T)—have to pair properly when DNA is replicated: normally A can only pair with T and G can only pair with C. Occasionally, the wrong pairs form, so mismatch repair enzymes identify these, take out the wrong base, and replace it with the correct one. This can only happen shortly after DNA replication occurs. Mismatch repair is not related to radiation damage, but it is very important to avoid random mutations from normal DNA replication. Mutations in the genes for this type of repair often end up causing colon cancer.
There are two forms of excision repair—nucleotide excision and base excision. Nucleotide excision repair is primarily involved with repairing damage from UV radiation from the sun. UV radiation causes a chemical reaction that binds adjacent thymine bases together, forming a T-T bond called a thymine dimer. These thymine dimers are excised by enzymes that identify the bulge they make in DNA, then cut out about 24-32 nucleotides² from the surrounding DNA and fill in the gap with the right nucleotides. Mutations in these repair genes often end up causing skin cancer. Individuals who have a genetic syndrome known as xeroderma pigmentosum have mutations in nucleotide excision repair genes, and they are extremely sensitive to UV light.
Base excision repair is capable of repairing a wide variety of damage to bases from radiation. Cells have a large number of specific enzymes called glycosylases that identify specific types of damage to DNA bases and cut the base out while leaving the DNA strand intact. The deoxyribose sugar (the D in DNA) is then excised and a new nucleotide (the sugar and base) is inserted, completing the repair. This is a very important pathway for radiation damage, since thousands of bases are damaged by 1 Gy of radiation (11).
The most important type of repair for ionizing radiation is repair of double strand breaks (DSBs), because a double strand break is potentially catastrophic for a cell. There are two methods of repair of DSBs: non-homologous end joining (NHEJ) and homologous recombination (HR) (12, 13). NHEJ is the simplest of the methods to repair DSBs and can be thought of as simply sticking back together the “sticky ends” of broken strands of DNA. Of course, it takes a lot of cell machinery to accomplish this, but NHEJ has the capability of sticking any two broken ends of DNA back together. Thus it is very efficient, but not always accurate since it can’t tell whether the right pieces of DNA are getting stuck back together. Often there can be pieces of more than one chromosome that are broken and are in close proximity. NHEJ may fuse the wrong pieces of chromosomes back together, causing various kinds of chromosomal aberrations that can lead to cell death or sometimes cause cancer.
The other pathway for DSB repair, HR, is more complex but is very accurate. It depends on having another strand of the DNA to copy, which happens only after the DNA has replicated in what is called the synthesis or S-phase in the cell cycle. The broken DNA at the strand break is matched up with the replicated copy of the chromosome (called a chromatid); enzymes can then clean up the broken ends and copy the other chromatid exactly. This process completely repairs the DSB with no errors, but it is not always available. The majority of DSBs in human cells are repaired by the NHEJ pathway.
These various pathways for repairing DNA damage explain why one Gy of radiation can cause such extensive damage to the DNA yet is seldom lethal to the cell. Our cells are capable of being exposed to a surprisingly large amount of radiation with no permanent damage. This is perhaps one of the most important ideas in this book, particularly since many people think that radiation is extremely damaging and that exposure to even a small amount of radiation will inevitably lead to cancer. It is simply not true!
Most of the damage in chromosomes from α particles is so extensive (see Figure 7.5) that it cannot be repaired and usually ends up killing the cell. That is why the radiation weighting factor is so large for α particles (Table 7.2)—they are about 20 times more effective than electrons or γ rays in damaging DNA and killing cells. However, since α particles cannot penetrate through even a single layer of cells, they can only do very localized damage. The main hazard from α particles arises when they are inhaled into the lungs and remain trapped there. This is the reason that radon is an important factor in causing lung cancer (more about radon later). It is also why plutonium is hazardous if it is inhaled and stays in the lungs.
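The weighting factor works as a simple multiplier: the equivalent dose in sieverts is the radiation weighting factor times the absorbed dose in grays. A minimal sketch, using the factor of about 20 for α particles and 1 for electrons and γ rays as given above:

```python
# Equivalent dose (Sv) = radiation weighting factor x absorbed dose (Gy).
WEIGHTING_FACTOR = {"alpha": 20, "beta": 1, "gamma": 1}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    return WEIGHTING_FACTOR[radiation] * absorbed_dose_gy

# The same 10 mGy absorbed dose counts 20 times more heavily as alpha radiation:
print(equivalent_dose_sv(0.010, "gamma"))  # 0.01 Sv
print(equivalent_dose_sv(0.010, "alpha"))  # 0.2 Sv
```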
In spite of these accidents, nuclear power remains a safe source of electric power. The only accident that caused loss of life was at Chernobyl, and as I have explained here, it was an accident that could not have occurred in any other country because of the unique type of reactor. Fewer than 50 people died from the accident, even though it is expected that about 4,000 more will eventually die from cancer. This is out of a total of more than 14,500 reactor-years of experience in producing nuclear power around the world, according to the World Nuclear Association.
To put that in perspective, recall from Chapter 3 that coal mining in the United States alone routinely led to over 1,000 deaths annually in the 1930s, 140 in the 1970s, and 45 in the 1990s. That is completely apart from the 2,000 annual deaths from black lung disease among miners from 1970 to 1990 and hundreds a year currently. And that is just in the United States. Currently, more than 3,000 coal miners die yearly in China, and the toll from air pollution is far greater than that (67). And, of course, there is the problem of CO2 and global warming, which will ultimately affect all people worldwide. And yet, people seem to accept this terrible human toll without huge demonstrations and threatening to shut down coal plants every time there is an accident. Is there something unique about radiation and nuclear power that raises fears that seem out of proportion to the actual risks?
Psychologists have done extensive analysis of how we perceive risks. In his book How Risky Is It, Really?, David Ropeik analyzes our perception of risk and what factors affect how strongly we perceive a risk. The list of factors that affect our risk perception includes the following: Can we trust the government or industry involved? Is the risk greater than the benefit? Do we have control? Do we have a choice? Is it natural or man-made? Does it cause pain and suffering? Is it uncertain? Is it catastrophic or chronic? Can it happen to me? Is it new or familiar? Does it affect children? Is it personified in an individual? Is it fair?
Nuclear power hits a lot of these hot buttons in our risk perception, making it appear to be much riskier than the objective facts would indicate that it is. After the nuclear accidents discussed here, governments and the nuclear industry were not clear in communicating what was happening, causing a widespread lack of trust that the full story was being given. People do not have a sense of personal control over nuclear power and may not feel they have much choice. We are all exposed to natural background radiation, but people are far more frightened of man-made radiation from nuclear power, even if it is lower than background levels. People are very frightened of cancer compared to heart disease, for example, because of pain and suffering, and of course radiation can cause cancer. Radiation is not well understood by most people, so it seems unfamiliar and adds to uncertainty. An accident can potentially be catastrophic. And it can affect children. Given all of these risk perception factors, it is no wonder that many people are afraid of radiation.
But there are no risk-free sources of energy! Nuclear power actually has a remarkably good safety record and it provides a large, carbon-dioxide-free source of electricity for much of the industrialized world. It would be nice if wind and solar could take the place of coal, but they can’t. At best, they can keep up with the growth of energy usage. I am convinced that nuclear power needs to grow in the future to reduce the dependence on coal for electricity. The new nuclear power plants being built are intrinsically much safer than the ones that were described here (see Chapter 5).
It is my hope that the information in this chapter has provided a better understanding of the consequences of nuclear accidents so you will be able to evaluate the risks and benefits in a more objective fashion, with less of an emotional impact from the unknown and scary aspects of radiation.
1. A nuclear accident at the small 3 MW US Army experimental reactor known as SL-1 (Stationary Low Power Plant No. 1) occurred on January 3, 1961, in Idaho. It was caused by an operator intentionally removing the single control rod, making the reactor go supercritical and causing a steam explosion that killed the three operators. It was suspected, but never proven, that it was a case of nuclear suicide by the operator (1).
2. 1 petabecquerel (PBq) is equal to 10¹⁵ disintegrations per second, or 0.027 megacuries (MCi). See Appendix B for more details on radiation units.
3. A Roentgen is a measure of exposure to X-rays or γ rays. It is approximately equivalent to a rem.
4. Chornobyl is a town just outside the 10 km zone and is the headquarters for the recovery effort.
5. Iodine-131-equivalent radioactivity is a comparative measure that converts other radionuclides into their equivalent radioactivity of 131I. 137Cs is multiplied by a factor of 40 and 134Cs is multiplied by a factor of 4 because of their different half-lives to give their 131I equivalent.
You may say that nuclear power is not really CO2-free and you would be right. But that is also true of wind and solar power. A complete life cycle analysis of energy sources would include CO2 produced when mining for uranium, enriching the
uranium, and making fuel pellets. Similar things are also true of solar and wind. Solar requires a lot of energy and resources to make the solar panels. Wind farms use enormous amounts of concrete for the bases (which requires mining), large amounts of steel for the towers, petroleum products for plastics, and a lot of fuel for constructing the huge wind farms. Numbers vary considerably for these activities, but the range of CO2-equivalent greenhouse gases produced for each kWh of energy is 9-21 grams for nuclear, 10-48 for wind, and 100-280 for solar. For comparison, coal produces 960-1,300 and natural gas produces 350-850 g/kWh (6). A conservative estimate is that nuclear power produces about 2% of the CO2 emissions of coal. While the exact numbers are disputed, it is generally agreed that nuclear and wind are similar in CO2 emissions and solar is substantially higher, but they are all much better than coal or natural gas. Because of new technologies for enriching uranium—the step that uses most of the energy and produces most of the CO2—the emissions of CO2 from the nuclear fuel cycle will go down substantially in the future, and nuclear power will be even more important in mitigating global warming (see Chapter 11).
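Putting the quoted ranges side by side makes the comparison concrete. This sketch uses only the figures cited above, which, as noted, vary considerably across studies.

```python
# Life-cycle greenhouse gas intensities quoted in the text (g CO2-equivalent per kWh).
EMISSIONS_G_PER_KWH = {
    "nuclear":     (9, 21),
    "wind":        (10, 48),
    "solar":       (100, 280),
    "natural gas": (350, 850),
    "coal":        (960, 1300),
}

coal_midpoint = sum(EMISSIONS_G_PER_KWH["coal"]) / 2
for source, (lo, hi) in EMISSIONS_G_PER_KWH.items():
    midpoint = (lo + hi) / 2
    print(f"{source:>11}: {lo}-{hi} g/kWh (midpoint ~{100 * midpoint / coal_midpoint:.0f}% of coal)")
# Nuclear's midpoint (~15 g/kWh) is on the order of 1% of coal's, consistent
# with the conservative 2% estimate above.
```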
We are constantly bombarded with cosmic radiation from outer space as well as primordial radiation from natural radioisotopes in the earth. We get external doses from uranium, thorium, and radium in rocks and internal doses from radionuclides in foods. We also get internal doses from breathing in radon that comes from radium in the rocks. With the exception of doses from the food we eat, which depend somewhat on our personal dietary choices, our background levels of radiation depend mainly on where we live, how much time we spend outdoors, and on characteristics of the homes we live in.
Some day the earth will weep, she will beg for her life, she will cry with tears of blood. You will make a choice, if you will help her or let her die, and when she dies, you too will die.
John Hollow Horn, Oglala Lakota, 1932
Time is running short! When the Intergovernmental Panel on Climate Change (IPCC) published its first scientific report in 1990 on the possibility of human-caused global warming, the atmospheric concentration of carbon dioxide (CO2) was 354 ppm. When I began writing this book about four years ago, the concentration of CO2 was 387 ppm. It is now 397 ppm and rising. In spite of Kyoto, in spite of Copenhagen and Cancun, atmospheric CO2 continues its inexorable upward path. And the earth continues to warm.
The United States and the world are not yet serious about changing policies to stop this spiral. Too many politicians and others have their heads buried in the sand and refuse to acknowledge the continuing deluge of data showing that the world is indeed warming. 2010 was the warmest year—and the decade from 2000 to 2010 was the warmest decade—for at least the last 100,000 years. A serious debate is ongoing among geologists to decide if the earth has formally passed out of the Holocene epoch of the last 12,000 years into the Anthropocene epoch, in which 7 billion humans are the primary factor driving climate (1). Sea levels continue to rise, the oceans are acidifying, glaciers and ice sheets continue to melt, the Arctic will likely be ice-free during the summer sometime this century, and weather extremes have become commonplace around the earth. Plant and animal species are migrating to higher latitudes at 17 kilometers per decade on average, and alpine species are moving to higher altitudes at 11 meters every decade (2). Changes like this have occurred in the past, but over time spans of thousands to tens of thousands of years, giving species time to adapt.
There are those who argue that species have always had to adapt to a changing climate or die and therefore they will handle the current changes. While there is
some truth to that, it ignores the fact that many species are already under great pressure from the impact of humans on habitat. We have taken over the entire earth and are changing it to meet our desires, regardless of the impacts on the species that share the earth and on which we depend. These combined anthropogenic impacts are leading to extinction rates that are 2 to 5 orders of magnitude above historical rates. As a result, we are in the midst of the sixth mass extinction of biodiversity, though this is the only one that has human fingerprints on it (2). We are playing a dangerous game with the earth and are ignoring the potential consequences. It is time to get serious about recognizing what we are doing to the earth and drastically reduce our production of CO2.
About three-fourths of the global CO2 emissions come from fossil fuels and about one-fourth from deforestation. The United States cannot solve the world’s CO2 problem by itself, but we play an outsized role in the production of CO2—generating nearly one-fifth of the world’s CO2 by burning fossil fuels—and must take a leading role in reducing our emissions. According to the Energy Information Administration (EIA), 82% of our energy comes from fossil fuels, including coal, natural gas, and petroleum (see Chapter 2). Petroleum is primarily used for transportation, so reducing its usage requires building vehicles with much better gas mileage. The United States emits 2.3 Gt of CO2 from petroleum usage (3). The CAFE (Corporate Average Fuel Economy) standards for cars and light trucks in the United States are scheduled to reach 54.5 mpg by 2025, according to an agreement between President Obama and 13 car manufacturers reached in 2011, more than double the current 27 mpg average (4). This is a big step in the right direction, and even more can be done. Plug-in electric vehicles can reduce the use of petroleum even further, but at the cost of finding carbon-free sources of electricity. While the use of petroleum is clearly a major problem for generating CO2, that is not the focus of this book, except to the extent that electricity may play a role in reducing the use of petroleum.
Currently, 40% of all energy used in the United States is devoted to producing electricity. The energy used to generate electricity will almost certainly increase in the future because of population growth, the increasing use of electricity to power cars, and the rapid increase in what Daniel Yergin calls “gadgiwatts”—the multitude of electronic gadgets that have become essential to our modern world, including computers, cell phones, iPods, iPads, iPhones, huge flat-screen TVs, microwaves, and the list goes on and on (5). In spite of energy conservation, the use of electricity is expected to increase by over 30% by 2040, while total energy consumption is expected to increase by about 10% (6). This can be reduced by strenuous efforts in efficiency, but overall, electricity use will almost certainly grow in the future. The big question is: What can be done to provide the electricity in a manner that drastically reduces the production of CO2?
A giant but unwelcome experiment was done in the worldwide Great Recession of 2008-2009. Electricity demand actually decreased in the United States from pre-recession 2007 and total CO2 emissions decreased by 3% in 2008 and 7% in 2009 (7). But I don’t think anyone wants to see CO2 emissions reduced by such catastrophic economic crises with associated high levels of unemployment. There is some good news here, though. The EIA estimates that energy-related US CO2 emissions will not reach the 2005 level of 6 Gt until 2040 (6). But that is still too much! The goal of the Kyoto Protocol was for the United States to reduce CO2 emissions to 7% below the level of 1990. Is it possible to reduce CO2 emissions from electricity generation to near zero and still have a robust economy? It has often been argued that only by drastically cutting back on our standard of living can we reduce our detrimental effects on the environment. That is a false alternative, though. It is possible to provide our electrical demand through environmentally friendly power sources that generate little or no CO2.
As I have argued in this book, coal is the big problem for electricity production, and the CCS (carbon capture and storage) technology is not going to solve the problem. Coal needs to be essentially eliminated as a power source because of its multitude of health and environmental consequences. Coal provides 41% of our electricity now, and the actual amount of coal used is expected to increase through 2035, even though the percentage of electricity generated by coal would be slightly decreased by then. That is untenable if we want to reduce CO2 emissions. Of the 30 Gt of CO2 emitted by all countries from energy sources, over 13 Gt comes from coal, and of that, the United States generates about 2 Gt. China generates more than three times as much as the United States, however, while India generates about half as much as the United States (3). Clearly, the world is hooked on coal and needs to be weaned off it if CO2 is going to be reduced sufficiently to mitigate climate change.
The only way that a serious reduction in coal usage is likely to happen is through the implementation of a carbon fee of some sort that makes the actual cost of carbon more realistic than the current market price. Coal plays such a major role in electricity production in the United States and in the world for two connected reasons: it is plentiful and it is cheap. But it is a Faustian bargain that we have ignored for a long time. It is time to begin paying the actual cost, including the costs to the environment from mining and global warming, as well as the direct health costs from air and water pollution. If a carbon fee were implemented that took these costs into account, coal would no longer be a bargain, and other carbon-free sources would become economically attractive, including both nuclear energy and renewable energy.
James Hansen (8) argues strongly for a “fee and dividend” method that would enact a fee on the producers of coal, oil, and gas per ton of CO2 that would be released by burning the fossil fuel. This fee would, of course, raise the price of gasoline, electricity, home heating, and many other things in society. To help reduce the impact on individuals, the fees collected would be returned to citizens as a uniform dividend. How would that help reduce the CO2 problem? It would give a strong economic incentive to reduce your personal use of carbon. If you drive a gas guzzler and live in a mansion, you will use a lot of energy and pay a high price, but the dividend will be small in comparison. If you drive a highly efficient car and live in an energy-efficient house and conserve in other ways, you will come out ahead. Al Gore argues similarly for a carbon tax with a tax rebate to citizens (9).
An alternative approach is the “cap and trade” system that was very successful in reducing acid rain from sulfur oxides emitted by coal-fired power plants in the 1980s. A cap and trade system would use the efficiencies of the marketplace to achieve a desirable result, in this case reducing CO2. The government sets a cap on allowable CO2 emissions from industries such as power plants and issues allowances for each ton of CO2 by which a particular power plant comes in under the cap. That power plant can bank the allowances or trade or sell them to power plants that exceed their cap. This provides a financial incentive for a power plant or a utility to reduce its emissions of CO2 but doesn’t specify how it might achieve that. It might, for example, use more wind power or nuclear power to earn a lot of allowances. The higher-emitting power plants then find themselves at a cost disadvantage in the marketplace because they would have to buy expensive allowances, and the operators would have an incentive to close down the plants and build more efficient plants (9). President Obama pushed for a cap and trade approach, but Congress has adamantly refused to pass a cap and trade bill. Europe passed a cap and trade bill in 2003 with a cap that would reduce emissions by 20% from 1990 levels by 2020 (5). However, their system has been fraught with problems, largely from giving out too many credits, so there is an oversupply. As a result, the price of the carbon credits plummeted to €2.75 in early 2013. At that price, it does not reduce the demand for carbon, making coal still attractive (10). California began to implement a cap and trade system in late 2012 that issues allowances for carbon and then establishes a cap and a market for them. The goal is to reduce carbon emissions to 1990 levels by 2020 (11). If it works, it will be a good test case for the United States to implement a cap and trade system.
In March 2012, the US Environmental Protection Agency (EPA) proposed new rules for carbon emissions from new power plants that would essentially prevent any new coal-fired power plants from being built unless they had carbon capture and storage (CCS) technology, which is not available on a commercial scale and has numerous problems (see Chapter 3) (12). And other EPA rules on nitrogen and sulfur oxides and mercury emissions mean that old coal plants will have to finally upgrade or be shut down. About 14% of coal plants, accounting for 4% of total electrical capacity, will have to be retired in the next five to eight years (13). This attrition of existing coal plants is moving in the right direction—especially because these plants are the most inefficient and most polluting and generate the most CO2 per kWh produced—and the rate of attrition should be accelerated so that they are all closed down over the next 20 to 30 years.
But what will take their place? That is the huge question on which so much depends. There is no single answer, but President Obama is correct that we need to have multiple strategies. The first answer that nearly everyone can agree on is that a greater emphasis on efficiency can reduce demand so that perhaps not all of the coal-fired power plants need to be replaced. That is certainly the cheapest thing to do, and it can happen relatively quickly. Every kWh of electricity saved through greater efficiency—replacing incandescent bulbs with CFL or LED bulbs, replacing old appliances with Energy Star compliant appliances, upgrading insulation and weatherstripping to save on air conditioning and heating—is a kWh that doesn’t need to be generated by an electric power plant. The advent of smart metering to give people more control over how and when they use electricity may reduce the electricity people use, though this is still in the beginning stages and the results are not yet in. The energy guru Amory Lovins believes that efficiency and alternative energy can completely solve the problem (14). However, most energy experts recognize that increased efficiency is not nirvana and that we will still need to plan for additional electric power.
The second answer is that renewable energy can help. But, as I discussed in Chapter 4, energy from the sun and wind has major difficulties associated with intermittency, location relative to population centers, footprint, and cost that limit its contribution to about 20% or less of electricity production. And even worse, solar and wind do not effectively contribute to the baseload electricity that coal provides. Baseload is the minimum electrical demand over a 24-hour day that must be provided by a constant source of electricity. Solar and wind power contribute principally to the intermediate demand that fluctuates during the day, but they still require backup—usually with natural gas power plants—for when they are not available (15). Numerous states have adopted RPS (renewable portfolio standards) that require renewable energy to provide up to 30% of electricity, but it is very unlikely that this is actually achievable, and I doubt that many states will even achieve 20% as environmental issues associated with wind and solar power become more prominent. Nevertheless, an increase from the current 4% to 20% would be an enormous help. But it does not solve the coal problem. A good example to demonstrate this is Germany, which has made a great effort and investment to increase both solar and wind power, yet the amount of CO2 generated from its coal usage did not change at all from 1995 to 2007. It did go down about 15% by 2009, but that is because of the severe world recession that cut energy use throughout the Western world. Carbon dioxide emissions decreased by about the same percentage in the United States (3). And Germany is going to make things worse because it is planning to shut down its nuclear reactors as a response to Fukushima and will depend on poor-quality coal even more in the future—a very bad choice indeed.
Natural gas has become the new darling of the energy world with the advent of fracking for shale gas, which has dramatically increased the world supply. As a result, there is a glut of natural gas in the United States now and prices have plummeted to below $3 per thousand cubic feet from $13 in the summer of 2008 (16). This very low price—and the relatively smaller capital cost of a gas-fired combined cycle plant—make the economics of replacing coal plants with gas seem to be the natural choice. Certainly natural gas is better than coal when it comes to emissions, though as discussed in Chapter 3, the commonly stated 50% reduction in CO2 for an equivalent amount of power is not really true because of the loss of methane from mining and from leakage in the pipes. In fact, natural gas may have an advantage of only 25% or less. Nevertheless, a big part of the reason that energy-related CO2 emissions are expected to stay below 2005 levels is because of the increasing use of natural gas to replace some coal-fired power plants.
The environmental issues associated with fracking are still a major concern and—unless they can be resolved satisfactorily—a major switch to natural gas would be a mistake. Furthermore, natural gas prices have historically been very volatile because of variable supply and usage. In contrast to coal, which is used primarily for baseload electricity production in the United States, natural gas is used about equally for electricity, residential and commercial heating, and industrial processes. If there is a huge new demand for natural gas to provide baseload electricity in addition to intermediate and peak load power, it could impact these other areas and cause prices to rise substantially. Certainly natural gas is part of the equation for reducing CO2, but it would be a mistake to try to make up for a large fraction of coal plant retirements with natural gas plants. It would not solve the CO2 problem, and all of the energy eggs should not be put into one basket.
So now we come to nuclear power, the alternative to coal for stable baseload power that can truly cut the emissions of CO2 to nearly zero. Can there be a “nuclear renaissance” that would give us reliable, relatively cheap electricity for the next 100 years and beyond without the environmental burdens of fossil fuels? Can we go back to the future? I argued in Chapter 5 that about 175 Generation III reactors could replace all of the coal-fired power plants in the United States. This would take a major national effort but it would also require a major national effort to get 20% of electrical energy from wind and solar. Neither of these goals will be achieved unless there is a cost associated with CO2 production through “fee and dividend” or “cap and trade.” And that will only happen if there is a strong public demand that we get serious about reducing CO2 emissions and halting global warming.
Our journey through the world of the atom and nuclear power has exposed many of the myths that are used by anti-nuclear activists to argue against nuclear power (17-19). Let’s explore these myths a bit more specifically.
Besides the gases emitted from coal-fired power plants, a huge amount of fly ash containing HAPs is produced from the burned coal. At Rawhide, 70,000 tons of fly ash are produced annually, which has to be captured in bag filters and disposed of. Some of it is used to make cinder blocks and sheetrock, but the vast majority is simply buried as a dry waste that is then covered with two feet of soil and planted in native grasses. This is relatively benign, but that is not how many coal power plants handle ash. Many power plants store the ash as wet sludge in large holding ponds, which can leak or spill. About 100 million tons of ash and sludge are produced annually in the United States from coal-fired power plants (10). In 2008 coal ash sludge broke through the dike of a 40-acre holding pond in Tennessee, covering 400 acres up to six feet deep with toxic sludge, contaminating a river and damaging a dozen houses (11). And this was a small ash holding pond. The Plant Scherer coal-fired plant in Monroe County, Georgia—the largest coal-fired plant in the United States—has an ash holding pond that is about 19 times as large as the one that failed in Tennessee, with over 1,000 tons of coal ash deposited daily (12). In 1972 an impoundment dam failed in Logan County, West Virginia, and spilled 130 million gallons of toxic sludge into Buffalo Creek, killing 125 people, injuring 1,000, and leaving 4,000 people homeless (13).
Bag filters and electrostatic filters are not able to remove the smallest particles resulting from the fly ash, however. Particulates are classified in terms of their diameter in microns (millionths of a meter). PM10 particles have a diameter of 10 microns or less (roughly the size of a cell in your body) and PM2.5 particles have a diameter of 2.5 microns or less, a small fraction of the width of a human hair. These smaller PM2.5 particles are not easily removed from the coal-fired plant exhaust and are the most hazardous. They can accumulate in the deepest recesses of the lung and lead to respiratory diseases such as emphysema and lung cancer. A study done for the Clean Air Task Force estimated that there were over 30,000 premature deaths in the United States from power plant emissions in 2000 (14). A more recent Clean Air Task Force study estimated that there would be 13,200 deaths in 2010 (15). While studies vary considerably in estimating the annual number of deaths from these particles, it seems clear that the numbers are in the thousands or tens of thousands. A study by the National Academy of Sciences put the annual cost of damages due to air pollution from coal at $62 billion in 2005 (16).
We are now in a position to be a little more quantitative about radioactive decay of nuclei and see what decay processes are allowed. All of the action in radioactive decay is in the nucleus, not in the electrons circling the nucleus. The nucleus consists of equal numbers of protons and neutrons for the elements from helium to oxygen, except for beryllium, which has an extra neutron. In order to talk about radioactive nuclei a bit more simply, some terminology is in order. Physicists talk about the atomic number of an element, which is the number of protons (and also the number of electrons) in the atom and defines the element. It is given the symbol Z, which is used by physicists to indicate charge. Every element has a unique Z value that completely defines the chemical properties of the element because it defines the number of electrons, and electrons determine chemistry. The other important number to define a nucleus is the atomic mass, given by the symbol A. The atomic mass is just the number of protons and neutrons, so the number of neutrons is the atomic mass minus the atomic number, or n = A − Z.
For low Z elements, those with atomic number below about 15, the atomic mass is generally twice the atomic number, meaning that there are equal numbers of neutrons and protons. But as the atomic number gets larger, the number of neutrons increases rapidly. This is necessary because the repulsive force from the positive charge of the protons becomes too large as the nucleus gets bigger, while the strong force that holds the nucleus together operates only over a very short distance, acting just on a nucleon's immediate neighbors.⁴ Adding additional neutrons increases the strong force holding the nucleus together without adding any repulsive charge.
A given element may exist in several versions, known as isotopes, with different numbers of neutrons but the same number of protons. Chemically the isotopes are identical to each other so they cannot be separated by chemical means. Generally there are only one or a few isotopes of an element that are stable, and the other isotopes are unstable, or radioactive. Unstable isotopes undergo radioactive decay by emitting α, β, or γ radiation, and they are often called radionuclides or radioisotopes.
Physicists have developed a shorthand notation to illustrate the isotopes and their radiation. The element is indicated by X, the atomic number (number of protons) is Z, and the atomic mass (number of neutrons and protons) is A. The number of neutrons is just A − Z. A generic element and some examples of specific elements and isotopes are:
$$^{A}_{Z}\mathrm{X} \qquad {}^{4}_{2}\mathrm{He} \qquad {}^{12}_{6}\mathrm{C} \qquad {}^{235}_{92}\mathrm{U} \qquad {}^{238}_{92}\mathrm{U} \qquad {}^{239}_{94}\mathrm{Pu}$$
Helium (He, also an α particle) and carbon (C) have equal numbers of protons and neutrons (2 each for He and 6 each for C), while the two uranium isotopes have a much larger number of neutrons than protons. Since the atomic number and the element name are redundant, it is convenient to shorten the notation in writing to just the element symbol and the atomic mass A, for example 238U, and it can also be written U-238. Uranium always has an atomic number of 92, so it does not have to be specified. Plutonium has an atomic number of 94, and the most common isotope has an atomic mass of 239.
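The bookkeeping behind this notation is simple enough to capture in a few lines of code. Here is a minimal sketch; the Isotope class and its names are mine, purely for illustration.

```python
# Minimal model of isotope notation: an isotope is fixed by its element symbol,
# atomic number Z (protons), and atomic mass A (protons plus neutrons).
from dataclasses import dataclass

@dataclass(frozen=True)
class Isotope:
    symbol: str
    Z: int  # atomic number: protons (and electrons in a neutral atom)
    A: int  # atomic mass: protons + neutrons

    @property
    def neutrons(self) -> int:
        return self.A - self.Z  # neutrons = A - Z, as in the text

    def __str__(self) -> str:
        return f"{self.A}{self.symbol}"  # the shorthand used here, e.g. "238U"

for iso in (Isotope("He", 2, 4), Isotope("C", 6, 12),
            Isotope("U", 92, 235), Isotope("U", 92, 238), Isotope("Pu", 94, 239)):
    print(f"{iso}: {iso.Z} protons, {iso.neutrons} neutrons")
```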
As the Curies, Rutherford, and others learned, new elements are created when different isotopes undergo radioactive decay and emit α or β particles. These decay processes are governed by conservation laws. In any radioactive decay, the charge Z is conserved, the total number of nucleons (protons plus neutrons, or A) is conserved, and energy is conserved. We have to take into account Einstein’s famous law (E = mc²) that energy and mass are equivalent and mass can be converted into energy, so it is really mass-energy that is conserved. In fact, mass is converted into pure energy, which is what gives radiation its energy.
Radioactive decay is a random, statistical process. It is impossible to say when a specific nucleus will decay, but it is possible to measure how long it takes for half of the radioactive nuclei in a sample to decay. This is called the half-life, and it is characteristic of a particular radionuclide. Half-lives can vary from seconds to billions of years.
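Half-life arithmetic is easy to sketch in a few lines of Python; in this illustrative snippet the only added fact is radium-226's half-life of about 1,600 years:

```python
# Fraction of radioactive nuclei remaining after time t,
# for a radionuclide with half-life t_half: N/N0 = (1/2)**(t / t_half).
def fraction_remaining(t, t_half):
    return 0.5 ** (t / t_half)

# Example: radium-226, with a half-life of about 1,600 years.
for years in (1600, 3200, 8000):
    print(f"after {years} years: {fraction_remaining(years, 1600):.3f} remains")
```

After one half-life half of the sample remains, after two a quarter, and after five half-lives (8,000 years) only about 3 percent.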
Let’s look at some examples of radioactive decay.5 Radium, the radionuclide isolated by Marie and Pierre Curie, has 88 protons and 138 neutrons, for an atomic mass of 226, and it emits an α particle. Since an α particle is a helium nucleus, it has 2 protons and 2 neutrons. The total charge has to be conserved, so the decay product has to have 2 fewer protons than radium and an atomic mass that is 4 less than radium's. Its decay scheme is:
$^{226}_{88}\mathrm{Ra} \rightarrow {}^{222}_{86}\mathrm{Rn} + {}^{4}_{2}\alpha + \text{energy}$
so radium decays into radon and emits an α particle, with a lot of energy given off in the process. The energy comes from the fact that the mass of the radium nucleus is greater than the sum of the masses of the radon nucleus and the α particle. It is generally the very heavy atoms, such as radium, polonium, uranium, and plutonium, that undergo α decay. The total atomic mass and atomic number are equal on both sides of the arrow, so the conservation laws are upheld. In any α decay, the atomic number Z of the resulting nucleus is smaller by 2 and the atomic mass A is smaller by 4.
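A short Python check makes both the conservation laws and the energy release explicit; the atomic masses are standard tabulated values, and 1 u ≈ 931.494 MeV is the usual mass-energy conversion from E = mc²:

```python
# Check conservation of charge (Z) and nucleons (A) for
#   Ra-226 -> Rn-222 + alpha.
parent = (88, 226)            # radium-226
products = [(86, 222),        # radon-222
            (2, 4)]           # alpha particle (helium nucleus)

print("charge conserved:", parent[0] == sum(z for z, a in products))
print("nucleons conserved:", parent[1] == sum(a for z, a in products))

# The energy released comes from the mass difference (E = mc^2).
# Tabulated atomic masses in unified mass units (u).
m_ra, m_rn, m_he = 226.025410, 222.017578, 4.002602
q_mev = (m_ra - m_rn - m_he) * 931.494    # MeV per u
print(f"energy released: about {q_mev:.2f} MeV")
```

The mass difference works out to roughly 4.87 MeV, most of which is carried off by the light, fast-moving α particle.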
Now let’s consider a radionuclide that undergoes β decay. Recall that a β particle is equivalent to an electron, which has such a small mass compared to the nucleons that it is not counted in the decay equation. Becquerel first discovered radioactivity in his fortuitous experiment with a uranium salt, and it turns out that he was actually measuring β decay, though he didn't know it at the time. Uranium consists primarily of the isotope $^{238}_{92}\mathrm{U}$, which α-decays into $^{234}_{90}\mathrm{Th}$ (thorium). But thorium-234 is itself radioactive, and it undergoes β decay, giving off an electron called a β particle. A problem immediately comes to mind: the electron has a negative charge, but according to the conservation of charge rule, net charge cannot be created. What happens is that a neutron turns into a proton and an electron, thus conserving charge, and the electron comes flying out of the nucleus as a β particle.
It turns out not to be so simple, if that is indeed simple! Physicists had realized that the β particles had a broad distribution of energies, whereas the α particles emitted by a nucleus had a precise energy. Why does it matter? The energy of the particles coming out of radioactive decay depends on the change in mass of the nuclei and particles involved, according to E = mc². The principle of conservation of energy seemed to be violated in the case of β decay, since the βs did not have a specific energy but rather a range of energies. So Wolfgang Pauli postulated that another unknown particle must also be emitted, carrying off the remaining energy; Enrico Fermi, a brilliant experimental and theoretical physicist from Italy, named it the neutrino ("little neutral one" in Italian) and built it into his theory of β decay. Together the β and the neutrino carry exactly the energy required. Neutrinos hardly interact with anything and they have an extremely small mass, so their existence was purely theoretical for a long time. The β decay of thorium is complex, so let's look at a simple example of β decay: that of a natural but fairly rare form of potassium, potassium-40 (or K-40; K is the symbol for potassium, from the neo-Latin word kalium). This reaction, by the way, is one of the natural ways in which you get exposed to radiation by eating foods rich in potassium, such as bananas (see Chapter 8).
$^{40}_{19}\mathrm{K} \rightarrow {}^{40}_{20}\mathrm{Ca} + {}^{0}_{-1}\beta + \bar{\nu}$ (antineutrino)
The charges on both sides of the equation add up properly (19 = 20 − 1), the atomic mass is conserved because the β particle is not a nucleon and has very little mass, and energy is conserved because of the antineutrino. Fermi originally called the particle a neutrino, but when a proper theory was developed, it turned out that the particle emitted in this decay is an antineutrino, indicated by the bar above the symbol. The generic equation for β⁻ decay is:
$\mathrm{n} \rightarrow \mathrm{p}^{+} + \beta^{-} + \bar{\nu}$
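Since the same two checks recur in every decay, they can be wrapped in a small helper function; check_decay is a hypothetical name for this sketch, which treats the β particle as carrying charge −1 and mass number 0, and the antineutrino as carrying neither:

```python
# Verify that charge (Z) and nucleon number (A) balance in a decay.
def check_decay(parent, products):
    z_ok = parent[0] == sum(z for z, _ in products)
    a_ok = parent[1] == sum(a for _, a in products)
    return z_ok and a_ok

# Beta-minus decay of potassium-40:
#   K-40 -> Ca-40 + beta(-1, 0) + antineutrino(0, 0)
print(check_decay((19, 40), [(20, 40), (-1, 0), (0, 0)]))   # True
```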
Things get even weirder than this. In 1928 Paul Dirac predicted that the electron should have an antiparticle, called an antielectron, which is identical to an ordinary electron except that it has a positive charge. It was later shown that all elementary particles should have antiparticles with opposite properties. Any particle meeting its antiparticle annihilates, destroying both particles and releasing pure energy (13). There is another type of β decay, known as β⁺ decay, that involves not an electron but an antielectron, which has a positive charge and is called a positron; its discovery confirmed Dirac's prediction. In this case, a proton turns into a neutron, a positron (β⁺), and a neutrino:
$\mathrm{p}^{+} \rightarrow \mathrm{n} + \beta^{+} + \nu$ (neutrino)
A specific example of β⁺ decay is an isotope of oxygen that turns into nitrogen:
$^{15}_{8}\mathrm{O} \rightarrow {}^{15}_{7}\mathrm{N} + {}^{0}_{+1}\beta + \nu$
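The same hypothetical check_decay helper from the earlier sketch confirms the bookkeeping here, with the positron carrying charge +1 and mass number 0:

```python
# Beta-plus decay of oxygen-15:
#   O-15 -> N-15 + beta(+1, 0) + neutrino(0, 0)
print(check_decay((8, 15), [(7, 15), (1, 0), (0, 0)]))   # True
```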
Things seem to be getting very strange here. We have positive and negative electrons, odd new particles called neutrinos and antineutrinos that are nearly impossible to observe, neutrons and protons that turn into each other, and quarks with fractional charges that make up neutrons and protons! Can we make any sense out of this? Perhaps we cannot, but the theories explaining these odd events are quite well developed. Fermi explained β decay by postulating a new force, called the weak force, that acts over extremely small distances in the nucleus. It is weak in comparison to the strong force that holds neutrons and protons together. Fermi's theory predicted the creation of neutrinos and antineutrinos and showed that neutrons and protons could decay according to the equations given above by creating electrons and neutrinos or their antiparticles. But it was not a complete theory; it was somewhat like Bohr's theory of the atom before quantum mechanics was developed. In 1958, Richard Feynman and Murray Gell-Mann revised Fermi's theory substantially and predicted new particles called W bosons that are produced in the process of neutron or proton decay and then decay into electrons and neutrinos. These particles were later discovered in high-energy accelerators. In 1967, Steven Weinberg and Abdus Salam independently proposed a complete theory that united the electromagnetic force, which explains charged-particle interactions, and the weak force, which explains β decay. This theory is known as the electroweak theory and, as you might guess, it predicted other particles that have subsequently been found (9).
Now you know where α and β radiation come from and how they are produced in radioactive decay processes. But where does γ radiation come from? One way to think about the nucleus is that it, too, can have quantized energy levels like those of the electrons in the Bohr atom. A nucleus is normally in its lowest energy level, or ground state. When it undergoes radioactive decay by emitting an α or β particle, the nucleus is often left in an excited state at a higher energy level. The nucleus then adjusts to its ground state by emitting a γ ray that carries off the energy, just like the photon that is given off when an electron jumps from one energy level to a lower one. Many, but not all, radioactive nuclei emit not only an α or β particle but also a γ ray when they decay. A γ ray is also a photon, but by definition it comes only from the nucleus. X-rays come from electrons jumping between energy levels in an atom or from electrons moving at high speed and then being abruptly deflected or stopped when they hit atoms. That is how Röntgen produced X-rays in his evacuated tubes. There is no fundamental difference between γ rays and X-rays, only in the way they are produced.
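Because a γ ray is just a photon, its frequency and wavelength follow from E = hf, as in this minimal sketch (the 1 MeV energy is an illustrative, typical value for nuclear de-excitation):

```python
# Convert a photon's energy to its frequency and wavelength via E = h*f.
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon(energy_mev):
    e = energy_mev * 1e6 * EV     # energy in joules
    return e / H, H * C / e      # frequency (Hz), wavelength (m)

f, lam = photon(1.0)              # a typical 1 MeV gamma ray
print(f"frequency ~ {f:.2e} Hz, wavelength ~ {lam:.2e} m")
```

The resulting wavelength, about 10⁻¹² m, is hundreds of thousands of times shorter than that of visible light, roughly 5 × 10⁻⁷ m.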