
The Cost of Fusion Energy

Figure 9.44 shows how the COE from fusion compares with that from other energy sources [28]. Each entry has two bars showing a minimum and a maximum value, the difference depending partly on location and partly on technology. For fossil fuels, the maximum is the cost including the expense of carbon sequestration. For fusion, the maximum and minimum represent the range of the reactor models A-D described above. These data for the other energy sources are from the IEA report of 1998. Fuel prices and interest rates have fluctuated so violently in recent years that the comparison has not been updated. However, the levelized COEs of nonfusion sources are available for 2005 and 2010 (footnotes 6 and 7). The data for 2010 are shown in Fig. 9.45. For comparison, the fusion COE given in Fig. 9.44 is reproduced in Fig. 9.45. That graph also shows the breakdown between capital costs and


Fig. 9.44 Comparison of the cost of electricity from conventional and renewable energy sources [28]


Fig. 9.45 Estimated cost of electricity in Europe from nuclear, fossil-fuel, and renewable sources assuming a 5% discount rate7. The color code gives the breakdown among capital costs, operation and maintenance (O&M), and fuel costs. For nuclear plants, there is a charge for nuclear waste management. For fossil-fuel plants, there is a cost for carbon management under certain assumptions. The estimated cost range for fusion plants has been added. The solar photovoltaic (PV) and solar thermal costs have to be plotted on a different scale

operation and maintenance costs, as well as the estimated cost of carbon capture and sequestration for fossil-fuel plants. The data are from different time periods, but the difference is insignificant in view of the uncertainties involved. It is seen that the COE from fusion plants will be competitive with that from other renewable sources and from fossil-fuel plants with carbon management.


Fig. 9.46 External costs of fusion compared with other energy sources [27]

It is interesting to note that the large variability of the COE is reflected in the IEA’s 2010 report7. The figures for each energy source vary greatly from country to country. In addition, the sensitivity of the estimates to such factors as corporate taxes, discount rate, and fuel cost is emphasized.

Not included in the above analyses are external costs, which include damage to the environment, general health, and human life. Such costs have been evaluated site by site to eliminate location biases. For instance, one considers the difference when a fusion plant is put in place of a coal plant in the same location. It turns out that the external costs of fusion are extremely low, ranging from 0.07 to 0.09 euro cents per kWh. Comparison with other energy sources is shown in Fig. 9.46.

The net present value of fusion takes into account the probability of success or failure. Though this obviously has a high degree of uncertainty, there is a large margin for error, since the annual world energy expenditures exceed the annual cost of fusion development by 1,000 times. It has been estimated that if fusion captures 10-20% of the electricity market in 50 years, the discounted future benefit of fusion is $400-800B; or, if the probability of failure is counted, it is still $100-400B. This means that development of fusion is worthwhile even if fusion captures only 1% of the world electricity market [27].

Scientific Summary

In Chap. 1, we summarized the scientific evidence for global warming caused by carbon dioxide emitted by human activity, especially the burning of fossil fuels. In Chap. 2, we summarized the known facts on fossil-fuel reserves, especially the critical shortage of oil. We showed the difficulty of and dangers in extracting the last reserves as well as the expense in sequestering the greenhouse gases emitted in their use. In Chap. 3, we surveyed alternative energy sources and found that none of them, except nuclear energy, can provide dependable backbone power, although many are suitable as supplementary power sources.


In Chaps. 4-6, we introduced the concept of fusion power and explained why a magnetic bottle holding a hot plasma is needed to fuse hydrogen into helium to get energy from water. In Chaps. 7 and 8, we explained the physics of plasma containment in a device called a tokamak and summarized all the difficult problems that have already been solved. In Chap. 9, we gave details on all the extremely difficult engineering problems that have yet to be solved. Finally, in Chap. 10, we showed other ways to achieve fusion power which have not yet been explored extensively but which may make better reactors than the tokamak.

Organic Solar Cells

Organic solar cells have been invented which are cheaper and easier to make than thin film and which have great promise in small, personal applications. The best of these are made of polymers (a general name for plastics) with long chemical names abbreviated as P3HT and PCBM. They have different bandgaps and different affinities for electrons and holes. Rather than separating them into layers as in CdTe cells, these two polymers are mixed completely together to form what is called a bulk heterojunction material. The mixture melts at a temperature below 100°C and, in liquid form, is easily coated onto a substrate, where it solidifies. The substrate can be a piece of cloth! By cooling the mixture at a particular rate, it self-organizes into connected clumps where the P3HT and PCBM are separated. A cartoon of this is shown in Fig. 3.45.


Fig. 3.45 Self-organization of two materials, A and B, in a bulk heterojunction organic solar cell [17]

When a photon strikes a P3HT region (A), it creates an electron-hole pair. The electron then follows the A path to the top transparent electrode. (Electrode is defined in footnote 45.) The hole is attracted to the PCBM (B) region because of the natural electric field that arises between the two materials, and the hole follows the B path to the metal electrode. Similarly, when a photon strikes a B region, the electron jumps into the A region, the hole stays in B, and both charges move to their respective electrodes following the strands of A and B. When the two electrodes are connected through a load, the electron current provides the solar power. The fortuitous way these polymers organize themselves avoids all the complicated layers of silicon or CdTe in conventional cells, but the trick is to get the right self-organization by slowly cooling the mixture with careful temperature control.46

The first experiments used a polymer layer less than a quarter of a micron (1/4000th of a millimeter) thick and less than a tenth the size of a postage stamp. A sunlight-to-electricity conversion efficiency of 4.4% was achieved [18], together with a high filling factor (defined above) of 67%. Many efficiency claims are deceptively high because small samples collect sunlight from the edges as well as the top, but in this case a proper test was done at the National Renewable Energy Laboratory to avoid this. Further improvement was made in 2009 using a polymer called PBDTTT, whose chemical name would take up two lines. The partner material was not a polymer but carbon in the form of fullerene, commonly known as buckyballs, the familiar spherical carbon lattices made of hexagons and pentagons and named after Buckminster Fuller. This organic solar cell was 6.77% efficient, had high output voltage, and captured more of the infrared energy than the previous model [19]. The current was also reasonable in spite of the crooked paths that the electrons have to follow.
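For readers who want to see how the fill factor enters such an efficiency figure, here is a minimal Python sketch. The formula (efficiency = open-circuit voltage × short-circuit current density × fill factor, divided by the incident power density) is standard; the numerical values are hypothetical placeholders, not the measured parameters of the cells described above.

```python
# Rough sketch of how fill factor relates to solar-cell efficiency.
# The numbers below are illustrative placeholders, not measured values
# for the P3HT:PCBM or PBDTTT cells discussed in the text.

def cell_efficiency(v_oc, j_sc, fill_factor, irradiance=1000.0):
    """Efficiency = (V_oc * J_sc * FF) / incident power density.

    v_oc        open-circuit voltage in volts
    j_sc        short-circuit current density in A/m^2
    fill_factor ratio of maximum power to V_oc * J_sc (dimensionless)
    irradiance  incident solar power density in W/m^2 (standard test: 1000)
    """
    p_max = v_oc * j_sc * fill_factor        # maximum power output, W/m^2
    return p_max / irradiance

# Example with hypothetical values in the range of small organic cells:
print(f"{cell_efficiency(0.6, 110.0, 0.67):.1%}")   # ~4.4%
```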

With efficiencies comparable to those of amorphous silicon cells, organic solar cells have great possibilities because they are inexpensive and can be put into almost anything, such as hand-held electronic devices and fabrics. They have already been built into backpacks to charge iPods and cell phones. They are not suitable for large installations, however, because the polymers are attacked by oxygen and last only one or two years. They will last almost indefinitely, though, in an oxygen-free environment such as the inside of double-glazed windows.46


Fig. 3.46 Cartoon of a dye-sensitized solar cell [17]. A is a nanoparticle, B is a conducting liquid, and C is a layer of dye on each particle

Further in the future are such inventions as dye-sensitized and quantum-dot solar cells. Dye-sensitized cells, also called Grätzel cells, consist of nanoparticles of titanium dioxide (TiO2), each only about 20 nm in diameter, coated with a layer of dye, as depicted in Fig. 3.46. (The prefix nano indicates sizes measured in billionths of a meter or millionths of a millimeter.) TiO2 is a large-bandgap semiconductor, so by itself it would absorb only ultraviolet light. The dye, however, is excited by sunlight of any desired color and can inject an electron into the nanoparticles. The electron then hops from one particle to another to get to one electrode. This leaves the dye with an electron missing, so it has to grab one from the electrolyte (a conducting liquid containing iodine) in which the particles are immersed. Efficiencies of 11-12% have been observed in the laboratory, but what they would be in production is unknown. Since a part of the cell is liquid, it has to be sealed, which is rather inconvenient. Solid or gel electrolytes have been tried, but their efficiencies are very low, 4% or so [17].

Since the electrons have to jump numerous times to get to the positive electrode, the motion can be speeded up by using nanowires or nanotubes instead of nanoparticles. Figure 3.47 shows how this would work. The nanowires are heavily coated with dye, and electrons can readily flow along them right to the electrode at the bottom. In this case, the wires are made of zinc oxide (ZnO) instead of TiO2. Carbon nanotubes have also been used. The tubes, 360 nm long, have a surface area 3,000 times that of a flat surface [21], but of course no amount of surface area can collect more sunlight than falls on the surface facing the sun. Efficiencies of 12% have been observed in the laboratory.

A further improvement can be obtained by replacing the dye with quantum dots (QDs), which are nanocrystals of InP (indium phosphide) or CdSe (cadmium selenide). These are really small, only about 3 nm in diameter. They can be coated onto TiO2 or ZnO nanowires to replace the dye coating in Fig. 3.46 or 3.47a. By varying the size of the dots, different colors of the solar spectrum can be absorbed. When a photon hits a QD, an electron-hole pair is created, and the electron falls into the nanowire and is carried straight to an electrode, as in a dye cell. QD cells


Fig. 3.47 (a) Diagram of a dye-sensitized cell using ZnO nanowires [20]; (b) microphotograph of actual nanowires [17]. This figure is turned 90° relative to Fig. 3.46

can have higher efficiency than dye cells because they can violate the theoretical limit shown in Fig. 3.44. They can give both higher voltage and higher current [22]. Normally, when a photon has more than enough energy to push an electron across the bandgap into the conduction band (Fig. 3.32), the extra energy goes into the electron. These “hot electrons” then cool and drop down to the bottom of the conduction band, so the output voltage is only the bandgap voltage. In QDs, the hot electrons cool much more slowly and can get into the circuit before losing all their energy, so the cell’s output voltage can be higher than assumed by the simple theory. Furthermore, the hot electrons can have enough energy to create more electron-hole pairs by themselves, without photons. This increases the cell’s current over the theoretical limit.

Though quantum-dot solar cells are still in the experimental stage, the way to make nanowires [23] and QDs [24] is well documented. They share all the advantages of organic solar cells in small applications and have the prospect of much better efficiencies. They have not been proved to be suitable for solar farms.

Heat can drive electric currents directly by the Seebeck effect, giving rise to thermoelectric power, which is illustrated in Fig. 3.48. If we apply heat to one side of a thermoelectric material, the hot particles at the top move faster than the cold particles at the bottom, so particles tend to drift from top to bottom. Now if on the right side, we have an electron-rich (n-type) material, the electrons will be driven from the top electrode to the bottom electrode. To close the circuit, we put an electron-deficient material (p-type) on the left, where the holes will drift downwards, and we connect the two bottom electrodes to a load. The electrons will then flow through the wire from right to left to fill the holes. Since the electrons are negative, the electrical current goes from left to right. A working arrangement might look like that in Fig. 3.49. Solar concentrators are used to increase the heat applied to the thermo-photovoltaic (TPV) cell, and the bottom of the cell has to be kept cool by water or air flow.
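The voltage such a couple produces is just the difference of the two legs' Seebeck coefficients times the temperature difference across them. A minimal sketch, with illustrative coefficient values typical of good thermoelectric materials rather than numbers taken from this chapter:

```python
# Minimal sketch of the Seebeck effect in a p-n thermoelectric couple.
# Seebeck coefficients here are illustrative (a few hundred microvolts per
# kelvin is typical of good thermoelectrics such as Bi2Te3), not values
# quoted in the text.

def couple_voltage(s_p, s_n, t_hot, t_cold):
    """Open-circuit voltage of one p-n couple.

    s_p, s_n : Seebeck coefficients of the p- and n-type legs (V/K);
               s_p is positive and s_n is negative, so their contributions add.
    t_hot, t_cold : hot- and cold-side temperatures (K).
    """
    return (s_p - s_n) * (t_hot - t_cold)

# One couple with +/-200 microvolt/K legs across a 300 K temperature difference:
v = couple_voltage(200e-6, -200e-6, 600.0, 300.0)
print(f"{v * 1000:.0f} mV per couple")   # ~120 mV; modules stack many couples in series
```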

[Figure component labels: lens, vacuum enclosure, tungsten emitter (T > 2000 °C), TPV cell, back-surface reflector, water (or forced-air) heat exchanger]

Fig. 3.49 Illustration of thermo-photovoltaic solar cell (Basic Research Needs for Solar Energy Utilization, US Department of Energy Office of Science workshop, April 2005)

This idea is still in the initial stages of testing the thermoelectric efficiencies of compounds like PbTe, Bi2Te3, AgSbTe2, and AgBiSe2 and formulating new ones. Note that the latter two are type I-V-VI semiconductors [25]. Research is also proceeding on using nanowires and quantum well structures for this purpose [26, 27].

Fusion: Energy from Seawater[6]

Fission and Fusion: Vive La Difference!

The energy of the nucleus can be tapped two ways: by splitting large nuclei into smaller ones (fission) or by combining small nuclei into larger ones (fusion). The first yields what we know as atomic or nuclear (fission) energy, together with its dangers and storage problems. The second gives fusion energy, which is basically solar power, since that is the way the sun and stars generate their energies. Fusion is much safer than fission and requires as fuel only a little bit of water (in the form of D2O instead of H2O, as will soon be clear). Fission is a well-developed technology, while fusion is still being perfected as an energy source. The object of this book is to show how far fusion research has gone, how much further there is to go, and what we will gain when we get there.

Binding Energy

How can we get energy by fusing two nuclei when normally we have to split them? To understand this, we have to remember that atomic nuclei are composed of protons and neutrons, each of which weighs about the same1 but has a different electric charge: +1 for protons and 0 for neutrons. When these nucleons (a general term for protons and neutrons) are assembled into a nucleus, they hold themselves together with a nuclear force measured by the so-called binding energy. The size of this binding energy varies from element to element in the periodic table, as shown in Fig. 4.1. There we see that elements near the middle of the periodic table are more tightly bound than those at either end. At the peak of the curve, with the highest binding energy, is iron. It is labeled as Fe56, 56 being its mass number, the number of nucleons in its nucleus.

Energy is released when elements are transmuted into other elements which have higher binding energy. Starting with a heavy element like uranium, one has to

Fig. 4.1 Binding energy vs. the number of nucleons in the nucleus for all elements from hydrogen to uranium (redrawn from Wikipedia.com). The energy units will be explained later

split it to get atoms of lower mass number. If one starts with a light element like hydrogen, one has to fuse two nuclei together to reach a higher mass number and move toward the peak of the curve. As labeled, fission goes from right to left, and fusion goes from left to right.

You may wonder why binding energy is increased in both fission and fusion. Would not that require an input of energy rather than yield an output of energy? Yes, it is confusing; but to move forward without such distractions, the explanation is relegated to Box 4.1. Figure 4.1 would make more sense if we turn it upside down and plot binding energy downwards. This is done in Fig. 4.2. There we see that both fission and fusion go downhill, generating energy in the process.
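The energy released on the way "downhill" comes from the mass defect: the products of a reaction weigh slightly less than the reactants, and the difference shows up as energy through E = mc². A short sketch for the D-T reaction discussed later in the book; the atomic masses are standard published values, not figures from this chapter.

```python
# Energy released by D + T -> He-4 + n, computed from the mass defect.
# Masses in atomic mass units (u) are standard published values.

U_TO_MEV = 931.494          # energy equivalent of 1 u, in MeV

m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect: {mass_defect:.6f} u -> {energy_mev:.1f} MeV released")  # ~17.6 MeV
```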

Mother Nature Lends a Hand

Many a frustrated physicist has complained that Mother Nature is a bitch. After the instability problems we described in previous chapters, fusion physicists would have agreed had the problems not been so challenging but soluble. There have even been several pleasant surprises when unexpected benefits were found that could not have been foreseen when fusion reactors were first envisioned. Some of these effects are now well documented; others still cannot be explained. The most remarkable of these surprises is the H-mode, a high-confinement mode on which present designs depend. It is so important that it deserves its own section, which follows this one at the next major heading.

High-Temperature Superconductors

In 1986, compounds were discovered that became superconducting at a critical temperature as high as 30 K. Since then, research to find better materials has been intense. The goal was to get the critical temperature above 77 K, the point at which nitrogen becomes liquid. Liquid nitrogen is much, much cheaper and easier to produce than liquid helium, which is liquid below 4 K. The 73°C difference between 77 and 4 K does not seem much. We encounter such a change every time we boil a cup of coffee. However, since one can never go below absolute zero, it is the distance from absolute zero that is important. Seventy-seven kelvin is 19 times farther from 0 K than is 4 K; and, of course, there is no shortage of nitrogen. The goal has already been achieved; three superconductors have been found that work at liquid nitrogen temperatures. The record as of 2009 is 135 K, well above 77 K. Typically, the compound is complicated: HgBa2Ca2Cu3Ox. Until searches can be made by computer, finding new compounds will be slow; but it is a reasonable expectation that large-scale production of a high-temperature superconductor will be possible by the time DEMO is built. Maybe a room-temperature superconductor will have been found by that time. The machine would be much simpler and cheaper.

Mirror Machines

Although these coils provide good stability, they do not enclose a large volume of plasma. They can, however, be used to stabilize a large volume of plasma attached to them. A series of large machines called tandem mirrors was built at Livermore with a long region of uniform B-field, which has neutral stability and is stabilized at the ends with yin-yang or baseball coils. One of these, the TMX, is shown in Fig. 10.25. The end coils of these machines became more and more complex as each difficulty was overcome. Intense heating produced enough density in the baseball coil to stabilize the main plasma in the weaker central region. Thermal barriers used electrostatic potentials to keep the plasma hot in the baseball coils. Sloshing ions were used to shape these potentials. Circularizer coils matched the flattened plasma in the baseball coil to a round one on either side. Anchor coils with higher field were the final plugs at the ends.

The successor to TMX was to be the MFTF-B installation, whose size can be appreciated in Fig. 10.26, which shows one of the yin-yang magnets being moved using the old Roman method of rolling logs. In spite of an earthquake occurring while the coil was being lifted into place, the machine was finished just in time for the entire mirror project to be canceled, much to the dismay of its leader, Keith Thomassen, and, of course, Dick Post.

The 27-m long Gamma 10 machine at Tsukuba, Japan, however, continued to operate and has improved confinement by increasing the potential barrier confining the ions [24]. Instabilities have also been eliminated by producing electric field shear [25]. Results from tandem mirrors, however, pale compared with those from toruses. Densities peak at 4 × 10¹⁸ m⁻³ (4 × 10¹² cm⁻³), ion temperatures at a keV or


Fig. 10.25 Diagram of the tandem mirror experiment [23]. The flat bars represent neutral beams heating the plasma in the stabilizing coils


Fig. 10.26 Moving the MFTF-B yin-yang magnet (an old image originally from Lawrence Livermore National Laboratory)

two, electron temperatures around 250 eV, and energy confinement times of order 10 ms. In addition, control of electric potentials sometimes requires plasma contact with conducting walls. Though the present state of the art on magnetic mirrors does not suggest their reactor relevance, they may be useful for other tasks that do not require net energy output. These include creating plasmas for transmutation of nuclear wastes or energy production in fission-fusion hybrids. First and foremost, however, is the proposed use of mirror machines as economical neutron sources for materials testing, as described in Chap. 9. Such a machine burning D-T fuel would produce 2 MW/m² fluxes of 14-MeV neutrons over a sizable area using only 200 g of tritium per year [26].
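As a rough consistency check of those numbers, one can convert 200 g of tritium per year into an average neutron power, assuming (as an idealization) that every tritium atom is burned. A sketch:

```python
# Back-of-the-envelope check of the mirror neutron-source numbers cited in
# the text: 200 g of tritium per year and 2 MW/m^2 of 14-MeV neutrons.
# Assumes every tritium atom is burned, which is an idealization.

AVOGADRO         = 6.022e23   # atoms per mole
MEV_TO_J         = 1.602e-13  # joules per MeV
SECONDS_PER_YEAR = 3.156e7

tritium_grams  = 200.0
tritium_molar  = 3.016        # g/mol
neutron_energy = 14.1         # MeV carried by each D-T neutron

fusions = tritium_grams / tritium_molar * AVOGADRO          # one neutron per fusion
neutron_power = fusions * neutron_energy * MEV_TO_J / SECONDS_PER_YEAR

print(f"Average neutron power: {neutron_power / 1e6:.1f} MW")   # ~2.9 MW
print(f"Area at 2 MW/m^2:      {neutron_power / 2e6:.1f} m^2")  # ~1.4 m^2, a sizable test area
```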

Plug-in Hybrids

Until the battery problem is solved, electric hybrids will continue to evolve. The next step is the plug-in hybrid, in which the battery is charged overnight from the grid. Since most people in cities usually drive no more than 30 miles (50 km) a day, a slightly larger battery will store enough energy for that, so that the gasoline engine need not be started except on weekends. Air quality in cities would be greatly improved. There are actually two types of plug-in hybrids. The usual one works like the Prius: the battery is charged from the grid as well as by the gasoline motor. Two motors drive the car. In a series hybrid, a small motor runs only to charge the battery. The propulsion is entirely electric. The savings in fossil-fuel consumption and GHG emission have been estimated in a report by Electric Power Research Institute and the Natural Resources Defense Council (EPRI-NRDC).54 It turns out that it matters whether the battery is sized to give 10, 20, or 40 miles of electric driving.

The EPRI-NRDC report considers scenarios, nine in all, depending on whether PHEVs (plug-in hybrid electric vehicles) have a low, medium, or high penetration into the market, and whether the power industry makes a low, medium, or high effort to reduce their emissions. Although the nine results vary by a factor of 4, they are all good. The GHG reduction in 2050 is predicted to be between 163 and 612 million metric tons (in the USA). An idea of how they expect PHEVs to take over the market is shown in Fig. 3.53. Even if no progress is made in battery technology (which is unlikely), PHEVs will take over more than half the car market!

Table 3.1 compares various kinds of hybrids with normal cars.55 The data are for 12,000 miles of driving in year 2010. The normal hybrid generates its own electricity and therefore uses more gasoline than PHEVs, though less than gasoline cars. PHEV10 is a PHEV that can go 10 miles on one charge. PHEV20 and 40 have bigger batteries to go 20 and 40 miles. All the hybrids are assumed to have


Fig. 3.53 Expected penetration of plug-in hybrids into the market by 2050 (see footnote 54)

Table 3.1 Comparison between normal cars and hybrids of various types

Type of car              Normal gas   Normal hybrid   PHEV10   PHEV20   PHEV40
Gasoline (gallons)              488             317      277      161      107
Electricity (kWh)                 0               0      467    1,840    2,477
Fuel economy (mpg)               25              38       38       38       38
Cost of electricity               0               0      $55     $215     $290
Cost of gasoline             $1,464            $951     $831     $483     $321
Total for 12,000 miles       $1,464            $951     $886     $698     $611

a gas motor averaging 38 miles/gallon. The PHEVs use more electricity from the grid and less gasoline, so their carbon footprints are smaller. Remember that electricity generated at a power plant uses less oil than electricity generated in the car. If the power plant uses hydroelectricity or nuclear power, the carbon footprint is more than halved.

How much money will a plug-in hybrid save? This depends, of course, on the battery size in the PHEV and on local prices; but here is an example. The breakdown between electricity usage and gas usage in Table 3.1 is based on some data on driving habits. In electric drive, a Prius-type hybrid uses 150 W-hrs of electricity per kilometer.56 This works out to be 0.24 kWh/mile. In 2009, the average cost of residential electricity in the USA was 11.7 ¢/kWh. The cost of 2,477 kWh in the PHEV40 case is then 2,477 × $0.117 = $290. In the PHEV40 column, we see that 107 gallons of gasoline are consumed. If we assume a price of $3.00/gallon, the gas cost is $321 and the total fuel cost is $611. These are the figures in the last column of Table 3.1. The other columns are calculated the same way. As for the “normal” cars, all the energy comes from gasoline, so there is no electricity cost. We see that hybrids save on the cost of fuel, but these savings may not offset the premium one pays for hybrids at present. For the plug-in hybrids, there is a “sweet spot” around the PHEV20, whose fuel costs are much lower than for the PHEV10 but not much higher than for the PHEV40. Since most people do not drive 40 miles every day, the extra cost of a large battery is not worth it. However, individuals are not “most people”; they can buy a plug-in suited for their own driving habits.
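The cost rows of Table 3.1 follow directly from the per-unit prices quoted above. A small sketch that reproduces them (the annual consumption figures are simply copied from the table):

```python
# Reproduces the cost rows of Table 3.1 from the quoted prices:
# electricity at 11.7 cents/kWh (2009 US residential average) and
# gasoline at $3.00/gallon.  Consumption figures come from the table.

ELECTRICITY_PRICE = 0.117   # $/kWh
GASOLINE_PRICE    = 3.00    # $/gallon

cars = {   # annual consumption for 12,000 miles of driving
    "Normal gas":    {"gallons": 488, "kwh": 0},
    "Normal hybrid": {"gallons": 317, "kwh": 0},
    "PHEV10":        {"gallons": 277, "kwh": 467},
    "PHEV20":        {"gallons": 161, "kwh": 1840},
    "PHEV40":        {"gallons": 107, "kwh": 2477},
}

for name, use in cars.items():
    elec = use["kwh"] * ELECTRICITY_PRICE
    gas  = use["gallons"] * GASOLINE_PRICE
    print(f"{name:14s} electricity ${elec:4.0f}   gasoline ${gas:5.0f}   total ${elec + gas:5.0f}")
```

Running it gives the $290, $321, and $611 of the PHEV40 column, and the other columns to within rounding.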

There has been some concern about the effect of numerous plug-in hybrids on the grid. Since charging a PHEV on household current can take upwards of eight hours, most people would want 240-V service installed. Then charging can be done in 2-3 h. At this rate, however, as much as 6.6 kW of electricity is drawn. Each car that is plugged into that service is like adding three houses to the grid, each house with its lights on and air conditioner working.57 If every household had a plug-in, the local grid would have to be boosted. However, the EPRI-NRDC study shows that the industry experts are not worried. They show a profile in which 74% of the charging is done between 10 p.m. and 6 a.m., with a small daytime peak between 10 a.m. and 4 p.m. There are minima around 8:30 a.m. and 5:30 p.m. when people are commuting. The grid can handle that load, at least for the present.

Batteries

Electric cars can go a long way toward relieving our dependence on oil, but the bottleneck is the battery. We are spoiled by having cars that can go 300-400 miles (500-600 km) without refueling and can be filled up in 10 min. There has been no path-breaking invention in batteries in the last few decades. Figure 3.54 shows where we are. Each rectangle is the range occupied by one type of battery according to how much it weighs and how big it is compared to the energy it can store. Lighter batteries are to the right, and smaller batteries are near the top. At the bottom left is the old stand-by: the lead-acid battery used in conventional cars. It is heavy and big for the amount of energy it carries. The only improvement over the last 50 years is that they are now sealed, so that we don’t have to check the fluid level and add water every week or so. The first experimental electric cars carried a load of lead-acid batteries. One battery is only good for starting a car and keeping its headlights on for a few hours; it cannot move a car very far. The small carbon-zinc and alkaline batteries we use for small appliances and toys are off the chart because they are not rechargeable. Nickel-metal-hydride (NiMH) batteries, however, are successfully used in cars, notably the Prius. These were chosen because they are safer than lithium and have proven reliability. The best we have at present is the lithium-ion battery. As Fig. 3.54 clearly shows, “lithiums” are lighter and smaller for the same amount of energy. They are used to power laptop computers, cell phones, cameras, and other small appliances. Their safety and reliability are, however, worrisome for use in cars. There is hope, however, because electric cars like the Tesla Roadster have shown that, if cost is not a consideration, sport-car performance can be


Fig. 3.54 Performance of major types of batteries. For each type, the horizontal axis shows the energy stored per unit weight in watt-hours per kilogram, and the vertical axis shows the energy stored per unit volume in watt-hours per liter. Adapted from Basic needs for energy storage, Report of the Basic Energy Sciences Workshop for Electrical Energy Storage, Office of Basic Energy Sciences, US Department of Energy (July 2007)

achieved with a 6800-cell Li-ion battery good for 244 miles. With a 288 HP (215 kW) motor, the car goes 125 mph (200 kph) and accelerates 0-60 mph in 3.7 s. Charging the battery at 240 V draws about 17 kW and takes 3.5 h.

Aside from cost, lithium batteries have two main problems. Safety is the main concern, since lithium batteries have been known to explode, as they did in some laptops a few years ago. When a short circuit occurs in such a battery, the chemicals can burn and cause short circuits in neighboring cells, which release more heat, starting a runaway reaction. Unlike hydrogen, which cannot burn without oxygen from the air, lithium batteries have the oxygen inside. The solution is to divide the lithium battery pack into small isolated units which are then connected together with wires. The second problem is life span, which depends on how often the battery is recharged. Even if it is not used, a lithium battery can lose as much as 20% of its capacity per year [33], as many laptop owners have found to their dismay. The number of charge-recharge cycles is limited to several thousand. For cars, 5,000 cycles would be good for 10 years for most drivers, and this is close to present technology. However, it would be hard to build enough extra capacity for the car to maintain its driving range for 10 years. Charging a lithium battery too fast or overcharging it could cause plating of the electrodes, which shortens its life. These problems are slowly being solved as companies move into this rapidly expanding market. The target price set by the US Advanced Battery Consortium for electric car batteries is $300/kWh. Lead-acid batteries cost about $45/kWh, compared with NiMH batteries, which cost $350/kWh for small ones to $700/kWh for ones used in cars. Right now Li-ion batteries cost $450/kWh [33]. Perhaps economy of scale will bring the prices down as electric cars overtake the market.
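A rough way to combine the quoted prices with the worst-case 20%-per-year capacity fade is sketched below; the 16-kWh pack size is a hypothetical example, not a figure from the text.

```python
# Rough comparison using the per-kWh battery prices quoted in the text and
# the ~20%/year worst-case capacity fade cited for lithium-ion cells.
# The 16-kWh pack is a hypothetical example of a PHEV-class battery.

PRICES = {                      # $/kWh, from the text
    "Lead-acid":          45,
    "NiMH (automotive)":  700,
    "Li-ion (today)":     450,
    "USABC target":       300,
}

PACK_KWH = 16.0                 # hypothetical pack size

for name, price in PRICES.items():
    print(f"{name:18s} pack cost ~${price * PACK_KWH:,.0f}")

# Capacity fade at a worst-case 20% loss per calendar year, compounded:
capacity = 1.0
for year in range(1, 6):
    capacity *= 0.80
    print(f"after {year} yr: {capacity:.0%} of original capacity")
```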

Perfecting the Magnetic Bottle*

*Numbers in superscripts indicate Notes and square brackets [] indicate References at the end of this chapter.

Some Very Large Numbers

The last chapter had a lot of information in it, so let us recapitulate. To get energy from the fusion of hydrogen into helium as occurs in the sun and other stars, we have to make a plasma of ionized hydrogen and electrons and hold it in a magnetic bottle, since the plasma will be much too hot to be held by any solid material. The way a magnetic field holds plasma particles is to make them turn in tight circles, called Larmor orbits, so that they cannot move sideways across magnetic field lines. However, the ions and electrons can move along the field lines in their thermal motions without restraint. Consequently, the magnetic container has to be shaped like a doughnut, a torus, so that the field lines can go around and around without ever running into the walls. The field lines also have to be twisted into helices to avoid a vertical drift of the particles that occurs in a torus but not in a straight cylinder. Ideally, each field line will trace out a magnetic surface as it goes around many times without ever coming back exactly on itself. The plasma is then confined on nested magnetic surfaces which never touch the wall. This ideal picture will be modified in this and later chapters as we understand more about the nature of these invisible, nonmaterial containers.

We’ve gotten an idea of what a magnetic bottle looks like, but how large, how strong, or how precise does it have to be? The sun, after all, has a tremendous gravi­tational force to hold its plasma together; but we on earth have much more limited resources. The size of a fusion reactor will be large if it is to produce backbone power. The torus itself may be 10 meters in diameter, and the reactor with all its components will fill a large four-story building. A better picture will be given in the engineering section later in this book. For experiments on plasma confinement, however, much smaller machines have been used. The figure-8 stellarators, for instance, were only about 3 meters long. Modern torus experiments are about half or a quarter the size of a reactor.

The temperature of the plasma in the interior of the sun is about 15,000,000 (1.5 × 10⁷) degrees, but a fusion reactor will need to be about ten times hotter, or

150,000,000 (1.5 × 10⁸) degrees. We can use the electron-volt (eV) to make these numbers easier to deal with. Remember that 1 eV is about the amount of energy that holds a molecule together. Remember also that the temperature of a gas is related to the average energy of the molecules in the gas. It turns out that 1 eV is the average energy of particles in a gas at 11,600 K or roughly 10,000 K. So instead of saying 150,000,000°, we can say that the temperature is 15,000 eV or 15 keV. By that we mean that the particle energies in the gas are of the order of 15 keV. When we say degrees, do we mean Fahrenheit, Centigrade, or Kelvin (absolute)? For this discussion, it doesn’t matter, since Fahrenheit and Centigrade degrees differ by less than a factor of 2, and Centigrade and Kelvin differ by only 273°. We do not really care whether the sun is at 10 million or 20 million degrees! It makes a difference to scientists, who use degrees Kelvin, but not for this general overview.
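The conversion is simple enough to write down explicitly. A sketch, using the exact factor of 11,600 K per eV (the text rounds it to about 10,000 K for simplicity):

```python
# Converting between kelvin and the electron-volt temperature scale:
# 1 eV of average particle energy corresponds to about 11,600 K.

def ev_to_kelvin(t_ev):
    return t_ev * 11600.0

def kelvin_to_ev(t_kelvin):
    return t_kelvin / 11600.0

print(f"15 keV -> {ev_to_kelvin(15_000):.2e} K (about 170 million K)")
print(f"300 K  -> {kelvin_to_ev(300):.3f} eV (room temperature)")
```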

Why do we need a plasma temperature as high as 10 keV? This is because positive ions repel one another with their electric fields, and they must have enough energy to crash through the so-called Coulomb barrier before they can get close enough to fuse together. In Chap. 3, we discussed why a hot plasma is a better solution than beams of fast ions. Here, we give more details. Figure 5.1 shows a graph of the probability of deuterium-tritium fusion plotted against the temperature of the ions in keV.1 Note that the probability peaks at around 60 keV, but the ion temperature does not have to be that high because the ions have a Gaussian distribution of energies. When the ions are at 10 keV, there are enough ions in the tail of the distribution (Fig. 3.3), near 40 keV, which fuse rapidly enough. Note that at the sun’s 2 keV, the reactivity is very low; so low that ions stick around for millions of years before they undergo fusion. But on earth we do not have that kind of time!

Exactly how much time do we have? A magnetic bottle cannot hold a plasma forever because a plasma will always find a way to escape. From Fig. 5.1, we see that the lower the temperature, the slower is the fusion rate, so the plasma containment time has to be longer. The relation among plasma density (n for short), ion temperature (T for short), and confinement time τ was originally worked out by J. D. Lawson and is commonly known as the Lawson Criterion. A modified form of this


Fig. 5.1 Probability of DT fusion vs. ion temperature


Fig. 5.2 Lawson criterion for DT fusion (For those already familiar with fusion, the ordinate is actually nτE, where τE is the energy confinement time, in units of sec/cm³. The curves were recomputed using the modern data of Bosch and Hale [1] and assuming a thermal conversion efficiency of 30%. The time τE is more honest than the particle confinement time, called τ here, because it includes losses in the form of electromagnetic radiation.)

is shown in Fig. 5.2. The criterion says that the product of density and confinement time — that is, n × τ — has to be higher than a value that varies with T. There are two curves. The lower one, marked BREAKEVEN, stands for scientific breakeven, in which the fusion energy just balances the energy needed to create the plasma. Real breakeven would include all the power needed to run the rest of the plant, requiring higher nτ. The upper curve, labeled IGNITION, is the nτ required for a self-sustaining plasma, in which the plasma heats itself without additional energy. That happens because one of the products of a DT reaction (see Fig. 3.2) is a charged α-particle (a helium nucleus), which is trapped by the magnetic field and stays in the plasma keeping the D’s and T’s hot with its share of the fusion energy. Clearly, the goal of fusion research is to reach ignition, and present plans are to build an experiment that can generate enough α-particles to see how they thermalize.

Now we can answer the question as to how long we must hold the plasma. The breakeven curve in Fig. 5.2 says that nτ must be at least 10¹⁴ sec/cm³ (marked as 1E+14 on the graph). A reasonable value for the plasma density n is 10¹⁴/cm³ (100 trillion ion-electron pairs per cubic centimeter). Therefore, τ is of the order of 1 sec. We have to hold the plasma energy in a magnetic bottle for at least 1 sec, not a million years, as in the sun. This has already been achieved, albeit not at such a high density. The progress in fusion can be appreciated when one recalls that the confinement in figure-8 stellarators was about 1 microsecond. Our work has paid off a million-fold.
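The arithmetic is just the breakeven product divided by the assumed density. A sketch:

```python
# The breakeven arithmetic from Fig. 5.2: n * tau must exceed about 1e14 sec/cm^3.

n_tau_breakeven = 1e14     # sec/cm^3, the breakeven value at reactor temperatures
density = 1e14             # ion-electron pairs per cm^3, the value assumed in the text

tau = n_tau_breakeven / density
print(f"Required confinement time: about {tau:.0f} sec")                   # ~1 sec
print(f"Improvement over early figure-8 stellarators: {tau / 1e-6:.0e}x")  # a million-fold
```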

To confine the plasma in a stellarator, the magnetic field has to be carefully made. Figure 5.3 shows the average distance an ion travels, in kilometers, before it makes a fusion collision.2 The curve is lowest at ion energies of around 60 keV, since the fusion probability in Fig. 5.1 peaks there. At the more normal energy of 40 keV, as explained above, an ion covers about the circumference of the earth as it goes around and around a torus! One might think that a magnetic bottle cannot be made this accurately, but it turns out that confining single particles is not a problem.


Fig. 5.3 Ion mean free path for fusion vs. energy, at a density of 10¹⁴/cm³

After all, storage rings in atom smashers can hold protons for hours or even days. Toroidal fusion experiments do not use focusing magnets as in particle accelerators, but electrons have been shown to be confined for millions of turns even in a primitive stellarator [3]. To hold a plasma is much harder because the ions and electrons can cooperate with one another to form their own escape paths. The accuracy of the magnetic field is not the problem.

So far we have considered the shape of the magnetic field but not its strength. A hot gas like a plasma exerts a lot of pressure, and the magnetic bottle has to be strong enough to hold this pressure. How does this pressure compare with our everyday experience? Pressure is density times temperature. Let’s first talk about temperature. Room temperature is about 300 K. Expressed in electron-volts, this is 300/11,600 = 0.026 eV. A fusion plasma has a temperature of, say, 15 keV, about 600,000 times higher. Fortunately, the density is much lower. Atmospheric density is about 3 × 10¹⁹ molecules/cm³, while a fusion plasma has about 2 × 10¹⁴ particles/cm³, about 150,000 times fewer. So the net result is that the magnetic field has to hold a pressure about 600,000/150,000 times higher than normal: roughly 4 atmospheres (atm). This is the pressure at which water comes out of a faucet or that felt by a diver at 40 m depth, as one can figure out from the well-known fact that atmospheric pressure is about 1 kg/cm² or 15 lbs./sq. in. Four atmospheres is not a huge number, but the pressure has to be exerted by a massless magnetic field! The strength of a magnetic field is measured in teslas (T), each tesla being 10,000 gauss (G), which may be a more familiar unit to old-timers. A magnetic field of 1 T can exert a pressure of about 4 atm. Thus, the field strength required to hold a fusion plasma is about 1 T (10,000 G). This is a conservative number, because actual machines go up above 3 T. This is to be compared with the earth’s magnetic field, which is about 0.5 G, or with the strength of a memo-holding refrigerator magnet, about 40 G. However, MRI (Magnetic Resonance Imaging) machines need about 1 T. You can hear the field during an MRI exam because the field has to be oscillated, causing parts of the machine to rattle and hum. To create a 1-T field requires large, heavy “coils” consisting of copper windings or superconductors imbedded in a solid material to hold them in place. This is not a problem and is routinely done in fusion experiments. Though it is the magnetic field that applies pressure to the plasma, the field is held in place by the current-carrying coils; and it is the coils that ultimately bear the pressure. This is also not a problem because the coils have to be made quite sturdily in any case.
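Redoing this estimate with physical constants gives nearly the same answer as the rounded numbers above. A sketch (the magnetic pressure is B²/2μ₀, so the field enters as the square):

```python
import math

# Redoing the rough pressure estimate with physical constants: plasma pressure
# p = n k T for ~2e14 particles/cm^3 at 15 keV, and the magnetic field whose
# pressure B^2 / (2 mu_0) would balance it.  The answer differs slightly from
# the rounded numbers in the text.

EV   = 1.602e-19          # joules per electron-volt
MU_0 = 4e-7 * math.pi     # vacuum permeability (T m/A)
ATM  = 1.013e5            # pascals per atmosphere

density = 2e14 * 1e6      # particles per m^3 (2e14 per cm^3, ions plus electrons)
temperature_ev = 15_000   # 15 keV

pressure = density * temperature_ev * EV       # pascals
b_required = math.sqrt(2 * MU_0 * pressure)    # tesla

print(f"Plasma pressure: {pressure / ATM:.1f} atm")   # ~4.7 atm
print(f"Balancing field: {b_required:.1f} T")         # ~1.1 T
```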

Computer Simulation

Before describing some effects that are not yet completely understood, we should mention the basis for believing that these problems are not insoluble. That’s the important subject of computer simulation. In the 1970s and 1980s, when unanticipated difficulties with instabilities arose, computers were still in their infancy. To the dismay of both fusion scientists and congressmen, the date for the first demonstration reactor kept being pushed back by decades. The great progress seen in Fig. 8.1 since the 1980s was in large part aided by the advances in computers, as seen in Fig. 8.2. In a sense, advances in fusion science had to wait for the development of computer science; then the two fields progressed dramatically together. Nowadays, a $300 personal computer has more capability than a room-size computer had 50 years ago when the first principles of magnetic confinement were being formulated.

Fig. 8.10 Hokusai’s painting of the Big Wave

Computer simulation was spearheaded by the late John Dawson, who worked out the first principles and trained a whole cadre of students who have developed the science to its present advanced level. A computer can be programmed to solve an equation, but equations usually cannot even be written to describe something as complicated as a plasma in a torus. What, for instance, does wavebreaking mean? In Hokusai’s famous painting in Fig. 8.10, we see that the breaking wave doubles over on itself. In mathematical terms, the wave amplitude is double-valued. Ignoring the fractals that Hokusai also put into the picture, we see that the height of the wave after breaking has two values, one at the bottom and one at the top. Equations cannot handle this; Dawson’s first paper showed how to handle this on a computer.

So the idea is to ask the computer to track where each plasma particle goes without using equations. For each particle, the computer has to memorize the x, y, z coordinates of its position as well as its three velocity components. Summing over the particles would give the electrical charge at each place, and that leads to the electric fields that the particles generate. Summing over their velocities gives the currents generated, and these specify the magnetic fields generated by the plasma motions. The problem is this. There are as many as 10¹⁴ ions and electrons per cubic centimeter in a plasma. That’s 200,000,000,000,000 particles. No computer in the foreseeable future can handle all that data! Dawson decided that particles near one another will move together, since they will feel about the same electric and magnetic fields at that position. He divided the particles into bunches, so that only, say, 40,000 of these superparticles have to be followed. This is done time step by time step. Depending on the problem, these time steps can be as short as a nanosecond. At each time step, the superparticle positions and velocities are used to solve for the E- and B-fields at each position. These fields then tell how each particle moves and where it will be at the beginning of the next time step. The process is repeated over and over again until the behavior is clear (or the project runs out of money). A major problem is how to treat collisions between superparticles, since, with their large charges, the collisions would be more violent than in reality. How to overcome this is one of the principles worked out by Dawson.
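To make the procedure concrete, here is a toy one-dimensional electrostatic particle-in-cell loop in the spirit of Dawson's method, written in Python with NumPy. It is a sketch in normalized units with a fixed neutralizing ion background, not a real fusion code, and every parameter in it (particle count, grid size, beam velocities) is an arbitrary illustrative choice.

```python
import numpy as np

# Toy 1D electrostatic particle-in-cell loop: deposit superparticle charge on
# a grid, solve for the electric field, push the particles, and repeat.
# Normalized units (plasma frequency = 1, fixed neutralizing ion background).

np.random.seed(0)
n_particles, n_cells, length = 20_000, 64, 2 * np.pi
dx, dt, n_steps = length / n_cells, 0.1, 200
weight = length / n_particles        # each superparticle stands in for many electrons

# Two cold counter-streaming electron beams (the classic two-stream setup)
x = np.random.uniform(0, length, n_particles)
v = np.where(np.arange(n_particles) % 2 == 0, 0.5, -0.5) + 1e-3 * np.random.randn(n_particles)

k = 2 * np.pi * np.fft.fftfreq(n_cells, d=dx)   # wavenumbers for the field solve

for step in range(n_steps):
    # 1) deposit charge onto the grid (nearest-grid-point weighting)
    cells = (x / dx).astype(int) % n_cells
    rho = 1.0 - weight * np.bincount(cells, minlength=n_cells) / dx   # ions minus electrons

    # 2) solve Gauss's law dE/dx = rho in Fourier space
    rho_k = np.fft.fft(rho)
    e_k = np.zeros_like(rho_k)
    e_k[1:] = rho_k[1:] / (1j * k[1:])           # the k = 0 (net charge) mode is zero
    e_grid = np.real(np.fft.ifft(e_k))

    # 3) gather the field at each particle and advance velocity, then position
    v += -e_grid[cells] * dt                     # electrons: charge-to-mass ratio = -1
    x = (x + v * dt) % length

print("final field energy:", 0.5 * np.sum(e_grid ** 2) * dx)
```

Real codes differ in the essentials the text goes on to describe — three dimensions, magnetic fields, and careful treatment of collisions between superparticles — but the deposit-solve-push cycle is the same.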


Fig. 8.11 Electric field pattern in a turbulent plasma (from ITER Physics Basis 2007 [26], quoted from [14]. The plot is of electric potential contours of electron-temperature-gradient turbulence in a torus)

Before computers, scientists’ bugaboo was nonlinearity. This is nonproportionality, like income taxes, which go up faster than your income. Linear equations could be solved, but nonlinear equations could not, except in special cases. A computer does not care whether a system behaves linearly or not; it just chugs along, time step by time step. A typical result is shown in Fig. 8.11. This shows the pattern of the electric fields generated by an instability that starts as a coherent wave but then goes nonlinear and takes on an irregular form. This turbulent state, however, has a structure that could not have been predicted without computation; namely, there are long “fingers” or “streamers” stretching in the radial direction (left to right). These are the dangerous perturbations that are broken up by the zonal flows of Chap. 7.

The simulation techniques developed in fusion research are also useful in other disciplines, like predicting climate change. There is a big difference, however, between 2D and 3D computations. A cylinder is a 2D object, with radial and azimuthal directions and an ignorable axial direction, along which everything stays the same. When you bend a cylinder into a torus, it turns into a 3D object, and a computer has to be much larger to handle that. For many years, theory could explain experimental data after the fact, but it could not predict the plasma behavior. When computers capable of 2D calculations came along, the nonlinear behavior of plasmas could be studied. Computers are now fast enough to do 3D calculations in a tokamak, greatly expanding theorists’ predictive capability. Here is an example of a 3D computation (Fig. 8.12). The lines follow the electric field of an unstable perturbation called an ion-temperature-gradient mode. These lines pretty much follow the magnetic field lines. On the two cross sections, however, you can see how these lines move in time. The intersections trace small eddies, unlike those in the previous illustration. It is this capability to predict how the plasma will move under complex forces in a complicated geometry that gives confidence that the days of conjectural design of magnetic bottles are over.

The science of computer simulation has matured so that it has its own philosophy and terminology, as explained by Martin Greenwald [15]. In the days of Aristotle, physical models were based on indisputable axioms, using pure logic with no input from human senses. In modern times, models are based on empiricism and must agree with observations. However, both the models and the observations are inexact. Measurements always have errors, and models can keep only the essential elements. This is particularly true for plasmas, where one cannot keep track of


Fig. 8.12 A 3D computer simulation of turbulence in a D-shaped tokamak (courtesy of W. W. Lee, Princeton Plasma Physics Laboratory)

every single particle. The problem is to know what elements are essential and which are not. Computing introduces an important intermediate step between theory (models) and experiment. Computers can only give exact solutions to inexact equations or approximate solutions to more exact (and complicated) equations. Computer models (codes) have to be introduced. For instance, a plasma can be represented as particles moving in a space divided into cells, or as a continuous fluid with no individual particles. Benchmarking is checking agreement between different codes to solve the same problem. Verification is checking that the computed results agree with the physical model; that is, that the code solves the equations correctly. Validation is checking that the results agree with experiment; that is, that the equations are the right ones to solve. Plasma physics is more complicated than, say, accelerator physics, where only a few particles have to be treated at a time. Because even the models (equations) describing a plasma cannot be exact, the development of fusion could not proceed until the science of computer simulation had been developed.