Although articles on solar power appear often in public media, it is not always made clear that there are many ways to capture that energy, and that these methods are quite different from one another. First, there is local solar vs. central solar. Locally, sun falls on every rooftop, and there is no excuse not to use this free energy. Centralized solar power plants are another matter. These take up large areas and have to transmit the energy from sparsely populated to densely populated regions. The plants also have to compete with coal and nuclear plants on cost.
Fig. 3.21 Distribution of average solar energy incident onto the earth, with the darker colors indicating more sunlight (http://images.google.com). This shows that solar power is most abundant in the least populated regions of the earth
Fig. 3.22 Distribution of average solar energy incident onto the USA, with the red colors indicating more sunlight (http://images.google.com). This shows how difficult it would be to transport solar energy from the southwest to where it is needed in the northeast
There is also a big difference between solar thermal and solar electric. In solar thermal, sunlight is used to heat a liquid, typically water, and that heat is either used directly for heating or is used to generate electricity. Local use of solar thermal is very simple: water heated on the roof can directly reduce one’s gas or oil bill. Centralized solar thermal is literally done with mirrors. Acres of mirrors motorized to follow the sun focus the sunlight into a boiler on top of a tower. There a liquid such as water or liquid salt is rapidly heated and stored in tanks on the ground. Since heat is hard to transport long distances, the hot liquid is used in a steam generator to produce electricity. Most of the energy is then lost in the thermal cycle, as was explained in Chap. 2.
Solar electric is commonly called photovoltaic or PV. There are two main kinds of PV: silicon and thin film. Solar cells made of silicon are expensive, and there are several kinds of these: polycrystalline, amorphous, and microcrystalline. Polycrystalline silicon solar cells can be very efficient, but they are so expensive that they are used where cost does not matter, as in space satellites. Amorphous silicon cells are less efficient but much less expensive, and they could be competitive in the market. The new microcrystalline silicon cells under development may turn out to be a good compromise. The fastest growing segment, however, is thin-film solar cells. These are much cheaper than silicon ones, use very little material, and can be used for both local and central power. Although thin-film cells are the least efficient of all, the possibilities for their deployment are tremendous. For instance, windows could conceivably be coated with thin-film cells. The following sections will tell how these various solar energy methods work.
In a generic reactor, fuel rods are carefully spaced inside the moderator — water, say — so that each neutron generated inside a fuel rod and slowed down in the moderator produces just one neutron when it causes fission in another fuel rod. The fuel is uranium oxide, UO2, a black powder created from UF6, pressed into pellets, sintered, and ground to size. The pellets are slid into thin tubes about the diameter of a pencil and 5 m long. The pellets cannot be large because the heat generated inside has to escape to the coolant. Also, since most of the uranium is U238, the neutrons have to get out of the pellet into the moderator before they are absorbed by the U238. The coolant is usually the same kind of water as the moderator, but it gets hot and carries the output energy. Hundreds of fuel rods make up a fuel assembly, and hundreds of assemblies make up the fuel load, which can weigh 100 tons. The fuel lasts about four years, and one-fourth of it is renewed each year. There has to be enough fuel to make up a critical mass, ensuring that at least one neutron from each reaction will find another U235 nucleus to split. The fuel assemblies have to be spaced just right inside the moderator for this to happen. When fuel assemblies are renewed, they are shuffled so that the new ones and the half-used ones are distributed evenly. The heat produced is carried away by the coolant and is used to generate electricity at 30% efficiency in steam turbines. One ton of fuel can generate 30 MW of power and 40 GW-days of energy.
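The last two figures are consistent with the four-year fuel life quoted above, as a quick back-of-the-envelope check shows (a sketch using only numbers from the text):

```python
# Check that "30 MW per ton" and "40 GW-days per ton" imply the
# quoted four-year fuel lifetime.
power_per_ton_MW = 30
energy_per_ton_GWd = 40

lifetime_days = energy_per_ton_GWd * 1000 / power_per_ton_MW  # GW-days -> MW-days
lifetime_years = lifetime_days / 365

print(f"{lifetime_days:.0f} days, about {lifetime_years:.1f} years")
```

The result, about 1,333 days or 3.7 years, matches the statement that the fuel lasts about four years with one-fourth renewed annually.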
Walking past Harold Furth’s office one day, I saw this huge Chiquita Banana balloon hanging down from the ceiling. “What’s going on?” I asked. “Welcome to banana theory,” he replied, “the fruitful approach to fusion!” This was the beginning of a new understanding of how particles move in a torus. We knew that bending a cylinder into a torus would induce vertical drifts, and we knew how to counteract those by twisting the field lines into helices. But there were more subtle toroidal effects that we did not know about for the first 15 years. To explain banana orbits, we first have to describe magnetic mirrors.
If a magnetic field is not uniform — that is, if its strength changes as you move along a field line — it can reflect a charged particle and cause it to go backwards. This is the same effect that makes two permanent magnets repel each other when you turn one around so that their polarities don’t match. There are toys that use this repulsion effect to suspend a magnetic object in midair. In Fig. 4.3b in Chap. 4, we showed that an electromagnet can create a magnetic field with coils of wire carrying a current. The ions and electrons gyrating in their circular orbits in a magnetic field are like electromagnets, since they are like one-turn coils carrying a current, even if the current is lumped into one charged particle. Figure 6.4 shows the field of a gyrating ion immersed in the nonuniform field of a normal electromagnet. The ion’s magnetic field is always in the opposite direction to that of the field it’s immersed in. Why? Because a physical system always tries to fall into the lowest energy state. By canceling part of the background magnetic field, the ions can lower the total magnetic energy. Electrons will do the same even though they have negative charge. They rotate in the opposite direction, but being negative, they carry current in the same direction as the ions do.
Fig. 6.4 Reflection of an ion heading into a stronger magnetic field. The field generated by the ion’s gyration is shown in red
In Fig. 6.4, an ion, carrying the magnetic field that it generates, moves to the right. The field lines on the right are of a background magnetic field generated by large coils outside the plasma. The field lines generated by the current of the gyrating ion are shown in red. The opposing fields push the ion backwards, like two permanent magnets with opposite polarity. The ion’s motion to the right is slowed up. The ion is moving into a stronger field, since the black lines are getting closer together. When the external field gets too strong, the ion cannot go any farther and is reflected back. How far the ion goes depends on how fast it was moving from left to right. However, not all ions will get reflected because the background field has a maximum strength. If the ion comes in with enough energy to go through the maximum, it gets slowed up there, but it is able to go through and regain its velocity on the other side. A converging magnetic field is a magnetic mirror that can reflect all but the fastest ions. This mechanism of magnetic mirroring was used by Enrico Fermi to explain the origin of cosmic rays. There, the interstellar magnetic fields are moving very rapidly, and they can push ions up to very high energies. Why can’t we use magnetic mirrors to trap and hold a plasma? Indeed, we can, but magnetic mirror systems have not worked out as well as tokamaks. Mirrors will be described in Chap. 10.
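The reflection condition can be made quantitative. The particle's magnetic moment μ = mv⊥²/2B is conserved along with its energy, so it is turned back wherever the field grows enough to convert all of its parallel motion into gyration. A minimal sketch (the mirror-ratio condition below is a standard textbook result, not derived in this text):

```python
import math

def is_reflected(pitch_angle_deg, mirror_ratio):
    """A particle entering the weak-field region with pitch angle theta
    (the angle between its velocity and the field line) is reflected if
    sin^2(theta) >= B_min/B_max = 1/mirror_ratio, by conservation of
    energy and of the magnetic moment mu = m*v_perp^2 / (2B)."""
    s = math.sin(math.radians(pitch_angle_deg))
    return s * s >= 1.0 / mirror_ratio

# For a mirror ratio of 4, the escape ("loss cone") half-angle is
# asin(sqrt(1/4)) = 30 degrees.
print(is_reflected(45, 4))   # mostly gyrating across B -> reflected
print(is_reflected(10, 4))   # mostly streaming along B -> gets through
```

This is why the fastest ions along the field line escape: for a given gyration speed, more parallel velocity means a smaller pitch angle.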
Now we can get to the bananas. Tokamaks also have magnetic mirrors, but they hinder rather than help the confinement. Recall from Fig. 4.14 in Chap. 4 that the magnetic field is always stronger on the inside of a torus, near the hole, than on the outside because the coils are closer together in the hole, and therefore the field near one coil also gets contributions from the neighboring coils. That means that there is a nonuniform magnetic field, and particles going from a weak field to a strong field might get reflected. Ideally, particles travel along helical field lines on a magnetic surface and never leave it. However, magnetic mirroring prevents this, as shown in Fig. 6.5.
Fig. 6.5
In this figure, the dashed line is a helical field line. An ion does not actually follow this line exactly unless its Larmor radius is zero. When it gyrates in a finite-sized circle, it will drift slowly from one line to another, as shown in Fig. 4.10, if the magnetic field strength is not the same on every side of its orbit. The helical twisting cancels out the vertical drift on the average, but the averaging is disrupted by the mirror effect. The actual ion orbit is like the one shown by the solid line in Fig. 6.5. This ion starts out on the outside of the torus, where the field is weak, and it loops around toward the inside, where the field is strong. If it is not moving fast enough, it will be reflected by the magnetic mirror effect and come back on a slightly different path. Only ions with enough energy parallel to the field line will make it around to the inside of the torus and sample all parts of a magnetic surface as we envisioned in our earlier naive picture of magnetic bottles. If we project the path of the ion in Fig. 6.5 onto the cross section of the torus shown there, it will look something like Fig. 6.6.
These are the so-called banana orbits. In each case, the outside of the torus is on the right side of the cross section, and the strong field near the hole in the doughnut is on the left. The small banana in panel (a) is for a particle with small velocity parallel to the magnetic field; it gets reflected before it gets very far toward the inside. The dashed line is the path of a passing particle, one that gets through the mirror and can come all the way around. In panel (b), the particle has larger parallel velocity and goes farther to the left, describing a larger banana. The limiting case is shown in panel (c), where the particle nearly makes it through the mirror. Tom Stix whimsically dubbed this the WFB, the World’s Fattest Banana.
Fig. 6.6 Banana orbits of particles with increasing parallel velocities
Banana orbits were discovered theoretically. They have never been seen in experiment because it would be very hard to track the path of a single ion or electron in a plasma with more than a trillion particles per cubic centimeter. However, theory predicts the consequences of banana orbits, and these unfavorable effects are well established by experiment. It’s easy to understand why these bananas bear bitter fruit. When an ion makes a collision, instead of jumping from one Larmor orbit to an adjacent one, it jumps from one banana orbit to the next; and banana orbits are much wider. Instead of the very slow rate of “classical” diffusion that we described in Chap. 5, the rate of plasma transport across the magnetic field is much faster in a torus than in a straight cylinder. The rate of banana diffusion can be calculated easily and is called neoclassical diffusion. It is a characteristic of toruses that was not initially foreseen. The good news is that it is still a classical effect; that is, it can be calculated using a known theory. Figure 6.7 shows how banana diffusion differs from classical diffusion. At the left-hand side, the collision rate between ions and electrons is very small, so small that an ion can traverse one or many banana orbits before making a collision. In the middle, flat part of the curve, the trapped ions (those making banana orbits) make collisions during a banana orbit, but the passing particles, being faster, do not. In the right-hand part, the collision rate is high enough that all particles make collisions in traversing the torus. Under fusion conditions, the plasma is so hot and so nearly collisionless that it is well into the banana regime, at the extreme left of the graph. Therefore, it is clear that the banana diffusion rate is much higher than the classical one, shown by the straight line at the bottom.
One might think that the closer a torus is to a cylinder, the smaller the banana effects will be. The aspect ratio A of a torus is the major radius R divided by the minor radius a, as shown in Fig. 6.8. A fat torus would have small A and a skinny one, large A. One would think that large A would have smaller banana diffusion, but this is not always true. It depends on many subtle effects which can cancel one another. The Kruskal-Shafranov limit states that q (the inverse rotational transform) has to be larger than 1. For a given value of q, banana diffusion is actually larger for large A. This is primarily because the ion has to go a long way around the torus before it turns around, and it is drifting vertically the whole time.
An even stranger, counter-intuitive effect has to do with the width of a banana orbit. It turns out that this width depends only on the strength of the poloidal field Bp and not on the toroidal field Bt. Remember that Bp is only the small field generated by the plasma current that gives the field lines a small twist. The banana width is approximately the Larmor radius of an ion calculated with Bp instead of Bt. This is much larger than the real Larmor radius, calculated with Bt. Since banana diffusion goes by steps of the size of a banana width, which depends only on the relatively weak Bp, does this mean that the much stronger toroidal field is useless? No! The toroidal field is needed to make the real Larmor radius small so that we can consider only the movement of the guiding centers, not the actual particles. If the toroidal field were eliminated, the gyration orbits would be so large that magnetic confinement would be no good at all, and furthermore there would be nothing to hold the plasma pressure.
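To see the size of the effect, one can compare the real Larmor radius with the "poloidal" one. A sketch with illustrative numbers (the 10-keV deuteron and the field strengths of 5 T and 0.5 T are assumed for this example; they are typical tokamak magnitudes, not values from the text):

```python
import math

# Larmor radius r_L = m * v_perp / (q * B) for a deuteron with 10 keV
# of perpendicular energy.
m_D = 3.34e-27                   # deuteron mass, kg
q = 1.602e-19                    # elementary charge, C
E_perp_J = 10e3 * 1.602e-19      # 10 keV in joules
v_perp = math.sqrt(2 * E_perp_J / m_D)

B_toroidal = 5.0                 # tesla (strong main field, assumed)
B_poloidal = 0.5                 # tesla (weak field from plasma current, assumed)

r_true = m_D * v_perp / (q * B_toroidal)    # the real Larmor radius
r_banana = m_D * v_perp / (q * B_poloidal)  # sets the banana-width scale

print(f"Larmor radius in B_t: {r_true * 100:.1f} cm")
print(f"'Poloidal' Larmor radius (banana-width scale): {r_banana * 100:.1f} cm")
```

With these numbers the banana step is ten times the true Larmor radius, which is why diffusion by banana-sized steps is so much faster than classical diffusion.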
Figure 9.2 is a more realistic drawing of the ITER machine than shown in Chap. 8. The plasma will occupy the D-shaped vacuum space surrounded by tiles. These tiles are the plasma facing components (PFCs), commonly called the “first wall.” They have to withstand a tremendous amount of heat from the plasma and yet must not contaminate the plasma and be compatible with the fusion products that impinge on them. Early tokamaks used stainless steel, but clearly this is not a high-temperature material. Current tokamaks use carbon fiber composites (CFCs), a light, strong, high-temperature material that is used in bicycles, racing cars, and space shuttles. Just as rebars are used to strengthen concrete, carbon fibers are used to strengthen graphite. However, carbon cannot be used in a reactor because it absorbs tritium, which would not only deplete this scarce fuel but also weaken the CFC. After all, hydrocarbons like methane and propane are very common, stable compounds; and tritium is just another form of hydrogen and can be captured by the carbon to form hydrocarbons.
Tungsten is a refractory metal, but it is high-Z; that is, it has a high atomic number and therefore has so many electrons that it cannot be completely ionized. The remaining electrons radiate energy away, cooling the plasma. The good thing about hydrogen and its isotopes is that they have only one electron, and once that electron is stripped free of the nucleus by ionization, the atom can no longer emit light. Beryllium is a suitable low-Z material, but it has a low melting point, and so has to be cooled aggressively. In preparation for ITER, the European tokamak JET is being upgraded with a beryllium first wall. In short, the first-wall material must not absorb tritium and must have a low atomic number, take high temperatures, and be resistant to erosion, sputtering, and neutron damage.
Fig. 9.2 Diagram of ITER, showing the “first wall” and openings (ports) where experimental modules can be inserted for testing [29]
ITER, of course, is only the first step. There are large steps between ITER and DEMO and between DEMO and a full reactor. Some large numbers on the first wall are given in Table 9.1. We see that the step between ITER and DEMO is much larger than that between DEMO and a reactor. Hence the call for a materials-testing facility intermediate between ITER and DEMO.
Table 9.1 Loads on the first wall
The fusion power is given in gigawatts. A typical power plant generates 1 GW of electricity; and perhaps 5 GW of fusion power is needed to give that, since the tokamak needs power to run, and there is still a heat cycle in a steam plant to produce electricity. The heat flux impinging on the first wall is about 0.5 MW/m2. This translates to 50 W/cm2 or about 300 W/sq. in. This is not much more than the surface of an electric iron, though the total heat is considerable. The real problem is in the divertor, which has to handle most of the heat from the plasma. Divertors will be covered later.
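The unit conversion in this paragraph is easy to verify (a quick sketch; only the 0.5 MW/m² figure comes from the text):

```python
# Convert the first-wall heat flux into the smaller units quoted above.
flux_MW_per_m2 = 0.5

flux_W_per_cm2 = flux_MW_per_m2 * 1e6 / 1e4   # 1 m^2 = 10^4 cm^2
flux_W_per_in2 = flux_W_per_cm2 * 2.54 ** 2   # 1 in = 2.54 cm

print(flux_W_per_cm2)          # 50.0 W/cm^2
print(round(flux_W_per_in2))   # 323 W/in^2
```

The 50 W/cm² figure is exact; the per-square-inch value comes out nearer 320 than 300, but the order of magnitude is the point of the comparison.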
The neutron load is the energy of the 14-MeV neutrons from the D-T reaction which pass through the first wall. This energy is not deposited in the first wall, but the neutrons damage the wall. The neutron load summed over the life of the wall is what matters. This is much larger for a reactor than for ITER, since ITER is just an experiment, while a reactor should last about 15 years before it has to be revamped. The neutron damage is measured in displacements per atom (dpa). The longer the material is exposed to a neutron flux, the more times one of its atoms will be knocked out of place by a neutron. After many dpa’s, the material will swell or shrink and become so brittle as to be useless.
Beryllium melts so easily that it cannot be used in a reactor. Boron coating has been tried successfully, but it also cannot take high temperatures. Tungsten seems to be the best available wall material because it does not erode or sputter easily and has a high melting point of 3,410°C. However, it is a high-Z material and also cannot be machined easily. A liquid lithium first wall has been considered, but it is no longer proposed. Silicon carbide (SiC) is a promising material that has been studied extensively in the laboratory but has no known method of manufacture in large quantities [1]. How SiC compares with other materials in operating temperature is shown in Fig. 9.3. These temperature ranges are for irradiated materials, so that the swelling and fracture effects caused by neutrons are included. Carbon fiber-reinforced graphite (C/C) can take high temperatures, but carbon cannot be used because of tritium retention. Tungsten and molybdenum are classical refractory metals but will cool the plasma if they sputter into it. Silicon carbide reinforced with layers of SiC fibers (SiC/SiC) seems to be the ideal material for the first wall if it can be made without impurities. It takes high temperatures, is quite strong, and is resistant to radiation damage. It can last for the 15-year life of the reactor. Its properties have been measured in fission reactors [2]. The main drawback is a thermal conductivity lower than that of other materials.
The latest high-tech material is a SiC matrix/graphite fiber composite [1], which has increased thermal conductivity in addition to the other good properties. These advanced materials cannot be designed with existing computer programs, which are applied only to metals. Some reactor studies assume that SiC first-wall material will be available. Though SiC composites have tremendous potential, much research and testing remain to be done before they become a reality.
Expenses and income are both functions of time. Costs start accruing when a power plant is proposed and initial studies are made, for instance, on environmental impact. Land is purchased, the plant is designed, equipment is ordered, and construction begins. This takes many years. The plant is finished and begins producing power. Profits begin to be made, year by year. At the same time, there are expenses for operating the plant, and for repairing and replacing equipment. To get a reasonable number for the COE, one has to adjust all the expenses and income forward or backward to the same date. Time is money. This is called discounting. It is done with another formula:
$$\mathrm{COE} \;=\; \frac{\displaystyle\sum_t \,(C + OM + F + R + D)_t \,/\, (1+r)^t}{\displaystyle\sum_t \,E_t \,/\, (1+r)^t}$$
This formula is unfamiliar to physicists but may be familiar to readers involved in business or finance. Here C is the capital cost, OM is operation and maintenance, F is the fuel cost, R is the cost of replacements, D is the cost of decommissioning at the end of life, and r is the discount rate. In the denominator, E is for earnings. The sum is over time t. To derive a value at time zero for an expense or income occurring at another time, a discount has to be applied. The discount rate is like an interest rate but also includes expectations of what the market will be like, how much inflation there will be, and factors like those. Financiers normally assign a discount rate between 5 and 10%.
Suppose we want to calculate the COE as of the start of planning. We set that as t = 0. For simplicity, let us do the accounting annually, not monthly or daily. Suppose it takes five years to get ready and five years to build the plant, and that the plant then operates for another five years. For years t = 1-5, we have the money C1 through C5 spent in those years, which covers only interest on borrowed money, salaries, and rental of office space. For years t = 6-10, C will be much larger, as the plant is built. For years 11-15, we have C + OM + F for those years, and also the E earned in those years. Each year’s amounts are divided by (1 + r) raised to the power t in order to get their value as of t = 0. Both the numerator and the denominator are summed over the years, and the ratio is the COE. In later years, there will also be values for R and D.
To get a better idea of what discounting means, let us consider a simple example. Suppose you borrow $1M to build a machine, taking five years to do so. At the end of the five years you sell the machine for $1M. However, you could not have sold that machine for $1M at Year 0, since that machine did not exist yet and you could not make any money with it. It has a smaller discounted value at Year 0, given by C/(1 + r)^5, according to the formula above. If C = $1M and the discount rate is r = 5%, we have a value at t = 0 of C/(1.05)^5, which works out to be only $0.784M. The reason is that you had to pay compound interest during the five years. One million dollars compounded annually at 5% is $1M times (1.05)^5, which is $1.276M. You had to pay $0.276M in interest, so you made only $0.724M, and that is closer to the value of the machine at t = 0. Actually, you did not have to borrow all the money at once, so the discounted, or levelized, value is $0.784M, which is exactly the reciprocal of 1.276.
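The arithmetic of this example can be checked in a few lines (a sketch of the compounding and discounting relation described above):

```python
# The borrowed-$1M example, worked numerically (r = 5%, 5 years).
r, years = 0.05, 5
C = 1.0  # $M

compounded = C * (1 + r) ** years   # what $1M grows to at 5% over 5 years
discounted = C / (1 + r) ** years   # value today of $1M received in 5 years

print(f"compounded: ${compounded:.3f}M")   # $1.276M
print(f"discounted: ${discounted:.3f}M")   # $0.784M
# Note discounted = 1/compounded: discounting is compounding run backwards.
```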
This exercise points out that a large part of the cost of any power plant, regardless of its power source, is interest during construction. If the discount rate is 7.5% (halfway between 5 and 10%), and the plant takes five years to construct, summing over the discounted value of one-fifth of the capital cost for each of five years shows that 20% of the cost is interest and other financial factors. The levelized COEs of all different kinds of power plants (except fusion) in many different countries have been analyzed in exhausting detail by the International Energy Agency.
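The 20% figure can be reproduced under a simple assumption about payment timing (the equal annual installments and end-of-year discounting below are illustrative assumptions; the text does not specify them):

```python
# Discount one-fifth of the capital cost C for each of five
# construction years back to t = 0 at r = 7.5%.
r, years = 0.075, 5
C = 1.0

present_value = sum((C / years) / (1 + r) ** t for t in range(1, years + 1))
financing_share = 1 - present_value

print(f"discounted value of construction spending: {present_value:.3f}")  # ~0.809
print(f"share attributable to financing: {financing_share:.0%}")          # ~19%
```

The discounted spending comes to about 81% of the nominal capital cost, so roughly 20% of the cost is attributable to interest and other financial factors, as stated.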
One hundred years from now, what will a fusion reactor look like? Most of the ideas described in this chapter will have been discarded, and a few will have been combined. Once the period of patched-up Rube-Goldberg-like experiments is over, private industry will develop a simpler and cheaper system that self-organizes into a stable configuration and keeps itself hot without much external power. The reactor will probably have a roundish shape, like that of a compact tokamak. The fuel will probably be p-B11, which does not require tritium breeding and generates few neutrons. Or it could be d-He3, though He3 would have to be made in an auxiliary fission reactor. The magnetic surfaces will be closed and have the interior good curvature of a spherical tokamak (Fig. 10.12). They may look like those of the Chandrasekhar-Kendall-Furth force-free configuration shown in Fig. 10.56.
There will be natural divertors at the top and bottom. The exterior regions above and below the divertor necks can be expanded like those of an axisymmetric mirror (Fig. 10.27) to create more good curvature for stabilization. High-energy alpha particles leaving the divertors can be channeled into direct converters to generate high-voltage DC directly. The central core can be slid up or down continuously to be refreshed without a shutdown.
This is a dream, but we can hope.
Fig. 10.56 Conceptual magnetic configuration of a third-generation fusion reactor (http://www.frascati.enea.it/ProtoSphera/ProtoSphera%202001/3.%20Chandrasekhar-Kendall-Furth.htm)
Conclusions
Ah, but a man’s reach should exceed his grasp, or what’s a heaven for?
Never have these oft-quoted words by Robert Browning been more pertinent. The very existence of man depends on his ability to get energy when nature’s bounty runs out. We may not succeed in creating our own Promethean fire, but it’s within reach.
Fusion is a solution to both climate change and energy shortage. Fusion energy is inexhaustible and nonpolluting.
Fusion will cure our dependence on oil. There will be no need to wage wars in the Middle East. With unlimited energy, there will be electricity or hydrogen to run cars.
With unlimited energy, desalination can provide fresh water in all coastal regions.
Fusion cannot explode or be proliferated.
Fusion does not need to disturb the environment or wildlife habitats. Reactors can be located on the sites of aging coal or nuclear plants. In particular, they can be located near population centers. No new cross-country transmission lines need to be built urgently.
Fusion is the only energy source that can sustain mankind for future centuries and millennia. The sooner we get it, the less we need to spend on temporary solutions.
There is no dearth of ideas on new ways to make solar cells, but these are not yet practical. Solar power has a great advantage in the development stage over other technologies such as wind, nuclear, or fusion. New ideas can be explored on a small scale. No large machines or wind turbines have to be constructed. Experimental solar cells can be as small as 1 cm2. This means that new ideas can be developed profitably by small companies, thus shifting the research burden to the commercial sector. Large, government-funded installations are still needed for commercial viability, but not for testing new ideas. These ideas fall into the category of Generation III solar cells, as shown in Fig. 3.44.
Fig. 3.44 The three generations of solar cells, plotted according to cost and efficiency [16]. The horizontal axis is in dollars per square meter, while the diagonal lines give the cost in dollars per peak watt. The horizontal dashed line is a theoretical limit explained in the text
In this graph, the efficiency of solar cells is plotted against their cost per square meter and per peak watt. The three elliptical areas are where Generations I, II, and III lie. Generation I comprises the single-junction silicon cells, costing more than $3.50 per peak watt and achieving efficiencies no higher than 18%. Generation II contains the thin-film and organic cells, which are much cheaper but have low efficiencies. Generation III includes multijunction cells with efficiencies above 40% and new ideas which are still in the thinking stage. The efficiencies of these solar cells can exceed the theoretical maximum of 31% known as the Shockley-Queisser limit. The limit applies to single-junction cells in unconcentrated sunlight whose photons produce only one electron each and whose excess energy is lost as heat. Generation III cells go higher by violating these conditions. For instance, concentrating the sunlight can give more than one electron per photon, and new nanomaterials can capture the excess energy as current [16].
Organic waste from human activities or natural swamps contains energy. Many societies already produce methane from cow dung or even human waste. Low-tech companies have sprouted up to make biofuels from deep-fry oil, left-over beer, or even onions. Almost all of these efforts are to produce fuel for transportation, which has already been treated in this chapter. There is only one application to general energy production. This is to mix biomass with the fuel in a fossil-fuel plant. The same amount of power can be generated with less coal. Small plants burning only biomass would be very inefficient.
Artificial photosynthesis is an interesting development that does not generate energy. Using chlorophyll, plants convert water, carbon dioxide, and sunlight into carbohydrates and oxygen. Daniel Nocera at the Massachusetts Institute of Technology has been able to split the water molecule in the laboratory using special catalysts and energy from solar cells (or the grid). Two electrodes, one of indium-tin oxide and the other of platinum, are immersed in a solution containing cobalt and potassium phosphate. When a voltage is applied, oxygen bubbles come out at one electrode and hydrogen at the other. The catalysts reform themselves. This process does not produce energy; it produces hydrogen, which can store solar energy during the night.
The inventiveness of the human mind has spawned a large number of crazy ideas for generating energy or slowing global warming. Some are described in the Solar Power and Geoengineering sections. For instance, there is a plan to put square miles of silicon solar panels into synchronous orbit around the earth, convert the solar power into microwaves, and then beam the microwaves back to earth. Another is to place a huge mesh of wires at the point where the sun’s and earth’s gravitational fields cancel. The mesh would scatter sunlight so that less of it falls onto the earth, thus reducing global warming (and perhaps triggering the next ice age). There are wind scrubbers that catch CO2 as it comes by in the wind. Dumping huge amounts of iron filings into the ocean would spawn vast blooms of plankton, which absorb CO2. These ideas appear often in the popular literature. Astute readers will recognize the ridiculous ones and have a good laugh.
We saw in Chap. 5 that the plasma in a fusion reactor has to have a temperature of at least 10 keV (about 100 million degrees), but most of our deliberations have been about the problem of keeping a plasma from leaking out of its magnetic container. Isn’t heating to 50 times the temperature of the sun a bigger problem? The problem is nontrivial, but there have been no unexpected effects comparable to, say, microinstabilities. The simplest way to heat a plasma is to drive a current through it.
Fig. 7.13 Diagram of a D-shaped tokamak with divertors (drawing by Tony Taylor of the DIII-D tokamak configuration at General Atomics, San Diego, California)
A current is needed in a tokamak anyway to produce the poloidal field. This is ohmic heating, which happens whenever there is resistance in a wire carrying a current, such as in a toaster. The plasma in a tokamak can be considered a one-turn wire loop, even though it is a gaseous one. It has a resistivity due to electron-ion collisions. When a voltage is applied around the loop, the electrons carry the current; and when they collide with ions, their velocities get randomized into a bell-shaped distribution, raising the temperature. The usual way to apply an electric field to loops of wire is to use a transformer, a common household device. It is the heavy piece of iron found in fluorescent lights and in the power bricks of electronic devices like cell phone chargers. Very large transformers are used to convert the high voltage of the power line (as much as 10,000 V) down to the household 115 V AC that we use in the USA or the 230 V used in Europe. We know about these because they sometimes blow up, causing a power outage.
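As a rough illustration of the scale of ohmic heating, one can apply P = I²R to the plasma loop. The current and resistance below are assumed, illustrative values (roughly the magnitudes of a mid-size tokamak), not numbers from the text:

```python
# Ohmic heating of the plasma "one-turn loop": P = I^2 * R.
I = 1.0e6     # plasma current in amperes (1 MA, assumed)
R = 3.0e-6    # loop resistance in ohms (a few micro-ohms, assumed)

P = I ** 2 * R
print(f"Ohmic heating power: {P / 1e6:.0f} MW")
```

Even with a resistance of only a few millionths of an ohm, a megaampere of current deposits megawatts of heat in the plasma.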
The first tokamaks used transformers for ohmic heating, as illustrated in Fig. 7.14. A pulse of current in the primary winding (shown as the three turns on the outer legs) drives a larger current in the plasma, which forms a one-turn secondary winding. This method was OK for small research machines, but the transformer would be too large in a large machine. Instead, one can use an air-core
Fig. 7.14 Use of an iron-core transformer to drive ohmic heating current |
Fig. 7.15 Use of an air-core transformer to drive ohmic heating current |
transformer without the iron, as shown in Fig. 7.15. What are shown are toroidal coils, known as OH (ohmic heating) coils, which go around the torus the long way. A pulsed current in the OH coils induces a current in the opposite direction in the plasma. This is inefficient compared to an iron transformer, but it is easier to drive a large current in the OH coils than to create the space for a large iron transformer. The “Equilibrium Field Coil” in that figure generates the vertical field described at the end of Chap. 6. Note that Fig. 7.15 is intended only to show the principle; actual “poloidal-field coils” are numerous toroidal coils located mostly on the outside of the torus and combine the currents necessary for equilibrium, ohmic heating, and shaping of the plasma.
At this point, the words poloidal and toroidal have been used so often that it may be well to review what these terms mean to avoid any further confusion. A toroidal line goes along a doughnut, or even a pretzel, the long way, tracing out a circle in the case of a doughnut and a figure-8 in the case of a pretzel. A poloidal line goes the short way around the cross section of a doughnut, encircling the dough but not the hole. What is confusing is that magnetic and electric fields are generated differently by currents flowing in coils. For magnetic fields, a toroidal field is generated by poloidal coils which pass through the hole and encircle the plasma. Thus, the main toroidal magnetic field of a tokamak is generated by poloidal coils called toroidal-field coils! These are the blue coils seen in Fig. 7.15. A toroidal coil generates a magnetic field passing through the coil. Thus, the largest red coils in Fig. 7.15 generate a more or less vertical magnetic field, which is poloidal even though it does not actually encircle the plasma the short way. For electric fields, the opposite is true: a toroidal coil will generate a toroidal current. Thus, the smaller toroidal red coils in Fig. 7.15 are used to induce toroidal currents in the plasma. These are the OH coils. It is not necessary to understand this. Creating the fields we need is straight electrical engineering, and there are no unexpected plasma instabilities!
Ohmic heating cannot be the primary heating method in a fusion reactor for two reasons. First, OH cannot raise the plasma temperature high enough for fusion because, as explained in Chap. 5, the plasma is almost a superconductor at those temperatures. Collisions are so rare that the plasma's resistance is almost zero, and resistive heating becomes very slow. Second, transformers work only on AC, whereas a fusion reactor must be on all the time in a DC fashion. The current induced in the secondary depends on an increasing current in the primary, and that current cannot increase forever. That is why tokamaks up to now have been pulsed, though very long pulses, of the order of minutes, are now possible. Other heating methods are used which can operate in steady state. Remember, however, that aside from ohmic heating, a current is necessary in a tokamak for producing a rotational transform — the twisting of the field lines. Fortunately, there are other ways to generate DC current for that purpose. One way is to launch a wave in the plasma that can push electrons along the magnetic field. Another is the "bootstrap current," a naturally occurring phenomenon that we describe in the "Mother Nature Lends a Hand" section. Stellarators are toroidal machines that do not need a current, since the rotational transform is generated by twists in the external coils. Hence, stellarators avoid the problem of current drive. They may ultimately be the way fusion reactors are constructed, but up to now we have had much more experimental experience with tokamaks.
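The claim that a hot plasma is "almost a superconductor" can be made quantitative. A minimal sketch, using the approximate textbook Spitzer formula for the parallel resistivity of a hydrogen plasma (with an assumed Coulomb logarithm of 10), shows how steeply the resistivity falls as the temperature rises:

```python
# Spitzer resistivity (parallel) of a hydrogen plasma, in ohm-meters.
# Approximate textbook form: eta ~ 5.2e-5 * Z * ln(Lambda) / T^(3/2), T in eV.
# The T^(-3/2) scaling is why ohmic heating stalls at fusion temperatures.

def spitzer_resistivity(t_ev, z=1.0, coulomb_log=10.0):
    """Approximate plasma resistivity in ohm-m for temperature t_ev in eV."""
    return 5.2e-5 * z * coulomb_log / t_ev ** 1.5

COPPER = 1.7e-8  # room-temperature copper, ohm-m, for comparison

for t in (10, 100, 1_000, 10_000):  # from a cool plasma up to the 10 keV regime
    eta = spitzer_resistivity(t)
    print(f"T = {t:>6} eV: eta = {eta:.1e} ohm-m ({eta / COPPER:.2g}x copper)")
```

At about 1 keV the plasma already conducts roughly as well as copper, and at 10 keV it is some 30 times better, so driving current through it deposits very little heat.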
Another way to heat a fusion plasma to the required millions of degrees is Neutral Beam Injection, or NBI to those who like acronyms. This is now the preferred method, and it works as follows. Neutral atoms of deuterium with high energy (between 100 and 1,000 keV) are injected into the plasma. Being neutral, these atoms can cross the magnetic field, and their velocity is adjusted so that they penetrate far into the plasma before being ionized. Once ionized, the beam becomes a beam of fast deuterium ions, which give their energy to the plasma electrons by "electron drag" and to the plasma ions by colliding with them, raising the temperature. Neutral atoms cannot be accelerated by an electric field because they have no charge, so to make a neutral beam one must start with charged particles. One can start with a positive ion, accelerate it, and then add an electron to make it neutral; or one can start with
Fig. 7.16 Neutral beam injectors on a tokamak |
a negative ion and then strip its extra electron to make it neutral. It is easier to do the latter. Hydrogen has an affinity for electrons, so negative deuterium (D−) ions are not hard to make. They are then accelerated in a relatively simple accelerator. The extra electron in D− is loosely bound, so it is easily stripped off when the beam passes through a little gas, leaving a fast neutral atom. Neutral beam injectors are very large and tend to take up more space than the tokamak itself. Figure 7.16 shows what a tokamak looks like when surrounded by neutral beam injectors. The beams can be aimed in different directions to give momentum to the plasma. Normally, it is best to use co-injection; that is, injection in the same direction as the tokamak current. This method of heating is powerful and necessarily changes the conditions of the plasma from what simple theory would predict. On the other hand, adjusting the beam affords another way to control the plasma.
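To get a feel for how fast these beam atoms are, a short sketch computes the non-relativistic speed v = sqrt(2E/m) of a deuterium atom at the beam energies quoted above:

```python
import math

# Speed of a deuterium atom accelerated to typical neutral-beam energies.
E_CHARGE = 1.602e-19     # J per eV
M_DEUTERIUM = 3.344e-27  # kg, mass of a deuterium atom
C_LIGHT = 3.0e8          # m/s

def beam_speed(energy_kev):
    """Non-relativistic speed v = sqrt(2E/m) of a deuterium atom, in m/s."""
    e_joules = energy_kev * 1e3 * E_CHARGE
    return math.sqrt(2 * e_joules / M_DEUTERIUM)

for e in (100, 1000):  # the 100-1,000 keV range quoted in the text
    v = beam_speed(e)
    print(f"{e:>5} keV: v = {v:.2e} m/s ({v / C_LIGHT:.3f} c)")
```

Even at 1,000 keV the atoms move at only a few percent of the speed of light, so the non-relativistic formula is adequate here.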
There are three other major methods of heating worth noting: ion cyclotron resonance heating (ICRH), electron cyclotron resonance heating (ECRH), and lower-hybrid heating (LHH). In cyclotron heating, a high-frequency electric field is launched into the plasma, and its frequency is adjusted to match the gyration frequency of the particles as they rotate around in the magnetic field. These circular Larmor orbits were shown in Fig. 4.9. The electric field changes its direction at the cyclotron frequency, so that as a particle moves around its circle, the field follows it and is always pushing the particle. Particles that start out of phase are at first decelerated by the field, but they soon fall into phase and are pushed as well. The particles collide with one another to thermalize, raising the temperature of the whole gas. This works for both ions and electrons, but the technology is entirely different in the two cases.
ICRH requires power generators with frequency in the tens of MHz (million cycles per second). This is in the radiofrequency range, between the bands used by AM and FM radios. Therefore, the generators are like those used by radio stations, only more powerful. The antenna, however, is not mounted on tall towers. It is a series of coils inside the vacuum chamber of a tokamak but outside the plasma, so that it does not get damaged.
ECRH requires generators of the much higher cyclotron frequency of electrons, around 50 GHz (billion cycles per second). This is in the microwave range. Microwave ovens and some telephones operate at the standard frequency of 2.4 GHz, some 20 times lower. The magnetron that is used in microwave ovens puts out about a kilowatt of power. In fusion, special gyrotrons have been developed which can produce tens of megawatts continuously. As in a microwave oven, ECRH does not need an antenna; the waves go through a hole. A very useful feature of cyclotron heating is that it is localized. Cyclotrons work because the frequency does not change with particle energy (until it goes beyond an MeV), but the frequency does change with magnetic field. Since the magnetic field in a tokamak is not the same everywhere, this means that only the plasma located at the right magnetic field gets heated, and this position can be changed by changing the frequency. We have seen how the profile of the tokamak current can change the magnetic topology and the q-value of the rotational transform. Localized heating can change all this, giving operators a way to control the stability of the tokamak.
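The two very different frequency ranges follow directly from the cyclotron formula f = qB / (2πm), since ions and electrons differ in mass by thousands of times. A minimal sketch, assuming a field of 2 tesla at the resonant location (a representative value, not taken from the text), reproduces the orders of magnitude quoted above:

```python
import math

# Cyclotron frequency f = qB / (2*pi*m), set by the field and the particle mass.
E_CHARGE = 1.602e-19    # C
M_ELECTRON = 9.109e-31  # kg
M_DEUTERON = 3.344e-27  # kg

def cyclotron_frequency_hz(b_tesla, mass_kg, charge=E_CHARGE):
    """Cyclotron frequency in Hz for a particle of given mass and charge."""
    return charge * b_tesla / (2 * math.pi * mass_kg)

B = 2.0  # tesla; an assumed field strength at the resonance
print(f"deuteron: {cyclotron_frequency_hz(B, M_DEUTERON) / 1e6:.0f} MHz")  # tens of MHz -> ICRH
print(f"electron: {cyclotron_frequency_hz(B, M_ELECTRON) / 1e9:.0f} GHz")  # tens of GHz -> ECRH
```

Because f depends on B, and B varies across the torus, tuning the generator frequency picks out the radius where the resonance, and hence the heating, occurs.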
Heating can also be accomplished by launching waves into the plasma using different frequencies and different types of antennas. These waves bear names like lower-hybrid wave or fast Alfvén wave and belong to a large array of waves that can exist in a magnetized plasma. By contrast, the unmagnetized, un-ionized air that we breathe can support only two kinds of waves, light and sound. It remains to be seen whether wave heating will be practical in a real fusion reactor.
Helium is not a rare gas if we can afford to fill the world's balloons with it. Actually, balloons account for only 16% of helium use. Cooling of semiconductors accounts for 33%, and the rest is used for industrial and scientific purposes. The atmosphere contains four billion tonnes of He, but it is not economical to extract it by cryo-distillation. Most of our helium comes from natural gas as a by-product. Thus, helium comes from fossil fuels and will be depleted in several decades along with natural gas, as discussed in Chap. 2. In this chapter, we have seen how critically fusion reactors, as envisioned today, will depend on helium in both extremely hot and extremely cold places. In the first wall and blankets, gaseous helium is used as a high-temperature coolant. The vacuum system uses liquid helium to cool the cryopumps. In the magnet system, liquid helium is what produces superconductivity. It is a closed system, but there are leaks. It is estimated that ITER will lose 48 tonnes of helium a year, about 0.15% of the world's current consumption. But if eventually fusion produces a third of the world's power, those reactors would need the world's supply of helium for a whole year just to start up [4]. At some point the helium losses, say, 10% of the inventory, would exceed what comes from natural gas. You will remember that helium is one of the products of the D-T reaction. At only a few percent burnup, however, this "ash" is a negligible contribution to the total demand. Helium is needed in other industries as well; for instance, in medical equipment. The shortage is so acute that a rationing system was proposed in the USA in 2010.