Before describing some effects that are not yet completely understood, we should mention the basis for believing that these problems are not insoluble. That's the important subject of computer simulation. In the 1970s and 1980s, when unanticipated difficulties with instabilities arose, computers were still in their infancy. To the dismay of both fusion scientists and congressmen, the date for the first demonstration reactor kept being pushed back by decades. The great progress seen in Fig. 8.1 since the 1980s was in large part aided by the advances in computers, as seen in Fig. 8.2. In a sense, advances in fusion science had to wait for the development of computer science; then the two fields progressed dramatically together. Nowadays, a $300 personal computer has more capability than a room-size computer had 50 years ago when the first principles of magnetic confinement were being formulated.
Computer simulation was spearheaded by the late John Dawson, who worked out the first principles and trained a whole cadre of students who have developed the science to its present advanced level. A computer can be programmed to solve an equation, but equations usually cannot even be written to describe something as complicated as a plasma in a torus. What, for instance, does wavebreaking mean? In Hokusai's famous woodblock print in Fig. 8.10, we see that the breaking wave doubles over on itself. In mathematical terms, the wave amplitude is double-valued. Ignoring the fractals that Hokusai also put into the picture, we see that the height of the wave after breaking has two values, one at the bottom and one at the top. Equations cannot handle this; Dawson's first paper showed how to handle this on a computer.
So the idea is to ask the computer to track where each plasma particle goes without using equations. For each particle, the computer has to memorize the x, y, z coordinates of its position as well as its three velocity components. Summing over the particles gives the electrical charge at each place, which leads to the electric fields that the particles generate. Summing over their velocities gives the currents generated, and these specify the magnetic fields generated by the plasma motions. The problem is this. There are as many as 10^14 ions and electrons per cubic centimeter in a plasma. That's 200,000,000,000,000 particles. No computer in the foreseeable future can handle all that data! Dawson decided that particles near one another will move together, since they will feel about the same electric and magnetic fields at that position. He divided the particles into bunches, so that only, say, 40,000 of these superparticles have to be followed. This is done time step by time step. Depending on the problem, these time steps can be as short as a nanosecond. At each time step, the superparticle positions and velocities are used to solve for the E- and B-fields at each position. These fields then tell how each particle moves and where it will be at the beginning of the next time step. The process is repeated over and over again until the behavior is clear (or the project runs out of money). A major problem is how to treat collisions between superparticles, since, with their large charges, the collisions would be more violent than in reality. How to overcome this is one of the principles worked out by Dawson.
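The time-stepping loop Dawson pioneered, now called the particle-in-cell (PIC) method, can be sketched in a few lines. This is a toy 1D electrostatic version with made-up parameters, meant only to show the deposit-solve-push structure, not production fusion code:

```python
import numpy as np

# Toy 1D electrostatic particle-in-cell (PIC) loop. All parameters are
# illustrative, chosen only to show the structure of the algorithm.
L = 1.0           # periodic box length
ng = 64           # number of grid cells
npart = 4000      # superparticles (each stands in for many real particles)
dt = 1e-3         # time step
dx = L / ng

rng = np.random.default_rng(0)
x = rng.uniform(0, L, npart)        # positions
v = rng.normal(0, 0.1, npart)       # velocities
q_over_m = -1.0                     # electron-like superparticles

for step in range(10):
    # 1) Deposit charge onto the grid (nearest-grid-point weighting).
    cells = (x / dx).astype(int) % ng
    rho = np.bincount(cells, minlength=ng) / dx
    rho = rho - rho.mean()          # neutralizing ion background

    # 2) Solve for the electric field with an FFT (periodic boundaries).
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])   # from div E = rho (units with eps0 = 1)
    E = np.real(np.fft.ifft(E_k))

    # 3) Interpolate the field back to each particle and push it.
    v += q_over_m * E[cells] * dt
    x = (x + v * dt) % L            # periodic wrap-around
```

A real fusion code works in three dimensions with magnetic fields and careful collision models, but the cycle per time step is the same: deposit charges, solve for fields, push particles.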
Fig. 8.11 Electric field pattern in a turbulent plasma (from ITER Physics Basis 2007 [26], quoted from [14]). The plot shows electric potential contours of electron-temperature-gradient turbulence in a torus
Before computers, scientists’ bugaboo was nonlinearity. This is nonproportionality, like income taxes, which go up faster than your income. Linear equations could be solved, but nonlinear equations could not, except in special cases. A computer does not care whether a system behaves linearly or not; it just chugs along, time step by time step. A typical result is shown in Fig. 8.11. This shows the pattern of the electric fields generated by an instability that starts as a coherent wave but then goes nonlinear and takes on an irregular form. This turbulent state, however, has a structure that could not have been predicted without computation; namely, there are long “fingers” or “streamers” stretching in the radial direction (left to right). These are the dangerous perturbations that are broken up by the zonal flows of Chap. 7.
The simulation techniques developed in fusion research are also useful in other disciplines, like predicting climate change. There is a big difference, however, between 2D and 3D computations. A cylinder is a 2D object, with radial and azimuthal directions and an ignorable axial direction, along which everything stays the same. When you bend a cylinder into a torus, it turns into a 3D object, and a computer has to be much larger to handle that. For many years, theory could explain experimental data after the fact, but it could not predict the plasma behavior. When computers capable of 2D calculations came along, the nonlinear behavior of plasmas could be studied. Computers are now fast enough to do 3D calculations in a tokamak, greatly expanding theorists' predictive capability. Here is an example of a 3D computation (Fig. 8.12). The lines follow the electric field of an unstable perturbation called an ion-temperature-gradient mode. These lines pretty much follow the magnetic field lines. On the two cross sections, however, you can see how these lines move in time. The intersections trace small eddies, unlike those in the previous illustration. It is this capability to predict how the plasma will move under complex forces in a complicated geometry that gives confidence that the days of conjectural design of magnetic bottles are over.
The science of computer simulation has matured so that it has its own philosophy and terminology, as explained by Martin Greenwald [15]. In the days of Aristotle, physical models were based on indisputable axioms, using pure logic with no input from human senses. In modern times, models are based on empiricism and must agree with observations. However, both the models and the observations are inexact. Measurements always have errors, and models can keep only the essential elements. This is particularly true for plasmas, where one cannot keep track of
Fig. 8.12 A 3D computer simulation of turbulence in a D-shaped tokamak (courtesy of W. W. Lee, Princeton Plasma Physics Laboratory)
every single particle. The problem is to know what elements are essential and which are not. Computing introduces an important intermediate step between theory (models) and experiment. Computers can only give exact solutions to inexact equations or approximate solutions to more exact (and complicated) equations. Computer models (codes) have to be introduced. For instance, a plasma can be represented as particles moving in a space divided into cells, or as a continuous fluid with no individual particles. Benchmarking is checking agreement between different codes to solve the same problem. Verification is checking that the computed results agree with the physical model; that is, that the code solves the equations correctly. Validation is checking that the results agree with experiment; that is, that the equations are the right ones to solve. Plasma physics is more complicated than, say, accelerator physics, where only a few particles have to be treated at a time. Because even the models (equations) describing a plasma cannot be exact, the development of fusion could not proceed until the science of computer simulation had been developed.
Fig. 9.29 Example of the variation of the safety factor q(r) across the minor diameter of an advanced tokamak plasma (blue), and the plasma current distribution required to produce it (red) [37]
the TFTR machine at Princeton (Fig. 8.3) and the JET in England (Fig. 8.4), both of which have used DT fuel. Robots can weld joints by remote control. The first experiments in ITER will use hydrogen or helium, which produce no radioactivity. Later, deuterium experiments will give a small amount of radioactivity. In the next stage, tritium will be used; and the machine will become very “hot.” ITER is much larger than TFTR or JET, and the components to be moved will be large and heavy. Remote handling is expensive and inconvenient, but it does not seem to be a technological barrier.
Some day the inhabitants of this planet will look back at the clumsy magnetic bottle, the D-T tokamak, which is described in the previous chapters. The tokamak will seem like an old IBM Selectric typewriter with font balls compared to Microsoft Word on a 2-GHz notebook computer. The deuterium-tritium reaction is a terrible fusion reaction, but we have to start with it because it is easy to ignite. It generates power in neutrons, which make everything radioactive so you cannot go near the reactor. The neutrons are hard to capture and also damage the whole structure of the machine. And you have to breed the tritium and keep it out of the environment. There are much cleaner fusion fuels that we can use in next-generation magnetic bottles.
These future magnetic bottles will hold denser, hotter plasmas for a longer time. Then we can use reactions that do not produce the intense flux of energetic neutrons that plagues D-T reactors. Here is a list of the main possibilities.
D + D → T + p (half the time)
D + D → He3 + n (half the time)
D + He3 → α + p
p + B11 → 3α
p + Li6 → He3 + α
He3 + Li6 → 2α + p
p + Li7 → 2α
He3 + He3 → α + 2p
D + Li6 → 2α
Numbers in superscripts indicate Notes, and square brackets [ ] indicate References at the end of this chapter.
F. F. Chen, An Indispensable Truth: How Fusion Power Can Save the Planet,
DOI 10.1007/978-1-4419-7820-2_10, © Springer Science+Business Media, LLC 2011
Fig. 10.1 Reactivities of several fusion reactions versus ion temperature in keV [1, 2]
In this list, D stands for a deuteron, T for a triton, p for a proton, and α for an alpha particle (He4 nucleus). He3 is a rare isotope of helium with only one neutron instead of two. Figure 10.1 compares some of these reactions with D-T. What is shown are their reactivities, which indicate how fast fusion occurs for each fuel mix at each ion temperature.
The special role of D-T is immediately apparent. Not only does it fuse much faster than anything else, but the peak occurs at a much lower temperature. The 50-keV temperature of the peak can already be achieved. We next describe the advanced fuels in groups.
The first group involves only deuterium, which is plentiful in water. It can fuse with itself two ways, either producing T and p, or He3 and n. When it goes the first way, the proton is harmless, but the T will quickly react with D in a DT reaction and produce a 14-MeV neutron. When D+D goes the second way, it produces harmless He3 and a weaker neutron. So the DD reaction is not completely clean; there are neutrons, but far fewer of the dangerous ones. Forty percent of the energy comes out as charged particles (p, T, He3, and α), which keep the plasma hot and can give up their energy electrically instead of through a thermal cycle. The neutron damage to materials is greatly reduced. These two DD reactions will occur simultaneously, but their reactivities are very low even when summed. However, there is a gain of a factor of 2 because both reactants are the same. That is, each deuteron can react with all the other ions instead of with only the half that are tritons, as in a DT reactor. However, this still leaves the DD reaction with a much lower rate than DT.
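The factor-of-2 bookkeeping can be checked with a line of arithmetic. For unlike species, the volumetric reaction rate goes as the product of the two densities; for like particles it is n²/2 (the half avoids counting each pair twice), but every ion is a potential partner. A sketch, with the reactivities left out since only the density factors are being compared:

```python
# Compare the density factors in the volumetric reaction rate for a
# 50-50 D-T plasma versus a pure-D plasma of the same total ion density.
# The density is illustrative; <sigma v> is omitted because only the
# combinatorial factor matters here.
n = 1.0e14          # total ion density, cm^-3 (typical magnitude from the text)

# D-T, half deuterons and half tritons: rate ~ n_D * n_T = (n/2)*(n/2)
dt_density_factor = (n / 2) * (n / 2)

# Pure D: rate ~ n^2 / 2  (halved to avoid double-counting identical pairs)
dd_density_factor = n * n / 2

print(dd_density_factor / dt_density_factor)   # -> 2.0
```

This is why, at equal total density, the DD fuel mix gets twice the density factor of DT, though its much lower reactivity still leaves it far behind.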
The reactions in the second group have the next highest reactivities and are the most promising ones. D-He3 has sizeable reactivity at low temperature and produces no neutrons. Unfortunately, you cannot keep deuterium from fusing with itself, so there are DD reactions going on at the same time. But the energy in neutrons is reduced by a factor of 20 relative to DT, and this is an almost clean reaction. The problem is that He3 does not occur naturally. It can, however, be mined on the moon. It is estimated that there are a billion tons of He3 just under the surface of the moon, enough to supply the world for 1,000 years if it could be brought down here [3]. Mining machines have been designed which could dig 1 km2 of the moon’s soil, down to 3 m depth, to get 33 kg of He3 a year [4]. If the moon is ever colonized, this would be the fuel used. Finding deuterium there may not be as easy, and the much harder He3-He3 reaction (Fig. 10.1) would have to be used. Burning D-He3 on earth will have to wait until space shuttles can reach the moon. Nonetheless, the simplicity of the engineering is so attractive that a D-He3 reactor has been designed [5].
The p-B11 reaction is the most attractive one at present. The reactants are not radioactive, and only helium is produced. Without neutrons, all the shielding and blankets of DT reactors are unnecessary. Fusion power plants can dispense with the tritium recovery and processing plant, as well as with remote handling equipment. Only hydrogen and boron are used. Boron is plentiful on earth, and B11 is its main isotope. We commonly use 20 Mule Team Borax, a cheap cleanser. All the energy comes out as fast alpha particles. Since these are charged particles, there may be a possibility of direct conversion of the energy into electricity without going through boilers and turbines. This can be done by leading the alphas into a channel where they can be slowed down with electric fields, thus producing electricity directly, or by capturing the synchrotron radiation emitted by the alphas spiraling in a magnetic field. However, boron is not a light element; it has a charge of 5 (Z = 5) when it is fully ionized. When electrons collide with ions, they produce X-rays at a rate that increases with Z2. Though it is not hard to shield against these X-rays, they represent energy that is lost to the plasma, and it is harder to raise the plasma temperature. Special methods being developed to overcome this are described in a later section.
All the other reactions on our list have very low reaction rates, as exemplified by p-Li6 and He3-He3 in Fig. 10.1. Reactants with atomic number Z above 2 have two other problems besides low reactivity. First is the radiation loss mentioned above. Second, there are competing reactions when there are large numbers of protons and neutrons, since they can combine in different ways. For instance, p-Li7 looks like a great reaction, producing two alphas. However, p + Li7 → Be7 + n (a neutron) is also possible [6], and this happens 80% of the time. The two reactions in the third group above form a chain reaction in which the He3 generated by p-Li6 can react with Li6 to regenerate the proton, and only alphas are the result. However, the reaction rate is low, and there are competing reactions.
Speaking of chain reactions, it was Hans Bethe who invented the famous carbon cycle that allows hydrogen fusion to occur in the sun at a comparatively low temperature. Carbon is used as a catalyst that regenerates itself. Other chain reactions for the sun have been devised since then. No one so far has found a chain reaction for advanced fuels that will allow them to burn at lower plasma temperatures on earth. However, there has never been a large-scale effort to find such a chain reaction.
The last reaction listed above, D + Li6 → 2α, looks very attractive, but there are five competing reactions that produce nasty products. It has an interesting story. Lithium is the lightest solid element. Lithium hydride (LiH) is a glassy, solid material. It is one of the hydrides mentioned in Chap. 3 for carrying hydrogen in hydrogen cars. If we replace natural Li with Li6 and H with D, we get lithium-6 deuteride, a similar solid that is easy to transport and store. There is no information on this reaction on the Web because apparently it is useful for making hydrogen bombs, being easy to carry and producing a large amount of energy, 22 MeV. In a bomb, the reaction is set off by neutrons, and the nasty by-products would be just fine for the purpose. Mention of tests involving this reaction can be found in public histories of atomic bomb development in the USA. In a fusion reactor, however, a deuterium-lithium plasma would be hard to ignite, and the neutrons and gamma rays emitted would be hard to manage. The reaction rate [7] is about 28% of that of He3-He3, the lowest curve in Fig. 10.1. Furthermore, the competing reaction D + Li6 → Be7 + n occurs 3.5 times more often, producing neutrons. This is the reason many clean-looking reactions are not actually viable for a reactor.
The benefits of fusion will not come cheaply, but the cost is smaller than that of other projects that the USA has undertaken with success. Figure 11.1 compares the costs of the Manhattan Project, the Apollo Program, and the Iraq and Afghanistan wars (up to 2010) with the projected cost of developing a fusion reactor. In constant 2010 US dollars, the Manhattan Project cost $22.6B, and the longer Apollo program cost $100.8B.1 Other estimates are twice as high.2 The two current wars have
Fig. 11.1 Comparative costs of the Manhattan Project, the Apollo Program, the Afghanistan and Iraq wars, and the conjectural cost of development of fusion reactors. All costs are normalized to 2010 US dollars
cost $732B and $282B, respectively, so far.3 The cost of developing fusion is a highly conjectural estimate. The cost of ITER, originally set at €5B ($6.3B), has risen to €16B ($21B).4 Engineering research will require fusion development facilities (FDFs). These have not been costed out, but one design is 45% the linear size of ITER, and the cost rises as the size squared. With the higher projected cost of ITER, this would make an FDF cost about $4.2B. Perhaps three of them would be required for a total of $12.6B. The DEMO would cost at least twice as much as ITER or $42B. The total is $75B, less than that of the Apollo program, which did not solve any pressing problems. After DEMO has been run successfully, further development would be turned over to private industry, and federal support would no longer be needed. The fusion cost given here is a guess, but it is clear that the USA has the resources to develop fusion without outside help. It is only a matter of priorities. Jack Kennedy showed that it can be done.
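The FDF and total figures quoted above follow directly from the stated assumption that cost scales as the square of linear size. A quick check of the arithmetic:

```python
# Check of the cost estimates quoted in the text, under the stated
# assumption that cost scales as the square of linear size.
iter_cost = 21.0                  # $B, the higher projected ITER cost
fdf_cost = iter_cost * 0.45**2    # an FDF is 45% of ITER's linear size
three_fdfs = 3 * fdf_cost         # "perhaps three of them"
demo_cost = 2 * iter_cost         # DEMO: "at least twice as much as ITER"

total = iter_cost + three_fdfs + demo_cost
print(fdf_cost, three_fdfs, total)
```

This reproduces the text's roughly $4.2B per FDF, $12.6B for three, and a total near $75B.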
Figure 11.2 gives a breakdown of the $5.1B FY 2011 budget request of the Department of Energy's Office of Science.5 Fusion Energy Sciences is the item that supports magnetic fusion research. It is the smallest item there. Basic Energy Sciences is deservedly the largest item because it supports current renewable energies like wind and solar. High-energy physics traditionally has a large budget because it is the community that gave us the atomic bomb in WWII. It still has a large budget for accelerators and experiments that can improve our knowledge of the structure of matter. This is the forefront of science, but mankind may or may not need to know this to survive.
Fig. 11.2 Support for different divisions in the US Office of Science
Fig. 11.3 Comparison of the annual budgets of the space and fusion programs in the USA
Figure 11.3 compares the annual budgets in the USA for magnetic and inertial confinement fusion research with the $1.9B for the NASA space program. The magnetic fusion budget includes a paltry $80M contribution to ITER, equivalent to four hours of expenditures in the Iraq war. Exploration of the solar system (NASA) and study of the behavior of matter under extreme conditions (ICF) are exciting extensions of modern knowledge which scientists are happy to have funded because of their importance to national security. These programs, however, contribute little to the solution of environmental and energy issues. We are spending more money looking for the Higgs boson than for a solution to global warming and oil shortage. Re-examination of priorities is in order.
Figure 11.4 shows the cost of the ITER experiment, including construction but not operation. The expense is shared by seven nations. It is the first giant step toward fusion power. Compared with this is the amount spent by the USA alone to wage the war in Iraq for one month.6 The graph speaks for itself. The USA could easily have taken this step alone had it not been so dependent on Mid-Eastern oil.
In spite of its low voltage, rooftop solar may actually be the deadliest source of energy! This is because the panels get dirty and need to be cleaned of dust, dirt, wet leaves, and bird droppings in order to maintain their efficiency. People will naturally climb up to the roof to clean their panels. Statistics on people falling off roofs and ladders are not readily available, but here are some figures for accidental deaths from falls. The Centers for Disease Control and Prevention28 reports that 15,800 adults above age 64 died from unintentional falls. Another branch of the CDC shows 19,195 total accidental deaths from falls in 2006.29 Data from the US Census in 2000 show that deaths from falls from one elevation to another were 3,269 in 1996.30 If we conservatively take the smallest number, about 3,000, and say that maybe 10% of those were falls from a roof or a ladder going to the roof, then 300 US deaths occur annually from such falls. Now if rooftop solar becomes widespread, this number may grow by an order of magnitude to 3,000 deaths per year. Compare this to the annual average of 32 coal-mining deaths in the USA from 1996 to 2009!31 Even the 4,000-6,000 coal-mining deaths in China are comparable to the number of USA fatalities if local solar power expands as intended.
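The chain of estimates above is easy to reproduce. Every number below comes from the text; the 10% roof fraction is the author's stated guess:

```python
# Reproducing the rooftop-solar risk estimate from the text.
falls_per_year = 3000      # conservative: the smallest of the quoted figures
roof_fraction = 0.10       # assumed share of falls from roofs or ladders
roof_deaths = falls_per_year * roof_fraction     # -> 300 deaths per year

widespread_factor = 10     # "grow by an order of magnitude"
projected = roof_deaths * widespread_factor      # -> 3000 deaths per year

coal_deaths_us = 32        # annual average US coal-mining deaths, 1996-2009
print(projected / coal_deaths_us)   # roughly 94 times the US coal-mining toll
```

The point of the exercise is not the precise number but the comparison: even rough assumptions put projected rooftop falls two orders of magnitude above US coal-mining deaths.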
Factories are usually large, single-story buildings with flat roofs. These would be ideal for solar installations. Forward-looking companies like Walmart and Google have already installed solar power on their roofs. Covered parking lots are also good candidates, and some are already being converted. These installations would be serviced by professionals, not homeowners. No doubt measures will be taken to make solar systems for homes safer. Panels can be designed with this in mind.32 Perhaps a cottage industry of panel cleaners will arise, the way chimney sweeps have been reinvented. Rooftop solar is needed, but its dangers must be minimized.
France and Japan reprocess spent fuel to recover plutonium and 0.9% enriched uranium out of it; the USA does not. Here is what is involved. The spent fuel rods are cooled for 1 year in water (“swimming pools”). They are then taken apart underwater by remote control. The fuel pellets are dissolved in chemicals to separate out the uranium and plutonium. These are sent to Russia for isotope separation in centrifuges. Their oxides are made into an LWR fuel called “mixed oxide” or MOX. Ceramic MOX is radioactive and expensive. The arguments for reprocessing are that uranium fuel is not wasted, and there is less left-over radioactive waste to store underground. The long-lived part is four times smaller than in stored waste without reprocessing. The arguments against reprocessing are that the plutonium can be stolen for bombs, and that it is simpler and cheaper to just store the spent fuel.
If the plasma in a torus always thrashes around violently, there must be an energy source that drives the thrashing. An obvious source is the electric field applied to drive the current in ohmic heating. In the 1960s, a new method was devised for heating without a large DC current in the plasma. This was Ion Cyclotron Resonance Heating or ICRH. A radiofrequency (RF) power generator was hooked up to an antenna around the plasma, the way an FM station is hooked up to its antenna on a tower. The frequency was tuned to the gyration frequency of the ions in their cyclotron orbits. As the ions moved in circles, the RF field would change its direction so as to be pushing the ions all the time, just as in a real cyclotron. This could heat up the plasma without having to drive a DC current in it.8 Would this kill the turbulence and make the plasma nice and quiet, without Bohm diffusion? A case of champagne was bet on it. It didn’t work. The thrashing was as bad as ever. The darts in Bohm’s picture stayed there.
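The frequency the RF generator must match is the ion cyclotron frequency, f = qB/(2πm). A quick calculation for deuterons, with an illustrative 1-tesla field (real tokamaks run at several tesla, so ICRH frequencies land in the tens of MHz):

```python
import math

# Ion cyclotron frequency f = q*B / (2*pi*m) for a deuteron.
q = 1.602e-19       # elementary charge, C
m_d = 3.344e-27     # deuteron mass, kg
B = 1.0             # magnetic field, tesla (illustrative value)

f = q * B / (2 * math.pi * m_d)
print(f / 1e6)      # about 7.6 MHz per tesla of field
```

Since f scales linearly with B, a 5-T machine would need roughly 38 MHz to resonate with deuterons, which is indeed radiofrequency territory, much like an FM transmitter.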
The problem was a failure of magnetohydrodynamics, MHD for short. MHD theory treats a plasma as a pure superconductor, with zero resistivity, and neglects the cyclotron orbits of the particles, treating them as points moving at the speed of their guiding centers. Though this simplified theory served us well in the design of toroidal confinement devices and in the suppression of the gravitational and kink instabilities, it did not treat a plasma in sufficient detail. First of all, there have to be some collisions in a fusion plasma or else there wouldn't be any fusion at all! These infrequent collisions cause the plasma's resistivity to be not exactly zero, and that has dire consequences on stability. The fact that the Larmor orbits of the ions are not mathematical points gives rise to the finite Larmor radius (FLR) effect. In some cases, even the very small inertia of the electrons has to be taken into account. Finally, instabilities could even be caused by distortions of the particles' velocity distributions away from a pure Maxwellian, an effect related to Landau damping. These small deviations from ideal MHD turned out to be important, making the theorists' task much more difficult.
The first inkling of what can happen was presented by Furth, Killeen, and Rosenbluth in their classic paper on the tearing mode [2]. If a current is driven along the field lines in a plasma with nonzero resistivity, the current will break up into filaments; and the initial smooth plasma will tear itself up into pieces! So “tearing” rhymes with “bearing,” and not “fearing,” though the latter interpretation may have been more appropriate. The tearing mode is too complicated to explain here, but we describe other instabilities which caused even more tears.
One of the tenets of ideal MHD is that plasma particles are "frozen" to the field lines, as shown in Fig. 4.10. Without collisions or one of the other microeffects named above, ions and electrons would always gyrate around the same field line, even if the field line moved. Bill Newcomb once proved a neat theorem about this [3], saying that plasma cannot move from one field line to another as long as E-parallel is equal to zero. Here, E-parallel is the component of the electric field along a magnetic field line, and it has to be zero in a superconductor, since in the absence of resistance even an infinitesimal voltage can drive an infinite current. But if there are collisions, the resistivity is not zero, E-parallel can exist, and plasma is freed from one of its constraints.
So it was back to the drawing board. While the theorists enjoyed a new challenge and a new reason for their employment, the experimentalists pondered what to do. In previous chapters, we showed that (1) a magnetic bottle had to be shaped like a torus, (2) bending a cylinder into a torus caused vertical drifts of ions and electrons, (3) these drifts could be canceled by twisting the field lines into helices, (4) this twist could be produced by driving a current in the plasma, and (5) this current could cause other instabilities, even in ideal MHD, but that those could be controlled by obeying the Kruskal-Shafranov limit. In spite of these precautions, the plasma is always turbulent, even when the current is removed by using a stellarator rather than a tokamak. How can we get a plasma so smooth and quiet that we can see a wave grow bigger and bigger until it breaks into turbulence, as in Fig. 6.9? Obviously, if one could straighten the torus back into a cylinder, much of the original cause of all the trouble would be removed. But how can one hold the plasma long enough just to do an experiment? The plasma will simply flow along the straight magnetic field into the endplates that seal off the cylinder so that it can hold a vacuum. The solution came with the invention of the Q-machine (Q for Quiescent). Developed by Nathan Rynn [4] and Motley [5], this is a plasma created in a straight cylinder with a straight magnetic field. Inside each end of the vacuum chamber is a circular tungsten plate heated to a red-hot temperature. A beam of cesium, potassium, or lithium atoms is aimed at each plate. It turns out that the outermost electron in these atoms is so loosely bound that it gets sucked into the tungsten plate upon contact. The electron is then lost, and the atom comes off as a positively charged ion.
Fig. 6.11 Example of a Q-machine
Of course, a plasma has to be quasineutral, so the tungsten has to be hot enough to emit electrons thermionically, the way the filament in a light bulb does. So both ions and electrons are emitted from the tungsten plates to form a neutral plasma. No electric field has to be applied! Only tungsten or molybdenum, in combination with the three elements above, can perform this kind of thermal ionization. In this clever device, all sources of energy to drive an instability have been removed, or so we thought. Figure 6.11 shows a typical Q-machine, covered with the coils that create the steady, straight, and uniform magnetic field.
The plasma in a Q-machine has to be quiescent, right? To everyone’s surprise, it was still turbulent! The trace shown in Fig. 6.10 actually came from a Q-machine. Fortunately, it was possible to stabilize the plasma by applying shear, as shown in Fig. 5.9, or by applying a small voltage to the radial boundary of the plasma. A quiescent plasma in a magnetic field was finally achieved. Then, by adjusting the voltage, one could see a small, sinusoidal wave start to grow in the plasma; and, with further adjustment, one could see the wave get bigger and bigger until it broke into the turbulence seen in Fig. 6.10. With a regular, repetitive wave like a wave in open water, one could measure its frequency, its velocity, its direction, and how it changed with magnetic field strength. These were enough clues to figure out what kind of wave it was, what caused it to be unstable, and, eventually, to give it a name: a resistive drift wave.
As its name implies, the wave depends on the finite resistivity of the plasma. It also depends on microeffects: the finite size of the ion Larmor orbits. Before showing how a drift instability grows, let’s find the source of energy that drives it. In a Q-machine, we have eliminated all toroidal effects and all electric fields normally needed to ionize and heat the plasma. In fact, the plasma is quite cold, as plasmas go. It is the same temperature as the hot tungsten plates, about 2,300 K, so that the plasma temperature is only about 0.2 eV. You can heat a kiln up to that temperature, and it would stay perfectly quiescent. A magnetically confined plasma, however, has one subtle source of energy: its pressure gradient. When everything is at the same temperature and there are no energy sources such as currents, voltages, or drifts, there is still one source of energy when the plasma is confined. And confinement is the name of the game. Since ions and electrons recombine into neutral atoms when they strike the wall, plasma is lost at the walls. The plasma will be denser at the center than at the outside, and this causes a pressure that pushes against the magnetic field. By Newcomb’s theorem, the plasma would remain attached to the field lines, and nothing can happen; but once there is resistivity, all bets are off. The plasma is then able to set up electric fields that allow it to move across the magnetic field in the direction that the pressure pushes it. Even if there are no collisions, other microeffects like electron inertia or Landau damping can cause the drift instability to grow. For this reason, the resistive drift instability and others in the same family are called universal instabilities. They are fortunately weak instabilities because the energy source is weak, and they can be stabilized with the proper precautions.
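The 2,300 K to 0.2 eV conversion quoted above is just the thermal energy k_B·T expressed in electron-volts:

```python
# Converting the Q-machine plasma temperature from kelvin to electron-volts:
# T[eV] = k_B * T[K] / e.
k_B = 1.381e-23     # Boltzmann constant, J/K
e = 1.602e-19       # elementary charge, C (one eV in joules)
T_kelvin = 2300     # hot tungsten plate temperature from the text

T_eV = k_B * T_kelvin / e
print(round(T_eV, 2))   # -> 0.2
```

The same conversion explains why fusion temperatures sound so outlandish in kelvin: the 10-keV ion temperatures of a reactor correspond to over a hundred million degrees.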
Aside from materials exposed to plasma and large heat fluxes, structural materials have to be chosen to support the huge weight of the reactor elements — the vacuum chamber, magnetic coils, breeding blankets, and so forth. Normally one would use steel; but for fusion, the type of steel has to be carefully designed. The neutrons bombarding the structure will make it radioactive. Only the following elements can be used: iron, vanadium, chromium, yttrium, silicon, carbon, tantalum, and tungsten. Elements like manganese, titanium, and niobium used in other steels would result in long-lived radioactive isotopes. Two Reduced Activation Ferritic Martensitic Steels have been designed: Eurofer (in Europe) and F82H (in Japan). These have the following additives to iron [4]:
| Steel | Chromium (%) | Tungsten (%) | Vanadium (%) | Tantalum (%) | Carbon (%) |
| Eurofer | 8.9 | 1 | 0.2 | 0.14 | 0.12 |
| F82H | 7.7 | 2 | 0.2 | 0.04 | 0.09 |
These steels have only short-lived radioactivity and, unlike fission products, are nonvolatile and can be re-used after storage for 50–100 years. The amount of swelling under neutron bombardment is much smaller than for ordinary stainless steel. Swelling and embrittlement come from helium and hydrogen bubbles trapped in the steel. Experimental oxide-dispersion-strengthened (ODS) steels contain nanoparticles of Y2O3 that can trap helium and hydrogen, strengthen the material, and reduce creep. Though much remains to be done to manufacture these materials with low impurity levels, to study their welding properties, and to test their limits in temperature and radiation resistance in full-time operation, structural materials are not one of the worrisome problems in fusion technology.
Figure 9.8 shows the predicted radioactivity of Eurofer and SiC in a fusion reactor after 25 years of full-power operation. Note that the scales are logarithmic, so that each vertical division represents a factor of 10, and each horizontal division a factor of 100. After 100 years, the radioactivity has decayed by a factor of almost 1,000,000. This material is solid and will not leak out of its containers. The main danger from radioactivity comes from tritium, which has a half-life of about 12 years and will be considered in detail later. Even this radioactivity, small compared with that of fission, arises because the D-T reaction emits energetic neutrons. Second-generation fusion reactors using advanced fuels will produce almost no radioactivity.
During the mirror hiatus of the last two decades, however, new ideas have emerged that revive the possibility of mirror reactors. The yin-yang and other end coils of tandem mirrors have large magnetic stresses because of their twisted shapes. The new idea is to make mirror machines completely axisymmetric, using only simple circular coils. The feasibility of this was proved by Gas Dynamic Trap experiments in Novosibirsk, Russia [14]. The mirror field can be made extremely strong, creating a large mirror ratio (as large as 2,500), thus reducing the size of the loss cone. A schematic of this is shown in Fig. 10.27. It looks like a large plasma with a pinhole leak at each end, but the pinhole is not in real space but in velocity space.
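The connection between the mirror ratio and the "pinhole" in velocity space can be made concrete. By the standard magnetic-mirror relation (a textbook result, not stated explicitly in the text), a particle is lost if its velocity pitch angle satisfies sin²θ < 1/R, where R is the mirror ratio. A short sketch:

```python
import math

def loss_cone_angle_deg(mirror_ratio):
    """Half-angle of the velocity-space loss cone, in degrees,
    from the standard mirror condition sin^2(theta) = 1/R."""
    return math.degrees(math.asin(1.0 / math.sqrt(mirror_ratio)))

# A conventional mirror with R ~ 10 loses every particle moving within
# about 18 degrees of the field direction; the Gas Dynamic Trap's
# R ~ 2,500 shrinks that cone to about one degree.
print(f"R = 10:   {loss_cone_angle_deg(10):.1f} degrees")
print(f"R = 2500: {loss_cone_angle_deg(2500):.2f} degrees")
```

This is why a mirror ratio as large as 2,500 turns the ends into pinhole leaks in velocity space rather than in real space.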
The Gas Dynamic Trap produced 10-keV ions with a peak density of 4 × 10¹⁹ m⁻³ (4 × 10¹³ cm⁻³) and electron temperatures of 200 eV. The beta value was 60%, compared with only a few percent in tokamaks, since only a weak central field is needed to contain large-orbit ions with mainly perpendicular energy. In mirrors, neutral beams are used to inject ions, and no energy is wasted in heating the electrons. The machine is pulsed for only 5 ms, and the confinement time is only a millisecond or so. Electric fields are produced by applying voltages to different parts of the walls where the field lines end.
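A rough consistency check of these numbers: beta is the ratio of plasma pressure to magnetic pressure, B²/2μ₀. Using the density and ion energy quoted above, we can ask what central field a 60% beta implies (the field strength itself is not given in the text, so this is an inference, not a quoted value):

```python
# Back-of-the-envelope check of the quoted 60% beta, using the
# Gas Dynamic Trap numbers from the text.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A
QE = 1.602e-19                  # joules per electron-volt

n = 4e19       # peak ion density, m^-3 (from the text)
Ti_eV = 10e3   # ion energy, 10 keV (from the text)

# Ion pressure dominates; the 200-eV electrons add only ~2%.
p_plasma = n * Ti_eV * QE

beta = 0.60
# beta = p / (B^2 / 2*mu0)  =>  B = sqrt(2*mu0*p / beta)
B = (2 * MU0 * p_plasma / beta) ** 0.5

print(f"plasma pressure ~ {p_plasma:.1e} Pa")
print(f"implied central field ~ {B:.2f} T")
```

The implied field is only about half a tesla, consistent with the statement that a weak central field suffices.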
In an axisymmetric tandem mirror, the complexity of the stabilizing coils is gone; the circular coils are easy to make. How, then, can the plasma be stable? It turns out that the outside plasma beyond the mirrors can play an essential role. There the field lines have favorable curvature, bulging inwards toward the plasma. The stability there can overcome the instability driven by the bad curvature at the ends of the central region. It turns out that the density in the outside region does not have to be very high for this to happen as long as the plasma diameter there is large. One can shape the outside field with large coils, of which one is shown, so as to optimize the stabilizing effect [27]. A “kinetically stabilized tandem mirror” machine has been proposed [26, 27] to test this principle. That machine, shown in Fig. 10.28, uses multiple axisymmetric mirrors and injection of ions into the diverging region to improve stability.
Fig. 10.27 Magnet system of a totally axisymmetric mirror machine
Articles in the popular press have intrigued the public with wild ideas, some of which have even been legitimized under the rubric geoengineering. For instance, instead of reducing GHGs, why don’t we shield the earth from getting so much sunlight? This could be done by sending zillions of small plastic sheets up into orbit to reflect sunlight over large areas of the earth. It has also been suggested to use natural plant spores which have large area for their weight. This would not ride well with the resort business! More seriously, such a large-scale, uncontrolled experiment would have unpredictable consequences for our climate and for life itself. It may even trigger an ice age. Such proposals are, of course, science fiction.
The following proposal has been taken more seriously. If the sun does not shine all the time on terrestrial solar panels, why not put them in space? In a geostationary orbit, 22,000 miles (36,000 km) above the earth, the panels will receive the whole 1.366 kW/m² of sunlight instead of the 1 kW/m² that reaches the earth, and the weather is always clear. That's only 37% more, but nights will be shorter, since the satellite is so high that it will not always be in the earth's shadow when it is nighttime on earth. Gyroscopes can keep the panels always pointed at the sun. If expensive multijunction solar cells are used, the efficiency could be 40%. How much area would be required to produce the power of a coal or nuclear plant, say 1 GW (1,000 MW)? (There are thousands of such plants in a large country.) For the sake of argument, let us assume that the satellite panels get an average of 1 kW/m². To generate 1 GW at 100% efficiency would require 1 million square meters, or 1 km² (0.39 square miles). At 40% efficiency, it would require 2.5 km², or just about 1 square mile, of panels. That is a lot to send into space! The panels would not last many years, because they would be damaged by micrometeorites and solar flares. The moon's gravity would make the satellites drift from their geostationary orbits, so a supply of propellant is necessary to make corrections. This supply cannot last many years either.
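The area arithmetic in the paragraph above can be spelled out directly:

```python
# Panel area needed for 1 GW of electrical output, using the
# assumed average flux of 1 kW/m^2 from the text.
flux = 1.0e3    # average solar flux at the panels, W/m^2
power = 1.0e9   # target output: 1 GW

area_ideal = power / flux        # m^2 at 100% conversion efficiency
area_40 = power / (flux * 0.40)  # m^2 at the 40% quoted for
                                 # multijunction cells

KM2 = 1.0e6            # square meters per square kilometer
SQMI_PER_KM2 = 0.3861  # square miles per square kilometer

print(f"100% efficient: {area_ideal / KM2:.1f} km^2")
print(f" 40% efficient: {area_40 / KM2:.1f} km^2 "
      f"= {area_40 / KM2 * SQMI_PER_KM2:.2f} sq mi")
```

The 40% case comes out to 2.5 km², a bit under one square mile, matching the figure in the text.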
Then there is the problem of getting the power back to earth. It is proposed to transform the solar energy into microwaves and beam the energy back to the earth at a wavelength that is not absorbed by the atmosphere. Of course, the receiving station would be in a desert area with few storms and clouds, and that means building transmission lines to population centers. Microwaves are strongly absorbed by water vapor in the atmosphere. Low frequencies, like the 2.45 GHz (gigahertz) used in microwave ovens, are well absorbed by water, which is why microwave ovens work in the first place. To get good transmission, the frequency has to be high, like 100 GHz. Such frequencies can be generated by gyrotrons, and the most advanced of these are being developed for the large fusion energy experiment ITER, which is described in Chap. 8. In the laboratory, a gyrotron has produced 1.67 MW at 110 GHz for 3 μs, and 800 kW at 140 GHz for 30 min [28]. Though continuous operation at such powers is expected to be attainable on earth, it may not be possible in space because of the lack of air and water for cooling. Gyrotrons are large devices containing heavy magnets into which energetic electron beams are injected. The magnets help convert the electron energy into microwaves, but not all the energy can be extracted, because as the electrons slow down, they get out of sync. The best efficiency that can be hoped for is about 50%. The rest of the energy goes into a beam dump, which has to be cooled. One can build a heat engine that generates electricity from that heat to accelerate more electrons, but that would make the device even more complicated than it already is. There is a further loss at the receiving end in converting the microwave energy into AC power. Even worse, high-power microwaves are known to break down air and make plasma that can scatter or reflect the microwaves.
Solar panels in space may gain a factor of 2 in available sunlight over those on land, but more than this is lost in transmission even if the technology can be developed. Regardless of the cost, this is a really bad idea!