An Indispensable Truth

Ioffe Bars and Baseball Coils

Note that a simple mirror has unfavorable curvature (Fig. 7.10) and is unstable to the basic Rayleigh-Taylor interchange instability (Fig. 5.5). This problem was solved by M. S. Ioffe [21] by adding what are now known as Ioffe bars, shown in Fig. 10.21. These are four conductors parallel to the axis adding a poloidal field to the mirror field. The plasma is squeezed into a peppermint-candy shape. The strength of the magnetic field now increases outward in every perpendicular direction, so it is energetically impossible for a Rayleigh-Taylor instability to develop and push the plasma out. This is called a minimum-B configuration, since the plasma sits in a minimum of the B-field. Of course, the plasma can still leak out the ends.


Fig. 10.21 A magnetic mirror with Ioffe bars (An old diagram or picture originally from Lawrence Livermore National Laboratory.)

Now imagine how to combine the Ioffe bars with the circular coils into a single coil. This proceeded in two steps. First, one can combine them into two identical coils, called yin-yang coils, shown in Fig. 10.22. This was such an attractive shape that an artist made a sculpture of it (Fig. 10.23). Finally, all the necessary currents can be combined into a single coil called a baseball coil because it resembles the seam on a baseball. This is shown in Fig. 10.24.


Fig. 10.22 Yin-yang coils (An old diagram or picture originally from Lawrence Livermore National Laboratory.)


Fig. 10.23 A yin-yang coil sculpture (Photo by the author at the 1977 meeting of the Plasma Physics Division of the American Physical Society in Atlanta, GA.)


Fig. 10.24 A baseball coil (An old diagram or picture originally from Lawrence Livermore National Laboratory.)

Gas-Electric Hybrids

The range and charging problems of electric cars are solved by combining an electric motor with a gasoline motor. The most successful of these hybrids has been the Toyota Prius, which approximately doubles the mileage of a normal car. The way it does this, however, is not what most people would imagine. Instead of carrying a large battery, the Prius carries a battery so small that it can be hidden. When we drive, we subconsciously vary the pressure on the gas pedal every second or so as the road curves or rises and falls a little, or because of traffic. Each time the car coasts, its kinetic energy charges the battery, and this energy is re-used in the next few seconds when the gas pedal is pressed to maintain speed. At a stop light, the braking energy is stored and used for startup when the light turns green. Just by saving these small, instantaneous bits of energy, the car can greatly reduce its gas consumption. A dashboard display shows a red symbol every time 50 Wh of energy has been saved and re-used by the car. Fifty watt-hours sounds like a piddling amount of energy. A TV or computer draws 5 W when it is off, so 50 Wh can power only 10 such devices in a home for one hour out of 24. However, as shown in Box 3.7, 50 Wh is equivalent to 241 horsepower-seconds, or almost 50 horsepower for five seconds. This allows the car to have fast pickup after a stop. Fifty kilowatt-hours (67 hp-hrs) would be more normal for a car that didn't have instantaneous response to small accelerations and decelerations. Indeed, hackers who have modified the Prius by adding a large battery have increased its mileage from 45 mpg (5.2 liters/100 km) to 100 mpg (2.4 liters/100 km), but at great expense. More on the hardware in the Prius is given in footnote 52.
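The unit conversion above is easy to check. This short sketch converts the 50-Wh regenerative-braking quantum into horsepower-seconds using only the standard definitions (1 Wh = 3600 J, 1 hp = 746 W); no Prius-specific data are assumed.

```python
# Convert the 50-Wh energy quantum quoted in the text into
# horsepower-seconds: 1 Wh = 3600 J and 1 hp = 746 W.
WH_TO_J = 3600.0   # joules per watt-hour
HP_TO_W = 746.0    # watts per mechanical horsepower

def wh_to_hp_seconds(wh):
    """Energy in watt-hours expressed as horsepower-seconds."""
    return wh * WH_TO_J / HP_TO_W

hp_s = wh_to_hp_seconds(50)
print(round(hp_s))        # 241 hp-seconds, as quoted
print(round(hp_s / 5))    # ~48 hp sustained for five seconds
```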

Hybrid cars incorporate many other improvements to decrease fuel consumption. A continuously variable transmission is more efficient than a 4-speed automatic or a 5-speed manual. A switch available in some models turns off the gas motor altogether so that the car runs on electric power alone until the battery gets low. When the energy used to climb hills is recovered and the braking energy is stored for use in starting again, an electric car is very efficient in city traffic. In traffic jams, when normal cars are burning gas without moving, electric hybrids can get surprisingly high mileage. Driving at high speeds is another matter; the car has to push its way through the soup we call air. In perfect streamlining, the front of the vehicle slices the air apart. The air streams above and below then rejoin each other at the back of the car, pushing the car forward. But there is friction, and heat is lost in the windshield; and there is turbulence, so the stream at the rear is not smooth. There are also protuberances: windows, door handles, tires, and, above all, the rear-view mirrors. Sticking your hand out the window at autobahn speeds will show how much energy is needed to push through the atmosphere. Wind drag accounts for 60% of energy use; tire friction, 10%; and engine and transmission losses account for the rest. In the Prius, sticking to the speed limit can save 10% in gasoline, but over-inflating the tires can save only 1%. Retuning the electronic fuel injection can save 10%. Effective streamlining is measured by the drag coefficient Cd, on which more information is given in footnote 53.
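The strong speed dependence comes from the drag-power law P = ½ρCdAv³. A minimal sketch follows; the Cd of 0.25 and frontal area of 2.2 m² are illustrative assumptions in the range commonly quoted for a Prius-class car, not values from the text.

```python
RHO_AIR = 1.2  # kg/m^3, air density near sea level

def drag_power_kw(cd, area_m2, speed_kmh):
    """Power (kW) needed to overcome air drag at constant speed:
    P = 0.5 * rho * Cd * A * v**3."""
    v = speed_kmh / 3.6          # km/h -> m/s
    return 0.5 * RHO_AIR * cd * area_m2 * v ** 3 / 1000.0

# Because power grows as the cube of speed, drag power more than
# doubles between 100 and 130 km/h (1.3**3 = 2.2) -- which is why
# sticking to the speed limit saves fuel.
print(round(drag_power_kw(0.25, 2.2, 100), 1))   # ~7.1 kW
print(round(drag_power_kw(0.25, 2.2, 130), 1))   # ~15.5 kW
```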

In both hybrids and normal cars, gasoline is used inefficiently when the car is cold. A car rated at 30 miles/gallon (mpg) may get only 12 mpg when it first starts. A Prius which gets 45 mpg when warm drops to 30-35 mpg until the engine and catalytic converter warm up. This loss is avoided when running on electric alone. In hybrids, battery power can be used to heat up the catalytic converter more rapidly. Both motors in a hybrid depend on rare, precious metals. A catalytic converter contains about 5 g of platinum worth about $500. On the other hand, electric motors use permanent magnets made with neodymium. Their batteries may contain more than 10 kg of lanthanum. These materials, however, can be recycled. Many rare-earth elements are used in hybrids, and the concern is that China has a near monopoly on the supply of these elements.

Mappings, Chaos, and Magnetic Surfaces

Figure-8 stellarators are hard to make, especially since the coils have to be accurate enough to keep the field lines from wandering out to the walls.8 It was soon realized, however, that the necessary twist of the field lines can be produced without twisting the entire torus. We mentioned that the field lines in a toroidal magnetic bottle are twisted like the stripes on a candy cane. The way to produce such helical field lines can be visualized more easily if we decompose them into toroidal lines, as in Fig. 4.13a, and poloidal lines, as in Fig. 4.13b. Adding these two types of fields together will result in a field with helical field lines. To produce the toroidal part of the field, we can use coils like those in Fig. 4.14. Now we want to add coils that will produce the poloidal field. Figure 4.19 shows how this is done. Let there be a number of toroidal hoops placed all around the torus; two of these are shown in Fig. 4.19. If each hoop carries a current in the toroidal direction, as shown by the horizontal arrows, it will produce a magnetic field around itself in the direction shown by the arrows on the small circles around each hoop. The part of this field that extends into the plasma will be mostly in the poloidal direction. Imagine that there are an infinite number of these hoops covering the surface of the torus. Their fields inside the plasma will add up to give a purely poloidal field, as shown by the dashed arrows.

You have no doubt noticed the complementarity here: poloidal windings create toroidal magnetic fields (Fig. 4.14), and toroidal windings create poloidal fields (Fig. 4.19). In the same way that the toroidal and poloidal fields add up inside the torus to make helical field lines, the poloidal and toroidal windings can be combined into a helical winding! One turn of such a winding is shown in Fig. 4.20. The dotted line is a helical field line. Because it contains both toroidal and poloidal components, it may start near the top and then go to the bottom in another cross section. Now look at what an ion does.9 On the right, an ion starts drifting

Fig. 4.19 Generation of poloidal fields with coils

upwards — not downwards, as in Fig. 4.17 — because here I have drawn the magnetic field going into the page instead of out of the page. When the ion reaches the left side, it is still drifting upwards — not downwards as in a figure-8 stellarator — but this is fine, because the ion is now near the bottom, and an upward drift will bring it back away from the wall. So there are two ways to skin the cat. Either a figure-8 stellarator or a stellarator with helical field lines made by helical windings can cancel the dreaded vertical drift of ions and electrons caused by bending a cylinder into a torus.

We started with the concept that field lines have to end on themselves so that particles moving along them will never leave the magnetic trap. Of course, the field lines do not have to meet their own tails exactly. All that is required is that the line never hits the wall. In general, field lines do not close on themselves. Rather, they come back to the same cross section in a different position after going around the torus the long way. This is illustrated in Fig. 4.21. An imaginary glass sheet has been cut through the torus so that we can see where the field lines strike this cross section. Let’s assume that a field line intersects this cross section at position 1. After going around the torus once, it might intersect at position 2. On successive passes, its position might be 3, 4, 5, 6, etc. On the seventh pass, the field line almost comes back to position 1, but it does not have to. One can define a mapping function such that for every position on that plane, there is a definite position for the next pass. Thus, whenever the line goes through position 2, it will go to position 3 the next time. The line does not ever have to come back to itself. It can cover the entire cross section randomly, and the plasma will still be confined as long as the line never hits the wall.

At this point, we should define a quantity that will be very useful for understanding twisted magnetic fields: the rotational transform. This is the average number of times a field line goes the short way around a cross section for each time

Fig. 4.21 Mapping of a field line

it goes the long way around the whole torus. In Fig. 4.21, had pass No. 7 fallen exactly on pass No. 1, it would have taken six trips around the torus for the field line to make one trip around the cross section, and the rotational transform would be one-sixth. The field line does not have to trace a perfect circle in the cross section, and the crossings do not have to be evenly spaced. The rotational transform is an average that more or less measures the amount of twist.
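The mapping idea can be made concrete with a toy return map. This sketch is a simplified model, not a real stellarator field: each toroidal transit advances the crossing point's poloidal angle by the rotational transform, and with a transform of exactly one-sixth the field line closes on itself after six passes, as in the example above.

```python
import math

def field_line_map(theta, iota, eps=0.0):
    """One toroidal transit of a field line: the crossing point on the
    glass sheet advances by the rotational transform iota (measured in
    fractions of a turn).  A small perturbation eps mimics coil
    imperfections, which can distort the circles into islands or chaos
    (a toy model only)."""
    return (theta + iota + eps * math.sin(2 * math.pi * theta)) % 1.0

theta = 0.1                     # starting position on the cross section
for _ in range(6):              # six trips the long way around
    theta = field_line_map(theta, 1 / 6)
print(abs(theta - 0.1) < 1e-9)  # True: the line has closed on itself
```

With an irrational transform the crossings never repeat exactly, and with eps nonzero the points no longer lie on a perfect circle, which is the behavior described in the following paragraph.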

You have no doubt heard of fractals and chaos theory, topics that have been developed since the invention of fast computers. It was the mapping of field lines in magnetic bottles that gave impetus to the development of these concepts. Ideally, with well-designed and well-fabricated windings for creating the magnetic field, the locus of intersection points in a stellarator can be perfect circles, with the field lines coming back to a different angle on the circle each time. With a finite number of turns on the helical coil instead of an infinite number, the circle can be distorted into, say, a triangle; but a field line will come back to the same triangle on each pass. In real life, magnet coils are not made perfectly, and there are small perturbations. These can cause wild behavior in the map, causing strange attractors, where the points tend to clump at a particular place; or magnetic islands, which we will discuss later; or complete chaos in the way the points are distributed. The name of the game in stellarators is to create nested magnetic surfaces, in which the magnetic lines always stay on the same surface and intersect each cross section on the same curve. An idealized case is shown in Fig. 4.22. Once created on a magnetic surface, an ion or electron stays on that surface as it goes around the torus thousands or millions of times. The surfaces do not have to be circles, but they never touch or overlap, so the plasma remains trapped by the magnetic field.

A stellarator requires such precision in its manufacture that in the early days they could not hold a plasma very long. In the next chapter, we shall introduce the tokamak. This is a torus, of course, since it has to be doubly connected; but its poloidal field is not generated by external coils but by a current in the plasma itself. This allows it to have self-healing features which can overcome small imperfections in its construction.

Fig. 4.22 Nested magnetic surfaces. A particle stays on its surface as it goes around and around the torus

Fits, Starts, and Milestones

How did we get to this point? The scatter in the points in Fig. 8.1 tells a story. In the short term, progress has been sporadic, with fits and starts caused not only by problems of physics, but also by problems of funding and politics. Glimpses of the history of fusion research can be found in popular books by physicists Amasa Bishop [1], Hans Wilhelmsson [2], McCracken and Stott [3], and Ken Fowler [4].


Fig. 8.7 Inside the vacuum chamber of DIII-D when it is opened up to air

Less technical coverage of people and politics is given in books by journalists Joan Lisa Bromberg [5] and Robin Herman [6], and in an article by Gary Weisel [7]. Here is a nutshell account.

In the USA, three groups started research on controlled fusion in 1951-1952: one at Livermore, California, headed by Richard F. Post; one at Los Alamos, New Mexico, headed by James Tuck; and one at Princeton, New Jersey, headed by Lyman Spitzer, Jr. It was obvious that the hydrogen bomb reaction was a source of a huge amount of energy, if only it could be released slowly in a controlled way. It was not obvious how to do it. All agreed that trapping and holding a hot plasma would be necessary. Dick Post proposed to use magnetic mirrors, which we shall describe in Chap. 10. Jim Tuck proposed to use pinches (Chap. 7), in which the entire magnetic field is generated by plasma currents. These devices suffered, of course, from the kink instability, which was not known at that time. Tuck had the foresight to name his machine the Perhapsatron. At Princeton, Lyman Spitzer, an astronomer, designed the figure-8 torus, which he named, of course, a Stellarator. A little later, a fourth program started at Oak Ridge, Tennessee, based on another mirror machine, the DCX. This group emphasized experiments which ran continuously (hence DC) rather than in pulses, and eventually included the curiously named ELMO Bumpy Torus. In England, the initial efforts concentrated on pinches, particularly the toroidal pinch, which is a torus like a tokamak, but with a poloidal confining field produced by a large toroidal current. In Russia, research began at the Kurchatov Institute in Moscow with a small torus which they named the Tokamak, invented by Igor Tamm and Andrei Sakharov. Other nations did not join in until after the first milestone, the Geneva conference of 1958, when these secret programs were declassified and revealed.

In the years before that, the US program grew rapidly with the enthusiastic support of Atomic Energy Commission chairman Lewis L. Strauss. The program was named Project Sherwood after the name of James Tuck, reminiscent of Friar Tuck of Sherwood Forest. Strauss kept the program classified and well funded with the aim of beating out the UK and the USSR in achieving fusion. Sherwood conferences were held yearly, and there were some memorable occasions. In 1956, the meeting was hosted by Oak Ridge at Gatlinburg, Tennessee, and most attendees found out for the first time the meaning of “dry town.” Even without lubrication, Lyman Spitzer regaled the group with his rendition of songs by Gilbert and Sullivan, which he sang from memory. In 1957, the meeting was in Berkeley, California, and a movie theater had to be taken over in the daytime and secured for the classified meeting. By sheer coincidence, the movie that was playing that week was “Top Secret.” From 1952 to 1954, James van Allen, who discovered his famous radiation belts, built the B-1 stellarator at Princeton, a machine which the newly hired young experimentalists inherited in 1954.

Meanwhile, Spitzer had assembled a strong theoretical group, whose magnum opus was the elegant paper An energy principle for hydromagnetic stability problems, published in 1958 [8]. This paper by Bernstein, Frieman, Kruskal, and Kulsrud did more than anything else to establish plasma physics as a respectable new field in the eyes of all physicists. A calculational method based on minimization of energy was given that could predict the boundaries of stable MHD operation even in toroidal machines with complicated magnetic geometries. This tool allowed experimentalists to build machines that were stable against the Rayleigh-Taylor and kink instabilities, among others, that were discussed in Chaps. 5 and 6.

The 1958 Atoms for Peace conference was organized by the IAEA (International Atomic Energy Agency), formed in 1957 by the United Nations. Based in Vienna, Austria, the IAEA has sponsored the plasma physics and controlled fusion conference every two years since then. A large contingent from Project Sherwood was sent to Geneva, flying across the Atlantic on propeller planes. Preceding the team were tons of display equipment managed by the Oak Ridge experts. Not only were there models such as the figure-8 stellarator shown in Fig. 4.18, but actual operating machines were also transported, including the power supplies and control equipment needed to make them work. No expense was spared. England also put on a large and splendid exhibit, featuring their toroidal pinch, the Zeta. Meanwhile, the USSR exhibit featured the Sputnik, which they had just launched to open the space age. Their fusion machine, the tokamak, was secondary. The tokamak on exhibit looked like a formless, dark, unrecognizable piece of iron and was not made to work. This was how the tokamak age began. But the gauntlet was thrown by the USA, the UK, and the USSR; and the race was on.

At the Geneva conference, the British team announced that neutrons characteristic of fusion reactions had been observed in Zeta. This would have been the first demonstration of fusion created by hot plasma. Unfortunately, it was found that these neutrons came from energetic ions striking the wall, not from the thermal ions in the body of the plasma. As explained in Chap. 3, ion beams cannot produce net energy gain; that requires a thermonuclear reaction. The Brits had been careless and had stumbled. It was an embarrassing moment for their leaders, Peter Thonemann and Sebastian “Bas” Pease, two gentlemen who were the best friends one could have. The idea of a toroidal z-pinch (zed-pinch to Englishmen) has survived, however, as a possible advanced alternative to the tokamak, aided by a brilliant theory by their countryman, Bryan Taylor.

The 1960s saw progress on many fronts. The most important was the announcement in 1968 by Lev Artsimovich, the driving force of the Russian effort, that the confinement time was 30 times longer than the Bohm time and record-breaking electron temperatures had been achieved in their T-3 tokamak. Recall that Bohm diffusion, caused by microinstabilities, was limiting confinement times to the millisecond regime, so this was important progress if it could be believed. The scientific community was skeptical, since Russian instruments were comparatively primitive. In 1969, an English team headed by Derek Robinson flew to Kurchatov with a laser diagnostic tool that the Russians did not have. They measured the plasma in the T-3 and found that the Russian claims were correct. The tokamak had to be taken seriously. Soon thereafter, research tokamaks began appearing at General Atomics and several universities in the USA, as well as in many locations in Western Europe and Japan. Even the venerable Model C stellarator at Princeton was converted to a tokamak in 1970. In retrospect, the invention of the tokamak was a lucky break. Its self-curing feature of sawtooth oscillations was not foreseen, nor were the gifts from Mother Nature listed in Chap. 7. The cures for Bohm diffusion could have been laboriously found in any of a number of magnetic bottles, some of which may turn out to be more suitable for a reactor than a tokamak. It was concentrating on a single concept, the first promising one, that advanced the tokamak to its present status.

Throughout the 1960s, the Princeton group whittled away at the Bohm diffusion problem, clarifying the microinstabilities responsible for that enhanced loss rate. Much of this work was basic experimentation done in linear machines, which did not suffer from the complicated field lines of stellarators and tokamaks. In the USSR, Mikhail Ioffe at his institute in St. Petersburg invented the “Ioffe bars.” These were four bars carrying current to form a magnetic well (“minimum-B”) configuration in a mirror machine, thus stabilizing the most troublesome instability in those confinement devices. Though mirror confinement is outside our scope here, the minimum-B concept is also used in tokamak configurations. These results, as well as the ones from the T-3 tokamak, were presented in the memorable IAEA meeting of 1968. After the technical sessions in Moscow, Artsimovich led the entire conference to a big party in Novosibirsk, the science city deep in Siberia. The party was held at a large artificial lake made by cutting down trees and covering the stumps with water. Long picnic tables were set up on the shores and food served with Russian hospitality. It seemed that the tables for 60-second chess games must have stretched for 100 yards. Here, plasma physicists from many countries got acquainted on a personal level. It was the beginning of international cooperation and competition.

Another milestone was announced at the Novosibirsk meeting when the General Atomics group showed the picture of Fig. 8.8, which completely surprised the Russians. Had the Americans trumped them with the resources to build a torus large enough to hold a person standing up? Actually, it was not a tokamak or stellarator


Fig. 8.8 Inside the toroidal octopole at General Atomics (courtesy of Tihiro Ohkawa and published in Chen [10])

but an “octopole,” spelled “octupole” when another one was built at the University of Wisconsin by Don Kerst. It had four current-carrying rings suspended by thin wires within the plasma, creating a magnetic well. The plasma was absolutely stable in such a magnetic field, and the classical diffusion rate, caused by collisions alone, was observed for the first time [9]. Being a pure physics experiment, the octopole did not require a large, expensive magnetic field, and it was not the advanced fusion machine that the Russians had feared. Internal conductors would not be practical in a real reactor.

The 1970s was a period of euphoria, with Artsimovich predicting scientific breakeven by 1978, and Bob Hirsch, then head of fusion research in the Atomic Energy Commission, pushing for an even earlier date. The prospect of an infinite energy source evoked such lyrical epithets as “Prometheus Unbound!”. With the difficulty of magnetic confinement recognized, the importance of controlling fusion was compared with that of inventing fire. Funding started to increase when James R. Schlesinger became AEC chairman on the way to the CIA and Defense. Support for fusion energy was further escalated by the oil crisis of 1973, when a speed limit of 55 miles per hour was mandated throughout the USA. The dramatic increase in the fusion budget is shown in Fig. 8.9, reaching a peak of almost $900M annually in 2008 dollars. Championed by Representative Mike McCormack (D-WA), Congress passed the Magnetic Fusion Engineering Act of 1980, which laid out the plans and the budget needed to build a demonstration reactor DEMO by the year 2000. The Act was never funded as passed. Tired of promises that fusion would be achieved in 25 years regardless of when the question was asked, Congress began cutting the fusion budget. Ed Kintner took over the fusion office from Hirsch in 1976 and had to reorganize priorities to fit available funds. Many alternative



Fig. 8.9 US fusion research budget in 2008 dollars (adapted from data from Fusion Power Associates, Gaithersburg, VA)

approaches to magnetic confinement still existed at that time,2 and Kintner's plan was to explore them while keeping the tokamak as the flagship on which critical engineering tests were made. Nonetheless, several large projects ultimately had to be canceled, including the Fusion Materials Test Facility and MFTF-B, the world's largest superconducting magnet built for mirror fusion. That fusion would always be 25 years in the future was made a self-fulfilling prophecy by the decrease in funding.

Curiously enough, the peak in funding in Fig. 8.9 follows a similar graph of the price of oil at the time.3 Unfortunately, this did not happen in the oil crisis of 2008, since other energy alternatives such as solar and wind power were available, and the USA was at war in Iraq. The dissolution of the Soviet Union in 1991 had a major effect on the willingness of Congress to support fusion. The threat of being outdone by the Russians was no longer there, and the attitude was to let the friendly nations which are more dependent on foreign oil bear the main expense. As a result, the USA, which had been the world leader in fusion development, slowly lost its preeminent position to the UK and Japan.

The peak funding levels of the 1970s nonetheless enabled the start of the billion-dollar machines that set milestones two decades later. The TFTR at Princeton4 began construction in 1976 and ran from 1982 to 1997. This was a big step because it was the first machine made to run with DT rather than helium or deuterium. Once tritium is introduced, the DT reaction would produce 14-MeV neutrons, which would activate the stainless steel walls. Massive shielding would be required, and maintenance could be done only by remote control. By 1986, TFTR had set records in ion temperature (50 keV or 510,000,000°C), plasma density (10¹⁴ cm⁻³), and confinement time (0.21 s), but of course not all at the same time. In 1994, a 50-50% DT mixture was heated to produce 10.7 MW of fusion power. This is only about 1% of what a power plant would give and occurred only in a pulse, but it was the first demonstration of palpable power output. Before it was decommissioned, TFTR also demonstrated bootstrap current and reversed shear, effects described in Chap. 7.

Close on the heels of the TFTR, western Europe built an even larger machine, the Joint European Torus, JET, also capable of using DT fuel. Designed in 1973-1975 and constructed in 1979, it has operated from 1983 until now. It was funded by the countries of Euratom and is now operated under the European Fusion Development Agreement, with participation of over 20 countries.5 Currently, the world’s largest tokamak with a major radius of 3 m, it is also powered impressively with a magnetic field of 3.45 T (34.5 kG), total heating power of 46 MW, and a toroidal current of 7 MA. It set a record with a pulse of 2 MA that lasted 60 s. In 1997, JET announced a new world record with DT fuel, producing 16 MW of fusion power and keeping 4 MW going for 4 s. JET is being modified for experiments in support of ITER, the large international project described at the end of this chapter.

The third large tokamak of this era is Japan’s JT-60, which started operating in 1985. It plays a leading role in researching effects at the forefront of tokamak science, such as reversed shear, H-modes, and bootstrap current. Much of this is too technical for this book, but JT-60 has set some world records which are easy to understand. In 1996, it achieved the highest fusion triple product. Recall that the triple product is, more exactly,

Triple product = n Ti τE,

where τE is the energy confinement time. The value achieved was 1.5 × 10²¹ keV s/m³, close to the value needed for energy breakeven, and only about a factor of seven below that required for a reactor. Of course, this was in a pulse and not in steady state. In 1998, JT-60 set a record for Q, the ratio of fusion energy to plasma heating energy, at Q = 1.25. However, since JT-60 was not designed to handle tritium, the experiment was done in deuterium and the result extrapolated to DT. The highest ion temperature of 49 keV was also reported in JT-60. The machine excelled in long pulses, running steadily for as long as 15 s, or for 7.4 s while the bootstrap fraction was 75%. Perhaps most impressive was the production in 2000 of a plasma with zero current over 40% of the minor radius. The current in an outer shell held the plasma even though there was no confinement in the current hole. This is exactly the profile that is suitable for operation with a large bootstrap current fraction.
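The quoted record can be decomposed into plausible factors. The individual values below are illustrative assumptions chosen to be typical of a large-tokamak discharge (the text quotes only the product, not its factors):

```python
# Illustrative (assumed) factors multiplying to JT-60's record
# triple product n * Ti * tauE.
n     = 1.0e20   # plasma density, m^-3
T_i   = 50.0     # ion temperature, keV
tau_E = 0.3      # energy confinement time, s

triple = n * T_i * tau_E
print(f"{triple:.1e} keV s/m^3")   # 1.5e+21, the record quoted above
```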

By focusing on these three machines, we have had to omit the great contributions of other large machines such as DIII-D and ASDEX, as well as those of hundreds of smaller tokamaks built to study particular effects. Though not tokamaks, there are also large machines of the stellarator type, such as Wendelstein 7 in Germany and the Large Helical Device in Japan. No large tokamaks had been built since the turn of the century until two Asian machines went online in 2007: the KSTAR in Daejeon, Korea and the EAST (Experimental Advanced Superconducting Tokamak) in Hefei, China. You can guess what KSTAR stands for. Both of these machines use superconducting coils cooled by liquid helium, requiring a second vacuum system to keep the coils cold. The development of large superconductors is an important step toward a fusion reactor.

As can be seen in Fig. 8.9, the US fusion budget steadily declined in the 1980s and 1990s. Construction of large machines had been completed; there was no oil crisis or competition from the USSR; and people were disillusioned about the prospect of ever achieving fusion. In particular, members of Congress were reluctant to support a project that could not be completed in their terms of office. Major sources of funding shifted to countries which have very limited fossil fuel reserves, and the USA slowly lost its lead at the forefront of fusion research. In 1995, a Fusion Review Panel headed by John P. Holdren and Robert W. Conn submitted a report6 to President Clinton’s Commission of Advisors on Science and Technology on a requested evaluation of the fusion situation. The Panel estimated that progress to a demonstration reactor by 2025 would require annual funding levels averaging $645M between 1995 and 2005, with a peak of $860M in 2002. Should budgetary constraints not permit this level, alternate scenarios were also given. At a realistic level of $320M/year, the best that could be done was to maintain the expert community in plasma science and fusion technology while expanding international participation. With this devaluation, the Magnetic Fusion Energy Program was changed to the Fusion Energy Sciences Program. The restructured program was presented to the DOE Office of Energy Research by the Fusion Energy Advisory Committee, chaired by Conn, in 1996 [13]. As seen in Fig. 8.9, the budget has been maintained at the $300M level since that time, partly through the efforts of Undersecretary for Science Raymond Orbach under President Bush. Though DIII-D is the largest tokamak extant in the USA, the level of fusion science and innovation nonetheless leapt forward, with many intermediate-sized devices in universities and with advances in computation and theory.

It was in this period that burning plasma became the catchword, and planning for a large international tokamak to achieve this, the ITER, began. The success story of the negotiations deserves its own section. This is presently our best chance to move forward in making our own sun. Meanwhile, we need another scientific interlude to clarify the uncertainties that still exist in fusion science.

Operating a Fusion Reactor

Startup, Ramp-Down, and Steady-State Operation

Turning on the power in a large tokamak is not an easy task. The vacuum system, the cryogenic system, discharge-cleaning of the walls, the magnetic field system, the tokamak current drive, the various plasma heating systems, and the auxiliary systems all have to be started up in sequence. Operators have learned by experience how to do this in large tokamaks. The plasma has to be maintained stably while it is being heated and while the current is being increased in synchronism with the toroidal magnetic field. Each power supply has to be ramped up at a certain time at a certain rate. Turning the discharge off also requires careful ramp-down of each system. Only after a good routine has been found can automatic controls take over.

All present tokamaks run in pulses, not continuously. Even if the pulses last for minutes or an hour, they will not uncover problems that will arise with truly steady-state operation. In the 1980s, a machine called the ELMO Bumpy Torus was run at the Oak Ridge National Laboratory. Though the magnetic configuration never caught on, the machine was run in steady state and revealed problems that are not seen in pulsed machines. The Tore Supra tokamak in Cadarache, France, near the ITER site, has been gathering information on long-pulse operation for 20 years [20]. It is a large tokamak with high magnetic field, large current, and powerful heating. The first wall is water-cooled boronized carbon. In a deuterium plasma, the retention of deuterium by the carbon was found to be significant. This is one reason for rejecting carbon as a wall material. Damage to the ICRH antennas was noted. Electrical faults in the magnet system were found to limit the length of discharges; turning the lower-hybrid power on slowly greatly alleviated this problem. Water leaks were found to occur 1.7 times per year. The frequency of disruptions was also recorded. These were found to be caused mainly by the flaking of carbon off the walls after many days of operation. Pulses lasting 1 or 2 seconds were possible with transformer-driven currents, but with the addition of lower-hybrid current drive, 6-min pulses with 3 MW of lower-hybrid heating (LHH) were achieved in 2007. At the 2-MW level, 150 consecutive 2-min discharges could be routinely produced [21]. These are the types of problems that will be encountered when ITER is operated in continuous mode.

Maintaining the Current Profile

Advanced tokamaks utilize reversed shear and internal transport barriers for enhanced plasma confinement. These require precise shaping of the safety factor q (see Chap. 8), which determines how the twist of the magnetic field lines changes across the radius. The shape of the q(r) curve controls the stability and loss rate of the plasma. Since the twist is determined by the poloidal field created by the plasma current, this current has to be shaped in a particular way. Some of the current is naturally produced by the bootstrap effect (Chap. 9); the rest has to be driven by lower-hybrid and electron cyclotron current drive. The blue curve in Fig. 9.29 shows an example of a q(r) curve which stays above q=2 and gives reverse shear. The red curve shows the auxiliary current needed to produce this q(r). Only precise control of the localized heating can produce this current profile. As the plasma starts up, the currents will be changing, and the power supplies will have to be programmed to keep the current in a stable shape.

Glass Lasers

Major laser facilities are so large that they cannot be shown in a single picture. A simplified schematic is shown in Fig. 10.42. A weak pulse of the right spatial and temporal profiles is generated in an oscillator at the left. The same pulse is sent into each laser chain. Each identical chain consists of increasingly large amplifiers to raise the power of the beams. At the right-hand side, the beams enter a switchyard consisting of mirrors to bring the beams into the target chamber (the white sphere) at the desired angles. The beams have a finite length, since light travels at about 1 foot (30 cm) per nanosecond, so a 1-ns pulse is only a foot long. Each beam path has to have the same length for the beams to arrive at the same time. In between the amplifiers are

image413

Fig. 10.42 Simplified schematic of a glass laser installation (Photo from the author’s archives; original from a national laboratory: Livermore, Los Alamos, or Sandia.)

optical units to reject the reflected light and to maintain the same time variation, spatial profile, and smoothness that the beams started with. The light is divided into multiple beams not only to illuminate the target uniformly, but also to avoid overheating the glass in each beam.
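The foot-per-nanosecond figure and the path-matching requirement are simple speed-of-light arithmetic, sketched below; the 10-ps synchronization tolerance is an illustrative number of mine, not a NIF specification.

```python
C = 2.998e8  # speed of light in vacuum, m/s

def pulse_length_m(duration_s):
    """Spatial extent of a light pulse: length = c * duration."""
    return C * duration_s

def path_match_tolerance_m(timing_tolerance_s):
    """Allowed path-length mismatch for beams to arrive within a given time."""
    return C * timing_tolerance_s

one_ns_pulse = pulse_length_m(1e-9)        # ~0.30 m: a 1-ns pulse is about a foot long
mismatch = path_match_tolerance_m(10e-12)  # ~3 mm of path error per 10 ps of timing error
```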

The NIF laser has 192 beams divided into 48 groups of four each. The neodymium in the doped glass is driven into an excited state by a pulse of light from flash lamps. Originally, the lamps were like the electronic flashes in cameras. Recent conversion to solid-state units like LED flashlights has greatly reduced the complexity and cost. It takes 400 MJ of capacitors to store the energy for the lamps. An excited amplifier lases when it is tickled by the light from the previous stage. The total length of each light path is 300 m, the length of three football fields for either kind of football. Nd-glass lasers produce infrared light with 1.06 μm wavelength. This is upshifted to 3ω with single crystals of potassium dihydrogen phosphate. The 3ω light has a wavelength of 351 nm, which is in the ultraviolet, so different optical materials have to be used. Figure 10.43 shows individual beam tubes in the earlier Nova laser. Figure 10.44 shows the NIF laser bay before it was all covered up. The optical elements in each beam tube have to be held in exact alignment and kept completely dust free. In NIF, the optical equipment between each amplifier stage is preassembled in refrigerator-size boxes so that spares can be slipped into place from below if one element fails.

Nd-glass lasers were developed also at the Institute for Laser Engineering in Osaka, Japan, under the leadership of Prof. Chiyoe Yamanaka. Other important participants in the development of glass lasers were Academicians N. G. Basov and A. M. Prokhorov in competing groups at the Lebedev Institute in Moscow; Kip Siegel, who founded KMS Fusion in Michigan; Moshe Lubin, who founded the Laboratory for Laser Energetics in Rochester, New York; the group at the Rutherford-Appleton Laboratory in England; and Edouard Fabre’s laboratory

image414

image415

Fig. 10.44 View of the NIF laser bay (https://lasers.llnl.gov/multimedia/photo_gallery/.)

at the Ecole Polytechnique in Palaiseau, France, which led to the Laser Megajoule being constructed in Bordeaux.

Panels on Every Rooftop

The easiest way to use solar energy is to put a panel on the roof to heat water. This is already done in many countries. Such panels can be seen as one rides on a train in Japan. In a place like Hawaii, the panel does not have to be very big at all; 1 m2 is more than adequate. A panel can be just a flat box with a glass top and a black bottom to absorb all the sunlight (Fig. 3.23). The panel is connected to the usual water heater with two pipes. A small pump circulates the water up to the solar panel and back down to the water heater. The gas- or electricity-driven heater then does not have to turn on as often to keep the hot water at the set temperature. No fancy electronics are needed, so the cost is low. Solar swimming pool heaters are even more economical. The same pump used for the water filter can pump the water up to panels on the roof, from where the water siphons down without further pumping energy. Since the temperature rise in each pass is only a couple of degrees, no high-temperature materials are needed. Black plastic panels, about one by two meters, are used. Each has many small channels that carry the water in parallel. Such panels have lasted over 30 years.
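The couple-of-degrees temperature rise per pass follows from a simple energy balance, sketched below; the panel size matches the text, but the absorbed fraction and flow rate are illustrative guesses of mine, not data for a specific product.

```python
def temp_rise_per_pass(area_m2, insolation_w_m2, absorbed_frac, flow_kg_per_s):
    """Water temperature rise in one pass through a flat collector:
    absorbed solar power divided by (mass flow * specific heat of water)."""
    c_p_water = 4186.0  # specific heat of water, J/(kg*K)
    absorbed_w = area_m2 * insolation_w_m2 * absorbed_frac
    return absorbed_w / (flow_kg_per_s * c_p_water)

# A 1 m x 2 m black pool panel in full sun (~1 kW/m^2), ~70% of the light
# absorbed, with ~10 L/min of water flowing through the panel
dT = temp_rise_per_pass(2.0, 1000.0, 0.7, 10.0 / 60.0)  # ~2 degrees C per pass
```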

The fossil footprint of rooftop solar thermal collectors has been analyzed by the Italians [1]. As with the life-cycle analyses described in the previous section on

image111

Fig. 3.23 The simplest implementation of a solar water heater (http://images.google.com)

Wind, all the energy used in producing the materials and in installation, operation, and maintenance is added up; and the energy recovered in the recycled materials at the end of life is subtracted. The energy comes from conventional sources, mainly fossil fuel plants. This is then compared with the solar energy produced during the lifetime of the equipment. The resulting energy payback time lies somewhere between 1.5 and 4 years. However, the systems considered include an insulated tank on the roof, and this is the main contributor to the weight of the galvanized steel component, which accounts for 37% of the energy used. For systems without a rooftop tank, the energy payback time should be closer to the lower limit of 1.5 years. All the solar heating collected after that is real “green” energy. There is really no reason for every house not to collect the solar energy that falls on its roof.

Photovoltaic (PV) solar panels on the roof are another matter. These are expensive, but they provide electricity, not just heat. It costs about $5 a watt to have PV installed on the roof. Since the electricity use per home in the USA is about 1.2 kW averaged over the whole year, one would need about 5 kW to cover the peak hours. The cost is then 5,000 × $5 = $25,000. People usually pay between $20,000 and $40,000 for their systems, but there is a 30% federal rebate and sometimes also a state rebate in the US. PV systems are usually guaranteed to lose no more than 20% of their efficiency after 25 years. States with net metering will allow the electric meter to count only the external energy used and to run backwards if the solar cells produce more energy than is used. The savings in electricity

image112

Fig. 3.24 A 4.4-kW photovoltaic roof installation (http://www.californiasolarco.com)

bills can pay back the PV cost in about 15 years without rebates or about eight years with rebates. This presumes that there is a large roof area with an unobstructed view to the south (in the northern hemisphere) (Fig. 3.24).

Whether PV solar can pay for itself of course depends on where you live. The number of Peak-Equivalent Hours per Day is a measure of how much usable sunlight is available in a given place. The average in the USA is 3.5-6.5 hours. Winter in the Northwest would give only 1.5-2.5 hours, while summer in the Southwest can give 8 hours. At 2 hours of intense-sun equivalent, a 5-kW PV system would yield 10 kWh of electricity per day. Remembering that the average use per home is 1.2 kW, amounting to 1.2 × 24 = 28.8 kWh/day, we see that even in the Northwest a large system can supply about a third of the electricity requirements. The good news is that even on cloudy days, 20-50% of solar energy can still be obtained.
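This sizing and payback arithmetic can be collected in a few lines. The $5/W cost and 30% rebate come from the text above; the 5 peak-equivalent hours and the $0.20/kWh electricity price are assumptions of mine for a favorable location, and the model deliberately ignores degradation, financing, and rate changes.

```python
def daily_yield_kwh(system_kw, peak_equiv_hours):
    """Daily PV output: rated power times peak-equivalent sun hours."""
    return system_kw * peak_equiv_hours

def simple_payback_years(system_kw, dollars_per_watt, rebate_frac,
                         peak_equiv_hours, dollars_per_kwh):
    """Years for bill savings to repay the net installed cost.
    Ignores panel degradation, financing costs, and electricity-rate changes."""
    net_cost = system_kw * 1000.0 * dollars_per_watt * (1.0 - rebate_frac)
    savings_per_year = daily_yield_kwh(system_kw, peak_equiv_hours) * 365 * dollars_per_kwh
    return net_cost / savings_per_year

northwest_winter = daily_yield_kwh(5.0, 2.0)  # 10 kWh/day vs. 28.8 kWh/day used
payback = simple_payback_years(5.0, 5.0, 0.30, 5.0, 0.20)  # roughly 10 years
```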

Of course, the sun does not shine when we need electricity the most; namely, at night when the lights are on and we are watching TV. The energy has to be stored. In the Southwest, the peak power is so large that it cannot be used right away; it has to be stored. This requires batteries, which increase the cost of solar energy beyond that for the panels themselves. The most economical batteries available today are the lead-acid batteries used in cars. A whole bank of them will have to be installed in the house. There are larger, more compact lead-acid batteries available. These are used, for instance, in African safari camps in case diesel fuel for their generators cannot be delivered. A 20-foot (6 m) row of these can supply the minimal needs of a camp for three days. PV power, stored or otherwise, cannot run appliances directly because it is direct-current (DC) power. An inverter has to be used to convert the DC to AC at 60 cycles/s in the USA and 50 elsewhere. This is an additional expense that must be counted.
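The size of that battery bank is easy to estimate. In the sketch below, the 1.2-kW average load comes from the text; the 12-hour night, the 12-V 100-Ah car-type battery, and the assumption that only half its capacity is usable (deep discharges shorten a lead-acid battery's life) are illustrative numbers of mine.

```python
import math

def battery_count(overnight_kwh, batt_volts, batt_amp_hours, usable_frac):
    """Number of lead-acid batteries needed to carry a load overnight.
    Only part of each battery's capacity is usable without deep discharges
    that shorten its life, hence usable_frac."""
    usable_kwh_each = batt_volts * batt_amp_hours / 1000.0 * usable_frac
    return math.ceil(overnight_kwh / usable_kwh_each)

# 1.2 kW average load through a 12-hour night, 12-V 100-Ah batteries,
# half the capacity usable
n_batteries = battery_count(1.2 * 12, 12.0, 100.0, 0.5)  # 24 batteries
```

Two dozen car batteries is indeed "a whole bank of them," which is why storage, not the panels, often dominates the cost of an off-grid system.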

There are other impediments to local solar power that are not widely known. Shadows, for instance, can completely shut off a solar panel. This is because each solar cell produces only 0.6 V of electricity. The cells in a panel are connected in series to build up the voltage to at least 12 V, which the batteries and inverters need.

If one cell is in shade, it cuts off the current from all the cells. This is like the old strings of Christmas tree lights which were connected in series instead of parallel. If one bulb burns out, the entire string goes out.
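The Christmas-light analogy can be written down directly: in a series string the same current must flow through every cell, so the weakest cell sets the current for the whole string. The cell counts and currents below are illustrative.

```python
def string_current_amps(cell_currents):
    """In a series string the same current flows through every cell, so the
    string delivers no more than its weakest (most shaded) cell."""
    return min(cell_currents)

def string_voltage_volts(n_cells, volts_per_cell=0.6):
    """Series-connected cells add their ~0.6 V each."""
    return n_cells * volts_per_cell

full_sun = string_current_amps([5.0] * 20)            # 5.0 A
one_shaded = string_current_amps([5.0] * 19 + [0.1])  # 0.1 A: one cell chokes the string
v_string = string_voltage_volts(20)                   # 12.0 V, enough for a 12-V battery
```

Real panels work around this by wiring bypass diodes across groups of cells, so a shaded group is skipped at the cost of some voltage rather than all of the current.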

Types of Reactors [41]

A boiling water reactor is a light-water reactor (LWR) using H2O as both moderator and coolant. The fuel rods are simply placed in the water, which is allowed to boil under pressure, producing steam directly for the turbines. The water, however, is exposed to radioactive material. A pressurized water reactor (PWR), or the European version called the EWR, contains the water under 153 atm of pressure so that it cannot boil at its temperature of 322°C. This water goes to a heat exchanger to transfer the energy to outside water which does not touch any radioactive material. All the reactors in France are PWRs. Standardizing to a single type reduces the risk of accidents.

A CANDU (Canadian Deuterium Uranium) reactor was invented because Canada had no enrichment facilities. It burns natural uranium containing only 0.7% U235. With so few fissionable nuclei, the moderator has to be heavy water, D2O; ordinary hydrogen would absorb too many neutrons. The fuel rods are double tubes: the inner tube contains the fuel pellets and cooling water, while gas in the outer tube insulates the moderator, which stays near room temperature, from the heat. No thick domed vessel is necessary to contain the reactor. With so little U235, the power output is only 20% of that of other LWRs, so the fuel has to be replenished often. This is done continuously, going from one end of the rods to the other. There is no proliferation risk due to enriched fuel, but plutonium is produced and comes out with the expended fuel. It can be stolen more easily since it comes out continuously instead of at a fixed time under heavy guard [41].

AGRs (advanced gas-cooled reactors) were developed early in England, using a graphite moderator and 600°C carbon dioxide as a coolant. Natural uranium could be used at lower temperatures, where a low-absorbency “Magnox” fuel cladding could be used, but enrichment is needed for the advanced types. Yet another acronym is the European pressurized reactor (EPR), a safer type being constructed in Finland and France. These two projects have been delayed by cost overruns and safety protests.

Liquid-metal fast breeder (LMFBR) reactors are an entirely different breed. Fast refers to the fast, or prompt, 2-MeV neutrons emitted in fission. In LWRs, these neutrons have to be slowed down by the moderator before they can cause U235 to fission. In breeders, the fuel is U238 with 10% Pu239, and U235 is not used. Twelve percent of the fast neutrons cause fission in the U238, and the rest are captured. But as Fig. 3.59 shows, the capture of a neutron by U238 produces an atom of Pu239, which is a good fuel. Those neutrons that do not get captured immediately eventually slow down and cause U238 and Pu239 to fission. By covering the chamber with a uranium blanket, which can be made of depleted uranium from an LWR, more plutonium can be produced than is used. Breeder reactors can breed fuel from natural uranium.

No moderator is needed; in fact, it is essential not to have any material that will slow down the 2-MeV neutrons. However, there has to be a coolant, and the coolant must not slow down or capture the neutrons either. There are only two elements in the periodic table that can be used: sodium (Na) and lead (Pb). These can be used in liquid form and do not capture many fast neutrons. Sodium, which melts at 98°C, is chosen for convenience in spite of its nasty nature. Although it is harmless when combined with another nasty element (chlorine) in table salt, pure sodium will explode if it touches water. It is the liquid metal in the LMFBR. These reactors cannot go critical with normally enriched uranium; a chain reaction requires 10-12% enrichment.

This technology has been well tested in the Superphenix reactor on the Rhone river in France. The 3,000 tons of sodium coolant were in their own closed loop, and heat was exchanged to a secondary sodium loop not exposed to radioactivity. Steam was created in a second heat exchanger. The reactor ran between 1995 and 1997, producing 1.2 GW of electricity between repairs. The sodium ran at 545°C and never boiled, so there was no high pressure. The fuel elements had thicker walls than in LWRs and produced twice the energy per ton. Sodium leaks have been the main problem. A smaller LMFBR, the Monju in Japan, developed a leak in the intermediate coolant loop in 1996. No radioactivity was released, but the sodium fumes made people sick. The reactor was restarted in 2010. LMFBRs are ready for the next generation of reactors; gas cooling in the intermediate heat loop is the only improvement needed.

Reactor Control

A chain reaction requires active control. The reproduction ratio of neutrons has to be exactly one: with too few neutrons, the reaction dies; with too many, it runs away. The reaction rate depends on the temperature of the moderator (how much it absorbs) and the freshness of the fuel. Fission occurs so fast that it would be impossible to stop a chain reaction except for a lucky circumstance: a small fraction of the neutrons are delayed. In uranium, 0.65%, and in plutonium, 0.21%, of the neutrons from a fission event are emitted only after some 10 seconds. Since every neutron is needed, the chain reaction does not proceed instantaneously; there is a time lag. The moderator and coolant in the reactor have high heat capacity, so the temperature inside the reactor changes even more slowly. There can be as much as 20 minutes to react to a temperature change. Control rods made of boron carbide (B4C), a powerful absorber, are moved in and out of the moderator to control the neutron population. This is normally done automatically, and reactors have run for years without trouble. The few accidents that have occurred are due to human error in response to an abnormal condition. The danger is not only when the chain reaction is going too fast and the temperature rises. If the temperature goes too low, voracious neutron absorbers like Xe135 can accumulate and poison the reactor. It cannot be restarted until all the Xe135 has built up and then decayed with a half-life of about 8 hours [41].
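The stabilizing effect of the delayed neutrons can be seen in a one-delayed-group point-kinetics model, a standard textbook simplification; the delayed fraction, precursor decay constant, neutron generation time, and reactivity step below are representative round numbers, not parameters of any particular reactor.

```python
def point_kinetics(rho, beta=0.0065, lam=0.1, gen_time=1e-4, t_end=10.0, dt=1e-4):
    """One-delayed-group point kinetics, integrated by forward Euler:
        dn/dt = ((rho - beta) / gen_time) * n + lam * c
        dc/dt = (beta / gen_time) * n - lam * c
    n = neutron population (normalized), c = delayed-neutron precursors.
    Starts from equilibrium at n = 1 and applies a reactivity step rho."""
    n = 1.0
    c = beta * n / (gen_time * lam)  # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / gen_time) * n + lam * c
        dc = (beta / gen_time) * n - lam * c
        n += dn * dt
        c += dc * dt
    return n

# A reactivity step well below the delayed fraction beta: the neutron
# population creeps up on a tens-of-seconds timescale, leaving time for rods.
n_after_10s = point_kinetics(rho=0.001)  # only ~1.4x after 10 s
# Without the delayed neutrons, the e-folding time would be gen_time/rho:
prompt_efold_s = 1e-4 / 0.001            # 0.1 s -- far too fast to control
```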

Turbulence and Bohm Diffusion

A picture of David Bohm was taped to the wall of Bob Motley’s office, and our group of experimentalists at Princeton’s Plasma Physics Laboratory took turns throwing darts at it. The frustration came from an unexplained phenomenon called “Bohm diffusion,” which caused plasmas in toruses to escape much faster than any classical or neoclassical theory would predict. In spite of all efforts to suppress the known instabilities, the plasma was always unstable, vibrating, rippling, and spitting itself out, like the foam on violently breaking surf. In Chap. 5, classical diffusion was described. This is a process in which collisions between ions and electrons cause them to jump from one field line to another about one Larmor radius away. The classical confinement time is long, of the order of minutes. In this chapter, we described neoclassical diffusion, in which particles jump from one banana orbit to the next. The neoclassical confinement time is still of the order of seconds, longer than needed. Bohm diffusion caused the plasma to be lost in milliseconds. Major instabilities like the Rayleigh-Taylor or kink were no longer there, else the confinement time would have been microseconds. There were obviously some other instabilities that the theorists had not foreseen.

Bohm diffusion was first reported by physicist David Bohm when he was working on the Manhattan Project and, in particular, on a plasma device for separating uranium isotopes. From measurements of the plasma’s escape rate, he formulated a scaling law for this new kind of diffusion. It reads as follows. The diffusion rate across the magnetic field, given by the coefficient D⊥ (pronounced D-perp), is proportional to 1/16 of the electron temperature divided by the magnetic field:

D⊥ ∝ (1/16) Te / B

The 1/16 makes no sense here because I have not said what units Te and B are in, but that number has a historical significance. Whenever Bohm diffusion is observed, there are always randomly fluctuating electric fields in the plasma. Regardless of what is causing these fluctuations, the plasma particles will respond by drifting with their E × B drifts (Chap. 5). Since the size of the noise is related to Te, which supplies the energy for it, and the drift speed is inversely proportional to B, it is not hard to show that the Te/B part is to be expected [1]. But how did Bohm come up with the number 1/16? Bohm had disappeared from sight after he was exiled to Brazil for un-American activities. In the 1960s, Lyman Spitzer tracked him down and asked him where the 1/16 came from. He didn’t remember! So we’ll never know. It turns out that the Bohm coefficient depends on the size and type of turbulence and can have different values, but always in the same ballpark.

Plasma turbulence is the operative term here. Any time there was unexplained noise, it was called “turbulence.” Doctors do the same thing with “syndrome” or “dermatitis.” Figure 6.9 is an example of turbulence; it is simply a wave breaking on a beach. As the wave approaches the beach, it has a regular, predictable up and down motion. But as the water gets shallower, the wave breaks and even foams. The

image215

Fig. 6.9 Turbulence at the beach

motion of the water is no longer predictable, and every case is different. That’s the turbulent part. The regular part is called the linear regime; this is a scientific term that has to do with the equations that govern a physical system’s behavior. Linear equations can be solved, so the linear behavior is predictable. The turbulent part, in the nonlinear regime, can be treated only in a statistical sense, since each case is different. Nonlinear generally means that the output is not proportional to the input. For instance, taxes are not proportional to income, since the rate changes with income. Compound interest is not proportional to the initial investment, even if the interest rate does not change, so the value increases nonlinearly. Population growth is nonlinear even with constant birth rate, in exact analogy with compound interest. Waves, when they are small and linear, will have sizes proportional to the force that drives them. But they cannot grow indefinitely; they will saturate and take on different forms when the driving force is too large. What a wave will look like after it reaches saturation can be predicted with computers, but the detailed shape will be different each time because of small differences in the conditions. Then you have turbulence. The smoke rising from a cigarette in still air will always start the same way, but after a few feet each case will look different.

The turbulence in every fusion device in early experiments was always fully developed; we could never see the linear part, so we could not tell what caused the fluctuations to start in the first place. An example of plasma turbulence in a stellarator is shown in Fig. 6.10. This is what “foam” looks like in a plasma. These are fluctuations in electric field inside the plasma. These noisy fields make the particles do a random walk, reaching the wall faster than classical diffusion would take them.

Turbulence is well understood in hydrodynamics. If you try to push water through a pipe too fast, the flow breaks up into swirling eddies, slowing down the flow. Hydrodynamicist A. N. Kolmogoroff once gave an elegant proof, using only dimensional analysis, that the sizes of eddies generally follow a certain law; namely, that the number of eddies of a given size is proportional to the power 5/3 of the size. Attempts to do this for plasmas yielded a power of 5 rather than 5/3 [1],

image216

Fig. 6.10 Fluctuations in a toroidal plasma

and this has been observed in several experiments. However, plasmas are so complex (because they are charged) that no such simple relation holds in all cases.

The importance of turbulence and Bohm diffusion is not only that it is much faster than classical diffusion, but also that it depends on 1/B instead of 1/B². In classical diffusion, doubling the magnetic field B would slow the diffusion down by a factor of 2², or 4. In Bohm diffusion, it would take a four times larger B to get the same reduction in loss rate. It was this unforeseen problem of “anomalous diffusion” that held up progress in fusion for at least two decades. Only through the persistence of a community of dedicated plasma physicists was the understanding and control of anomalous diffusion achieved. Modern tokamaks have confinement times approaching those required for a D-T reactor.
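The two scalings can be put side by side in a few lines. The sketch below writes the Bohm coefficient as D = (1/16)·Te/B for Te in volts (one common convention; as noted above, the exact factor varies with the turbulence), and the 100-eV, 1-T, 10-cm plasma is an illustrative early-experiment scale, not a specific machine.

```python
def d_bohm(te_ev, b_tesla):
    """Bohm diffusion coefficient in m^2/s for T_e in electron-volts:
    D = (1/16) * T_e / B. The 1/16 is Bohm's historical factor."""
    return te_ev / (16.0 * b_tesla)

# Doubling B only halves Bohm diffusion (1/B scaling)...
bohm_ratio = d_bohm(100.0, 2.0) / d_bohm(100.0, 1.0)   # 0.5
# ...while classical diffusion scales as 1/B**2, so doubling B quarters it.
classical_ratio = (1.0 / 2.0 ** 2) / (1.0 / 1.0 ** 2)  # 0.25

# Order-of-magnitude Bohm confinement time, tau ~ a**2 / D,
# for a 10-cm plasma at 100 eV in a 1-T field:
tau_s = 0.1 ** 2 / d_bohm(100.0, 1.0)  # ~1.6 ms: milliseconds, as observed
```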

The Divertor

Sixty percent of the plasma exhaust is designed to go into the “divertor,” thus sparing the first wall from the major part of the heat load. Materials and cooling methods can be used in the divertor that cannot be used for the first wall. Figure 9.4 shows how this is done. Special coils located at the bottom of the chamber bend the outermost field lines so that they leave the main volume and enter the divertor. Plasma tends to follow the field lines, so that most of it leaves the chamber by striking the surfaces of the divertor rather than the first wall. Only

image300

Fig. 9.4 Two views of a tokamak cross section showing the divertor, the first wall, and some ports for heating and diagnostics equipment or for test modules [30, 31]. In the left diagram, the outer­most magnetic field lines are drawn, showing how they lead the plasma into the divertor. The closed magnetic surfaces in the interior have been omitted for clarity

the plasma that migrates across the magnetic field hits the first wall. The heat load on the first wall can be larger than average when there is an instability such as an ELM or a disruption that takes plasma across the field lines suddenly. The first wall of ITER will have to withstand such heat pulses, but DEMO must be built to avoid such catastrophes.

As can be seen in the diagram, the boundary layer of diverted field lines is very thin, only about 6 cm in ITER. In the divertor, these field lines are spread out over a larger area, and the surfaces which the plasma strikes are inclined almost parallel to the field lines so that the heat is deposited over as large a surface as possible. Tungsten can be used for these surfaces, and even carbon compounds can be used in spite of their tritium retention. The divertor parts are easier to replace than the first wall, so the tritium can be removed periodically. The heat load on the divertor surfaces is huge, some 20 MW/m2, so the cooling system is an important part of the design. Water cooling is possible in ITER, but helium cooling at higher temperatures would have to be used in DEMO and FPPs. The conditions inside a divertor are so intense that they are hard to imagine. Ions with tens of keV energy stream in along the field lines, accompanied by electrons that neutralize their charges. When the ions meet a solid surface, they recombine with electrons to form neutral atoms. There is a dense mixture of plasma with neutral gas made of deuterium, tritium, helium, and impurities, which later have to be separated out in an exhaust processing unit. The neutral gas has to be pumped away fast by vacuum pumps before it flows back into the main chamber and gets ionized again into ions and electrons. To trap the neutrals inside the divertor, a dome-shaped structure has to be added. Figure 9.5 shows the main parts of a divertor designed for ITER. The plasma impinges at a glancing angle onto the high-temperature surfaces made of tungsten and CFC. A heat-sink material, CuCrZr, transfers the heat to water-cooled surfaces.
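The benefit of the glancing angle is a projection effect: the heat flux carried along the field lines is diluted on the target by a factor of sin(tilt). In the sketch below, the 500 MW/m² parallel flux and the 2° tilt are illustrative numbers of mine, not ITER design values; the 20 MW/m² target limit comes from the text.

```python
import math

def surface_flux(parallel_flux_mw_m2, tilt_deg):
    """Heat flux landing on a target tilted at tilt_deg to the field lines:
    the flux carried along the field is diluted by the projection sin(tilt)."""
    return parallel_flux_mw_m2 * math.sin(math.radians(tilt_deg))

# A very large parallel heat flux, spread by a ~2-degree glancing angle,
# comes down toward the ~20 MW/m^2 a divertor target can handle.
q_target = surface_flux(500.0, 2.0)  # ~17 MW/m^2
```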

Water cooling, which is limited to about 170°C, would be insufficient for DEMO and FPP, and cooling by helium gas would have to be used. The helium

image301

would be injected at 540°C and be heated to 720°C, while the tungsten and CFC tiles would get to 2,500°C [3]. The coolant would be injected under pressure to cool a small dome as illustrated in Fig. 9.6. These domes are then assembled into nine-finger units, and these units then form a uniformly cooled surface.

Divertor technology is in better shape than other problem areas because divertors are small, and they have already been extensively tested. For instance, meter-sized tungsten and CFC divertor segments (Fig. 9.7) have been tested in Karlsruhe, Germany, up to heat fluxes of 20 MW/m2. In that large laboratory, divertor materials

image302

Fig. 9.6 Possible design of a helium cooling system for a divertor [31]. Helium cools a dome-shaped “finger” (a), and nine of these are assembled into one unit (b). A number of these together then form a cooled surface (c)

image303

have been neutron-irradiated, and their manufacturing and assembly techniques have been worked out. Even remote handling techniques for replacing divertors have been tested. It seems possible to design water-cooled divertors for heat fluxes up to 20 MW/m2 and helium-cooled divertors up to 15 MW/m2 [31].