
A Spherical Tokamak FDF

Spherical tokamaks are tokamaks with very small aspect ratio, which is the ratio of major radius to minor radius. They are fat doughnuts with a very small hole in the middle. These are hard to make, but they have advantages in stability. They are described in Chap. 10. Peng et al. [25] have designed a fusion development facility using a spherical tokamak (FDF-ST) with an aspect ratio of 1.5. This is shown in Fig. 9.33. The magnetic coils are normal-conducting copper, even the narrow center leg going through the small central hole. With a major radius of only 1.2 m, the machine is much smaller than other designs and yet can generate a neutron wall loading of 1.0 or even 2.0 MW/m². The toroidal field is 1.2 T, and the plasma current is 8.2 MA. The fusion power is only 75 MW, or 2.5 times the input power. The machine can accommodate 66 m² of blanket area. If this can be engineered, it would be the least costly nuclear test facility to prepare for DEMO.

Pulsed Power

This term describes systems which can deposit huge amounts of energy in a short time directly, without lasers. Alan Kolb, one of the earliest fusion researchers, left that program to start the field of pulsed power by founding Maxwell Laboratories in San Diego, California, to develop large, fast capacitors for storing energy. They were the first to put “a megajoule in a can.” A megajoule is not an incomprehensible amount of energy. It is the heat energy of a pot (3 L) of water at boiling temperature. A 50 ampere-hour car battery contains 2 MJ. What matters is how fast the energy can be released to get power. Power is the rate of energy delivery. While a car battery can be drained in an hour, capacitors can release their energy in nanoseconds. Capacitors can store over 2 J/cm³. A megajoule can be crammed into 500,000 cm³, which is the size of a cube 80 cm (30 in.) on a side. A pulsed power installation has hundreds of these.
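As a quick check of these numbers, here is a back-of-the-envelope calculation; the one-hour battery drain time and the 100-ns release time are illustrative assumptions (the text only says "nanoseconds"), not figures from the study.

```python
# Back-of-the-envelope check of the capacitor numbers quoted above.
energy_density = 2.0                     # J/cm^3 (from the text)
megajoule = 1.0e6                        # J
volume_cm3 = megajoule / energy_density  # 500,000 cm^3
cube_side_cm = volume_cm3 ** (1 / 3)     # ~79 cm, the "80 cm cube"

battery_energy = 2.0e6                   # J, the 50 A-h car battery
battery_power = battery_energy / 3600    # ~560 W if drained in one hour (assumed)
capacitor_power = megajoule / 100e-9     # ~1e13 W if 1 MJ is released in 100 ns (assumed)

print(f"cube side ~{cube_side_cm:.0f} cm")
print(f"battery ~{battery_power:.0f} W, capacitor bank ~{capacitor_power:.1e} W")
```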

To get high voltage, the capacitors are hooked up in a Marx bank, shown in Fig. 10.52. In this arrangement, a DC power supply is connected to each capacitor as shown in the top half of the figure. After charging, the switches between the


Fig. 10.52 Schematic of a Marx bank. At the top, the capacitors are charging in parallel; at the bottom, they are discharging in series


Fig. 10.53 Diagram of the Z-machine, the world’s largest pulsed power machine [46]

capacitors are opened, as shown in the bottom half, and the diagonal switches are closed, connecting the capacitors in series. A much higher voltage is then produced than a single power supply can generate.
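A minimal sketch of the Marx-bank arithmetic: N capacitors charged in parallel to a voltage V deliver roughly N times V when switched into series. The stage count, voltage, and capacitance below are hypothetical values chosen only for illustration, not parameters of any real machine.

```python
# Marx bank: charge N capacitors in parallel, discharge them in series.
n_stages = 20          # number of capacitor stages (assumed)
v_charge = 100e3       # charging voltage per stage, volts (assumed)
c_stage = 1.0e-6       # capacitance per stage, farads (assumed)

v_output = n_stages * v_charge                    # erected voltage: ~2 MV
stored = n_stages * 0.5 * c_stage * v_charge**2   # total stored energy, joules

print(f"output ~{v_output/1e6:.1f} MV, stored energy ~{stored/1e3:.0f} kJ")
```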

The current is then carried to the machine in a Blumlein. This is a big, specially designed transmission line that can handle the huge currents and voltages that the Marx bank can provide. The Blumlein uses water as the insulator, and also has magnetic insulation by the B-field generated by the current. The pulses can also be made shorter in the process. The spark-gap switches are perhaps the most important high-tech elements in the system.

Figure 10.53 is a diagram of the Z-machine at Sandia National Laboratories in Albuquerque, New Mexico, and Fig. 10.54 shows the actual machine. The capacitors surround the machine, and the cylinders are the Blumleins carrying the energy pulses into the vacuum chamber at the center. The capacitors in Z store 11.4 MJ,


Fig. 10.54 The Z-machine during a discharge (http://www.sandia.gov/media/). This publicity photo shows arcs which occur only during abnormal operation

of which 5 MJ is delivered by the Blumleins to the load. A 100-ns pulse can carry 20 MA of current and 60 TW of power. For military applications, the machine can produce 2 MJ of X-rays per pulse at a power of 200 TW.

For fusion applications, the Z-machine can produce heavy- or light-ion beams to transport to a capsule larger than those in laser fusion, because more energy is available here. The problem is in the transport. Ion beams are hard to keep in focus across the large distance to the pellet. When the beam becomes narrow near the target, its space charge will tend to expand it unless the charge is neutralized. The best way to do that is to send the ions through a preformed plasma, whose electrons can neutralize the space charge. This is a perfect setup for a beam-plasma instability. Ion-beam drivers have not been successfully developed. The plans now are to use the intense X-rays from pulsed power to fill a hohlraum. Even if this works, it cannot work at 10 Hz. Pulsed power is not a promising source for an inertial fusion driver.

The Fossil Footprint

Wind contributes less than 1% to the world’s energy. The planned buildup in wind power will have to use mostly fossil fuel energy and thus contribute to CO2 emissions. Fortunately, wind is one renewable energy source that can pay back this energy in months instead of years. Careful analyses of energy use in wind energy generation have been made by Vestas Wind Systems in 199720 and 2006.21 Vestas is a large Danish manufacturer that has installed 38,000 turbines, about half the world’s total. The bottom line is that the fossil energy used can be recovered in about four months for a 600-kW turbine in 1997 and in about 6.8 months for an offshore 3-MW turbine in 2006.

These so-called life-cycle analyses are interesting because they give a good idea of all that is involved in building a wind farm. We’ll take the 2006 study as an example. The study begins with the description of a fictitious power plant to be built.21 This plant will consist of 100 Vestas V90-3.0MW turbines built 14 km (9 miles) offshore in water 6.5-13.5 m (about 33 feet) deep. Each turbine will produce 14 GWh/year for a total of 1,400 GWh/year for the whole plant. That’s 1.4 billion kWh/year of electricity, compared to the 2,300 kWh that an average Danish household uses per year. That is enough power for 600,000 homes! It turns out that large plants require less energy per kilowatt produced than small ones.
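The household arithmetic above can be verified in a couple of lines; all numbers are taken from the study as quoted in the text.

```python
# Output of the fictitious 100-turbine plant versus Danish household use.
turbines = 100
per_turbine_kwh = 14e6          # 14 GWh/year per turbine
household_kwh = 2300            # average Danish household, kWh/year

plant_kwh = turbines * per_turbine_kwh     # 1.4e9 kWh/year
homes = plant_kwh / household_kwh          # ~608,000, i.e. "600,000 homes"
print(f"{plant_kwh:.1e} kWh/yr serves ~{homes:,.0f} homes")
```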

The energy used to build this plant is divided into four parts: (1) manufacture of the components, (2) transport, construction, and installation of the turbines, (3) their operation and maintenance, and (4) their dismantling and disposal at the end of life. The lifespan is assumed to be a conservative 20 years. The components consist of the foundations, the towers, the nacelles, the blades, the transformer station, the transmission lines up to the grid, and even the boat dock for offshore plants. The foundation, if offshore, would be a steel tube 30 m long, 4 m in diameter, and 40 cm thick. The transition piece to the tower is of concrete. The tower is made of steel, and all the energy used in making the steel from ore, fabricating the tower, and sandblasting and painting the surface is counted. The nacelles contain the gearbox, the generator, the transformer, a switchboard, a yaw system, a hydraulic system, and the cover. When these components are made by subcontractors, all the energy used in those factories is accounted for. The blades are made of 60% fiberglass and 40% epoxy, and the spinner on which they are mounted is plastic.

Transporting these components to the site by truck or boat uses gasoline or diesel, and the large cranes used for installation use more fuel. A transformer station for the offshore plant is to be built on three concrete piles 14 m above the water. The steel structure is 20 m x 28 m in size and 7 m high, with a helicopter platform on top. To carry the power to land, two 150-kV underwater cables are used up to a cable transition station 20 km away. From there, 34 more kilometers of dry cables carry the power to land. For maintenance, it is assumed that half the gearboxes and generators in the station will have to be replaced or repaired during the 20-year life cycle. Each turbine will be inspected four times a year, and the energy used to transport the inspectors by car, helicopter, and boat is also counted. A resource one usually does not know about is the use of sacrificial aluminum anodes for cathodic protection against the attack of parts by salt water. Since the aluminum cannot be reclaimed, the energy in mining is lost.

At the end of life, the turbines, towers, and foundations have to be dismantled and disposed of. Metals can be recycled, with 90% recovery and 10% going to landfill. Materials like fiberglass, plastics, and rubber can be burned, and the heat can be captured for use. Energy is actually recovered in the dismantling stage. When all this is added up, each turbine’s energy cost over 20 years is 8.1 million kWh, while it is producing 14.2 million kWh/year. Dividing these two numbers gives the 0.57-year or 6.8-month energy payback time quoted above. This is for an offshore plant. An onshore plant produces only half as much energy, but it also takes half as much energy to build and maintain. Amazingly, the energy recovery time is almost the same, at 6.6 months. As for the carbon footprint, such a plant generates about 5 g of CO2 for every kilowatt-hour (kWh) of electricity generated. By comparison, normal European power plants emit 548 g/kWh. Wind is indeed a very clean way to generate energy, but it has other problems.
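The payback figure quoted above comes directly from dividing the life-cycle energy cost by the annual output; a two-line check using the numbers in the text:

```python
# Energy payback time for one offshore 3-MW turbine (figures from the text).
energy_cost_kwh = 8.1e6        # energy used over the 20-year life
annual_output_kwh = 14.2e6     # energy produced per year

payback_years = energy_cost_kwh / annual_output_kwh   # ~0.57
print(f"{payback_years:.2f} years = {payback_years * 12:.1f} months")
```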

Biofuels

Instead of using electric cars, we can lower our dependence on foreign oil by converting plant matter into ethanol. About 10 billion gallons of ethanol were produced in the USA in 2009, a small but growing fraction of the 140 billion gallons of gasoline consumed. Ethanol burns 22% more cleanly than gasoline because it contains more oxygen, but it contains only two-thirds the energy per gallon. Most ethanol is sold as E10, a 10% mixture of ethanol with gasoline. Most cars can run on E10 without modification. E85, which is 85% ethanol, requires modified “flex-fuel” engines, which are installed in many trucks. In Brazil, the leader in biofuels, all cars are so modified because the country is completely independent of foreign oil, having started 25 years ago to produce biofuels from sugarcane.

In the USA at the present time, ethanol is produced from corn, not the stalks but the good part, the ears that we and the cows eat. This has played havoc with the prices of corn and soybeans. The corn is ground up, fermented, and then distilled to evaporate off the alcohol. The beer industry knows this well. What is left is still good for cattle feed. The first distillation yields only 8% ethanol, so it has to be repeated many times to get to 99.5% high-octane fuel. This takes energy, at present coming mainly from fossil fuels. More energy is used in planting and harvesting the corn, in making the fertilizers, and in trucking the corn and the fuel. Pipelines cannot be used for ethanol because it is soluble in water, and water in the pipes would cause them to rust. Gasoline does not have this problem. The use of fossil energy also entails GHG emission, negating the cleanliness of ethanol exhaust. There has been controversy as to whether making ethanol from corn actually provides more energy than it consumes, and whether there is any saving of GHG emissions. Early reports in the popular literature were rather negative toward ethanol.63, 64 Much of the pessimism came from papers by Pimentel [36], which indicated that the energy in corn ethanol is 30% less than the energy used to make and transport it. However, other data, mostly recent ones, show a net gain in energy, though much smaller for corn than for cellulosics, which we will describe shortly. Wang’s [37] life-cycle analysis shows that to produce one energy unit of corn ethanol, 0.7 energy units of fossil energy have to be used. This means that about 40% (=0.3/0.7) more energy comes out than goes in. When blended with gasoline, E85 of course has better energy savings than E10. As for GHG emissions, E85 saves 29% and E10 26%. Wang also gives a chart showing all the studies made so far on this topic. Twelve of these showed an energy gain, while nine showed an energy deficit. The breakeven is still marginal, but the saving grace is that only 15% of the fossil fuel used is in the form of oil, the scarce commodity that depends on the Middle East. The stance of the US government is that the energy balance is positive, but no firm numbers are given.65
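Here is the arithmetic behind the "about 40%" figure from Wang's analysis as quoted above; the 15% oil share is also from the text.

```python
# Net energy balance of corn ethanol (Wang's numbers as quoted in the text).
fossil_in = 0.7        # units of fossil energy per unit of ethanol energy out
ethanol_out = 1.0

net_gain = (ethanol_out - fossil_in) / fossil_in   # ~0.43, i.e. "about 40%" more out than in
oil_share = 0.15                                   # fraction of the fossil input that is oil
print(f"net gain {net_gain:.0%}; oil input {fossil_in * oil_share:.2f} units per unit of ethanol")
```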

How does Brazil do it? Because they have the climate and labor, they can grow sugar directly instead of extracting it from corn. Sugarcane yields twice as much ethanol per acre as corn. Biofuels from sugarcane give 370% more energy than is used in production.63 The stalk is 20% sugar, and the rest can be burned to generate electricity. One factory is self-supporting; it can generate enough electricity to run the whole operation. This huge plant produces 300 million liters of ethanol and 500,000 tons of sugar per year. Between the biofuel and the electricity, the plant produces eight times the energy that it uses.64 But there is a big problem: deforestation. An area the size of the state of Rhode Island was razed in just half of 2007 to plant sugarcane, and the acreage is to double in the next 10 years.66 Worldwide, deforestation accounts for 20% of carbon emissions, which is why Brazil ranks fourth in the world in carbon emissions.66 There is more bad news. Sugarcane has to be cut by hand, and it is hard work in the heat. It is so hard that many workers die at it. To make the cutting easier, the cane is burned every year even though it does not have to be. This releases large amounts of soot and strong GHGs to pollute the air. This sours the sugar business.

The USA cannot grow so much sugarcane, but it cannot grow enough corn either. If all the present corn and soybean crops were used to produce biofuels, there would be enough to supply only 12% of the gasoline and 6% of the diesel oil that we consume.64 But why use only the sweet part of the corn? We could also use the stalks. The stalks are made of cellulose, as are many other plants. Cellulosics are our best hope for a source of biofuel. Cellulose has a rigid molecular structure that stiffens plants and allows them to grow vertically. This is how corn can grow as high as an elephant’s eye. The very structure of cellulosics makes them very hard to break down into alcohol. At present, it takes 30% more energy to make the fuel than it gives back [37]. There is an intense effort to find more efficient ways to do this, including using high-speed computers to model the chemical reactions. The Obama administration in 2009 allotted $800 million to the Department of Energy’s biomass program, and $6 billion in loan guarantees to start biofuel projects beginning in 2011.63

Cellulosics can be found everywhere: in corn stalks, wood chips and sawdust, wheat straw, paper, leaves, and specially grown crops of grasses and other fast-growing plants. The Departments of Energy and Agriculture in the USA estimate that 1.3 billion tons of cellulosics can be gathered and grown each year without affecting food crops for either humans or animals. It is possible to produce ethanol, gasoline, diesel oil, and even jet fuel from cellulosics. The amount of cellulosics available equates to 100 billion gallons of gasoline equivalent per year, about half of our needs [38]. To do this, of course, is very hard.

There are three ways to make fuel from cellulosics [38]. At an extreme temperature of 700°C, steam or oxygen can turn the biomass into syngas, which is carbon monoxide and hydrogen. This is done under pressures of 20-79 atm in the presence of a special catalyst. Coal plants are already set up to produce syngas (see Chap. 2). But a reactor to do this with cellulosics would be so expensive that the capital cost would not be paid back for perhaps 30 years. A second method reproduces the conditions in the earth which made fossil fuels in the first place. At temperatures of 300-600°C in an oxygen-free environment, the biomass turns into a biocrude oil. This crude oil cannot be used directly because it is acidic and would ruin the engine. It would have to be converted to usable fuel. A new idea called catalytic fast pyrolysis is being investigated which would convert biomass into gasoline in a few seconds! Fast means that the biomass is heated to 500°C in one second. The molecules then fall into the pores of a catalyst which turns them into gasoline. The whole process takes 2-10 seconds.

The third, more promising way to treat cellulosics is slow and less dramatic, but it could move out of the laboratory into industry. In the ammonia fiber expansion process, the fiber is softened by pressure-cooking at 100°C in a strong ammonia solution. When the pressure is released, the ammonia evaporates and is captured and recycled. The cellulose is then fermented with enzymes into sugar with 90% yield. Distillation then yields ethanol. What is left is lignin, which burns well and can be used to boil water to generate electricity. Of course, burning generates CO2, but with biomass this CO2 was taken from the air while the biomass was growing, so there is no CO2 added to the atmosphere. What spoils this rosy picture? It’s the enzyme.

The bacteria that make the enzyme can be found in only a few places, the best of which is in the guts of termites! We know that termites eat wood. They have an enzyme in their stomachs that turns the wood into something digestible. The enzyme is not easy to reproduce, unlike the cultures that make yogurt. Presently, the enzymes cost $0.25 per gallon of ethanol produced.67 To mass-produce either the enzyme or the termites is unthinkable. People are searching for mushrooms in Guam and other organisms that could make such enzymes.63

If we can get over that hurdle, we can think about switchgrass, which you may have heard of. A fast-growing source of cellulose, switchgrass needs no fertilizer and little water. It grows in places not suitable for other activity. Its roots grow 8-10 feet down, stabilizing the soil and also drawing CO2 into the ground.68 It grows for 5-10 years before reseeding. It has four times the energy potential of corn. The US Department of Energy’s goal is to make cellulosic ethanol cost-competitive with gasoline by 2012. The 100 billion gallons of gasoline equivalent per year quoted above would also lower our GHG emissions by 22% relative to our 2002 emissions. Even if switchgrass is grown outside of farmland, it will still take a lot of land. To supply all the transportation fuel for the USA would take 780 billion liters of ethanol per year.69 At the rate of 4,700 L of ethanol per year per hectare, it would take 170 million hectares or 650,000 square miles. Only Alaska, more than twice the size of Texas, has that much area.
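The land-area claim can be checked with the numbers just quoted:

```python
# Land needed to replace all US transportation fuel with switchgrass ethanol.
liters_needed = 780e9          # L of ethanol per year (from the text)
yield_l_per_ha = 4700          # L per hectare per year (from the text)

hectares = liters_needed / yield_l_per_ha   # ~1.7e8 ha
sq_km = hectares * 0.01                     # 1 ha = 0.01 km^2
sq_miles = sq_km / 2.59                     # ~650,000 square miles
print(f"{hectares:.2e} ha = {sq_miles:,.0f} square miles")
```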

Fortunately, new ideas are coming from people thinking out of the box. James Liao [39] has found a way to make more complex alcohols which contain more energy than does ethanol and, moreover, are miscible with gasoline but not water. Such an alcohol is isobutanol. The enzymes that ferment sugar into isobutanol are more common than those in termites: they are found in E. coli. Yes, this is the same bug that causes food poisoning, but its use can be controlled, and it is surely not hard to reproduce. The problem is not entirely solved because biomass has first to be converted to sugar before the process can start. To get around this, Liao has engineered a cyanobacterium [40] that can turn CO2 and H2O into a biofuel! Plants do this all the time by photosynthesis, but the result is cellulose. A bacterium has been engineered that can photosynthesize isobutyraldehyde, which boils at a low temperature so that it can be separated from water. That chemical can then be easily converted into isobutanol. To be competitive with current production of biodiesel from algae, the rate has to exceed 3,420 µg/L/h. The best achieved so far is 2,500, which is promising and can be improved with further research [1]. However, making diesel from algae is very slow and space consuming: only 100,000 L (26,000 gallons) per hectare per year. Two companies, LS9 and Amyris, both in California, are involved in this development.70 It remains to be seen if this process is economically feasible.

To make transportable fuel, it would seem simpler to make electricity in fission and fusion power plants and develop smaller and lighter batteries for electric cars. Government policy, however, has to take economic stimulus into account. Farmers in Iowa and Nebraska have to be kept happy. The subsidies for ethanol production in Midwest states resulted from strong lobbies. It would seem that our corn is stored not in silos but in pork barrels.

The Rayleigh-Taylor Instability

When you turn a bottle of mineral water upside down, the water falls out even though the atmospheric pressure of 15 lbs./sq. in. is certainly strong enough to support the water. This happens because of an instability called the Rayleigh-Taylor instability, which is illustrated in Fig. 5.5. If the bottom surface of the water remained perfectly flat, it would be held by the atmospheric pressure. However, if there is a small ripple on the surface, there is slightly less water pressing on the top of the ripple than elsewhere, and the balance between the weight of the water above the ripple and the atmospheric pressure is upset. The larger the ripple grows, the greater is the imbalance, and the ripple grows faster. Eventually, it grows into a large bubble which rises to the top, allowing water to flow out under it. If you hold the end of a straw filled with water, the water does not fall out because surface tension prevents the interface from deforming like that. A similar instability occurs in a plasma held by magnetic pressure, as we’ll soon see.
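For readers who want the standard quantitative result (it is not derived in the text): a ripple of wavenumber k on the interface between a heavy fluid of density ρ_h resting on a light fluid of density ρ_l grows exponentially at the classic Rayleigh-Taylor rate

\[
\gamma = \sqrt{A\,g\,k}, \qquad A = \frac{\rho_h - \rho_l}{\rho_h + \rho_l},
\]

so finer ripples (larger k) grow faster, which is why even tiny imperfections on the surface matter.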

Instabilities occur because of positive feedback. There are many examples of this in real life. Microphone screech occurs because the loudspeaker feeds into the microphone the tone to which it is most sensitive. The audio system amplifies that tone, making it louder in the speaker, which then drives the microphone harder. Forest fires are instabilities. A small fire dries the wood around it so that it catches fire more readily. The larger fire then dries a larger amount of wood near it, which then starts to burn, and the instability spreads like… a wildfire. Stock market instabilities can go both ways, as a rise or fall in the market induces more people to buy or sell. A more subtle instability creates snow cups when a field of snow melts or sublimes, as can be seen in Fig. 5.6.

If the sun shines evenly on a perfectly flat surface of snow, it should melt evenly, retaining a smooth surface. It never does, because there are ripples in the snow. A depression in the snow will cause some sunlight to scatter onto its walls, heating them before reflecting out into space. The deeper the hole is, the more light will deposit energy into it to hasten the melting. A snow cup can be started by a twig or pebble, which, being dark, will absorb more heat. But instabilities will always start and grow because there is always some imperfection or noise in the system. It just takes longer if the system starts out being almost perfect.



Fig. 5.6 Snow cups: an instability in melting snow

The main obstacle to making a leak-proof magnetic bottle is instability. There are many instabilities, and the first step is to know your enemy. This first instability, however, was known from the beginning because it is similar to the well-known Rayleigh-Taylor instability in hydrodynamics. A plasma weighs almost nothing, so the instability is driven not by gravity but by pressure. To see how this works, we have to extend the concept of E x B drifts to drifts caused by other forces. Or, you can skip the next two diagrams and move on to see how this instability is stabilized.

Figure 5.7a is the same as Fig. 5.4 except that the small gyrations have been suppressed, and only the guiding center drift due to an electric field is shown. In Parts (b) and (c), the E-field has been turned to different directions, and the drifts have rotated correspondingly. In Part (c), the E-field applies a downward force on the ions. If we apply to the ions another type of downward force, such as a pressure force, the ions will also drift to the left, as shown in Fig. 5.7d. Note that the electrons and ions now drift in opposite directions. The reason that the electric-field drifts are the same for both species is that both the electric force and the Lorentz force of the magnetic field depend on the sign of the charge, and these two dependences cancel each other. The pressure force, however, is in the same direction regardless of charge, so this cancelation does not take place, and the pressure drift depends on the sign of the charge.
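In symbols, the two drifts described here are the standard guiding-center results (not spelled out in the text):

\[
\mathbf{v}_E = \frac{\mathbf{E}\times\mathbf{B}}{B^2}, \qquad
\mathbf{v}_F = \frac{\mathbf{F}\times\mathbf{B}}{qB^2}.
\]

In v_E the charge q cancels because the electric force is itself qE, so ions and electrons drift together; in v_F the force F (for example, a pressure force) does not depend on q, so the drift reverses sign between ions and electrons.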

Figure 5.8a shows a part of the plasma boundary when it is perfectly smooth, like the first drawing of a water bottle in Fig. 5.5. The upper part is plasma, and the lower part is vacuum, containing only the magnetic field. The plasma pressure is held back by the magnetic pressure, just as the water in Fig. 5.5 is supported by the atmospheric pressure. The force that now tries to push the denser fluid into the less dense fluid is now the plasma pressure rather than gravity. The pressure force, according to Fig. 5.7, causes ions to drift to the left and electrons to the right. As long as the plasma surface is straight and smooth, these drifts are perfectly harmless, and the magnetic field prevents the plasma from leaking out. Now suppose there is a small ripple in the surface, like the one in the Rayleigh-Taylor instability for water.

Fig. 5.7 Guiding center drifts caused by electric fields (top and bottom left) and by pressure forces (bottom right). In all cases, the magnetic field is out of the page


Fig. 5.8 Development of a Rayleigh-Taylor instability in a plasma

What happens is shown in Fig. 5.8b. The ions, drifting to the left, pile up on the right side of the ripple, and the electrons, drifting right, pile up on the left side. These charges create an E-field pointing to the left, as shown in Fig. 5.8b. From Fig. 5.7b, we see that this E-field causes both ions and electrons to drift upwards, thus enhancing the ripple. The ripple or bubble then grows unstably, with the magnetic field forcing its way into the plasma, ejecting the plasma outwards in a way reminiscent of Fig. 5.5. The plasma escapes from the magnetic trap by organizing itself to create electric fields which can push it out! Since in the long run the magnetic field has basically changed its place with the plasma, with the field on the inside and the plasma on the outside, this instability is also called an interchange instability.

The Troyon Limit

This is a limit on the plasma pressure that a tokamak’s magnetic field can hold. Unlike the Greenwald limit, this criterion is rigorously calculated from ideal MHD (MagnetoHydroDynamics) theory. The quantity that measures the balance between the pressure and magnetic forces is called β (beta). Since β is used in many scientific disciplines, especially in medicine, I had refrained from defining it until it was necessary. It is now necessary. Beta is the ratio between plasma pressure and magnetic pressure:

β = (plasma pressure) / (magnetic pressure)

The plasma’s pressure is the product of its density and its temperature, and the magnetic pressure is proportional to the square of the field strength B. These quantities are not constant over a cross section of the plasma, so a reasonable definition would be to take the average pressure and divide it by the average magnetic field before the plasma is created. The last proviso is needed because the plasma is diamagnetic, so its very presence decreases the B-field inside it. Since the B-field is the most expensive component, β is a measure of the cost effectiveness of a tokamak. It has a value below 10%, typically 4-5%.

The value of β has been shown to depend on the toroidal current I divided by the plasma radius a and the magnetic field strength B. Figure 8.19 shows how data from different tokamaks all fall on the same line if plotted against I / aB. It is convenient, then, to introduce a normalized β, called βN, which would apply to all tokamaks, regardless of their values of I, a, and B:

βN = (β × a × B) / I

The Troyon limit (Troyon et al. [30])9 is when βN is about 3.5. A numerical formula is given in footnote 10. Figure 8.17 shows how well the experiments in different tokamaks obey the Troyon limit, above which disruptions are likely to occur.
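For reference, the relations just described can be written compactly. The magnetic-pressure form B²/2μ₀ and the conventional units (I in MA, a in m, B in T, β in percent) are standard usage rather than something given explicitly in the text:

\[
\beta = \frac{\text{plasma pressure}}{B^2/2\mu_0}, \qquad
\beta_N = \beta\,\frac{aB}{I}, \qquad
\beta_{\max}(\%) \approx 3.5\,\frac{I\,[\text{MA}]}{a\,[\text{m}]\,B\,[\text{T}]}.
\]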

Benefits of Nonaxisymmetry

Tokamak plasmas are basically symmetric around the major axis. They may have D-shaped rather than circular cross sections, but they still look the same from any direction. The figures here show that stellarators are far from symmetric. Instead of using the plasma current to shape the plasma, external coils are used, and these can produce shapes that cannot be formed by plasma current alone in a self-organized tokamak. It is precisely the lack of self-organization that gives stellarators their advantage [11]. Nonaxisymmetric shaping can be used to improve plasma stability, control ELMs, and eliminate disruptions. Indeed, the ELM coils being added to ITER to suppress ELM instabilities do so by spoiling the axisymmetry. In DEMO, the bootstrap current is relied on to supply at least 80% of the plasma current. This is extremely difficult to produce and control when self-organization is strong. In stellarators, a large plasma current is not at all necessary, since the rotational transform is generated by external coils.

In addition to their suitability for steady-state operation, stellarators have some unexpected advantages as reactors. Very small errors in the magnetic field (0.01%) have been found to cause problems with plasma confinement. Originally, stellarators’ problems were believed to be due to magnetic errors, but it has been found that once axisymmetry is broken, the wild shapes shown above are actually less sensitive to magnetic errors. Data from all stellarators have been found to follow a scaling law and fall on the same curve, as shown in Fig. 8.21 for tokamaks, so that extrapolation can be used to design larger machines. In addition, higher density and beta values have been achieved in stellarators. A purported benefit [11] of higher density is the formation of a MARFE (Multifaceted Asymmetric Radiation From the Edge), a “detached” layer which forms when plasma recombines before reaching the divertor. The energy is then radiated away before it reaches the divertor, sparing the divertor the large heat load. The energy, however, has then to be taken up by the first wall. The advantages of stellarators come at a price: the difficulty of making and assembling the weirdly shaped coils and vacuum chambers, but this technology has already been demonstrated.

Is Large-Scale Solar Power Really Feasible?

Proponents of solar power have calculated what it would take for a sizable fraction of the world’s energy to be provided by sunlight. Jacobson and Delucchi [5] estimated that the world will need 16.9 TW (terawatts or billions of kilowatts) of energy by 2030. If we were to use only Water, Wind, and Sun power, only 11.5 TW would be needed, since these sources can generate electricity directly, without going through a thermal cycle. This amount can be generated by WWS in the proportion shown in Fig. 3.29. Water energy (1.1 TW) is to come from hydroelectric and geothermal plants, and from tidal turbines yet to be developed. Wind power (5.8 TW) will come from 3.8 million wind turbines and from machines driven by ocean waves, which arise from wind. Solar power (4.6 TW) will require 89,000 300-MW power plants and 1.7 billion rooftop collectors. These three sources would have to work together to cover the daily and annual fluctuations. More than 99% of these numerous installations have still to be built.

Fig. 3.29 Power that must be provided by water, wind, and solar sources by 2030 to supply the entire world’s needs

The solar part of this has been evaluated in great detail by Fthenakis et al. [6]. They estimate that plants located in the Sahara, Gobi, or southwestern US deserts can produce photovoltaic electricity at $4/W and $0.16/kWh. This includes the entire plant, not just the panels themselves. Since residential electricity costs closer to $0.12/kWh than the average of $0.10/kWh, and since there are rebates, the cost of solar is already competitive with standard sources. The authors point out that electricity storage and transmission have still to be developed, and this has to be done using conventional fuels, since solar energy is still small. However, the energy payback time is of the order of two years (as will be shown here later); and once solar grows to 10% or more of total energy, further development could be done without the use of fossil fuels. These studies seem to be realistic, since the authors point out that there are many problems that still need to be treated in detail: the availability of rare materials, the sites for compressed-air storage, the transmission problem, the commercial problems in scaling up, and ecological damage to land and wildlife. If 10% solar cell efficiency is achieved and 2.5 times more land area than cell area is required, then 42,000 km2 of desert area could supply 100% of the electricity for the USA (if it can be stored and transported). This seems like a large area, but it is less than half the area of the lakes produced by dams for hydro in the USA, and solar produces 12 times the energy. Lakes like Lake Mead have drastically changed the landscape. The change may have been welcomed by boaters, but not by the fish.
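A rough sanity check of the 42,000 km² figure is possible; the desert insolation and the comparison with total US electricity use below are my own assumed round numbers for illustration, not values from the study.

```python
# Rough check: can 42,000 km^2 of desert supply US electricity at 10% efficiency?
land_km2 = 42_000
cell_fraction = 1 / 2.5        # text: land area is 2.5x the cell area
efficiency = 0.10              # text: 10% cell efficiency
insolation_kwh_m2 = 2000       # assumed annual desert insolation, kWh/m^2/yr

cell_area_m2 = land_km2 * 1e6 * cell_fraction
output_kwh = cell_area_m2 * insolation_kwh_m2 * efficiency
print(f"~{output_kwh:.1e} kWh/yr")   # a few trillion kWh/yr, the order of US electricity use
```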

At this point, it is becoming clear that WWS (water, wind, and solar) sources have some large problems to overcome: storage of intermittent energy; transmission over large distances; use of large land areas; ecological damage to land and wildlife; unsightly encroachment on the landscape and seascape; and legal, political, and environmental objections to these intrusions. Overcoming these obstacles may take longer than developing compact power centers, like nuclear fusion, which avoids these problems. Replacing the power core of a coal or nuclear plant with a fusion reactor would retain the electrical generators, transmission lines, and real estate already in place. There would be no noticeable difference to the public except that all CO2 emissions and fuel costs would be eliminated. The great advantage of WWS, however, is that feasibility is already proven, and further improvements in technology can be tested on a small scale, privately financed by industry. By contrast, each step in the development of fusion is so costly that the expense is best shared among nations.

Future Reactors

Generation III reactors will have more efficient use of fuel and better safety features but no radically new designs. Advanced Boiling Water Reactors, Advanced CANDU heavy-water Reactors, and EPRs will be added to the list of acronyms. Generation IV reactors will be of two main types: breeder reactors, either liquid-metal or gas cooled (discussed above); and very high temperature reactors (VHTRs) [44]. Of these, the most interesting is the pebble-bed modular reactor (PBMR), shown in Fig. 3.60.

The “pebbles” are tennis-ball size spheres containing both the fuel and the moderator. The small grain of fuel can be any fissionable material such as enriched uranium, plutonium, or MOX, the mixed oxides of both. The fuel is surrounded by a layer of porous graphite to absorb gaseous products of the reaction. This is covered by a thin layer of silicon carbide, which is an impenetrable barrier that can take high temperatures. The outer layer of the fuel grain is pyrolytic carbon, which


Fig. 3.60 Diagram of a pebble and a pebble-bed reactor vessel (European Nuclear Society: http://www.euronuclear.org/info/encyclopedia/p/pebble.htm)

is dense and can take extremely high temperatures. These tiny fuel grains are dispersed in the graphite moderator, which forms the bulk of the pebble. The reactor core contains some 360,000 pebbles, enough to make a critical mass with the spacing fixed by the spherical pebbles. Helium is circulated through the spaces between the spheres for cooling, and the helium then carries the reaction energy to a heat exchanger.

The design has built-in safety features. The reaction products are contained within the fuel grains and the pebbles. In fact, depleted pebbles can be their own waste containers. The helium is not radioactive even if it leaks out. The reactor can operate at 1,000°C to raise the thermal efficiency to 50%. If the coolant fails, the reaction cannot run away because the U238 part of the fuel absorbs more neutrons at higher temperature, thus slowing down the reaction if it gets hot. The pebbles might reach a temperature of 1,600°C, but they are still stable at that temperature, and the reactor core will just stay that way until cooling is restored. The pebbles can be dropped in at the top and removed from the bottom of the reactor core. This allows the pebbles to be periodically examined and removed to storage if they have been used up.

Critics of PBMRs cite the possibility that the graphite would catch fire if it contacts air or water at these extreme temperatures. PBMRs are being developed in Germany, the USA, the Netherlands, and China. The automatic safety mechanisms have been tested on a small scale.

Magnetic Islands

Figure 6.1 of the last chapter showed how a plasma current circulating around a tokamak generates a poloidal magnetic field to give a twist to the field lines. This twist, or helicity, is necessary to average out the vertical drifts that the particles have in a torus. These drifts arise when a straight cylinder is bent into a circle to form a torus. We then defined a quantity q, the quality factor, which tells how much twist there is; actually, how little twist there is. Large q means the twist is gentle, and small q means that the twist is tight. It is called the quality factor because the plasma is stable if q is larger than 1 and unstable if q is smaller than 1, so larger q gives better stability. You may recall that the culprit was the kink instability, and the boundary at q = 1 was called the Kruskal-Shafranov limit. If q = 1, a field line goes around the torus the short way (the poloidal direction) exactly once after it goes once around the long way (the toroidal direction). It then joins on to its own tail. If q = 2, the twist is smaller, so it takes two trips the long way before the field line joins onto itself, and so on.

In general, q is not a rational number like 1, 2, 3, 3/2, 4/3, and so forth. Except in such cases, a field line never comes back to itself; rather, after numerous turns, it traces out a magnetic surface. The field lines of neighboring surfaces cannot be parallel to one another either, because the magnetic field has to be sheared. Shear has a stabilizing effect on almost all instabilities. That means that q has to vary with radius within a cross section of the torus, so that the amount of twist is different on each magnetic surface. Scientifically, we say that q is a function of minor radius r, written as q(r). By now you may have guessed that something special happens when q is a rational number, like 2. At the radius where q(r) = 2, a field line joins onto itself after traversing the torus twice the long way. Remember that the tokamak current (the one that creates the helicity) is driven by an electric field (E-field). How this is done is shown later in this chapter. It is easier for the E-field to drive a current if the field lines are closed, since the electrons can then run around and around on the same field line. The current can break up into filaments. Each filament acts like its own little tokamak with its own magnetic surfaces, and the magnetic surface at q=2 breaks up into two magnetic islands. Other chains of islands could form at the q=3 surface, and so on. Between rational surfaces, the filamentation does not occur, and there are no islands. Figure 7.1 shows a computed picture of islands at the q=3/2 surface.1 Since the rotational transform is 1/q, it has the value of 2/3 here. That means that a field line inside the top island, after going around the whole torus once, will end up in the island at the lower right, say, two-thirds of the way around the cross section. After the next revolution, it will go another two-thirds of the way around, ending up in the island at the lower left. After the third traversal, it will be back in the top island, but not exactly where it started. It will be on the same small


Fig. 7.1 Magnetic islands in a tokamak at the q = 3/2 surface

magnetic surface inside the island, but at a different point. It is only after many, many traversals that the island is traced out. Our previous naive picture of nested magnetic surfaces has taken on a fantastic character!
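A small sketch makes this island-hopping concrete. With q = 3/2 the rotational transform is 2/3, so each trip the long way around advances a field line two-thirds of the way around the cross section; after three trips it is back at the island it started in.

```python
# Following a field line on the q = 3/2 surface, island by island.
from fractions import Fraction

q = Fraction(3, 2)
iota = 1 / q                   # rotational transform = 2/3
theta = Fraction(0)            # poloidal position, as a fraction of a full turn

for transit in range(1, 7):
    theta = (theta + iota) % 1
    print(f"after toroidal transit {transit}: {theta} of the way around poloidally")
# Positions cycle 2/3, 1/3, 0, 2/3, ... so the line visits the three islands in turn.
```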

Ions and electrons can cross an island in between collisions, and since the island width is much larger than a Larmor diameter, the escape rate is faster than classical, just as in banana diffusion. Fortunately, not all island chains are large, and higher fractions like 5/6 would not yield noticeable islands at all.

Exactly where these island chains lie depends on how much current there is at each radius. The amount of current depends not only on the strength of the E-field, but also on the temperature of the electrons. The higher the temperature, the lower the resistivity, and the higher the current. Since the plasma tends to be hotter at the center, the plasma current generally has a peak at the center. Figure 7.2 gives an example of where island chains can occur, in principle. The curve shows how q typically varies with distance from the center of the plasma’s cross section. In this case, the rational surfaces q = 1, 2, and 3 occur at radii of about 3, 7, and 9 cm, respectively, and there are no places where q is 4 or higher. There is a special region where q is less than 1.

The shape of the curve q(r) is determined by the distribution of the plasma current. Figure 7.3 gives examples of different current profiles J(r) and the q(r) curves that they produce. The uppermost curve, corresponding to the most peaked current, would have more rational q surfaces. Tokamak operators have some control over J(r), since there are various ways to heat the plasma. If the electron temperature changes, however, J(r) will change, and so will the magnetic topology. Where the q curves cross the line q = 1 is of utmost importance, as will be explained next.
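The connection between the current profile and q(r) can be made explicit with the standard large-aspect-ratio approximation (not quoted in the text): the poloidal field at radius r is set by the current enclosed inside r, so

\[
q(r) \approx \frac{r\,B_\phi}{R\,B_\theta(r)}, \qquad
B_\theta(r) = \frac{\mu_0\,I_{\mathrm{enc}}(r)}{2\pi r}
\quad\Longrightarrow\quad
q(r) \approx \frac{2\pi r^2 B_\phi}{\mu_0 R\,I_{\mathrm{enc}}(r)}.
\]

A more peaked current concentrates I_enc(r) near the axis, which lowers q in the core and shifts the radii where q(r) crosses the low rational values.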


Islands were first observed experimentally by Sauthoff et al. [1] in the famous “sombrero hat” experiment. Electrons emit a small X-ray signal when they collide with ions. By collecting these signals with detectors surrounding the plasma, the plasma density distribution can be reconstructed by computer the same way as in a medical CAT scan. Figure 7.4 shows a typical result at one instant of time.

The contours of constant density in Fig. 7.4a show a q = 2 island structure. A 3D plot of this in Fig. 7.4b resembles a sombrero.