When neutral-beam heating was installed and turned on in the ASDEX tokamak in Garching, Germany [9] in 1982, Mother Nature came up with a major surprise that no one could have predicted. When the heating power was increased slightly from 1.6 to 1.9 MW, the plasma snapped into a new mode. Its temperature went up; its density went up; and the confinement times of both the plasma energy and the plasma particles went up, as dramatically shown by a sudden drop in the measured flux of escaping ions. It was as if a wall or dam, called a transport barrier, had formed, as depicted in the cartoon of Fig. 7.24a. The plasma would diffuse as it normally does up to this barrier, and then it would be held up by the barrier and leak out slowly in small bursts. This high-confinement mode, called the H-mode, came about from two innovations: the increase in heating power possible using neutral beams and the use of a single divertor of the type shown in Fig. 7.13. When the
neutral beams are turned on below 1.6 MW, the confinement time actually gets a little worse, because the beam disturbs the plasma equilibrium that was set up by ohmic heating. This is called the low-confinement L-mode. Once the power is increased beyond the H-mode threshold, the L-to-H transition occurs and a pressure pedestal forms.
Figure 7.25 shows what is meant by the pedestal. This is a graph of the plasma pressure as it varies across the minor radius; that is, from the center of the tokamak's cross section to the outside. Up to the pedestal, the plasma density and temperature (whose product is the pressure) fall gently from their maxima as in normal diffusion; but they do not fall all the way to zero. They hang up at a high value, so that the average pressure inside is higher than in the L-mode. At the pedestal, the pressure falls rapidly to nearly zero as the plasma is drained off to the
divertor, where it recombines into gas and is pumped out. What happens inside the barrier is illustrated in Fig. 7.24b. Large electric fields in the direction of the minor radius are set up, and these cause perpendicular E × B drifts in the toroidal direction, as shown in Fig. 5.6. These drifts are not uniform but are highly sheared. Apparently, this sheared motion stabilizes the microinstabilities and slows the diffusion below the instability-controlled rate that prevails in the interior. Note that this is electric shear stabilization, as opposed to the magnetic shear stabilization used in elementary toroidal confinement devices.
The H-mode barrier layer is very thin, about 1-2 cm in a large tokamak with meter-sized cross sections. The H-mode is not a peculiarity of the tokamak, since it has been seen in stellarators and other toroidal devices. Nor is it a phenomenon tied to neutral-beam heating. It seems to have only two requirements: (1) that the input power be high enough and (2) that the plasma be led out by a divertor into an external chamber rather than be allowed to strike the wall. The latter requirement is due to the fact that impurity atoms or neutral atoms prevent the pedestal from forming. In the H-mode, the confinement time improves by about a factor of 2 (see Fig. 7.26a), and the plasma pressure by about 60%. A factor of 2 does not seem a lot, considering that confinement times have increased a million-fold since fusion research began; but we are now talking about a machine that is almost ready to be designed into a reactor. A factor of 2 can turn a 1-GW reactor into a 2-GW reactor, serving 1,000,000 homes instead of 500,000. All current designs for fusion reactors assume H-mode operation. The power produced by a reactor depends critically on the density and temperature of the pedestal.
Fig. 7.26 (a) The H-mode confinement enhancement factor vs. ion-electron temperature ratio, as measured in four large tokamaks (adapted from A.C.C. Sips, Paper IT/P3-36, 20th IAEA Fusion Energy Conference, Vilamoura, Portugal, 2004). (b) Scaling law for H-mode threshold power vs. plasma density, toroidal magnetic field, and plasma surface area [10]
These questions have occupied the thoughts of a large fraction of fusion physicists for over two decades. Sheared flows have good and bad effects. On the one hand, they can cause an instability, called the Kelvin-Helmholtz instability, which is well known in hydrodynamics. It is the instability that causes wind to ripple the surface of water. On the other hand, shear can quench an instability or at least limit its growth. In hydrodynamics, there is a simple theorem that tells what shape of shear flow is stable or unstable. In plasma physics, no such simple result is possible because so many kinds of waves can exist in a magnetized plasma. It is also difficult to make measurements in such a thin layer. The physics of the transport barrier, "edge physics," is an ongoing study. The Transport Task Force, an annual conference devoted to this topic, has been meeting since 1988. More important, however, is to know how to turn on the H-mode. The threshold power depends on magnetic field, plasma density, and machine size. Since the H-mode threshold has been observed in so many machines, it was possible to formulate a scaling law that tells how the threshold depends on these various parameters. This is shown in Fig. 7.26b.
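Such threshold scaling laws are usually quoted as a power law in density, field, and surface area. The sketch below uses the coefficient and exponents of one commonly cited multi-machine fit (an assumption on our part; the version plotted in Fig. 7.26b and cited as [10] may use slightly different values):

```python
# Sketch of an H-mode threshold scaling law of the form
#   P_thr [MW] ~ C * n^a * B^b * S^c
# Coefficient and exponents follow a commonly quoted multi-machine fit;
# treat them as illustrative, not as the exact values of reference [10].
def h_mode_threshold_mw(n_e20, b_tor_t, surface_m2):
    """Threshold heating power in MW.
    n_e20:      line-averaged density in 10^20 m^-3
    b_tor_t:    toroidal magnetic field in tesla
    surface_m2: plasma surface area in m^2
    """
    return 0.0488 * n_e20**0.717 * b_tor_t**0.803 * surface_m2**0.941

# ITER-like inputs (illustrative assumptions, not official design values)
p = h_mode_threshold_mw(n_e20=0.5, b_tor_t=5.3, surface_m2=680.0)
print(f"L-H threshold ~ {p:.0f} MW")  # tens of megawatts for a large machine
```

The strong dependence on surface area is why a reactor-scale machine needs tens of megawatts of heating just to reach the H-mode, while a small tokamak can cross the threshold with a fraction of a megawatt.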
The H-mode has not only improved our ability to confine plasma; it has also advanced our knowledge of plasma physics. Even the way in which the plasma's energy escapes from the barrier has turned out to be a considerable problem. It escapes by means of yet another instability, called an ELM (edge-localized mode). This is described in the next chapter.
The ITER machine is an experiment large enough to require an international consortium. Its mission is to achieve a burning plasma, one in which the alpha particles produced by the D-T reaction can maintain the plasma’s temperature without external heating. At this stage of construction, not all physics problems have been solved, though they may be solved by the time construction is finished. We hope that these problems will be solved in time for DEMO. However, the physics does not have to be completely understood for something to work. Books have been written on the physics of tennis, baseball, sailing, and even pizza. Sometimes, it is easier just to get on with it.
Fig. 9.26
The ARIES program in the USA is the leading group in designing fusion reactors. Originally started by Robert W. Conn in the 1980s at the University of Wisconsin and the University of California (UC) Los Angeles, it is now headed by Farrokh Najmabadi at UC San Diego. Throughout the years, new ARIES designs have been made as new physics has been discovered. The designs are not only for tokamaks; stellarators and laser-fusion reactors have also been covered. The latest designs, ARIES-AT for advanced tokamaks and ARIES-ST for spherical tokamaks, inspired the FDF proposals described above. Practical considerations such as public acceptance, reliability as a power source, and economic competitiveness pervade the studies. The designs are very detailed. They optimize the physics parameters, such as the shape of the plasma and the neutron wall loading. They also optimize the engineering details, such as how to replace blankets and how to join conductors to make the joints more radiation resistant. As new physics and new technology became available, the reactors ARIES I, II, … to ARIES-RS (reversed shear) and
ARIES-AT (advanced tokamak) evolved to become smaller and cheaper. This is shown in Fig. 9.35. We see that as fusion physics advanced from left to right in each group of bars, the size of the tokamak, the magnetic field, and the current-drive power could be decreased while increasing the neutron production. This is due to the great increase in plasma beta that the designers thought would be possible. The recirculating power fraction is the fraction of the generated power used to run the power plant itself; the rest can be sold. It dropped from 29 to 14%. The thermal efficiency in the latest design breaks the ~40% barrier of conventional steam cycles by the use of a Brayton cycle. Finally, we see that the COE is expected to be halved, from about 100 to 50 mill/kWh (10 to 5 US cents per kWh), with advanced tokamaks.
ARIES-AT is shown in Fig. 9.36. Unlike existing tokamaks, this reactor design has space at the center for remote maintenance and replacement of parts. The philosophy in reactor design is to assume that the physics and technology advancements that are in sight will actually be developed and, on that basis, optimize a reactor that will be acceptable to industry and the public. It is not known whether high-temperature superconductors will be available on a large scale, but this would simplify the reactor. The blankets will be of the DCLL variety, and it is predicted that the Pb-Li can reach 1,100°C without heating the SiC walls above 1,000°C. This high temperature is the key to the high thermal efficiency. For easier maintenance and better availability, the blankets are made in three layers, two of which will last the life
of the reactor. Only the first layer, along with the divertor, has to be changed out every five years. Sectors are removed horizontally and transported by rail in a hot corridor to a hot cell for processing. Shutdowns are estimated to take four weeks.
Turbocharging and supercharging in automobiles are terms that are well known to the public. Airplane engines are turbocharged. Modern power plants use thermodynamic cycles that have higher efficiency than the classic steam cycle. The ARIES-AT reactor will use one of these, called a Brayton cycle. The hot helium from the tokamak blanket is passed through a heat exchanger to heat the helium that goes to the electricity-generating turbines. The two helium loops are isolated from each other because the tokamak helium can contain contaminants like tritium. The turbine also runs with cooler helium at a different flow rate. The Brayton cycle precompresses the helium three times before it goes into the helium turbines. The heat of the helium coming out of the turbines is recovered in coolers that cool the helium before it is compressed. It is this system that achieves the 59% thermal efficiency of the ARIES-AT design.
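Why heat recovery matters can be sketched with the textbook formula for an ideal recuperated (regenerative) Brayton cycle. The temperatures and pressure ratio below are illustrative assumptions, not ARIES-AT design values:

```python
# Ideal regenerative (recuperated) Brayton cycle efficiency:
#   eta = 1 - (T_min / T_max) * r_p**((gamma - 1) / gamma)
# Helium is monatomic, so gamma = 5/3. All numbers are illustrative.
gamma = 5.0 / 3.0
t_min = 320.0    # K, compressor inlet after cooling (assumed)
t_max = 1300.0   # K, turbine inlet (assumed)
r_p = 3.0        # overall pressure ratio (assumed)

eta = 1.0 - (t_min / t_max) * r_p**((gamma - 1.0) / gamma)
print(f"ideal recuperated efficiency ~ {eta:.0%}")  # around 60%
```

With intercooled multi-stage compression and real component losses, this idealized 60-odd percent comes down to the neighborhood of the 59% quoted for ARIES-AT; a simple (non-recuperated) Brayton cycle at the same pressure ratio would do far worse.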
ARIES-AT will produce 1,755 MW of fusion power, 1,897 MW of thermal power, and 1,136 MW of electricity. The radioactive waste generated will be only 30 m³ per year, or 1,270 m³ after 50 years. The plant will run for 40 of those years if availability is 80%. Ninety percent of this waste is of low-grade radioactivity; the rest needs to be stored for only 100 years. No provisions for public evacuation are necessary, and workers are not exposed to risks higher than in other power plants. The COE from ARIES-AT is compared with other sources in Fig. 9.37. We see that electricity from fusion is not expected to be extravagant.
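These quoted numbers are mutually consistent, as a quick check shows:

```python
# Consistency check of the quoted ARIES-AT figures.
p_thermal_mw = 1897.0   # thermal power (includes blanket energy multiplication)
p_electric_mw = 1136.0  # electrical output

thermal_efficiency = p_electric_mw / p_thermal_mw
print(f"gross thermal efficiency ~ {thermal_efficiency:.0%}")  # ~59-60%

# 30 m^3/yr of waste over 40 operating years (50 calendar years at
# 80% availability) is ~1,200 m^3, close to the quoted 1,270 m^3 total.
waste_m3 = 30.0 * 50 * 0.80
print(f"waste over plant life ~ {waste_m3:.0f} m^3")
```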
Europeans have also made reactor models in their Power Plant Conceptual Studies (PPCS) [26]. Figure 9.38 is a diagram of the tokamak in those designs. As with the ARIES studies, Models A, B, C, and D in PPCS (Fig. 9.39) trace the evolution of the design with advances in fusion physics and technology, with Model D using the most speculative assumptions. All these models produce about 1.5 GW of
[Fig. 9.37 bar-chart categories: Natural gas, Coal, Nuclear, Wind (intermittent), Fusion (ARIES-AT)]
Fig. 9.37 Estimated year 2020 cost of electricity in US cents per kilowatt-hour from different power sources [graph adapted from [25], but original data are from the Snowmass Energy Working group and the US Energy Information Agency (yellow ellipses)]. The red range is the cost if a $100/ton carbon tax is imposed. The fusion range is for different size reactors; larger ones have lower cost
[Fig. 9.38 labels: TF coils; coolant manifolds; 8 upper ports (modules & coolant); 176 blanket modules (5-6 year lifetime); 8 central ports (modules); vacuum vessel]
Fig. 9.38 Drawing of tokamak in Power Plant Conceptual Studies in Europe [26]

electricity, but they are smaller and use less power with gains in knowledge. The recirculating fraction and thermal efficiency of Model D match those of ARIES-AT. Safety and environmental issues were carefully considered. The cost estimates are given in Fig. 9.40, also in US cents per kWh. The difference between the wholesale price of electricity and that available to consumers is clearly shown. It is seen that fusion compares favorably with the most economical sources, wind and hydro.
[Fig. 9.39 bar-chart quantities: R (m), Fusion power, Bootstrap fraction, Wall load (MW/m²), Current (MA), Recirc. Frac., Therm. Effic.; vertical scale 0-10]
Sonoluminescence is a phenomenon in which megahertz sound waves in a liquid can cause a bubble to collapse into a very small dot, creating a high temperature there. Using deuterated acetone as the liquid, some researchers reported detecting fusion neutrons created by the collapsing bubble. However, experts on sonoluminescence, including Seth Putterman of UCLA, were not able to reproduce these results and have categorically stated that this is not a way to produce fusion. It appears that this is an even more extreme farce than cold fusion.
This is the original idea on cold fusion, having been disclosed by Luis Alvarez in his Nobel Prize speech in 1968 [49]. Muons are fundamental particles like electrons but 207 times heavier. They are produced in accelerators and live for 2.2 μs (an eternity!)
before decaying. As you know, elementary particles and photons have a dual nature, sometimes behaving like particles and sometimes like waves. As waves, they have a wavelength, called the de Broglie wavelength, which is inversely proportional to their masses. Being some 200 times heavier, muons have wavelengths 200 times shorter. A negative muon can take the place of an electron in an atom, and the "cloud" of negative charge is then 200 times smaller, bringing the nuclei of molecules closer together. The muon-fusion process for DT molecules is shown in Fig. 10.55.
In the first line of that figure, normal D and T atoms with their large electron clouds can combine into a DT molecule, just as two H atoms can form H2. In the second line, a μ-meson (muon) replaces the electron in the tritium atom, and the resulting muonic tritium atom has a smaller size. Next, a deuterium nucleus joins the triton inside the muon cloud, forming muonic DT with the two nuclei close together. Normally, the D and the T repel each other with their positive charges and cannot fuse into helium at room temperature. However, in quantum mechanics, particles can tunnel through the Coulomb barrier if it is thin enough. In a muonic DT molecule, this can happen very fast, and the entire process can happen several hundred times during the 2.2-μs lifetime of the muon. In the last line of Fig. 10.55, DT fusion has occurred, creating the usual products of a neutron and an alpha
particle. What the muon does then is essential. If it flies off, it can catalyze another fusion again and again. However, if it “sticks” to the alpha particle, it is carried off and is lost. The sticking fraction is between 0.4% and 0.8%, and this limits the number of reactions that one expensive muon can catalyze.
Experiments are being done in accelerator laboratories like RIKEN-RAL4 in England and TRIUMF in Vancouver, Canada. About 120 DT fusions per muon have been observed [28]. At 17.6 MeV per event, this amounts to over 2 GeV of energy. However, it takes 5 GeV to make each muon. There are ways to improve this ratio: by using polarized deuterons, by working at high temperatures, or by making cheaper accelerators. At this stage, the physics of muon fusion is still in its infancy.
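The energy bookkeeping in this paragraph is easy to verify with the numbers given in the text:

```python
# Energy balance for muon-catalyzed fusion, using the figures in the text.
fusions_per_muon = 120        # observed number of catalyzed D-T fusions
mev_per_fusion = 17.6         # energy yield of one D-T reaction, MeV
muon_cost_gev = 5.0           # accelerator energy needed per muon (approximate)

yield_gev = fusions_per_muon * mev_per_fusion / 1000.0
print(f"energy out per muon ~ {yield_gev:.1f} GeV")   # just over 2 GeV
print(f"energy gain ~ {yield_gev / muon_cost_gev:.2f}")  # well below break-even

# The sticking fraction also caps the catalysis chain: at a ~0.6%
# probability of sticking to the alpha, a muon averages at most
# roughly 1/0.006 ~ 170 fusions even if nothing else limits it.
max_catalyses = 1.0 / 0.006
```

So even at the sticking-limited maximum, the energy returned per muon would only roughly match its production cost, which is why cheaper muon sources are one of the proposed improvements.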
Wind power rarely occurs where it is needed most. Conversely, you would not want to live where the wind is always fierce, like the west side of the Falkland Islands. New transmission lines are necessary, and this obstacle is preventing wind power from developing as fast as planned. In Germany alone, it is estimated that 2,700 km (1,700 miles) of extra-high-voltage lines will be needed by 2020 to carry an expected 48 GW of wind power. These lines run at up to 380 kV, compared to high-voltage lines at 110 kV, which are scary enough, and will cost over 3 billion
Fig. 3.17 Compressed air energy storage scheme for wind power (Vestas Wind, No. 16, April 2009)
euros.11 Traditionally, power plants are built near population centers, so the transmission lines are short. Distributing wind power will require new rights-of-way, some of them underground. Underground lines cost 7-10 times as much as standard lines.11 There will be political, legal, and social problems in addition to the large cost. Germany,
and even all of Europe, is small compared with the US. Transmission lines are an even bigger problem for wind power in the USA.
Load distribution is another big problem. If the wind input to the power grid varies by as much as 10%, the grid can become unstable. However, if several wind sources are connected to the same grid, load variation can be avoided if the power can be switched in and out fast enough from each of the sources. This requires accurate forecasting of the wind speed and close collaboration among grid operators. The Nordic countries of Sweden, Norway, and Denmark are close enough to pool their resources for load leveling.18 They can exchange wind and hydro energy. For instance, when wind power is excessive in Denmark, it can sell the power to Norway. Norway can accommodate the power by slowing down its hydroelectric power, storing the energy in the reservoir above a dam. The hydro energy can be sold back to Denmark when the wind dies down.
Wind is so variable that it can never be a large fraction of the total grid power. Not only that, but it must be backed up by conventional fossil fuel or nuclear plants. Estimates vary from 90%11 to 100%.24 That is, for every megawatt of new wind power installed, one megawatt’s worth of new coal, oil, gas, or nuclear plants have to be built.
The Cast of Characters
The atomic number of an element is the number of protons in the nucleus. Uranium, element 92, has atomic number 92. Fissionable elements all have atomic number 92 or higher.75 The mass number is the total number of protons and neutrons in the nucleus. So uranium 235 has 92 protons and 143 (= 235 − 92) neutrons. The atomic weight is a loosely used term which is essentially the mass number but differs by a fraction because protons and neutrons do not weigh exactly the same; they are bound with different energies; and energy and mass are interchangeable, according to Einstein. The symbol for uranium 235 is 92U235, but we shall write it as U235 because the 92 is already specified by "U." Elements can have the same atomic number but different mass numbers; these are called isotopes. Here are a few isotopes of importance in fission:
U238: The normal isotope of uranium in nature.
U235: The fissionable isotope of uranium, with an abundance of only 0.7% in nature.
Pu239: Plutonium (element 94) is generated in a reactor and fissions easily.
U239: Uranium 239 decays76 in 23 min.
Np239: Neptunium 239 (element 93) decays in 2.4 days.
Cs137: Cesium 137 (element 55) decays in 30 years.
I131: Iodine 131 (element 53) decays in eight days.
The first group of three contains the isotopes we will be discussing. The next two are intermediate states in the transformation of uranium into plutonium in a reactor. The last two are the most dangerous reaction products when released into the air in an accident. The decay times here are half-lives. Isotopes never completely disappear. Half of what is left goes away in a half-life. Note that only isotopes with odd mass numbers are fissionable.77 What is not given here is the tremendous amount of energy that nuclei can give. A single-fuel pellet, the size of a AAA battery, can make as much electricity as 6 tons of coal.78
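Half-life decay is a simple exponential, and the decay times above make the point concrete (the 90-year example is ours, for illustration):

```python
# Fraction of a radioisotope remaining after time t, given its half-life.
# Half of whatever is left decays away in each successive half-life.
def fraction_remaining(t, half_life):
    return 0.5 ** (t / half_life)

# Cesium 137 (half-life 30 years): after 90 years, three half-lives
# have elapsed, so one-eighth (12.5%) remains.
print(fraction_remaining(90, 30))   # 0.125

# Iodine 131 (half-life 8 days) is essentially gone after 3 months:
print(f"{fraction_remaining(90, 8):.5f}")  # well under 0.1%
```

This is why iodine 131 dominates the danger in the first weeks after an accident, while cesium 137 contaminates land for decades.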
You are probably wondering how we can heat a plasma to 100 million degrees (10 keV). We can do that because a plasma is not a collisionless superconductor after all! Although much of the theory of instabilities is done with the approximation of collisionlessness, we now have to take into account collisions between electrons and ions, infrequent though they are. First of all, plasmas can be made only inside a vacuum chamber because its heat would be snuffed out by air. Vacuum pumps create a high vacuum inside the torus. Then a gas such as hydrogen, deuterium, or helium is bled in up to a pressure that is only three parts in a million (3 × 10⁻⁶) times as high as atmospheric pressure. These atoms are then ionized into ions and electrons by applying an electric field, as we will soon show. Although the plasma is heated to millions of degrees, it is so tenuous that it does not take a lot of energy to heat the plasma particles to a million degrees (100 eV) or even 100 million degrees (10 keV). This is the reason a fluorescent tube is cool enough to touch even though the electrons in it are at 20,000°. The density of electrons inside is much, much lower than that of air.
Once we have the desired gas pressure in the torus, we can apply an electric field in the toroidal direction with a transformer (this will be explained later). There are always a few free electrons around due to cosmic rays, and these are accelerated by the E-field so that they strip the electrons off gas atoms that they crash into, freeing more electrons. These then ionize more atoms, and so on, until there is an avalanche, like a lightning strike, which ionizes enough atoms to form a plasma. This takes only a millisecond or so. The E-field then causes the electrons to accelerate in the toroidal direction, making a current that goes around the torus the long way. The ions move in the opposite direction, but they are heavy and move so slowly that we can assume that they stay put in this discussion. If the plasma were really collisionless, the electrons would “run away” and gain more and more energy while leaving the ions cold. However, there are collisions, and this is the mechanism that heats up the whole plasma.
Running an electric current through a wire heats it because the electrons in the wire collide with the ions, transferring to them the energy gained from the applied voltage. According to Ohm's law, the amount of heating is proportional to the wire's resistance and to the square of the electric current. In toasters, a high-resistance wire is used to create a lot of heat. High resistance is hard to get in a plasma because it is almost a superconductor. The number of ions that electrons collide with may be 10 orders of magnitude (1010 or 10 billion times) smaller than in a solid wire. Nonetheless, heating according to Ohm's Law ("ohmic heating") is effective because very large currents can be driven in a plasma, currents above 100,000 A (100 kA), and even many megaamperes (MA). This is the most convenient way to heat a plasma in a torus, but when the resistance gets really low at fusion temperatures, other methods are available.
Calculating the resistance of a plasma is not easy because the collisions are not billiard-ball collisions. The transfer of energy between electrons and ions occurs through many glancing collisions as they pass by at a distance, pushing one another with their electric fields. This problem was first solved by Spitzer and Härm [5], and their formula for plasma resistivity ("Spitzer resistivity") allows us to compute exactly how to raise a plasma's temperature by ohmic heating.
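An often-quoted approximate form of the parallel Spitzer resistivity (the coefficient and the Coulomb-logarithm value below are standard rules of thumb, not exact) shows just how conductive a hot plasma is:

```python
# Approximate parallel Spitzer resistivity in ohm-meters, with the
# electron temperature T_e in eV:
#   eta ~ 5.2e-5 * Z * ln_Lambda / T_e**1.5
# ln_Lambda ~ 15 is a typical Coulomb logarithm for fusion plasmas.
def spitzer_resistivity(t_e_ev, z=1.0, ln_lambda=15.0):
    return 5.2e-5 * z * ln_lambda / t_e_ev**1.5

eta_plasma = spitzer_resistivity(10_000.0)   # hydrogen plasma at 10 keV
eta_copper = 1.7e-8                          # ohm-m, copper at room temperature

print(f"plasma: {eta_plasma:.1e} ohm-m")
print(f"copper: {eta_copper:.1e} ohm-m")
# A 10-keV plasma turns out to be some 20x more conductive than copper.
```

Note the T^(-3/2) dependence: resistivity falls as the plasma heats up, so ohmic heating becomes ever less effective at fusion temperatures, which is exactly why the other heating methods mentioned above are needed.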
This resistivity formula allows us to calculate something of even more interest; namely, the rate at which plasma collisions can move plasma across magnetic field lines. Every time an electron collides with an ion, both their guiding centers shift more or less in the same direction, so both of them move across the field lines. The plasma, then, spreads out (diffuses) across the magnetic field the way an ink drop diffuses in a glass of water until the ink reaches the wall. This is a slow process, but nonetheless it limits how long a magnetic bottle can hold a plasma. There is, however, a big difference between ordinary diffusion and plasma diffusion in a magnetic field. In ordinary diffusion, collisions slow up the diffusion rate by making the ink molecules, for instance, undergo a random walk. The more the collisions, the slower the diffusion. A magnetically confined plasma, on the other hand, does not diffuse at all unless there are collisions. Without collisions, the particles would just stay on the same field line, as in Fig. 4.5. Collisions cause them to random walk across the B-field, and the collision rate actually speeds up the diffusion. Since a
hot plasma makes very few collisions, being almost a superconductor, this “classical” diffusion rate is very slow. This is called “classical” diffusion because it is the rate predicted by standard, well-established theory and applies to normal, “dumb” gases. Unfortunately, plasma can diffuse out rapidly by generating its own electric fields; and it leaks out much faster than at the classical rate.
Figure 5.11 shows the classical confinement time of a hot plasma as a function of magnetic field. We have assumed fusion-like electron and ion temperatures of 10 keV and a plasma diameter of 1 m, a large machine but smaller than a full reactor. What is shown is the time for the plasma density to drop to about one-third of its initial value. This is similar to the "half-life" of a radioisotope used in medicine, a concept most people are familiar with. We see that at a field of 1 T (10,000 G), which we found before to be necessary to balance the plasma pressure, the time is about 90 seconds, a minute and a half. This is much longer than what the Lawson criterion requires, which, we recall, is about 1 second. It was this prediction of very good confinement that gave early fusion researchers the optimistic view that controlling the fusion reaction was a piece of cake. It did not happen, of course. Numerous unanticipated instabilities caused the confinement times to be thousands of times shorter than classical, and it is the understanding and control of these instabilities that has taken the last five decades to solve.
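The classical rate can be estimated as a random walk of one Larmor radius per collision. The numbers below are rough assumptions of ours, intended only to reproduce the order of magnitude plotted in Fig. 5.11:

```python
import math

# Order-of-magnitude estimate of classical cross-field confinement:
# each electron-ion collision displaces a particle by roughly one
# electron Larmor radius, so D ~ r_L**2 * nu_ei and tau ~ a**2 / D.
# All input values are illustrative assumptions.
e = 1.602e-19        # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
n = 1.0e20           # density, m^-3 (assumed)
t_e_ev = 10_000.0    # 10 keV
b = 1.0              # magnetic field, tesla
a = 0.5              # minor radius, m (1-m plasma diameter)

v_th = math.sqrt(2.0 * t_e_ev * e / m_e)     # electron thermal speed
r_l = m_e * v_th / (e * b)                   # electron Larmor radius
# Approximate electron-ion collision frequency (formulary-style fit,
# ln_Lambda = 15 assumed):
nu_ei = 2.91e-12 * n * 15.0 / t_e_ev**1.5    # per second
d_classical = r_l**2 * nu_ei                 # diffusion coefficient, m^2/s
tau = a**2 / d_classical                     # confinement time, very roughly

print(f"tau_classical ~ {tau:.0f} s")  # hundreds of seconds at 10 keV, 1 T
```

The estimate lands within a small factor of the ~90 s read off Fig. 5.11, which is all a random-walk argument can promise; the point is that classical confinement at 1 T is minutes, not the milliseconds the early experiments actually delivered.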
1. The data are from Bosch and Hale [1]. The vertical axis is actually reactivity in units of 10¹⁶ reactions/cm³/s.
2. Such data were originally given by Post [2] and have been recomputed using more current data.
3. What is actually shown here is an equipotential of the electric field, which is the path followed by the guiding centers in an E × B drift. The short-circuiting occurs when the spacing becomes smaller than the ion Larmor radius, so that the ions can move across the field lines to go from the positive to the negative regions on either side of the equipotential. The curves are measured, not computed.
The triple product plotted in Fig. 8.20 contains the energy confinement time τE, which is how long each increment of energy used to heat the plasma stays in it before it has to be renewed. The plasma energy is lost through three main channels: radiation, mostly in the form of X-rays; the escape of ions to the wall; and the escape of electrons, which carry their heat with them. The first two of these, radiation and ion loss, follow theory and can be predicted, but electrons escape faster than can be explained. The energy loss by electrons can be measured, but it cannot be predicted. It would be
[Fig. 8.21 axes: Predicted Confinement Time (horizontal) vs. measured confinement time (vertical), 0.001-1 s on logarithmic scales]
Fig. 8.21 Data from 13 tokamaks showing that the energy confinement time as measured follows an empirical scaling law12

impossible to design a new machine accurately without knowing what τE would be, but fortunately the over 200 tokamaks that have been built were found to follow an empirical scaling law. This formula12 gives the value of τE in terms of the size and shape of the tokamak, the magnetic field, the plasma current, and other such factors. The result is shown in Fig. 8.21.
This empirical scaling law is the basis on which new tokamaks are designed. It cannot be derived theoretically, but it is obeyed by a massive database from a variety of tokamaks. This "law" is given in mathematical form in footnote 12. Most of the dependences are consistent with our understanding of the physics. For instance, τE increases with the square of the machine size. The strength of the toroidal field does not matter much because the size of the banana orbits depends on the poloidal field. The poloidal field indeed enters in the linear dependence on plasma current. The wonder is that only eight parameters are needed to make all tokamaks fall into line. As seen in Fig. 8.21, the data cover over a factor of 100 in τE. To design ITER, the scaling had to be extrapolated by another factor of 4.
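For concreteness, a widely quoted form of this eight-parameter fit (the IPB98(y,2)-style scaling; the coefficient and exponents below are approximate and may differ slightly from the formula in footnote 12) can be sketched as:

```python
def tau_e_scaling(i_ma, b_t, n_19, p_mw, r_m, epsilon, kappa, m_amu):
    """H-mode energy confinement time in seconds, IPB98(y,2)-style fit.
    i_ma:    plasma current (MA)        b_t:   toroidal field (T)
    n_19:    density (10^19 m^-3)       p_mw:  heating power (MW)
    r_m:     major radius (m)           epsilon: inverse aspect ratio a/R
    kappa:   elongation                 m_amu: ion mass (amu)
    Coefficient and exponents are approximate (assumed values).
    """
    return (0.0562 * i_ma**0.93 * b_t**0.15 * n_19**0.41 * p_mw**-0.69
            * r_m**1.97 * epsilon**0.58 * kappa**0.78 * m_amu**0.19)

# ITER-like inputs (illustrative): note the nearly quadratic R**1.97 size
# scaling and the nearly linear dependence on plasma current noted above,
# while the toroidal field enters only weakly as B**0.15.
tau = tau_e_scaling(i_ma=15, b_t=5.3, n_19=10, p_mw=100,
                    r_m=6.2, epsilon=2.0/6.2, kappa=1.7, m_amu=2.5)
print(f"tau_E ~ {tau:.1f} s")   # a few seconds for an ITER-scale machine
```

A few seconds is indeed the factor-of-4 extrapolation beyond the roughly one-second confinement times of the largest existing tokamaks.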
In Chap. 3, we carefully showed that a magnetic bottle has to be doubly connected and not a sphere; hence tokamaks are toruses.1 How, then, can a tokamak be spherical? No, spherical tokamak is not an oxymoron. A tokamak can be spherical as long as there is still a hole in the middle. This is shown in Fig. 10.11. These small, fat tokamaks have typical aspect ratios A between 1 and 2. There are many advantages to having small A, but the problem is how to fit all the necessary equipment into the small hole. Spherical tokamaks (STs) are so attractive that many clever ideas have been proposed for treating the small hole, and there are over two dozen STs all over the world testing these ideas.2 In fact, one can eliminate the hole in the vacuum chamber altogether as long as the magnetic field is still toroidal.
Aside from the small size and the consequent cost savings, STs have a large advantage in plasma stability. This is explained in Fig. 10.12, which shows the magnetic-field structure in an ST. The field lines behave very differently from those in a normal tokamak (Fig. 6.1). A particle following a field line spirals around the central column before returning to the outside of the plasma. Good and bad curvatures are shown in Fig. 7.10. In good curvature, the bend is toward the plasma, and in bad curvature, it is away from the plasma. We see that there is a lot of good curvature around the central column, and a region of weaker bad curvature when the field line returns to the top. Since particles spend more time in good curvature than in bad, there are strong forces pushing the plasma inwards. Much smaller magnetic fields are needed in STs because of the good confinement.
In a 1986 paper [16], Martin Peng and D. J. Strickler noted that the vertical field needed in tokamaks (Fig. 6.19) had a natural tendency to elongate the plasma, and they laid out the basics for the design of STs. Elongation is the vertical length of the plasma compared with its minor diameter, and it has good consequences for STs. As the aspect ratio goes down from 2.5 to 1.2, the elongation increases from 1.1 to 2, and the magnetic field that gives the needed quality factor q for a given
[Fig. 10.11 diagram labels: large aspect ratio (conventional tokamak); small aspect ratio (spherical tokamak); Aspect Ratio = Major radius / minor radius, A = R / a]
Fig. 10.11 A spherical tokamak has an aspect ratio much smaller than a normal tokamak [15]
Fig. 10.12 Sketch of one magnetic field line in a spherical tokamak with a current-carrying central column. The regions of good and bad curvature are marked (Adapted from S. Prager (University of Wisconsin), Magnetic Confinement Fusion Science Status and Challenges, February 2005)
plasma current falls by a factor of 20! [15] The value of beta (ratio of plasma pressure to magnetic-field pressure) is therefore very high in STs.
The British machines START (Small Tight Aspect Ratio Tokamak) and its successor MAST (MegAmpere Spherical Tokamak) have given the most information on STs. A photograph of the spherical plasma in START is shown in Fig. 10.13. The graph of beta values obtained in START (Fig. 10.14) shows the great improvement over normal tokamaks. In that graph, βT is the toroidal beta (that calculated with the toroidal magnetic field), and βN is the normalized beta, as defined in Chap. 8 under Troyon Limit. The recent data (red dots) show that the density limit can be exceeded in a spherical torus.
In spite of their different shape, STs exhibit the same phenomena observed in large-A tokamaks: the H-mode and ELMs, for instance. MAST is well suited to studies of ELMs and was used for the design of ELM-suppression coils. The shape of the field lines also gives STs a natural divertor.
Now we tackle the question of how to minimize the width of the central column. The toroidal magnetic field in a tokamak is generated by coils that thread through the hole, as shown in Fig. 6.1. All the coil legs that go through the hole can be combined into a single copper bar carrying all the current, as shown in Fig. 10.13.
Fig. 10.14 Plot of toroidal beta (βT) in START and normal tokamaks [15]
This is possible because the B-field is small in an ST, so the coil currents are reduced. To drive the toroidal plasma current, the brute-force way is to put an iron core through the hole and drive the current by transformer action, as shown in Fig. 7.14. Most tokamaks use air-core transformers that have no iron. These consist of toroidal coils around the plasma, including some inside the hole. This is shown in Fig. 7.15. These methods are called inductive drive. The disadvantage is that the primary current has to keep increasing to induce the plasma current; and since it cannot increase forever, the tokamak has to be pulsed. Modern tokamaks use noninductive drive,
Fig. 10.15 Creation of a toroidal plasma in a spherical tokamak with no central column by the merging of two plasmas [15]
which consists of bootstrap current and wave-driven currents (Chap. 9). This would eliminate the need for toroidal coils inside the hole.
The problem is that you can’t launch a wave unless there’s a plasma, and you can’t confine a plasma unless there is already a rotational transform. So it seems that at least some small toroidal coils have to be crammed into the hole, but there may be a solution. Neutral-beam injection is the usual way to heat a large tokamak. Recently, there has been some success (in MAST [15]) in ramping up the NBI in such a way that it drives a current as well. It is also possible to create plasmas in corners of the chamber, where poloidal coils can be inserted, and to have these plasmas drift and merge into the center. This is illustrated in Fig. 10.15.
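The pulse-length limit of inductive drive mentioned above follows from simple transformer arithmetic: the loop voltage is the rate of change of flux through the hole, so the available flux swing caps how long the pulse can last. The flux swing and loop voltage below are assumed round numbers, not data from the text:

```python
# A rough flux-swing estimate (illustrative numbers): inductive drive
# works by transformer action, V_loop = dPhi/dt, so the flux through
# the hole must keep rising.  A finite flux swing therefore limits the
# pulse length.

def pulse_length(flux_swing_wb, loop_voltage_v):
    """Maximum pulse length t = delta-Phi / V_loop (seconds)."""
    return flux_swing_wb / loop_voltage_v

# Suppose the transformer can swing 40 Wb of flux and sustaining the
# plasma current takes a loop voltage of about 0.5 V:
print(pulse_length(40.0, 0.5))  # 80 seconds, then the pulse must end
```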
While experimentation on STs is being conducted intensely worldwide, reactor studies have been made both in Europe and the USA. The ARIES-ST design of 1999 is shown in Fig. 10.16. The central column is made to be slid out and replaced easily. All blanket modules are on the outside. Note the natural divertors at the top and bottom.
By far the most common type of solar cell because of their long history, silicon solar cells are fast being overtaken by thin-film cells, which are much less complex and costly.
Crystalline silicon is expensive and takes a lot of energy to make. It also absorbs only part of the solar spectrum, and weakly at that. Only those photons that have more energy than silicon’s bandgap can be absorbed, so the red and infrared parts of sunlight are wasted. That energy just heats up the solar cell, which is not good. The blue part of the solar spectrum is also partly wasted, for the following reason. Each photon can release only one electron regardless of its energy, as long as that energy exceeds the bandgap. So a very energetic photon at the blue end of the spectrum uses only part of its energy to create electric current, and the rest is again lost as heat. To capture more colors of sunlight, cells are made with other materials whose bandgaps differ from silicon’s. These other semiconducting materials are called III-V compounds, and they are explained in Box 3.4.
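The bandgap argument can be made concrete with a short sketch: a photon is absorbed only if its energy h·c/λ exceeds the gap. It uses the standard value of about 1.1 eV for silicon’s bandgap; the wavelengths are illustrative:

```python
# Which photons can silicon absorb?  A photon's energy is h*c/lambda;
# only photons above the bandgap create an electron-hole pair.

HC_EV_NM = 1239.84       # h*c expressed in eV*nm
SI_BANDGAP_EV = 1.1      # standard approximate bandgap of silicon

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

for wavelength in (400, 700, 1200):   # blue, red, infrared (nm)
    e = photon_energy_ev(wavelength)
    absorbed = e > SI_BANDGAP_EV
    print(f"{wavelength} nm -> {e:.2f} eV, absorbed by Si: {absorbed}")
```

The 1200-nm infrared photon falls below the gap and is wasted, while the 400-nm blue photon carries almost 2 eV more than the gap, and that excess also ends up as heat, just as the text describes.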
Box 3.4 Doped and III-V Semiconductors
The way semiconductors can be manipulated is best understood by looking at the part of the periodic table near silicon, as shown in Fig. 3.34. The Roman numerals at the top of each column stand for the number of electrons in the outer shell of the atom. The different rows have more inner shells, which are not active. The small number in each cell is the atomic number of the element. Silicon (Si) and germanium (Ge) are the most common semiconductors and are in column IV, each with four active electrons. They share these with their four closest neighbors in what is called covalent bonds. These are indicated by the double lines in Fig. 3.35. These bonds are so strong that the atoms are held in a rigid lattice, called a crystal. The actual lattices are three-dimensional and not as simple as in the drawing. The crystal is an insulator until a photon makes an electron-hole pair by knocking an electron into the conduction band, as we saw in Fig. 3.32.
Fig. 3.34 The periodic table near silicon
However, there is another way to make Si or Ge conduct. We can replace one of the silicon atoms in Fig. 3.35a with an atom from column III, for instance, boron. We would then have a “hole.” That’s because boron (B) has only three active electrons and leaves a place in a covalent bond where an electron can go. Since holes can move around and carry charge as if they were positive electrons, this “doped” semiconductor can conduct. We can also dope Si with an atom from column V, such as phosphorus (P), as shown in Fig. 3.35b. Since phosphorus has five active electrons, it has an electron left over after forming covalent bonds with its neighbors. This is a free electron, which can carry current. Note that the P nucleus has an extra charge of +1 when one electron is removed, so the overall balance of + and − charges is still maintained. The conductivity can be controlled by the number of dopant atoms we add; only a few parts per million are enough to make a doped semiconductor a good enough conductor to interface with metal wires. Any element in column III, boron (B), aluminum (Al), gallium (Ga), or indium (In), can be used to make a p-type semiconductor (one with holes). Any element in column V, nitrogen (N), phosphorus (P), arsenic (As), or antimony (Sb), can be used to make an n-type semiconductor. When the doping level is high, these are called p+ and n+ semiconductors.
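The column rule for dopants can be summarized in a small sketch; the element table below simply restates the columns named above:

```python
# Column-III dopants in a Si/Ge lattice make p-type material (holes);
# column-V dopants make n-type material (free electrons).

COLUMN = {
    "B": 3, "Al": 3, "Ga": 3, "In": 3,   # column III
    "Si": 4, "Ge": 4,                    # column IV host atoms
    "N": 5, "P": 5, "As": 5, "Sb": 5,    # column V
}

def doping_type(dopant):
    """Return 'p' or 'n' for a dopant substituted into Si or Ge."""
    col = COLUMN[dopant]
    if col == 3:
        return "p"   # one electron short of 4 bonds -> a hole
    if col == 5:
        return "n"   # one electron beyond 4 bonds -> a free electron
    raise ValueError(f"{dopant} is a host, not a dopant, in this scheme")

print(doping_type("B"))  # p
print(doping_type("P"))  # n
```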
Now we can do away with silicon! We can make compounds using only elements from columns III and V, the III-V compounds. Say we mix gallium and arsenic in equal parts to make gallium arsenide (GaAs). The extra electrons in As can fill the extra holes in Ga, and we still have a lattice held together by covalent bonds. We can even mix three or more III-V elements. For instance, GaInP2 has one part Ga and one part In from column III and two parts P from column V. There are just enough electrons to balance the holes. This freedom to mix
any of the III elements with any of the V elements is crucial in multijunction solar cells. First, each compound has a different bandgap, so layers can be used to capture a wide range of wavelengths in the solar spectrum. Second, there is lattice-matching. The lattice spacing is different in different compounds. Current cannot flow smoothly from one crystal to another unless the spacings match up. Fortunately, there is so much freedom in forming III-V compounds that multijunction cells with up to five compounds with different bandgaps have been matched. Figure 3.36 shows how the three layers of a triple-junction cell cover different parts of the solar spectrum.
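A quick sketch shows how stacked bandgaps split the spectrum: each gap sets a cutoff wavelength λ = hc/E_g below which that layer absorbs. The bandgap values used here (GaInP2 about 1.9 eV, GaAs about 1.4 eV, Ge about 0.67 eV) are commonly quoted approximations, not figures from the text:

```python
# Cutoff wavelength of each subcell in a triple-junction stack:
# a layer with bandgap E_g absorbs photons with wavelength below
# lambda_cutoff = h*c / E_g.

HC_EV_NM = 1239.84  # h*c in eV*nm

LAYERS = [("GaInP2", 1.9), ("GaAs", 1.42), ("Ge", 0.67)]  # top to bottom

for name, gap_ev in LAYERS:
    cutoff_nm = HC_EV_NM / gap_ev
    print(f"{name}: absorbs photons shorter than ~{cutoff_nm:.0f} nm")
```

The cutoffs step outward (roughly 650, 870, and 1850 nm), so the three layers between them cover the visible and much of the infrared, as Fig. 3.36 illustrates.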
At the bottom of Fig. 3.34, we have shown a II-VI compound, cadmium telluride (CdTe). Each pair of Cd and Te atoms contributes two electrons and holes. This particular II-VI material has been found to be very efficient in single-layer solar cells. It is one of the main types of semiconductors used in the rapid expansion of the thin-film photovoltaic industry.
Fig. 3.36 The parts of the solar spectrum covered by each subcell of a triple-junction solar cell (http://www.amonix.com/technology/index.html)
By adjusting the compositions of these III-V compounds, their bandgaps can be varied in such a way as to cover different parts of the solar spectrum. This is illustrated in Fig. 3.37. The spectrum there will be explained in Fig. 3.40. The different cells are then stacked on top of one another, each contributing to the generated electric current, which passes through all of them. There are many layers in such a “multijunction” cell. The layers of a simple two-junction cell are shown in Fig. 3.38. The top cell has an active layer labeled n-GaInP2 and is sandwiched
Fig. 3.37 Top: the solar spectrum plotted against photon energy in eV. Long (infrared) wavelengths are on the left, and short (ultraviolet) wavelengths are on the right. The visible part is shown in the middle. Bottom: bandgaps of various semiconductors plotted on the same eV scale. The bandgaps of Ge, GaAs, and GaInP2 are fixed at the positions marked. In InGaN, half the atoms are N, and the other half In and Ga. The bandgap of InGaN, given by the data points, varies with the percentage of Ga in the InGa part. As illustrated for the marked point, the part of the spectrum on the blue side of its bandgap is captured, and the part on the red side is lost (adapted from http://emat-solar.lbl.gov)
between the current-collecting buffer layers labeled n-AlInP2 and p+GaAs. This is the basic cell structure shown in Fig. 3.33. The bottom cell has an active element labeled n-GaAs surrounded by buffer layers. Connecting the two cells is a two-layer tunnel diode, which ensures that all the currents flow in the same direction. Stacks of up to five cells have been successfully made,38 yielding efficiencies above 40%, compared with 12–19% for single-junction silicon cells. Each cell in a stack has three layers plus the connecting tunnel diode. However, the layers are not all equally thick as in the diagram, and the entire stack can be less than 0.1 mm thick! Pure crystalline silicon needs at least 0.075 mm of thickness to absorb the light and at least 0.14 mm to keep from cracking [7], but this does not apply to the other materials.
The semiconductor layers are the main part of a solar cell, but they are thin compared with the rest of the structure. A triple-junction cell is shown in Fig. 3.39. The support layer could be a stainless steel plate on the bottom or a glass sheet on the top. The top glass can also be grooved to catch light coming at different angles. At the bottom is a mirror to make the light pass through the cell a second time.
The layer stack of Fig. 3.38, from top to bottom:
Antireflection coating
AR and conductive grid coating
Power collection grid
Top cell: n-AlInP2 / n-GaInP2 / p+GaAs
Tunnel diode: p+GaAs / n+GaAs
Bottom cell: n-AlGaAs / n-GaAs / p-GaAs
Substrate: p+GaAs
Fig. 3.38 The parts of a two-cell stack using gallium-indium-phosphide (GaInP2) and gallium arsenide (GaAs) (http://www.vacengmat.com/solar_cell_diagrams.html)
At the top is an antireflection coating such as we have on camera or eyeglass lenses. The current is collected by a grid of “wires,” formed by a thin film of conducting material. The top grid has to pass the sunlight, so it is made of a transparent conductor like indium-tin oxide, which is used in computer and TV screens for the same purpose. The photovoltaic layers have to be in a specific order. At the top is material with the largest bandgap, which can capture only the blue light, whose photons have the highest energy. The lower energy photons are not absorbed, so they pass through to the next layer, labeled “green” here. This has a lower bandgap and captures less energetic photons. Last comes the “red” layer, which has the smallest bandgap and can capture the low-energy photons (the longest wavelengths) which have passed through the other layers unmolested. If the red layer were on top, it would use up all the photons that the other layers could have captured, but it would use them inefficiently, since the voltage generated is the same as the bandgap voltage.
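The ordering rule just described (largest bandgap on top) can be sketched as a photon-routing function: a photon travels down the stack and is captured by the first layer whose bandgap it exceeds. The bandgap values are the same commonly quoted approximations as before and are assumptions of the example:

```python
# Which layer of a triple-junction stack absorbs a given photon?
# Layers are listed top to bottom, in decreasing bandgap order.

HC_EV_NM = 1239.84  # h*c in eV*nm
STACK = [("blue", 1.9), ("green", 1.42), ("red", 0.67)]

def absorbing_layer(wavelength_nm):
    energy_ev = HC_EV_NM / wavelength_nm
    for name, gap_ev in STACK:
        if energy_ev > gap_ev:
            return name   # first (highest-gap) layer that can absorb it
    return None           # too red: passes through the whole stack

print(absorbing_layer(450))   # blue
print(absorbing_layer(800))   # green
print(absorbing_layer(1500))  # red
```

Reversing the stack order would send every photon into the "red" layer first, wasting the extra energy of the blue photons, which is exactly why the order matters.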
The voltage generated by each cell is only about 1.5 V, so cells are connected into chains that add up the voltage in series to form a module. Modules giving a voltage of, say, 12 V are then grouped into arrays, and thousands of arrays make a solar farm. Modules and arrays generally need to be held in a frame, adding to the cost, and the frames have to be supported off the ground. There is a problem with the series arrangement of the cells. If one cell fails, the output of the entire chain is lost, since the current has to go through all the cells in a chain. Similarly, if one of the layers in a cell fails, there can be no current going out of that cell. Fortunately, the failure rate of commercial units is known and is not bad. Solar cells can still produce 80% of their power after 25 years or more, at least for single-junction cells.
Fig. 3.39 A typical multijunction solar cell assembly, with a back reflector, the thin-film layers, and a flexible stainless steel substrate. All the layers in the active part of this cell are less than 1 μm (1/1,000th of a millimeter) thick (http://www.solarnavigator.net/thin_film_solar_cells.htm)
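The series-string arithmetic above can be sketched briefly. The 1.5 V per cell comes from the text; the cell currents are made-up illustrative values:

```python
# In a series chain, voltages add but the current is set by the
# weakest cell; a dead cell (0 A) kills the output of the whole chain.

CELL_VOLTAGE = 1.5  # volts per cell, as stated in the text

def cells_for_module(target_volts):
    """How many cells in series to reach the target voltage."""
    return int(-(-target_volts // CELL_VOLTAGE))  # ceiling division

def string_current(cell_currents_a):
    """Series current is limited by the weakest cell in the chain."""
    return min(cell_currents_a)

print(cells_for_module(12.0))            # 8 cells for a 12 V module
print(string_current([3.0, 2.9, 3.1]))   # 2.9 A: weakest cell limits
print(string_current([3.0, 0.0, 3.1]))   # 0.0 A: one failure kills it
```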
Solar cell efficiency is degraded by another effect: the colors to which a cell responds are fixed in the design of the photovoltaic layers, but the color of sunlight changes with time and place. At sunset, the light is redder and yellower. This means that the blue cell cannot put out as much current. Since the same current flows in series through the whole stack, the red cell’s larger current cannot all be used; its excess turns into heat. The atmosphere alters the solar spectrum more than you might think. This is shown in Fig. 3.40. In space, the spectrum is almost exactly like that of a classical blackbody. In the visible part of the spectrum, about 30% of the intensity is absorbed by the atmosphere. In the infrared region, large absorption bands are caused by gases in the atmosphere. The spectrum is degraded further during the day as the sun sinks lower in the sky and its light passes through more of the atmosphere.
Fig. 3.40 The solar spectrum in space (yellow) and on the earth’s surface (red). The visible region is shown by the small spectrum at the bottom. Parts of the spectrum are heavily absorbed by water vapor, oxygen, and CO2 (http://en.wikipedia.org/wiki/Image:Solar_Spectrum.png)
Multijunction and crystalline silicon solar cells are so expensive that they are not suitable for solar farms, but they have two good applications. First and foremost, they are used where cost is not a prime concern: in space satellites. The ruggedness of silicon and the efficiency of multijunction cells are needed out there. The sunlight is stronger, and cooling has to be considered because there is no air. Missions to the moon and Mars will no doubt carry the most expensive solar cells made. On the earth, expensive solar cells can be used in concentrator PV systems. Since multijunction cells are so expensive, it is cheaper to make a large-area Fresnel lens to catch the light and focus it onto a small chip. The solar intensity can be increased as much as 500 times (“500 suns”). The solar cell will get very hot, but cooling on earth is not a problem. This idea has attracted commercial interest. The Palo Alto Research Center of Xerox Corp. has developed a molded glass sheet with bumps like bubble-wrap. Each bump contains two mirrors configured like a Cassegrain telescope to focus sunlight onto a small cell. The amount of PV material needed is reduced by at least 100 times. Making high-quality silicon is very energy-intensive, but some forms of it can be used for terrestrial solar cells. More on silicon is given in Box 3.5.
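The “500 suns” figure is easy to reproduce with assumed geometry: the concentration ratio is just lens area over cell area, and it multiplies the roughly 1000 W/m² of full sunlight. The lens and chip sizes below are illustrative, not from the text:

```python
# Back-of-envelope concentrator sketch: geometric concentration is
# collector area divided by cell area.

import math

ONE_SUN_W_M2 = 1000.0  # typical full-sun irradiance at the surface

def concentration(lens_diameter_m, cell_side_m):
    lens_area = math.pi * (lens_diameter_m / 2) ** 2  # circular lens
    cell_area = cell_side_m ** 2                      # square chip
    return lens_area / cell_area

# A 25-cm Fresnel lens focused onto a 1-cm-square chip:
c = concentration(0.25, 0.01)
print(f"~{c:.0f} suns")                     # close to the "500 suns"
print(f"{c * ONE_SUN_W_M2:.0f} W/m^2 on the cell")
```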
Box 3.5 The Story of Silicon
Oxygen and silicon are the most abundant elements in the earth’s crust, oxygen mostly in the form of water (H2O) and silicon in the form of rock (SiO2). These molecules are prevalent because they are very stable; it takes a lot of energy to break them up. The solar cell business got a head start because the semiconductor industry had already built up the infrastructure for producing pure silicon. Without that ready source of silicon, the expense of making a silicon solar cell would have been prohibitive.