An Indispensable Truth

Remaining Physics Problems

The ITER machine is an experiment large enough to require an international consortium. Its mission is to achieve a burning plasma, one in which the alpha particles produced by the D-T reaction can maintain the plasma's temperature without external heating. At this stage of construction, not all physics problems have been solved, though they may be solved by the time construction is finished. We hope that these problems will be solved in time for DEMO. However, the physics does not have to be completely understood for something to work. Books have been written on the physics of tennis, baseball, sailing, and even pizza. Sometimes, it is easier just to get on with it.

Fig. 9.26 A lower-hybrid wave launcher of the type designed for ITER but one-fourth the size [36]

Plasma Focus

Also called the dense plasma focus (DPF), this is one of the oldest devices invented to create fusion. Because of its simplicity, it is used in small laboratories all over the world for instructional research. A diagram is shown in Fig. 10.38. A plasma is formed by discharging a large capacitor between the center electrode and the outer cylinder. An ionization front, shown by the white curve, travels rapidly to the end at the right. There, the current flows between the electrodes in the crown-shaped plasma consisting of streamers. In the center of the crown is a dense Z-pinch, which can reach fusion conditions for a brief instant.

Fig. 10.38 Diagram of a dense plasma focus (http://www.plasma-universe.com)

Intense X-rays are generated, and with deuterium or DT fuel, neutrons are produced for 10-20 ns [39]. Both diagnostics and theory are difficult for the DPF, and it is not well understood. Nonetheless, some groups are proposing the DPF for p-B11 fusion. There is interesting physics to be studied in the DPF; but, as with all single-pulse machines, it is not suitable as an energy source.

Meshing with the Grid

Wind power rarely occurs where it is needed most. Conversely, you would not want to live where the wind is always fierce, like the west side of the Falkland Islands. New transmission lines are necessary, and this obstacle is preventing wind power from developing as fast as planned. In Germany alone, it is estimated that 2,700 km (1,700 miles) of extra-high-voltage lines will be needed by 2020 to carry an expected 48 GW of wind power. These lines run at up to 380 kV, compared with high-voltage lines at 110 kV, which are scary enough, and will cost over 3 billion euros.11 Traditionally, power plants are built near population centers, so the transmission lines are short. Distributing wind power will require new rights-of-way, some of them underground. These lines cost 7-10 times as much as standard lines.11 There will be political, legal, and social problems in addition to the large cost. Germany, and even all of Europe, is small compared with the USA. Transmission lines are an even bigger problem for wind power in the USA.

Fig. 3.17 Compressed-air energy storage scheme for wind power (Vestas Wind, No. 16, April 2009); components include the motor, compressor, compressed-air store, recuperator, high- and low-pressure turbines, natural-gas fuel input, and exhaust

Fig. 3.18 A gas turbine (http://www.powergeneration.siemens.com/press/press-pictures/)

Load distribution is another big problem. If the wind input to the power grid varies by as much as 10%, the grid can become unstable. However, if several wind sources are connected to the same grid, load variation can be avoided if the power can be switched in and out fast enough from each of the sources. This requires accurate forecasting of the wind speed and close collaboration among grid operators. The Nordic countries of Sweden, Norway, and Denmark are close enough to pool their resources for load leveling.18 They can exchange wind and hydro energy. For instance, when wind power is excessive in Denmark, it can sell the power to Norway. Norway can accommodate the power by slowing down its hydroelectric power, storing the energy in the reservoir above a dam. The hydro energy can be sold back to Denmark when the wind dies down.

Wind is so variable that it can never be a large fraction of the total grid power. Not only that, but it must be backed up by conventional fossil-fuel or nuclear plants. Estimates of the required backup vary from 90%11 to 100%.24 That is, for every megawatt of new wind power installed, about one megawatt's worth of new coal, oil, gas, or nuclear plants has to be built.

How Nuclear Reactors Work

The Cast of Characters

The atomic number of an element is the number of protons in the nucleus. Uranium, element 92, has atomic number 92. Fissionable elements all have atomic number 92 or higher.75 The mass number is the total number of protons and neutrons in the nucleus. So uranium 235 has 92 protons and 143 (= 235 − 92) neutrons. The atomic weight is a loosely used term that is essentially the mass number but differs from it by a fraction, because protons and neutrons do not weigh exactly the same, they are bound with different energies, and energy and mass are interchangeable, according to Einstein. The symbol for uranium 235 is 92U235, but we shall write it as U235 because the 92 is already specified by "U." Elements can have the same atomic number but different mass numbers; these are called isotopes. Here are a few isotopes of importance in fission:
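The bookkeeping in this paragraph is simple enough to check mechanically. A minimal sketch in Python (the isotope list and the numbers come from the text; the function name is ours):

```python
# Neutrons = mass number (protons + neutrons) minus atomic number (protons).
def neutrons(mass_number, atomic_number):
    return mass_number - atomic_number

# (symbol, atomic number Z, mass number A), as listed in the text
isotopes = [("U235", 92, 235), ("U238", 92, 238), ("Pu239", 94, 239)]
for symbol, z, a in isotopes:
    print(f"{symbol}: {z} protons, {neutrons(a, z)} neutrons")
# U235 comes out with 143 neutrons, as stated above.
```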

U238: The normal isotope of uranium in nature.

U235: The fissionable isotope of uranium, with an abundance of only 0.7% in nature.

Pu239: Plutonium 239 (element 94) is generated in a reactor and fissions easily.

U239: Uranium 239 decays76 in 23 min.

Np239: Neptunium 239 (element 93) decays in 2.4 days.

Cs137: Cesium 137 (element 55) decays in 30 years.

I131: Iodine 131 (element 53) decays in eight days.

The first group of three contains the isotopes we will be discussing. The next two are intermediate states in the transformation of uranium into plutonium in a reactor. The last two are the most dangerous reaction products when released into the air in an accident. The decay times here are half-lives. Isotopes never completely disappear: half of what is left goes away in each half-life. Note that only isotopes with odd mass numbers are fissionable.77 What is not given here is the tremendous amount of energy that nuclei can give. A single fuel pellet, the size of a AAA battery, can make as much electricity as 6 tons of coal.78
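The half-life arithmetic can be made concrete in a few lines; a sketch in Python, using the 30-year half-life of Cs137 from the list above:

```python
def fraction_remaining(years, half_life_years):
    # Half of what is left goes away in each half-life: (1/2)**(t / t_half)
    return 0.5 ** (years / half_life_years)

# Cesium 137, half-life 30 years
for years in (30, 60, 90, 300):
    left = fraction_remaining(years, 30)
    print(f"after {years:3d} years, {100 * left:.2g}% of the Cs137 remains")
```

After 300 years (ten half-lives) about a thousandth is still there, which is what "isotopes never completely disappear" means in practice.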

Plasma Heating and “Classical” Leak Rates

You are probably wondering how we can heat a plasma to 100 million degrees (10 keV). We can do that because a plasma is not a collisionless superconductor after all! Although much of the theory of instabilities is done with the approximation of collisionlessness, we now have to take into account collisions between electrons and ions, infrequent though they are. First of all, plasmas can be made only inside a vacuum chamber, because their heat would be snuffed out by air. Vacuum pumps create a high vacuum inside the torus. Then a gas such as hydrogen, deuterium, or helium is bled in up to a pressure that is only three parts in a million (3 × 10^-6) of atmospheric pressure. These atoms are then ionized into ions and electrons by applying an electric field, as we will soon show. Although the plasma is heated to millions of degrees, it is so tenuous that it does not take a lot of energy to heat the plasma particles to a million degrees (100 eV) or even 100 million degrees (10 keV). This is the reason a fluorescent tube is cool enough to touch even though the electrons in it are at 20,000°: the density of electrons inside is much, much lower than that of air.
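A rough estimate (ours, not from the text) shows why so little energy is needed: at three parts per million of atmospheric density, even a fusion-temperature plasma holds only a few hundred kilojoules per cubic meter.

```python
AIR_DENSITY = 2.5e25      # molecules per cubic meter at room conditions (approximate)
fill_fraction = 3e-6      # fill pressure relative to atmospheric, from the text
n = fill_fraction * AIR_DENSITY           # plasma particle density, per m^3

kT = 10e3 * 1.6e-19       # 10 keV converted to joules
# (3/2) n kT for the ions plus (3/2) n kT for the electrons = 3 n kT
energy_per_m3 = 3 * n * kT
print(f"density ~ {n:.1e} per m^3")
print(f"heat content at 10 keV ~ {energy_per_m3 / 1e3:.0f} kJ per cubic meter")
```

That is roughly the energy in a cup of hot coffee, spread over a cubic meter.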

Once we have the desired gas pressure in the torus, we can apply an electric field in the toroidal direction with a transformer (this will be explained later). There are always a few free electrons around due to cosmic rays, and these are accelerated by the E-field so that they strip the electrons off gas atoms that they crash into, freeing more electrons. These then ionize more atoms, and so on, until there is an avalanche, like a lightning strike, which ionizes enough atoms to form a plasma. This takes only a millisecond or so. The E-field then causes the electrons to accelerate in the toroidal direction, making a current that goes around the torus the long way. The ions move in the opposite direction, but they are heavy and move so slowly that we can assume that they stay put in this discussion. If the plasma were really collisionless, the electrons would "run away" and gain more and more energy while leaving the ions cold. However, there are collisions, and this is the mechanism that heats up the whole plasma.

Running an electric current through a wire heats it because the electrons in the wire collide with the ions, transferring to them the energy gained from the applied voltage. According to Ohm's law, the amount of heating is proportional to the wire's resistance and to the square of the electric current. In toasters, a high-resistance wire is used to create a lot of heat. High resistance is hard to get in a plasma because it is almost a superconductor. The number of ions that electrons collide with may be 10 orders of magnitude (10^10, or 10 billion times) smaller than in a solid wire. Nonetheless, heating according to Ohm's law ("ohmic heating") is effective because very large currents can be driven in a plasma, currents above 100,000 A (100 kA) and even many megaamperes (MA). This is the most convenient way to heat a plasma in a torus, but when the resistance gets really low at fusion temperatures, other methods are available.
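Ohmic heating is just P = I²R. In the sketch below, the plasma resistance is a made-up illustrative value (a few microohms is a plausible order for a hot plasma loop); only the currents come from the text.

```python
def ohmic_power(current_amps, resistance_ohms):
    # Heating power follows Ohm's law: P = I^2 * R
    return current_amps ** 2 * resistance_ohms

R_PLASMA = 2e-6   # ohms; an assumed value, for illustration only
for I in (1e5, 1e6, 5e6):   # 100 kA, 1 MA, 5 MA
    print(f"I = {I:9.0f} A -> P = {ohmic_power(I, R_PLASMA) / 1e6:7.2f} MW")
```

Even with microohm resistance, megampere currents deposit megawatts, which is why ohmic heating works at all.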

Calculating the resistance of a plasma is not easy because the collisions are not billiard-ball collisions. The transfer of energy between electrons and ions occurs through many glancing collisions as they pass by at a distance, pushing one another with their electric fields. This problem was first solved by Spitzer and Harm [5], and their formula for plasma resistivity (“Spitzer resistivity”) allows us to compute exactly how to raise a plasma’s temperature by ohmic heating.

This resistivity formula allows us to calculate something of even more interest; namely, the rate at which collisions can move plasma across magnetic field lines. Every time an electron collides with an ion, both their guiding centers shift more or less in the same direction, so both of them move across the field lines. The plasma then spreads out (diffuses) across the magnetic field the way an ink drop diffuses in a glass of water until the ink reaches the wall. This is a slow process, but nonetheless it limits how long a magnetic bottle can hold a plasma. There is, however, a big difference between ordinary diffusion and plasma diffusion in a magnetic field. In ordinary diffusion, collisions slow up the diffusion rate by making the ink molecules, for instance, undergo a random walk. The more the collisions, the slower the diffusion. A magnetically confined plasma, on the other hand, does not diffuse at all unless there are collisions. Without collisions, the particles would just stay on the same field line, as in Fig. 4.5. Collisions cause them to random-walk across the B-field, and the collision rate actually speeds up the diffusion. Since a hot plasma makes very few collisions, being almost a superconductor, this "classical" diffusion rate is very slow. It is called "classical" because it is the rate predicted by standard, well-established theory and applies to normal, "dumb" gases. Unfortunately, plasma can diffuse out rapidly by generating its own electric fields, and it then leaks out much faster than at the classical rate.

Fig. 5.11 "Classical" confinement time of a fusion plasma, plotted against magnetic field from 0 to 2 T (confinement times up to about 350 s)

Figure 5.11 shows the classical confinement time of a hot plasma as a function of magnetic field. We have assumed fusion-like electron and ion temperatures of 10 keV and a plasma diameter of 1 m, a large machine, but smaller than a full reactor. What is shown is the time for the plasma density to drop to about one-third of its initial value. This is similar to the "half-life" of a radioisotope used in medicine, a concept most people are familiar with. We see that at a field of 1 T (10,000 G), which we found before to be necessary to balance the plasma pressure, the time is about 90 seconds, a minute and a half. This is much longer than the Lawson criterion requires, which, we recall, is about 1 second. It was this prediction of very good confinement that gave early fusion researchers the optimistic view that controlling the fusion reaction would be a piece of cake. It did not happen, of course. Numerous unanticipated instabilities caused the confinement times to be thousands of times shorter than classical, and it is the understanding and control of these instabilities that has occupied the last five decades.
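Classical cross-field diffusion slows as 1/B², so the classical confinement time grows as B². Anchoring the curve to the roughly 90 s read off Fig. 5.11 at 1 T, we can sketch the rest (both the anchor value and the pure quadratic scaling are our simplification):

```python
def classical_confinement_time(B_tesla, tau_at_1T=90.0):
    # Classical diffusion scales as 1/B^2, so the confinement time scales as B^2.
    return tau_at_1T * B_tesla ** 2

for B in (0.5, 1.0, 1.5, 2.0):
    print(f"B = {B:.1f} T -> tau ~ {classical_confinement_time(B):5.1f} s")
```

At 2 T this gives 360 s, close to the top of the plotted curve.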

Notes

1. The data are from Bosch and Hale [1]. The vertical axis is actually reactivity in units of 10^16 reactions/cm3/s.

2. Such data were originally given by Post [2] and have been recomputed using more current data.

3. What is actually shown here is an equipotential of the electric field, which is the path followed by the guiding centers in an E × B drift. The short-circuiting occurs when the spacing becomes smaller than the ion Larmor radius, so that the ions can move across the field lines to go from the positive to the negative regions on either side of the equipotential. The curves are measured, not computed.

The Confinement Scaling Law

The triple product plotted in Fig. 8.20 contains the energy confinement time τE, which is how long each amount of energy used to heat the plasma stays in there before it has to be renewed. The plasma energy is lost through three main channels: radiation, mostly in the form of X-rays; the escape of ions to the wall; and the escape of electrons to the wall, each carrying heat with it. The first two, radiation and ion loss, follow theory and can be predicted, but electrons escape faster than can be explained. The energy loss by electrons can be measured, but it cannot be predicted. It would be impossible to design a new machine accurately without knowing what τE would be, but fortunately the more than 200 tokamaks that have been built were found to follow an empirical scaling law. This formula12 gives the value of τE in terms of the size and shape of the tokamak, the magnetic field, the plasma current, and other such factors. The result is shown in Fig. 8.21.

Fig. 8.21 Data from 13 tokamaks showing that the energy confinement time as measured follows an empirical scaling law12 (measured versus predicted confinement time, 0.001-1 s on both axes)

This empirical scaling law is the basis on which new tokamaks are designed. It cannot be derived theoretically, but it is followed in a massive database from a variety of tokamaks. This "law" is given in mathematical form in footnote 12. Most of the dependences are consistent with our understanding of the physics. For instance, τE increases with the square of the machine size. The strength of the toroidal field does not matter much because the size of the banana orbits depends on the poloidal field. The poloidal field indeed enters in the linear dependence on plasma current. The wonder is that only eight parameters are needed to make all tokamaks fall into line. As seen in Fig. 8.21, the data cover over a factor of 100 in τE. To design ITER, the scaling had to be extrapolated by another factor of 4.
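Footnote 12 is not reproduced in this excerpt, but the widely used IPB98(y,2) H-mode scaling has exactly this character, and we sketch it here on the assumption that it is representative of the formula meant. The parameter values below are illustrative ITER-like numbers, not official design figures.

```python
def tau_E(I_MA, B_T, n19, P_MW, R_m, kappa, epsilon, M_amu):
    # IPB98(y,2) empirical confinement scaling, in seconds.
    # I: plasma current (MA), B: toroidal field (T), n19: density in 10^19/m^3,
    # P: heating power (MW), R: major radius (m), kappa: elongation,
    # epsilon: inverse aspect ratio, M: ion mass (amu).
    return (0.0562 * I_MA**0.93 * B_T**0.15 * n19**0.41 * P_MW**-0.69
            * R_m**1.97 * kappa**0.78 * epsilon**0.58 * M_amu**0.19)

tau = tau_E(I_MA=15, B_T=5.3, n19=10, P_MW=87, R_m=6.2,
            kappa=1.7, epsilon=0.32, M_amu=2.5)
print(f"tau_E ~ {tau:.1f} s")   # a few seconds for ITER-like parameters
```

The near-quadratic R^1.97 and near-linear I^0.93 dependences match the size-squared and plasma-current statements in the text, and the weak B_T^0.15 matches the remark that the toroidal field matters little.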

Power Plant Designs

The ARIES program in the USA is the leading group in designing fusion reactors. Originally started by Robert W. Conn in the 1980s at the University of Wisconsin and the University of California (UC) Los Angeles, it is now headed by Farrokh Najmabadi at UC San Diego. Throughout the years, new ARIES designs have been made as new physics has been discovered. The designs are not only for tokamaks; stellarators and laser-fusion reactors have also been covered. The latest designs, ARIES-AT for advanced tokamaks and ARIES-ST for spherical tokamaks, inspired the FDF proposals described above. Practical considerations such as public acceptance, reliability as a power source, and economic competitiveness pervade the studies. The designs are very detailed. They optimize the physics parameters, such as the shape of the plasma and the neutron wall loading. They also optimize the engineering details, such as how to replace blankets and how to join conductors to make the joints more radiation resistant. As new physics and new technology became available, the reactors ARIES I, II, … to ARIES-RS (reversed shear) and ARIES-AT (advanced tokamak) evolved to become smaller and cheaper. This is shown in Fig. 9.35. We see that as fusion physics advanced from left to right in each group of bars, the size of the tokamak, the magnetic field, and the current-drive power could be decreased while the neutron production increased. This is due to the great increase in plasma beta that the designers thought would be possible. The recirculating power fraction is the power used to run the power plant; the rest can be sold. It dropped from 29 to 14%. The thermal efficiency in the latest design breaks the 40% Carnot-cycle barrier by the use of a Brayton cycle. Finally, we see that the COE is expected to be halved from 100 to 50 per kWh with advanced tokamaks.

Fig. 9.35 Evolution of the ARIES tokamak designs

ARIES-AT is shown in Fig. 9.36. Unlike existing tokamaks, this reactor design has space at the center for remote maintenance and replacement of parts. The philosophy in reactor design is to assume that the physics and technology advancements that are in sight will actually be developed and, on that basis, to optimize a reactor that will be acceptable to industry and the public. It is not known whether high-temperature superconductors will be available on a large scale, but they would simplify the reactor. The blankets will be of the DCLL variety, and it is predicted that the Pb-Li can reach 1,100°C without heating the SiC walls above 1,000°C. This high temperature is the key to the high thermal efficiency. For easier maintenance and better availability, the blankets are made in three layers, two of which will last the life of the reactor. Only the first layer, along with the divertor, has to be changed out every five years. Sectors are removed horizontally and transported by rail in a hot corridor to a hot cell for processing. Shutdowns are estimated to take four weeks.

Fig. 9.36 The ARIES-AT reactor design

Turbocharging and supercharging in automobiles are terms that are well known to the public. Airplane engines are turbocharged. Modern power plants use thermodynamic cycles that have higher efficiency than the classic Carnot cycle. The ARIES-AT reactor will use one of these, called a Brayton cycle. The hot helium from the tokamak blanket is passed through a heat exchanger to heat helium that goes to electricity-generating turbines. The two helium loops are isolated from each other because the tokamak helium can contain contaminants like tritium. The turbine also runs with cooler helium at a different flow rate. The Brayton cycle precompresses the helium three times before it goes into the helium turbines. The heat of the helium coming out of the turbines is recovered in coolers that cool the helium before it is compressed. It is this system that achieves the 59% thermal efficiency of the ARIES-AT design.

ARIES-AT will produce 1,755 MW of fusion power, 1,897 MW of thermal power, and 1,136 MW of electricity. The radioactive waste generated will be only 30 m3 per year or 1,270 m3 after 50 years. The plant will run for 40 of those years if availability is 80%. Ninety percent of this waste is of low-grade radioactivity; the rest needs to be stored for only 100 years. No provisions for public evacuation are necessary, and workers are not exposed to risks higher than in other power plants. The COE from ARIES-AT is compared with other sources in Fig. 9.37. We see that electricity from fusion is not expected to be extravagant.
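The quoted numbers are easy to cross-check (we assume the thermal efficiency is simply electric output divided by thermal output):

```python
fusion_MW, thermal_MW, electric_MW = 1755, 1897, 1136   # ARIES-AT, from the text

efficiency = electric_MW / thermal_MW
print(f"implied thermal efficiency: {100 * efficiency:.1f}%")  # near the 59% quoted earlier

operating_years = 50 * 0.80   # 50-year plant life at 80% availability
print(f"equivalent operating years: {operating_years:.0f}")    # the 40 years in the text
```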

Europeans have also made reactor models in their Power Plant Conceptual Studies (PPCS) [26]. Figure 9.38 is a diagram of the tokamak in those designs. As with the ARIES studies, Models A, B, C, and D in PPCS (Fig. 9.39) trace the evolution of the design with advances in fusion physics and technology, with Model D using the most speculative assumptions. All these models produce about 1.5 GW of electricity, but they are smaller and use less power with gains in knowledge. The recirculating fraction and thermal efficiency of Model D match those of ARIES-AT. Safety and environmental issues were carefully considered. The cost estimates are given in Fig. 9.40, also in US cents per kWh. The difference between the wholesale price of electricity and the price available to consumers is clearly shown. It is seen that fusion compares favorably with the most economical sources, wind and hydro.

Fig. 9.37 Estimated year-2020 cost of electricity in US cents per kilowatt-hour from natural gas, coal, nuclear, wind (intermittent), and fusion (ARIES-AT) [graph adapted from [25], but original data are from the Snowmass Energy Working Group and the US Energy Information Agency (yellow ellipses)]. The red range is the cost if a $100/ton carbon tax is imposed. The fusion range is for different size reactors; larger ones have lower cost

Fig. 9.38 Drawing of the tokamak in the Power Plant Conceptual Studies in Europe [26]. Labeled components include the TF coils, coolant manifolds, 176 blanket modules (5-6-year lifetime), divertor plates (2-year lifetime), eight upper ports and eight central ports (for modules and coolant), and the vacuum vessel

Fig. 9.39 Parameters of the PPCS power plant Models A-D: major radius R (m), fusion power, bootstrap fraction, wall load (MW/m2), plasma current (MA), recirculating power fraction, and thermal efficiency

Bubble Fusion

Sonoluminescence is a phenomenon in which megahertz sound waves in a liquid can cause a bubble to collapse into a very small dot, creating a high temperature there. Using deuterated acetone as the liquid, some researchers reported detecting fusion neutrons created by the collapsing bubble. However, experts on sonoluminescence, including Seth Putterman of UCLA, were not able to reproduce these results and have categorically stated that this is not a way to produce fusion. It appears that this is an even more extreme farce than cold fusion.

Muon Fusion

This is the original idea on cold fusion, having been disclosed by Luis Alvarez in his Nobel Prize speech in 1968 [49]. Muons are fundamental particles like electrons but 207 times heavier. They are produced in accelerators and live for 2 μs (an eternity!)

before decaying. As you know, elementary particles and photons have a dual nature, sometimes behaving like particles and sometimes like waves. As waves, they have a wavelength, called the de Broglie wavelength, which is inversely proportional to their mass. Being some 200 times heavier, muons have wavelengths 200 times shorter. A negative muon can take the place of an electron in an atom, and the "cloud" of negative charge is then 200 times smaller, bringing the nuclei of molecules closer together. The muon-fusion process for DT molecules is shown in Fig. 10.55.
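The 200-fold shrinkage follows directly from the Bohr radius formula a = ħ/(mcα), which is inversely proportional to the orbiting particle's mass; a quick check with standard constants:

```python
hbar = 1.055e-34          # J*s
m_electron = 9.109e-31    # kg
c = 2.998e8               # m/s
alpha = 1 / 137.036       # fine-structure constant

def bohr_radius(mass_kg):
    # a = hbar / (m * c * alpha); inversely proportional to the orbiting mass
    return hbar / (mass_kg * c * alpha)

a_e = bohr_radius(m_electron)          # ordinary (electronic) atom
a_mu = bohr_radius(207 * m_electron)   # muonic atom: the muon is 207x heavier
print(f"ordinary atom: {a_e * 1e12:.1f} pm")
print(f"muonic atom:   {a_mu * 1e12:.3f} pm (207 times smaller)")
```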

In the first line of that figure, normal D and T atoms with their large electron clouds can combine into a DT molecule, just as two H atoms can form H2. In the second line, a μ-meson (muon) replaces the electron in the tritium atom, and the resulting muonic tritium atom has a smaller size. Next, a deuterium nucleus joins the triton inside the muon cloud, forming muonic DT with the two nuclei close together. Normally, the D and the T repel each other with their positive charges and cannot fuse into helium at room temperature. However, in quantum mechanics, particles can tunnel through the Coulomb barrier if it is thin enough. In a muonic DT molecule, this can happen very fast, and the entire process can happen several hundred times during the 2-μs lifetime of the muon. In the last line of Fig. 10.55, DT fusion has occurred, creating the usual products of a neutron and an alpha particle. What the muon does then is essential. If it flies off, it can catalyze another fusion again and again. However, if it "sticks" to the alpha particle, it is carried off and is lost. The sticking fraction is between 0.4 and 0.8%, and this limits the number of reactions that one expensive muon can catalyze.

Fig. 10.55 The muon-catalyzed DT fusion process

Experiments are being done in accelerator laboratories like RIKEN-RAL4 in England and TRIUMF in Vancouver, Canada. About 120 DT fusions per muon have been observed [28]. At 17.6 MeV per event, this amounts to over 2 GeV of energy. However, it takes 5 GeV to make each muon. There are ways to improve this ratio: by using polarized deuterons, by working at high temperatures, or by making cheaper accelerators. At this stage, the physics of muon fusion is still in its infancy.
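The energy bookkeeping in this paragraph, plus the sticking-fraction limit from the previous one, takes only a few lines (all input numbers are from the text):

```python
fusions_per_muon = 120       # observed [28]
mev_per_fusion = 17.6        # DT fusion yield, MeV
muon_cost_mev = 5000         # ~5 GeV to produce one muon

yield_mev = fusions_per_muon * mev_per_fusion
print(f"energy out per muon: {yield_mev / 1000:.2f} GeV "
      f"vs {muon_cost_mev / 1000:.0f} GeV to make the muon")

# The sticking fraction caps the chain: expected catalyses ~ 1 / (sticking fraction)
for sticking in (0.004, 0.008):       # 0.4% to 0.8%
    print(f"sticking {sticking:.1%} -> at most ~{1 / sticking:.0f} fusions per muon")
```

The sticking limit of roughly 125-250 catalyses per muon shows why the observed 120 fusions cannot be improved indefinitely without reducing sticking or cheapening the muons.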

Silicon Solar Cells

By far the most common type of solar cell because of their long history, silicon solar cells are fast being overtaken by thin-film cells, which are much less complex and costly.

Crystalline silicon is expensive and takes a lot of energy to make. It also absorbs only part of the solar spectrum, and does so weakly at that. Only those photons that have more energy than silicon's bandgap can be absorbed, so the red and infrared parts of sunlight are wasted. That energy just heats up the solar cell, which is not good. The blue part of the solar spectrum is also partly wasted, for the following reason. Each photon can release only one electron regardless of its energy, as long as that energy exceeds the bandgap. So a very energetic photon at the blue end of the spectrum uses only part of its energy to create electric current, and the rest of the energy again is lost as heat. To capture more colors of sunlight, cells made with other materials with different bandgaps are used in the basic cell instead of silicon. These other semiconducting materials are called III-V compounds, and they are explained in Box 3.4.

Box 3.4 Doped and III-V Semiconductors

The way semiconductors can be manipulated is best understood by looking at the part of the periodic table near silicon, as shown in Fig. 3.34. The Roman numerals at the top of each column stand for the number of electrons in the outer shell of the atom. The different rows have more inner shells, which are not active. The small number in each cell is the atomic number of the element. Silicon (Si) and germanium (Ge) are the most common semiconductors and are in column IV, each with four active electrons. They share these with their four closest neighbors in what are called covalent bonds. These are indicated by the double lines in Fig. 3.35. These bonds are so strong that the atoms are held in a rigid lattice, called a crystal. The actual lattices are three-dimensional and not as simple as in the drawing. The crystal is an insulator until a photon makes an electron-hole pair by knocking an electron into the conduction band, as we saw in Fig. 3.32.

     III       IV        V
     5 B       6 C       7 N
     13 Al     14 Si     15 P
     31 Ga     32 Ge     33 As
     49 In     50 Sn     51 Sb

Fig. 3.34 The periodic table near silicon (columns II and VI, containing 48 Cd and 52 Te, flank these columns in the original figure; the II-VI pair is discussed below)

Fig. 3.35 A silicon lattice doped with (a) boron and (b) phosphorus

Box 3.4 (continued)

However, there is another way to make Si or Ge conduct. We can replace one of the silicon atoms in Fig. 3.35a with an atom from column III, for instance, boron. We would then have a "hole." That's because boron (B) has only three active electrons and leaves a place in a covalent bond where an electron can go. Since holes can move around and carry charge as if they were positive electrons, this "doped" semiconductor can conduct. We can also dope Si with an atom from column V, such as phosphorus (P), as shown in Fig. 3.35b. Since phosphorus has five active electrons, it has an electron left over after forming covalent bonds with its neighbors. This is a free electron, which can carry current. Note that the P nucleus has an extra charge of +1 when one electron is removed, so the overall balance of + and − charges is still maintained. The conductivity can be controlled by the number of dopant atoms we add. In any case, only a few parts per million are sufficient to make a doped semiconductor a good enough conductor to interface with metal wires. Any element in column III, boron (B), aluminum (Al), gallium (Ga), or indium (In), can be used to make a p-type semiconductor (those with holes). Any element in column V, nitrogen (N), phosphorus (P), arsenic (As), or antimony (Sb), can be used to make an n-type semiconductor. When the doping level is high, these are called p+ and n+ semiconductors.

Now we can do away with silicon! We can make compounds using only elements from columns III and V, the III-V compounds. Say we mix gallium and arsenic in equal parts in gallium arsenide (GaAs). The extra electrons in As can fill the extra holes in Ga, and we can still have a lattice held by covalent bonds. We can even mix three or more III-V elements, for instance, GaInP2, which has one part Ga and one part In from III and two parts P from V. There are just enough electrons to balance the holes. This freedom to mix any of the III elements with any of the V elements is crucial in multijunction solar cells. First, each compound has a different bandgap, so layers can be used to capture a wide range of wavelengths in the solar spectrum. Second, there is lattice-matching. The lattice spacing is different in different compounds. Current cannot flow smoothly from one crystal to another unless the spacings match up. Fortunately, there is so much freedom in forming III-V compounds that multijunction cells with up to five compounds with different bandgaps have been matched. Figure 3.36 shows how the three layers of a triple-junction cell cover different parts of the solar spectrum.

At the bottom of Fig. 3.34, we have shown a II-VI compound, cadmium telluride (CdTe). Each pair of Cd and Te atoms contributes two electrons and holes. This particular II-VI material has been found to be very efficient in single-layer solar cells. It is one of the main types of semiconductors used in the rapid expansion of the thin-film photovoltaic industry.

Fig. 3.36 The parts of the solar spectrum covered by each subcell of a triple-junction solar cell (http://www.amonix.com/technology/index.html)

By adjusting the compositions of these III-V compounds, their bandgaps can be varied in such a way as to cover different parts of the solar spectrum. This is illus­trated in Fig. 3.37. The spectrum there will be explained in Fig. 3.40. The different cells are then stacked on top of one another, each contributing to the generated electric current, which passes through all of them. There are many layers in such a “multijunction” cell. The layers of a simple two-junction cell are shown in Fig. 3.38. The top cell has an active layer labeled n-GaInP2 and is sandwiched

image127

Fig. 3.37 Top: the solar spectrum plotted against photon energy in eV. Long (infrared) wavelengths are on the left, and short (ultraviolet) wavelengths are on the right. The visible part is shown in the middle. Bottom: bandgaps of various semiconductors plotted on the same eV scale. The bandgaps of Ge, GaAs, and GaInP2 are fixed at the positions marked. In InGaN, half the atoms are N, and the other half In and Ga. The bandgap of InGaN, given by the data points, varies with the percentage of Ga in the InGa part. As illustrated for the marked point, the part of the spectrum on the blue side of its bandgap is captured, and the part on the red side is lost (adapted from http://emat-solar. lbl. gov)

between the current-collecting buffer layers labeled n-AlInP2 and p+GaAs. This is the basic cell structure shown in Fig. 3.33. The bottom cell has an active element labeled n-GaAs surrounded by buffer layers. Connecting the two cells is a two-layer tunnel diode, which ensures that all the currents flow in the same direction. Stacks of up to five cells have been successfully made,38 yielding efficiencies above 40%, compared with 12-19% for single-junction silicon cells. Each cell in a stack has three layers plus the connecting tunnel diode. The layers are not all equally thick as drawn in the diagram, and the entire stack can be less than 0.1 mm thick! Pure crystalline silicon needs at least 0.075 mm of thickness to absorb the light and at least 0.14 mm to keep from cracking [7], but this does not apply to the other materials.

The semiconductor layers are the main part of a solar cell, but they are thin compared with the rest of the structure. A triple-junction cell is shown in Fig. 3.39. The support layer could be a stainless steel plate on the bottom or a glass sheet on the top. The top glass can also be grooved to catch light coming at different angles. At the bottom is a mirror to make the light pass through the cell a second time.

Fig. 3.38 The parts of a two-cell stack using gallium-indium-phosphide (GaInP2) and gallium arsenide (GaAs). From top to bottom: antireflection coating; AR and conductive grid coating; power-collection grid; top cell (n-AlInP2, n-GaInP2, p+-GaAs); tunnel diode (p+-GaAs, n+-GaAs); bottom cell (n-AlGaAs, n-GaAs, p-GaAs, p+-GaAs); substrate (http://www.vacengmat.com/solar_cell_diagrams.html)

At the top is an antireflection coating such as we have on camera or eyeglass lenses. The current is collected by a grid of “wires,” formed by a thin film of conducting material. The top grid has to pass the sunlight, so it is made of a transparent conductor like indium-tin oxide, which is used in computer and TV screens for the same purpose. The photovoltaic layers have to be in a specific order. At the top is the material with the largest bandgap, which can capture only the blue light, whose photons have the highest energy. The lower-energy photons are not absorbed, so they pass through to the next layer, labeled “green” here. This has a lower bandgap and captures less energetic photons. Last comes the “red” layer, which has the smallest bandgap and can capture the low-energy photons (the longest wavelengths) that have passed through the other layers unmolested. If the red layer were on top, it would use up all the photons that the other layers could have captured, but it would use them inefficiently, since each cell generates only its bandgap voltage no matter how energetic the photon.
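The ordering rule can be sketched numerically. This is an illustrative sketch, not from the text: the bandgaps used below (roughly 1.85 eV for GaInP2, 1.42 eV for GaAs, and 0.67 eV for Ge) are typical published values that shift with composition, and a photon of wavelength λ in nm carries about 1240/λ eV.

```python
# Sketch: which subcell of a triple-junction stack absorbs a photon.
# The bandgaps below are illustrative values for a GaInP2/GaAs/Ge stack
# (they vary with exact composition), not figures from the text.

HC_EV_NM = 1240.0  # photon energy (eV) times wavelength (nm), E = hc/lambda

# Subcells from the top of the stack down, largest bandgap first.
STACK = [("GaInP2", 1.85), ("GaAs", 1.42), ("Ge", 0.67)]

def absorbing_layer(wavelength_nm):
    """Return the first layer whose bandgap the photon energy exceeds."""
    energy_ev = HC_EV_NM / wavelength_nm
    for name, gap_ev in STACK:
        if energy_ev >= gap_ev:
            return name
    return None  # photon passes through the whole stack unabsorbed

for wl in (450, 700, 1600, 2200):
    print(wl, "nm ->", absorbing_layer(wl))
```

A 700-nm red photon, for example, slips through the GaInP2 layer and is caught by the GaAs cell underneath, while a 2,200-nm infrared photon escapes the whole stack.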

Fig. 3.39 A typical multijunction solar cell assembly, with a transparent conductive oxide film on top, a back reflector film layer, and a flexible stainless steel substrate. All the layers in the active part of this cell are less than 1 μm (1/1,000th of a millimeter) thick (http://www.solarnavigator.net/thin_film_solar_cells.htm)

The voltage generated by each cell is only about 1.5 V, so cells are connected into chains that add up the voltage in series to form a module. Modules giving a voltage of, say, 12 V are then grouped into arrays, and thousands of arrays make a solar farm. Modules and arrays generally need to be held in a frame, adding to the cost, and the frames have to be supported off the ground. There is a problem with the series arrangement of the cells. If one cell fails, the output of the entire chain is lost, since the current has to go through all the cells in a chain. Similarly, if one of the layers in a cell fails, there can be no current out of that cell. Fortunately, the failure rate of commercial units is known and is not bad. Solar cells can still produce 80% of their power after 25 years or more, at least for single-junction cells.
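The series arithmetic can be sketched as follows. The numbers (1.5 V per cell, eight cells, 3 A) are illustrative assumptions, not from the text; the point is that voltages add while the current is set by the weakest cell.

```python
# Sketch of series-string behavior: voltages add along the chain, but
# the same current must pass through every cell, so the weakest cell
# sets the output. All numbers are illustrative.

def string_output(cell_currents_a, volts_per_cell=1.5):
    """(current, voltage) delivered by a series chain of cells."""
    return min(cell_currents_a), volts_per_cell * len(cell_currents_a)

healthy = [3.0] * 8           # eight 1.5-V cells -> a 12-V module
one_dead = [3.0] * 7 + [0.0]  # one failed cell blocks the whole chain

print(string_output(healthy))   # 3.0 A at 12.0 V
print(string_output(one_dead))  # 0.0 A -- the module is dead
```

The same `min()` picture explains the spectral-mismatch loss described below: if the blue cell's current drops at sunset, the extra current the red cell could supply is simply thrown away as heat.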

Solar cell efficiency is degraded by another effect: the colors to which a cell responds are fixed by the design of the photovoltaic layers, but the color of sunlight changes with time and place. At sunset, the light is redder and yellower. This means that the blue cell cannot put out as much current. Since the same current flows in series through the whole stack, the red cell’s larger current cannot all be used; the excess turns into heat. The atmosphere alters the solar spectrum more than you might think. This is shown in Fig. 3.40. In space, the spectrum is almost exactly that of a classical blackbody. In the visible part of the spectrum, about 30% of the intensity is absorbed by the atmosphere, and in the infrared region, gases in the atmosphere carve out large absorption bands. The spectrum is degraded further as the sun goes lower in the sky.

Fig. 3.40 The solar spectrum in space (yellow) and on the earth’s surface (red), plotted against wavelength from 250 to 2,500 nm. The yellow curve (sunlight at the top of the atmosphere) closely follows a 5,250°C blackbody spectrum; the red curve is the radiation at sea level. The visible region is shown by the small spectrum at the bottom. Parts of the spectrum are heavily absorbed by water vapor, oxygen, and CO2 (http://en.wikipedia.org/wiki/Image:Solar_Spectrum.png)

Multijunction and crystalline silicon solar cells are so expensive that they are not suitable for solar farms, but they have two good applications. First and foremost, they are used where cost is not a prime concern: in space satellites. The ruggedness of silicon and the efficiency of multijunction cells are needed out there. The sunlight is stronger, and cooling has to be considered because there is no air. Missions to the moon and Mars will no doubt have the most expensive solar cells made. On the earth, expensive solar cells can be used in concentrator PV systems. Since multijunction cells are so expensive, it is cheaper to make large-area Fresnel lenses to catch the light and focus it onto a small chip. The solar intensity can be increased as much as 500 times (“500 suns”). The solar cell will be very hot, but cooling on earth is not a problem. This idea has attracted commercial interest. The Palo Alto Research Center of Xerox Corp. has developed a molded glass sheet with bumps like bubble-wrap. Each bump contains two mirrors configured like a Cassegrain telescope to focus sunlight onto a small cell. The amount of PV material needed is reduced by at least 100 times. Making high-quality silicon is very energy-intensive, but some forms of it can be used for terrestrial solar cells. More on silicon is given in Box 3.5.
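The “500 suns” figure is just the ratio of collector area to cell area. The sketch below uses lens and cell dimensions that are illustrative assumptions, not figures from the text.

```python
# Sketch of concentrator-PV arithmetic: geometric concentration is
# collector area over cell area. The lens and cell sizes here are
# illustrative assumptions.

ONE_SUN_W_M2 = 1000.0  # rough intensity of direct sunlight at the surface

def concentration(collector_area_m2, cell_area_m2):
    return collector_area_m2 / cell_area_m2

lens = 0.20 * 0.20    # a 20 cm x 20 cm Fresnel lens
cell = 0.009 * 0.009  # a 9 mm x 9 mm chip
c = concentration(lens, cell)
print(round(c), "suns")  # close to the "500 suns" in the text
print(c * ONE_SUN_W_M2, "W/m^2 on the cell")
```

The same small chip thus intercepts the light of a collector hundreds of times its area, which is why the cell runs hot and needs active cooling.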

Box 3.5 The Story of Silicon

Oxygen and silicon are the most abundant elements in the earth’s crust, oxygen mostly in the form of water (H2O) and silicon in the form of rock (SiO2). These molecules are prevalent because they are very stable; it takes a lot of energy to break them up. The solar cell business got a head start because the semiconductor industry had already built up the infrastructure for producing pure silicon. Without that source of silicon, the expense of making a silicon solar cell would have been prohibitive.


The integrated circuits that make computers, cell phones, iPods, and other electronic devices work are made of 99.9999% pure silicon. These chips are made of single-crystal silicon. First, pure silicon is produced from quartz. It is then melted in a crucible by heating to above 1,400°C (2,600°F). This requires a lot of energy: think of the molten rock flowing from the Kilauea caldera in Hawaii into the sea. A seed crystal is then dipped into the liquid and slowly drawn upwards, carrying some silicon with it. As the silicon solidifies, it takes on the crystalline structure of the seed, and a large cylindrical ingot is formed. The entire ingot, 300 mm (12 in.) in diameter, is a single crystal. It is then sliced into wafers about 0.2 mm thick. The “sawdust,” or kerf, takes up 20% of the silicon, and it cannot be re-used because of contamination by the cutting tool. To make computer chips, a wafer is processed to make hundreds of chips at once, each containing millions of transistors. The wafer is then sliced into the individual chips, each no larger than 1 cm2. The cost of the silicon wafer is minor, since the chips on it are worth a million dollars. For solar cells, however, the large areas required mean that the silicon is the main expense, even when off-grade material rejected by the semiconductor industry is used. Silicon shortages cause large fluctuations in price. Note that to form solar cells, the silicon has to be re-melted, using more energy.
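The kerf figure can be turned into a small worked example. A caveat: the 0.05 mm lost per cut is inferred from the text’s numbers (0.2-mm wafers losing 20% of the silicon implies 0.2 / (0.2 + 0.05) = 80% yield); the text does not state the kerf width itself.

```python
# Worked example of kerf loss when slicing an ingot into wafers.
# KERF_MM is inferred, not stated in the text.

WAFER_MM = 0.2   # wafer thickness, from the text
KERF_MM = 0.05   # inferred saw-cut loss per wafer

def wafers_from_ingot(ingot_length_mm):
    """How many wafers a length of ingot yields, kerf included."""
    return int(ingot_length_mm // (WAFER_MM + KERF_MM))

def silicon_yield():
    """Fraction of the silicon that ends up in wafers."""
    return WAFER_MM / (WAFER_MM + KERF_MM)

print(wafers_from_ingot(250))  # wafers from a 250-mm ingot section
print(silicon_yield())         # 0.8 -> 20% lost as kerf
```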

Single-crystal solar cells are the most efficient because electrons and holes flow easily along the lattice. However, silicon made of small crystals is cheaper and easier to make: the silicon can be poured into a crucible without the slow drawing-out process. Depending on the crystal size, this is called multicrystalline, polycrystalline, or microcrystalline silicon. In these materials, electron flow is interrupted when electrons bump into grain boundaries, causing a higher resistivity and hence loss of energy into heat. Most silicon solar cells are made of polycrystalline silicon.

There is also amorphous silicon, which is really a thin-film material. The silicon atoms are not in a lattice at all but are randomly distributed. The production process is entirely different. A glass substrate is exposed to silane (SiH4) and oxygen (O2) in a plasma discharge, where the hydrogen latches on to the oxygen to form water, and the silicon is deposited onto the glass. The electrical conductivity of amorphous silicon is very poor, and it has to be improved by adding hydrogen in a subsequent hydrogenation process. The result is called a-Si:H. Its power output decreases by about 28% when first used, so it has to be “light-soaked” for about 1,000 h before it stabilizes. It is also less efficient in the winter, when temperatures are lower. The efficiency is only about 6%, but amorphous silicon is much cheaper than any crystalline form and can be used in large installations. Crystalline silicon, on the other hand, is suitable for space applications but not for large solar farms.


Other Renewables

Hydroelectricity

Hydroelectric power is the simplest, most direct way to produce electricity. A dam is built, and water is released to turn large generators. No heat, no complicated equipment, no fuel transport, and no pollution. The power is available in controllable amounts at any time. This is an ideal situation that no other source can emulate. Of course, it is not available everywhere. Worldwide, hydro accounted for only 2.2% of total energy consumption in 2006, compared with 6.2% for nuclear.85 Some countries, such as Bhutan, depend entirely on hydroelectricity, and Bhutan actually exports part of it. Iceland uses hydro for 73% of its energy. The role of hydro in various parts of the world is shown by the blue bars in Fig. 3.62. In the USA, hydro accounts for 7% of electricity generated and 36% of all electricity from renewable sources.86 Renewables provided 7% of all energy consumed in the USA in 2007. China has the most hydro power. The Three Gorges Dam, completed in 2008, has a generating capacity of 26.7 GW, comparable to the output of 25 coal plants.
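The comparison with coal plants is simple arithmetic, sketched below. The roughly 1 GW nameplate capacity assumed for a single large coal plant is our assumption, not a figure from the text.

```python
# Rough check of "26.7 GW is comparable to 25 coal plants".
# COAL_PLANT_GW is an assumed typical capacity, not from the text.

THREE_GORGES_GW = 26.7
COAL_PLANT_GW = 1.0  # assumed capacity of one large coal plant

equivalents = THREE_GORGES_GW / COAL_PLANT_GW
print(round(equivalents), "coal-plant equivalents")
```

At about 1 GW per plant the ratio comes out near 27, consistent with the text’s “25 coal plants.”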

Construction of dams can change the landscape and displace wildlife, especially fish, but this is a small price to pay for free energy. Dam breaks pose a danger to downstream residents. Climate change can affect the distribution of rain and snow, causing some rivers to increase and others to decrease their flow rates.


Fig. 3.62 Fuel sources in regions of the world by percent. From the bottom up, the sources are oil, gas, nuclear, hydro, and coal (BP Statistical Review of World Energy 2008)

However, these drawbacks are minor, and hydroelectricity will continue to be an important part of our energy mix even if most of the best hydro sources are already being used.