Category Archives: An Indispensable Truth

Carbon Sequestration

To continue using coal, we have to capture the emitted CO2 and bury it. This is called carbon capture and storage (CCS), but we will continue to avoid acronyms when possible. There are three steps: first, CO2 has to be separated from the flue gas out of a coal burner; second, the CO2 has to be transported to a burial site; and third, it has to be injected into a geological formation that can hold it forever. The last part is of course highly debatable; but it is the first part, capture, that is the most expensive. There are three basic ways to do this.9 In the first method, the flue gas is mixed with a liquid solvent called MEA (monoethanolamine, a chemical name not always spelled the same way), a corrosive liquid found in household products such as paint strippers and all-purpose cleaners; the CO2 dissolves into it. When the MEA is heated to 150°C, pure CO2 is released, and the MEA is cleaned up with steam to be reused. This method can be retrofitted to existing plants, but there is a huge penalty. The heating and steam production take up to 30% of all the energy produced by the power plant! The cost of this step can be as much as four times higher than that of the other two steps. At the moment, other absorbers are being tried to lower this cost.10
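To see what a 30% parasitic load means in practice, here is a minimal numerical sketch. The 30% figure is from the text; the 1,000-MW gross output is an assumed round number for illustration:

```python
# Net output of a plant fitted with MEA scrubbing.
# The 1,000 MW gross output is an assumed example, not a figure from the text.
gross_mw = 1000.0
capture_fraction = 0.30      # energy spent heating the solvent and making steam

net_mw = gross_mw * (1.0 - capture_fraction)
print(f"Net salable output: {net_mw:.0f} MW")              # 700 MW
print(f"Cost per kWh rises by {gross_mw/net_mw - 1:.0%}")  # ~43%
```

Selling 700 MW from a plant built and fueled to make 1,000 MW raises the cost of every kilowatt-hour by roughly 43%, before the transport and burial steps are even counted.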

In the second method, the composition of the flue gas is controlled by burning the coal in a special way. When coal is burned in air, which is roughly 80% nitrogen and 20% oxygen, the flue gas is mostly nitrogen, which dilutes the CO2; moreover, some of the nitrogen ends up as nitrogen oxides, among them N2O, a greenhouse gas. A better way is to remove the nitrogen from air at the outset and burn the coal in pure oxygen. What comes out is water and pure CO2, ready to be sequestered. However, separating the nitrogen from the air to get pure oxygen requires 28% of the power plant's energy, still a steep penalty. This method is being tested by Vattenfall, Sweden's energy company, in the town of Schwarze Pumpe in Germany. The experiment is fairly large (30 MW) but not of electric utility size. A novel feature was added to this "oxyfuel" process: the flue gas is recirculated into the burner with the oxygen. This keeps the temperature low enough to prevent melting the boiler walls, as would happen with pure oxygen. In effect, the CO2 in the flue gas replaces the nitrogen in air, diluting the oxygen without using nitrogen.

The third method is coal gasification: the coal is heated to a high temperature with steam and oxygen, turning the coal into a gas, called syngas, which is a mixture of carbon monoxide (CO) and hydrogen (H2), plus some nasty contaminants. After the syngas is purified, it is the fuel for generating electricity in an "integrated gasification combined cycle," or IGCC, an acronym that seems unavoidable in this case. Coal gasification has been tested in fairly large power plants, but the IGCC sounds like a Rube Goldberg contraption that has yet to be verified on a large scale. An air separation unit to get pure oxygen is still required, both for syngas generation and for burning the syngas later. After the pollutants are taken out, the gas goes into a chamber where the CO combines with steam (H2O) to form CO2 and H2. Pure hydrogen is separated out through a membrane, giving carbon-free fuel. The rest of the gas, containing CO2, CO, and H2, is burned with oxygen in successive turbines, a gas turbine and then a steam turbine, to generate electricity. The pure hydrogen separated by the membrane can be sold or burned to generate more electricity cleanly. The IGCC can be 45% efficient, compared with 35% in ordinary coal plants limited by the Carnot theorem that we described earlier. Meanwhile, less CO2 is generated, and it comes out in pure form to be stored. This separation system adds only 25% to the cost of electricity. An even more efficient method called chemical looping is under development.9 New chemical structures for capturing CO2 are described in Chap. 3 under Hydrogen Cars.
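The efficiency gain translates directly into emissions: CO2 released per kilowatt-hour scales inversely with plant efficiency. A small sketch using the 35% and 45% figures quoted above (everything else is unit conversion):

```python
# CO2 per kWh scales inversely with efficiency; 35% and 45% are from the text.
eta_ordinary = 0.35
eta_igcc = 0.45

def coal_heat_per_kwh(eta):
    """MJ of coal heat needed per kWh of electricity (1 kWh = 3.6 MJ)."""
    return 3.6 / eta

print(f"Ordinary plant: {coal_heat_per_kwh(eta_ordinary):.1f} MJ of coal heat per kWh")
print(f"IGCC plant:     {coal_heat_per_kwh(eta_igcc):.1f} MJ of coal heat per kWh")
print(f"CO2 per kWh reduced by {1 - eta_ordinary/eta_igcc:.0%}")   # ~22%
```

So even before any CO2 is captured, an IGCC plant burns about a fifth less coal, and hence emits about a fifth less CO2, for the same electricity.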

In 2003, the FutureGen Alliance proposed a plan to test IGCC on a large scale by building a $1 billion plant in Illinois, to be finished in 2013. That project was canceled by President G. W. Bush in 2008 because the projected cost had almost doubled. Unbelievably, this figure was an accounting error; the actual increase was only to $1.5 billion. Under President Obama, Energy Secretary Steve Chu pledged $1.1 billion of economic stimulus money to restart the project, with the other funds to be raised by FutureGen. There is $2.4 billion of stimulus money slated for CCS research. This is to be compared with the $3 billion spent by the Department of Energy for this purpose since 2001.

Now that we have separated out the CO2, the problem is where to put it. There are three main places: old wells, underground formations, and undersea formations. The oil and gas that we mine have been trapped in the earth for millennia, so it is plausible that porous rock or underground caverns can hold liquids and gases stably. To carry CO2 to these sites, the gas has to be highly compressed to a small volume and transported by truck or rail. This step entails a certain amount of danger, should an accident cause the container to burst and release tons of CO2 into the atmosphere. The gas is then injected under pressure into depleted oil or gas wells, where it could stay for millennia if it were not for the leaks made in drilling the wells in the first place. These old wells have to be sealed tightly. The trouble is that carbon dioxide and water combine to form carbonic acid, and the seal has to withstand this acid attack. This kind of storage is well tested because it is used to hold excess gas and oil mined in the summer for use in the winter. The difference here is that the storage has to be stable essentially forever, and the possibility of leaks has to be carefully monitored. Injection of CO2 into oil wells is actually beneficial, for it helps to push the oil up. Toward the end of an oil well's life, the oil gets quite thick, and gas, which might as well be CO2, is injected to lower its viscosity. This is what is happening in those nodding pumps seen along the California coast.

There are many large subterranean formations that can hold carbon dioxide. These are porous sandstone deposits covered with a cap of hard, impervious rock. For instance, such a depository has been found below the little town of Thornton, south of California's capital, Sacramento. It is estimated that it can hold billions of tons of CO2 in its pores, enough to store away hundreds of years of California's emissions.11 Of course, no one knows whether it will leak.

image069

Fig. 2.17 The Sleipner Platform in the North Sea (http://images.google.com)

There are plans to drill into this formation and test it, to the dismay of local residents. The reaction, NUMBY (Not Under My Back Yard!), is a switch from NIMBY (Not In My Back Yard!), an epithet used against wind and solar power.

Large geologic formations under the sea have also been found for CO2 storage. These are layers of porous sandstone called saline aquifers lying deep below the seabed and capped by impermeable slate. Storage in these aquifers is the only sequestration method that has been tested on a large scale, and this is a story in itself.9, 11-13 The Sleipner Platform, shown in Fig. 2.17, is a huge gas-drilling and carbon-sequestration plant located in the middle of the North Sea, halfway between Norway and England. It was built in 1996 by Statoil, Norway's largest petroleum company, to produce natural gas while testing sequestration. Built to withstand frigid conditions and storms with 130-mile-per-hour winds and 70-foot waves, it houses a crew of 240 whose jobs are considered the most dangerous in the world. Below Sleipner lies not only a rich field of natural gas but also a saline aquifer called the Utsira Formation, lying a kilometer below the seabed (Fig. 2.18). The aquifer is very large: 500 x 50 km in area and 200 m thick. It can hold 100 times Europe's annual CO2 emissions.

There was a special reason to build sequestration into the plant ab initio. The gas from the Sleipner field contains about 9% CO2, too high to burn properly unless reduced to 2.5%. The gas has to be scrubbed using the MEA solvent described above, thus releasing a million tons of CO2 a year that has to be stored. The way the CO2 is injected involves a little physics. It is compressed to 80 atmospheres because at this pressure it turns into a liquid about 70% as dense as water. So it is stored as a liquid. When it is mixed with the salt water in the aquifer, it tends to rise, since it is less dense. One worries how fast it moves and whether the 200-m thick layer of shale above the storage volume can spring a leak. Such leaks can arise from drilling through the cap to inject the gas, and these holes have to be carefully sealed with acid-proof material. Statoil has spent millions of dollars to develop a way to measure the spreading and leaking of the CO2 using sound waves. Since the system has 25-m resolution and the area is measured in kilometers, the amount of data is many megabytes. These data clearly show that the CO2 is spreading sideways as well as upwards, and that there are no leaks so far. In the best scenario, the CO2 will eventually dissolve into the brine (in 1,000 years or so) and thus become a liquid heavier than water. This then moves safely downwards, and on a geologic timescale will turn into a mineral, thus locking the carbon away permanently. All fossil fuels will be but a distant memory by that time.
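The buoyancy described above can be put in numbers. A minimal sketch using the 70%-of-water density from the text; the brine density is an assumed typical value, not a figure from the book:

```python
# Buoyancy of the injected CO2 in the Utsira brine.
rho_water = 1000.0                 # kg/m^3, fresh water
rho_co2 = 0.70 * rho_water         # liquid CO2 at ~80 atm, from the text
rho_brine = 1025.0                 # kg/m^3, assumed typical seawater brine
g = 9.81                           # m/s^2

lift = (rho_brine - rho_co2) * g   # net upward force per m^3 of CO2, newtons
print(f"Lift on injected CO2: {lift:.0f} N per cubic meter")   # ~3,200 N
# The CO2 therefore floats up against the shale cap; once it dissolves,
# the CO2-rich brine is slightly denser than plain brine and sinks instead.
```

This is why the integrity of the cap rock matters in the first thousand years, and why the eventual dissolution into the brine makes the storage progressively safer.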

The Utsira Formation is unusual in that it is located at the same place as the gas deposit, so that no transportation of the CO2 is necessary; but it is not unique as a large burial site. It is estimated that the USA has subterranean reservoirs capable of storing 4 trillion tonnes of CO2, enough to take care of its emissions until coal runs out. Statoil would not have built sequestration into the Sleipner plant if it did not have to pay an annual $53M carbon tax imposed by the Norwegian government. Global warming cannot be halted without strong legislation by enlightened political leaders. The cost of separating the CO2 and burying it is estimated to be about $25-$50 per tonne. Though this may come down as new techniques are developed, it is still a huge expense. Three tonnes of CO2 are produced for each tonne of coal burned, and a fairly large (1 GW) coal plant gives off 6 million tonnes of carbon dioxide per year. The resulting cost, up to $300M per year, would be passed on to the consumer. That is not even the main problem. It is simply not possible to make a fundamental change in all coal plants or to build enough new-technology plants in a short time. Up to now, except for Sleipner, only small, scattered projects for cleaning up coal have been funded,

image070

Fig. 2.18 Diagram of the gas field and saline aquifer below Sleipner (http://images.google.com)

with no integrated plan for replacing all dirty coal power with clean coal power. This is in stark contrast to the ITER project for developing fusion power; there, even the political problems of a large international collaboration have been tackled and solved. It may take two or three decades to clean up all coal power, and this is no shorter than the time needed to commercialize carbon-free renewable sources.
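A rough check of the cost figures quoted above, assuming for simplicity that the 1-GW plant runs all year at full power:

```python
# Back-of-the-envelope check of the sequestration cost for a 1-GW coal plant.
co2_tonnes_per_year = 6e6          # from the text
cost_range = (25.0, 50.0)          # $/tonne sequestered, from the text
kwh_per_year = 1e6 * 24 * 365      # 1 GW = 1e6 kW running all year: 8.76e9 kWh

for cost in cost_range:
    annual = co2_tonnes_per_year * cost
    print(f"${cost:.0f}/tonne -> ${annual/1e6:.0f}M per year, "
          f"or {100 * annual / kwh_per_year:.1f} cents/kWh extra")
```

At $50 per tonne this reproduces the $300M figure, which works out to roughly 3.4 cents added to every kilowatt-hour; a real plant's capacity factor is below 100%, so the per-kWh burden would be somewhat higher.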

Evidence for Climate Change

Paleoclimate

The earth's temperature and CO2 levels can be determined, surprisingly enough, as far back as 650,000 years ago. For the last few centuries, accurate records of temperatures measured with thermometers exist. Before that, there are ancient documents telling of extreme weather events, the dates of spring planting, or occurrences of plagues, from which some idea of the weather can be gleaned. For prehistoric eras, there were no direct observations, but data can be obtained indirectly from what are called proxies. Tree rings, ice cores, and cores of layered sediments in soil or sea bottoms give annual records that can be counted ring by ring. Trapped air bubbles in ice cores give the CO2 concentration hundreds of millennia ago. The fractional abundance of oxygen or hydrogen isotopes in ice cores and coral yields the temperature, as do other ratios, such as Mg to Ca. These proxies can be correlated with one another to give higher accuracy in recent times, for which there are more data. The result from Antarctic ice is shown in Fig. 1.4. As the earth undergoes long glacial ages and short interglacial warm periods, the CO2 and CH4 abundances follow the temperature quite closely. Of course, we cannot tell from this alone which is the cause and which is the effect. The present warm period, which allows life to exist, looks no different from previous interglacial periods, except for the spike seen at the far right. For that spike, we now know that CO2 is the cause, and the temperature rise the effect.

image005

Fig. 1.4 Paleoclimatic data on the variation of temperature and CO2, CH4, and N2O abundances from Antarctic ice cores [6]. The temperature is represented by the deuterium abundance proxy (bottom curve). The shadings indicate interglacial warm periods

When considering the climate tens of thousands of years back, we have to take into account changes in the earth's orbit. The earth's spin axis is not perpendicular to the plane of its orbit but is tilted at 23.5°, causing winter in the northern hemisphere while it is summer in the south. This tilt varies between 22° and 24.5° over a period of about 41,000 years. This does not change the total sunlight on the earth, but it distributes it differently between the northern and southern hemispheres. Since there is more land in the north and more water in the south, this redistribution of sunlight can affect the climate. A bigger effect comes from the precession of the equinoxes, in which the earth's axis sweeps around like that of a gyroscope. The effect comes from an interaction with the ellipticity of the earth's orbit: solar radiation is stronger when the earth is near the sun (perihelion) than when it is far away (aphelion). Thus, in one orientation, the northern hemisphere has summer during perihelion; and, some 10,000 years later, the southern hemisphere gets the hotter summers. The shape of the earth's orbit can also change between more circular and more elliptical due to the pull of other planets, mainly Jupiter. This happens on a timescale of 100,000 years or more. The ice ages may have started at a coincidence of these orbital forcings, triggering runaway feedback, as we described before. The recovery into warm periods is equally remarkable. There is an intriguing theory that the most recent recovery (the last shaded bar in Fig. 1.4) may have been caused by humans when they started farming about 11,000 years ago [7]. Methane is produced by decaying plant and animal matter produced in agriculture, and deforestation decreases CO2 absorption by trees.
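How much the ellipticity matters is a one-line calculation, since solar flux varies as 1/r². The eccentricity values below are assumed standard figures, not numbers from the text:

```python
# Perihelion vs. aphelion sunlight for an orbit of eccentricity e.
def perihelion_excess(e):
    """Fractional excess of solar flux at perihelion over aphelion (1/r^2 law)."""
    return ((1 + e) / (1 - e)) ** 2 - 1

print(f"Today   (e ~ 0.017): {perihelion_excess(0.017):.1%}")   # ~7%
print(f"Maximum (e ~ 0.06):  {perihelion_excess(0.06):.1%}")    # ~27%
```

Even today the earth receives about 7% more sunlight at perihelion than at aphelion; precession decides which hemisphere has its summer during that stronger sunlight, which is why these slow orbital cycles can pace the ice ages.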

image006

Fig. 1.5 CO2 levels in parts per million (ppm) and CH4 levels in parts per billion (ppb) vs. year before 2005, as measured from different sources [6]. NH and SH stand for northern and southern hemisphere, respectively

The paleoclimate data for the last 20,000 years on how CO2 and CH4 abundances changed with time are so good that observations from different proxies agree amazingly well. This is shown in Fig. 1.5. The CO2 level increased slowly from 190 ppm to the preindustrial level of 280 ppm, followed by the recent rapid increase to 379 ppm in 2005. The present level is much higher than any level that existed over the past 650,000 years (indicated by the gray bar at the left). The current spike is also seen in the methane data.

The Growth of Wind

Wind is the most economical of the renewable technologies, and installation of wind turbines has grown rapidly in the last few years. In Fig. 3.2, we see that Europe

image075

Fig. 3.2 Accumulated installed wind power from 2006 to 2008 in Europe, the Americas, Asia, and the rest of the world. The scale is in gigawatts (GW); a gigawatt is a thousand megawatts. Redrawn from Vestas Wind, No. 16, April 2009. Original data from BTM World Market Update 2008 (March 2009)

image076

Fig. 3.3 Installed wind power in the top four countries (USA, Germany, Spain, and China) plus Denmark [Vestas Wind, No. 16, April 2009. Original data from BTM World Market Update 2008 (March 2009)]

leads in this field, being more dependent on foreign oil than other continents. It also had a head start, but other regions have been advancing more rapidly. Between 2006 and 2008, wind capacity more than doubled in the Americas and Asia. The units in Fig. 3.2 are gigawatts (GW); a gigawatt is a thousand megawatts, or a billion watts. A large coal plant generates roughly 2 GW of heat, giving 1 GW of electricity. So the 65 or so GW of peak wind power in Europe in 2008 would replace, roughly, 65 coal plants. We will see later that the average power of wind turbines is much less than their peak power.

The installed wind capacity of the top countries is shown in Fig. 3.3, again in gigawatts. We see here that the head start of the European countries is being rapidly

overtaken by the USA and China. Wind power more than doubled in the USA and more than quadrupled in China in those two years. Denmark's wind capacity is typical of many other small European countries, and it is shown here because Denmark has been a leader in developing the technology of wind turbines and their deployment onshore and offshore. Currently, 20% of the electricity in Denmark is supplied by wind.8 It is estimated that by 2013 electricity from wind will cost $0.055/kWh, compared with $0.05 from coal or gas, $0.06 from nuclear, and $0.22 from solar.8

At one time, after the Chernobyl accident, Germany wanted to eliminate all its nuclear reactors, replacing them with wind and solar plants. A feed-in law has been in place since 1990, requiring utilities to buy energy from green sources that feed into their grid.9 The plan was to install 500 MW of offshore wind in the North Sea by 2006 and 2,500 MW by 2010. The major players are the large utility companies E.ON Netz, REpower Systems, and the giant Swedish firm Vattenfall. However, this was harder than they thought: the subsidy was too small, and the environmentalists were too vocal. Only a few offshore turbines have been installed. Chancellor Angela Merkel lowered the costs by shifting the burden of new transmission lines from the wind developers to the power grid operators. Now 900 MW of turbines have been ordered, and E.ON Netz will spend $254 million (€180 million) to build a cluster of turbines in the North Sea, using some of the huge 5-MW turbines from REpower (described later in this chapter). Nonetheless, wind is so capricious that it can supply only a small fraction of the energy now generated by nuclear reactors.9

In the USA, installed capacity was close to 30 GW by the middle of 2009, providing 1.4% of the country's electricity. The states with the most wind power are Texas (7.1 GW), Iowa (2.8 GW), and California (2.5 GW). The largest wind farms are the Horse Hollow, Capricorn Ridge, and Sweetwater farms in Texas; Altamont, Tehachapi, and San Gorgonio in California; and Fowler Ridge in Indiana.10 Wind supplies 5% of the renewable energy in the USA, compared with 1% for solar; and renewables account for 7% of total energy. The Great Plains states, like Kansas, have great potential for further development. The current rate of buildup (Fig. 3.3) is on track to attain the Obama administration's goal of doubling clean energy by 2012. Little has been done so far about offshore wind. There are plans to try it on the East Coast, but the technology will be far behind that of the Danes, who have been researching it for many years. The economic crisis may slow down investment in this field. For instance, T. Boone Pickens had planned to spend $10 billion to build the largest wind farm in Texas, but the plans were scrapped when the price of oil dropped to the point where wind became too expensive. Ideology is again the slave of economics.

For the far future, the proponents of wind power have no such reservations. Figure 3.4 shows the predictions of the experts at Vestas Wind Systems of Denmark. The blue part of the curve shows the 16-fold increase of the world's wind turbine capacity from 1997 to 2008. The red part of the curve shows the expected future growth up to 1.3 trillion watts by 2020. Whether this will actually happen is problematical. As we shall see, this would require a large amount of backbone power to back up the wind power, and too much fluctuating power may make the power grid unstable. The good news is that wind installations have a very small fossil footprint.

image077

Fig. 3.4 Actual (blue) and predicted (red) wind capacity, in gigawatts, from 1997 to 2020 [Vestas Wind, No. 16, April 2009. Original data from BTM World Market Update 2008 (March 2009)]

Slowing the Inevitable

Regardless of the scientific basis of climate change, what can be done about it is a political and economic problem. What makes money is what will happen, but this can be influenced to some extent by laws and subsidies enacted by a savvy government. This well-publicized subject falls outside the scientific tenor of this book, and only a brief summary is given here. Since the ways to combat global warming depend so much on the way of life and the political setup of each country or community, even the IPCC Working Group 3's voluminous report [16] on mitigation gives few substantive conclusions or recommendations. There is disagreement about the predictions of the IPCC report. Some say that it is too pessimistic and we need not overreact to the forecasts; others say that the report is not strong enough and we should act faster than we are doing now. In any case, it is known that anthropogenic climate change (the only part we can control) is mostly due to GHG emission, particularly of CO2, and that much of this gas will persist in the atmosphere for hundreds of years. We can hope to slow down the increase in warming potential, but we cannot expect to recover from our profligate habits for at least half a century.

Mitigation consists of three steps: adaptation, conservation, and invention. Adaptation means taking immediate steps to protect ourselves from impending disasters, such as sea-level rise and violent storms. This means building seawalls, raising bridge heights, strengthening and raising structures near the shore, and so forth. Conservation requires no new technologies or expenses, and many organizations are already promoting it. Lights can be turned off by infrared or motion detectors when no one is in the room. Electronic equipment can be made to draw no current when off. Gasoline can be saved by driving slowly, by carpooling, and by bicycling, for instance. Thermostats can be turned higher in summer and lower in winter. Recycling programs are already in place to save the fossil energy used in mining and refining. Everyone is familiar with this list, and many books have been written on "green" living. Along with conservation goes efficiency: switching to more energy-efficient appliances that have already been invented. The change from incandescent lamps to fluorescent and LED lighting is being widely implemented. Every time an appliance like a refrigerator has to be replaced, it should be replaced with a new, efficient model. Gas-electric hybrid cars and upcoming plug-in hybrids will cut fossil fuel usage, but unfortunately their popularity rises and falls with gasoline prices. The worldwide use of computers has become a large consumer of electricity from fossil fuels. The energy efficiency of computers is increasing all the time, but computers cannot be recycled, and new computers all have a large fossil footprint. Houses can be built with better insulation and use of solar energy. Power plants can greatly increase efficiency by co-generation, in which waste heat from electricity generation is recaptured for heating and cooling. Conservation and efficiency are relatively easy to implement, and there is a public will to do this.

The third step in mitigation is the invention of new devices, a longer-term objective. Foremost among these are new ways to generate energy that do not emit CO2, and these are the subject of Chap. 3. Controlled fusion, the topic of this book, fits into this category of long-term solutions. Shorter-term needs are, for instance, the invention of better batteries or new chemistries for making synthetic fuels. Energy storage is a problem both for transportation and for intermittent energy sources such as solar or wind power, and there has so far been no great breakthrough on batteries. Paradigm-changing inventions may require going back to basics. Forward thinking in the US Department of Energy's Office of Basic Energy Sciences led to a series of ten workshops on Basic Energy Needs, such as electrical energy storage, solar energy utilization, and catalysis for energy. The resulting Energy Challenges report, "New Science for a Secure and Sustainable Energy Future," summarizes the basic scientific advances needed in the long term.13

The magnitude of the long-term problem — controlling or reversing global warming in the next 50-100 years — can be seen from the following graphs. We have seen at the beginning of this chapter that anthropogenic forcing of global warming comes mainly from the emission of GHGs, of which CO2 is the main culprit. Figure 1.25a shows that the major part of this comes from the burning of fossil fuels, so that we must either develop new energy sources or find ways to eliminate the CO2 pollution. Figure 1.25b shows the distribution of GHG emissions from various human activities worldwide. These activities are so varied among different countries that general methods of mitigation cannot be applied.

From 1970 to 2004, CO2 emissions grew by 80%, and total GHG emissions, weighted by warming potential, increased by 70%. About half of this comes from highly developed nations representing only 20% of the world population. Aggravating the problem is the growth of both population and production. Figure 1.26 shows predictions of population and gross domestic product (GDP) growth, calculated in different scenarios, some 400 in all, without intervention by mitigation techniques. A large divergence of results can be seen, since human behavior has to be assumed in addition to the physics considered in climate simulations. Pre-2000 computations are shown by the blue shading, while more recent ones, using different methods, are shown by the lines. Population growth has slowed recently, so the lines give a more optimistic view. Third-world countries will increase their GDPs rapidly as they become industrialized. China has already overtaken the USA as the world leader in CO2 emissions.

When mitigation is added to the scenarios, different assumptions have to be made for each economic sector in each country or region, and even larger divergence of results is produced. To make sense of the mass of data from some 800 different scenarios, the IPCC has grouped them according to the GHG concentration level or, equivalently, the radiative forcing that each scenario ends up with, and has plotted the range of mean global temperature increase above the preindustrial level as predicted by all these models. This is shown in Fig. 1.27. Each category

image028

image029

Fig. 1.25 (a) Major constituents of anthropogenic GHGs; (b) GHG emission by various sectors. Here, F-gases are the ozone-depleting fluorinated gases [16]

from I to VI lumps together scenarios resulting in an increasing range of GHG levels, and the curves show the range of temperature rises that the scenarios in that group predict. The results are also shown in Table 1.1. Here, it is seen that the CO2 level can be made to peak at some time in this century and then go down. The larger the CO2 level, the later this peak will occur. Category IV has the most scenarios; apparently, this is the most anticipated range.

image030


As complicated as these computations are, they do not tell us how to achieve the stabilization levels specified. No one method of mitigation will do the trick. A simple and attractive way to analyze the problem has been given by Socolow and Pacala [17-19]. They address the intermediate term of the next 50 years, relying on existing methods of conservation and efficiency enhancement but not counting on any new inventions which may come later. Since CO2 is the dominant GHG, only
that gas is considered here to simplify the problem. In Fig. 1.28, the wiggly line shows the data for yearly carbon emissions measured in billions of tons (gigatons) per year (GtC/year). The dashed line is the current path that we are on, and it will lead to a tripling of our current level of about eight GtC/year by the end of the century. The horizontal line is the desired goal of maintaining emissions at the present level. The yellow triangle between these lines represents, then, the reductions in emissions that we have to make to achieve this goal. This triangle is enlarged in Fig. 1.29.

[Plot: equilibrium global mean temperature increase above preindustrial (°C) vs. GHG concentration stabilization level, 300-1,000 ppm CO2-eq]

Fig. 1.27 Range of predictions for global temperature rise according to scenarios sorted into Groups I-VI according to the GHG concentration level achieved with mitigation methods [16]

Table 1.1 If the target CO2 level in column 3 (or the equivalent CO2 level of all GHGs in column 4) is achieved, the year in which CO2 emissions peak is given in column 5, and the percentage change in emissions by 2050 is in column 6 [16]

Category   Additional radiative   CO2 conc.   CO2-eq conc.   Peaking year for   Change in global emissions   No. of
           forcing (W/m2)         (ppm)       (ppm)          CO2 emissions      in 2050 (% of 2000)          scenarios

I          2.5-3.0                350-400     445-490        2000-2015          -85 to -50                     6
II         3.0-3.5                400-440     490-535        2000-2020          -60 to -30                    18
III        3.5-4.0                440-485     535-590        2010-2030          -30 to +5                     21
IV         4.0-5.0                485-570     590-710        2020-2060          +10 to +60                   118
V          5.0-6.0                570-660     710-855        2050-2080          +25 to +85                     9
VI         6.0-7.5                660-790     855-1130       2060-2090          +90 to +140                    5
Total                                                                                                        177

image032

Fig. 1.28 Socolow-Pacala diagram showing the amount of mitigation (yellow triangle) needed to keep CO2 emissions constant at the present level [17-19]

image033

Fig. 1.29 Division of the stabilization triangle into wedges, each representing a cut of one billion tons of carbon emission per year. (Design originated by the Carbon Mitigation Initiative, Princeton University, and replotted from the data in refs. [17-19])

The triangle can be divided into eight "wedges," each representing the contribution of one stratagem to these carbon reductions. Each wedge represents a reduction of one GtC/year in carbon emissions. Together, these wedges would hold carbon emissions to eight GtC/year instead of the 16 GtC/year expected by 2058. Looked at this way, the problem is not so overwhelming: each sector simply needs to focus on that amount of reduction in its activities. The lines, of course, are not exactly straight; they have been straightened to simplify the idea and make it understandable to all. In fact, the idea is now so simple that the authors have made it into a game that can be played in the classroom, with each student or group of students responsible for finding out how to achieve the goal in one sector. There are numerous ways to make a wedge, but these may overlap. For instance, building 700 fewer coal plants in the next 50 years is one wedge, and so is building 2.5 times more nuclear plants than exist now; but these are the same wedge if the coal plants are replaced by nuclear plants. The wedges in Fig. 1.29 are a few examples chosen so as not to overlap.
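The size of a wedge follows from simple geometry: a stratagem whose savings ramp linearly from zero today to 1 GtC/year after 50 years. The area of that triangle is the total carbon kept out of the air:

```python
# The arithmetic of a wedge in the Socolow-Pacala diagram.
years = 50
final_rate = 1.0                       # GtC/year saved in the 50th year

one_wedge = 0.5 * years * final_rate   # triangle area: 25 GtC
print(f"One wedge keeps {one_wedge:.0f} GtC out of the air")
print(f"Eight wedges: {8 * one_wedge:.0f} GtC over 50 years")
```

Each wedge thus stands for about 25 GtC of avoided emissions, and the whole stabilization triangle for about 200 GtC.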

From top to bottom: one wedge can be gained if cars averaged 60 miles per gallon (3.9 liters/100 km) instead of 30 mpg (7.8 liters/100 km); hybrid technology already exists for this. Driving 5,000 miles per year instead of 10,000 would give another wedge; bicycling, ride sharing, and public transportation could achieve this, but at the expense of personal time. Buildings use 60-70% of all electricity produced, and much of this is unnecessary; cutting it in half can yield two wedges. Requiring 800 coal plants to sequester their CO2 output would yield one wedge. Building more renewable energy sources such as wind and solar could give one wedge without inventing new technologies. Replacing coal plants with nuclear plants, up to 2.5 times their current number, would yield one wedge. Cutting in half the area of forest destroyed per year yields one wedge. With so many ways to tackle the problem, this way of dissecting it makes it not as mind-boggling as it first seems, and it is easier to evolve a strategy. Holding the line is achievable with effort and government incentives. As under-developed countries increase their use of electricity and fuels for cooking, the number of wedges needed will increase, but only by about one-fifth of a wedge [17-19].

You may wonder how billions of tons of carbon can get into the air when it goes up as CO2, which is just a gas weighing no more than the bubbles coming out of a carbonated beverage. Box 1.3 explains how.

Most nations have taken action to do their share in reducing carbon emissions. With Chancellor Angela Merkel (a physicist) at the helm, Germany leads the way, and other nations have followed. It is the largest market for solar cells and the third largest producer, behind China and Japan. A feed-in tariff of about 0.5 euro per kilowatt-hour is paid for electricity fed back into the grid. Germany is also a major user of wind power. Its renewable energy sources produce 14.2% of its power, compared with the European Union's target of 12.5% by 2010.14 The program is funded by adding 1 euro to monthly electric bills, and the worry is that this will increase with the rapid growth of solar energy deployment. Tony Blair set emissions goals for the UK of a 50% cut by 2050. In the USA, California leads the way under Governor Arnold Schwarzenegger, who has introduced ambitious legislation to reduce CO2 emissions to 1990 levels by 2020 and to 80% below 1990 levels by 2050. The USA, however, has a history of dragging its feet on energy and environment issues since it has more fossil

Box 1.3 How Can CO2 Weigh So Much?

Here, we are talking about billions of tons of CO2, a gas as light as the air we breathe. Can our cars and factories actually emit that much weight in a gas? Indeed they can, and here is how. First, a billion is such a large number that it is hard to visualize, even though we know that it is a thousand million in the USA and a million million in the UK. (A gigaton, Gt, is a US billion tons.) So let us bring it down to something more palpable. There are about a billion cars in the world, so each car emits about a ton of pollutants a year, or almost the car's own weight, on average. That is still an unbelievable amount.

The weight of gasoline is mostly in carbon, since gasoline molecules are hydrocarbons with a ratio of about two hydrogen atoms (atomic weight 1) to one carbon atom (atomic weight 12). So 12/14ths of the weight of gasoline is the weight of the carbon in it. Gasoline is lighter than water; one liter of it weighs 0.74 kg, compared with the standard weight of 1 kg per liter of water. Of the 0.74 kg, 0.63 kg (six-sevenths of it) is carbon. How much does a tankful of gasoline weigh? Say it takes 45 liters (12 gallons) to refill a tank. The weight of a tankful is then about 45 x 0.74 = 33 kg (73 lbs), containing 45 x 0.63 = 28 kg of carbon. When the gasoline is burned, the carbon and hydrogen combine with oxygen from the air to form CO2 and H2O, respectively. Since oxygen's atomic weight is 16, a molecule of CO2 has molecular weight 12 + (2)(16) = 44, and the weight of the carbon is multiplied by 44/12 = 3.7 by picking up O2 from the air! So when a whole tankful of gasoline is burned, it emits 28 x 3.7 = 104 kg (228 lbs) of CO2 into the air. If a car refuels once every two weeks, or 26 times a year, its CO2 emission is then 26 x 104 = 2,700 kg of CO2. This is 2.7 metric tons per year, or about 3 US tons! The carbon footprint of driving is even larger, since it takes a lot of fossil energy to make the gasoline in the first place.

The discussion about wedges used units of gigatons of carbon, not CO2, per year. To get back to carbon, we have to divide by 3.7, so our example car can emit 2.7/3.7 = 0.73 tons or almost a ton of carbon a year. If we increase miles per gallon by a factor of 2, we would save 0.5 ton per year per car or 0.5 GtC/year for one billion cars. By 2059, we expect to have two billion cars and that doubles the savings back to one GtC/year. Hence the top wedge in Fig. 1.29. While we are merrily driving along the highway, the car is spewing out this odorless, colorless gas in great quantities the whole time!

reserves than most countries outside OPEC. The USA did not sign the Kyoto Protocol because it would have cost too much. The 2008 climate-change strategic plan by the Department of Energy called for $3 billion in energy research, which is the same amount as in 1968 in adjusted dollars. Under the Bush administration, the USA failed to live up to its commitment to ITER for two years. ITER is the international project to develop fusion power and is described in Chap. 8. President Obama has appointed Steve Chu as Secretary of Energy and John Holdren (formerly a plasma physicist) as Science Adviser. This administration has already taken steps to move forward aggressively in protecting the environment. For instance, $777 million has been allocated to establish 46 Energy Frontier Research Centers in US universities and laboratories, and a new ARPA-Energy program has been started in the Advanced Research Projects Agency to stimulate new ideas for energy efficiency and curbing of carbon emissions.

The first step that is usually taken for economic reasons is to install a Cap and Trade system, in which companies with large carbon emissions can buy credits from other companies that have emissions below the legislated level. This does not directly reduce overall emissions unless the low-carbon companies are new ones using clean energy. Coal plants will find it cheaper to buy carbon credits than to install equipment to capture and sequester their CO2. A carbon tax would be about $100-$200 per ton of carbon emitted, equivalent to $60 per ton of coal burned or $0.25 per gallon of gasoline [17-19]. Perhaps in anticipation of this tax, which will raise electricity bills, it is encouraging that large companies like Walmart and Google have installed solar panels on their roofs.
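Converting a carbon tax into everyday units takes only the carbon contents worked out in Box 1.3. Coal's roughly 70% carbon fraction below is an assumed typical value, not a figure from the text:

```python
# A $100/tonne-of-carbon tax in familiar units.
tax = 100.0                                  # $/tonne of carbon, low end of the range

# Coal: roughly 0.7 tonne of carbon per tonne of coal (assumed typical value)
print(f"Coal: about ${tax * 0.70:.0f} per tonne burned")

# Gasoline: 0.74 kg/liter, six-sevenths of which is carbon (Box 1.3)
kg_carbon_per_gallon = 3.785 * 0.74 * 6 / 7               # ~2.4 kg per US gallon
print(f"Gasoline: ${tax * kg_carbon_per_gallon / 1000:.2f} per gallon")   # ~$0.24
```

The gasoline figure reproduces the $0.25 per gallon quoted above; the coal figure lands in the neighborhood of the quoted $60 per ton, the exact value depending on the carbon fraction of the coal.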

Enlightened legislation has succeeded in protecting the environment in the past: CFCs have been eliminated to cure the ozone-hole problem, and lead has been taken out of gasoline, paints, and plumbing. We can succeed again with global warming.

Legislation is also necessary because mitigation involves entire communities, not just individuals. "Greener than thou" is not the right attitude. Here is an example. There was a television program showing the construction of a "green" skyscraper in New York. It was noted that the high building intercepts 40 times as much sunlight as would normally fall on that area. By using partially reflecting windows, the heat load on the building could be reduced, with substantial savings in the energy required for air conditioning. Erecting a building, however, does not change the amount of heat that the sun deposits on each square meter of the earth. What happens is that the building throws a shadow, thus cooling the buildings behind it. This benefit accrues regardless of window design. Reflecting windows would heat the buildings in front, thus increasing their air-conditioning energy. Thus, whether total energy is saved or not depends on the energy efficiency of the neighbors' equipment. Market-driven savings are necessarily selfish, and one has to be wary of such profits.

This discussion of mitigation is about the near term of the next 50 years. In the latter half of the twenty-first century, the world will be quite different. New tech­nologies will exist that we cannot imagine now. We went from the Wright brothers to the Boeing 747 in only 67 years.

In 2050, the remaining supplies of oil and gas will be prohibitively expensive. Local power by solar and wind will be commonplace. Coal and nuclear will supply base power in spite of the problem of storing their wastes and the cost of mining. Controlled fusion, which has neither problem, will be coming online as the primary power source. Much of the expense of developing and commercializing new energy technologies can be spared if we finish the development of fusion sooner.

Notes

1. Subsequent letters and rebuttals published in this journal and in APS News showed that a number of physicists believed that variations in solar radiation could have caused the earth’s temperature rise. Their proposal to mitigate the American Physical Society’s strong statement that climate change is caused by humans was overwhelmingly rejected by the Society.

2. http://www.ipcc.ch or just google IPCC AR4.

3. Note that this is not the half-life of CO2 concentration in the atmosphere, which is 30 years. CO2 molecules go in and out of the ocean, and four years is the recycling time. Courtesy of R. F. Chen, University of Massachusetts, Boston, who read this chapter critically.

4. For instance, Hegerl et al. [10], countered by Schneider [11]. Also Scafetta and West [4], who elicited seven letters to the editor in Physics Today, October 2008, p. 10ff.

5. Not exactly, since fresh water is about 2.5% less dense than seawater.

6. National Geographic News, December 5, 2002.

7. A. Gore, An Inconvenient Truth, DVD (Paramount Home Entertainment, 2007).

8. The eastward motion is the result of what physicists call the Coriolis force. The earth rotates west to east (making the sun move east to west daily), and the air picks up the large “ground speed” near the equator. As the air moves northward, it goes into a region of lower ground speed and moves faster eastward than the ground does.

9. What this IPCC graph (FAQ 3.2, Fig. 1.1) means in detail is too complicated to explain and is shown here only to illustrate the large local variations in rainfall data.

10. An impressive graph of the changes in several species appeared in Audubon Magazine, March/April 2009, p. 18.

11. The Ocean Conservancy newsletters, Spring 2008 and Winter 2009.

12. National Geographic Video Program, Six Degrees Could Change the World (2009).

13. http://www.sc.doe.gov/bes/reports/list.html.

14. New York Times, May 16, 2008.

Oil and Gas Pipedreams

We discussed the shortage of oil earlier in this chapter but gave short shrift to natural gas, which supplies as much energy as oil, as seen in Fig. 2.1. That is because gas and oil mostly occur in the same places, are mined the same way, and are being depleted similarly. We also ignored the minor overlap between oil and gas: oil can be converted to propane and butane gases, which we use for camping and for power in remote houses; and gas can be liquefied at low temperatures for more convenient transport as LNG (liquefied natural gas). In this section, we will consider these fuels together as we examine the various proposals for extending their supplies.

The price of oil can jump wildly, as it did in 2008-2009 from more than $140 to less than $40 per barrel, and it can jump back. The price of gasoline follows, and this has a great effect on the economy as people travel less and buy fewer large cars. The oil crisis of 1973 even triggered legislation setting the speed limit in the USA at 55 miles per hour. These rapid changes are not our concern here; we are worried about the end of oil and gas altogether. In 2007, BP (British Petroleum) reported that proven reserves were 15% higher than previously thought, so that oil will last another 40 years,14 30 more than predicted in Fig. 2.16. There was widespread doubt, however, about this result from a normally reliable source. For instance, the IEA (International Energy Agency) assessed the top 400 oil fields and found them old and in bad condition.15 The IEA did not see how production, now 87 million barrels per day, could exceed 100 million, much less the 116 million barrels per day predicted to be needed by 2030. Similarly, the six oil fields that produce 90% of Saudi oil were found to be greatly depleted.16 In the USA, the crunch is already felt: the Alaskan pipeline, built in the 1970s to carry most of our domestic oil and gas, is carrying only one-third as much these days because the wells at Prudhoe Bay are being depleted at a rate of 16% per year. Figure 2.19 shows that discoveries of new oil fields have been declining since 1964.17
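The "another 40 years" figure is just a reserves-to-production ratio. A quick check, using the 1,200-billion-barrel proven-reserve figure quoted later in this chapter:

```python
# Reserves-to-production arithmetic behind "oil will last another 40 years."
reserves = 1200e9                  # barrels, proven reserves (quoted later in this chapter)
daily_use = 87e6                   # barrels per day, from the text

print(f"{reserves / (daily_use * 365):.0f} years at today's rate")   # ~38 years
# Growing consumption shortens this; new discoveries lengthen it.
# Either way it is a depletion estimate, not a guarantee.
```

At today's consumption the reserves last about 38 years; if demand really grew toward 116 million barrels per day, the horizon would shrink accordingly.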

Russia exports more oil and gas than any other country. It produces 11.8% of the world's oil, compared with 9.9% for Saudi Arabia and 12.4% for Iran, the United Arab Emirates, Kuwait, and Iraq combined.15 Its state utility, Gazprom, is so powerful that it has held Ukraine and other parts of Europe at its mercy by shutting off gas deliveries through its pipeline. The politics of gas and oil are changing. The former holders of power, ExxonMobil, Chevron, BP, and Royal Dutch Shell, are being replaced by the new "Seven Sisters": Aramco (Saudi Arabia), Gazprom (Russia), CNPC (China), NIOC (Iran), PDVSA (Venezuela), Petrobras (Brazil), and Petronas

image071

Fig. 2.19 Rate of oil discoveries since 1900 (http://www.theoildrum.com)

(Malaysia).18 The IEA predicts that 90% of new oil and gas discoveries will come from developing nations. We will next show the different ways in which the industry is trying to explore beyond "proven" reserves.

Computer Modeling

This science has improved greatly since the 2001 IPCC report, and predictions are therefore more reliable. It is a very complicated problem [8]. There are standard physics equations that tell you how air and water move and how heat is transferred from one medium to another, but weather varies with location and changes by the hour. To predict climate, one has to divide space into a finite number of cells, few enough for computers to handle. These cells are about 200 km laterally and 1 km vertically (in the atmosphere), decreasing vertically to maybe 100 m near the ground. To divide up time, 30-min averages are taken for climate, and shorter time steps for weather forecasting. The computer program then takes the average conditions in one cell and predicts what the conditions will be after the next time step. The conditions include, for instance, temperature, wind speed, water vapor, snow cover, and all the effects mentioned earlier in this chapter. We did not mention the history of CO2. About 45% of the CO2 that man generates goes into the atmosphere, 30% into the oceans, and the rest into plants. The CO2 absorbed by oceans diffuses downward over many years. The CO2 in the atmosphere has a mix of different lifetimes; roughly speaking, half goes away in 30 years and half stays for centuries. All such effects have to be accounted for in the models.
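As a cartoon of this cell-and-time-step bookkeeping, here is a sketch under loudly artificial assumptions: one variable per cell, one spatial dimension, and invented mixing and forcing numbers. Real models track dozens of coupled variables per cell in three dimensions:

```python
# Toy caricature of cell-by-cell time stepping: each cell carries one
# averaged quantity (temperature); each step updates it from exchange
# with neighboring cells plus a uniform forcing.  All numbers invented.
n_cells = 10
temps = [15.0 + i for i in range(n_cells)]   # initial cell temperatures, deg C
mixing = 0.1                                 # fraction exchanged per step
forcing = 0.01                               # warming added per step, deg C

for step in range(100):
    new = temps[:]                           # boundary cells held fixed
    for i in range(1, n_cells - 1):
        neighbor_avg = 0.5 * (temps[i - 1] + temps[i + 1])
        new[i] = temps[i] + mixing * (neighbor_avg - temps[i]) + forcing
    temps = new

print([round(t, 1) for t in temps])
```

The real difficulty, as the next paragraph explains, is deciding what the "average conditions" in each cell should be in the first place.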

The key word is "average." How does one find the average conditions in a 100 x 100 x 1 km cell 1 km above Paris, for instance? Clouds are forming and moving all the time. Modelers have developed parametrization, a technique for averaging over small-scale and short-time conditions. Clearly, it takes many decades of experience to get parameters that give the right averages, and different workers will arrive at different parameters. This does not inspire great confidence, and most skeptics of climate change distrust modeling, correctly pointing out that it is the weak point in forecasts of impending disaster. Fortunately, there is a way to check. Starting a couple of centuries ago, accurate data on temperature, CO2 content, and so forth became available. Modelers can take those data, "predict" what happened afterwards using their parameters, then check against what actually happened and adjust the parameters to give the correct result. The only uncertainty is then whether or not the parameters of a century ago are still valid today. We will show that different workers have varying success in their predictions, but all show that the current global warming is man-made.
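The hindcast procedure can be caricatured in a few lines. Everything here is invented for illustration: a one-parameter "model" is tuned on a synthetic past record and then run forward, which is the logic, though certainly not the scale, of the real exercise:

```python
# Hindcasting in miniature: tune a parameter on a known record, predict forward.
record = [0.0, 0.10, 0.18, 0.33, 0.41, 0.52]   # invented past warming, deg C
forcing = [0, 1, 2, 3, 4, 5]                   # invented past forcing, arbitrary units

# One-parameter model (warming = k * forcing); least-squares fit for k:
k = sum(f * t for f, t in zip(forcing, record)) / sum(f * f for f in forcing)
print(f"Fitted parameter k = {k:.3f} deg per unit forcing")

# With k fixed by the past, the model can now be run into the future:
print(f"Predicted warming at forcing 6: {k * 6:.2f} deg C")
```

The open question the text raises is exactly the one this sketch hides: whether a parameter tuned on the past remains valid as conditions change.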

When is a Megawatt Not a Megawatt?

When people talk about wind turbines, the quoted power of a turbine is the peak power, the most it can generate when the wind is strongest. The power output of a turbine varies as the cube of the wind speed. That means that if the wind drops from 20 miles per hour (9 m/s) to 10 mph (4.5 m/s), the electric power produced goes down by a factor of eight. The average number of megawatts generated is then much lower than the maximum that the turbine is built for. Figure 3.5 gives an idea of how variable wind power is. The data are from the area controlled by E.ON Netz, a large company that controls 40% of Germany's wind capacity. During the year, this example shows that the power varied from 0.2 to 38% of the peak grid power! For this reason, a turbine built to generate 5 MW actually produces much less than that on average. Just how much is shown in Fig. 3.6. This graph shows for how much time during the year the wind power generated in a certain area was the number of gigawatts shown on the vertical scale. The time is measured in quarter-hours. We see that the maximum installed capacity of 7 GW was never reached, and even 6 GW was produced for only a very short time. The average power over the year was less than one-fifth of the installed power capability. For half the year, less than 14% of the installed capacity was usable.11 So 7 GW can mean only 1.3 GW!
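The cube law and its consequence for average output can be sketched in a few lines. The 9 m/s rated speed and the sample wind record below are invented for illustration:

```python
# The cube law: cubic output below the rated wind speed, capped above it.
RATED_POWER = 5.0      # MW
RATED_SPEED = 9.0      # m/s; assumed speed at which full power is reached

def output(wind):
    """Electric power in MW for a given wind speed in m/s."""
    return min(RATED_POWER, RATED_POWER * (wind / RATED_SPEED) ** 3)

print(f"{output(9.0):.2f} MW at 9.0 m/s")   # 5.00 MW: full output
print(f"{output(4.5):.2f} MW at 4.5 m/s")   # 0.62 MW: half the wind, 1/8 the power

winds = [3, 5, 7, 9, 4, 6, 2, 8]            # m/s, an invented sample of hours
avg = sum(output(w) for w in winds) / len(winds)
print(f"Average: {avg:.2f} MW, {avg / RATED_POWER:.0%} of rated capacity")
```

Even this friendly invented sample averages only about a third of the rated power; the real E.ON Netz data above show less than a fifth.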

image078

Fig. 3.5 Daily fluctuations of wind power in 2004 in the E.ON Netz control area. The scale gives the contribution of wind power to the peak grid load. Adapted from E.ON Netz Wind Report, 2005

image079

Fig. 3.6 The number of quarter-hours in 2004 in which the wind power generated by E.ON Netz was the number of gigawatts plotted on the vertical scale. (There are about 35,000 quarter-hours in a year.) For instance, there were about 5,000 quarter-hours in which the power was 3 GW, and about 17,000 quarter-hours when the power was 1 GW. The average was 1.3 GW. Adapted from E.ON Netz Wind Report, 2005

We often see statements like "The 5-MW titan [in Denmark] … will average enough power for 5,000 homes,"8 or "The 108 [1.5-MW] turbines … in the Colorado Green project … produce roughly enough electricity each year to supply more than 52,000 homes."4 The first averages out to 1 kW of peak capacity per home, while the second works out to 3.1 kW of peak capacity per home. Clearly, this number will depend on the amount of wind at each locale as well as the lifestyle there in terms of electricity use.

In 2001, the yearly average electricity consumption in the USA was 1.2 kW per home12 or 0.47 kW per person. This is on a steady basis, averaged over a whole year. Now, 1.2 kW goes into 1 MW (=1,000 kW) 833 times. So if average wind power is only 20% of the peak power, as we found above for Germany, 1 MW of peak capacity would supply only a fifth of 833, or 166 homes. This is a little less than the 250-300 homes quoted in footnote 4, but the discrepancy can be accounted for if steadier winds were assumed. The Colorado example above works out to an average-to-peak wind factor of 38%, just twice the figure for Germany. This means that 1 MW of peak power in Colorado would supply 320 homes, in good agreement with the quoted 250-300 homes for the US average. In the Denmark example above, 1 MW of peak power would provide average power for 1,000 homes, about three times the number in the Colorado case. It is quite possible, however, that electricity is used much more sparingly in Denmark than in the USA.
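The homes-per-megawatt arithmetic of this paragraph fits in one small function, using the 1.2 kW average household draw from the text:

```python
# Homes supplied, on average, by 1 MW of peak wind capacity.
AVG_HOME_KW = 1.2      # average US household draw, from the text

def homes_per_peak_mw(capacity_factor):
    """Average homes served per MW of peak capacity at a given capacity factor."""
    return 1000.0 * capacity_factor / AVG_HOME_KW

print(f"German winds (20% factor):   {homes_per_peak_mw(0.20):.0f} homes per MW")  # ~167
print(f"Colorado winds (38% factor): {homes_per_peak_mw(0.38):.0f} homes per MW")  # ~317
```

The entire spread in "homes powered" claims comes down to the assumed capacity factor and household usage, which is the point of this section.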

In summary, the average power from wind turbines is only 19-38% of the installed power capability, depending on the location. The number of homes a wind farm can power also depends on the energy usage pattern in that location. Consequently, claims about the efficiency of wind farms can vary widely and cannot always be believed without checking the facts.

The Future of Energy I: Fossil Fuels[2]

There are three different types of power: backbone power, green power, and mobile power. Backbone power is the primary energy source that is always there when we need it. Green power comes from renewable energy sources which do not pollute. Mobile power drives our cars, planes, and other vehicles and has the special requirement of transportability. We will discuss each of these in turn.

Backbone Power

Only 40% of the world's energy use is in the form of electricity; the rest is used for heating and manufacturing. But it is electric power that governs our way of life in developed countries. During a hot summer day, you have probably experienced a rolling blackout. Night falls and you light a candle. So far so good, and it might even be romantic; but it is too dim to read by. You turn on the radio to find out what the problem is. It does not work. You want to watch TV or play a disk, but those do not work either. You try to call your neighbor to talk about it, but the phone is out too. Now, where is that old phone that connects directly without a power brick? Well, I have all this time to surf the web, you think. The computer is dead as a doornail, and so is the modem. A cup of hot tea would calm your nerves, but... oops! The stove is electric, and so is the hot water heater. Maybe we can take a drive in the moonlight until the power comes back on. But the garage door will not open. There is nothing to do. During the 10-h New York blackout in 1965, people did what came naturally, and the maternity hospitals were jammed nine months later... or so it was reported. This story has been debunked since then.

Heating of homes uses mostly oil and gas, but reliable electric power is still needed in a pinch. Mrs. Johnson, a widow, lives alone in her house in suburbia. The snow is so deep that oil trucks have difficulty making deliveries. The electricity goes out when a large generator at the public utility goes down. A fierce storm rages outside, and there is no sun. The gusting wind does not provide enough wind power to make up for the shortfall. The inside temperature falls below zero. Mrs. Johnson has an electric heater, but there is no power. She cannot cook without electricity. After two days, she thaws a can of soup by lying next to it in bed. On the third day, she looks at a picture of her grandchildren on her nightstand and wonders if she will ever see them again. Then, on the fourth day, the power comes back on. Yes, she will see them again. Thank goodness for backbone power! This is a dramatization, but loss of backbone power can have deadly consequences. Fortunately, most hospitals have emergency power systems that run on fossil fuels. This is one use of fossil fuels that is defensible.

Renewable energy sources are absolutely necessary for limiting greenhouse gases, but the ones that most people know about — wind, solar, and hydro — are not sufficient or dependable enough to be the primary energy source. Great strides are being made to increase the fractional contribution of these sources, but they can only supplement the primary source. That is because we cannot store energy from intermittent sources or transport that energy from where it is produced to where it is needed. Backbone power has got to be available at all times. This means that reserve generating capacity has to be built to supply power when all else fails. Backbone power keeps people alive and functional in their normal activities. Green energy can save on fuel cost, but not on capital costs, because backbone power plants still have to be built to supply the necessary standby capacity. This will be quantified in the section on wind power. Only three energy sources fulfill the requirements of backbone power: fossil fuels, fission, and fusion. Of these, only fusion energy has the prospect of being backbone, green, and safe.

Deep Drilling

There are new oil fields to be found if one is willing to drill deep enough. In addition to the Caribbean, deeply lying deposits are believed to exist in the North Sea, the Nile River Delta, the coast of Brazil, and West Africa.19 To see how hard this is, consider Chevron's Jack 2 well, 175 miles offshore in the Gulf of Mexico. The drill goes 1 mile down through water to the bottom, then four more miles down into the ground. To find such large deposits, modern supercomputers are used to analyze seismic signals in three dimensions, requiring the processing of huge amounts of data. A new generation of drilling rigs had to be built to go twice as deep as ever before. These platforms, almost as large and as dangerous as the one at Sleipner, cost half a million dollars a day to rent, but they can still be profitable if oil prices stay above $45 a barrel. This large deposit could yield 15 billion barrels, just a drop in the bucket compared with the world's proven reserves of 1,200 billion barrels. New deposits have been found that can be accessed only by horizontal drilling.20 From a central platform, pipes are drilled down and then horizontally out to deep-lying deposits kilometers away. The oil collected from these wells is then pumped to the mainland in a large pipeline. Figure 2.20a shows what a normal-size drilling rig looks like when the weather is nice. These are ships that go wherever they are needed. Storms and uncontrolled fires make oil drilling a dangerous occupation. An oil platform under less ideal circumstances is shown in Fig. 2.20b.

image072

Fig. 2.20 (a) A drilling vessel in the Gulf of Mexico (http://images.google.com); (b) The Deepwater Horizon, 2010 (National Geographic Channel, July 2010)

These words, written earlier, were brought into sharp focus when the Deepwater Horizon platform in the Gulf of Mexico exploded on April 20, 2010, killing 11 workers. The huge rig burned for days, and the oil leaking into the Gulf was uncontrolled, contaminating thousands of square miles and disrupting the fishing and shellfish industries in Louisiana. The damage to aquatic and avian wildlife is yet to be determined. Before the leak was capped in August, 4.9 million barrels of oil had been released, exceeding the 3.3 million barrels in the Ixtoc 1 blowout off the Yucatan peninsula in 1979. These numbers overwhelm the 257,000 bbls from the 1989 Exxon Valdez tanker spill in Alaska, whose effects are still felt two decades later. Energy giant BP, which operated the Deepwater Horizon, suffered severe economic losses. The accident triggered legislation to regulate and restrict deepwater drilling. Aside from ecological concerns, it is becoming apparent that it would be cheaper to develop a substitute for oil than to ferret out the last of the earth's deposits.

Modern Data

Before showing the modeling results for the modern era, it is instructive to show the amount of data now available for analysis, as opposed to what is used for paleoclimates. When one computes the global average temperature, isn't that just a weighted average over a finite number of places on the earth, say, a few hundred? Now that we have satellites, the coverage is much better. Here are three examples. Figure 1.6 shows the tremendous increase in the number of measurements of ocean temperature between the 1950s and the 1990s. Figure 1.7 shows the fine detail that satellite coverage gives on the altitude change in each part of Greenland and Antarctica. The loss of ice thickness can be seen clearly where glaciers and ice sheets have slid into the sea. Figure 1.8 shows the distribution of aerosols over the globe as obtained by opacity measurements from satellites. This is supplemented by a finite number of ground-based observations, which can also determine the size and composition of the particulate matter.