Category Archives: Nuclear Power and the Environment

Environmental Chemistry of Key Contaminants

Radioactive waste contains a wide range of isotopes, but many are either short-lived radionuclides or stable (i.e. non-radioactive) isotopes. The radionuclides that are problematic environmentally are those that are long-lived, have a high activity, are present in relatively large quantities and/or are bioavailable. Details of some key radionuclide contaminants are given in Table 4. All are present in significant quantities in the environment from nuclear weapons testing or the nuclear fuel cycle, except for 60Co; uranium and radon also occur naturally. Cobalt-60 is widely used in medical and industrial applications requiring a radiation source, and is created by neutron activation of 59Co.

Several of these radionuclides are bioavailable. Radon exists as a gas and inhalation of radon can cause cancer; it is the second leading cause of lung cancer in the USA.39 Strontium(II), an analogue for Ca²⁺, can be accumulated in bone, whilst Cs⁺ is analogous to K⁺ and so can be transported into cells via the K⁺ transport mechanisms; both 99Tc and 129I can be accumulated in the thyroid gland.40,41

The environmental fate of radionuclides is controlled by a number of factors; these will be discussed in detail in sections 4 and 5. However, the oxidation state

Table 4 Key radionuclide contaminants.

|                  | Oxidation states   | Key isotopes | Half-life    | Major decay mode |
|------------------|--------------------|--------------|--------------|------------------|
| Fission products |                    |              |              |                  |
| Strontium        | +2                 | 90Sr         | 29.1 y       | beta             |
| Technetium       | +4, +7             | 99Tc         | 2.15 × 10⁵ y | beta             |
| Iodine           | −1, 0, +5          | 129I         | 1.57 × 10⁷ y | beta, gamma      |
| Caesium          | +1                 | 137Cs        | 30.17 y      | gamma            |
| Actinides        |                    |              |              |                  |
| Uranium          | +3, +4, +5, +6     | 238U         | 4.47 × 10⁹ y | alpha            |
| Neptunium        | +3, +4, +5, +6, +7 | 237Np        | 2.14 × 10⁶ y | alpha            |
| Plutonium        | +3, +4, +5, +6, +7 | 238Pu        | 87.7 y       | alpha            |
|                  |                    | 239Pu        | 2.41 × 10⁴ y | alpha            |
|                  |                    | 240Pu        | 6.56 × 10³ y | alpha            |
|                  |                    | 241Pu        | 14.4 y       | beta             |
| Americium        | +3, +4, +5, +6, +7 | 241Am        | 432.7 y      | alpha            |
| Other            |                    |              |              |                  |
| Cobalt           | +2, +3             | 60Co         | 5.271 y      | beta, gamma      |
| Radon            | 0                  | 222Rn        | 3.8 d        | alpha            |
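The half-lives in Table 4 set the timescale over which each contaminant remains a concern, via the decay law N(t) = N₀ (1/2)^(t/t½). A minimal sketch (half-life values taken from Table 4; the function name is illustrative):

```python
def fraction_remaining(t_years, half_life_years):
    """Fraction of a radionuclide's initial inventory left after t_years."""
    return 0.5 ** (t_years / half_life_years)

# Half-lives from Table 4 (years)
sr90, cs137, tc99 = 29.1, 30.17, 2.15e5

# After a century, ~3.4 half-lives leave under 10% of the 90Sr inventory,
# whereas long-lived 99Tc is essentially undiminished.
print(fraction_remaining(100, sr90))   # ~0.09
print(fraction_remaining(100, cs137))  # ~0.10
print(fraction_remaining(100, tc99))   # ~0.9997
```

This is why 90Sr and 137Cs dominate risk over decades, while 99Tc, 129I and the long-lived actinides dominate over geological timescales.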

of the radionuclide will have a significant impact on its chemical behaviour, transport and bioavailability, particularly for redox-sensitive radionuclides. As can be seen from Table 4, the actinide elements can exist in a range of oxidation states, leading to fairly complicated chemical behaviour. The most stable oxidation state of uranium in the environment is U(VI), as UO₂²⁺, but it is also stable as U(IV) under reducing conditions.42 The most stable and dominant oxidation states of neptunium and plutonium in the environment are +V (as NpO₂⁺) and +IV, respectively. However, in the environment, neptunium can also exist in the +IV and +VI oxidation states, whilst plutonium can also be present in the +III and +V oxidation states.43 The higher, environmentally stable oxidation states (V, VI) of the actinide elements tend to be more soluble and therefore more mobile, whilst An(IV) species (An = actinide), with a high charge/radius ratio, are prone to hydrolysis and polymerisation, forming colloids and precipitates, and readily sorb to mineral surfaces.1,44

Of the fission products listed, technetium and iodine are redox active, but caesium and strontium have only one stable oxidation state each: Cs⁺ and Sr²⁺. Therefore changes in the redox environment do not directly affect the chemistry of caesium and strontium, but the environmental behaviour and bioavailability of Cs⁺ and Sr²⁺ do relate to their oxidation state. The low charge density on Cs⁺ means that it is only weakly complexed by ligands and tends to bond via electrostatic interactions rather than covalent bonding. It is also highly soluble and so is mobile in the environment, with interactions with mineral phases being the dominant mechanism of retardation.44 Like Cs⁺, Sr²⁺ does not complex strongly to ligands and tends to be soluble in the environment, but it can co-precipitate with calcium sulfate or carbonate.44,45 The mobility of technetium is primarily controlled by its oxidation state, with two stable states found in the environment: +VII and +IV. Under aerobic conditions, technetium will exist in the +VII oxidation state, as TcO₄⁻, and in this form it is highly soluble and mobile. Under more reducing conditions, Tc(IV) is stable and will tend to exist as insoluble TcO₂.2 For iodine, the most important oxidation states in the environment are −1, 0 and +V. In aqueous environments, +V (as IO₃⁻) and −1 (as I⁻) are the dominant forms, but in soils, iodine can be mostly present as organic species.1,2 The redox chemistry of cobalt is relatively simple, with just two stable oxidation states: +II and +III. Co(II) is the dominant oxidation state in solution, as it tends to be more soluble than Co(III), but Co(III) can be stabilised and mobilised by certain ligands (see section 5.3).46,47
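The redox behaviour summarised above can be captured in a small lookup; a sketch (the dictionary layout and labels are illustrative, and the species are those named in the text, not output from a speciation model):

```python
# Dominant environmental species under oxic vs. reducing conditions,
# summarising the discussion above (an illustrative lookup only).
DOMINANT_SPECIES = {
    "Tc": {"oxic": "TcO4- (Tc(VII), soluble and mobile)",
           "reducing": "TcO2 (Tc(IV), insoluble)"},
    "U":  {"oxic": "UO2(2+) (U(VI), soluble)",
           "reducing": "U(IV) (prone to hydrolysis and sorption)"},
    # Redox-insensitive fission products: one stable state each
    "Cs": {"oxic": "Cs+", "reducing": "Cs+"},
    "Sr": {"oxic": "Sr2+", "reducing": "Sr2+"},
}

def dominant_species(element, condition):
    """Return the dominant species string for 'oxic' or 'reducing' conditions."""
    return DOMINANT_SPECIES[element][condition]

print(dominant_species("Tc", "oxic"))      # TcO4- (Tc(VII), soluble and mobile)
print(dominant_species("Tc", "reducing"))  # TcO2 (Tc(IV), insoluble)
```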

Dose Constraints and Reference Levels

The concepts of dose constraints and reference levels are used in conjunction with the optimisation of protection to restrict individual doses. A level of individual dose, either as a dose constraint or a reference level, always needs to be defined. The initial intention is thus not to exceed, or to remain at, these levels, and the ambition is to reduce all doses to levels that are as low as reasonably achievable, economic and societal factors being taken into account. In planned exposure situations (with the exception of medical exposure of patients), the term used is ‘‘dose constraint’’, but for emergency and existing exposure situations the term ‘‘reference level’’ is used to describe this level of dose. The difference in terminology arises because, in planned situations, the restriction on individual doses can be applied at the planning stage, and the doses can be forecast so as to ensure that the constraint will not be exceeded. In the other situations, however, a wider range of exposures may exist, and the optimisation process may apply to initial levels of individual dose above the reference level. [Diagnostic reference levels are used in medical diagnosis (i.e., planned exposure situations) to indicate whether, in routine conditions, the levels of patient dose or administered activity from a specified imaging procedure are unusually high or low for that procedure.]

A dose constraint is thus a prospective and source-related restriction on the individual dose from a source in planned exposure situations, serving as an upper bound on the predicted dose in the optimisation of protection for that particular source. It is a level of dose above which it is unlikely that protection is optimised for a given source of exposure, and will always be lower than the pertinent dose limit. For potential exposures, the corresponding source-related restriction is called a ‘‘risk constraint’’.

For occupational exposures, the dose constraint is a value of individual dose used to limit the range of options, such that only options expected to cause doses below the constraint are considered in the process of optimisation. For public exposures, the dose constraint is an upper bound on the annual doses that members of the public could receive from the planned operation of a specified controlled source. If a dose constraint is exceeded, it is then necessary to determine whether protection has been optimised, whether the appropriate dose constraint has been selected, and whether further steps to reduce doses to acceptable levels are appropriate.

Emergency or existing controllable exposure situations are somewhat different. Here the reference levels represent the level of dose or risk above which it is judged to be inappropriate to plan to allow exposures to occur, and for which therefore protective actions should be planned and optimised. The chosen value for a reference level will depend upon the prevailing circumstances of the exposure situation. Quite obviously, when an emergency exposure situation has occurred, or an existing exposure situation has been identified, and protective actions have been implemented, doses to workers and members of the public can be measured or assessed. The reference level may then assume a different function, and serve essentially as a benchmark against which protection options can be judged retrospectively. One has to bear in mind that the distribution of doses that result from the implementation of a planned protective strategy may or may not include exposures above the reference level, depending on the success of the strategy.

Because at doses higher than 100 mSv there is an increased likelihood of deterministic effects, and a significant risk of cancer, the maximum value for a reference level should be 100 mSv incurred either acutely or in a year. Higher exposures would only be justified under extreme circumstances, either because the exposure is unavoidable or because the situation is exceptional, such as the saving of life or the prevention of a serious disaster. No other individual or societal benefit would compensate for such high exposures.

Below this, the values fall into three defined bands. They apply across all three exposure situations and refer to the projected dose over a time period that is appropriate for the situation under consideration. Constraints for planned exposures and reference levels in existing situations are conventionally expressed as an annual effective dose (mSv in a year). In emergency situations, the reference level will be expressed as the total residual dose to an individual as a result of the emergency that the regulator would plan not to exceed, either acute (and not expected to be repeated) or, in case of protracted exposure, on an annual basis.

The first band, of 1 mSv or less, applies to exposure situations where individuals receive exposures — usually planned — that may be of no direct benefit to them, but the exposure situation may be of benefit to society, the exposure of members of the public from the planned operation of nuclear power being a prime example. Constraints and reference levels in this band would be selected for situations where there is general information and environmental surveillance, monitoring or assessment, and where individuals may receive information but no training. The corresponding doses would represent a marginal increase above the natural background and be at least two orders of magnitude lower than the maximum value for a reference level, thus providing a rigorous level of protection.

The second band, greater than 1 mSv but not more than 20 mSv, applies in circumstances where individuals receive direct benefits from an exposure situation. Constraints and reference levels in this band will often be set in circumstances where there is individual surveillance or dose monitoring or assessment, and where individuals benefit from training or information — as is the case in occupational exposure from planned exposure situations. Abnormally high levels of natural background radiation, or stages in post-accident rehabilitation, may also be in this band.

The third band, greater than 20 mSv but not more than 100 mSv, applies in unusual, and often extreme, situations where actions taken to reduce exposures would be disruptive. Reference levels and, occasionally for “one-off” exposures below 50 mSv, constraints could also be set in this range in circumstances where benefits from the exposure situation are commensurately high. Action taken to reduce exposures in a radiological emergency is the main example of this type of situation. Any dose rising towards 100 mSv will almost always justify protective action. In addition, situations in which the dose threshold for deterministic effects in relevant organs or tissues could be exceeded should always require action.
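The banding described in this section can be expressed as a simple classifier; a sketch (the function name and band descriptions are illustrative shorthand, not ICRP terminology):

```python
def reference_band(dose_msv):
    """Classify a projected dose (mSv, acute or annual) into the three bands
    described above, with 100 mSv as the maximum value for a reference level."""
    if dose_msv <= 1:
        return "first band (<= 1 mSv): no direct individual benefit"
    if dose_msv <= 20:
        return "second band (1-20 mSv): direct individual benefit"
    if dose_msv <= 100:
        return "third band (20-100 mSv): unusual or extreme situations"
    return "above 100 mSv: justifiable only in extreme circumstances"

print(reference_band(0.5))
print(reference_band(15))
print(reference_band(85))
```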

Expansion of Nuclear Power

The large-scale use of nuclear power during the 1950s and 1960s was concentrated in the USA, UK, Russia and Canada. Whilst some Western European countries began developing research programmes (often with experimental reactors), many full-scale nuclear plants did not start producing electricity until the late 1960s and 1970s (Sweden, Japan, West Germany). West Germany was the first non-weapons nation to start up a nuclear power station, in November 1960. However, a commitment to nuclear power is at the heart of the European Union. The Euratom Treaty, signed in 1957, is one of the founding treaties of the European Union. The treaty recognised the need for an expansion in the supply of electricity for European economic growth, stating that ‘‘nuclear energy represents an essential resource for the development and invigoration of industry’’. It was also touted as a solution to the urban pollution, caused primarily by coal-fired power stations located close to urban areas, that plagued many of Europe’s cities in the immediate post-war period.17

Whilst the state took control of planning and construction of nuclear plants in many European countries, in the United States the federal government was keen for the private sector to invest in nuclear power. This proved extremely difficult in a situation where cheap oil and coal were available to the energy utilities. As a result, the federal government financed and built a number of demonstration reactors to prove to the private sector that nuclear power was feasible. Westinghouse subsequently designed the first fully commercial PWR, a 250-MWe reactor at Yankee Rowe, which started up in 1960. Meanwhile a 250-MWe boiling water reactor (BWR), Dresden-1, was designed by General Electric and also started up in 1960.[10] [11]

In an attempt to build up the market for nuclear reactors, the two major companies, Westinghouse and General Electric, initially sustained losses of up to $1 billion per plant. This high-risk strategy eventually paid off as a rush of orders from energy utilities ensued, amounting to 44 reactors during 1966-1967 alone (Scurlock 2007b). By the end of the 1960s, orders were being placed for PWR and BWR reactor units of more than 1000 MWe.[12]

A pamphlet published by the nuclear company Westinghouse in 1967 captures the prevailing optimism about the promise of nuclear power at the time:

‘‘It will give us all the power we need and more. That’s what it’s all about. Power seemingly without end. Power to do everything that man is destined to do. We have found what may be called perpetual youth’’.18

The first full-scale civilian reactor to provide electricity to a national grid, a gas-graphite reactor, came online in the UK at Calder Hall (on the same site as the Windscale plutonium piles) and was officially opened with much fanfare by Queen Elizabeth II on 17 October 1956. Calder Hall eventually comprised four ‘‘Magnox’’ reactors (so called because of the alloy cladding around the fuel rods), each generating 50 MWe, with the plants having both a commercial and a military use.[13] [14] [15]

In February 1955, a White Paper, A Programme of Nuclear Power, took the engineering industry by surprise when it announced a major programme of 12 Magnox stations to be built between 1957 and 1962 (ref. 19, 20). The White Paper justified this by arguing that there would be a growth in demand for electricity that the coal industry could not meet, and that over time electricity from nuclear stations would be cheaper than coal.21 There was a naive assumption that nuclear plants would be no more challenging to build than coal-fired stations. The 1955 White Paper was brimming with such confidence, suggesting that the Magnox programme could contribute 25% of the nation’s electricity at a cost, in today’s terms, of $5.7 billion. The Suez crisis created concerns over energy independence, which led to increased calls for more nuclear power.23 However, the Magnox programme was characterised by cost escalation and time overruns, which reflected problems in the tendering process: competing consortia won contracts at individual plants, which meant novel design changes at each site, so economies of scale could not be realised.

Sir John Cockcroft, former head of Harwell, advised the Government that electricity generated from nuclear power would in all probability be more expensive than alternatives (such as coal). Eventually the Labour government conceded that coal-fired power plants were 25% cheaper than nuclear. Nuclear power stations were additionally seen as useful for reducing the bargaining power of the coal miners’ unions, with uranium regarded as ‘‘strike proof’’ given that only small amounts were needed to power reactors.24 Indeed, one of the earliest campaigns against nuclear power in the UK was initiated by the National Coal Board (NCB),[16] which tried to expose the subsidies provided to the nuclear industry by government. The NCB did not believe the CEGB’s claim that nuclear was cheaper than coal, but was frustrated because it could not gain access to the cost data, which was covered by the secrecy laws that made it impossible to obtain information on nuclear matters.

As a result of the 1964 White Paper, The Second Nuclear Power Programme, the government chose the 600-MWe Advanced Gas-cooled Reactor (AGR), which would complement the 4190 MWe generated by the Magnox stations with 8380 MWe of AGR capacity. AGRs were eventually built at seven sites across the UK.25 Each station was built by a different consortium, which drove up costs and, as with the Magnox build, hindered economies of scale. The White Paper on fuel policy in 1967 further reinforced the preference for nuclear over coal.[17] [18] [19] A government statement to the House of Commons in 1963 stated that nuclear generation was more than twice as expensive as coal. The ‘‘plutonium credit’’, which assigned a value to the plutonium produced, was used, initially secretly, to improve the economic case, although the operators of the power stations were never paid this credit. During this period of British history, even conservationists expressed a preference for nuclear power over coal mining because of its perceived lesser negative impact on the natural environment.27 Indeed, as late as 1972, an article appeared in the journal Environment arguing that ‘‘there has been very little public opposition to nuclear power in England’’.

Government regulation and liability guarantees were central to the success of nuclear power. In the USA, for example, the Price-Anderson Act of 1957, which established a ceiling of $560 million for private sector liability for nuclear accidents, enabled private sector involvement in nuclear power production to proceed.

One event was to provide a huge boost to the fortunes of the nuclear industry: the OPEC oil crisis of 1973-1974, which saw oil prices quadruple overnight and made energy independence and energy security key policy issues worldwide. In France, for example, the result was a government review culminating in the ‘‘Messmer plan’’, whose aim was to secure energy independence. As a result, 56 nuclear reactors were eventually built.29

Prior to 1973, much of France’s electricity came from oil, and the oil crisis exposed a worrying dependence on foreign states. Lacking domestic fossil fuels, the country found itself highly vulnerable to sudden spikes in the price of oil. Government leaders saw a centrally planned nuclear programme, launched through collaboration with Electricité de France (EdF), as a rational solution. The nuclear option was also portrayed as a resurrection of la patrie: frequent comparisons were made between reactor sites and such hallowed monuments as Notre Dame and the Eiffel Tower. During these years of expansion nuclear energy seemed to embody modernisation, industrialisation and aspirations for technological achievement. In France nuclear power was seen to contribute to the ‘‘radiance of France’’, counteracting the country’s rapidly declining influence in world politics.30,31 As Hecht observes, ‘‘The image of a radiant and glorious France appeared repeatedly in the discourse of engineers, administrators, labour militants, journalists, and local elected officials. These men actively cultivated the notion that national radiance would emanate from technological prowess’’.32

As a result, the percentage of electricity generated by nuclear power in France rose from 7% in 1973 to 78% by 1994 (ref. 33), a level which it has maintained to this day. Other countries such as Japan vowed to intensify their commissioning of new nuclear power stations as a result of the oil crisis; this was particularly acute in nations that had no substantial indigenous energy supplies and were dependent on imports of oil, coal and gas. Nuclear energy has been a national strategic priority in Japan since 1973. Indeed, across Asia during the 1970s and 1980s a number of countries began to buy and license Western nuclear technologies, leading to a situation today where Japan, South Korea, India and China have burgeoning domestic nuclear R&D capabilities.[20] [21]

The period following the oil crisis then witnessed the biggest increase in nuclear plant orders ever seen, in France, Belgium, Sweden, Japan and the USSR.34 In this period of exponential growth a total of 423 nuclear reactors were built from 1966 to 1985 (IAEA 2008). The USSR began selling reactors to Bulgaria, Czechoslovakia, East Germany and Poland, and even to Finland.

During this period, the nuclear industry was at pains to demonstrate the benefits nuclear power had for consumers. In a 1975 survey of the 24 American utilities which operated nuclear power plants, the industry claimed that $750 million had been saved in customers’ utility bills in 1974, compared with the cost had the electricity come from fossil fuels only.35 We can see the confidence that policy makers had during this time in the ability of nuclear energy to be the main source of electricity. In 1974, the Nixon administration launched ‘‘Project Independence’’, which optimistically called for nuclear power to provide 50% of the nation’s energy needs by the year 2000.36 However, the economic recession that followed in the wake of the oil crisis led to a drastic reduction in electricity demand in many countries, which steadily reduced the attractiveness of nuclear plants (and of coal plants) for utilities and governments.

Whilst we can view this period as one characterised by optimistic expansion and public acceptance, it was also a period when a ‘‘counter-expertise’’ was slowly forming, primarily around environmental NGOs and academics. Decision-making itself remained firmly in the hands of engineers, and governments sought to reassure the public rather than foster transparency, but in some countries there were attempts to move away from the DAD approach to decision making (decide, announce, defend). Books with titles such as Man and the Atom: Building a New World through Nuclear Energy, by the celebrated scientist Glenn Seaborg (1971), celebrated the progressive potential of atomic research. Other books were published, however, which countered this optimism, such as Curtis and Hogan’s (1969) The Perils of the Peaceful Atom: The Myth of Safe Nuclear Plants. Although the optimistic vision remained dominant, voices were being raised which had begun to question the unbridled optimism of an earlier age.

Contrary to conventional wisdom, there was even opposition to nuclear power in France, particularly at the local level. In 1976, 55% of the French population were hostile to nuclear power and the antinuclear movement had managed to penetrate local representative politics.38 However, a number of factors, including pro-nuclear trade unions, cross-party consensus on the orientation of energy policy and the fact that the French electoral system made it difficult for smaller parties to enter parliament, impeded this movement from gaining influence on national policy.39

In the UK, the secrecy and closed decision-making structure that had been a feature of the nuclear industry also began to come under challenge during the 1970s, with the government pressured to adopt a more open policy style, epitomised by the six-month Windscale Public Inquiry in 1977. The inquiry was held in response to British Nuclear Fuels’ (BNFL) application to build a thermal oxide reprocessing plant (THORP) for national and international spent nuclear fuels (Hall 1986), a project which critics argued would turn the UK into the ‘‘World’s Nuclear Wastebin’’.40 While the inquiry was hailed by some as a ‘‘landmark in British nuclear policy making’’,41 for its broad scope and participatory approach,42 the final report was criticised for failing to justify why the arguments of the opposition had been rejected.43

In countries which are in the vanguard of new nuclear build today, the oil crisis did not have the same impact as it did in the West. In China, for example, given its reliance on domestic coal for the vast majority of its energy needs, there was no impulse to invest in nuclear. Whilst in 1970 the then Chinese premier, Zhou Enlai, argued that China needed to explore the peaceful uses of nuclear energy,[22] the first nuclear plant did not begin construction until 1985 and only became operational in 1991.[23] A number of factors can help to explain this delayed development relative to the West. State funds to invest in nuclear plants were not made available by the Chinese government from the 1970s through to the 1990s. Chinese policymakers thought that domestic coal reserves were sufficient to meet the growing energy needs of the country; furthermore, until 2005 nuclear energy was not part of the nation’s strategic energy plan, prior developments being ‘‘haphazard and lacking strategic vision’’.45 However, whilst public opinion in the West has waxed and waned in relation to support for nuclear power, in China public opinion is strongly supportive.46

In the first two decades of operation there had been reactor accidents in Canada, the UK, the USA and Switzerland, leading to a small number of deaths and millions of dollars of damage; none, however, had been of sufficient severity to throw the industry into turmoil and lead to a reversal in public confidence. This was about to change.

Radiation Exposures and Health Impacts

The ban on milk consumption significantly reduced the radiological impact of the accident on the population. Jackson and Jones7 estimated that the ban averted 75% of the ingestion dose to children, and a higher proportion of the ingestion dose to adults. The maximum dose to the thyroid of children in the local area was 160 mSv, with average doses being in the range 10-100 mSv.7 As a result of the milk ban, the main dose pathway was inhalation,10 though in children there was also a significant contribution from ingestion.7 The collective effective dose equivalent from external radiation, inhalation and ingestion was approximately 1900 person-Sv in the UK and 100 person-Sv in the rest of Northern Europe.10 Estimates imply that about 50% of the collective effective dose equivalent was due to inhalation, about 35% to ingestion of milk and other foods, and the remainder to external radiation from the cloud and ground deposits. Clarke11 estimated approximately 100 fatal and 100 non-fatal cancers in the UK population (over a 40-50 year period) resulting from the Windscale fire release. Polonium-210 was expected to give rise to the majority of fatal cancers. Thyroid cancer (from 131I), being in most cases successfully treatable, would be expected to form the majority of non-fatal cancers.

The median dose to 466 workers involved in fighting the fire and in clean-up work was 3.52 mSv, with the maximum individual dose being recorded as 43.9 mSv (determined from monthly dose monitoring records for October 1957).12 It was reported12 that the collective dose to these workers was 2.33 person-Sv for October 1957, which was approximately double the average monthly collective dose for 1957. As might be expected from the low median individual and collective doses, a study12 of mortality and the number of registered cancers during the period 1957-97 was ‘‘unable to detect any effect of the 1957 fire upon the mortality and cancer morbidity experience of those workers involved in it’’.
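The quoted figures are internally consistent: dividing the October 1957 collective dose by the number of workers gives a mean individual dose of 5 mSv, somewhat above the 3.52 mSv median, as expected for a right-skewed dose distribution. A quick check:

```python
workers = 466
collective_dose_person_sv = 2.33   # October 1957, from ref. 12
median_dose_msv = 3.52             # from ref. 12

# Mean individual dose = collective dose / number of workers
mean_dose_msv = collective_dose_person_sv / workers * 1000.0
print(round(mean_dose_msv, 2))          # 5.0
print(mean_dose_msv > median_dose_msv)  # True
```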

Selection of a Decommissioning Approach

There are a number of decisions to be made for each building that is to be decommissioned. This section outlines a set of criteria that can be used to select a preferred decommissioning approach.15 These criteria reflect the issues discussed earlier. A range of decommissioning approaches that may be assessed in this way are discussed elsewhere,16-18 and include issues of prompt decommissioning versus deferred decommissioning as well as selection of an end state.

The criteria are shown in Figure 5. The criteria form a hierarchy which, at the top level, has the three pillars of sustainability. These objectives are expanded into criteria and sub-criteria. Environmental impact is divided into radiological impact on man and the environment, resource usage, non-radiological discharges, local intrusion (which includes such factors as noise and visual pollution) and hazard potential (a measure developed as part of the NDA prioritisation process).2,19

These criteria provide a complete set of issues for the assessment of decommissioning projects; however, for any given assessment it may be useful to omit criteria which do not differ significantly between the options under consideration, and it may also be convenient to subdivide other criteria to make best use of more readily available metrics.

It is important that the criteria are assessed over the whole lifecycle of each proposed decommissioning approach and that the end points are the same in each case in order to obtain the preferred solution.
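One common way to combine such criteria, once each option has been scored over its whole lifecycle, is a weighted sum; the criteria names, weights and scores below are purely hypothetical illustrations, not values from the NDA scheme:

```python
# Hypothetical weighted-sum comparison of two decommissioning options.
# Scores are impact ratings (lower is better); weights sum to 1.
def weighted_impact(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

weights = {"radiological impact": 0.4, "resource usage": 0.2,
           "local intrusion": 0.2, "hazard potential": 0.2}

prompt_decom   = {"radiological impact": 3, "resource usage": 4,
                  "local intrusion": 4, "hazard potential": 1}
deferred_decom = {"radiological impact": 2, "resource usage": 2,
                  "local intrusion": 2, "hazard potential": 4}

for name, scores in [("prompt", prompt_decom), ("deferred", deferred_decom)]:
    print(name, round(weighted_impact(scores, weights), 2))
```

The same scoring can be re-run with criteria omitted or subdivided, as suggested above, provided every option is assessed to the same end point.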

If this approach is followed then optimum approaches to decommissioning will be obtained. For a complex site with many facilities, or for the UK as a whole, deferred decommissioning may be required in order to make the programme of work fit within the available annual budget. Such issues can be investigated using the same criteria, with the impact in terms of cost, environment and social factors assessed for leaving a plant in a state of surveillance and maintenance or care and maintenance.

By these methods, programmes of work delivering the maximum environmental benefit as quickly as possible can be derived.

Table 2 Environmental impacts of decommissioning.

| Stage | Discharges | Use of resources | HLW/ILW waste disposal | LLW/VLLW waste disposal |
|---|---|---|---|---|
| POCO | Significant discharges | Use existing effluent treatment plants and chemicals | Significant arisings to existing waste routes | Low |
| Initial decommissioning | Some discharges | May require new effluent treatment plant; may require aggressive cleaning chemicals | Some arisings to existing/new waste routes | Low |
| Surveillance and maintenance | Ongoing discharges to service streams | Ongoing provision of services and associated clean up equipment | Very low arisings of secondary waste | Low arisings of secondary waste |
| Interim decommissioning | Some discharges | Equipment for vessel/pipework removal and size reduction required | Primary wastes generated: dismantled vessels and pipework; wastes require packaging | Primary wastes generated: dismantled vessels and pipework; wastes require packaging |
| Care and maintenance | Negligible | Negligible | Negligible | Negligible |
| Final decommissioning | Low activity dusts and cleaned liquors | Significant use of retrievals equipment and waste packaging | Small quantities expected under some facilities | Very large quantities expected |
| Groundwater remediation and contaminated land | Low | Significant use of retrievals equipment and waste packaging | Small quantities expected under some facilities | Very large quantities expected |


Table 3 Environmental impacts of buildings between decommissioning stages.

| | Potential for release in maloperation | Requirement for services with associated discharges and secondary waste | Potential migration of species from contaminated land |
|---|---|---|---|
| Following completion of operations | Large quantities of mobile species may be released in maloperation conditions | Water, steam, electricity | |

POCO

may produce liquid effluents. In each case suppression of the effluent produc­tion may result in formation of secondary wastes, such as filters, which then require ultimate disposal. Natural resources are also required for waste
packaging and construction of decommissioning equipment, some of which may not be able to be reused.

In between decommissioning stages, environmental impacts include the need to provide services such as water and air, and to treat the associated discharges; clean-up and secondary waste generation; potential discharges in the event of a major building failure, perhaps as a result of a natural disaster or malicious action; and migration of species in the ground beneath the building from previous spillages.

8 Conclusions

Decommissioning is an important phase in the lifecycle of any nuclear facility, covering the transition from an operating facility to its planned end state. The United Kingdom has had a significant civil nuclear operation for many years, and as such has a significant decommissioning challenge associated with both fuel cycle plant and reactors. The decommissioning of legacy plant carries significant financial liabilities and generates large volumes of waste.

Whilst the process of decommissioning a facility can be described generically as a series of stages, the selection of a decommissioning strategy is typically plant-, site- or region-specific. Decommissioning can be driven by many factors, ranging from a desire to reduce the hazard associated with an ageing facility through to a need to release the site for re-use. This paper has highlighted a series of criteria that should be considered when selecting a decommissioning strategy, of which environmental and safety factors are of fundamental importance.

Atmospheric Deposition

Following atmospheric release, vegetation intercepts radionuclides from wet, dry or occult deposition,19 and the remaining radionuclides are deposited on the ground surface. The fraction of radionuclides intercepted by vegetation depends on the developmental stage of the plant and the amount of above-ground biomass; consequently, the time of year is important in determining how much radionuclide is retained initially on plant surfaces. Leafy vegetables, because of their large surface area, intercept a high fraction of radionuclides, as is currently being demonstrated around the Fukushima site, where radioiodine and radiocaesium activity concentrations in spinach are high compared with other crops.66

For dry deposition, interception is more effective for small particles and reactive gases than for larger particles. Interception of wet-deposited radio­nuclides is a result of the complex interaction of the chemical form of the element and the stage of development of the plant.

The loss of radionuclides from plant surfaces is termed "weathering", which is influenced by a number of physical processes, including wash-off by rain (or irrigation in agricultural systems), surface abrasion, wind action, tissue senescence, leaf fall, herbivore grazing, growth, volatilisation and evaporation.20
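For modelling purposes, these weathering losses are commonly lumped together with radioactive decay into a single effective half-life for activity retained on vegetation, using 1/T_eff = 1/T_phys + 1/T_weathering (loss rates acting in parallel add). The sketch below illustrates the arithmetic; the 14-day weathering half-life is an illustrative assumption, not a value from this chapter.

```python
import math

def effective_half_life(t_phys: float, t_weathering: float) -> float:
    """Combine physical decay and weathering loss (both in days).

    Parallel first-order loss processes add as rate constants,
    so 1/T_eff = 1/T_phys + 1/T_weathering.
    """
    return 1.0 / (1.0 / t_phys + 1.0 / t_weathering)

def fraction_remaining(t_eff: float, t_days: float) -> float:
    """Fraction of initially intercepted activity still on the plant after t_days."""
    return math.exp(-math.log(2) * t_days / t_eff)

# 137Cs: physical half-life 30.17 y (Table 4); 14-day weathering
# half-life is an assumed illustrative value.
t_eff = effective_half_life(30.17 * 365.25, 14.0)
print(round(t_eff, 1))                            # ~14.0 days
print(round(fraction_remaining(t_eff, 28.0), 2))  # ~0.25 after two effective half-lives
```

Because the assumed weathering half-life is so much shorter than the physical half-life of 137Cs, the effective half-life is essentially the weathering half-life: physical decay is negligible on this timescale.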

Direct ingestion by animals of radionuclides intercepted by vegetation can be an important contributor to radionuclide intake.

Uranium Mining

There is a comprehensive discussion of uranium mining on the World Nuclear Association website (http://www.world-nuclear.org/), on which the following discussion is largely based. The first step in the nuclear fuel cycle is the mining of uranium ore. While high grade ores are still available, a significant proportion of uranium mining is now carried out by extraction of large volumes of easily accessed low grade ore (grade typically a few hundred ppm U3O8) from open cast mines. Canada, Australia, Kazakhstan, Niger, Russia and Namibia presently produce most of the world's uranium. World uranium production has increased by almost 50% over the last decade, to over 50 000 tonnes in 2009.

As well as the naturally occurring uranium isotopes, 234U, 235U and 238U, uranium ores contain a wide range of other radioisotopes, formed in situ as intermediates in the decay of uranium to stable lead isotopes. These are dominated by the decay products of 238U (see Figure 1). Uranium is extracted from crushed ore by leaching, usually with either sulfuric acid or sodium carbonate solution, then concentrated from the leachate by solvent extraction or ion exchange. Most of the decay product radionuclides (Ra and below) are left in the wastes. Especially where ore grades are low, large volumes of these wastes ("tailings") arise, and uranium mine wastes are often relatively radioactive. In total, around 940 Mt of tailings have been created.1 These wastes require careful management to prevent the spread of contamination and associated health risks.

Figure 1 Decay products derived from 238U. Downward arrows denote α-decays, while upward arrows denote β-decays.

A particularly challenging example is the Erzgebirge of eastern Germany (http://www.wise-uranium.org/uwis.html), where 216 000 tonnes of uranium were extracted between 1945 and 1990. These mining activities affected an area of about 100 km2, primarily around five mine sites and two ore processing sites. The wastes included 311 million m3 of waste rock and a further 178 million m3 of tailings. The tailings covered a total area of almost 600 hectares, to a maximum thickness of 70 m. A 15-year remediation programme, costing around €6 billion and now largely complete, has been required to stabilise and restore the area.
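The persistence of uranium and its long-lived daughters follows the standard decay law, N(t) = N0 × 2^(−t/t½). A minimal sketch, using half-lives quoted in Table 4 of this chapter:

```python
# Half-lives in years, taken from Table 4 of this chapter.
HALF_LIFE_Y = {
    "90Sr": 29.1,
    "137Cs": 30.17,
    "239Pu": 2.41e4,
    "238U": 4.47e9,
}

def fraction_remaining(isotope: str, years: float) -> float:
    """Fraction of the original atoms still present after `years`,
    from N(t) = N0 * 2**(-t / t_half)."""
    return 2.0 ** (-years / HALF_LIFE_Y[isotope])

# One 30.17-year half-life of 137Cs leaves exactly half:
print(fraction_remaining("137Cs", 30.17))  # 0.5
# 238U barely decays on a human timescale:
print(fraction_remaining("238U", 100.0))   # ~0.99999998
```

The contrast between the fission products (decades) and 238U (billions of years) is why tailings remain radioactive essentially indefinitely: the activity is sustained by the slow decay of the parent uranium left in the waste.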

Dounreay

Another former RAF site, Dounreay, became the centre for the UK's fast reactor research and development in 1954. Commercial energy production began in 1962, when Dounreay became home to the first fast reactor in the world to supply energy to the grid. However, fast reactor technology proved more expensive than first thought, and all fast reactor programmes ceased operations in 1994. Reprocessing and fuel fabrication operations ended in 1996 and 2004, respectively. Dounreay is now wholly a decommissioning site, owned by the NDA and run by Dounreay Site Restoration Ltd. The site closure programme is scheduled to be completed by 2025 at an estimated cost of £2.6 bn. Over the course of decommissioning, Dounreay is expected to generate a lifetime waste of 97 126 m3 of LLW, 3164 m3 of ILW and 0 m3 of high level waste (HLW).12 Dounreay has a legacy of irradiated nuclear fuel particles which were discharged into the sea as a result of reprocessing activities during the 1960s and 1970s. These particles have been detected on the seabed around Dounreay, with the most hazardous fragments located close to the old discharge point. Their disintegration is believed to be the source of smaller, less hazardous particles detected on local beaches. Around 1000 significant (>10^6 Bq of 137Cs), 1000 relevant (10^5 to 10^6 Bq of 137Cs) and 3000 minor (<10^5 Bq of 137Cs) particles are thought to be present within the main particle plume offshore from Dounreay.13 Monitoring of the particles is expected to last into the 2020s, at a total cost estimated at £18-25 million.
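The three particle categories are defined purely by 137Cs activity bands, so assigning a measured particle to a category is a simple threshold test. A sketch; the function name is illustrative, and the treatment of the exact band edges (assigned here to the higher category) is an assumption, since the chapter does not specify it:

```python
def classify_particle(cs137_bq: float) -> str:
    """Classify a fuel particle by its 137Cs activity, using the bands
    applied to the Dounreay offshore plume: significant (>10^6 Bq),
    relevant (10^5 to 10^6 Bq) and minor (<10^5 Bq).

    Boundary values are assumed to fall into the higher category.
    """
    if cs137_bq >= 1e6:
        return "significant"
    if cs137_bq >= 1e5:
        return "relevant"
    return "minor"

print(classify_particle(2.0e6))  # significant
print(classify_particle(3.5e5))  # relevant
print(classify_particle(4.0e4))  # minor
```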

On site, there are pockets of caesium-137 contamination with activities greater than 4 Bq g^-1, although the majority of contamination over the site is below 0.4 Bq g^-1.14 Between 1959 and 1971, solid ILW was disposed of in the Dounreay waste shaft. A purpose-built wet silo, a large underground concrete vault filled with water, was constructed in 1971 as an alternative to the shaft, after which solid ILW was tipped into the silo. Large items too big for the silo continued to be disposed of down the shaft until 1977, when an explosion in the airspace above the water column damaged the shaft cover. There are uncertainties over the exact contents of the shaft, which are thought to include contaminated equipment, chemicals, natural uranium fuel, radioactive sources and sludges.15 A total of 703 m3 of waste is covered by a water column 8 m deep; this is below sea level, so that groundwater flow is towards the shaft.

Management of Land Contaminated by the Nuclear Legacy

Guiding Principles and Timeline

UK GDF planning is in its infancy and subject to change; however, a number of important guiding principles for GDF implementation were outlined in the MRWS White Paper:1

extended history of nuclear power generation and nuclear weapons production (see section 3.3.4).

(iii) The GDF design process will be informed by international experience and best practice (see section 3.2 and Table 1).

(iv) The Government currently favours the construction of a single GDF that is capable of housing all current and potential future HAWs and spent fuel (if this is declared as waste). The construction of separate GDFs at one or multiple sites (one for HLW/spent fuel and one for ILW) is also possible, but will have an increased cost and environmental impact compared with a single GDF.

(v) Economic and security considerations favour rendering GDF wastes as irretrievable in the long term, but planning, design and construction must be conducted in a way that does not exclude the option of a relatively extended period of retrievability pending a final decision.

(vi) As GDF implementation will take several decades, HAWs should be conditioned to increase stability (see section 3.3.5) and interim storage must be improved to ensure the safe containment of wastes prior to GDF emplacement (see section 3.3.6).

(vii) Further to the MRWS White Paper, the NDA-RWMD published a summary report Geological Disposal: Steps towards Implementation14 that (i) outlined the preparatory work undertaken by the NDA in advance of a final GDF site decision, and (ii) identified a prospective timeline for GDF implementation. Importantly, the preparatory work has involved the development of several GDF reference scenarios (including geological and engineering considerations), and this is discussed in section 3.3.7. The prospective timeline estimates initial GDF waste emplacement in 2040 and site operation over several decades, during which waste will be monitored and could potentially be retrieved (see Figure 3).

Environmental Radiological Protection

There are fundamental differences between determining the risk to humans following exposure to radiation and determining the risks to other organisms.53 Human risk analyses largely focus on cancer risks to individuals. Dose-response relationships are sufficiently well known that risk factors (i.e. the probability of lethality from cancer per unit of dose) are established. In contrast, ecological risk to wildlife is generally concerned with populations of plants and animals. For most organisms, cancer induction is not relevant, and suitable endpoints include morbidity (functioning less well), reduced reproductive success, mortality and chromosomal damage. The dose-response relationships for these endpoints are not established for many wildlife groups, and therefore there are no well-established, quantified risk factors that equate dose to the probability of an outcome.
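For humans, then, a simple linear risk-factor calculation is possible. A minimal sketch; the 5.5% per sievert default is the ICRP's nominal population-averaged cancer risk coefficient, used here only to illustrate the arithmetic and not taken from this chapter:

```python
def fatal_cancer_risk(dose_sv: float, risk_per_sv: float = 0.055) -> float:
    """Linear no-threshold estimate: probability of fatal cancer
    = dose (Sv) x risk factor (per Sv).

    The 0.055/Sv default is the ICRP's nominal whole-population
    cancer risk coefficient, quoted here as an illustrative value.
    """
    return dose_sv * risk_per_sv

# A 0.1 Sv dose corresponds to a lifetime risk of roughly 0.55%.
print(fatal_cancer_risk(0.1))
```

No analogous one-line calculation exists for wildlife endpoints such as reduced reproductive success, which is precisely the gap the text describes.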

The endpoints considered to be most relevant in determining risks to wildlife are increased mortality, increased morbidity and decreased reproductive output. Of the three, changes in reproduction are thought to be the most sensitive to radiological exposures and relevant for the protection of wildlife populations (populations rather than individuals being likely to be the object of protection for environmental assessments).54 Much more data are needed, however, before we can confidently predict population level impacts to wildlife as a function of radiological exposures.55 Data are particularly scarce for the chronic, low-level exposures for which most assessments will be used. The available data on the biological effects of ionising radiation on wildlife have been compiled from the literature into an online database called the FREDERICA radiation effects database (http://www.frederica-online.org).56