Nuclear Power and the Environment

Geological Disposal

1.1 The GDF Concept

Geological disposal involves the emplacement and isolation of higher activity wastes (HAWs) in an underground repository (a geological disposal facility, GDF), housed deep inside a suitable rock formation (see Figure 2). For UK purposes, the definition of geological disposal is ‘‘burial underground (200-1000 m) of radioactive waste in a purpose built facility with no intention to retrieve’’.1,4,5 Geological disposal facilities utilise a multiple barrier concept, whereby several engineered barriers are intended to work together with the host geology to contain and retard the radionuclides that are present in radioactive wastes. The components of a multiple barrier GDF typically include (see Table 1):

Figure 2 Schematic representation of a generic co-located geological disposal facility for HLW/spent fuel (SF) and ILW/LLW.14 (Reproduced with permission from the UK NDA).

(i) The waste form. Wastes are conditioned to convert them into a stable, solid waste form; for example, HLW from nuclear fuel processing is vitrified to make it into an insoluble waste form. In the case of ILWs, grout encapsulation buffers the pH of the waste to hyperalkaline conditions under which a range of radiologically significant radionuclides (including the actinides) are predicted to be poorly soluble.

(ii) The waste container. The conditioned waste form is encapsulated in a container prior to disposal, creating a waste package. For example, grouted ILW is encapsulated in steel to provide mechanical stability. Furthermore, as the GDF evolves, steel corrosion creates reducing conditions that should retard the mobility of some radionuclides, particularly the actinides.

(iii) Buffer materials. Buffer materials are directly emplaced around the waste package. The materials used are chosen to provide beneficial functions, for example, to control the chemical or flow conditions in the repository during GDF evolution.

(iv) Backfill. Backfill is used to pack the GDF excavation tunnels, shafts, and drifts. The materials used must have the mechanical strength to support the GDF structure and are chosen to complement waste package conditioning and to allow further conditioning of the GDF to retard radionuclide mobility by pH, redox and/or flow control.

(v) Sealing systems. Highly impermeable sealing materials are required to control GDF groundwater ingress during construction and after waste emplacement.

(vi) Geology. The GDF host geology provides the final barrier for the repository. The geological barrier is likely to provide a number of beneficial functions, for example, it could support low groundwater flow or contain minerals and surfaces that sorb radionuclides from solution.

Once a GDF is closed, natural hydrochemical and biogeochemical processes will act to degrade the engineered structure, and the multiple engineered barriers are expected to contain the waste for several thousand years.1 After this engineered containment period, it is expected that groundwaters will have penetrated the backfill and canisters, and ultimately will have entered the waste packages, dissolving some fraction of the radionuclides. The partially degraded barriers will consist of a range of evolved mineral phases including iron oxides and aged cement phases, and for a cementitious repository, the pH is predicted to evolve from hyperalkaline to a more mildly alkaline state. Overall, the evolved system is likely to limit the mobilisation of radionuclides from the wasteform for several thousand to several tens of thousands of years. Nonetheless, over geological time radionuclides will be transported into the surrounding geological environment, which will have been affected by the alkaline fluids from the GDF. In such instances, the radionuclides are likely to dilute and disperse, and sorption reactions with the surrounding rock and associated minerals and surfaces are intended to limit radionuclide transport into the biosphere to ‘‘acceptable levels’’. Clearly, a key challenge in successful GDF implementation is the associated communication of these uncertainties to the relevant stakeholder and public audiences. It is clear that high quality, independent, peer reviewed science is essential to allow full scrutiny of the safety case for geological disposal. It is also clear that communicating these scientific findings, coupled to explanation of the proposed GDF concept as it develops, in a clear and transparent way is likely to be pivotal in developing the ‘‘trust’’ between all players that is needed for movement toward GDF construction.

Quantification of Transfer to Animals

For human food chains, the transfer of radionuclides to milk and meat has commonly been quantified using the transfer coefficient, defined as the equilibrium ratio between the radionuclide activity concentration in milk (Fm; d l−1) or meat (Ff; d kg−1) and the daily dietary radionuclide intake. Transfer coefficients for smaller animals are higher than those for larger animals, and those for adults are lower than those for (smaller) young livestock. Beresford et al.32 suggested that much of this difference observed in transfer coefficients arises because they incorporate dry matter intake, which increases with animal size, and suggested that the concentration ratio between the activity concentration in an animal product and diet may be a less variable and more generic parameter (later substantiated in the IAEA handbook28). Mean concentration ratios reported in IAEA28 for milk are highest for Cs (0.15) and for essential elements including I (0.46).68,69 The transfer of caesium and/or iodine isotopes to milk has required mitigation after the Windscale, Chernobyl and Fukushima accidents.
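Written out schematically, with units as implied by the definitions above (the symbols C_milk, C_meat and C_diet are introduced here only for illustration and are not defined explicitly in the chapter), these quantities take the forms:

\[
F_m = \frac{C_{\text{milk}}\ \text{(Bq l}^{-1}\text{)}}{\text{daily intake (Bq d}^{-1}\text{)}}\ \ \text{(d l}^{-1}\text{)}, \qquad
F_f = \frac{C_{\text{meat}}\ \text{(Bq kg}^{-1}\text{)}}{\text{daily intake (Bq d}^{-1}\text{)}}\ \ \text{(d kg}^{-1}\text{)}, \qquad
CR = \frac{C_{\text{animal product}}}{C_{\text{diet}}}
\]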

For wildlife assessment, most approaches use concentration ratios for at least some organisms. For terrestrial animals, the CR is most usually based on the whole organism activity concentration compared to the soil activity concentration (see Equation (2)).
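Equation (2) itself is not reproduced in this excerpt; in the form commonly used for terrestrial wildlife assessments it is, approximately:

\[
CR_{\mathrm{wo}} = \frac{\text{activity concentration in whole organism (Bq kg}^{-1}\text{ fresh weight)}}{\text{activity concentration in soil (Bq kg}^{-1}\text{ dry weight)}}
\]

(The fresh-weight over dry-weight convention shown here is the one generally adopted in wildlife transfer compilations, and is an assumption rather than a quotation of the missing equation.)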

The ‘‘whole organism’’ generally excludes the outer parts, such as the skin and feathers, and the gut, which can contain ingested material that is more highly contaminated than other body tissues.

Ratio approaches assume equilibrium, but there can be considerable temporal variation in an animal’s intake of radionuclides and hence tissue concentrations may be constantly changing. Equilibrium will often not have been reached within an animal’s lifetime, especially for radionuclides with long physical and biological half-lives in tissues (e.g. plutonium). Dynamic models describing the behaviour of radionuclides within animal tissues have been developed for human food chains which can be used to predict radionuclide activity concentrations in different tissues following continuous, single or varying intakes.33–36
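As a rough illustration of such a dynamic (rather than equilibrium) approach, the sketch below implements a single-compartment, first-order model in Python: a fixed fraction of each day's intake enters the tissue pool, which is then depleted by biological turnover and physical decay. The transfer fraction, biological half-life and intake pattern are hypothetical values chosen only to show the shape of the calculation; they are not parameters from the models cited above.

```python
import math

def body_burden(daily_intakes_bq, f_transfer, t_bio_days, t_phys_days):
    """Single-compartment model: each day a fraction of the intake is retained,
    then the whole pool decays with the combined biological + physical loss rate.

    f_transfer   fraction of each day's intake reaching the tissue pool
    t_bio_days   biological half-life in the tissue (days)
    t_phys_days  physical half-life of the radionuclide (days)
    Returns the body burden (Bq) at the end of each day.
    """
    lam = math.log(2) / t_bio_days + math.log(2) / t_phys_days  # total loss rate, d^-1
    burden, history = 0.0, []
    for intake in daily_intakes_bq:
        burden = (burden + f_transfer * intake) * math.exp(-lam)
        history.append(burden)
    return history

# Hypothetical example: 100 days of contaminated feed, then 100 days of clean feed.
intakes = [50.0] * 100 + [0.0] * 100   # Bq of 137Cs ingested per day (illustrative)
burden = body_burden(intakes, f_transfer=0.2, t_bio_days=30.0,
                     t_phys_days=30.07 * 365.25)
print(f"peak burden ~{max(burden):.0f} Bq; after the clean-feed period ~{burden[-1]:.2f} Bq")
```

A continuous intake drives the burden towards an equilibrium value, whereas interrupted or varying intakes do not, which is why the equilibrium ratio assumption can break down for radionuclides with long biological half-lives.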

Differences in the quantification of transfer in the currently available assessment tools have resulted in large variation in predicted whole organism activity concentrations and resultant internal doses.37–40 In response, CRwo values have been collated for terrestrial, freshwater, marine and estuarine ecosystems in an online database, and the data are reported for broad wildlife groups by the IAEA in a Technical Report Series Handbook currently in preparation.41 Since much of the reported data for activity concentration in organisms are for edible fractions used in the human food chain, the handbook also provides tables to enable the conversion of data for edible fractions to whole organism values.42
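The conversion referred to is essentially a multiplicative correction; schematically (the general form is assumed here, not the tabulated factors themselves):

\[
C_{\text{whole organism}} = CF \times C_{\text{edible fraction}}
\]

where CF is the whole-organism to edible-fraction conversion coefficient tabulated for the wildlife group, tissue and radionuclide concerned.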

In the IAEA handbook, all the CR values are based on reported data. As an example of the CRwo data available, the values for the selected radionuclides for some terrestrial wildlife groups are shown in Figure 1.

Given the large number of potential radionuclide–organism combinations which may require consideration within an environmental assessment, many CRwo values cannot be derived from the literature. Various methods have been proposed to extrapolate from the available data to provide values for missing combinations, such as those described by Beresford et al.43 and Higley.44

Thorium

The commonest isotope of thorium, 232Th, is fertile, being converted by neutron irradiation to the fissile 233U. However, there is no thermally fissile isotope of thorium available in nature in usable amounts, so it is not possible to construct an entirely thorium-fuelled reactor. Any thorium-fuelled reactor therefore has to use a fissile material such as 233U, 235U or 239Pu to drive the reaction.
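The fertile-to-fissile conversion proceeds by neutron capture on 232Th followed by two beta decays (approximate half-lives shown):

\[
{}^{232}\mathrm{Th}(n,\gamma)\,{}^{233}\mathrm{Th} \;\xrightarrow{\ \beta^-,\ \sim 22\ \mathrm{min}\ }\; {}^{233}\mathrm{Pa} \;\xrightarrow{\ \beta^-,\ \sim 27\ \mathrm{d}\ }\; {}^{233}\mathrm{U}
\]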

Thorium fuel has several advantages. The fissile 233U that is bred is significantly contaminated with 232U, a byproduct formed by (n,2n) reactions on 233U. The 232U decay products emit high energy gamma rays, which limits the utility of a 233U/232U mixture in nuclear weapons. In addition, long-lived transuranic wastes are much less significant in thorium fuels, although 231Pa (half-life 3.27 × 10⁴ years) is produced. Disadvantages include the presence of 232U and its decay products, which make handling of the uranium stream difficult, since this has to be conducted in heavily shielded facilities. Also, large scale recycling of thorium based fuels may require novel technologies.

Remediation

A number of different techniques are available for the remediation of both groundwater and soils; they can be categorised into biological, chemical and physical treatments. This review provides an overview of some of these key techniques and then focuses on a number of case studies where these methods have been applied in the field. The advantages and drawbacks of the major techniques are summarised in Table 5.

Sources of Radionuclides in the Environment

1.1 Nuclear Weapons

Nuclear weapons tests account for a significant proportion of the total activity released into the environment and historically are the major source of radionuclides in the atmosphere. An estimated 2 × 10⁸ TBq of radioactivity have been released into the atmosphere as a result of nuclear weapons testing;3 Table 1 lists the radionuclides produced and released by atmospheric nuclear tests.4 Most of the radionuclides released were short-lived, and so atmospheric levels of radioactivity have declined sharply from their peak in the 1960s; further decline in levels of radioactivity will be much slower, as the remaining activity is predominantly due to long-lived 14C.4 Fallout from atmospheric weapons testing has also caused contamination of surface water and terrestrial environments. Fallout can either be local (within a few hundred km of the test site), regional (up to several thousand km from the site) or global, and the spread of fallout will depend on the altitude and latitude of the explosion and the explosive yield.5,6

Although much of the contamination arising from nuclear weapons testing is widely dispersed and at low levels, there are considerable levels of activity at test and production sites. In the USA, there are >70 million m³ of contaminated soil and >1800 million m³ of contaminated water at Department of Energy facilities used for weapons production.7 At the Mayak Production Association in the Chelyabinsk region, Russia, weapons-grade plutonium was produced for ~40 years and significant levels of contamination exist at the site and the surrounding area from both production and accidental discharges.8,9 Approximately 10⁵ TBq of radioactivity, as liquid waste, were discharged from the site into the Techa River between 1949 and 1956, with most of the released radioactivity associated with 89+90Sr (20.4%), 137Cs (12.2%), rare earth isotopes (26.8%), 95Zr-95Nb (13.6%) and ruthenium isotopes (25.9%).9,10 At the same site, ~7.4 × 10⁴ TBq of radioactivity were released as a result of a high level radioactive liquid waste tank exploding, causing the contamination of ~20 000 km² at concentrations >4000 Bq m⁻².8,11

Table 1 Radionuclides produced and globally dispersed in atmospheric nuclear tests.4

Radionuclide    Half-life    Global release (PBq)
H-3             12.33 y      186 000
C-14            5730 y       213
Mn-54           312.3 d      3980
Fe-55           2.73 y       1530
Sr-89           50.53 d      117 000
Sr-90           28.78 y      622
Y-91             58.51 d      120 000
Zr-95           64.02 d      148 000
Ru-103          39.26 d      247 000
Ru-106          373.6 d      12 200
Sb-125          2.76 y       741
I-131           8.02 d       675 000
Ba-140          12.75 d      759 000
Ce-141          32.5 d       263 000
Ce-144          284.9 d      307 000
Cs-137          30.07 y      948
Pu-239          24 110 y     6.52
Pu-240          6563 y       4.35
Pu-241          14.35 y      142

Underground weapons testing has caused contamination of the subsurface with tritium, fission and activation products and actinides.2,12 At the Nevada Test Site, the primary location for nuclear weapons tests in the USA, ~1 × 10⁷ TBq of radioactivity was released into the subsurface during 828 tests.13 The decay-corrected radionuclide inventory as of 1992 (the year of the last test) is 4.86 × 10⁶ TBq, with the most significant amounts of radioactivity arising from 3H, 137Cs, 90Sr, 241+239Pu, 85Kr, 152+154Eu and 151Sm.14 The inventory will change, however, as short-lived radionuclides decay and daughter radionuclides appear; with time, the remaining radionuclide inventory in the subsurface will be dominated by long-lived radionuclides such as uranium, plutonium, neptunium and americium.14
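As a simple illustration of what "decay-corrected" means here, the Python sketch below applies the exponential decay law to a small set of nuclides. The 1992 activities used are placeholder values for illustration only; they are not the published Nevada Test Site inventory.

```python
import math

def decayed_activity(a0_tbq, half_life_years, elapsed_years):
    """Remaining activity from A(t) = A0 * exp(-ln2 * t / T_half)."""
    return a0_tbq * math.exp(-math.log(2) * elapsed_years / half_life_years)

# Placeholder 1992 activities (TBq) and half-lives (years) -- illustrative only.
inventory_1992 = {
    "H-3":    (1.0e6, 12.33),
    "Cs-137": (5.0e4, 30.07),
    "Sr-90":  (3.0e4, 28.78),
    "Pu-239": (1.0e2, 24110.0),
}

for nuclide, (a0, t_half) in inventory_1992.items():
    a_now = decayed_activity(a0, t_half, elapsed_years=30)
    print(f"{nuclide}: {a0:.2e} TBq in 1992 -> {a_now:.2e} TBq thirty years later")
```

The shorter-lived entries fall away quickly while the actinides barely change, which is why the subsurface inventory eventually becomes dominated by the long-lived radionuclides listed above.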

The ICRP’s System of Protection

The primary aim of the ICRP’s Recommendations is to contribute to an appropriate level of protection for people and the environment against the detrimental effects of radiation exposure, without unduly limiting the desirable human actions that may be associated with such exposure. In protecting individuals, it is the control (in the sense of restriction) of radiation doses that is important, no matter what the source. In view of what is known about the effects of radiation, the human health objectives are relatively straightforward: to manage and control exposures to ionising radiation so that deterministic effects are prevented and the risks of stochastic effects are reduced to the extent reasonably achievable. Before examining how these objectives are achieved, however, it is first useful to consider the situations that would result in radiation exposure in the first place.

The ICRP currently recognises three types of exposure situations:

(i) Planned exposure situations, which are situations involving the deliberate introduction and operation of radiation sources. The exposures that may arise within them range from those that occur unintentionally to those where there is a clear intention to perform a malevolent act. Specific guidance has been given in relation to radiological attacks.15

(ii) Emergency exposure situations, which are unexpected situations such as those that may occur during the operation of a planned situation, requiring urgent attention. They are, inevitably, unpredictable in detail, and often require particular attention being paid to deterministic health effects.

(iii) Existing exposure situations, which are exposure situations that already exist when a decision on control has to be taken, such as those caused by natural background radiation.

Individuals may be exposed to radiation from more than one source. Provided that doses are below the threshold for deterministic effects (harmful tissue reactions), the presumed proportional relationship between the additional dose attributable to each situation and the corresponding increase in the probability of stochastic effects makes it possible to deal separately with each one.
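Expressed schematically under that LNT-type assumption (the notation below is illustrative rather than taken from ICRP publications), the increments in stochastic risk from separate sources simply add:

\[
\Delta R_{\mathrm{total}} \;\approx\; \sum_i \beta\, \Delta D_i
\]

where \(\Delta D_i\) is the additional dose attributable to exposure situation \(i\) and \(\beta\) is a nominal risk coefficient, so each situation can be assessed and controlled on its own.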

The term ‘‘practice’’ has become widely used in radiological protection and denotes an activity that causes an increase in exposure to radiation or in the risk of exposure to radiation. A practice can be a business, trade, industry or similar activity. It can also be a government undertaking or a charity. Regardless of its purpose, however, it is implicit in the concept of a practice that the radiation sources that it introduces or maintains can be controlled directly by action on that source.

The exposure of people to ionising radiation can also be categorised in different ways:

(i) Medical exposure of patients, which includes radiation exposure resulting from diagnostic, interventional and therapeutic procedures.

(ii) Occupational exposure, in which radiation exposure is incurred as a result of work.

(iii) Public exposure, which includes all exposures of the public other than occupational or medical exposure of patients, and includes exposures of the embryo and foetus of pregnant workers.

Of course any particular individual could belong simultaneously to all three categories.

‘‘Patients’’ are defined as individuals who receive an exposure to radiation associated with a diagnostic, interventional or therapeutic procedure. Dose limits and dose constraints do not apply to individual patients because they may reduce the effectiveness of the patient’s diagnosis or treatment, thereby doing more harm than good. Emphasis is therefore placed on the justification of the medical procedures and on the optimisation of protection and, for diagnostic procedures, the use of diagnostic reference levels.

‘‘Workers’’ are defined as any person who is employed, and who has recognised rights and duties in relation to occupational radiological protection.

Workers in medical professions involving radiation are occupationally exposed, and air crew may also be considered to lie in this category — but not ‘‘frequent fliers’’. (Exceptional cases of cosmic radiation exposures, such as exposure in space travel, where doses may be significant and some type of control warranted, are dealt with separately, taking into account the special type of situations that can give rise to this type of exposure.)

An important function of an ‘‘employer’’ is that of maintaining control over the sources of exposure, and over the protection of workers who are occupationally exposed. In this respect, the classification of areas of work is preferable to the classification of workers. There are usually two types of designation — ‘‘controlled areas’’ and ‘‘supervised areas’’. A ‘‘controlled area’’ is a defined area in which specific protection measures and safety provisions are, or could be, required for controlling normal exposures or preventing the spread of contamination during normal working conditions, and preventing or limiting the extent of potential exposures. A ‘‘supervised area’’ is one in which the working conditions are kept under review but for which special procedures are not normally needed. A controlled area is often within a supervised area, but need not be. Workers in controlled areas of workplaces are of necessity well informed and specially trained, and form a readily identifiable group. Such workers are monitored for radiation exposures incurred in the workplace, and occasionally may receive special medical surveillance.

Particular attention is paid to pregnant workers and breast feeding mothers. If a female worker has declared that she is pregnant, additional controls have to be considered to protect the embryo or foetus, to a level that is equivalent to that provided for members of the public. The working conditions of a pregnant worker should therefore be such as to ensure that the additional dose to the embryo or foetus would not exceed about 1 mSv during the remainder of the pregnancy. The principal implication is that the employer should carefully review the exposure conditions of pregnant women and, if required, alter their working conditions so that the probability of accidental doses and radionuclide intakes is extremely low.16,17

Finally, a ‘‘member of the public’’ is defined as any individual who receives an exposure that is neither occupational nor medical. A large range of different natural and man-made sources contribute to the exposure of members of the public but, in general, each source will result in a distribution of doses over many individuals. For the purposes of protection of the public, the term ‘‘critical group’’ has long been used to characterise an individual receiving a dose that is representative of the more highly exposed individuals in the population. Dose restrictions were then applied to the mean dose in the appropriate critical group. A considerable body of experience has now been gained in the application of the critical group concept, particularly in the UK. There have also been developments in the techniques used to assess doses to members of the public, including the increasing use of probabilistic techniques. The adjective ‘‘critical’’ has also had the connotation of a ‘‘crisis’’, which was never intended by ICRP. Furthermore, the word ‘‘group’’ can be confusing in the context where the assessed dose is to an individual.

So the ICRP now recommends the use of the ‘‘Representative Person’’ for the purpose of radiological protection of the public instead of the earlier critical group concept.18 This Representative Person may be real or hypothetical, but it is important that the habits (e.g., consumption of foodstuffs, breathing rate, location, usage of local resources etc.) used to characterise the Representative Person are typical habits of a small number of individuals representative of those most highly exposed, and not the extreme habits of a single member of the population. Thus although consideration may be given to some extreme or unusual habits, they should not dictate the characteristics of the Representative Persons considered. Dose coefficients are available for the calculation of prospective doses to different age categories, but for practical reasons it is now recommended that three age categories be used: 0-5 years (infant); 6-15 years (child); and 16-70 years (adult), the dose coefficients and habit data for a 1 year old, a 10 year old, and an adult being used respectively.
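A minimal sketch of how such a prospective dose calculation might be assembled for the three age categories is given below; the consumption rates, activity concentration and dose coefficients are placeholder numbers chosen for illustration and are not ICRP values.

```python
# Prospective ingestion dose to the Representative Person, by age category.
# All numerical values are illustrative placeholders, not ICRP dose coefficients
# or real habit-survey data.

habit_data_kg_per_year = {      # annual consumption of one foodstuff
    "infant (1 y)": 40.0,
    "child (10 y)": 80.0,
    "adult":        120.0,
}
dose_coeff_sv_per_bq = {        # committed effective dose per unit intake
    "infant (1 y)": 1.2e-8,
    "child (10 y)": 1.0e-8,
    "adult":        1.3e-8,
}
activity_conc_bq_per_kg = 5.0   # radionuclide concentration in the foodstuff

for age_group, consumption in habit_data_kg_per_year.items():
    intake_bq = consumption * activity_conc_bq_per_kg             # Bq per year
    dose_msv = intake_bq * dose_coeff_sv_per_bq[age_group] * 1e3  # mSv per year
    print(f"{age_group}: {dose_msv:.4f} mSv per year")
```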

All of these concepts and definitions need marshalling together in order to provide advice that is both consistent and logical across all exposure situations, and across all categories of exposure. In order to do so, it is necessary to construct some form of principled framework. Such a framework obviously needs to be based on the scientific information that exists, and on the LNT (linear, no-threshold) model, whilst also allowing for the incorporation and interpretation of new information as it arises. However, it also needs to be able to accommodate other factors relating to sociological, financial and other relevant considerations if it is to be of value as a decision-making tool. The ICRP has attempted to rise to this challenge by basing its advice on the following three key principles:

(i) The Principle of Justification: any decision that alters the radiation exposure situation should do more good than harm.

(ii) The Principle of Optimisation of Protection: the likelihood of incurring exposure, the number of people exposed and the magnitude of their individual doses should all be kept as low as reasonably achievable, taking into account economic and societal factors.

(iii) The Principle of Application of Dose Limits: the total dose to any individual from regulated sources in planned exposure situations, other than medical exposure of patients, should not exceed the appropriate limits specified by the ICRP.

The principles of justification and optimisation apply in all three exposure situations, whereas the principle of application of dose limits applies only to doses expected to be incurred with certainty as a result of planned exposure situations.

Editors

Ronald E. Hester, BSc, DSc (London), PhD (Cornell), FRSC, CChem

Ronald E. Hester is now Emeritus Professor of Chemistry in the University of York. He was for short periods a research fellow in Cambridge and an assistant professor at Cornell before being appointed to a lectureship in chemistry in York in 1965. He was a full professor in York from 1983 to 2001. His more than 300 publications are mainly in the area of vibrational spectroscopy, latterly focusing on time-resolved studies of photoreaction intermediates and on biomolecular systems in solution. He is active in environmental chemistry and is a founder member and former chairman of the Environment Group of the Royal Society of Chemistry and editor of ‘Industry and the Environment in Perspective’ (RSC, 1983) and ‘Understanding Our Environment’ (RSC, 1986). As a member of the Council of the UK Science and Engineering Research Council and several of its sub-committees, panels and boards, he has been heavily involved in national science policy and administration. He was, from 1991 to 1993, a member of the UK Department of the Environment Advisory Committee on Hazardous Substances and from 1995 to 2000 was a member of the Publications and Information Board of the Royal Society of Chemistry.

Roy M. Harrison, BSc, PhD, DSc (Birmingham), FRSC, CChem, FRMetS, Hon MFPH, Hon FFOM

Roy M. Harrison is Queen Elizabeth II Birmingham Centenary Professor of Environmental Health in the University of Birmingham. He was previously Lecturer in Environmental Sciences at the University of Lancaster and Reader and Director of the Institute of Aerosol Science at the University of Essex. His more than 350 publications are mainly in the field of environmental chemistry, although his current work includes studies of human health impacts of atmospheric pollutants as well as research into the chemistry of pollution phenomena. He is a past Chairman of the Environment Group of the Royal Society of Chemistry for whom he has edited ‘Pollution: Causes, Effects and Control’ (RSC, 1983; Fourth Edition, 2001)
and ‘Understanding our Environment: An Introduction to Environmental Chemistry and Pollution’ (RSC, Third Edition, 1999). He has a close interest in scientific and policy aspects of air pollution, having been Chairman of the Department of Environment Quality of Urban Air Review Group and the DETR Atmospheric Particles Expert Group. He is currently a member of the DEFRA Air Quality Expert Group, the DEFRA Expert Panel on Air Quality Standards, and the Department of Health Committee on the Medical Effects of Air Pollutants.

Nuclear Accidents

J. T. SMITH

ABSTRACT

In the wake of the Fukushima accident, this chapter provides a summary and comparison of the four previous major accidents in the history of exploitation of nuclear power for military and civilian purposes: Windscale, Kyshtym, Three-Mile Island (TMI) and Chernobyl. The events leading to each accident, and their consequences for environmental and human health, are summarised. The earlier accidents at Windscale (UK) and Kyshtym (former Soviet Union) could be attributed in large part to the pressures to produce plutonium for atomic weapons programmes during the early years of the Cold War. This led to nuclear facilities being built with insufficient emphasis on design safety and, in some cases, lack of full understanding of the processes involved. The later accidents at TMI (USA) and Chernobyl (former Soviet Union) were also in part caused by design and equipment failures, but operator errors (caused by poor training, insufficient or unclear information and a failure in safety culture) made a key contribution. In terms of environmental and human health impacts, the Kyshtym and Chernobyl accidents were of much greater significance than those at Windscale and TMI. Both Kyshtym and Chernobyl caused mass permanent evacuation and significant long-term environmental contamination. As demonstrated at TMI, even where radiation doses to the public are very low, the psychological and social consequences of nuclear accidents can be serious. Concerning impacts of nuclear accidents on ecosystem health, there is no clear evidence that even the Kyshtym and Chernobyl accidents have caused significant damage in the long term. However, studies of the effects of radiation damage in these contaminated environments have been confounded by the largely positive impact that evacuation of the human population has had on the ecosystem.


1 Introduction

Prior to the 2011 Fukushima accident, there had been four accidents of major importance in terms of their actual or potential consequences for the environment and human health. The two earliest accidents, at the Windscale site in the UK and at Kyshtym in the former Soviet Union, were at facilities which were part of the Cold War drive to produce materials for nuclear weapons production. The latter two, at Three-Mile Island (TMI) in the USA and at Chernobyl, in the former Soviet Union, were at civilian nuclear power plants (see Table 1). Much has previously been written about these four accidents, particularly the Chernobyl accident, and this chapter aims to give a brief summary and overview of this literature.

In addition to these major accidents, there have previously been significant releases of radioactivity to the environment during development of nuclear weapons and from ‘‘routine’’ operations at nuclear facilities. Some key past releases of radioactivity to the environment are summarised in Table 2. It should be noted that long-term environmental contamination from the Hiroshima and Nagasaki bombs was not significant; the high radiation exposures to the population of these cities came primarily from radiation during or shortly after the explosions. During the Cold War, atmospheric (i.e. above ground) testing of nuclear weapons caused fallout of relatively low level radioactivity (particularly 90Sr, 137Cs and 14C) globally, mainly in the Northern Hemisphere. Several hundred atmospheric nuclear weapons tests were carried out by the USA, USSR and the UK until a test ban treaty was signed in 1963. Limited atmospheric nuclear weapons tests were carried out by France and China in the early 1970s.

Stages of Decommissioning

In the United Kingdom, the Nuclear Decommissioning Authority (NDA) is a non-departmental government body founded in 2005 to manage the UK civil nuclear wastes. The NDA defined a number of stages of the decommissioning process as illustrated in Figure 1.3

Whilst each plant and site may have its own characteristics, this series of generic stages provides a good overview of decommissioning activities from the end of operations through to site closure.

The first stage after normal plant operations cease is known as Post-Operational Clean Out (POCO). In the case of a reactor site this is removal of the fuel from the reactor; for other facilities it typically involves plant operators using existing equipment, with only minor modifications, to move most of the radioactivity out of the plant. POCO will typically use only chemicals (and equipment) that were used during plant operation, and utilise existing waste and effluent treatment routes.

The next stage, Initial Decommissioning, removes or fixes loose radioactive material within pipework and vessels to reduce dose rates and ease access to facilitate further decommissioning tasks. This may use special cleaning chemicals and so require additional effluent treatment equipment. The transition from POCO to initial decommissioning may involve changes in staff and controlling procedures and so is potentially problematic; particular advice is available for making this transition.4

The Surveillance and Maintenance stage applies only to facilities that are not in a passively safe state following Initial Decommissioning and which require a period of Surveillance and Maintenance prior to Interim Decommissioning. In these cases certain plant systems would remain operational (e.g. services, radiological monitoring and ventilation systems), maintenance regimes would remain in place and some plant enhancement may be necessary to maintain building structural integrity.

The Interim Decommissioning stage is when the work required to convert a facility to a passively safe state is carried out. Typically this would involve removal of residual radioactive inventory from the plant, dismantling and removal of plant and equipment, removal of non-radioactive facilities and, where possible, reduction of the building footprint. At the end of this stage, the plant will be in a passively safe state with systems and processes de-energised, deactivated and drained.

The Care and Maintenance stage allows limited monitoring and observation of a facility prior to final decommissioning. It can be distinguished from surveillance and maintenance because few resources are required. Typically care and maintenance might be used to allow levels of radioactivity to decay, but it is also possible that the facility has been made sufficiently safe that resources are most effectively used on other facilities at a given time. Effort to maintain the plant in this state would be minimal, confined to routine monitoring and surveillance of the facility and the building fabric with very few, if any, operators dedicated to the plant on a full-time basis. It is important that the facility is adequately enclosed during this period; some guidance for this is provided.5

Final Decommissioning will bring a plant or facility to its agreed end-point, including final site clearance but excluding any contaminated land or groundwater remediation. This includes final dismantling of installed plant and equipment, strip-out of any remaining facilities within the building and demolition of cells, internal structures and the building envelope. All wastes generated will be disposed of or stored awaiting disposal. The end-point reached at the completion of this phase will be such that any danger or hazard that may remain to workers, the general public or the environment is at a minimum level consistent with the principles of ALARP (As Low As Reasonably Practicable).

Subsequent work would consider both Groundwater Remediation and Contaminated Land Remediation prior to site close-out.

Radiation Protection of the Environment: A Summary of Current Approaches for Assessment of Radionuclides in Terrestrial Ecosystems

B. J. HOWARD* AND N. A. BERESFORD

ABSTRACT

Over the past decade the international community has recognised the need to demonstrate that wildlife populations, as well as humans, are protected from environmental releases of radioactivity. Frameworks and models for such assessments have been developed and are continuously being tested and improved. In this chapter, the basic elements of an assessment for radiation exposure of wildlife are outlined, including the current methods used to estimate environmental radionuclide transfer and the resulting doses. The methods used to derive benchmarks based on radiation effects data, against which estimated doses can be compared, are described. Since it is impossible to quantify transfer and doses for all species, the approaches use representative groups such as ‘‘reference organisms’’, including the Reference Animals and Plants of the International Commission on Radiological Protection (ICRP). The current approaches used for wildlife have some commonalities with those used for humans, but with some notable differences. Organisms tend to be considered as homogeneous, simplified geometric shapes with the whole organism absorbed dose rate being estimated; the majority of available effects data are expressed on the basis of whole organism dose rates. Transfer is often quantified by predicting the whole organism activity concentration from that in environmental media such as soil, water or air. Protection is focused on populations rather than individuals, and therefore some approaches used for the assessment of chemical pollutants are also being adopted for radionuclides.

* Corresponding author



1 Introduction

Man-made radionuclides are emitted from the nuclear fuel cycle into the atmosphere in gaseous discharges and into aquatic ecosystems via liquid discharges. Radionuclides can be transferred from air, soil, water and sediment to organisms. Liquid discharges input radionuclides into water bodies, from which they can be transferred to sediment, which is the major sink in aquatic systems. Gaseous discharges are deposited onto both plant and soil surfaces, and soil is the major reservoir for most radionuclides in the terrestrial environment. To illustrate various issues in the chapter, we will concentrate on a few radionuclides: 60Co, 90Sr, 137Cs, 239+240Pu, 241Am and 131I. Noble gases, which are discharged in comparatively high amounts, are not important in dose terms, due to their low environmental transfer and dose coefficients (see the predicted dose rates presented by Copplestone et al.1).

In this chapter, we describe the major terrestrial pathways for radionuclides released during the nuclear fuel cycle, with a focus on wildlife rather than the human food chain. We briefly describe the approaches being used and developed to demonstrate protection of wildlife from releases of radioactivity into the environment. Key differences in the approaches to predicting environmental transfer between the human food chain and environmental assessment methods for wildlife will be highlighted. More detailed information on this topic can be accessed at http://www.ceh.ac.uk/protect.

The major factors affecting the extent of transfer of radionuclides to organisms are briefly described. In many recent radiological documents, the term ‘‘non-human’’ biota has been used for organisms other than humans. This term is rarely used in ecotoxicology and other areas of environmental protection. Here we use the term ‘‘wildlife’’, which encompasses wild plants, undomesticated animals and organisms such as fungi and bacteria (i.e. the potential objects of environmental protection).

The degree of internal exposure arising from man-made radionuclides in the environment depends on the environmental behaviour of the radionuclides emitted. The environmental mobility of different radionuclides varies considerably. Radionuclides with a potentially high environmental mobility include 131I, 134/137Cs, 90Sr, 14C, 3H and 35S, whereas those with low environmental mobility include 239/240Pu and 241Am. Many different factors affect environmental mobility and the extent to which radionuclides are transferred into organisms. We briefly describe a range of these processes but, as an example, focus on those which are most important in determining transfer to wildlife.