Nuclear Power and the Environment

Origins of Nuclear Power: The Nuclear Weapons Programme

The first nuclear reactors in what were to become the world’s first nuclear powers, namely the United States, the UK and the USSR, were all designed to produce plutonium for their respective nuclear weapons programmes. These initial reactors were of rudimentary design: graphite blocks into which uranium fuel was placed, with plutonium chemically extracted from the spent fuel for use in atomic bombs. The world’s first nuclear reactor, built as part of the Manhattan project,[2] achieved criticality in December 1942. A number of reactors were subsequently constructed at the Hanford nuclear site in Washington State in order to produce plutonium for the first atomic bombs. The Manhattan project mobilised over 100 000 people and, in today’s money, cost $22 billion.

In the aftermath of the Second World War, the allure of ‘‘the bomb’’ was strong, appearing to be the ultimate trump card reflecting a nation’s prowess. Such geopolitical reasoning remains strong to the present day, witness a number of developing countries’ desire to acquire nuclear weapons in order to project regional and global influence.

As a result of the research conducted during the Manhattan project, scientists in the West and the USSR realised that the heat generated from nuclear fission could be harnessed to generate electricity for power-hungry nations, as well as to provide propulsion for submarines and aircraft carriers. The first nuclear reactor to produce electricity (albeit a trivial amount, enough for four light bulbs) was the small Experimental Breeder Reactor (EBR-1) in Idaho, USA, which started up in December 1951. Like a number of reactors built in the years following the end of the war, it was a prototype ‘‘fast breeder reactor’’ designed to run on plutonium, itself extracted from spent fuel from a standard reactor. The plants were designed to produce electricity whilst ‘‘breeding’’ more plutonium; thus, in theory at least, they would continually produce all the fuel they needed.

From the beginning, it was recognised that military and peaceful applications were intricately linked:

“The development of atomic energy for peaceful purposes and the development of atomic energy for bombs are in much of their course interchangeable and interdependent’’.3

So reads a passage from a seminal report produced in 1946 under Dean Acheson, then US Under Secretary of State, which became known as the Acheson-Lilienthal Report. It proposed transferring ownership and control of the nuclear fuel cycle from individual nation states into the hands of the United Nations Atomic Energy Commission. In principle both the USA and the USSR backed the idea, initially mooted in discussions between the allied powers during 1945. Niels Bohr, one of the leading researchers on the Manhattan project, became increasingly convinced during the war that atomic research should be shared between the USA and the USSR, primarily as a means of reconciling the two countries, even suggesting that details of the Manhattan project be shared between them. [3] [4]

The remarkable proposition was that the UN commission would in effect own and control the nuclear fuel cycle, from uranium mining through to reprocessing, releasing uranium only to nations that wanted to build nuclear power plants for electricity production. As part of this international control of nuclear technology, the report suggested, the US should abandon its monopoly on nuclear weapons, sharing knowledge with the Soviets in exchange for the Soviets not proceeding with weapons development. It seemed to be a win-win situation. Countries could take advantage of the promise of cheap base load electricity generated from nuclear power plants, and the international community could nip proliferation risks in the bud.

However, the proposal, taken forward in the Baruch Plan, failed.[5] [6] The small window of opportunity that existed for international cooperation on nuclear matters was firmly shut, ushering in the nuclear arms race and the cold war, the repercussions of which reverberate down to the present day. The US Congress in 1946 passed the McMahon Act, which firmly denied foreigners (even wartime allies) access to US nuclear data. Individual countries had to pursue their own nuclear weapons and nuclear energy programmes with all the attendant costs and risks of ‘‘going it alone’’.

Wartime allies who had collaborated on the Manhattan project began to develop their own weapons programmes. For example, in the UK Clement Attlee created a cabinet sub-committee, Gen 75, known informally as the ‘‘Atomic Bomb Committee’’, which met for the first time on 29 August 1945. In December of that year, the committee agreed to the construction of nuclear reactors as part of the British nuclear programme. As a result, the first nuclear reactor to come online in Western Europe, GLEEP (the Graphite Low Energy Experimental Pile), situated at Harwell, Oxfordshire, became operational in 1947 and was used for research into reactor design and operation as part of the new weapons programme. Three years later, in 1950, the ‘‘Windscale piles’’ in Cumbria achieved criticality. They comprised graphite blocks into which uranium was placed to generate a chain reaction, with the spent fuel reprocessed on site to extract weapons-grade plutonium, reprocessing beginning in 1952. This enabled Operation Hurricane, the first British detonation of an atomic bomb, to take place in the Monte Bello Islands on 3 October 1952, and led to ‘‘Blue Danube’’, the UK’s first free-fall nuclear bomb, which came into service in November 1953.

Unlike the plutonium-producing reactors at Hanford, Washington State, the Windscale piles were cooled by air blown straight through the ‘‘piles’’ and discharged from tall stacks directly into the outside atmosphere. This period saw massive investment in nuclear science R & D in the UK under the guidance of the Atomic Energy Research Establishment based at Harwell, which was tasked with undertaking R & D in nuclear fission for both military and civilian uses. From the late 1940s, Harwell was conducting research into reactor design for energy production.

It is clear that ‘‘without the nuclear weapons programme, and if normal commercial criteria had been applied, it is doubtful if a civil nuclear industry would ever have arisen’’.6 It was realised that changing the design of these plutonium-producing piles would allow the heat to generate steam, and that the steam could be used to drive a turbine to produce electricity; it was these changes that formed the basis of the UK’s civilian nuclear reactors.7

In the post-war era, as Britain still had to import relatively expensive oil and, to an extent, coal, policy makers thought that nuclear energy could be a cheap alternative. Given its origin in the weapons programme, the free exchange of information was curtailed. The absence of informed debate in this climate of secrecy meant that the positive aspects of nuclear energy were emphasised, with negative issues rarely discussed in the public sphere. This was beneficial to governments, who were keen to develop their nuclear weapons programmes away from the glare of public scrutiny, and to the multinational companies who, given their involvement in military applications of nuclear technology, saw profitable opportunities in new areas such as developing and selling nuclear reactors.8,9

If secrecy and elite decision-making surrounded the development of nuclear technology in the West, this was taken to a different level in the USSR, where a number of ‘‘closed cities’’ were created, such as Ozyorsk (known as Chelyabinsk-65), which housed a plutonium-production plant. Soviet citizens had to have special permission to visit these cities. It was in one such city, Obninsk, 100 km southwest of Moscow, that the world’s first nuclear power plant to generate electricity for a national grid came online. The AM-1 (‘‘Atom Mirny’’ — ‘‘peaceful atom’’) reactor was a water-cooled, graphite-moderated prototype with a design capacity of 30 MWt, producing just 5 MWe of electrical power. For 10 years it remained the only nuclear power plant in the USSR.[7] [8]

During the Manhattan project, a naval officer, Hyman Rickover[9] (later to become Admiral), realised the potential application of nuclear energy to submarine propulsion, and initiated R & D which led to the development of what was to become known as the pressurised water reactor (PWR), used to power the first nuclear submarine, USS Nautilus. The PWR used enriched uranium oxide fuel and was moderated and cooled by ordinary (light) water. USS Nautilus was launched in 1954, three years ahead of the first commercial nuclear power station in the USA, which was also overseen by Rickover. These compact reactors, which used uranium as fuel and pressurised water as both coolant and moderator, together with the closely related Boiling Water Reactor (BWR) — known collectively as ‘‘Light Water Reactors’’ (LWRs) — came to dominate the US and international market in reactor design and still do so today. However, a number of observers have concluded that LWRs, rather than necessarily being the best reactor design chosen after careful consideration of alternatives, were rushed forward amid the concern generated by the first Soviet atomic bomb test.

The United States Atomic Energy Commission (AEC), created in January 1947, effectively transferred control over nuclear energy from the military to civilian institutions. Whilst in its early years the AEC’s main job was to produce nuclear warheads for the military, it was now also tasked with developing and regulating civilian nuclear power, which created a conflict of interest. Criticism of this dual role eventually led to the separation of regulatory and promotional functions under the Energy Reorganization Act of 1974, which moved regulatory functions into the Nuclear Regulatory Commission (NRC) and promotional activities into the US Department of Energy.

This focus on military applications changed in 1953 when President Eisenhower proposed his ‘‘Atoms for Peace’’ programme, which reoriented research effort towards electricity generation and set the course for civil nuclear energy development.10 Eisenhower suggested nuclear materials be used to provide ‘‘abundant electrical energy in the power-starved areas of the world’’. This set in train a number of international efforts to push this vision forward, from the Geneva Conference on the Peaceful Uses of the Atom in 1955 to the formation of the International Atomic Energy Agency (IAEA), whose mandate was to ‘‘accelerate and enlarge the contribution of atomic energy to peace, health, and prosperity throughout the world’’.

The optimism and almost euphoria about the possible manifold peaceful uses of the atom captured the imagination of writers and scientists, with claims that we would, aside from benefiting from cheap electricity, see ‘‘nuclear powered planes, ships, trains… nuclear energy would genetically modify crops and preserve grains and fish’’.11 This ‘‘nuclear utopianism’’ was rarely challenged, receiving widespread support from the public and policymakers.

The cold war enabled nuclear power to be constructed as vital for national security, and the political climate generated by McCarthyism during the 1950s in the US meant that research into potential safety problems and hazards from nuclear power was discouraged.12 Legitimate concerns over the effects of atomic testing were seen as subversive and un-American.13 The Atoms for Peace programme was in part designed to dissuade foreign states from developing nuclear weapons. To this end the US government supplied highly enriched uranium (HEU) to countries who promised not to construct atomic bombs.14 Suffice to say that not all of that HEU is accounted for today. This new atomic age of abundance and prosperity was also an opportunity for business to take advantage of the commercialization of the atom. The US Atomic Energy Act (1946) was modified in 1954 to allow private sector firms to build and operate nuclear plants.15

Environmental Contamination

Radioactivity from the 1957 Windscale accident, most importantly isotopes of iodine, caesium and polonium, was carried to the east by prevailing westerly winds, though the wind speed and direction varied during the course of the accident.1,2 Deposition to the ground across Northern England ‘‘was dominated by 131I (half-life of 8.04 days) with deposits above 4 kBq m⁻² extending about 75 km east northeast and 140 km south southeast of the site, covering an area of about 12 000 km²’’ (Jones2 citing Chamberlain3). Iodine-131 was measured across the North Sea in Holland and Belgium, though concentrations were much lower than in England. Figure 1 shows a map of the iodine release. A recent re-analysis of air monitoring data in Norway4 showed that the radioactive plume reached Norway on the 15th and 16th of October. These authors noted that the maximum observed deposition was comparable to the level of deposition from atmospheric nuclear weapons testing in 1958. Garland and Wakeford1 have estimated that no more than 10% of the total 131I released passed across the east coast of England to the North Sea and continental Europe.

As the accident progressed, in the early hours of October 11th, the regional police chief constable was notified.5,6 A review of the accident6 concluded that, after the uncertainty and confusion of the initial incident, the aftermath was handled well: ‘‘community warnings and communications were handled efficiently and promptly, environmental survey teams and equipment were assembled and dispatched promptly, and there was an atmosphere of quiet professionalism’’.

Assessment of external radiation doses showed that these were not high enough to require evacuation (Jackson and Jones7 citing Dunster et al.8), but high levels of radioactivity (principally 131I) in milk implied potentially significant ingestion doses. The majority of iodine ingested by the body is accumulated in the thyroid, so radioactive iodine intake results in a risk of thyroid cancer. At the time, no intervention level for 131I in milk had been set, so, in the words of Jones,2 ‘‘hasty, but effective, consultations and calculations’’ determined a maximum permissible level of 0.1 µCi l⁻¹ (3700 Bq l⁻¹). A sampling campaign was set up to determine levels of radioactivity in milk across a large area of the UK. In the local area, on the 11th and 12th of October, observed 131I activity concentrations in milk reached 30 000 Bq l⁻¹, but declined rapidly over the following weeks. Similar maximum activity concentrations were observed in other foodstuffs.7 Levels in public drinking water sources were not expected to be high.

Figure 1 Time integrated concentrations of 131I in air following the Windscale accident up to 12.00 on the 15th of October 1957. (Reprinted from Johnson et al. with kind permission of Elsevier.)

Approximately three million litres of milk were discarded over an area of around 500 km² (ref. 7). Restrictions were finally lifted on the 23rd of November, approximately six weeks after the accident.2 It is probable that, at present-day intervention levels, the area in which milk consumption was banned would have been much greater,7 and temporary precautionary bans on foodstuffs, including meat and milk, would also have been implemented as a consequence of radiocaesium contamination.7,9
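The speed of this decline follows largely from the 8.04 day half-life of 131I quoted above. The following is a minimal illustrative sketch, not a reconstruction of the actual 1957 assessment: it assumes the peak local concentration of 30 000 Bq l⁻¹, the 3700 Bq l⁻¹ intervention level, and pure physical decay with no further deposition or transfer through the grass–cow–milk pathway.

```python
import math

HALF_LIFE_DAYS = 8.04        # physical half-life of 131I
PEAK_BQ_PER_L = 30_000       # peak activity concentration observed in local milk
LIMIT_BQ_PER_L = 3_700       # intervention level adopted at the time (0.1 uCi per litre)

decay_const = math.log(2) / HALF_LIFE_DAYS   # per day

# Time for the peak concentration to fall below the intervention level,
# assuming radioactive decay alone.
t_days = math.log(PEAK_BQ_PER_L / LIMIT_BQ_PER_L) / decay_const
print(f"Decay alone brings the peak milk concentration below the limit in ~{t_days:.0f} days")
# ~24 days; the actual restrictions ran for about six weeks over a wider,
# less uniformly contaminated area, so the timescales are of the same order.
```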

Decommissioning Techniques

One of the key decisions for decommissioning is whether it will be performed remotely or will be hands-on, with operators using tools directly. It will often be cheaper and easier to use manual techniques, since this allows maximum flexibility in choosing appropriate tools and allows parallel working to reduce the time required for decommissioning. However, manual decommissioning may not be possible if radiation levels are too high.

Manual decommissioning can exploit hand tools as well as sit-on machinery. Remote decommissioning uses manipulators and other tools mounted on cranes or remotely operated vehicles and directed by operators using a camera for guidance.

Decommissioning may seek to separate different waste categories, such as intermediate and low level wastes, from free release material. In some cases physical techniques such as cutting, scabbling a surface, water jetting or sand blasting might be used.13 In other instances chemical decontamination might offer advantages. In each case the cost and environmental benefits of performing the operation must be weighed against the cost, resource usage and dose incurred by further processing the waste. Some methods for decontamination are discussed in ref. 14. Decontamination to support decommissioning can use more aggressive chemicals than are used for cleaning equipment prior to maintenance during plant operations. It is also possible to deploy decontamination either in situ, prior to removal of items, or ex situ, after they are removed from their original location.

Examples of successful decontamination of waste streams are provided by the Berkeley fuelling machines: 1700 tonnes of material were recycled, 60 tonnes were disposed of as LLW, and 30 tonnes remain in store awaiting disposal. Chemical cleaning of the Berkeley gas ducts allowed 750 tonnes of steel to be recycled.

Environmental Transfer in Terrestrial Ecosystems

To be able to estimate radionuclide activity concentrations in exposed organisms we need to quantify and model the transfer processes. The approach used varies depending on the objective and the need for detailed information. Some models describe the transfer processes mathematically through steady state compartment models, which assume that an equilibrium is established in the environment between the source and the receptor. However, in some cases there may be a need to describe transfer in more detail, either to take account of some of the many environmental factors which affect the extent of transfer in the environment, or to quantify changes in transfer with time after radionuclides have been received by ecosystems.
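To make the distinction between the equilibrium and dynamic approaches concrete, the sketch below contrasts a simple concentration-ratio (steady state) estimate with a single first-order uptake and loss compartment. All parameter values are arbitrary placeholders for illustration, not recommended defaults from any assessment model.

```python
import math

# Equilibrium (steady state) approach: concentration ratio (CR)
# whole-organism activity concentration = CR x activity concentration in soil
soil_bq_per_kg = 500.0   # hypothetical soil activity concentration
cr = 0.05                # hypothetical dimensionless concentration ratio
organism_eq = cr * soil_bq_per_kg

# Dynamic approach: one compartment with first-order uptake and loss
# dC/dt = k_uptake * C_soil - k_loss * C_organism
k_uptake = 0.005         # per day (hypothetical)
k_loss = 0.1             # per day (hypothetical biological plus physical loss)

def organism_conc(t_days: float) -> float:
    """Whole-organism activity concentration at time t after intake begins."""
    c_eq = k_uptake * soil_bq_per_kg / k_loss
    return c_eq * (1.0 - math.exp(-k_loss * t_days))

print(f"equilibrium CR estimate:  {organism_eq:.1f} Bq/kg")
print(f"dynamic model at 7 days:  {organism_conc(7):.1f} Bq/kg")
print(f"dynamic model at 90 days: {organism_conc(90):.1f} Bq/kg (approaching steady state)")
```

With these placeholder rate constants the dynamic model tends towards the same value as the concentration ratio estimate, which is simply the equilibrium assumption made explicit.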

The pathways leading to exposure of organisms in terrestrial ecosystems can be subdivided into external (see section 4) and internal components. Internal irradiation occurs from radionuclides which are absorbed and distributed within the organism. Ingestion of plants, animals, soil/sediment and detritus also leads to direct irradiation of the digestive tract.

Since the dosimetric calculations used to estimate absorbed doses are derived for a defined shape representing the whole organism (see section 4), there is a requirement to estimate whole-organism activity concentrations. This contrasts with the human food chain, where the focus is on the part of the organism that is ingested by humans.

The processes involved in environmental terrestrial pathways are briefly summarised here before we describe the methods by which exposure of organisms to radionuclides is currently quantified and evaluated.

Nuclear Fuel

The early generations of nuclear fission technology depended on the thermal neutron (neutron energy of ~0.025 eV) fission of the natural isotope 235U, present in nature at 0.72 atom%. The dominant uranium isotope, 238U, is a so-called ‘‘fertile’’ isotope, since it can be converted by neutron irradiation into artificial isotopes, notably the fissile 239Pu. As plutonium and other fissile isotopes are produced through irradiation of 238U, they too can be exploited in energy production, either through consumption in situ or through recycling into new fuel materials. In a uranium-fuelled thermal reactor, about 40% of the total energy is derived from the fission of plutonium isotopes produced in situ.
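The conversion of fertile 238U into fissile 239Pu mentioned above proceeds by neutron capture followed by two successive beta decays; with approximate half-lives, the chain can be written as:

```latex
^{238}\mathrm{U}(n,\gamma)\,^{239}\mathrm{U}
  \xrightarrow{\beta^-,\ \approx 23.5\ \text{min}} {}^{239}\mathrm{Np}
  \xrightarrow{\beta^-,\ \approx 2.36\ \text{d}} {}^{239}\mathrm{Pu}
```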

United Kingdom

The United Kingdom’s nuclear legacy has arisen from a variety of nuclear facilities operated across the country over the past ~60 years. These have contributed to the production of nuclear material for nuclear reactors or weapons, the use of this material in reactor plants and the reprocessing of spent nuclear fuel. Construction of the UK’s first nuclear power plant, Calder Hall, began in 1953 and in 1956 it was connected to the national grid, becoming the world’s first commercial nuclear power station. The site was expanded over the following decades to become the present Sellafield site (see section 2.1.1). Between 1953 and 1971 a total of 26 reactors were built at nuclear research and development sites across the UK.3 A substantial part of the UK’s electricity supply has come from the first generation of Magnox nuclear power stations over the past 60 years; just two of the eleven Magnox stations are still operational.

The Atomic Weapons Establishment (AWE) is responsible for providing and maintaining the UK’s nuclear deterrent and has held this responsibility for over 50 years. AWE operates across two sites: Aldermaston, a former airfield, and Burghfield, a former munitions factory. Although the Aldermaston site is radiologically safe, there are areas where soils contain higher than background levels of various radionuclides, including plutonium. Levels of 239+240Pu have been found to range from 15 to 155 Bq kg⁻¹ in certain settled sediments (sludge),4 compared to background levels due to global fallout of 0.02 to 0.7 Bq kg⁻¹.5

The Nuclear Decommissioning Authority (NDA), a Non-Departmental Public Body (NDPB), was established in 2005 to manage the decommissioning and clean-up of the UK’s civil public sector nuclear legacy sites. The restoration programme tasked to the NDA covers 19 sites across the length and breadth of the UK, with certain sites not expected to reach their planned end state for decades. The discounted lifetime cost of completing their contracted work, the Nuclear Liabilities Estimate (NLE), stands at £44.5 billion.3 A detailed overview of the NDA’s planned approach to decommissioning and clean-up is provided in their recent draft strategy published for consultation.3

Although there are a number of sites in the UK where nuclear operations have occurred, the majority of the legacy waste and contamination is located at a few principal facilities. The two key sites, Sellafield and Dounreay, which hold much of the UK’s nuclear waste inventory and present the greatest contamination concerns, are discussed here in more detail.

Examples of contaminated nuclear legacy sites in the USA include the following:

Oak Ridge (separation of uranium for the Manhattan Project; now a National Laboratory managed by the DoE): sorbed and precipitated uranium concentrations up to 800 mg kg⁻¹; Hg up to 2400 mg g⁻¹ in floodplains along East Fork Poplar Creek; soluble uranium in a groundwater plume (up to 210 mM); leakage from the S-3 ponds has created a plume containing uranium (up to 0.2 mM) and Tc (up to 47 nM) (ref. 38, 39).

Hanford (plutonium production and nuclear reactors; now undergoing decommissioning and clean-up): 68 out of 149 tanks known or thought to have leaked HLW into the sediments beneath them; in 1951, 3.5 × 10⁵ l of highly radioactive waste leaked into the subsurface, containing an estimated 7000 kg of U; Pu found in silt layers at up to 9.25 MBq kg⁻¹; caesium-137 as high as 10⁵ Bq g⁻¹ in contaminated sediments; tritium and 129I present in groundwater above drinking water limits; Tc, U, Pu, 60Co and 137Cs also detected above drinking water limits (ref. 42, 44, 49).

Rifle (former uranium processing; now a UMTRA-managed site): uranium concentrations in a contaminated aquifer range from 0.4 to 1.4 mM (ref. 28).

2.1.1 Sellafield

Sellafield (formerly Windscale), West Cumbria, is the UK’s largest nuclear complex, covering 262 hectares, and has supported the nuclear power programme since the 1940s; the site contains the world’s first commercial nuclear power station, Calder Hall. Operations at the Sellafield site include spent fuel reprocessing, mixed oxide (MOX) fuel fabrication and nuclear waste storage and management. Discharges into the environment from Sellafield began in 1951 and first became subject to formal authorisation in August 1954 under the ‘‘Atomic Energy Authority Act 1954’’. Prior to 1954, discharges were subject to controls derived from consultation with site operators and government departments. Current disposal of radioactive waste is regulated under the ‘‘Environmental Permitting (England and Wales) Regulations 2010’’ (EPR).

During reprocessing, plutonium, uranium and highly radioactive fission products are separated by a series of solvent extractions, which results in some of these products being concentrated in aqueous waste. Highly radioactive aqueous waste is added to an acid effluent stream for evaporation and storage and is now being converted into vitrified waste. Low level aqueous waste is discharged into the Irish Sea via pipelines extending 2.5 km from the high water mark. These low level discharges have created an environmental inventory, over the period 1952–1990, of around 1.1 × 10² TBq of 238Pu, 6.1 × 10² TBq of 239,240Pu, 1.3 × 10⁴ TBq of 241Pu and 9.4 × 10² TBq of 241Am (with about 3.6 × 10² TBq of the americium having been derived from decay of the 241Pu released).6 Around 90% of the Pu, in its insoluble Pu(IV) state, was rapidly retained by the sediment in the Irish Sea, along with the vast majority of the discharged Am. The remaining 10% of the plutonium, in the more soluble Pu(V) state, remained in solution and was transported out of the Irish Sea.7 Since 2006, beach monitoring has detected a number of contaminated sites resulting from the Sellafield discharges, although they are generally less active than those found at Dounreay.8
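The note that roughly 3.6 × 10² TBq of the environmental 241Am has grown in from the discharged 241Pu can be illustrated with a simple two-member Bateman decay calculation. The sketch below is indicative only: it treats the entire 1952–1990 241Pu discharge as a single release at time zero and uses rounded half-lives (241Pu ≈ 14.4 years, 241Am ≈ 432 years), ignoring the real discharge history and any environmental redistribution.

```python
import math

T_HALF_PU241_Y = 14.4       # years; 241Pu beta-decays to 241Am
T_HALF_AM241_Y = 432.0      # years
PU241_RELEASED_TBQ = 1.3e4  # total 241Pu discharged 1952-1990 (figure from the text)

lam_pu = math.log(2) / T_HALF_PU241_Y
lam_am = math.log(2) / T_HALF_AM241_Y

def am241_ingrowth_tbq(t_years: float, a_pu0_tbq: float) -> float:
    """Activity (TBq) of 241Am grown in from an initial 241Pu activity a_pu0_tbq."""
    return a_pu0_tbq * lam_am / (lam_am - lam_pu) * (
        math.exp(-lam_pu * t_years) - math.exp(-lam_am * t_years)
    )

for t in (10, 20, 40):
    print(f"after {t:>2} years: ~{am241_ingrowth_tbq(t, PU241_RELEASED_TBQ):.0f} TBq of 241Am")
# A few decades after release the ingrown 241Am amounts to a few hundred TBq,
# broadly consistent with the ~3.6 x 10^2 TBq quoted above.
```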

Approximately 1600 m³ of soil around the centre of Sellafield has been contaminated by spillage and reprocessing and will have to be treated as intermediate level waste (ILW).8 This area overlies an aquifer in the underlying sandstone geology, which is significantly contaminated to the southwest due to leaching from the contaminated soil above. An estimated 1 000 000 m³ of soil will require treatment as low level waste (LLW). Sellafield is also responsible for the storage of the majority of the UK’s nuclear waste products and, as such, a large inventory of radioactive waste of varying activity levels is stored on site, either awaiting disposal9 or for the activity to decrease.

Two site investigations have been conducted at Sellafield over the past decade in an attempt to identify and develop conceptual models of below ground contamination. The first phase of the work was completed in 2004 and examined contamination outside of the Sellafield ‘‘Separation Area’’, where fuel reprocessing and fabrication took place, with the second report, focussing on contamination within the Separation Area, expected to be completed in 2010.

Soil sample records from over 2000 boreholes have demonstrated that radioactively contaminated ground exists beneath and, occasionally, outside the Separation Area. Groundwater monitoring throughout the site has revealed that radioactive contamination is present in distinct plumes in the groundwater which are migrating in the direction of the hydraulic gradient. These contaminants include 90Sr, 137Cs, 3H and 99Tc, with actinides also expected.10

The maximum activity of the most mobile contaminant, tritium, is around 1.0 × 10⁷ Bq m⁻³ in contaminated groundwater found in boreholes close to the Separation Area. The activity decreases down the hydraulic gradient towards the River Ehen, until it becomes undetectable (below 1.0 × 10⁵ Bq m⁻³).10 Technetium-99, although derived from a different source, becomes a co-contaminant with the tritium in a common plume as they both migrate downgradient. The 99Tc is known to be a contaminant in the upper strata of the sandstone bedrock and has also been found in monitoring wells as far away as the site boundary. The maximum concentration of 99Tc found in this plume during the phase 1 site investigation was 2.3 × 10⁵ Bq m⁻³, located near to the site main gate.10 Strontium-90, which has limited solubility and readily adsorbs to sediments at Sellafield, is detectable in monitoring wells inside the Separation Area, where it is mostly contained. Beta activity from the 90Sr is also detected in two plumes, including the plume contaminated with 3H and 99Tc.10 Caesium-137, the only other radioactive isotope detected in the groundwater plumes, was found to be present only in very low concentrations and only in filtered solids.
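The contrast drawn above between highly mobile tritium and strongly sorbing 90Sr is commonly expressed through a retardation factor in simple transport models. The sketch below shows the form of the calculation only; the distribution coefficients, bulk density and porosity are hypothetical placeholders, not Sellafield site data.

```python
# Retardation of a sorbing radionuclide relative to the groundwater flow:
#   R = 1 + (rho_b / theta) * Kd
# where rho_b is the aquifer bulk density, theta the porosity and Kd the
# sorption distribution coefficient of the nuclide on the aquifer solids.

def retardation(kd_ml_per_g: float, bulk_density_g_cm3: float = 1.6,
                porosity: float = 0.3) -> float:
    """Retardation factor R; contaminant velocity = groundwater velocity / R."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

# Illustrative Kd values only (not site-specific measurements):
for nuclide, kd in [("3H (tritium)", 0.0), ("99Tc", 0.1), ("90Sr", 15.0)]:
    print(f"{nuclide:13s} Kd = {kd:5.1f} ml/g -> R = {retardation(kd):6.1f}")
# Tritium (R = 1) travels with the water; a nuclide with R of order 80
# moves roughly 80 times more slowly, so it stays close to its source.
```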

Monitoring of 137 boreholes was conducted for the Sellafield Ltd Groundwater Annual Report11 and is summarised in Table 2. Although the majority of boreholes contain activity below the WHO drinking water standard for total alpha, tritium and technetium activity, there are a significant number of boreholes with total beta activity above the WHO drinking water standard. Strontium-90 makes up the bulk of the total beta activity, with caesium-137 also contributing significant activity. However, when both isotopes are examined on an individual basis, fewer samples exceed the WHO drinking water standard for 90Sr and no samples exceed the 137Cs safe drinking limit. The majority of boreholes with values above the WHO standard are located within the Separation Area, with a number also located to its south-west.

Table 2 Summary of the groundwater monitoring of 137 Sellafield boreholes.11

Activity analysed | WHO drinking water standard (Bq l⁻¹) | Boreholes exceeding the WHO standard | Location of boreholes | Major isotopes | Highest annual average activity (Bq l⁻¹)
Total alpha activity | 0.5 | 5 | Within the Separation Area | Uranium isotopes | 103
Total beta activity | 1 | 46 | Predominately within the Separation Area, with several to the south; a minority close to the west bank of the River Calder | 90Sr and 137Cs | 129 000
Tritium | 10 000 | 3 | Outside the south-west corner of the Separation Area | 3H | 39 200
Technetium | 100 | 1 | Between the south-west corner of the Separation Area and the site main gate | 99Tc | 111

The former storage and de-canning facility, known as B-30, houses a pond that was used for the storage of spent nuclear fuel until its replacement facility, the Fuel Handling Plant, was commissioned in 1986. Although now closed, the storage pond is thought to contain 300 to 450 tonnes of spent nuclear fuel. Fuel was stored in the pond for longer than anticipated due to an accident at the Magnox reprocessing facility in 1974, causing corrosion of the fuel cans and leakage of radioactivity into the pond.

Implementing the UK GDF

3.3.1 Historical Perspective, Public Consultation, Policy Decisions, and Responsibilities

Previous investigations concerning the geodisposal of UK ILW between the 1980s and 1990s were not successful and were effectively abandoned in 1997.13 However, in 1999, the House of Lords Science and Technology Select Committee report on the status of UK radioactive waste13 concluded that geological disposal was feasible and desirable, but that the public should be consulted on future policy decisions. Accordingly, in 2001, the UK Government initiated the Managing Radioactive Waste Safely (MRWS) programme with a public consultation, to find the best practicable management solution for the UK’s higher activity wastes (HAWs). Following feedback from the consultation process, the Government commissioned the Committee on Radioactive Waste Management (CoRWM) to offer independent advice on the best HAW management pathways. In 2006, CoRWM submitted a range of recommendations to the Government, indicating a preference for geological disposal, coupled with safe and secure interim storage and a programme of ongoing research and development.4 In response, the Government announced their plans for the long term management of HAWs to Parliament in October 2006.5 The announcement accepted CoRWM’s recommendation of geological disposal, and the Government instigated a further period of consultation to investigate how geological disposal should proceed. After consultation, the Government White Paper Managing Radioactive Waste Safely: A Framework for Implementing Geological Disposal was published in 2008.[37] This document sets out the detailed policy and plans for geodisposal, and identifies the Nuclear Decommissioning Authority (NDA) as the body responsible for managing the delivery of UK geological disposal. In response, the NDA created the Radioactive Waste Management Directorate (RWMD) to facilitate this process. The MRWS White Paper also identified that: (i) the Government would retain responsibility for policy concerning geological disposal; (ii) independent regulators would oversee adherence to national and international statutory controls; and (iii) CoRWM would be retained to provide independent scrutiny and advice to the Government on geological disposal plans and programmes.

Effects on Wildlife

DNA is the primary target for the induction of biological effects from radiation in all living organisms. There are broad similarities in the radiation responses of different organisms, but differences in radiation sensitivity. The range in lethality from acute exposure to radiation varies by three to four orders of magnitude amongst organisms, with mammals being among the most sensitive and viruses being among the most radioresistant.51

Damage from radiation is initiated by ionisation, which occurs if the radiation has sufficient energy to eject one or more orbital electrons from the atom with which it interacts. Ionising radiation is characterized by a large release of energy which can break strong chemical bonds. The ionisation process and the resulting charged particles can subsequently produce significant damage to biological cells, termed ‘‘direct effects’’. However, much of the biological damage from radiation is due to ‘‘indirect effects’’ from free radicals, which are the fragments of atoms that remain after ionisation. Free radicals have an unpaired or odd number of orbital electrons, and are thus chemically unstable. Such free radicals can easily break chemical bonds, and are a main cause of damage from radiation exposure.52

Free radicals are not unique to radiation; they are produced in response to many stressors. Damage caused by free radicals is sufficiently common that efficient repair mechanisms have evolved within all biological species to counter their effects.

Radiation and the free radicals produced can damage DNA by causing several different types of lesion, for which there are efficient DNA repair processes.52 However, errors in repair can result in cell death (through apoptosis), chromosome aberrations or mutations. Mutations can be deleterious, neutral with no apparent effect (and can persist over many generations) or, rarely, may offer a selective advantage. The fate of mutations and their impacts within a population depend on the type of cell in which they occur. Mutations in reproductive germ cells can decrease the number of gametes, increase embryo lethality, or be inherited by the offspring, resulting in their alteration. A mutation within a somatic cell can lead to cell death or, if the DNA damage is mis-repaired, to cancer. The risk of non-fatal cancer for humans has been estimated at 1 × 10⁻⁵ per mSv.52

The deleterious effects of ionising radiation on biological systems are primarily dose dependent. The effective dose depends not only on the gross energy deposited, but also on the type of radiation and the radiation sensitivity of the affected tissue. In SI units, effective dose to humans is expressed in sieverts (Sv): the absorbed dose (in gray, Gy) adjusted by two dimensionless weighting factors, the radiation weighting factor, to account for the biological effectiveness of the absorbed radiation, and the tissue weighting factor, to account for differences in the radiation sensitivities of different organs of the body. These weighting factors have been developed for human radiation biology; no such factors exist for other organisms. Thus, dose to wildlife is expressed in Gy, rather than Sv (although dose rates may be presented on a weighted or unweighted basis).
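Written out explicitly, the weighting scheme described above takes the standard form below, where D_T,R is the absorbed dose (Gy) to tissue T from radiation type R, w_R is the radiation weighting factor and w_T the tissue weighting factor; it is quoted here as background, and, as noted above, no equivalent weighting factors are defined for wildlife.

```latex
E \;=\; \sum_{T} w_{T} \sum_{R} w_{R}\, D_{T,R} \qquad \text{(effective dose, in Sv)}
```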

Other Reasons to Reprocess

In addition to its original purpose of recovering uranium and plutonium, reprocessing also offers the possibility of:

(i) Controlling proliferation risks. The production and potential isolation of fissile materials which can be diverted from the civil fuel cycle for military purposes is an intrinsic risk in nuclear power. Although the fissile content of plutonium derived from high burnup fuel is not optimal, power reactor plutonium can nevertheless be used in a weapon,4 so the creation of plutonium is, by definition, a proliferation risk. While irradiated fuel is lethally radioactive and specialised facilities would be needed to recover fissile material, the fission products will decay over a few hundred years to the point where plutonium could be easily recovered. This raises complex ethical issues, and the recovery of plutonium for separate treatment, either as a waste or as a fuel, within a few years of production can be attractive for these reasons.

Table 3 Principal oxidation states of the mid actinides. Bold indicates oxidation states which are significant in nuclear fuel recycling.

Uranium: III, IV, V, VI
Neptunium: III, IV, V, VI, VII
Plutonium: III, IV, V, VI, VII
Americium: III, IV, V, VI

(ii) Reducing waste volumes. Well over 90% by mass of spent fuel is uranium. If a ‘‘once through’’ or ‘‘open’’ fuel cycle is adopted, the irradiated fuel will be packaged and disposed of as waste. As a result, the volume of waste for disposal will be very substantial, and the associated costs will be high. For example, in some disposal concepts being considered for the UK, fuel elements containing 2–4 tonnes heavy metal (masses of nuclear materials are often expressed in tonnes heavy metal (tHM), i.e. the equivalent mass of uranium or plutonium in the material), depending on fuel type and heat production, could be packaged in a cast iron insert, then in a copper container between 2 and 5 m long and 0.9 m in diameter, with 5 cm thick walls.5 By contrast, reprocessing spent fuel and conversion of the high level waste to glass will produce less than 100 kg (0.04 to 0.05 m³) of glass per tonne of uranium reprocessed, reducing the volume of highly radioactive material for disposal; a rough per-tonne volume comparison is sketched after this list. Since high level waste and spent fuel are heat generating wastes, they need to be widely spaced in a disposal facility to limit the heat load, so the volume of waste disposed has a large effect on the facility footprint and consequent cost. If there is only a limited volume of host rock, reducing waste volume may be very helpful. Finally, a disposal facility, for example the currently suspended Yucca Mountain facility in the USA, may be legally limited to a specific volume of waste, in which case volume reduction by removal of uranium may well be attractive.

(iii) Controlling high level waste radiotoxicity. The majority of the fission products in spent fuel have relatively short half lives, so that, at timescales longer than a few hundred years, the activity is dominated by relatively radiotoxic actinide elements (see Figure 2). If, in addition to conventional reprocessing to remove uranium and plutonium, a further separation of minor actinides (e.g. neptunium, americium and curium) from high level waste is carried out, the radiotoxicity of the waste can be reduced by several orders of magnitude beyond a thousand years or so. Of course, the concentrated minor actinide stream has to be managed separately, which prompts much of the current interest in transmutation processes.
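To give a feel for the volume reduction argued in point (ii) above, the rough sketch below compares the per-tonne-heavy-metal volume of a direct-disposal container, using the container dimensions and fuel loadings quoted in the text, with the quoted volume of vitrified high level waste. It ignores the packaging of the vitrified product and all other engineering detail, so it indicates orders of magnitude only.

```python
import math

# Direct disposal concept (from the text): copper container 2-5 m long,
# 0.9 m diameter, holding 2-4 tonnes heavy metal (tHM) depending on fuel type.
def container_volume_m3(length_m: float, diameter_m: float = 0.9) -> float:
    return math.pi * (diameter_m / 2.0) ** 2 * length_m

v_per_thm_low = container_volume_m3(2.0) / 4.0   # short container, 4 tHM
v_per_thm_high = container_volume_m3(5.0) / 2.0  # long container, 2 tHM

# Reprocessing route (from the text): < 100 kg, i.e. 0.04-0.05 m3, of HLW glass per tHM.
glass_per_thm = (0.04, 0.05)

print(f"direct disposal: ~{v_per_thm_low:.2f} to {v_per_thm_high:.2f} m3 of container per tHM")
print(f"vitrified HLW:   ~{glass_per_thm[0]:.2f} to {glass_per_thm[1]:.2f} m3 of glass per tHM")
# Roughly an order of magnitude or more difference in the volume of
# heat-generating material per tonne of fuel, before spacing effects.
```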

Permeable Reactive Barrier

The use of a Permeable Reactive Barrier (PRB) involves the placement in the subsurface of a barrier consisting of a permanent, semi-permanent or replaceable reactive medium across the flow path of a contaminated groundwater plume. As the groundwater passes through the barrier under its natural gradient, contaminants are either degraded by, or retained in, the reactive medium in a passive treatment system. A typical PRB involves the excavation and backfilling of a continuous trench with a reactive material designed to target particular contaminants. Examples of reactive media used include iron, limestone, calcium phosphate-based minerals, compost and activated carbon, with iron being the most common.86 A review of the uses of these various reactive media is provided by Thiruvenkatachari et al.87 Zero-valent iron (ZVI) acts as a reactive medium through corrosion/oxidation of the metal in situ and donation of electrons from this process to organic and inorganic contaminants, such as halogenated hydrocarbons, U(VI) and Cr(VI), which are reduced, thereby leading to degradation of the organic contaminant or immobilization of the metal.87 Consequently, the long-term efficiency of ZVI barriers is heavily dependent on the corrosion of Fe0, as continued use results in authigenic mineral formation which restricts the availability of reactive Fe0.88 However, the precipitation of ferrihydrite clusters found away from the immediate surface of the Fe0 barrier provides an increase in potential sites for metal adsorption, thus prolonging the life of the PRB.88,89 A PRB containing ZVI was used in a remediation effort at Oak Ridge and is discussed in more detail later in this chapter.
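As an illustration of how barrier thickness, groundwater velocity and reaction rate interact in a passive barrier of this kind, the following sketch applies first-order removal kinetics within the reactive zone. The rate constant, Darcy flux and porosity values are hypothetical placeholders chosen for illustration, not measured ZVI performance data.

```python
import math

def residence_time_days(thickness_m: float, darcy_flux_m_per_day: float,
                        porosity: float) -> float:
    """Time the groundwater spends inside the reactive barrier."""
    seepage_velocity = darcy_flux_m_per_day / porosity  # average linear velocity
    return thickness_m / seepage_velocity

def fraction_remaining(residence_days: float, k_per_day: float) -> float:
    """First-order removal of a contaminant during its transit of the barrier."""
    return math.exp(-k_per_day * residence_days)

# Hypothetical example: 0.6 m thick wall, Darcy flux 0.05 m/day,
# porosity 0.5, first-order removal rate constant 0.5 per day.
t_res = residence_time_days(0.6, 0.05, 0.5)
left = fraction_remaining(t_res, 0.5)
print(f"residence time ~{t_res:.0f} days; ~{100 * (1 - left):.0f}% removed in a single pass")
```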

Advantages of using a PRB include the in situ capture of contaminants, alleviating the need to manage the waste generated by pump-and-treat methods. Additionally, multiple contaminants, such as metals, radionuclides and organics, can be treated simultaneously,90 and both operating and maintenance costs are typically low.91 A review of the long-term performance of PRBs is presented by Henderson and Demond.92