Brief History of Plasma-Facing Materials in Fusion Devices

PWIs have been recognized to be a key issue in the realization of practical fusion power since the beginning of magnetic fusion research. By the time of the first tokamaks in the 1960s in the USSR and subsequently elsewhere, means of reducing the level of carbon and oxygen were being employed.19,20 These included the use of stainless steel vacuum vessels and all-metal seals, vessel baking, and discharge cleaning. Ultimately, these improvements, along with improved plasma confinement, led to the first production of relatively hot and dense plasmas in the T3 tokamak (~1 keV and ~3 × 10¹⁹ m⁻³).21,22 These plasmas, while being cleaner and with low-Z elements fully stripped in the core, still had unacceptable levels of carbon, oxygen, and metallic impurities. The metallic contamination inevitably consisted of wall and limiter materials.

Early in magnetic fusion research, it was recognized that localizing intense PWIs at some type of 'sacrificial' structure was desirable, if only to ensure that more fragile vacuum walls were not penetrated. This led to the birth of the 'limiter,' usually made very robust from refractory material and positioned to ensure a gap of at least several centimeters between the plasma edge and more delicate structures such as bellows, electrical breaks, and vacuum walls. Typical materials used for limiters in these early days included stainless steel in the Adiabatic Toroidal Compressor (ATC)23, ISX-A24, and many others, molybdenum in Alcator A25 and Torus Fontenay-aux-Roses (TFR),26 tungsten in the symmetric tokamak (ST)27 and Princeton Large Torus (PLT),28 and titanium in the poloidal divertor experiment (PDX).29

Poloidal divertors have been very successful at localizing the interactions of plasma ions with the target plate material in a part of the machine geometrically distant from the main plasma, where any impurities released are well screened from the main plasma and return to the target plate.30 By the early 1980s, it was also recognized that, in addition to these functions, the divertor should make it easier to reduce the plasma temperature immediately adjacent to the 'limiting' surface, thus reducing the energies of incident ions and the physical sputtering rate. Complementing this, high plasma and neutral densities were found in the divertor. The high plasma density has several beneficial effects in dispersing the incident power, while the high neutral density makes for efficient pumping. Pumping helps with plasma density control, divertor retention of impurities and, ultimately, in a reactor, helium exhaust.

By the late 1970s, various tokamaks were starting to employ auxiliary heating systems, primarily neutral beam injection (NBI). Experiments with NBI on PLT produced the first thermonuclear-class temperatures.28,31,32 PLT at the time used tungsten limiters, and at high powers and relatively low plasma densities, very high edge plasma temperatures and power fluxes were achieved. This resulted in tungsten sputtering and subsequent core radiation from partially stripped tungsten ions. For this reason, PLT switched its limiter material to nuclear grade graphite. Graphite has the advantage that eroded carbon atoms are fully stripped in the plasma core, thus reducing core radiation. In addition, the surface does not melt if overheated; it simply sublimes. This move to carbon by PLT turned out to be very successful, alleviating the central radiation problem. For these reasons, carbon has tended to be the favored limiter/divertor material in magnetic fusion research ever since.

By the mid-1980s, many tokamaks were operating with graphite limiters and/or divertor plates. In addition, extensive laboratory tests and simulations on graphite had begun, primarily aimed at understanding the chemical reactivity of graphite with hydrogenic plasmas, that is, chemical erosion. Early laboratory results suggested that carbon would be eroded by hydrogenic ions with a chemical erosion yield of Y ≈ 0.1 C/D⁺, a yield several times higher than the maximum physical sputtering yield. Another process, radiation-enhanced sublimation (RES), was discovered at elevated temperatures, which further suggested high erosion rates for carbon. Carbon's ability to trap hydrogenic species in codeposited layers was recognized. These problems, along with graphite's poor mechanical properties in a neutron environment (which had previously been known for many years from fission research33), led to the consideration of beryllium as a plasma-facing material. This was primarily promoted at JET.34 A description of the operation experience to date with Be in tokamak devices is provided in Section 4.19.2.3.
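To give a sense of scale, the gross erosion implied by a yield of Y ≈ 0.1 C/D⁺ can be estimated from the incident ion flux and the atomic density of graphite. The sketch below uses an assumed, order-of-magnitude divertor ion flux (not a value from the text) and neglects redeposition, which in practice reduces net erosion substantially.

```python
# Order-of-magnitude estimate of gross carbon erosion at Y ~ 0.1 C/D+.
# The ion flux below is an assumed, illustrative value; only the yield
# figure comes from the text. Redeposition is ignored (gross erosion).

Y = 0.1                  # chemical erosion yield, C atoms per incident D+ ion
ion_flux = 1.0e22        # assumed incident D+ flux, ions m^-2 s^-1
n_carbon = 1.13e29       # atomic density of graphite, atoms m^-3
                         # (2.26 g cm^-3 / 12 g mol^-1 x Avogadro's number)

erosion_flux = Y * ion_flux              # C atoms removed per m^2 per s
recession_rate = erosion_flux / n_carbon  # surface recession, m s^-1

seconds_per_year = 3.15e7
mm_per_year = recession_rate * seconds_per_year * 1e3
print(f"gross recession ~ {mm_per_year:.0f} mm per year of continuous exposure")
```

Even allowing for large uncertainties in the assumed flux, the result is hundreds of millimeters per year of continuous exposure, which illustrates why a yield of this magnitude was considered alarming for carbon components.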

At present, among divertor tokamaks, carbon is the dominant material only in DIII-D. Alcator C-Mod at the Massachusetts Institute of Technology (MIT), USA,35 uses molybdenum. ASDEX-Upgrade (Axially Symmetric Divertor Experiment) is fully clad with tungsten,36 and JET completed a large enhancement programme in 201137 that included the installation of a beryllium wall and a tungsten divertor. New superconducting tokamaks, such as the Korea Superconducting Tokamak Advanced Research (KSTAR) device in Korea38 and the experimental advanced superconducting tokamak (EAST) in China,39 employ carbon as the material for the in-vessel components, but with provisions to exchange the material later in operation.

The current selection of plasma-facing materials in ITER has been made by compromising among a series of physics and operational requirements: (1) minimum effect of impurity contamination on plasma performance and operation, (2) maximum operational flexibility at the start of operation, and (3) minimum fuel retention for operation in the DT phase. This compromise is met by a choice of three plasma-facing materials at the beginning of operations (Be, C, and W). It is planned to reduce the choices to two (Be and W) before DT operations in order to avoid long-term tritium retention in carbon codeposits during the burning plasma phase. Beryllium has been chosen for the first-wall PFCs to minimize fuel dilution caused by impurities released from these surfaces, which are expected to have the largest contamination efficiency.40-44 Moreover, the consequences of beryllium contamination on fusion performance and plasma operations are relatively mild. This has been demonstrated by experiments in tokamaks (see Section 4.19.2.3).

The main issues related to the use of beryllium in ITER are (1) the possible damage (melting) during transients such as ELMs, disruptions, and runaway electron impact, and its implications for operations, and (2) the codeposition of tritium with beryllium that is eroded from the first wall and deposited at the divertor targets (and possibly also locally redeposited into shadowed areas of the shaped ITER first wall). Both issues are part of ongoing research, the initial results of which are being taken into account in the ITER design so that the influence of these two factors on ITER operation and mission is minimized. This includes ELM control systems based on pellets and resonant magnetic perturbation (RMP) coils, disruption mitigation systems, and higher-temperature baking of the divertor to release tritium from the beryllium codeposited layers. Carbon is selected for the high power flux area of the divertor strike points because of its compatibility with operation over a large range of plasma conditions and the absence of melting under transient loads. Both of these characteristics are considered to be essential during the initial phase of ITER exploitation, in which plasma operational scenarios will require development and transient load control and mitigation systems will need to be demonstrated.