
Computational Thermodynamics: Application to Nuclear Materials


1.17.1 Introduction

Nuclear fuels and structural materials are complex systems that have been very difficult to understand and model despite decades of concerted effort. Even single actinide oxide or metallic alloy fuel forms have yet to be accurately, fully represented. The problem is compounded in fuels with multiple actinides such as the transuranic (TRU) fuels envisioned for consuming long-lived isotopes in thermal or fast reactors. Moreover, a fuel that has experienced significant burnup becomes a very complex, multicomponent, multiphase system containing more than 60 elements. Thus, in an operating reactor the nuclear fuel is a high-temperature system that is continuously changing as fission products are created and actinides consumed, and is also experiencing temperature and composition gradients while simultaneously subjected to a severe radiation field. Although structural materials for nuclear reactors are certainly complex systems that benefit from thermochemical insight, the emphasis and examples in this chapter focus on fuel materials for the reasons noted above. The higher temperatures of fuels quickly drive them to the thermochemical equilibrium state, at least locally, and their compositional complexity benefits from computational thermochemical analysis. Related information on thermodynamic models of alloys can be found in Chapter 2.01, The Actinides Elements: Properties and Characteristics; Chapter 2.07, Zirconium Alloys: Properties and Characteristics; Chapter 2.08, Nickel Alloys: Properties and Characteristics; Chapter 2.09, Properties of Austenitic Steels for Nuclear Reactor Applications; Chapter 1.18, Radiation-Induced Segregation; and Chapter 3.01, Metal Fuel.

A major issue for nuclear fuels is that the original fuel material, whether the fluorite-structure phase for oxide fuels or the alloy for metallic fuels, has variable initial composition and also dissolves significant bred actinides and fission products. Thus, the fuel phase is a complex system even before irradiation and becomes significantly more complex as other elements are generated and dissolve in the crystal structure. Compounding the complexity is that, after significant burnup, sufficient concentrations of fission products are formed to produce secondary phases, for example, the five-metal white phase (molybdenum, rhodium, palladium, ruthenium, and technetium) and perovskite phases in oxide fuels as described in detail in Chapter 2.20, Fission Product Chemistry in Oxide Fuels. Thus, any chemical thermodynamic representation of the fuel must include models for the nonstoichiometry in the fuel phase, dissolution of other elements, and formation of secondary, equally complex phases.

Dealing with the daunting problem of modeling nuclear fuels begins with developing a chemical thermodynamic (or thermochemical) understanding of the material system. Equilibrium thermodynamic states are inherently time independent, with the equilibrium state being that of the lowest total Gibbs energy. Therefore, issues such as kinetics and mass transport are not directly considered. Although the chemical kinetics of interactions are important, they are often less so in the fuel undergoing burnup (fissioning) because of the high temperatures involved and resulting rapid kinetics, and can often be neglected on the time scales involved for the fuel in reactor. The time dependence of mass transport, however, does influence fuel behavior as evidenced by the significant compositional gradients found in high burnup fuel whether metal or oxide, and most notably by attack of the clad by fission products and oxidation by species released from oxide fuel.

Although the equilibrium state provides no information on diffusion or vapor phase transport, it does provide source and sink terms for these phenomena. Thus, the calculation of local equilibrium within fuel volume elements can in principle provide activity/vapor pressure values useful in codes for computing mass flux. Thermochemically derived properties of fuel phases also provide inherent thermal conductivity, source terms for grain growth, potential corrosion mechanisms, and gas species pressures, all important for fuel processing and in-reactor behavior. Thermochemical insights can therefore provide support for modeling species and thermal transport in fuels.
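As a toy illustration of the lowest-Gibbs-energy criterion that underlies such equilibrium calculations, the sketch below minimizes the molar Gibbs energy of an ideal A-B solution by brute-force search. The values of g_A, g_B, and the temperature are illustrative placeholders, not data for any real fuel system.

```python
import math

# Toy sketch (not a fuel model): equilibrium composition of an ideal
# A <-> B mixture found by minimizing the molar Gibbs energy
#   G(x) = (1 - x)*g_A + x*g_B + R*T*[(1 - x)*ln(1 - x) + x*ln(x)]
# All numbers below are illustrative only.
R = 8.314  # J mol^-1 K^-1

def gibbs(x, g_a, g_b, temp):
    return (1 - x) * g_a + x * g_b + R * temp * (
        (1 - x) * math.log(1 - x) + x * math.log(x))

def equilibrium_fraction(g_a, g_b, temp, n=100000):
    xs = [(i + 1) / (n + 2) for i in range(n)]      # avoid x = 0 and x = 1
    return min(xs, key=lambda x: gibbs(x, g_a, g_b, temp))

# Analytically, dG/dx = 0 gives x/(1 - x) = exp(-(g_b - g_a)/(R*T)),
# so the grid search should land on the same composition.
x_eq = equilibrium_fraction(g_a=0.0, g_b=-5000.0, temp=1500.0)
x_exact = 1.0 / (1.0 + math.exp(-5000.0 / (R * 1500.0)))
```

Real fuel thermochemistry codes minimize a total Gibbs energy summed over many nonideal phases, but the principle is the same as in this one-dimensional sketch.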

Manning approximation

In Section 1.18.3.3, we mentioned the difficulty in measuring the L-coefficients of an alloy, especially those involved in fluxes induced by point defect concentration gradients. Diffusion data are even more difficult to obtain for the interstitials. This is probably why most of the RIS models emphasize the effect of vacancy fluxes, with the interstitial contribution assumed to be neutral.

The first model, proposed by Marwick,30 introduced the main trick of the RIS models: factoring the vacancy concentration out of the diffusion coefficients. Expressions of the fluxes were obtained using a basic jump frequency model, which is equivalent to neglecting the cross-terms of the Onsager matrix. Following the random lattice gas diffusion model of Manning,8 correlation effects are added as corrections to the basic jump frequencies. The resulting fluxes30 are similar to eqn [24], except that the c-partial diffusion coefficients (d^c_AV, d^c_AI) are now equal to the partial diffusion coefficients (dAV, dAI), which is equivalent to neglecting cross L-coefficients and the dependence of the equilibrium point defect concentration on alloy composition. However, for the first time, the segregation of a major element, Ni, in concentrated austenitic steel was qualitatively understood in terms of a competition between fast- and slow-diffusing major alloy components.

The partial diffusion coefficients associated with the vacancy mechanism are estimated using the relations of Manning, deducing the L-coefficients from the tracer diffusion coefficients:

Lij = (CiD*i/kBT) [δij + 2CjD*j/(M0 Σk CkD*k)]   [26]

where M0 = 2f0/(1 − f0) and f0 is the geometric correlation factor of the lattice.

 

One observes that the Manning8 relations systematically predict positive partial diffusion coefficients:

daV = D*a/CV   [27]

Moreover, the three L-coefficients of a binary alloy are no longer independent. Both constraints are known to be catastrophic in dilute alloys, while they seem to be capable of satisfactorily describing RIS of major alloy components in austenitic steels.30
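For concreteness, the Manning construction can be sketched numerically. The snippet below builds the Onsager matrix of a binary A-B alloy from tracer diffusion coefficients in the spirit of eqn [26]; the correlation factor, concentrations, and D* values are illustrative assumptions, not data from this chapter.

```python
# Sketch of the Manning relations (eqn [26]) for a binary A-B alloy.
# Inputs are tracer diffusion coefficients D*_i and atomic fractions C_i;
# all numerical values are illustrative only.
F0 = 0.7815                      # geometric correlation factor (fcc lattice)
M0 = 2 * F0 / (1 - F0)

def manning_L(conc, dstar, kT=1.0):
    """Onsager matrix L_ij = (C_i D*_i/kT)[delta_ij + 2 C_j D*_j/(M0 sum_k C_k D*_k)]."""
    species = list(conc)
    total = sum(conc[k] * dstar[k] for k in species)
    L = {}
    for i in species:
        for j in species:
            delta = 1.0 if i == j else 0.0
            L[i, j] = (conc[i] * dstar[i] / kT) * (
                delta + 2 * conc[j] * dstar[j] / (M0 * total))
    return L

L = manning_L({'A': 0.6, 'B': 0.4}, {'A': 2.0e-14, 'B': 5.0e-15}, kT=1.0)
# As noted in the text, Manning systematically predicts positive
# coefficients, and the off-diagonal terms obey Onsager symmetry.
```

The constraint mentioned above, that the three L-coefficients of a binary alloy are no longer independent, is visible here: all four entries are built from the same two products C_i D*_i.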

1.18.4.1.2 Interstitials

Wiedersich et al.102 added to Marwick's model a contribution of the interstitials. The global interstitial flux is described by eqn [24], while the preferential occupation of the dumbbell by one species or the other is deduced from the alloy concentration and the effective binding energies. Such a local equilibrium assumption implies very large interstitial jump frequencies compared to atomic jump frequencies. This kind of model yields an analytical description of stationary RIS (see below eqn [28]). An explicit treatment of the interstitial diffusion mechanism was also investigated. From a microscopic description of the jump mechanism, one derives the kinetic equations associated with each dumbbell composition.109-112 Unlike previous models, there is no local equilibrium assumption, but correlation effects are neglected (except in Bocquet90). They could have used the interstitial diffusion model with the correlations of Bocquet.90 However, due to the lack of data for the interstitials, most of the recent concentrated alloy models neglect the interstitial contribution to RIS.104,113,114

[eqn [28]: analytical expression for the stationary RIS profile; original equation not legible]

where the intrinsic diffusion coefficient is equal to Da = (daVCV + daICI)Φ. The spatial extent of segregation coincides with the region of nonvanishing defect gradients. Note that, in the original paper,102 the partial and c-partial diffusion coefficients were taken to be equal. Such a simplification may change the amplitude of the RIS predictions. In a multicomponent alloy, not only the amplitude, but also the sign of RIS might be affected by this simplification. In dilute alloys, the whole kinetics can be approached by an analytical equation as long as the Kirkendall fluxes resulting from the formation of RIS are neglected.115
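A minimal numerical sketch of these quantities, assuming the intrinsic-diffusion form Da = (daVCV + daICI)Φ given above and using invented illustrative numbers throughout:

```python
# Sketch (illustrative numbers only): intrinsic diffusion coefficient of a
# species from its vacancy and interstitial partial coefficients,
#   D_a = (d_aV*C_V + d_aI*C_I) * phi,
# plus the inverse Kirkendall rule of thumb for a binary A-B alloy: the
# vacancy flux toward a sink enriches the species with the smaller
# vacancy partial diffusion coefficient (the slower vacancy-diffuser).

def intrinsic_d(d_v, d_i, c_v, c_i, phi=1.0):
    return (d_v * c_v + d_i * c_i) * phi

def enriched_at_sink(d_av, d_bv):
    """Which element the vacancy mechanism enriches at a defect sink."""
    return 'A' if d_av < d_bv else 'B'

# Under irradiation the vacancy term usually dominates because C_V >> C_I.
d_a = intrinsic_d(d_v=1.0e-2, d_i=5.0e-3, c_v=1.0e-6, c_i=1.0e-9)
```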

1.18.4.1.4 Concentration-dependent diffusion coefficients

Most of the RIS models assume thermodynamic factors equal to 1, although in the first paper,11 a strong variation of the thermodynamic factor with composition was observed. Similarly, the quantities ZVa and ZIa of the point defect driving forces [14] are expected to depend on local concentration and stress field.11,116 For example, Wolfer11 demonstrated that RIS could affect the partitioning between interstitial and vacancy fluxes and thereby the swelling phenomena. The bias modification might be due to several factors: a Kirkendall flux induced by the RIS formation, or the dependence of the point defect chemical potentials on local composition, including the elastic and chemical effects.

A local-concentration-dependent driving force is due to the local-concentration-dependent atomic jump frequencies. Following this idea, the modified inverse Kirkendall (MIK) model introduces partial diffusion coefficients of the form1

dAV = d⁰AV exp(−EᵐAV/kBT)   [29]

The migration energy is written as the difference between the saddle-point energy and the equilibrium energy. It depends on local composition through pair interactions calculated from thermodynamic quantities such as cohesive energies, vacancy formation energies, and ordering energies. In fact, the present partial diffusion coefficients correspond to a mesoscopic quantity deduced from a coarse-grained averaging of the microscopic jump frequencies. In principle, not only the effective migration energies, but also the mesoscopic correlation coefficients and thermodynamic factors should depend on local concentrations. Nevertheless, the thermodynamic factors, correlation coefficients, and the ZVa factor are assumed to be those of the pure metal A. Recently, a new continuous model suggested a multifrequency formulation of the concentration-dependent partial diffusion coefficients. Instead of averaging the sums of interaction bonds in the exponential argument, every jump frequency corresponding to a given configuration is considered with a configuration probability weight.17 Predictions of the model are compared with direct RIS Monte Carlo simulations that rely on the same atomic jump frequency models. In the two presented examples, the agreement is satisfactory. However, the thermodynamic factor and correlation coefficients are yet to be defined clearly.17
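The bookkeeping behind eqn [29] can be sketched schematically. Below, the migration energy is the saddle-point energy minus an equilibrium energy built from concentration-weighted pair interactions; all energies, the coordination number, and the prefactor are invented illustrative values, not fitted MIK parameters.

```python
import math

K_B = 8.617e-5  # eV K^-1

# MIK-style sketch (eqn [29]): concentration-dependent partial diffusion
# coefficient d_AV = d0 * exp(-E_m / kT), where E_m is the saddle-point
# energy minus the equilibrium energy. Pair energies e_aa, e_ab, the
# saddle energy, and d0 are illustrative placeholders.

def migration_energy(c_b, e_saddle=0.30, e_aa=-0.12, e_ab=-0.15, z=12):
    """E_m = E_saddle - E_eq, with the equilibrium bond energy averaged
    over the local solute concentration c_b (coordination number z)."""
    e_eq = z * ((1 - c_b) * e_aa + c_b * e_ab)
    return e_saddle - e_eq

def d_av(c_b, temp, d0=1.0e-6):
    return d0 * math.exp(-migration_energy(c_b) / (K_B * temp))
```

With these (assumed) numbers, stronger A-B bonds deepen the equilibrium well, raise the migration barrier, and slow the local A-vacancy exchange, which is exactly the concentration feedback the MIK model introduces.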

Light Ions

In many ways, proton irradiation overcomes the drawbacks of electron and neutron irradiation. The penetration depth of protons at a few MeV can exceed 40 µm, and the damage profile is relatively flat such that the dose rate varies by less than a factor of 2 over several tens of micrometers. Further, the depth of penetration is sufficient to assess such properties as irradiation hardening through microhardness measurements, and stress corrosion cracking through crack initiation tests such as the slow strain rate test.

Figure 34 shows schematics of 3.2 MeV proton and 5 MeV Ni2+ damage profiles in stainless steel. Superimposed on the depth scale is a grain structure with a grain size of 10 µm. Note that with this grain size, there are numerous grain boundaries and a significant irradiated volume over which the proton damage rate is flat. The dose rate for proton irradiations is 2-3 orders of magnitude lower than that for electrons or ions, thus requiring only a modest temperature shift, but as it is still 10²-10³ times higher than neutron irradiation, modest doses can be achieved in reasonably short irradiation time.

The disadvantages are that because of the small mass of the proton compared to heavy ions, the recoil energy is smaller and the resulting damage morphology is characterized by smaller, more widely spaced cascades than with ions or neutrons. Also, as only a few MeV are required to surmount the Coulomb barrier for light ions, there is also a minor amount of sample activation that increases with proton energy.

Perspective

The previous sections give a hands-on introduction to the basic techniques of MD simulation. More involved discussions of the technical aspects may be found in the literature.3 Here, we offer comments on several enduring attributes of MD from the standpoint of benefits and drawbacks, along with an outlook on future development.

MD has an unrivalled ability for describing material geometry, that is, structure. The Greek philosopher Democritus (ca. 460 BCE-370 BCE) recognized early on that the richness of our world arose from an assembly of atoms. Even without very sophisticated interatomic potentials, a short MD simulation run will place atoms in quite 'reasonable' locations with respect to each other so that their cores do not overlap. This does not mean that the atomic positions are correct, as there could be multiple metastable configurations, but it provides reasonable guesses. Unlike some other simulation approaches, MD is capable of offering real geometric surprises, that is to say, providing new structures that the modeler would never have expected before the simulation run. For this reason, visualization of atomistic structure at different levels of abstraction is very important, and there are several pieces of free software for this purpose.13,50,51

Just as the ball-and-stick model of DNA by Watson and Crick52 was nothing but an educated guess based on atomic radii and bond angles, MD simulations can be regarded as 'computational Watson and Crick': potentially powerful tools for structural discovery. This remarkable power is both a blessing and a curse for modelers, depending on how it is harnessed. Remember that Watson and Crick had X-ray diffraction data against which to check their structural model. Therefore, it is very important to check the MD-obtained structures against experiments (diffraction, high-resolution transmission electron microscopy, NMR, etc.) and ab initio calculations whenever one can.

Another notable allure of MD simulations is that they create a 'perfect world' that is internally consistent, and all the information about this world is accessible. If an MD simulation is regarded as a numerical experiment, it is quite different from real experiments, which all practitioners know are 'messy' and involve extrinsic factors. Many of these extrinsic factors may not be well controlled, or even properly identified, for instance, moisture in the carrier gas, the initial condition of the sample, the effects of vibration, thermal drift, and so on. The MD 'world' is much smaller, with perfectly controlled initial conditions and boundary conditions. In addition, real experiments can only probe a certain aspect, a small subset of the properties, while MD simulation gives the complete information. When an experimental result does not work out as expected, there could be extraneous factors, such as a vacuum leak or impurity in the reagents, that could be very difficult to trace back. In contrast, when a simulation gives a result that is unexpected, there is always a way to understand it, because one has complete control of the initial conditions, boundary conditions, and all the intermediate configurations. One also has access to the code itself. A simulation, even a wrong one (with bugs in the program), is always repeatable. Not so with actual experiments.

It is certainly true that any interatomic potential used in an MD simulation has limitations, which means the simulation is always an approximation of the real material. It also can happen that the limitations are not as serious as one might think, such as in establishing a conceptual framework for fundamental mechanistic studies. This is because the value of MD is much greater than simply calculating material parameters. MD results can contribute a great deal towards constructing a conceptual framework and some kind of analytical model. Once the conceptual framework and analytical model are established, the parameters for a specific material may be obtained by more accurate ab initio calculations or more readily by experiments. It would be bad practice to regard MD simulation primarily as a black box that can provide a specific value for some property, without a deeper analysis of the trajectories and interpretation in light of an appropriate framework. Such a framework, external to the MD simulation, is often broadly applicable to a variety of materials; for example, the theory and expressions of solute strengthening in alloys based on segregation in the dislocation core. If solute strengthening occurs in a wide variety of materials, then it should also occur in 'computer materials.' Indeed, the ability to parametrically tune the interatomic potential, to see which energetic aspect is more important for a specific behavior or property, is a unique strength of MD simulations compared with experiments. One might indeed argue that the value of science is to reduce the complex world to simpler, easier-to-process models. If one wants only the unadulterated complexity, one can just look at the world without doing anything. Thus, the main value of simulation lies not only in the final result but also in the process, and the role of simulations should be to help simplify and clarify, not just to reproduce, the complexity.
According to this view, the problem with a specific interatomic potential is not that it does not work, but that it is not known which properties the potential can describe and which it cannot, and why.

There are also fundamental limitations in the MD simulation method that deserve comment. The algorithm is entirely classical, that is, it is Newtonian mechanics. As such, it misses relativistic and quantum effects. Below the Debye temperature,53 quantum effects become important. The equipartition theorem from classical statistical mechanics, stating that every degree of freedom possesses kBT/2 kinetic energy, breaks down for the high-frequency modes at low temperatures. In addition to thermal uncertainties in a particle's position and momentum, there are also superimposed quantum uncertainties (fluctuations), reflected by the zero-point motion. These effects are particularly severe for light-mass elements such as hydrogen.54 There exist rigorous treatments for mapping the equilibrium thermodynamics of a quantum system to a classical dynamics system. For instance, path-integral molecular dynamics (PIMD)55,56 can be used to map each quantum particle to a ring of identical classical particles connected by Planck's-constant-dependent springs to represent quantum fluctuations (the 'multi-instance' classical MD approach). There are also approaches that correct for the quantum heat capacity effect with single-instance MD.53,57 For quantum dynamical properties outside of thermal equilibrium, or even for evaluating equilibrium time-correlation functions, the treatment based on an MD-like algorithm becomes even more complex.58-60
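The breakdown of equipartition can be made quantitative with the textbook single-mode (Einstein) result, sketched below as a general illustration rather than anything specific to this chapter: the quantum heat capacity per vibrational mode, relative to the classical value kB, is x²eˣ/(eˣ−1)², with x = ħω/kBT.

```python
import math

# Sketch of where classical equipartition breaks down, via the Einstein
# model of a single harmonic mode. The ratio of the quantum heat capacity
# per mode to the classical value k_B is
#   C_q/C_cl = x^2 e^x / (e^x - 1)^2,  x = hbar*omega / (k_B*T).

def quantum_to_classical_cv(x):
    if x < 1e-8:
        return 1.0                      # classical (high-temperature) limit
    ex = math.exp(x)
    return x * x * ex / (ex - 1.0) ** 2

# x << 1 (T well above the mode temperature): equipartition holds.
# x >> 1 (T well below it): the mode freezes out and MD overcounts its C_v.
```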

It is well recognized in computational materials research that MD has a time-scale limitation. Unlike viscous relaxation approaches that are first order in time, MD is governed by Newtonian dynamics, which is second order in time. As such, inertia and vibration are essential features of MD simulation. The necessity to resolve atomic-level vibrations requires the MD time step to be of the order of picosecond/100, where a picosecond is the characteristic time period of the highest-frequency oscillation mode in typical materials, and about 100 steps are needed to resolve a full oscillation period with sufficient accuracy. This means that the typical timescale of MD simulation is at the nanosecond level, although with massively parallel computers and linear-scaling parallel programs such as LAMMPS,49 one may push the simulations to the microsecond to millisecond level nowadays. A nanosecond-level MD simulation is often enough for the convergence of physical properties such as elastic constants, thermal expansion, free energy, thermal conductivity, and so on. However, chemical reaction processes, diffusion, and mechanical behavior often depend on events that are 'rare' (seen at the level of atomic vibrations) but important, for instance, the emission of a dislocation from a grain boundary or surface.61 There is no need to track atomic vibrations, important as they are, for time periods much longer than a nanosecond for any particular atomic configuration. Important conceptual and algorithmic advances were made in the so-called Accelerated Molecular Dynamics approaches,62-66 which filter out repetitive vibrations and are expected to become more widely used in the coming years.
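The period/100 rule of thumb can be demonstrated with a minimal velocity Verlet integrator on a single harmonic mode, used here as a stand-in for the fastest lattice vibration; all numbers are illustrative.

```python
import math

# Sketch: why the MD step must resolve the fastest vibration. A velocity
# Verlet integrator applied to a 1D harmonic oscillator (unit mass,
# frequency omega) conserves energy well at dt = period/100, but becomes
# unstable once omega*dt exceeds 2 (here dt = period/3 gives omega*dt ~ 2.1).

def verlet_energy_drift(omega, dt, steps):
    x, v = 1.0, 0.0                               # unit amplitude, at rest
    e0 = 0.5 * v * v + 0.5 * omega ** 2 * x * x   # initial total energy
    a = -omega ** 2 * x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -omega ** 2 * x
        v += 0.5 * (a + a_new) * dt               # velocity Verlet update
        a = a_new
    e1 = 0.5 * v * v + 0.5 * omega ** 2 * x * x
    return abs(e1 - e0) / e0                      # relative energy drift

period = 2 * math.pi                              # omega = 1
drift_fine = verlet_energy_drift(1.0, period / 100, 200)
drift_coarse = verlet_energy_drift(1.0, period / 3, 200)
```

The fine-step run shows only a bounded, tiny energy fluctuation (Verlet is symplectic), while the coarse-step run diverges, which is the practical reason MD time steps are pinned to roughly one-hundredth of the fastest vibrational period.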

…Above all, it seems to me that the human mind sees only what it expects.

These are the words of Emilio Segrè (Nobel Prize in Physics, 1959, for the discovery of the antiproton) in a historical account of the discovery of nuclear fission by O. Hahn and F. Strassmann,67 which led to a Nobel Prize in Chemistry, 1944, for Hahn. Prior to the discovery, many well-known scientists had worked on the problem of bombarding uranium with neutrons, including Fermi in Rome, Curie in Paris, and Hahn and Meitner in Berlin. All were looking for the production of transuranic elements (elements heavier than uranium), and none were open-minded enough to recognize the fission reaction. As atomistic simulation can be regarded as an 'atomic camera,' it would be wise for anyone who wishes to study nature through modeling and simulation to keep an open mind when interpreting simulation results.

Interstitial clustering

The dependence of in-cascade interstitial clustering on cascade energy is shown in Figure 12 for simulation temperatures of 100, 600, and 900 K, where the average number of interstitials in clusters of size two or larger at each energy has been divided by the total number of surviving interstitials in part (a), and by the number of displaced atoms predicted by the NRT model for that energy in part (b). The data points and error bars in Figure 12 indicate the mean and standard error at each energy. The error bars can be used to make two significant comments. First, the relative scatter is much higher at lower energies, which is similar to the case of defect survival shown in Figure 10. Second, comparing again with Figure 10, the standard errors about the mean for interstitial clustering are greater at each energy than they are for defect survival.

The fact that the interstitial clustering fraction exhibits greater variability between cascades at a given energy than does defect survival is essentially related to the variety of defect configurations that are possible. A given amount of kinetic energy tends to produce a given number of stable point defects; this simple observation is embedded in the NRT model, that is, the number of predicted defects is linear in the ratio of the energy available to the energy per defect. However, any specific number of point defects can be arranged in many different ways.
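The NRT estimate invoked here is simple enough to state as code. The sketch below uses the standard NRT form with the conventional displacement threshold of ~40 eV for iron (an assumption; specific studies may use other values) and, for simplicity, takes the cascade energy directly as the damage energy.

```python
# Sketch of the NRT displacement estimate referred to in the text: the
# number of stable defects is linear in the ratio of the damage energy
# available to the energy needed per defect,
#   nu_NRT = 0.8 * T_dam / (2 * E_d).
# E_d ~ 40 eV is the conventional threshold for iron (assumed here); the
# cascade energy is taken as the damage energy for simplicity.

E_D_IRON = 40.0  # displacement threshold energy, eV

def nrt_displacements(damage_energy_kev, e_d=E_D_IRON):
    t_dam = damage_energy_kev * 1000.0            # keV -> eV
    return 0.8 * t_dam / (2.0 * e_d)

# A 10 keV cascade in iron thus corresponds to 0.8*10000/80 = 100 NRT
# displacements; MD defect survival is then reported as a fraction of this.
```

The linearity noted in the text is explicit: doubling the cascade energy exactly doubles the NRT prediction, while the arrangement of those defects into clusters is what varies from cascade to cascade.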

At the lowest energies, where relatively few defects are created, some cascades produce no interstitial clusters, and this is primarily responsible for the larger error bars at these energies. The average fraction of interstitials in clusters is about 20% of the NRT displacements above 5 keV, which corresponds to about 60% of the total surviving interstitials. Although it is not possible to discern a systematic effect of temperature below 10 keV, there is a trend toward greater clustering with increasing temperature at higher energies. This can be more clearly seen in Figure 12(a), where the ratio of clustered interstitials to surviving interstitials is shown, and in the high-energy values in Table 2. This effect of temperature on interstitial clustering in these adiabatic simulations is consistent with the observations of Gao and coworkers77 mentioned above, that is, they found that the interstitial clustering fraction increases with temperature.

The interstitial cluster size distributions exhibit a consistent dependence on cascade energy and temperature as shown in Figure 13 (where a size of 1 denotes the single interstitial). The cascade energy dependence at 100 K is shown in Figure 13(a), where the size distributions from 10 and 50 keV are included. The influence of cascade temperature is shown for 10 keV cascades in Figure 13(b), and for 20 keV cascades in Figure 13(c). All interstitial clusters larger than size 10 are combined into a single class in the histograms in Figure 13. The interstitial cluster size distribution shifts to larger sizes as either the cascade energy or temperature increases. An increase in the clustering fraction at the higher temperatures is most clearly seen as a decrease in the number of mono-interstitials. Comparing Figures 13(b) and 13(c) demonstrates that the temperature dependence increases as the cascade energy increases. The largest interstitial cluster observed in these simulations was contained in a 20 keV cascade at 600 K as shown in Figure 14. This large cluster was composed of 33 interstitials (<111> crowdions), and exhibited considerable mobility via what appeared to be 1D glide in a <111> direction.64,66

Although the number of point defects produced and the fraction of interstitials in clusters was shown to be relatively independent of neutron energy spectrum,82 the increase in the number of large clusters at higher energies suggested that the in-cascade cluster size distributions may exhibit more sensitivity to neutron energy spectrum than did these other parameters. At 100 K, there are no interstitial clusters larger than 8 for cascade energies of 10 keV or less. Therefore, the fraction of interstitials in clusters of 10 or more was chosen as an initial parameter for evaluation of the size distributions. This partial interstitial clustering fraction is shown in Figure 15. As the large clusters are relatively uncommon, the fraction of interstitials contained in them is correspondingly small. This leads to the relatively large standard errors shown in the figure. However, it is clear that the energy dependence of the formation of these large clusters is much stronger than simply the total fraction of interstitials in clusters. Infrequent large clusters such as the 33-interstitial cluster shown in Figure 14 play a significant role in the sharp increase in this clustering fraction observed between 100 and 600 K for the 20 keV cascades.

[Figure 13: average interstitial cluster size distributions for cascades at 100 K, and for 10 keV cascades at 100 and 900 K; iron cascade simulations, mean and standard error at 100, 600, and 900 K.]

Figure 14 Residual defects at ~30 ps from a 20 keV cascade at 600 K containing a 33-interstitial cluster.

One unusual observation reported by Wooding and coworkers60 and Gao and coworkers86 was that some of the interstitial clusters exhibited a complex 3D morphology rather than collapsing into the planar dislocation loops which are expected to have lower energy. Similar clusters have been seen in materials such as copper, although they appear to be less frequent in copper.54 The existence of such clusters has been confirmed with interatomic potentials that were developed more recently and with ab initio calculations.87 Representative examples of these clusters are shown in Figure 16, where a ring-like four-interstitial cluster is shown in (a) and a five-interstitial cluster is shown in (b). Unlike the mobile clusters that are composed of [111] crowdions such as the one shown in Figure 14, the SIA clusters in Figure 16 are not mobile. As such, they have the potential for long lifetimes in the microstructure and may act as nucleation sites for larger interstitial-type defects. Figure 17 shows a somewhat larger sessile cluster containing eight SIA. This particular cluster was examined in detail by searching a large number of low-order crystallographic projections in an attempt to find a projection in which it would appear as a loop. Such a projection could not be found. Rather, the cluster was clearly 3D with a single di-, tri-, and di-interstitial on adjacent, close-packed (110) planes as shown in the figure. The eighth interstitial is a [110] dumbbell that lies perpendicular to the others and on the left side in Figure 17(a). Figures 17(b)-17(d) are [101] projections through the three center (101) planes in Figure 17(a).

It is possible that the typical 10-15 ps MD simulation was not sufficient for the cluster to reorient and collapse. To examine this possibility, the simulation time of a 10 keV cascade at 100 K that contained a similar eight SIA cluster was continued up to 100 ps. Very little cluster restructuring was seen over the time from 10 to 100 ps. In fact, the cluster had coalesced into nearly its final configuration by 10 ps. Gao and coworkers86 carried out a more systematic investigation of sessile cluster configurations with extended simulations at 300 and 500 K. They found that many sessile clusters had converted to glissile within a few hundred picoseconds, but at least one eight SIA cluster remained sessile for ~500 ps even after aging at temperatures up to 1500 K. Given the impact that stable sessile clusters would have on the longer timescale microstructural evolution as discussed in Chapter 1.13, Radiation Damage Theory, further research is needed to characterize the long-term evolution of cascade-created point defect clusters. It is significant to point out that the conversion of glissile SIA clusters into sessile clusters has also been observed. For example, in a 20 keV cascade at 100 K, a glissile eight SIA cluster was trapped and converted into a sessile nine SIA cluster when it reacted with a single [110] dumbbell. The simulation was continued for more than 200 ps and the cluster remained sessile.

The mechanism responsible for interstitial clustering has not been fully understood. For example, it has not been possible to determine whether the motion and agglomeration of individual interstitials and small interstitial clusters during the cascade event contributes to the formation of the larger clusters that are observed at the end of the event. Alternate clustering mechanisms in the literature include the suggestion by Diaz de la Rubia and Guinan88 that large clusters could be produced by a loop punching mechanism. Nordlund and coworkers62 proposed a 'liquid isolation' model in which solidification of a melt zone isolates a region with excess atoms.

However, a new mechanism has recently been elucidated by Calder and coworkers,80 which seems to explain how both vacancy and interstitial clusters are formed, particularly the less frequent large clusters. Their analysis of cluster formation followed an investigation of the effects of PKA mass and energy, which demonstrated that the probability of producing large vacancy and SIA clusters increases as these parameters increase.89 The conditions of this study produced a unique dataset that motivated the effort to unravel how the clusters were produced. They developed a detailed visualization technique that enabled them to connect the individual displacements of atoms that resulted in defect formation by comparing the start and end positions of atoms in the simulation cell. This defined a continuous series of links between each vacancy and interstitial that were ultimately produced by a chain of displacements. These chains could be displayed in what are called lines of 'spaghetti.'80 Regions of tangled spaghetti define a volume in which atoms are highly agitated and a certain fraction of which are displaced. Stable interstitials and interstitial clusters are observed on the surface of this volume.

From their analysis of cascade development and the final damage state, Calder and coworkers were able to demonstrate a correlation between the production of large SIA clusters and a process taking place very early in the development of a cascade. Specifically, they established a direct connection between such clusters and the formation of a hypersonic recoil atom that passed through the supersonic pressure wave created by the initiation of the cascade. This highly energetic recoil may create a subcascade and a secondary supersonic shockwave at an appropriate distance from the primary shockwave. In this case, SIA clusters tend to be formed at the point where the primary and secondary shockwaves interfere with one another. This process is illustrated in Figure 18.80 Atoms may be transferred from the primary shockwave volume into the secondary shockwave volume, creating an interstitial supersaturation in the latter and a vacancy supersaturation in the former. In this case, the mechanism of creating large SIA clusters early in the cascade process correspondingly leads to the formation of large vacancy clusters by the end of the thermal spike phase, that is, after several picoseconds. It is notable that the location of the SIA cluster is determined well before the onset of the thermal spike phase, by about 0.1 ps. Calder's spaghetti analysis provides the opportunity for improved definition of parameters such as cascade volume and energy density; the interested reader is directed to Calder and coworkers80 for more details.

Fokker-Planck equation

In the case where the rates P(x, t), Q(x, t) are sufficiently smooth, it is reasonable to approximate them by continuous functions of the size variable x and to replace the right-hand sides of eqns [18] and [19] by continuous functions of two variables, J(x, t) and f(x, t). The Fokker-Planck equation can be obtained from the ME by expanding the right-hand side of eqn [18] in a Taylor series, omitting derivatives higher than second order:

∂f(x, t)/∂t = G(x) − ∂/∂x [V(x, t) f(x, t)] + ∂²/∂x² [D(x, t) f(x, t)]   [33]

where

V(x, t) = P(x, t) − Q(x, t)

D(x, t) = ½ [P(x, t) + Q(x, t)]   [34]

The first term in eqn [33] describes the hydrodynamic-like flow of clusters, whereas the second term accounts for their diffusion in size-space. Note that for clusters of large enough size, when the cluster evolution is mainly driven by the hydrodynamic term, the functions P(x, t), Q(x, t) are smooth; hence the ME and F-P equations are equally accurate. For sufficiently small cluster sizes, when the diffusion term plays the leading role, eqn [33] provides only a poor description.67,68,83 As cluster nucleation normally takes place at the beginning of irradiation, that is, when the clusters are small, results obtained using the F-P equation are expected to be less accurate than those of the ME.
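The drift/diffusion structure of eqns [33] and [34] can be illustrated with a small numerical sketch. The rate functions, grid, and initial distribution below are hypothetical placeholders, not values from the chapter; the point is only how V and D are assembled from P and Q and how the equation is advanced on a discrete size grid.

```python
import numpy as np

def fp_step(f, x, P, Q, G, dt):
    """One explicit Euler step of df/dt = G - d/dx[V f] + d2/dx2[D f]."""
    dx = x[1] - x[0]
    V = P - Q                      # drift coefficient, eqn [34]
    D = 0.5 * (P + Q)              # diffusion coefficient, eqn [34]
    drift = np.gradient(V * f, dx)                 # d/dx [V f]
    diff = np.gradient(np.gradient(D * f, dx), dx)  # d2/dx2 [D f]
    return f + dt * (G - drift + diff)

x = np.linspace(1.0, 100.0, 200)        # continuous cluster-size variable
f = np.exp(-(x - 10.0) ** 2 / 8.0)      # hypothetical initial SDF
P = 1.0 + 0.01 * x                      # hypothetical absorption rate
Q = np.full_like(x, 0.9)                # hypothetical emission rate
G = np.zeros_like(x)                    # no nucleation source in this sketch
f_new = fp_step(f, x, P, Q, G, dt=0.01)
```

An explicit Euler step is used only for brevity; a production solver would need an implicit or flux-conserving scheme, consistent with the accuracy caveats for small clusters noted above.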

1.13.4.4.1 Mean-size approximation

In eqn [24], the term with V(x, t) is responsible for the increase of the mean cluster size, while the term with D(x, t) is responsible for cluster nucleation and broadening of the SDF. For large mean cluster size, most of the clusters are stable and the diffusion term is negligible. This is the case when the nucleation stage is over and the cluster density does not change significantly with time. A reasonably accurate description of the cluster evolution is then given by the mean-size approximation, fc(x, t) = Nc δ(x − ⟨x(t)⟩), where δ(ξ) is the Dirac delta function and Nc is the cluster density. The rate of change of the mean size in this case can be calculated by omitting the last term on the right-hand side of eqn [24], multiplying both sides by x, integrating over x from 0 to infinity, and taking into account that f(x = 1, t) = 0 and f(x = 0, t) = 0:

d⟨x⟩/dt = V(⟨x⟩, t)   [35]
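The mean-size approximation reduces the problem to a single ordinary differential equation, which is trivial to integrate. In the sketch below the rate functions are hypothetical (an absorption rate that grows with cluster radius and a constant emission rate); only the structure of eqn [35] is taken from the text.

```python
import numpy as np

def V(x):
    """Hypothetical drift V = P - Q for a cluster of mean size x."""
    P = 2.0 * x ** (1.0 / 3.0)   # absorption ~ cluster radius (assumed)
    Q = 1.0                      # constant emission rate (assumed)
    return P - Q

def mean_size(x0, dt, steps):
    """Forward-Euler integration of d<x>/dt = V(<x>), eqn [35]."""
    x = x0
    history = [x]
    for _ in range(steps):
        x += dt * V(x)
        history.append(x)
    return np.array(history)

traj = mean_size(x0=2.0, dt=0.01, steps=1000)
# With these assumed rates V > 0, so the mean size grows monotonically.
```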

Advanced KMC Methods

To overcome some of the limitations described above, two techniques have recently been derived: the first-passage Green's function method and synchronous parallel KMC. The first-passage Green's function approach has been used successfully in various subareas of computational science but has so far escaped the widespread attention of materials scientists. Preliminary work indicates that constructive use of the first-passage Green's function approach for modeling radiation microstructures is possible46,47 but will require considerable effort to develop a time-dependent formulation of the method. However, the potential payoff is well worth it: preliminary estimates show that it should be possible to boost the effective performance of (exact) Monte Carlo simulations by several orders of magnitude. The speedup of the simulation with the first-passage Green's function approach can be estimated from a rough argument based on the number of events required to process the diffusion of a vacancy from one void to another during Ostwald coarsening. For example, voids in irradiated alloys are separated by ~0.2 μm. As the atomic jump distance is typically on the order of 0.25 nm, the ratio between the required diffusion length and the atomic jump distance is around 10³. The ripening process consists mainly of vacancies detaching from one void and diffusing to a neighbor. If this is done with a direct hop simulation, then ~10⁶ random-walk diffusion hop events would be required from vacancy emission to absorption. Each such event requires the generation of one or more random numbers and changes to the bookkeeping tables that store current positions for each defect. Using the first-passage Green's function algorithm, the vacancy in most cases will reach the vicinity of a void in several (10-30) steps, each of which requires less than 10 times the number of calculations of a simple hop. Thus, the simulation of the ripening of voids (which is similar to modeling radiation-induced precipitation processes) at this spacing would require about three orders of magnitude less computer time than current KMC programs.
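The estimate above can be checked with simple arithmetic. The void spacing and jump distance are the values quoted in the text; the 30-step, 10x-cost worst case for the first-passage method is the upper bound given there.

```python
# Back-of-the-envelope check of the first-passage speedup estimate.
L = 0.2e-6            # void spacing, m (~0.2 um, from the text)
a = 0.25e-9           # atomic jump distance, m (from the text)

n_ratio = L / a       # length ratio, ~800, i.e., on the order of 10^3
hops = n_ratio ** 2   # random-walk hops, emission to absorption, ~10^6

fp_cost = 30 * 10     # worst case: 30 first-passage steps, each <10x a hop
speedup = hops / fp_cost   # ~2000x, i.e., about three orders of magnitude
```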

At the moment, the first-passage Green's function KMC appears to work very well for some specific cases such as the one mentioned above, but when one tries to model a more realistic case, such as the continuous introduction of displacement cascades in which all the defects are very close to each other and diffuse with very different diffusivities, the possible 'protective domains' become very small and the technique is not very efficient.

One additional difficulty with KMC simulations is that the current state-of-the-art simulation codes utilize serial computing only. Thus, there exists a critical need to accelerate the maturation of multiscale modeling of fusion reactor materials, namely the development of advanced and highly efficient Monte Carlo algorithms for the simulation of materials evolution when controlling processes occur with characteristic time scales between 10⁻² and ~10⁰ s. There has recently been some activity associated with synchronous parallel KMC48; however, the problem of dealing with highly inhomogeneous regions and species diffusing at rates that are disparate by many orders of magnitude tends to greatly reduce the parallel algorithm's performance. Clearly, more effort is needed on the development of advanced algorithms for KMC simulations. Further, it is imperative that the algorithms developed be highly efficient on today's massively parallel computing architectures.
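The disparate-rate problem can be seen directly in a minimal residence-time (BKL-style) KMC event-selection sketch. The rates below are hypothetical; the point is that the time advanced per event scales as 1/R_total, so one fast species dominates event selection and stalls the slow dynamics.

```python
import math
import random

def kmc_step(rates, rng=random):
    """Pick one event with probability rate_i / R_total; return (index, dt)."""
    r_total = sum(rates)
    u = rng.random() * r_total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            break
    dt = -math.log(1.0 - rng.random()) / r_total   # exponential waiting time
    return i, dt

# One fast interstitial (1e6/s, assumed) alongside two slow vacancies (1/s):
rates = [1.0e6, 1.0, 1.0]
picks = [kmc_step(rates)[0] for _ in range(1000)]
# Nearly every selected event is the fast species, while each event advances
# time by only ~1/R_total -- the serial bottleneck described in the text.
```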

At the moment, no single KMC method can efficiently treat the complex microstructures and kinetic evolution associated with radiation effects in multicomponent materials, nor efficiently balance the computational requirements to treat inhomogeneous domains consisting of very different defect densities. It is possible that a combination of different techniques in the course of a single simulation will be the most efficient pathway.

Thermochemical Equilibrium Computational Codes

There are a variety of software packages that will perform chemical equilibrium calculations for complex systems such as nuclear fuels. These have become quite versatile, able to compute the thermodynamics of specific reactions as well as determine global equilibria at uniform temperature or in an adiabatic system. They also provide output through internal postprocessors or by exporting to text or spreadsheet applications. Output takes a variety of forms, including activities/partial pressures, compositions within solution phases, and phase amounts, and can include plotting of phase and predominance diagrams. The commercial products include FactSage62 and ThermoCalc,63 which also contain optimization modules that allow use of activity and phase-equilibrium data to obtain thermochemical values and fit them to solution models. Other products include Thermosuite,64 MTDATA,65 PANDAT,66 HSC,67 and MALT.57
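The core task such packages perform, Gibbs energy minimization subject to element balances, can be illustrated with a toy calculation. The species set, standard-state values, and use of SciPy's SLSQP solver are illustrative assumptions, not how FactSage, ThermoCalc, or the other codes are implemented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy ideal-gas H/O system: minimize total G/RT subject to element balances.
species = ["H2", "O2", "H2O"]
g0_RT = np.array([0.0, 0.0, -25.0])   # hypothetical mu0/RT values
A = np.array([[2, 0, 2],              # H atoms per molecule of each species
              [0, 2, 1]])             # O atoms per molecule of each species
b = np.array([2.0, 1.0])              # total moles of H and O available

def gibbs(n):
    """Total Gibbs energy (in units of RT) of an ideal mixture."""
    n = np.clip(n, 1e-12, None)       # keep logarithms finite near bounds
    return float(np.sum(n * (g0_RT + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),   # feasible start
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],
               bounds=[(0.0, None)] * 3, method="SLSQP")
n_eq = res.x   # equilibrium amounts; strongly favors H2O for this g0 choice
```

Real fuel calculations add nonideal solution models and many competing phases, but the constrained-minimization structure is the same.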

1.17.5 Outlook

Computational thermodynamics as applied to nuclear materials has already substantially contributed to the development of nuclear materials ranging from oxide and metal fuel processing to assessing clad alloy behavior. Yet, in both development of data and models for complex fuel and fuel-fission product systems and in the application of equilibrium calculations to reactor modeling and simulation, there is much to accomplish. Databases containing accurate representations of both metallic and oxide fuels with minor actinides are lacking, and even less is known about more advanced fuel concepts such as carbide and nitride fuels. Representations for multielement fission products dissolved in fuel phases or as secondary phases generated after considerable burnup are also unavailable, although some simple binary and ternary systems have been determined. These are critically needed as they will help govern activities in metal and oxide fuels, influencing thermal conductivity and providing source terms for transport of important species such as those containing iodine.

The other broad area that needs significant attention is the development of algorithms for computing chemical equilibria. Although there are robust and accurate codes for computing equilibria within the software packages discussed in Section 1.17.6, these suffer from relatively slow execution. That is not a problem for the codes noted above, where only a few calculations are required at any time. However, incorporating equilibrium-state calculations in broad fuel modeling and simulation codes with millions of nodes to determine the spatial distribution of phases, solution compositions (e.g., local O/M in oxide fuel), and local activities poses a different problem. Current algorithms are far too slow for such use, and therefore new techniques need to be developed to accomplish these calculations within the larger modeling and simulation codes.68

Acknowledgments

The author wishes to thank Steve Zinkle, Stewart Voit, and Roger Stoller for their valuable comments. Research supported by the U.S. Department of Energy, Office of Nuclear Energy, under the Fuel Cycle Research and Development and Nuclear Energy Advanced Modeling and Simulation Programs. This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes.

Defects

Point defects are of course very important in nuclear materials, as they are created either by irradiation or by the accommodation of impurities (e.g., fission products (FP)) (see Chapter 1.02, Fundamental Point Defect Properties in Ceramics and Chapter 1.03, Radiation-Induced Effects on Microstructure). More generally, they play a tremendous role in the kinetic properties of materials. It is therefore not surprising that countless ab initio studies exist on point defects in nuclear materials. Most of them are based on a supercell approach, in which the unit cell of the perfect crystal is periodically repeated up to the largest possible simulation box. A point defect is then introduced, and the structure is allowed to relax. By comparison with the defect-free structure, one can calculate the formation energy of the defect, which drives its equilibrium concentration. Some care must be taken in writing this difference, as the number and types of atoms should be preserved in the process. Point defects are also the perfect object for saddle-point calculations, which give the energies that drive their kinetic properties. Ab initio methods permit accurate calculation of these energies and also consideration (for insulating materials) of the various possible charge states of the defects. Such calculations have shown that the properties of defects can vary greatly with their charge states.
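The atom-count bookkeeping described above can be made concrete in a short sketch: atoms removed from (or added to) the supercell are exchanged with a reservoir at a reference chemical potential. The supercell energies and chemical potential below are hypothetical numbers chosen only to illustrate the arithmetic, not DFT results.

```python
def formation_energy(e_defect, e_perfect, removed, chem_pots):
    """Defect formation energy with atom counts preserved via mu terms.

    E_f = E(defective cell) - E(perfect cell) + sum_i dn_i * mu_i,
    where dn_i atoms of species i are removed to the reservoir.
    removed: {species: atoms removed}; chem_pots: {species: mu per atom, eV}.
    """
    e_f = e_defect - e_perfect
    for sp, n in removed.items():
        e_f += n * chem_pots[sp]
    return e_f

# Hypothetical 128-atom bcc-Fe supercell totals (eV):
e_perfect = -1088.0            # 128 Fe atoms
e_vacancy = -1077.4            # 127 Fe atoms plus one vacancy
mu = {"Fe": -8.5}              # Fe atom in its bulk bcc reference state
E_f = formation_energy(e_vacancy, e_perfect, {"Fe": 1}, mu)
# Gives 2.1 eV for these assumed inputs.
```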

Many different kinds of defects can be considered. A list of possible defects follows, with the characteristic associated thermodynamic and kinetic energies.

Interaction of point defects with alloying elements or impurities in iron

The diffusion of point defects produced by irradiation may induce fluxes of solutes, for example, toward or away from defect sinks, depending on the defect-solute interactions. DFT is again a very powerful tool to predict such interactions, which can then be used in kinetic models. This approach is also useful in the absence of irradiation, and a very interesting example has been obtained in the simulation of the first stages of the coherent precipitation of copper in bcc-Fe. DFT calculations predicted that the vacancy formation energy in metastable bcc-Cu (which is not known experimentally, since bulk Cu is fcc) is 0.9 eV, that is, much smaller than that in bcc iron, namely 2.1 eV. This leads to strong trapping of vacancies by the Cu precipitates. As a result, precipitates containing up to several tens of copper atoms are, quite surprisingly, predicted to be much more mobile than individual copper atoms in the iron matrix.26 Another very illustrative example is given by the study of atomic transport via interstitials in dilute Fe-P alloys. DFT results predict that Fe-P mixed dumbbells are highly mobile but that they can be deeply trapped by a substitutional P atom.79 A systematic study of the interaction of monovacancies and self-interstitials with all transition-metal solutes has been reported recently (see Figure 8).80
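The strength of the vacancy trapping implied by the 0.9 eV (bcc-Cu) versus 2.1 eV (bcc-Fe) formation energies quoted above can be estimated with a simple Boltzmann argument; this rough sketch ignores entropic prefactors and uses only the two energies from the text.

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def vacancy_ratio(ef_trap, ef_matrix, T):
    """Equilibrium vacancy-concentration ratio, precipitate vs. matrix,
    from c ~ exp(-E_f / kT) (vibrational entropy terms neglected)."""
    return math.exp((ef_matrix - ef_trap) / (K_B * T))

ratio = vacancy_ratio(ef_trap=0.9, ef_matrix=2.1, T=600.0)
# At 600 K the 1.2 eV difference favors the Cu precipitate by ~1e10,
# consistent with the strong trapping described in the text.
```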