Perspective

The previous sections give a hands-on introduction to the basic techniques of MD simulation. More involved discussions of the technical aspects may be found in the literature.3 Here, we offer comments on several enduring attributes of MD from the standpoint of benefits and drawbacks, along with an outlook on future development.

MD has an unrivalled ability for describing material geometry, that is, structure. The Greek philosopher Democritus (ca. 460–370 BCE) recognized early on that the richness of our world arose from an assembly of atoms. Even without very sophisticated interatomic potentials, a short MD simulation run will place atoms in quite ‘reasonable’ locations with respect to each other, so that their cores do not overlap. This does not mean that the atomic positions are correct, as there could be multiple metastable configurations, but it provides reasonable guesses. Unlike some other simulation approaches, MD is capable of offering real geometric surprises, that is to say, providing new structures that the modeler would never have expected before the simulation run. For this reason, visualization of atomistic structure at different levels of abstraction is very important, and there are several pieces of free software for this purpose.13,50,51

Just as the ball-and-stick model of DNA by Watson and Crick52 was nothing but an educated guess based on atomic radii and bond angles, MD simulations can be regarded as ‘computational Watson and Crick’ that are potentially powerful for structural discovery. This remarkable power is both a blessing and a curse for modelers, depending on how it is harnessed. Remember that Watson and Crick had X-ray diffraction data against which to check their structural model. Therefore, it is very important to check MD-obtained structures against experiments (diffraction, high-resolution transmission electron microscopy, NMR, etc.) and ab initio calculations whenever one can.

Another notable allure of MD simulation is that it creates a ‘perfect world’ that is internally consistent, and all the information about this world is accessible. If MD simulation is regarded as a numerical experiment, it is quite different from real experiments, which all practitioners know are ‘messy’ and involve extrinsic factors. Many of these extrinsic factors may not be well controlled, or even properly identified, for instance, moisture in the carrier gas, the initial condition of the sample, the effects of vibration, thermal drift, and so on. The MD ‘world’ is much smaller, with perfectly controlled initial conditions and boundary conditions. In addition, real experiments can probe only a certain aspect, a small subset of the properties, while an MD simulation gives complete information. When an experimental result does not work out as expected, there could be extraneous factors, such as a vacuum leak or an impurity in the reagents, that can be very difficult to trace. In contrast, when a simulation gives an unexpected result, there is always a way to understand it, because one has complete control of the initial conditions, the boundary conditions, and all the intermediate configurations. One also has access to the code itself. A simulation, even a wrong one (with bugs in the program), is always repeatable. Not so with actual experiments.

It is certainly true that any interatomic potential used in an MD simulation has limitations, which means the simulation is always an approximation of the real material. It can also happen that the limitations are not as serious as one might think, such as in establishing a conceptual framework for fundamental mechanistic studies. This is because the value of MD is much greater than simply calculating material parameters. MD results can contribute a great deal towards constructing a conceptual framework and some kind of analytical model. Once the conceptual framework and analytical model are established, the parameters for a specific material may be obtained by more accurate ab initio calculations or more readily by experiments. It would be bad practice to regard MD simulation primarily as a black box that can provide a specific value for some property, without a deeper analysis of the trajectories and interpretation in light of an appropriate framework. Such a framework, external to the MD simulation, is often broadly applicable to a variety of materials; for example, the theory and expressions of solute strengthening in alloys based on segregation in the dislocation core. If solute strengthening occurs in a wide variety of materials, then it should also occur in ‘computer materials.’ Indeed, the ability to parametrically tune the interatomic potential, to see which energetic aspect is more important for a specific behavior or property, is a unique strength of MD simulations compared with experiments. One might indeed argue that the value of science is to reduce the complex world to simpler, easier-to-process models. If one wants only the unadulterated complexity, one can just look at the world without doing anything. Thus, the main value of simulation should lie not only in the final result but also in the process, and the role of simulations should be to help simplify and clarify, not just reproduce, the complexity. According to this view, the problem with a specific interatomic potential is not that it does not work, but that it is not known which properties the potential can describe and which it cannot, and why.

There are also fundamental limitations in the MD simulation method that deserve comment. The algorithm is entirely classical, that is, it is Newtonian mechanics. As such, it misses relativistic and quantum effects. Below the Debye temperature,53 quantum effects become important. The equipartition theorem of classical statistical mechanics, which states that every degree of freedom possesses kBT/2 of kinetic energy, breaks down for the high-frequency modes at low temperatures. In addition to thermal uncertainties in a particle’s position and momentum, there are also superimposed quantum uncertainties (fluctuations), reflected by the zero-point motion. These effects are particularly severe for light-mass elements such as hydrogen.54 There exist rigorous treatments for mapping the equilibrium thermodynamics of a quantum system onto a classical dynamical system. For instance, path-integral molecular dynamics (PIMD)55,56 maps each quantum particle onto a ring of identical classical particles connected by springs whose stiffness depends on Planck’s constant, which represent the quantum fluctuations (the ‘multi-instance’ classical MD approach). There are also approaches that correct for the quantum heat-capacity effect with single-instance MD.53,57 For quantum dynamical properties outside of thermal equilibrium, or even for evaluating equilibrium time-correlation functions, the treatment based on an MD-like algorithm becomes even more complex.58-60
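To make these two points concrete, the textbook expressions below are quoted for orientation; they are standard results, not formulas taken from the references above. The first gives the mean energy of a single harmonic mode of frequency ω, which reduces to the classical equipartition value kBT only when kBT ≫ ℏω; the second is one common form of the primitive ring-polymer Hamiltonian sampled in PIMD for a single particle of mass m in one dimension, discretized into P beads:

\[
\langle E_\omega \rangle_{\mathrm{quantum}} = \frac{\hbar\omega}{2} + \frac{\hbar\omega}{e^{\hbar\omega/k_B T} - 1} \;\longrightarrow\; k_B T = \tfrac{1}{2}k_B T\,(\text{kinetic}) + \tfrac{1}{2}k_B T\,(\text{potential}) \quad \text{only when } k_B T \gg \hbar\omega,
\]

\[
H_P = \sum_{j=1}^{P} \left[ \frac{p_j^{\,2}}{2m'} + \frac{mP}{2\beta^{2}\hbar^{2}}\,\bigl(x_j - x_{j+1}\bigr)^{2} + \frac{1}{P}\,V(x_j) \right], \qquad x_{P+1} \equiv x_1, \quad \beta = \frac{1}{k_B T}.
\]

Here the bead–bead springs, with stiffness proportional to 1/(βℏ)², carry the zero-point and tunneling fluctuations; the fictitious bead mass m′ affects only the sampling dynamics, not the equilibrium averages, and the exact quantum partition function is recovered in the limit P → ∞.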

It is well recognized in computational materials research that MD has a time-scale limitation. Unlike viscous relaxation approaches that are first order in time, MD is governed by Newtonian dynamics, which is second order in time. As such, inertia and vibration are essential features of MD simulation. The necessity to resolve atomic-level vibrations requires the MD time step to be of the order of picosecond/100, where a picosecond is the characteristic period of the highest-frequency oscillation mode in typical materials and about 100 steps are needed to resolve a full oscillation period with sufficient accuracy. This means that the typical timescale of MD simulation is at the nanosecond level, although with massively parallel computers and linear-scaling parallel programs such as LAMMPS,49 one may nowadays push simulations to the microsecond or even millisecond level. A nanosecond-level MD simulation is often enough for the convergence of physical properties such as elastic constants, thermal expansion, free energy, thermal conductivity, and so on. However, chemical reaction processes, diffusion, and mechanical behavior often depend on events that are ‘rare’ (when viewed at the level of atomic vibrations) but important, for instance, the emission of a dislocation from a grain boundary or surface.61 There is no need to track atomic vibrations, important as they are, for time periods much longer than a nanosecond for any particular atomic configuration. Important conceptual and algorithmic advances were made in the so-called Accelerated Molecular Dynamics approaches,62-66 which filter out repetitive vibrations and are expected to become more widely used in the coming years.
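As a quick order-of-magnitude check using the figures quoted above (a vibrational period of roughly a picosecond, resolved by about 100 time steps):

\[
\Delta t \approx \frac{1\ \mathrm{ps}}{100} = 10\ \mathrm{fs}, \qquad
N_{\mathrm{steps}}(1\ \mathrm{ns}) \approx \frac{10^{-9}\ \mathrm{s}}{10^{-14}\ \mathrm{s}} = 10^{5}, \qquad
N_{\mathrm{steps}}(1\ \mu\mathrm{s}) \approx 10^{8}, \qquad
N_{\mathrm{steps}}(1\ \mathrm{ms}) \approx 10^{11}.
\]

In practice, time steps closer to 1 fs are common when stiff bonds or light atoms are present, which multiplies these step counts by roughly ten; since each step costs one force evaluation per atom, pushing beyond microseconds by brute force is expensive, which is part of what makes the accelerated-dynamics methods mentioned above attractive.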

… Above all, it seems to me that the human mind sees only what it expects.

These are the words of Emilio Segrè (Nobel Prize in Physics, 1959, for the discovery of the antiproton) in a historical account of the discovery of nuclear fission by O. Hahn and F. Strassmann,67 which led to the Nobel Prize in Chemistry, 1944, for Hahn. Prior to the discovery, many well-known scientists had worked on the problem of bombarding uranium with neutrons, including Fermi in Rome, Curie in Paris, and Hahn and Meitner in Berlin. All were looking for the production of transuranic elements (elements heavier than uranium), and none were open-minded enough to recognize the fission reaction. Since atomistic simulation can be regarded as an ‘atomic camera,’ it would be wise for anyone who wishes to study nature through modeling and simulation to keep an open mind when interpreting simulation results.