Computer Simulation

Before describing some effects that are not yet completely understood, we should mention the basis for believing that these problems are not insoluble. That’s the important subject of computer simulation. In the 1970s and 1980s, when unanticipated difficulties with instabilities arose, computers were still in their infancy. To the dismay of both fusion scientists and congressmen, the date for the first demonstration reactor kept being pushed decades further into the future. The great progress seen in Fig. 8.1 since the 1980s was in large part aided by the advances in computers, as seen in Fig. 8.2. In a sense, advances in fusion science had to wait for the development of computer science; then the two fields progressed dramatically together. Nowadays, a $300 personal computer has more capability than a room-size computer had 50 years ago, when the first principles of magnetic confinement were being formulated.

Fig. 8.10 Hokusai’s painting of the Big Wave

Computer simulation was spearheaded by the late John Dawson, who worked out the first principles and trained a whole cadre of students who have developed the science to its present advanced level. A computer can be programmed to solve an equation, but equations usually cannot even be written to describe something as complicated as a plasma in a torus. What, for instance, does wave breaking mean? In Hokusai’s famous painting in Fig. 8.10, we see that the breaking wave doubles over on itself. In mathematical terms, the wave amplitude is double-valued. Ignoring the fractals that Hokusai also put into the picture, we see that the breaking wave has two heights at the same position, one at the bottom and one at the top. Equations for the wave amplitude cannot handle this; Dawson’s first paper showed how a computer can.
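The difficulty can be seen in a few lines. In this toy sketch (Python, normalized units, made-up numbers), fluid elements are given a sinusoidal velocity and allowed to coast. Once their trajectories cross, velocity is no longer a single-valued function of position, so no field of the form v = f(x, t) can describe the flow; yet following each element individually, as Dawson did, remains trivial:

```python
import numpy as np

# Fluid elements coast with an initial sinusoidal velocity. After the
# trajectories cross (any time t > 1 in these units), position is no longer
# monotonic in starting position, so velocity has become a multivalued
# function of position and no single-valued field v(x, t) exists.
x0 = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
v0 = np.sin(x0)
t = 1.5
x = x0 + v0 * t                   # each element is still trivially tracked
print(np.all(np.diff(x) > 0))     # False: the "wave" has doubled over
```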

So the idea is to ask the computer to track where each plasma particle goes without writing equations for the plasma as a whole. For each particle, the computer has to memorize the x, y, z coordinates of its position as well as its three velocity components. Summing over the particles gives the electric charge at each place, and that leads to the electric fields that the particles generate. Summing over their velocities gives the currents generated, and these specify the magnetic fields generated by the plasma motions. The problem is this. There are as many as 10¹⁴ ions and electrons per cubic centimeter in a plasma. That’s 200,000,000,000,000 particles in every cubic centimeter. No computer in the foreseeable future can handle all that data! Dawson decided that particles near one another will move together, since they feel about the same electric and magnetic fields at that position. He divided the particles into bunches, so that only, say, 40,000 of these superparticles have to be followed. This is done time step by time step. Depending on the problem, these time steps can be as short as a nanosecond. At each time step, the superparticle positions and velocities are used to solve for the E- and B-fields at each position. These fields then tell how each particle moves and where it will be at the beginning of the next time step. The process is repeated over and over again until the behavior is clear (or the project runs out of money). A major problem is how to treat collisions between superparticles, since, with their large charges, the collisions would be more violent than in reality. How to overcome this is one of the principles worked out by Dawson.
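In modern terminology this is a particle-in-cell scheme. The following is a minimal one-dimensional electrostatic sketch in Python, with normalized units and illustrative numbers (64 grid cells, the 40,000 superparticles mentioned above); it is not any production fusion code, which would add magnetic fields, smoother weighting, and the collision fix just described:

```python
import numpy as np

# Minimal 1D electrostatic particle-in-cell sketch in normalized units
# (plasma frequency = 1). All numbers are illustrative.
ng = 64                          # grid cells
L = 2 * np.pi                    # length of the periodic box
n_sp = 40000                     # superparticles, as in the text
dx = L / ng
dt = 0.1                         # time step, a fraction of a plasma period
qm = -1.0                        # electron charge-to-mass ratio

rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_sp)      # uniform plasma ...
v = 0.01 * np.sin(x)             # ... with a small seeded wave

k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)   # wavenumbers for the field solve

for step in range(200):
    # 1. Deposit superparticle charge on the grid (nearest-grid-point weighting)
    cell = (x / dx).astype(int) % ng
    dens = np.bincount(cell, minlength=ng) / (n_sp / ng)  # electron density
    rho = 1.0 - dens             # charge density, with a fixed +1 ion background

    # 2. Solve Poisson's equation (phi'' = -rho) with FFTs; then E = -phi'
    rho_k = np.fft.rfft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.fft.irfft(-1j * k * phi_k, n=ng)

    # 3. The fields tell the particles how to move: push, then apply periodicity
    v += qm * E[cell] * dt
    x = (x + v * dt) % L

print(0.5 * np.sum(E**2) * dx)   # field energy of the oscillating wave
```

Each pass through the loop is one of Dawson’s time steps: the particles make the fields, and the fields move the particles.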


Fig. 8.11 Electric field pattern in a turbulent plasma: electric potential contours of electron-temperature-gradient turbulence in a torus (from ITER Physics Basis 2007 [26], quoted from [14])

Before computers, scientists’ bugaboo was nonlinearity. This is nonproportionality, like income taxes, which go up faster than your income. Linear equations could be solved, but nonlinear equations could not, except in special cases. A computer does not care whether a system behaves linearly or not; it just chugs along, time step by time step. A typical result is shown in Fig. 8.11. This shows the pattern of the electric fields generated by an instability that starts as a coherent wave but then goes nonlinear and takes on an irregular form. This turbulent state, however, has a structure that could not have been predicted without computation; namely, there are long “fingers” or “streamers” stretching in the radial direction (left to right). These are the dangerous perturbations that are broken up by the zonal flows of Chap. 7.
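A toy example shows how little the computer cares. The pendulum equation d²θ/dt² = −sin θ is nonlinear and has no elementary closed-form solution at large amplitude, but stepping it numerically (a sketch in Python, with an arbitrary step size) is no harder than stepping its linear cousin d²θ/dt² = −θ:

```python
import numpy as np

# Step the nonlinear pendulum d2(theta)/dt2 = -sin(theta) from a large
# amplitude, where the linear approximation fails badly. The computer
# just chugs along, time step by time step.
dt = 0.001
theta, omega = 2.5, 0.0           # start far outside the linear regime
for _ in range(10_000):
    omega -= np.sin(theta) * dt   # nonlinear restoring force
    theta += omega * dt
print(theta, omega)               # state after 10 time units
```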

The simulation techniques developed in fusion research are also useful in other disciplines, like predicting climate change. There is a big difference, however, between 2D and 3D computations. A cylinder is a 2D object, with radial and azimuthal directions and an ignorable axial direction, along which everything stays the same. When you bend a cylinder into a torus, it turns into a 3D object, and a computer has to be much larger to handle that. For many years, theory could explain experimental data after the fact, but it could not predict the plasma behavior. When computers capable of 2D calculations came along, the nonlinear behavior of plasmas could be studied. Computers are now fast enough to do 3D calculations in a tokamak, greatly expanding theorists’ predictive capability. Here is an example of a 3D computation (Fig. 8.12). The lines follow the electric field of an unstable perturbation called an ion-temperature-gradient mode. These lines pretty much follow the magnetic field lines. On the two cross sections, however, you can see how these lines move in time. The intersections trace small eddies, unlike those in the previous illustration. It is this capability to predict how the plasma will move under complex forces in a complicated geometry that gives confidence that the days of conjectural design of magnetic bottles are over.
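Why 3D needs a much larger computer is simple arithmetic: each added direction multiplies the number of grid cells by the resolution in that direction. A back-of-the-envelope sketch, with hypothetical figures (500 points per direction, six field components stored as 8-byte numbers), shows the jump:

```python
# Field storage alone for hypothetical 2D and 3D grids: 500 points per
# direction, six field components (E and B), 8 bytes each.
n = 500
for dims in (2, 3):
    cells = n ** dims
    gb = cells * 6 * 8 / 1e9
    print(f"{dims}D: {cells:.1e} cells, about {gb:.2f} GB per time step")
# 2D: 2.5e+05 cells, about 0.01 GB; 3D: 1.2e+08 cells, about 6.00 GB
```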

The science of computer simulation has matured so that it has its own philosophy and terminology, as explained by Martin Greenwald [15]. In the days of Aristotle, physical models were based on indisputable axioms, using pure logic with no input from human senses. In modern times, models are based on empiricism and must agree with observations. However, both the models and the observations are inexact. Measurements always have errors, and models can keep only the essential elements. This is particularly true for plasmas, where one cannot keep track of every single particle.


Fig. 8.12 A 3D computer simulation of turbulence in a D-shaped tokamak (courtesy of W. W. Lee, Princeton Plasma Physics Laboratory)

The problem is to know which elements are essential and which are not. Computing introduces an important intermediate step between theory (models) and experiment. Computers can only give exact solutions to inexact equations or approximate solutions to more exact (and complicated) equations. Computer models (codes) have to be introduced. For instance, a plasma can be represented as particles moving in a space divided into cells, or as a continuous fluid with no individual particles. Benchmarking is checking that different codes solving the same problem agree with one another. Verification is checking that the computed results agree with the physical model; that is, that the code solves the equations correctly. Validation is checking that the results agree with experiment; that is, that the equations are the right ones to solve. Plasma physics is more complicated than, say, accelerator physics, where only a few particles have to be treated at a time. Because even the models (equations) describing a plasma cannot be exact, the development of fusion could not proceed until the science of computer simulation had been developed.
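Verification, at least, can be illustrated concretely. Here is a hedged miniature (Python, using the standard trick of manufacturing a source term whose exact solution is known) that checks a Poisson field solver, like the one in the particle-in-cell sketch above, against the analytic answer:

```python
import numpy as np

# Manufactured-solution check: rho = sin(x) makes phi = sin(x) the exact
# solution of phi'' = -rho, so a correct FFT solver must reproduce it.
ng, L = 64, 2 * np.pi
x = np.arange(ng) * L / ng
rho = np.sin(x)                              # manufactured source
k = 2 * np.pi * np.fft.rfftfreq(ng, d=L / ng)
rho_k = np.fft.rfft(rho)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = rho_k[1:] / k[1:] ** 2           # -k^2 phi_k = -rho_k
phi = np.fft.irfft(phi_k, n=ng)
print(np.max(np.abs(phi - np.sin(x))))       # ~1e-16: verified, not validated
```

A near-zero error verifies that the code solves its equations; whether those equations describe a real tokamak is still a question of validation.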