History & Origin

Why Computational Physics?

Computational physics can be traced back to the tremendous calculations of solar system dynamics in the 19th century, but Computational Physics, as the discipline we know today, began with the development of programmable computers in the mid-20th century, although significant early work was carried out on hand calculators.

The availability of programmable computers that can carry out endless repetitive calculations has greatly extended the range of problems that can be solved. For example, one used to be restricted to solving a very limited range of differential equations, and had to make approximations so that the model would fit the method. There is no longer such a restriction. The concern now is not whether the differential equation can be solved, but whether the numerical method being used is stable.
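
To make the stability concern concrete, here is a minimal Python sketch of an assumed textbook example (not taken from the text): the explicit Euler method applied to the decay equation dy/dt = -lam*y is stable only if the step size h satisfies h < 2/lam; a step that is too large makes the numerical solution grow even though the true solution decays.

    # Assumed illustration: explicit Euler applied to dy/dt = -lam*y.
    # The scheme multiplies y by (1 - h*lam) each step, so it is stable
    # only when h < 2/lam.

    def euler_decay(lam, h, steps, y0=1.0):
        """Integrate dy/dt = -lam*y with the explicit Euler method."""
        y = y0
        for _ in range(steps):
            y = y + h * (-lam * y)
        return y

    lam = 100.0
    print(euler_decay(lam, h=0.001, steps=200))  # h < 2/lam: decays towards zero
    print(euler_decay(lam, h=0.05,  steps=200))  # h > 2/lam: grows without bound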

However, one must continually be on the lookout to ensure that computational results are valid. The output from a computer is at best as good as the assumptions fed in. Whenever possible, a computer result should be compared with the results of real experiments. But that is not always practical, and sometimes not even possible: there are many problems which are too complicated for analytical solution, and for which, for a variety of reasons, a real experiment is impossible or undesirable.

Examples of impossible experiments are simulations of stellar and galactic evolution. Examples of undesirable experiments are those which are very expensive or very dangerous: we would all have been better off had the operators at Chernobyl performed a computer simulation first. Such computer simulations of large systems, where the underlying physics is well understood, can be very effective, particularly in the training of operators.

An advantage of computational simulation is that conditions can be controlled much more closely than in a real experiment. An example is the work of Matsuda and colleagues, who studied flux flow in type II superconductors. Flux flow is strongly influenced by impurities in the superconductor. In the real experiment the impurities lay on a periodic lattice, and a highly stable configuration was found when the lattice of flux lines was commensurate with the impurity lattice. In a computer simulation the impurities can be controlled much more closely than in a real experiment; for example, one can ensure that the impurity lattice is truly periodic. It is important to observe, however, that here the computer is being used to extend and interpret real experimental results.

A very important reason for using computational physics in problem solving is the speed of the computers. In many applications speed of calculation is essential: as has often been said, one does not want a forecast of yesterday’s weather.

The Unexpected

Much of the advance of science is orderly: it proceeds either by small, incremental experimental steps or by new experiments suggested by theory, for example the observations of the bending of light by the sun to test Einstein's general theory of relativity. Sometimes, however, the advances are wholly unexpected, such as the discovery of radioactivity. In this respect computational physics to some extent resembles experimental physics, in that qualitatively unexpected phenomena are observed.

A major significance of the power to calculate (in a computer-intensive way, not by hand!) lies in the possibility of studying non-linear equations, whose qualitative behaviour is often totally unexpected because we have no exact analytical indication of how they should behave. The first impact of the computing capacity introduced by computers was really here: in the possibility of studying the behaviour of systems whose evolution follows laws that we know how to write down, but not how to solve.

This situation is not unusual in physics. As massive calculations revealed, it is not unusual even in mechanics, thus creating an important divide between the situation before and after the advent of computers.

Well-known examples of this arise in the work of Lorenz and of Feigenbaum, each of whose discoveries was completely unexpected, and each of whose computational 'experiments' has led to significant advances in theoretical understanding.

Ed Lorenz, in the 1960s, developed a 'toy model' of atmospheric dynamics and discovered the existence of 'chaos' and of strange attractors: who has not heard of 'The Butterfly Effect'? Feigenbaum, in the 1970s, using a programmable pocket calculator (not all the results we discuss involve the use of state-of-the-art computers!), discovered a universality in non-linear mappings which has led to advances in our understanding of the behaviour of non-linear dynamical systems. (It may be noted that Feigenbaum had considerable difficulty in getting his discovery published.)
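
As an illustration of the kind of calculation involved (the parameter values below are assumptions chosen for readability, not Feigenbaum's actual ones), iterating the logistic map x -> r*x*(1 - x), something a programmable pocket calculator can do, already reveals the period-doubling cascade whose universal scaling Feigenbaum discovered:

    # Iterate the logistic map and print the long-time behaviour for a few
    # values of r: a fixed point, then a 2-cycle, a 4-cycle, and finally chaos.

    def logistic_attractor(r, x=0.5, transient=1000, keep=8):
        for _ in range(transient):        # discard the transient
            x = r * x * (1.0 - x)
        orbit = []
        for _ in range(keep):             # record a few points on the attractor
            x = r * x * (1.0 - x)
            orbit.append(round(x, 4))
        return orbit

    for r in (2.8, 3.2, 3.5, 3.9):
        print(r, logistic_attractor(r))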

A New Area in Physics

Computational Physics is that methodological branch of physics where a computer is the basic tool for exploring the laws of nature.

Modern computers have developed in tight symbiosis with computations in physics, each driving the other forward. Computational Physics has been rapidly maturing as the third leg of the scientific enterprise, alongside Experiment and Theory. The increase in computing power has been so great that it has effectively produced a qualitative change in the way computers can be used, and hence the emergence of Computational Physics as a distinct methodology.

Traditionally, computational physicists have been self-trained, there having been no formal curricula aimed at educating computationalists. Indeed, there has really been little formal discussion of what computational physics is. In recent years, however, many universities have begun undergraduate and graduate-level programs in computational science and engineering, in which computational physics plays a vital role. This development will help ensure a growing cadre of trained computational physicists.

Physicists (and other scientists) use computers in a variety of different ways, for example to control experiments and to gather data. In general, when we talk about Computational Physics we exclude such uses, but see 'Handling Experimental Data' below.

Computational methods and tools continue to evolve at explosive rates. In the 1970s Moore formulated his 'Law', predicting that the speed and memory of microcomputers double (and the cost of a given amount of computing halves) roughly every 18 months. This doubling continues at the time of writing (2003); sustained over a decade it amounts to a factor of about 2^(120/18), roughly 100. Today a personal computer [PC] has a computing speed as fast as that provided by a 10-year-old supercomputer, and may well have a much larger memory. Indeed many current supercomputers are built up from a large number of PCs.

The democratisation of supercomputing is opening up computational physics to scientists throughout the world, including those in ‘less developed’ countries. We may thus expect far greater use of simulation as a basic tool of physics enquiry in the coming decades.

New Possibilities for Simulations

In the remainder of this chapter we shall describe some topics in Computational Physics. Not all current topics are mentioned. Moreover, Computational Physics is expanding rapidly, and new areas outside the traditional boundaries of physics are continually being developed.

Statistical Physics

In the past few years, researchers have increasingly been able to investigate fundamental questions in physics computationally, with unprecedented fidelity and level of detail. For example, fluid dynamics simulations aimed at understanding turbulence have recently been carried out with over 200 billion computational degrees of freedom in three dimensions, allowing fluctuations to be examined over a three-order-of-magnitude span in length scales. Using Monte Carlo methods, researchers can now carry out statistical mechanical simulations with stunning fidelity and detail. For example, with current algorithms on the largest machines they could now carry out simulations of the Ising model with 10^12 spins in three dimensions. Doing so would allow them to faithfully simulate the behaviour of the Ising model over six orders of magnitude in reduced temperature near the critical point.
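
As a small-scale sketch of the Metropolis Monte Carlo technique referred to above (the simulations described in the text use vastly larger three-dimensional lattices; the 2D lattice size, temperature and sweep count here are chosen only for illustration):

    # Metropolis Monte Carlo for a small 2D Ising model (J = 1, k_B = 1).
    import math, random

    L = 32                       # lattice is L x L with periodic boundaries
    T = 2.27                     # temperature near the 2D critical point
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def sweep():
        """One Metropolis sweep: attempt L*L single-spin flips."""
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of flipping this spin
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]

    for step in range(200):
        sweep()
    m = abs(sum(sum(row) for row in spins)) / (L * L)
    print("magnetisation per spin ~", m)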

However, CP is far more important in statistical physics than just providing better accuracy. CP is crucial for essentially all areas: the study of disordered systems, polymers, membranes, glasses, granular materials, and so on. The whole field of non-equilibrium statistical physics uses CP methods, which are indispensable for non-linear dynamics, turbulence and the like. Whole topics, such as fractal growth phenomena and cellular automata modelling, have been initiated by the use of computers. CP also contributes substantially to the spectacular broadening of the fields of application of statistical physics into areas far beyond the scope of traditional physics, including (i) the description of co-operative phenomena in biological systems and (ii) the analysis and modelling of financial phenomena.

Diffusion Limited Aggregation [DLA]

The traditional computational physics approach is to perform numerical experiments. In such simulations, real experiments are approximated by including a large amount of small-scale detail; computer models of granular media, with a detailed description at the level of the individual grain, are examples of this methodology. A second and more subtle application of computers reflects the novel paradigm of 'algorithmic modelling', in which the level of small-scale detail is minimised: the large-scale structure of the physical system emerges when a simple algorithm is repeated many times. One example of a structure that is particularly well described by algorithmic modelling is the invasion front of one fluid penetrating into another in a porous medium (diffusion limited aggregation), where models that seem vastly oversimplified actually give a quantitatively correct description of the physical process. The DLA model provides deep insight into a rather common natural phenomenon, one which is completely impossible to approach by non-computational techniques.
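
To give a flavour of algorithmic modelling, here is a minimal sketch of the classic DLA growth rule: random walkers are released outside a seed particle and stick where they first touch the growing cluster. The launch and kill radii and the particle number are illustrative assumptions, not parameters from any study cited here.

    # Diffusion limited aggregation on a square lattice.
    import math, random

    cluster = {(0, 0)}                        # seed particle at the origin
    r_max = 0.0                               # radius of the cluster so far

    def grow_one_particle():
        global r_max
        r_launch = r_max + 5.0                # release walkers just outside the cluster
        r_kill = r_launch + 20.0              # abandon walkers that stray too far
        theta = random.uniform(0.0, 2.0 * math.pi)
        x, y = int(r_launch * math.cos(theta)), int(r_launch * math.sin(theta))
        while True:
            if x * x + y * y > r_kill * r_kill:           # strayed too far: relaunch
                theta = random.uniform(0.0, 2.0 * math.pi)
                x, y = int(r_launch * math.cos(theta)), int(r_launch * math.sin(theta))
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (nx, ny) in cluster:                   # touching the cluster: stick
                    cluster.add((x, y))
                    r_max = max(r_max, math.hypot(x, y))
                    return
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy                         # otherwise keep diffusing

    for _ in range(500):
        grow_one_particle()
    print("grew a cluster of", len(cluster), "particles")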

Many Body Systems and Quantum Monte Carlo Methods

Of the computational physics problems of the 21st century, the study of the quantum dynamics of many-body systems is one of the deepest, and is often described as insoluble. Recent progress in this field has come through the use of operator representation techniques, which allow these problems to be transformed into stochastic equations on a classical-like phase space that can then be integrated numerically in a straightforward way. Theoretical predictions are directly testable through experiments in quantum and atom optics, which allow control of the quantum initial conditions and also of the Hamiltonian that determines the time evolution. An example is the prediction and experimental verification of quantum 'squeezing' in soliton propagation. Further progress in this area is likely as numerical algorithms and representations improve, along with experimental improvements in quantum optics, atom lasers and mesoscopic solid-state devices.
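
The text does not spell out the phase-space equations obtained from the operator representations, so the sketch below illustrates only the final numerical step it mentions, integrating a stochastic differential equation, using the simple Euler-Maruyama scheme on an assumed toy equation (a damped, noise-driven complex amplitude); all parameters are placeholders.

    # Euler-Maruyama integration of d(alpha) = (-i*omega - gamma)*alpha dt + sqrt(D) dW.
    import math, random

    def euler_maruyama(alpha0=1.0 + 0.0j, omega=1.0, gamma=0.1, D=0.01,
                       dt=0.001, steps=5000):
        alpha = alpha0
        for _ in range(steps):
            # complex Gaussian increment with variance dt in each quadrature
            dW = complex(random.gauss(0.0, math.sqrt(dt)),
                         random.gauss(0.0, math.sqrt(dt)))
            alpha = alpha + (-1j * omega - gamma) * alpha * dt + math.sqrt(D) * dW
        return alpha

    # average |alpha|^2 over a few stochastic trajectories
    samples = [abs(euler_maruyama()) ** 2 for _ in range(100)]
    print("mean |alpha|^2 after t = 5:", sum(samples) / len(samples))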

From Electronic Structure Towards Quantum-based Materials Science

For a long time, electronic structure studies of metals, semiconductors and molecules based on density-functional theory have been among the fundamental tools of materials science. Important progress has been made recently in extending the applicability of these techniques to larger, more complex systems and in improving their accuracy (and hence their predictive ability). Only a decade or so ago, simulations of systems containing about 100 valence electrons could be performed only on large supercomputers. Today, thanks to concurrent improvements in the basic algorithms and in computer power, ab-initio molecular dynamics studies can be performed on clusters of personal computers for systems with thousands of atoms and tens of thousands of electrons. This is opening the way to accurate predictions of material and molecular structures and properties, and to the simulation of complex processes. Chemical reactions can be simulated "on the fly", and ab-initio potential-energy surfaces can be used successfully for the kinetic modelling of catalytic processes, leading to ab-initio predictions of reaction rates. The ideal strength of materials can be calculated from first principles, with no input other than the composition and the atomic numbers of the components. Today, electronic structure theory is on the way towards quantum-based materials design. It has also acquired a strongly interdisciplinary character, spreading into many areas from geo-science to biology. Ab-initio molecular dynamics has been used to calculate the viscosity of liquid iron under the conditions of the Earth's core and to determine the chemical reactivity of enzymes, to name only two examples.

The corner-stone of density-functional theory is a simplified, yet for many purposes sufficiently accurate, description of the many-electron problem; the most accurate description is provided by Quantum Monte Carlo (QMC) techniques. With today's faster computers and algorithmic improvements for handling the Fermion sign problem, QMC simulations are becoming feasible at a large enough scale to make QMC a viable alternative to post-Hartree-Fock or density-functional calculations when accuracy is paramount. QMC simulations will also provide benchmarks for the development of improved density functionals.

Quantum Chromo-Dynamics [QCD]

In elementary particle physics, the numerical simulation of theories on a lattice by means of Monte Carlo algorithms was introduced more than 20 years ago by K. Wilson, and has now become one of the most important tools for obtaining predictions from theoretical models. The major field of application is Quantum Chromo-Dynamics (QCD), the theory of the strong interactions. QCD simulations have begun to provide quantitative estimates of important properties of hadronic particles, such as masses and form factors. Moreover, numerical simulations are necessary for determining the fundamental parameters of QCD. Another area of application is the behaviour of matter under extreme conditions, such as in the early history of the universe and in astrophysical objects like neutron stars. For the theoretical study of nuclear matter in such situations, Monte Carlo simulation is an indispensable tool.
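
Lattice QCD itself is far beyond a short example, but the underlying technique the text refers to, Metropolis Monte Carlo sampling of a Euclidean lattice action, can be sketched for a free scalar field on a one-dimensional periodic lattice; the action, lattice size and update parameters below are assumptions chosen for brevity.

    # Metropolis sampling of S = sum_i [ (phi[i+1]-phi[i])^2/2 + m2*phi[i]^2/2 ].
    import math, random

    N, m2, delta = 64, 0.5, 1.0          # lattice sites, (mass)^2, proposal width
    phi = [0.0] * N                      # cold start

    def action_change(i, new):
        """Change in the lattice action when site i is set to 'new' (periodic BCs)."""
        left, right, old = phi[(i - 1) % N], phi[(i + 1) % N], phi[i]
        s_old = 0.5 * ((old - left) ** 2 + (right - old) ** 2 + m2 * old ** 2)
        s_new = 0.5 * ((new - left) ** 2 + (right - new) ** 2 + m2 * new ** 2)
        return s_new - s_old

    for sweep in range(2000):
        for i in range(N):
            new = phi[i] + random.uniform(-delta, delta)
            dS = action_change(i, new)
            if dS <= 0 or random.random() < math.exp(-dS):   # Metropolis acceptance
                phi[i] = new

    print("<phi^2> ~", sum(x * x for x in phi) / N)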

Sono-Luminescence and Resistive Magneto-Hydrodynamics

Puzzling phenomena such as sono-luminescence are yielding to understanding through a combination of molecular-dynamics and continuum shock-physics simulations. Four-dimensional, highly detailed simulations combining resistive magneto-hydrodynamics with radiation transport are becoming possible. These tools are aiding our understanding of the life cycle of stars and are being used to design inertial-confinement fusion experiments aimed at achieving net energy gain from fusion in the laboratory.

Biophysics

In biophysics, there is revolutionary progress in simulating the folding of complex proteins. Simulations are helping to unravel the physical processes involved in the informational role of primary DNA structures (the genetic sequence) as well as delving into the role of secondary structures (e.g., detachments and loops) in DNA. Researchers are also modelling with increasing detail the physics of enzymatic catalysis. Key progress is being made in the use of classical density functional theory to model ion channels in cells.

Handling Experimental Data

Computational physics involves more than using simulation to provide insight and interpretation. It also involves the acquisition, management and understanding of vast amounts of experimental data. Two areas stand out.

In high-energy physics, the ability to acquire, store and interpret terabyte data sets is becoming a key part of progress in accelerator experiments.

A new approach is being pursued by the DataGrid Project, whose objective is to build the next-generation computing infrastructure providing intensive computation and analysis of shared large-scale databases, from hundreds of terabytes to petabytes, across widely distributed scientific communities.

In geophysical modelling of global climates, satellite data provide critical information on the overall status of the global climate, as well as key global parameters needed to improve models and to validate theoretical methods. The acquisition and management of these data sets pose grand challenges in real-time data acquisition, in large-scale data management, and in data visualisation.

Prospects for the Future

Looking forward, within the next few years we may expect lattice gauge simulations in QCD to become sufficiently accurate to confirm or eliminate current theoretical models and approximations. In biophysics, we should see great progress in ab-initio and classical molecular-mechanical simulation of many of the dynamic processes involved in the microscopic evolution of cellular building blocks. In materials physics, we should see a revolution in mesoscopic physics enabled by microscopic computer experiments on solids and melts with realistically modelled defects.

In the area of the physics of computing, progress continues in attempts to develop fundamentally new approaches to computation based on quantum computers. The problems to be overcome are extremely formidable. Nevertheless great progress has been made recently and we may expect a rich exploratory development phase to continue to unfold in this area over the next few years.

On a more practical level, the availability of inexpensive, commodity simulation engines with capabilities in the many gigaflops/gigabyte range together with new educational and research initiatives will continue to attract more physicists into the computational arena.

Computational Physics is growing. In sum, the past few years have been the brightest in its history and progress in the next few will eclipse even the accomplishments of the recent past.


Computational Physics is one of a number of computational sciences, e.g. Computational Chemistry and Computational Biology. In Computational Science computers are used as tools for investigating the laws of nature. In Computer Science it is computers themselves that are studied.
