
# Other numerical techniques: simulated annealing and simulated tempering

Sometimes we are interested not in dynamics but in the static configurations of a system, generated by a potential that describes the particle interactions. Minimization techniques aim to find the configurations that minimize this potential, i.e. its local and global minima. Several numerical recipes can handle this problem; the most popular is simulated annealing. Another interesting method, which was applied in our project, is simulated tempering. This method belongs to the wide class of accelerated Monte Carlo methods. Both methods have been applied to find the equilibrium configurations of point defects in vanadium at non-zero temperatures.

Let us begin with simulated annealing. In this method an initial configuration, either random or completely ordered, is chosen and the system is heated to a high temperature $T$. The system is then cooled at a very slow rate, which depends on the specific system under investigation. The Monte Carlo algorithm is used to equilibrate and to sample the system during this cooling process.

The sampling of the system is performed in the following way:
1.) A random atom is selected and displaced randomly, generating a new configuration of the atoms. This displacement is called a trial move.
2.) The potential energy $U_{\rm new}$ of the new configuration is calculated and compared with the potential energy $U_{\rm old}$ of the system before the trial move.
3.) If the trial move lowers the energy of the system ($\Delta U = U_{\rm new} - U_{\rm old} \le 0$), the move is accepted unconditionally.
4.) If the energy is increased ($\Delta U > 0$), the trial move may still be accepted with a certain probability: we generate a random number $\xi$ between 0 and 1 and compare it with $\exp(-\Delta U / k_B T)$. If $\exp(-\Delta U / k_B T) > \xi$, the trial move is accepted; otherwise it is rejected.
5.) This procedure is repeated again and again.
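The steps above can be condensed into a minimal Python sketch. The double-well potential, the geometric cooling schedule, and all parameter values are illustrative assumptions for demonstration, not the actual interatomic potential or schedule used in this work:

```python
import math
import random

def anneal(energy, x0, t_hot=5.0, t_cold=1e-3, cooling=0.95,
           steps_per_t=200, step_size=0.5, seed=0):
    """Minimize `energy` by simulated annealing with the Metropolis rule.

    `energy` is a callable on a list of coordinates; `x0` is the initial
    configuration.  The temperature is lowered geometrically from t_hot
    to t_cold (a toy schedule -- real schedules are system-specific).
    """
    rng = random.Random(seed)
    x = list(x0)
    e = energy(x)
    best_x, best_e = list(x), e
    t = t_hot
    while t > t_cold:
        for _ in range(steps_per_t):
            # Trial move: displace one randomly chosen coordinate.
            i = rng.randrange(len(x))
            old = x[i]
            x[i] += rng.uniform(-step_size, step_size)
            e_new = energy(x)
            de = e_new - e
            # Metropolis criterion: accept downhill moves unconditionally,
            # uphill moves with probability exp(-dU / T)  (k_B = 1 units).
            if de <= 0 or rng.random() < math.exp(-de / t):
                e = e_new
            else:
                x[i] = old  # reject: restore the previous configuration
            if e < best_e:
                best_x, best_e = list(x), e
        t *= cooling  # slow geometric cooling
    return best_x, best_e

# Toy double-well potential: minima near x = -2 and x = +2,
# with the left well slightly deeper (the "global" minimum).
double_well = lambda x: (x[0] ** 2 - 4) ** 2 + 0.5 * x[0]
x_min, e_min = anneal(double_well, [0.0])
```

Starting from the barrier top at $x = 0$, the walker falls into one of the two wells at high temperature and refines toward that well's minimum as the temperature drops.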

This method allows us to escape from local minima while we sample the phase space in search of the global minimum. However, because a Monte Carlo run is of limited length, we cannot be confident in general that the method will not become trapped in some local minimum (and therefore we cannot prove that the global minimum is necessarily found). If the system is trapped in a local minimum for most of the simulation time, only a small part of the phase space is explored, and hence the physical quantities will not be calculated with adequate accuracy (poor statistics).

One effective way to overcome these difficulties is to perform simulations in a so-called generalized (extended) ensemble, in which the probability of crossing energy barriers (escaping from local minima) can be increased artificially, so that the MC simulation is substantially accelerated. The simulated tempering method, which belongs to this class of accelerated MC algorithms [69], has been applied to study point-defect configurations at room temperature. The method was originally invented by Lyubartsev et al. [70] and, independently and at about the same time, by Marinari and Parisi [71]. In simulated tempering we perform a random walk in energy space, as in the standard Monte Carlo method, and also in temperature space. Temperature space, unlike energy space, is chosen in a special way, i.e. it has a discrete structure:

$\{T_i\}, \quad i = 1, \dots, N, \qquad T_1 < T_2 < \cdots < T_N$  (4.16)

We start with an initial temperature $T_i$, belonging to the set of temperatures $\{T_i\}$, and perform a usual MC simulation in energy space:
a particle is picked at random, its position is changed, and the new configuration is accepted or rejected according to the Metropolis scheme. After several thousand MC steps in energy space we try to update the current temperature, i.e. to perform a random walk in temperature space. The current temperature may be increased or decreased to the value of its nearest neighbor in temperature space. The higher the temperature, the larger the probability of escaping from a local minimum. In order to accept or reject a temperature update, we have to introduce an acceptance criterion for these updates. This can be done by extending the canonical Metropolis scheme. We introduce a new probability distribution $P(X, i)$ for the extended ensemble, where $X$ denotes the coordinates of the atoms:
$P(X, i) \propto \exp\!\left( -\dfrac{U(X)}{k_B T_i} + g_i \right)$  (4.17)

where an arbitrary function $g_i = g(T_i)$, which controls the distribution, is introduced. The function $g$ is defined on the set of temperatures $\{T_i\}$, and we can use a Metropolis update when we try to change the current temperature. The algorithm steps are as follows:
1.) Trial move: change the temperature to a neighboring value, $T_i \to T_{i+1}$ or $T_i \to T_{i-1}$:
$i \to i \pm 1$  (4.18)

2.) Calculate $\Delta$:
$\Delta = \left( \dfrac{1}{k_B T_{i \pm 1}} - \dfrac{1}{k_B T_i} \right) U(X) - \left( g_{i \pm 1} - g_i \right)$  (4.19)

3.) If $\Delta \le 0$, the update is accepted.

4.) Otherwise, pick a random number $\xi$ from $[0, 1)$ and accept the update if

$\exp(-\Delta) > \xi$  (4.20)

5.) Return to the usual MC steps in energy space.
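Steps 1–4 above can be sketched as a small Python routine. The two-temperature ladder, the zero weights, and the $k_B = 1$ units in the example are illustrative assumptions:

```python
import math
import random

def tempering_update(i, energy_U, temps, g, rng):
    """One simulated-tempering move in temperature space.

    `i` indexes the current temperature in the ordered ladder `temps`,
    `energy_U` is the current potential energy U(X), and `g[i]` is the
    weight g(T_i) of Eq. (4.17).  Returns the (possibly unchanged) new
    temperature index.  Units with k_B = 1 are assumed.
    """
    # Trial move, Eq. (4.18): step to a neighboring temperature.
    j = i + rng.choice((-1, 1))
    if j < 0 or j >= len(temps):
        return i  # no neighbor in that direction: reject
    # Eq. (4.19): Delta = (1/T_j - 1/T_i) * U(X) - (g_j - g_i)
    delta = (1.0 / temps[j] - 1.0 / temps[i]) * energy_U - (g[j] - g[i])
    # Metropolis acceptance on the extended ensemble, Eq. (4.20).
    if delta <= 0 or rng.random() < math.exp(-delta):
        return j
    return i

# Example: with positive energy and equal weights, a move to the hotter
# temperature lowers 1/T and is therefore always accepted when proposed.
rng = random.Random(1)
results = {tempering_update(0, 10.0, [1.0, 2.0], [0.0, 0.0], rng)
           for _ in range(50)}
```

In a full simulation this routine would be called after every block of several thousand energy-space MC steps, as described above.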

The only question is how to determine the function $g$. Unfortunately, there is no general prescription for finding it. Each system has its own special function, which reflects the system's properties. This function has to be chosen very carefully if the simulation is to be efficient; otherwise the system gets stuck around some, usually uncontrolled, value of $T_i$, there is no good sampling in temperature space, and we are back to a standard Monte Carlo scheme. The simplest way to choose $g$ is the following:

$g_i = -\ln Z(T_i)$  (4.21)

where $Z(T_i)$ is the partition function (state sum). In this case the system visits all points of temperature space and, over a long computation run, spends approximately the same amount of time at each value of $T_i$. But how do we know $Z$ a priori, before the simulation? Any algorithm that requires the values of $Z(T_i)$ as input appears unrealistic, because the partition function is usually an unknown quantity that is itself computed, for example, by means of a Monte Carlo simulation. The simple solution is to perform a preliminary MC run, estimate $Z(T_i)$, and obtain initial values of $g_i$, which are then optimized by iteration over several subsequent runs. Hence it is an algorithm that learns the optimal values of $g_i$ iteratively by means of preliminary runs [69-71].
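One simple learning rule of this kind can be sketched as follows. The histogram-based update and the Laplace smoothing are an illustrative choice for the iteration, not necessarily the exact scheme of [69-71]: temperatures visited more often than average have their weight lowered, so the marginal distribution over the ladder flattens from run to run.

```python
import math

def update_weights(g, visits):
    """Refine the tempering weights g_i from a preliminary run.

    `visits[i]` counts how often temperature T_i was occupied.  Since the
    marginal P(i) is proportional to Z_i * exp(g_i), a flat histogram means
    the g_i already approximate -ln Z_i; otherwise each weight is shifted
    by -ln(n * freq_i), which is zero exactly when freq_i = 1/n.
    """
    total = sum(visits)
    n = len(g)
    new_g = []
    for gi, v in zip(g, visits):
        # Laplace-smoothed frequency so empty bins do not blow up the log.
        freq = (v + 1) / (total + n)
        new_g.append(gi - math.log(freq * n))
    return new_g
```

Iterating this update over several preliminary runs drives the visit histogram toward uniformity, which is the practical signature that the $g_i$ are close to $-\ln Z(T_i)$.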

2003-01-15