Here I will implement a simple Ising model with two different step mechanisms: spin flipping and spin swapping (Kawasaki dynamics). The Ising model was originally developed to describe ferromagnetism; however, in my example I will use a general, simplified system.
Main points to take away from this post:
- Monte Carlo
- Metropolis criterion
- Ising model
Since we will be using the Monte Carlo method in this example, let us first understand how it works.
Monte Carlo
Let us imagine we have a piece of paper and we want to fold it into a paper crane. Since we don't have a manual, we do not know the exact steps to take to make a paper crane. This leaves us with an infinite number of possible steps and combinations of steps to try out, and it is near impossible to succeed unless we have some guiding factor. In our case let us assume we have a variable called energy (E) which measures how well the paper is folded: the better the paper is folded, the lower the value of E. Let us also assume that when the paper is folded into a beautiful crane it is the most well-folded and thus has the lowest value of E.
(PS: I know I am making strong analogies with protein folding. I could have simply talked in terms of protein folding but that could have alienated some people and it is not required to understand the mechanism of the process.)
With this value E in our hands, all we have to do now is try out different steps and see whether each one leads to a reduction in E from the previous step. If it does, it means that we are getting closer to making a crane. In simplistic terms, this is how a Monte Carlo process works:
Pseudocode
- Calculate E of the current state of the paper, let us call it \(E_{old}\)
- Randomly choose a move to make. This could be the move to fold the left corner.
- Calculate the new value of E and store it as \(E_{new}\)
- If \(E_{new} < E_{old}\) then accept the move
- Else reject the move
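To make the loop above concrete, here is a minimal Python sketch of this accept-only-if-lower scheme. The function name `greedy_monte_carlo` and the toy quadratic energy are my own illustrations, not part of the crane example:

```python
import random

def greedy_monte_carlo(energy, propose_move, state, n_steps=10000):
    """Accept a move only if it lowers the energy (no Metropolis yet)."""
    e_old = energy(state)
    for _ in range(n_steps):
        candidate = propose_move(state)   # randomly chosen move
        e_new = energy(candidate)
        if e_new < e_old:                 # accept only downhill moves
            state, e_old = candidate, e_new
        # else: reject the move and keep the current state
    return state, e_old

# Toy usage: walk a single number towards the minimum of E(x) = x^2
state, e = greedy_monte_carlo(
    energy=lambda x: x ** 2,
    propose_move=lambda x: x + random.uniform(-1, 1),
    state=5.0,
)
```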
This type of selection should, in theory, lead us to the final crane shape. The obvious major issue here is the large number of steps that may need to be tried before reaching the desired shape. This can be alleviated by reducing the sample space of possible moves, which I will not discuss here, but I might add some references on that later.
In this simplistic descent along E we can reach the final shape, provided E as a function of the folds has only one minimum. Let us look at the figure below:
As can be seen in Figure 1, if E is a simple function of the folds, then there will be a single minimum, and our task would then be to just converge to that minimum. However, often that is not the case and we can have more than one minimum. So suppose we reach position A, which is a local minimum. At this point we are stuck, since on either side of A, no matter what move we make, it will lead to an increase in E. Our selection criterion is preventing us from jumping over the small hill and reaching B.
This is where the Metropolis criterion comes in:
Pseudocode
- If \(E_{new} < E_{old}\) then accept the move
- Else accept the move with a certain probability

The acceptance probability is determined by the Boltzmann factor, as shown below:
\[\begin{equation} \text{move} = \begin{cases} \text{accept}, & \text{if}\ e^{\frac{-\Delta E}{k_{b}T}} > r \\ \text{reject}, & \text{otherwise} \end{cases} \end{equation}\]
where \(\Delta E = E_{new} - E_{old}\), \(k_{b}\) is the Boltzmann constant, \(T\) is the temperature, and \(r\) is a uniform random number between 0 and 1.
This selection criterion seems obvious: if the new fold leads to a reduction in energy then we should accept the fold, else we should reject it. However, in the second part, why would we want to accept folds which lead to an increase in energy with a certain probability? To understand this, let us take a look at Figure 2 below. Imagine we started off at some point in the phase space on the right end of the graph. At every time-step there is a reduction in energy, which leads us to point A. When we have reached point A we get stuck: since it is a local minimum, every step we take leads to an increase in energy. This way we will never reach point B, which is the global minimum and the lowest possible energy, a beautiful paper crane. Another way to imagine this is to think of an inertialess ball rolling down the slope on the right-hand side: it will reach A and stop, and it will never roll over the hill separating A and B. To counter this problem we introduce the Metropolis criterion, where we accept folds leading to an increase in energy with a certain probability, so that our ball may be able to jump over the hill. In our example the acceptance test compares the Boltzmann factor against a uniformly distributed random number.
Now if we look at the criterion:
\[e^{\frac{-\Delta E}{k_{b}T}} > r\]
we see that when \(\Delta E\) is small the Boltzmann factor is close to 1, so the move is very likely to be accepted, whereas when \(\Delta E\) is large the factor, and hence the acceptance probability, is small. Looking at the plots in Figure 3 we can see that with this rule we mostly accept folds which make small deviations from the previous energy. This is important, since if the energy jumped wildly at every step it might never converge.
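Before moving on, here is a minimal sketch of this acceptance test in Python. The function name `metropolis_accept` and the default temperature \(k_{b}T = 1\) are my own choices for illustration:

```python
import math
import random

def metropolis_accept(e_old, e_new, kT=1.0):
    """Metropolis criterion: always accept downhill moves and
    accept uphill moves with probability exp(-dE / kT)."""
    dE = e_new - e_old
    if dE < 0:
        return True
    # r is a uniform random number in [0, 1)
    return math.exp(-dE / kT) > random.random()
```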
With the above information we can go ahead and apply the Metropolis Monte Carlo scheme to a simple lattice to simulate the Ising model.
Ising model
In our example we will use a simple 10x10 lattice to observe phase separation. Let us first create a 10x10 square lattice and give each point in the lattice a property called spin (I am calling it spin just for convenience). The spin can take a value of either +1 or -1. Initially we will randomly assign each lattice point a spin of +1 or -1, and then calculate the energy of the whole system, which we call the Hamiltonian \(H\). The Hamiltonian is calculated as below:
\[H = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j \in \mathrm{nbr}(i)} J_{i,j} \, \sigma_{i} \sigma_{j} \]
where \(\sigma_{i}\) is the spin of the \(i^{th}\) lattice point, the inner sum runs over the neighbours \(j\) of point \(i\), and \(J_{i,j}\) is the interaction parameter between points \(i\) and \(j\). The factor of \(\frac{1}{2}\) accounts for double-counting each pair, and the minus sign means that like spins lower the energy. In our example we will set \(J = 1\) to make it even simpler. Since neighbouring points with like spins decrease the energy, if we have a point with spin +1 and a neighbour with spin -1, the energy will be reduced if we change the neighbour's spin to +1.
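As a sketch, the Hamiltonian of the whole lattice could be computed as below. I am assuming periodic boundary conditions here (the post does not fix a boundary treatment), and `numpy` is used purely for convenience:

```python
import numpy as np

rng = np.random.default_rng()
lattice = rng.choice([-1, 1], size=(10, 10))   # random +/-1 spins

def hamiltonian(lattice, J=1.0):
    """H = -(1/2) * sum_i sum_{j in nbr(i)} J * s_i * s_j over the
    four cardinal neighbours, with periodic boundaries (an assumption)."""
    # np.roll lines each site up with one of its neighbours
    neighbours = (np.roll(lattice, 1, axis=0) + np.roll(lattice, -1, axis=0) +
                  np.roll(lattice, 1, axis=1) + np.roll(lattice, -1, axis=1))
    return -0.5 * J * np.sum(lattice * neighbours)
```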
Now, as in the paper-folding example earlier, we want this lattice system to converge to the lowest-energy state, and this can be achieved in two ways: we can make changes (Monte Carlo moves) which either conserve the total spin (Kawasaki dynamics, i.e. spin swapping) or do not (spin flipping).
As can already be guessed, in the case of conserved spins we should see phase separation, where like spins segregate into clumps, while in the case of non-conserved spins we expect the entire lattice to become uniform in one spin or the other.
Spin-flipping
In this method we will change the lattice as follows:
Pseudocode
- Pick any random lattice point
- Find its Hamiltonian (H) using only the four cardinal neighbours (to make things faster)
- Flip the spin of the lattice point and re-calculate the Hamiltonian (H')
- If H' < H, then accept the flip; else use the Metropolis criterion to accept/reject
If we keep doing the above steps many times we will see all the points converge to the same spin.
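Putting the steps together, a single spin-flip move might look like the sketch below. Again, the temperature \(k_{b}T = 1\), the periodic boundaries, and the helper names are assumptions on my part; the acceptance test is the Metropolis criterion from earlier:

```python
import numpy as np

rng = np.random.default_rng()

def local_energy(lattice, i, j, J=1.0):
    """Energy of site (i, j) with its four cardinal neighbours (periodic)."""
    n, m = lattice.shape
    nbr_sum = (lattice[(i + 1) % n, j] + lattice[(i - 1) % n, j] +
               lattice[i, (j + 1) % m] + lattice[i, (j - 1) % m])
    return -J * lattice[i, j] * nbr_sum

def spin_flip_step(lattice, kT=1.0, J=1.0):
    """One spin-flip Monte Carlo move with Metropolis acceptance."""
    n, m = lattice.shape
    i, j = rng.integers(n), rng.integers(m)   # pick a random lattice point
    e_old = local_energy(lattice, i, j, J)
    lattice[i, j] *= -1                        # flip the spin
    e_new = local_energy(lattice, i, j, J)
    dE = e_new - e_old
    # Metropolis: keep downhill flips, keep uphill flips with prob exp(-dE/kT)
    if dE > 0 and np.exp(-dE / kT) <= rng.random():
        lattice[i, j] *= -1                    # reject: flip the spin back
    return lattice

# Run many moves on a random 10x10 lattice and watch the spins align
lattice = rng.choice([-1, 1], size=(10, 10))
for _ in range(100_000):
    spin_flip_step(lattice)
```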