
Wang and Landau algorithm

The Wang and Landau algorithm, proposed by Fugao Wang and David P. Landau,[1] is a Monte Carlo method designed to estimate the density of states of a system. The method performs a non-Markovian random walk that builds up the density of states by quickly visiting the entire available energy spectrum. The Wang and Landau algorithm is an important method for obtaining the density of states required to perform a multicanonical simulation.

The Wang–Landau algorithm can be applied to any system that is characterized by a cost (or energy) function. For instance, it has been applied to the calculation of numerical integrals[2] and to protein folding.[3][4] Wang–Landau sampling is related to the metadynamics algorithm.[5]

Overview

The Wang and Landau algorithm is used to obtain an estimate for the density of states of a system characterized by a cost function. It uses a non-Markovian stochastic process which asymptotically converges to a multicanonical ensemble[1] (i.e., to a Metropolis–Hastings algorithm with a sampling distribution inversely proportional to the density of states). The major consequence is that this sampling distribution leads to a simulation in which the energy barriers are invisible. This means that the algorithm visits all the accessible states (favorable and less favorable) much faster than a Metropolis algorithm.[6]

Algorithm

Consider a system defined on a phase space $\Omega$, and a cost function, $E$ (e.g. the energy), bounded on a spectrum $E \in \Gamma = [E_{\min}, E_{\max}]$, which has an associated density of states $\rho(E)$, which is to be estimated. The estimator is $\hat{\rho}(E) \equiv \exp(S(E))$. Because the Wang and Landau algorithm works in discrete spectra,[1] the spectrum $\Gamma$ is divided into $N$ discrete values separated by $\Delta$, such that

$$N = \frac{E_{\max} - E_{\min}}{\Delta}.$$

Given this discrete spectrum, the algorithm is initialized by:

  • setting all entries of the microcanonical entropy to zero, $S(E_i) = 0$ for $i = 1, 2, \ldots, N$,
  • initializing $f = 1$, and
  • initializing the system randomly, by placing it in a random configuration $\boldsymbol{r} \in \Omega$ (a minimal setup sketch in Python follows this list).
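A minimal sketch of this initialization in Python (the energy window, the number of bins, and the bin_index helper are illustrative assumptions, not part of the algorithm's specification):

import numpy as np

E_min, E_max = 0.0, 10.0        # assumed energy window [E_min, E_max]
N = 100                         # number of discrete energy values
Delta = (E_max - E_min) / N     # spacing, so that N = (E_max - E_min) / Delta

S = np.zeros(N)                 # microcanonical entropy estimates, S(E_i) = 0
H = np.zeros(N, dtype=int)      # histogram of visited energies
f = 1.0                         # initial modification increment

def bin_index(E):
    # Map an energy to its discrete bin i = 0, ..., N-1 (illustrative helper).
    return min(N - 1, max(0, int((E - E_min) / Delta)))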

The algorithm then performs a multicanonical ensemble simulation:[1] a Metropolis–Hastings random walk in the phase space of the system with a probability distribution given by $P(\boldsymbol{r}) = 1/\hat{\rho}(E(\boldsymbol{r})) = \exp(-S(E(\boldsymbol{r})))$ and a probability of proposing a new state given by a probability distribution $g(\boldsymbol{r} \rightarrow \boldsymbol{r}')$. A histogram $H(E)$ of visited energies is stored. As in the Metropolis–Hastings algorithm, a proposal-acceptance step is performed, which consists of (see Metropolis–Hastings algorithm overview):

  1. proposing a state $\boldsymbol{r}' \in \Omega$ according to the arbitrary proposal distribution $g(\boldsymbol{r} \rightarrow \boldsymbol{r}')$;
  2. accepting or rejecting the proposed state according to

$$A(\boldsymbol{r} \rightarrow \boldsymbol{r}') = \min\left(1,\; e^{S - S'}\,\frac{g(\boldsymbol{r}' \rightarrow \boldsymbol{r})}{g(\boldsymbol{r} \rightarrow \boldsymbol{r}')}\right),$$

where $S = S(E(\boldsymbol{r}))$ and $S' = S(E(\boldsymbol{r}'))$.

After each proposal-acceptance step, the system transits to some energy value $E_i$, the histogram entry $H(E_i)$ is incremented by one, and the following update is performed:

$$S(E_i) \leftarrow S(E_i) + f.$$

This is the crucial step of the algorithm, and it is what makes the Wang and Landau algorithm non-Markovian: the stochastic process now depends on the history of the process. Hence the next time there is a proposal to a state with that particular energy $E_i$, that proposal is more likely to be refused; in this sense, the algorithm forces the system to visit all of the spectrum equally.[1] The consequence is that the histogram $H(E)$ becomes flatter and flatter. However, this flatness depends on how well the calculated entropy approximates the exact entropy, which in turn depends on the value of f.[7] To approximate the exact entropy (and thus the histogram's flatness) better and better, f is decreased after M proposal-acceptance steps:

$$f \leftarrow \frac{f}{2}.$$

It was later shown that updating f by constantly dividing by two can lead to saturation errors.[7] A small modification to the Wang and Landau method that avoids this problem is to use an f factor proportional to $1/t$, where $t$ is proportional to the number of steps of the simulation.[7]
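A minimal sketch of this refinement schedule in Python, contrasting the original halving rule with the 1/t rule;[7] the function name, the convention that f is the additive entropy increment (as in the update $S(E_i) \leftarrow S(E_i) + f$ above), and the time-in-sweeps convention are illustrative assumptions:

def refine_f(f, t, n_bins, histogram_is_flat, one_over_t_regime):
    # Returns (new_f, new_regime_flag).
    # f: current additive modification factor
    # t: number of elementary Monte Carlo steps performed so far
    # n_bins: number of energy bins N (so t / n_bins counts sweeps)
    f_1t = n_bins / t                      # the "1/t" value, with time measured in sweeps
    if one_over_t_regime:
        return f_1t, True                  # late stage: f decays as 1/t at every step
    if histogram_is_flat:
        new_f = f / 2.0                    # early stage: original Wang-Landau halving
        if new_f <= f_1t:
            return f_1t, True              # switch to the 1/t schedule from now on
        return new_f, False
    return f, False

Here the histogram-flatness halving is used only until f would fall below the 1/t value, after which f simply follows the 1/t decay and the flatness test is no longer needed.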

Test system

We want to obtain the DOS for the harmonic oscillator potential

$$E(x) = x^2.$$

The analytical DOS is given by

$$\rho(E) = \int \delta(E(x) - E)\,dx = \int \delta(x^2 - E)\,dx;$$

by performing the last integral we obtain

$$\rho(E) \propto \begin{cases} E^{-1/2} & \text{for } x \in \mathbb{R}^1, \\ \text{const} & \text{for } x \in \mathbb{R}^2, \\ E^{1/2} & \text{for } x \in \mathbb{R}^3. \end{cases}$$

In general, the DOS for a multidimensional harmonic oscillator is given by some power of E; the exponent is a function of the dimension of the system (for an isotropic $d$-dimensional oscillator, $\rho(E) \propto E^{d/2 - 1}$, consistent with the cases above).

Hence, we can use a simple harmonic oscillator potential to test the accuracy of the Wang–Landau algorithm, because the analytic form of the density of states is already known. We therefore compare the estimated density of states $\hat{\rho}(E)$ obtained by the Wang–Landau algorithm with $\rho(E)$.

Sample code

The following is sample code for the Wang–Landau algorithm in Python, where we assume that a symmetric proposal distribution g is used:

$$g(\boldsymbol{x} \rightarrow \boldsymbol{x}') = g(\boldsymbol{x}' \rightarrow \boldsymbol{x}).$$

The code assumes a system object that represents the underlying system being studied and provides methods for proposing, accepting, and rejecting configurations.

currentEnergy = system.randomConfiguration()  # A random initial configuration
while f > epsilon:
    system.proposeConfiguration()             # A new configuration is proposed
    proposedEnergy = system.proposedEnergy()  # The energy of the proposed configuration is computed
    if random() < exp(entropy[currentEnergy] - entropy[proposedEnergy]):
        # If accepted, update the energy and the system:
        currentEnergy = proposedEnergy
        system.acceptProposedConfiguration()
    else:
        # If rejected
        system.rejectProposedConfiguration()
    H[currentEnergy] += 1
    entropy[currentEnergy] += f
    if isFlat(H):  # isFlat tests whether the histogram is flat (e.g. 95% flatness)
        H[:] = 0
        f *= 0.5   # Refine the f parameter
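As a complement, the following self-contained sketch (not from the original article) applies the same algorithm to the one-dimensional harmonic oscillator test system $E(x) = x^2$ described above; the energy window, binning, proposal width, flatness criterion, and stopping threshold are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Discretize the energy spectrum E in [0, E_max) into N bins of width Delta.
E_max, N = 10.0, 50
Delta = E_max / N

def energy(x):
    return x * x                                  # harmonic oscillator cost function E(x) = x^2

def bin_index(E):
    return min(N - 1, int(E / Delta))

S = np.zeros(N)                                   # entropy estimate, ln of the density of states
H = np.zeros(N, dtype=int)                        # histogram of visited energies
f, f_min = 1.0, 1e-4                              # modification factor and (loose) stopping threshold

x = rng.uniform(-np.sqrt(E_max), np.sqrt(E_max))  # random initial configuration
i = bin_index(energy(x))

steps = 0
while f > f_min:
    steps += 1
    x_new = x + rng.uniform(-0.5, 0.5)            # symmetric proposal: small random displacement
    E_new = energy(x_new)
    if E_new < E_max:                             # reject moves outside the energy window
        j = bin_index(E_new)
        # Acceptance rule A = min(1, exp(S_i - S_j)) for a symmetric proposal g.
        if S[i] >= S[j] or rng.random() < np.exp(S[i] - S[j]):
            x, i = x_new, j
    H[i] += 1
    S[i] += f
    # Every 10000 steps, test flatness: all bins within 20% of the mean count.
    if steps % 10000 == 0 and H.min() > 0.8 * H.mean():
        H[:] = 0
        f *= 0.5                                  # refine the modification factor

# Compare with the analytic density of states rho(E) ~ E^(-1/2)
# (S is defined only up to an additive constant, so compare shifted values).
E_centers = (np.arange(N) + 0.5) * Delta
S_exact = -0.5 * np.log(E_centers)
print(np.max(np.abs((S - S.mean()) - (S_exact - S_exact.mean()))))

Smaller values of f_min and stricter flatness criteria improve the accuracy of the estimate at the cost of longer runs.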

Wang and Landau molecular dynamics: Statistical Temperature Molecular Dynamics (STMD)

Molecular dynamics (MD) is usually preferable to Monte Carlo (MC), so it is desirable to have an MD algorithm incorporating the basic WL idea for flat energy sampling. That algorithm is Statistical Temperature Molecular Dynamics (STMD), developed by Jaegil Kim et al. at Boston University.[8]

An essential first step was made with the Statistical Temperature Monte Carlo (STMC) algorithm. WLMC requires an extensive increase in the number of energy bins with system size, caused by working directly with the density of states. STMC is centered on an intensive quantity, the statistical temperature, $T(E) = (dS(E)/dE)^{-1}$, where E is the potential energy. When combined with the relation $\Omega(E) = e^{S(E)}$, where we set $k_B = 1$, the WL rule for updating the density of states gives the rule for updating the discretized statistical temperature,

$$\tilde{T}'_{j\pm1} = \alpha_{j\pm1}\,\tilde{T}_{j\pm1},$$

where $\alpha_{j\pm1} = 1/(1 \mp \delta f\,\tilde{T}_{j\pm1})$ with $\delta f = \ln f/(2\Delta E)$, $\Delta E$ is the energy bin size, and $\tilde{T}$ denotes the running estimate. We define f as in Ref. [1]: a factor > 1 that multiplies the estimate of the DOS for the i-th energy bin when the system visits an energy in that bin.

The details are given in Ref. [8]. With an initial guess for $T(E)$ and the range restricted to lie between $T_L$ and $T_U$, the simulation proceeds as in WLMC, with significant numerical differences. An interpolation of $\tilde{T}(E)$ gives a continuum expression of the estimated $S(E)$ upon integration of its inverse, allowing the use of larger energy bins than in WL. Different values of $S(E)$ are available within the same energy bin when evaluating the acceptance probability. When histogram fluctuations are less than 20% of the mean, $f$ is reduced according to $f \rightarrow \sqrt{f}$.
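As a rough illustration of this temperature-update rule only (the array layout, the clamping to $[T_L, T_U]$, and the function name are assumptions; Ref. [8] should be consulted for the actual implementation, including the interpolation of $\tilde{T}(E)$):

def update_statistical_temperature(T_tilde, j, delta_f, T_L, T_U):
    # T_tilde: array holding the running estimate of the statistical temperature per energy bin
    # j: index of the energy bin just visited
    # delta_f = ln(f) / (2 * DeltaE), with DeltaE the energy bin size
    # T_L, T_U: bounds restricting the allowed temperature range
    for sign in (+1, -1):
        k = j + sign
        if 0 <= k < len(T_tilde):
            # alpha_{j+-1} = 1 / (1 -+ delta_f * T_tilde_{j+-1})
            alpha = 1.0 / (1.0 - sign * delta_f * T_tilde[k])
            T_tilde[k] = min(T_U, max(T_L, alpha * T_tilde[k]))
    return T_tilde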

STMC was compared with WL for the Ising model and the Lennard-Jones liquid. Upon increasing the energy bin size, STMC gives the same results over a considerable range, while the performance of WL deteriorates rapidly. STMD can use smaller initial values of $f_d = f - 1$ for more rapid convergence. In sum, STMC needs fewer steps to obtain the same quality of results.

Now consider the main result, STMD. It is based on the observation that in a standard MD simulation at temperature $T_0$, with forces derived from the potential energy $E(x)$, where $x$ denotes all the positions, the sampling weight for a configuration is $e^{-E(x)/T_0}$. Furthermore, if the forces are derived from a function $W(E)$, the sampling weight is $e^{-W(E(x))/T_0}$.

For flat energy sampling, let the effective potential be $W(E) = T_0\,S(E)$ (entropic molecular dynamics). Then the weight is $e^{-S(E)}$. Since the density of states is $e^{S(E)}$, their product gives flat energy sampling.

The forces are calculated as

$$\mathbf{F} = -\frac{d}{dx}\left[T_0\,S(E)\right] = -T_0\,\frac{dS}{dE}\,\frac{dE(x)}{dx} = \frac{T_0}{T(E)}\,\mathbf{F}_0,$$

where $\mathbf{F}_0$ denotes the usual force derived from the potential energy. Scaling the usual forces by the factor $T_0/T(E)$ produces flat energy sampling.
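A schematic sketch of this force scaling only (the function and argument names are illustrative assumptions; they do not correspond to the LAMMPS fix stmd interface):

def stmd_force(x, usual_force, potential_energy, T0, T_tilde_of_E):
    # usual_force(x): the ordinary force F0 = -dE/dx derived from the potential energy
    # T_tilde_of_E(E): the current running estimate of the statistical temperature
    # Scaling F0 by T0 / T_tilde(E) targets flat sampling of the potential energy.
    E = potential_energy(x)
    return (T0 / T_tilde_of_E(E)) * usual_force(x)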

STMD starts with an ordinary MD algorithm at constant $T_0$ and V. The forces are scaled as indicated, and the statistical temperature is updated at every time step, using the same procedure as in STMC. As the simulation converges to flat energy sampling, the running estimate $\tilde{T}(E)$ converges to the true $T(E)$. Technical details, including steps to speed convergence, are described in Refs. [8] and [9].

In STMD $T_0$ is called the kinetic temperature, as it controls the velocities as usual but does not enter the configurational sampling, which is unusual. Thus STMD can probe low energies with fast particles. Any canonical average can be calculated with reweighting, but the statistical temperature, $T(E)$, is immediately available with no additional analysis. It is extremely valuable for studying phase transitions. In finite nanosystems $T(E)$ has a feature corresponding to every "subphase transition". For a sufficiently strong transition, an equal-area construction on an S-loop in $1/T(E)$ gives the transition temperature.

STMD has been refined by the BU group,[9] and applied to several systems by them and others. It was recognized by D. Stelter that, despite the emphasis on working with intensive quantities, $\ln f$ is extensive. However, $\delta f = \ln f/(2\Delta E)$ is intensive, and the procedure $f \rightarrow \sqrt{f}$ based on histogram flatness is replaced by cutting $\delta f$ in half every fixed number of time steps. This simple change makes STMD entirely intensive and substantially improves performance for large systems.[9] Furthermore, the final value of the intensive $\delta f$ is a constant that determines the magnitude of error in the converged $T(E)$, and is independent of system size. STMD is implemented in LAMMPS as fix stmd.

STMD is particularly useful for phase transitions. Equilibrium information is impossible to obtain with a canonical simulation, as supercooling or superheating is necessary to cause the transition. However, an STMD run obtains flat energy sampling with a natural progression of heating and cooling, without getting trapped in the low-energy or high-energy state. Most recently it has been applied to the fluid/gel transition in lipid-wrapped nanoparticles.[9]

Replica exchange STMD [10] has also been presented by the BU group.

References

  1. ^ a b c d e f Wang, Fugao & Landau, D. P. (Mar 2001). "Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States". Phys. Rev. Lett. 86 (10): 2050–2053. arXiv:cond-mat/0011174. Bibcode:2001PhRvL..86.2050W. doi:10.1103/PhysRevLett.86.2050. PMID 11289852. S2CID 2941153.
  2. ^ R. E. Belardinelli and S. Manzi and V. D. Pereyra (Dec 2008). "Analysis of the convergence of the 1/t and Wang–Landau algorithms in the calculation of multidimensional integrals". Phys. Rev. E. 78 (6): 067701. arXiv:0806.0268. Bibcode:2008PhRvE..78f7701B. doi:10.1103/PhysRevE.78.067701. PMID 19256982. S2CID 8645288.
  3. ^ P. Ojeda and M. Garcia and A. Londono and N.Y. Chen (Feb 2009). "Monte Carlo Simulations of Proteins in Cages: Influence of Confinement on the Stability of Intermediate States". Biophys. J. 96 (3): 1076–1082. arXiv:0711.0916. Bibcode:2009BpJ....96.1076O. doi:10.1529/biophysj.107.125369. PMC 2716574. PMID 18849410.
  4. ^ P. Ojeda & M. Garcia (Jul 2010). "Electric Field-Driven Disruption of a Native beta-Sheet Protein Conformation and Generation of alpha-Helix-Structure". Biophys. J. 99 (2): 595–599. Bibcode:2010BpJ....99..595O. doi:10.1016/j.bpj.2010.04.040. PMC 2905109. PMID 20643079.
  5. ^ Christoph Junghans, Danny Perez, and Thomas Vogel. "Molecular Dynamics in the Multicanonical Ensemble: Equivalence of Wang–Landau Sampling, Statistical Temperature Molecular Dynamics, and Metadynamics." Journal of Chemical Theory and Computation 10.5 (2014): 1843-1847. doi:10.1021/ct500077d
  6. ^ Berg, B.; Neuhaus, T. (1992). "Multicanonical ensemble: A new approach to simulate first-order phase transitions". Physical Review Letters. 68 (1): 9–12. arXiv:hep-lat/9202004. Bibcode:1992PhRvL..68....9B. doi:10.1103/PhysRevLett.68.9. PMID 10045099. S2CID 19478641.
  7. ^ a b c Belardinelli, R. E. & Pereyra, V. D. (2007). "Wang–Landau algorithm: A theoretical analysis of the saturation of the error". The Journal of Chemical Physics. 127 (18): 184105. arXiv:cond-mat/0702414. Bibcode:2007JChPh.127r4105B. doi:10.1063/1.2803061. PMID 18020628. S2CID 25162388.
  8. ^ a b c Kim, Jaegil; Straub, John & Keyes, Tom (Aug 2006). "Statistical-Temperature Monte Carlo and Molecular Dynamics Algorithms". Phys. Rev. Lett. 97 (5): 50601–50604. doi:10.1103/PhysRevLett.97.050601.
  9. ^ a b c d Stelter, David & Keyes, Tom (2019). "Simulation of fluid/gel phase equilibrium in lipid vesicles". Soft Matter. 15: 8102–8112. doi:10.1039/c9sm00854c.
  10. ^ Kim, Jaegil; Straub, John & Keyes, Tom (Apr 2012). "Replica Exchange Statistical-Temperature Molecular Dynamics Algorithm". Journal of Physical Chemistry B. 116: 8646–8653. doi:10.1021/jp300366j.
