RMM-DIIS

The implementation of the Residual Minimization Method with Direct Inversion in the Iterative Subspace (RMM-DIIS) in {{VASP}}{{cite|kresse:cms:1996}}{{cite|kresse:prb:96}} is based on the original work of Pulay.{{cite|pulay:cpl:1980}}

Schemes like the {{TAG|Davidson iteration scheme}} and {{TAG|Conjugate gradient optimization}} try to optimize the expectation value of the Hamiltonian for each wavefunction, using an increasing trial basis set. Instead of minimizing the expectation value, it is also possible to minimize the norm of the residual vector. This leads to an iteration scheme similar to the one described in {{TAG|Efficient single band eigenvalue-minimization}}, but a different eigenvalue problem has to be solved (see Refs. {{cite|wood:jpa:1985}}{{cite|pulay:cpl:1980}}). There is, however, a significant difference between optimizing the eigenvalue and optimizing the norm of the residual vector. The norm of the residual vector,
:<math>
\langle R_n | R_n \rangle = \langle \phi_n | (H - \epsilon)^+ (H - \epsilon) | \phi_n \rangle,
</math>
possesses a quadratic, unrestricted minimum at each eigenfunction <math>\phi_n</math>.
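To make this point concrete, the following minimal NumPy sketch (a toy illustration, not {{VASP}} code; it assumes an orthonormal basis, i.e. <math>S=1</math>, and a small real-symmetric test Hamiltonian) evaluates the residual norm and shows that it vanishes at every eigenvector, not only at the lowest one:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = 0.5 * (A + A.T)                    # small real-symmetric test "Hamiltonian"

def residual_norm(phi):
    eps = phi @ H @ phi / (phi @ phi)  # Rayleigh quotient of phi
    R = H @ phi - eps * phi            # residual vector (H - eps)|phi>
    return R @ R                       # <R|R>

eigvals, eigvecs = np.linalg.eigh(H)
for n in range(6):
    print(n, residual_norm(eigvecs[:, n]))                      # ~0 for every band n
print("random vector:", residual_norm(rng.standard_normal(6)))  # clearly nonzero
</syntaxhighlight>

The RMM-DIIS procedure exploits this property and refines one orbital at a time as follows: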


* The procedure starts with the evaluation of the preconditioned residual vector for a selected orbital <math>\psi^0_m</math>:
::<math>
K \vert R^0_m \rangle = K \vert R(\psi^0_m) \rangle
</math>
:where <math>K</math> is the [[preconditioning]] function, and the residual is computed as:
::<math>\vert R(\psi) \rangle = (H-\epsilon_{\rm app}) \vert \psi \rangle
</math>
:with
::<math>
\epsilon_{\rm app} = \frac{\langle \psi \vert H \vert \psi \rangle}{\langle \psi \vert S \vert \psi \rangle}
</math>
* Then a Jacobi-like trial step is taken in the direction of the vector:
::<math> \vert \psi^1_m \rangle = \vert \psi^0_m \rangle + \lambda K \vert R^0_m \rangle
</math>
: and a new residual vector is determined:
::<math>\vert R^1_m \rangle = \vert R(\psi^1_m) \rangle
</math>
* Next, a linear combination of the initial orbital <math>\psi^0_m</math> and the trial orbital <math>\psi^1_m</math>
::<math>\vert \bar{\psi}^M \rangle = \sum^M_{i=0} \alpha_i \vert \psi^i_m \rangle, \,\, M=1 </math>
:is sought, such that the norm of the residual vector is minimized. Assuming linearity in the residual vector:
::<math>\vert \bar{R}^M \rangle = \vert R(\bar{\psi}^M) \rangle = \sum^M_{i=0} \alpha_i \vert R^i_m \rangle
</math>
: this requires the minimization of:
::<math>\frac{\sum_{ij} \alpha_i^* \alpha_j \langle R^i_m \vert R^j_m \rangle}{\sum_{ij}\alpha_i^* \alpha_j \langle \psi^i_m \vert S \vert \psi^j_m \rangle}
</math>
: with respect to <math>\{\alpha_i \mid i=0,\ldots,M\}</math>.
: This step is usually called ''direct inversion in the iterative subspace'' (DIIS).
* The next trial step (<math>M=2</math>) starts from <math>\bar{\psi}^1</math>, along the direction <math>K \bar{R}^1</math>. In each iteration <math>M</math> is increased by 1, and a new trial orbital:
::<math>\vert \psi^M_m \rangle = \vert \bar{\psi}^{M-1} \rangle + \lambda K \vert \bar{R}^{M-1} \rangle
</math>
: and its corresponding residual vector <math>R(\psi^M_m)</math> are added to the iterative subspace, which is subsequently inverted to yield <math>\bar{\psi}^M</math>.
: The algorithm keeps iterating until the norm of the residual has dropped below a certain threshold, or the maximum number of iterations per orbital has been reached ({{TAG|NRMM}}).
* Move on to the next orbital <math>\psi^0_{m+1}</math>.
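Putting these steps together, here is a minimal NumPy/SciPy sketch of the single-band RMM-DIIS loop. It is a toy model rather than the actual {{VASP}} implementation: it assumes <math>S=1</math>, uses the identity as preconditioner <math>K</math>, keeps the trial step <math>\lambda</math> fixed instead of optimizing it, and the names <code>rmm_diis_band</code>, <code>nrmm</code>, and <code>lam</code> are chosen only for this illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh

def rmm_diis_band(H, psi0, nrmm=8, lam=0.3, tol=1e-10):
    """Refine one orbital of a small Hermitian matrix H with RMM-DIIS
    (toy model: S = 1, preconditioner K = 1, fixed trial step lam)."""
    def residual(psi):
        eps = psi @ H @ psi / (psi @ psi)            # epsilon_app
        return H @ psi - eps * psi                   # |R(psi)> = (H - eps)|psi>

    psis = [psi0 / np.linalg.norm(psi0)]             # iterative subspace {psi^i}
    Rs = [residual(psis[0])]                         # corresponding residuals {R^i}
    psi_bar, R_bar = psis[0], Rs[0]

    for M in range(1, nrmm + 1):
        # stop if the residual norm is below the threshold or the subspace
        # already spans the full (toy) Hilbert space
        if R_bar @ R_bar < tol or len(psis) >= len(psi0):
            break
        # Jacobi-like trial step from the current best orbital along K|R_bar>
        psi_new = psi_bar + lam * R_bar
        psis.append(psi_new)
        Rs.append(residual(psi_new))

        # DIIS step: minimize sum_ij a_i a_j <R^i|R^j> / sum_ij a_i a_j <psi^i|psi^j>;
        # a production code would also monitor near-linear dependence of the subspace
        A = np.array([[Ri @ Rj for Rj in Rs] for Ri in Rs])      # residual overlaps
        B = np.array([[pi @ pj for pj in psis] for pi in psis])  # orbital overlaps
        w, v = eigh(A, B)                            # small generalized eigenvalue problem
        alpha = v[:, 0]                              # coefficients of the minimal residual
        psi_bar = sum(a * p for a, p in zip(alpha, psis))
        psi_bar /= np.linalg.norm(psi_bar)
        R_bar = residual(psi_bar)
    return psi_bar
</syntaxhighlight>

Minimizing the ratio of the two quadratic forms over the coefficients <math>\alpha_i</math> is equivalent to solving the small generalized eigenvalue problem <math>A\alpha = \lambda B\alpha</math>, with <math>A_{ij} = \langle R^i_m \vert R^j_m \rangle</math> and <math>B_{ij} = \langle \psi^i_m \vert S \vert \psi^j_m \rangle</math>; the eigenvector belonging to the lowest eigenvalue yields the combination <math>\bar\psi^M</math> with the smallest residual norm. Applied to the test matrix <code>H</code> of the previous snippet with a slightly perturbed eigenvector as <code>psi0</code>, the sketch typically lowers the residual norm by several orders of magnitude within a few iterations.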


Because the norm of the residual vector has an unrestricted minimum at each eigenfunction, a good starting guess for an orbital makes it possible to use this algorithm without knowledge of the other wavefunctions, and therefore without the explicit orthogonalization of the preconditioned residual vector (cf. the equation for <math>g_n</math> in {{TAG|Single band steepest descent scheme}}). In this case, a Gram-Schmidt orthogonalization is necessary after each sweep over all bands in order to obtain a new orthogonal trial-basis set. Without the explicit orthogonalization to the current set of trial wavefunctions, all other algorithms tend to converge to the lowest band, no matter from which band they started.
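As an illustration of this re-orthogonalization step, here is a minimal Gram-Schmidt sketch (again a toy with <math>S=1</math>; it is not the routine used in {{VASP}}):

<syntaxhighlight lang="python">
import numpy as np

def gram_schmidt(bands):
    """Re-orthonormalize the list of band vectors after a sweep (toy model, S = 1)."""
    ortho = []
    for psi in bands:
        for q in ortho:
            psi = psi - (q @ psi) * q            # project out the already processed bands
        ortho.append(psi / np.linalg.norm(psi))
    return ortho
</syntaxhighlight>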
More information regarding related input variables and potential problems is available in the [[IALGO#RMM-DIIS]] section.
== References ==
<references/>
----
[[Category:Electronic minimization]][[Category:Theory]]
