Known issues
{| class="sortable wikitable"
|-
! style=width:5em | Version fixed !! style="text-align:center;"| Version first noticed !! Date !! Description
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<5.4.4||2024-11-14||<div id="KnownIssue23" style="display:inline;"></div>
'''There is a bug in the implementation of the van der Waals dDsC method ({{TAG|IVDW}}=4).''' The atomic volumes and charges are not calculated correctly, which translates into errors in the total energy and forces.
|-
| style="background:#9AB7FE" | Planned || style="background:#EAAEB2" |6.4.2||2024-11-13||<div id="KnownIssue23" style="display:inline;"></div>
'''SCDM method gives incorrect results for second spin channel when {{TAGO|ISPIN|2}}''':
The SCDM method ({{TAGO|LSCDM|True}}) uses a rank-revealing QR decomposition with column pivoting to select the optimal columns of the density matrix. For the second spin channel, the pivot array is not correctly initialized to zero, which makes the results unreliable.
Thanks to Patrick J. Taylor, who reported the behavior in [https://www.vasp.at/forum/viewtopic.php?p=29739 this forum post].
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<6||2024-08-26||<div id="KnownIssue21" style="display:inline;"></div>
'''NaNs if {{TAG|NSW}}*{{TAG|NBLOCK}}, {{TAG|NSW}}*{{TAG|ML_OUTBLOCK}}, or {{TAG|NBLOCK}}*{{TAG|KBLOCK}} <math>> 2^{31}-1</math>''':
None of these products may exceed the largest integer(4) value, 2147483647 (<math>=2^{31}-1</math>). As a workaround, split the [[molecular dynamics]] run into multiple calculations with smaller values for {{TAG|NSW}} or {{TAG|KBLOCK}}.
Thanks to Renjie Chen, who reported the behavior in this post: [https://www.vasp.at/forum/viewtopic.php?p=27708#p27708 Error in MD annealing simulation with ML potential]
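A quick pre-flight check of these products can be scripted before launching a long MD run (a sketch in Python; the helper name and the example values are illustrative, not part of VASP):

```python
# Hypothetical pre-flight check (not part of VASP): verify that the tag
# products named above stay below the largest 32-bit signed integer.
INT32_MAX = 2**31 - 1  # 2147483647

def overflows_int32(*values):
    """True if the product of the given INCAR tag values exceeds int32."""
    product = 1
    for v in values:
        product *= v
    return product > INT32_MAX

# Illustrative settings for a long MD run:
nsw, nblock = 100_000, 50_000
print(overflows_int32(nsw, nblock))   # 5*10^9 exceeds 2^31-1: risk of NaNs
```

If the check returns True, split the run so that each piece stays below the limit.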
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.3||2024-08-26||<div id="KnownIssue21" style="display:inline;"></div>
'''Memory estimation in ML_MODE=TRAIN is wrong''':
The memory estimate for the major arrays (the design matrix FMAT, the covariance matrix CMAT, etc.) can be significantly too small. This problem mainly appears when {{TAG|ML_MODE}}=TRAIN is selected, especially in a continuation run. Until this is officially fixed, do not rely on the memory estimation!
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<6||2024-08-21||<div id="KnownIssue20" style="display:inline;"></div>
'''Dielectric tensor and Born effective charges from density functional perturbation theory are incorrect for non-collinear spin calculations when symmetries are used''':
The rotation of the spinor part of the derivatives of the wavefunctions with respect to '''k''' was missing, which leads to incorrect results when using {{TAG|LEPSILON}}=.TRUE. in combination with {{TAG|LNONCOLLINEAR}}=.TRUE. and {{TAG|ISYM}}>=0. The fix for previous versions is to use {{TAG|ISYM}}=-1.
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.3||2024-08-05||<div id="KnownIssue17" style="display:inline;"></div>
'''ML_MODE=REFIT is broken if VASP is compiled without the precompiler flag -DscaLAPACK''':
Some arrays are allocated with the wrong size, leading to a crash with unclear error messages. This bug should not affect many users, since we strongly suggest running {{TAG|ML_MODE}}=REFIT with -DscaLAPACK anyway; otherwise the SVD and related routines are not parallelized. This bug will be fixed in the next release.
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<6||2024-05-31||<div id="KnownIssue16" style="display:inline;"></div>
'''Compiler optimizations with the Fujitsu compiler on A64FX platforms''':
The <code>bulk_BN_vdW-DF3-opt1</code> and <code>TiO2_IBRION=2</code> tests might fail when VASP is compiled with the distributed <code>makefile.include.fujitsu_a64fx</code> or <code>makefile.include.fujitsu_a64fx_omp</code> files. It is strongly suggested to modify the line <code>OBJECTS_O2 += fft3dlib.o nonl.o vdw_nl.o</code> in the above-mentioned files.
Thanks to Ivan Rostov for the [https://www.vasp.at/forum/viewtopic.php?p=26568 bug report].
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<6||2024-05-27||<div id="KnownIssue15" style="display:inline;"></div>
'''Calculations with {{TAG|LMODELHF}}=.TRUE. crash if started without a {{TAG|WAVECAR}} file in the directory''':
The crash is caused by a division by the screening parameter, which is zero during the first few iterations that are done with the functional from the {{TAG|POTCAR}} file. If a {{TAG|WAVECAR}} file is present, these first few iterations are skipped.
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.3||2024-05-14||<div id="KnownIssue14" style="display:inline;"></div>
'''Using {{TAG|LCALCEPS}} in combination with [[Hybrid functionals]] may lead to a crash when running on GPUs''':
VASP may crash when using [[Hybrid functionals]] in combination with {{TAG|LCALCEPS}} on GPUs, due to an error when distributing the electronic states to be optimized for a batched FFT. To work around this issue for the moment, set NBLOCK_FOCK to at most the number of occupied states in the {{FILE|INCAR}} file. Thank you [https://vasp.at/forum/viewtopic.php?p=26631#p26631 Sergey Lisenkov and Francesco Ricci] for reporting the bug.
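A hypothetical {{FILE|INCAR}} fragment for the workaround (the value 64 is illustrative and must not exceed the number of occupied states of your system):

```text
LHFCALC     = .TRUE.   ! hybrid functional
LCALCEPS    = .TRUE.
NBLOCK_FOCK = 64       ! workaround: at most the number of occupied states
```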
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |<6||2024-05-13||<div id="KnownIssue13" style="display:inline;"></div>
'''Reading the file {{TAG|DYNMATFULL}} may lead to a crash in MPI-parallel calculations''':
If {{TAG|SCALEE}}≠1, the file {{TAG|DYNMATFULL}} is read if present. This may lead to a crash in MPI-parallel calculations, in particular with the gfortran compiler.
Thanks to Vyacheslav Bryantsev for the [https://www.vasp.at/forum/viewtopic.php?t=19523 bug report].
|-
| style="background:#EAAEB2" | Open|| style="background:#EAAEB2" |6.4.3||2024-04-10||<div id="KnownIssue12" style="display:inline;"></div>
'''Compilation error for GCC with ELPA support''':
Compilation with ELPA support ([[Makefile.include#ELPA_(optional)]]) fails for the GNU Fortran compiler because the Fortran standard for [https://fortranwiki.org/fortran/show/c_loc c_loc] was not strictly followed. Other compilers (e.g., NVIDIA's Fortran compiler) might not enforce the standard in this case and will produce a working binary.
Solution: Add the <code>TARGET</code> attribute to the variable declarations of matrices <code>A</code> in line 2236 and <code>Z</code> in line 2252 in <code>src/scala.F</code>.
Thanks to user rogeli_grima for the [https://www.vasp.at/forum/viewtopic.php?t=19488 bug report]!
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.2||2024-04-10||<div id="KnownIssue11" style="display:inline;"></div>
'''AOCC >= 4.0 does not produce runnable code when compiling without OpenMP support''':
AOCC compiler versions >= 4.0 apparently apply a more aggressive optimization to a particular symmetry routine (SGRGEN) when compiling '''without''' OpenMP support. Code produced using <code>arch/makefile.include.aocc_ompi_aocl</code> therefore exits with:
<code>VERY BAD NEWS! internal error in subroutine SGRGEN: Too many elements 49</code>
Solution: adapt your makefile.include by adding <code>symlib.o</code> to the <code>OBJECTS_O1</code> line. The other option is to compile with OpenMP support (using <code>arch/makefile.include.aocc_ompi_aocl_omp</code>).
Thanks to users jelle_lagerweij, huangjs, and jun_yin2 for the [https://www.vasp.at/forum/viewtopic.php?t=19390 bug report] and investigations.
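The suggested change is a one-line edit to your makefile.include (a sketch; the surrounding object lists vary between releases):

```make
# Compile the symmetry sources with reduced optimization (-O1) so that
# SGRGEN is not miscompiled by AOCC >= 4.0 without OpenMP.
OBJECTS_O1 += symlib.o
```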
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.3||2024-04-03||<div id="KnownIssue10" style="display:inline;"></div>
'''-DnoAugXCmeta is broken''':
We no longer recommend compiling VASP with this precompiler option, since it negatively affects the results of SCAN and SCAN-like meta-GGA calculations. To make matters worse, this feature is broken in VASP.6.4.3, so definitely '''do not''' compile VASP.6.4.3 with '''-DnoAugXCmeta'''.
|-
| style="background:#EAAEB2" | Open || style="background:#EAAEB2" |6.4.2||2024-03-21||<div id="KnownIssue9" style="display:inline;"></div>
'''Wannier90 <tt>exclude_bands</tt> not supported for SCDM method''':
When using {{TAG|LSCDM}} together with {{TAG|LWANNIER90}} or {{TAG|LWANNIER90_RUN}}, the use of <tt>exclude_bands</tt> in the Wannier90 input file is currently not supported.
|-
| style="background:#9AB7FE" | Planned || style="background:#EAAEB2" |5.4.0||2024-10-14||<div id="KnownIssue22" style="display:inline;"></div>
'''Uninitialized variable IFLAG in ELMIN for ICHARG=5''':
When running VASP with {{TAG|ICHARG}}=5, the variable IFLAG is not properly initialized before calling EDDIAG, so each MPI rank holds a random value for IFLAG. Depending on the compiler, VASP either hangs indefinitely during EDDIAG (ranks waiting on each other) without throwing an error, or silently skips the requested preconditioning for the DAV solver, slowing down convergence. In neither case are the results incorrect.
|-
| style="background:#ACE9E5" | 6.4.3 || style="background:#EAAEB2" |<6||2024-08-21||<div id="KnownIssue19" style="display:inline;"></div>
'''Interface to Wannier90 and PEAD calculations lead to incorrect results for non-collinear spin calculations when symmetries are used''':
The rotation of the spinor part of the wavefunctions was missing, which leads to incorrect projections and overlaps written to the AMN and MMN files used by Wannier90 when {{TAG|LNONCOLLINEAR}}=.TRUE. and {{TAG|ISYM}}>=0 are set in the {{FILE|INCAR}} file. The fix for previous versions is to use {{TAG|ISYM}}=-1.
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.4.2||2024-08-16||<div id="KnownIssue18" style="display:inline;"></div>
'''Hash codes in POSCAR and CONTCAR files''':
Hash codes are printed to {{FILE|CONTCAR}} files. This does not affect the calculation but confuses some users. This has been fixed as of VASP 6.4.3; cf. these forum posts: https://www.vasp.at/forum/viewtopic.php?f=4&t=19108 , https://www.vasp.at/forum/viewtopic.php?f=3&t=19113 , and https://vasp.at/forum/viewtopic.php?p=27108#p27108.
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.4.2||2024-02-06||<div id="KnownIssue8" style="display:inline;"></div>
'''The combination of {{TAG|VCAIMAGES}} and {{TAGO|ISIF|3}} results in non-averaged trajectories''':
The issue arises because the stress tensor is not averaged over the two images of the VASP runs.
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.2.1||2023-10-19||<div id="KnownIssue7" style="display:inline;"></div>
'''Phonon calculations ({{TAGO|IBRION|6}}) fail for some trigonal cells with {{TAGO|ISIF|3}}''':
VASP prints a bug error message complaining that it could not find some '''k''' points of the original mesh in the larger mesh with reduced symmetry of a distortion.
You can set {{TAGO|KBLOWUP|F}} to circumvent this error message while we work on a fix.
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.4.2||2023-09-20||<div id="KnownIssue6" style="display:inline;"></div>
'''Specific cases of {{TAG|SAXIS}} gave an unexpected quantization axis''':
For sx=0 and sy<0, VASP falsely assumes alpha=pi/2; it should correctly yield alpha=-pi/2. This error has probably been present for a long time, but the setting is rarely chosen, and since the treatment is consistent within a calculation, the results should not be affected much.
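The correct convention can be illustrated numerically (a sketch; <code>alpha</code> here is an illustrative helper, not VASP's actual routine):

```python
import math

# The azimuthal angle of the quantization axis follows from the SAXIS
# components (sx, sy); atan2 resolves the sx = 0 quadrants correctly.
def alpha(sx, sy):
    return math.atan2(sy, sx)

print(alpha(0.0, -1.0))   # -pi/2, the correct value for sx=0, sy<0
```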
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.4.2||2023-08-21||<div id="KnownIssue5" style="display:inline;"></div>
'''Restarting a calculation from vaspwave.h5 when the number of k points changed crashes with a bug message''':
This can happen, e.g., because {{TAG|ISYM}} is changed. VASP should behave the same as when restarting from {{FILE|WAVECAR}}.
|-
| style="background:#ACE9E5" |6.4.3|| style="background:#EAAEB2" |6.4.0||2023-04-06||<div id="KnownIssue4" style="display:inline;"></div>
'''LOCPOT file for vasp_ncl is not written correctly''':
{{TAG|LVTOT}}=T for vasp_ncl should write the potential in the "density, magnetization" representation, i.e., the scalar potential (v0) and magnetic field (Bx, By, Bz), to the {{FILE|LOCPOT}} file. However, VASP writes the potential in the (upup, updown, downup, downdown) representation, converted to real numbers, which is incomplete.
<div id="ncl-LOCPOT" style="display:inline;"></div>
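The relation between the two representations follows from V = v0·1 + '''σ'''·'''B'''. A sketch of the conversion at a single grid point (illustrative values; not VASP code):

```python
# Convert a spinor potential from the (upup, updown, downup, downdown)
# representation to (v0, Bx, By, Bz).  With V = v0*I + sigma.B one has
# v_uu = v0 + Bz, v_dd = v0 - Bz, v_ud = Bx - i*By, v_du = Bx + i*By.
def to_density_magnetization(v_uu, v_ud, v_du, v_dd):
    v0 = (v_uu + v_dd) / 2
    bz = (v_uu - v_dd) / 2
    bx = (v_ud + v_du) / 2
    by = (v_du - v_ud) / 2j
    return v0, bx, by, bz

# Illustrative values at one grid point:
v0, bx, by, bz = to_density_magnetization(1.5 + 0j, 0.2 - 0.3j, 0.2 + 0.3j, 0.5 + 0j)
print(v0, bx, by, bz)
```

Dropping the imaginary parts of the off-diagonal components, as the buggy output effectively does, loses By, which is why the written file is incomplete.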
|-
| style="background:#ACE9E5" |6.4.2|| style="background:#EAAEB2" |6.4.0||2023-05-31||<div id="KnownIssue3" style="display:inline;"></div>
'''Fast-mode predictions will crash together with finite differences (IBRION=5,6)''':
At the end of the calculation, the fast mode is supposed to deallocate important arrays using {{TAG|NSW}}. In the finite-differences method, {{TAG|NSW}} is not used and the fast mode can wrongly deallocate at an earlier stage. This results in an error when the code accesses the deallocated arrays. Until a patch is released we suggest two possible quick fixes:
'''(1)''' Avoid explicit deallocations at the end of the program and let the compiler deallocate when the code runs out of scope. For that, remove lines 568, 569, 570, and 572 in the ml_ff_ff2.F file.
|-
| style="background:#ACE9E5" |6.4.2|| style="background:#EAAEB2" |6.4.0||2023-05-17||<div id="KnownIssue2" style="display:inline;"></div>
'''Incorrect MLFF fast-mode predictions for some triclinic geometries''':
Due to an error in the cell-list algorithm, the MLFF predictions (energy, forces, and stress tensor) in the fast-execution mode (<code>{{TAG|ML_MODE}} {{=}} run</code>) may be incorrect for triclinic systems with small or large lattice angles (i.e., large deviations from right angles). Until a patch is released we suggest two possible quick fixes:
'''(1)''' Avoid using the cell-list algorithm for neighbor-list builds ('''recommended'''): Add <code>this%algo_type = 2</code> in a new line below line 923 in <code>src/ml_ff_neighbor.F</code> and recompile {{VASP}}, '''or''',
|-
| style="background:#ACE9E5" |6.4.2|| style="background:#EAAEB2" |6.4.1||2023-05-15||<div id="KnownIssue1" style="display:inline;"></div>
'''Bugs in interface to wannier90''':
* If no projections are supplied (e.g., [[LOCPROJ]], [[LSCDM]]) and there are no projections found in the wannier90 input file, VASP does not produce the UNK files. This also leads to a crash if [[LWANNIER90 RUN]] is used.
* The format of the UNK files is broken for the gamma-only version of VASP.
Thanks to guyohad for the [https://www.vasp.at/forum/viewtopic.php?f=3&t=18949 bug report].
|-
| style="background:#ACE9E5" |6.4.1|| style="background:#EAAEB2" |6.4.0||2023-03-07||
'''Output of memory estimate in machine learning force fields is wrong for SVD refitting''':
The SVD algorithm ({{TAG|ML_IALGO_LINREG}}=3, 4) uses the design matrix plus two helper arrays of the same size as the design matrix. In the memory estimates, these two helper arrays are not accounted for correctly: the entry "FMAT for basis" at the beginning of the {{TAG|ML_LOGFILE}} should be three times larger. The algorithm will be fixed such that it only requires twice the design-matrix storage instead of three times, and the estimates in the output will then be correct.
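Until the fix is released, the safe memory budget follows directly from the statement above (a trivial sketch; the reported value is illustrative):

```python
# The "FMAT for basis" entry in ML_LOGFILE underestimates the required
# memory by a factor of three for ML_IALGO_LINREG = 3, 4: the design
# matrix plus two helper arrays of the same size are actually allocated.
def corrected_fmat_gb(reported_gb):
    return 3.0 * reported_gb

print(corrected_fmat_gb(10.0))  # a reported 10 GB really means 30 GB
```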
|-
| style="background:#ACE9E5" |6.4.1|| style="background:#EAAEB2" |6.4.0||2023-03-07||
'''Bug in sparsification routine for machine learning force fields''':
This bug most severely affects calculations where the number of local reference configurations approaches {{TAG|ML_MB}}. By setting {{TAG|ML_MB}} to a high value, this bug can be avoided in most cases (exceptions remain, especially where a small number of local reference configurations is picked and the structure contains many atoms per type, or where {{TAG|ML_MCONF_NEW}} is set to a high value). This bug can especially affect refitting runs, resulting in no {{TAG|ML_FFN}} file being written.
|-
| style="background:#ACE9E5" |6.4.1|| style="background:#EAAEB2" |6.4.0||2023-03-07||
'''{{TAG|ML_ISTART}}=2 on sub element types broken for fast force field''':
When the force field is trained for multiple element types but the production runs ({{TAG|ML_ISTART}}=2) are carried out for a subset of those types, the code most likely crashes. This bug will be urgently fixed.
|-
| style="background:#ACE9E5" |6.4.1|| style="background:#EAAEB2" |6.2.0||2023-02-20||
'''INCAR reader issues''':
* Moving an {{FILE|INCAR}} file from a system with Windows line endings to a Unix-based system can cause the {{FILE|INCAR}} reader to fail. As a workaround, convert the {{FILE|INCAR}} file to Unix line endings, e.g., by <code>:set ff=unix</code> in vi.
* Comment lines do not work properly with inline tags separated by semicolons if the comment character occurs before the semicolon but not at the beginning of the line. As a workaround, split the tags over multiple lines so that you can comment out what you want. Please also check the {{FILE|OUTCAR}} file to see whether VASP understood your input.
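The line-ending conversion can also be done on the command line (a sketch; <code>dos2unix</code> may not be installed everywhere, so plain <code>tr</code> is used, and the file contents here are illustrative):

```shell
# Workaround sketch: strip Windows carriage returns from an INCAR file.
printf 'ENCUT = 500\r\nISMEAR = 0\r\n' > INCAR_crlf   # simulate a CRLF INCAR
tr -d '\r' < INCAR_crlf > INCAR                       # rewrite with Unix endings
cat INCAR
```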
|-
| style="background:#ACE9E5" |6.4.1|| style="background:#EAAEB2" |6.4.0||2023-02-17||
'''Corrupt ML_FFN files on some file systems''':
Insufficient protection against concurrent write statements may lead to corrupt {{FILE|ML_FFN}} files on some file systems. The broken files often remain unnoticed until they are used in a prediction-only run with {{TAG|ML_ISTART}}=2; then {{VASP}} is likely to exit with a misleading error message about incorrect types present in the {{FILE|ML_FF}} file. As a workaround, it may help to refit starting from the last {{FILE|ML_AB}} file with {{TAG|ML_MODE}}=refit, which may generate a working {{FILE|ML_FFN}} file (this is in any case highly recommended to gain access to the fast execution mode in {{TAG|ML_ISTART}}=2). Alternatively, there is a patch for VASP.6.4.0 available (see the attachment to [https://www.vasp.at/forum/viewtopic.php?f=3&t=18842#p23422 this forum post]). Thanks a lot to [https://www.vasp.at/forum/memberlist.php?mode=viewprofile&u=67168 xiliang_lian] and [https://www.vasp.at/forum/memberlist.php?mode=viewprofile&u=68404 szurlle] for [https://www.vasp.at/forum/viewtopic.php?f=3&t=18842 reporting this and testing the patch].
|-
| style="background:#ACE9E5" |6.4.0|| style="background:#EAAEB2" |6.3.2||2023-01-18||
'''makefile.include template does not work for AOCC 4.0.0''':
The ''flang'' preprocessor explicitly requires the flag <code>-ffree-form</code> to specify that the code is in free format. In earlier versions of VASP you can add this flag to the <code>CPP</code> rule in the makefile.include. Thanks to [https://www.vasp.at/forum/memberlist.php?mode=viewprofile&u=66916 liu_jiyuan] for reporting [https://www.vasp.at/forum/viewtopic.php?f=2&t=18802 this bug].
|-
| style="background:#ACE9E5" |6.4.0|| style="background:#EAAEB2" |6.1.0||2022-11-23||
'''Memory leak in MD in OpenMP version compiled with AOCC and NV''':
This problem originates from the <code>DEFAULT(PRIVATE)</code> clause in <code>SET_DD_PAW</code> in <code>paw.F</code>: the NV and AOCC compilers do not correctly clean up the memory for arrays that were allocated outside the OMP PARALLEL region and used as private inside it. We advise against compiling with OpenMP support with the NV and AOCC compilers for VASP <= 6.3.2.
|-
| style="background:#ACE9E5" |6.3.2|| style="background:#EAAEB2" |5.4.4||2021-11-12||
'''Ionic contributions to the macroscopic polarization with atoms at the periodic boundary''':
A section of code was removed from POINT_CHARGE_DIPOL that added a copy of the atom when it is at the periodic boundary.
This can lead to a different value of "Ionic dipole moment: p[ion]" being reported in the {{FILE|OUTCAR}} with respect to previous versions of VASP.
This result, although numerically different, is still correct, since the polarization is defined only up to integer multiples of the polarization quantum.
Thanks to Chengcheng Xiao for the [https://www.vasp.at/forum/viewtopic.php?f=3&t=18141 bug report].
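That equivalence can be checked numerically (a sketch with made-up values; <code>quantum</code> stands for the polarization quantum along the relevant lattice direction):

```python
import math

# Two reported ionic dipole moments along one lattice direction are
# physically equivalent if they differ by an integer multiple of the
# polarization quantum (illustrative 1D check, made-up numbers).
def equivalent(p_old, p_new, quantum, tol=1e-8):
    n = (p_new - p_old) / quantum
    return math.isclose(n, round(n), abs_tol=tol)

quantum = 10.8  # assumed value of the polarization quantum
print(equivalent(-3.4, 7.4, quantum))   # differ by exactly one quantum
print(equivalent(-3.4, 6.0, quantum))   # not an integer multiple
```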
|-
| style="background:#ACE9E5" |6.3.2|| style="background:#EAAEB2" |6.3.1||2022-05-11||
'''ML_ISTART=1 fails for some scenarios''':
Due to a bug in the rearrangement of the structures found in the {{FILE|ML_AB}} file, restarting the training of a force field by means of {{TAG|ML_ISTART}}{{=}}1 fails in some cases. '''N.B.: this problem only occurs when one repeatedly restarts the training and returns to training for a structure that was trained on before (that means exactly the same element types and number of atoms per element), but not immediately before.''' Example: one starts training a force field for structure A, follows this with a continuation run to train for structure B, and then restarts a second time, returning to training for structure A again.
<!--
|-
|style = "background:#9AB7FE"|6.4. ||style = "background:#EAAEB2"|6.2.0||2022-05-11||
'''ELF is POTCAR dependent''':
The electronic localisation function (ELF) is only implemented on the plane-wave grid. In practice, this leads to a strong PAW-potential dependence, which can be tested by comparing the POTCAR files recommended for GW and GGA.
-->
|-
| style="background:#ACE9E5" |6.3.1|| style="background:#EAAEB2" |6.2.0||2022-05-05||
'''Treatment of the Coulomb divergence in hybrid-functional band-structure calculations is only correct for PBE0''':
The Coulomb divergence correction for states at and near the Γ-point in hybrid-functional band-structure calculations (see {{TAG|HFRCUT}}) was only correctly implemented for PBE0 and {{TAG|HFRCUT}}{{=}}-1. Note: HSE band-structure calculations are not expected to be (strongly) affected, because this hybrid functional only includes “short-range” Fock exchange.
|-
| style="background:#ACE9E5" |6.3.1|| style="background:#EAAEB2" |6.2.0||2022-03-14||
'''Bug in interface with Wannier90 for non-collinear spin calculations''':
The spin axis for non-collinear spin calculations is not correctly read from the wannier90 input file, because this line in the <code>mlwf.F</code> file: <code>MLWF%LPRJ_functions(IS)%spin_qaxis = proj_s_qaxisx(3,IS)</code> should instead be: <code>MLWF%LPRJ_functions(IS)%spin_qaxis = proj_s_qaxisx(:,IS)</code>. Thanks to Domenico Di Sante for reporting this [https://www.vasp.at/forum/viewtopic.php?f=3&t=18424 bug].
|-
| style="background:#ACE9E5" |6.3.1|| style="background:#EAAEB2" |6.3.0||2022-02-04||
'''Incompatibility with Fujitsu compiler''':
Fujitsu's Fortran compiler does not support overloaded internal subroutines. A simple workaround is to compile without [[:Category:Machine-learned force fields|machine-learned-force-field capabilities]]. Comment out the macro definition of <code>ML_AVAILABLE</code> in line 626 of <code>src/symbol.inc</code> by adding a <code>!</code> in front, i.e., it should look like this: <code>!#define ML_AVAILABLE</code>. Then do a complete rebuild of VASP: run <code>make veryclean</code> followed by your desired build command.
|- | |- | ||
| style="background:#ACE9E5" |6.3.0|| style="background:#EAAEB2" |6.2.0||2021-05-28|| | |||
'''Bug in interface with Wannier90 writing UNK when exclude_bands present''': The UNK files generated by VASP include all bands where bands specified by `exclude_bands` should be excluded. | '''Bug in interface with Wannier90 writing UNK when exclude_bands present''': | ||
The UNK files generated by VASP include all bands where bands specified by `exclude_bands` should be excluded. | |||
The fix is to pass the `exclude_bands` array to `get_wave_functions` in mlwf.F. Thanks to Chengcheng Xiao for reporting this [https://vasp.at/forum/viewtopic.php?f=3&t=18140 bug]. | The fix is to pass the `exclude_bands` array to `get_wave_functions` in mlwf.F. Thanks to Chengcheng Xiao for reporting this [https://vasp.at/forum/viewtopic.php?f=3&t=18140 bug]. | ||
|- | |||
| style="background:#ACE9E5" |6.2.0|| style="background:#EAAEB2" |6.1.0||2022-08-29|| | |||
'''Inconsistent energy for fixed electron occupancies''': | |||
Rickard Armiento pointed out that the HF total energy for fixed electron occupancies was inconsistent when compared to 5.4.4 or older versions. | |||
This bug was introduced in 6.1.0 in order to support {{TAG|IALGO}}=3 in combination with {{TAG|ISMEAR}}=-2 (for <code>SPHPRO</code> calculations as post-processing step) but broke the CG algorithms ({{TAG|IALGO}}=53) | |||
The fix was added in <code> src/main.F </code> with <code> IF (INFO%LONESW .OR. (INFO%IALGO==3 .AND. KPOINTS%ISMEAR/=-2)) THEN \n IF (INFO%LONESW) W_F%CELTOT = W%CELTOT </code> | |||
. | |||
|- | |||
| style="background:#ACE9E5" | >=6 || style="background:#EAAEB2" |<6||2023-10-31|| | |||
'''For {{TAG|LORBIT}} >= 11 and {{TAG|ISYM}} = 2, the partial charge densities are not correctly symmetrized''': | |||
This can result in different charges for symmetrically equivalent partial charge densities. For older versions of VASP, we recommend a two-step procedure: | |||
*1. Self-consistent calculation with symmetry switched on ({{TAG|ISYM}}=2) | |||
*2. Recalculation of the partial charge density with symmetry switched off ({{TAG|ISYM}}=0) | |||
To avoid unnecessary large {{TAG|WAVECAR}} files, we recommend setting {{TAG|LWAVE}}=.FALSE. in step 2. | |||
|- | |||
| style="background:#CBCBCB" | PBE.64 || style="background:#CBCBCB" | - ||2024-10-25|| | |||
'''Date in Nd POTCAR of release PBE.64 should be 25 May 2022 (not 2002)''': | |||
There is a typo in the first line of the Nd POTCAR for the PBE version 64 release. It reads "PAW_PBE Nd 25May2002" instead of "PAW_PBE Nd 25May2022". | |||
|} | |} | ||
Line 139: | Line 272: | ||
[[Category:VASP]] | [[Category:VASP]] | ||
[[Category: | [[Category:Version]] |
Latest revision as of 15:16, 14 November 2024
Below we provide an incomplete list of known issues. Please check the description of each issue to see whether it has already been fixed.
Color legend: Open Resolved Planned Obsolete
Version fixed | Version first noticed | Date | Description |
---|---|---|---|
Open | <5.4.4 | 2024-11-14 |
There is a bug in the implementation of the van der Waals dDsC method (IVDW=4). The atomic volumes and charges are not calculated correctly, which translates into errors in the total energy and forces. |
Planned | 6.4.2 | 2024-11-13 |
SCDM method gives incorrect results for second spin channel when ISPIN = 2: The SCDM method (LSCDM = True) uses a rank-revealing QR decomposition with column pivoting to select the optimal columns of the density matrix. For the second spin channel, the pivot array is not correctly initialized to zero. This causes the results to be unreliable. Thanks to Patrick J. Taylor who reported the behavior in this forum post. |
Open | <6 | 2024-08-26 |
NaNs if NSW*NBLOCK, NSW*ML_OUTBLOCK or NBLOCK*KBLOCK > 2^31 − 1: Either product must not exceed the largest integer(4) number 2147483647 (= 2^31 − 1). As a solution, split the molecular dynamics run into multiple calculations with smaller values for NSW or KBLOCK. Thanks to Renjie Chen who reported the behavior in this post: Error in MD annealing simulation with ML potential |
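The limit can be checked up front when planning a long MD run; a minimal Python sketch (the helper name is illustrative, not part of VASP):

```python
INT4_MAX = 2**31 - 1  # largest value a Fortran integer(4) can hold

def md_counters_safe(nsw, nblock=1, kblock=1, ml_outblock=1):
    """Check that the step-counter products stay within integer(4) range."""
    products = (nsw * nblock, nsw * ml_outblock, nblock * kblock)
    return all(p <= INT4_MAX for p in products)
```

If the check fails, split the run into several shorter MD runs with smaller NSW.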
Open | 6.4.3 | 2024-08-26 |
Memory estimation in ML_MODE=TRAIN is wrong: The memory estimation for the major arrays (design matrix - FMAT, covariance matrix - CMAT, etc.) can be significantly too small. This problem mainly appears when ML_MODE=TRAIN is selected, especially in a continuation run. Until this is officially fixed, do not rely on the memory estimation! |
Open | <6 | 2024-08-21 |
Dielectric tensor and Born effective charges from density functional perturbation theory are incorrect for non-collinear spin calculations when symmetries are used: The rotation of the spinor part of the derivatives of the wavefunctions with respect to k was missing, which leads to incorrect results when using LEPSILON=.TRUE. in combination with LNONCOLLINEAR=.TRUE. and ISYM>=0. The fix for previous versions is to use ISYM=-1. |
Open | 6.4.3 | 2024-08-05 |
ML_MODE=REFIT is broken if VASP is compiled without the precompiler flag -DscaLAPACK: Some arrays are allocated with the wrong size, leading to a crash with unclear exit messages. This bug should not affect many users, since we strongly recommend running ML_MODE=REFIT with -DscaLAPACK; otherwise the SVD and related routines are not parallelized. This bug will be fixed in the next release. |
Open | <6 | 2024-05-31 |
Compiler optimizations with the Fujitsu compiler on A64FX platforms:
Thanks to Ivan Rostov for the bug report. |
Open | <6 | 2024-05-27 |
Calculations with LMODELHF=.TRUE. crash if started without WAVECAR file in the directory: The crash occurs because of a division by the screening parameter that is zero during the first few iterations that are done with the functional from the POTCAR file. If a WAVECAR file is present, then these first few iterations are skipped. |
Open | 6.4.3 | 2024-05-14 |
Using LCALCEPS in combination with hybrid functionals may lead to a crash when running on GPU: VASP may crash when using hybrid functionals in combination with LCALCEPS on GPUs due to an error when distributing the electronic states to be optimized for a batched FFT. To work around this issue for the moment, set NBLOCK_FOCK <= number of occupied states in the INCAR file. Thank you Sergey Lisenkov and Francesco Ricci for reporting the bug. |
Open | <6 | 2024-05-13 |
Reading the file DYNMATFULL may lead to a crash in MPI-parallel calculations: If SCALEE≠1, then the file DYNMATFULL is read if present. This may lead to a crash in MPI-parallel calculations, in particular with the gfortran compiler. Thanks to Vyacheslav Bryantsev for the bug report. |
Open | 6.4.3 | 2024-04-10 |
Compilation error for GCC with ELPA support:
Compilation with ELPA support (Makefile.include#ELPA_(optional)) fails for the GNU Fortran compiler because the Fortran standard for c_loc was not strictly followed. Other compilers (e.g. NVIDIA's Fortran compiler) might not enforce the standard in this case and will produce a working binary.
Thanks to user rogeli_grima for the bug report! |
Open | 6.4.2 | 2024-04-10 |
AOCC >= 4.0 does not produce runnable code when compiling without OpenMP support: The AOCC compiler version >= 4.0 apparently uses more aggressive optimization on a particular symmetry routine (SGRGEN) when compiling without OpenMP support. Thus, code produced using arch/makefile.include.aocc_ompi_aocl exits with an error. Solution: adapt your makefile.include accordingly. Thanks to users jelle_lagerweij, huangjs, and jun_yin2 for the bug report and investigations. |
Open | 6.4.3 | 2024-04-03 |
-DnoAugXCmeta is broken: We no longer recommend compilation of VASP with this precompiler option since it negatively affects the results of SCAN and SCAN-like meta-GGA calculations. To make matters worse, this feature is broken in VASP.6.4.3. So definitely do not compile VASP.6.4.3 with -DnoAugXCmeta. |
Open | 6.4.2 | 2024-03-21 |
Wannier90 exclude_bands not supported for SCDM method: When using LSCDM together with LWANNIER90 or LWANNIER90_RUN, the use of exclude_bands in the Wannier90 input file is currently not supported. |
Planned | 5.4.0 | 2024-10-14 |
Uninitialized variable IFLAG in ELMIN for ICHARG=5: When running VASP with ICHARG=5, the variable IFLAG is not properly initialized before calling EDDIAG, so each MPI rank holds a random value for IFLAG when using the DAV algorithm. Depending on the compiler, this could cause VASP to wait indefinitely during EDDIAG (MPI ranks waiting on each other) without throwing an error, or to skip the requested preconditioning for the DAV solver, slowing down convergence. It did not produce incorrect results. |
6.4.3 | <6 | 2024-08-21 |
Interface to Wannier90 and PEAD calculations lead to incorrect results for non-collinear spin calculations when symmetries are used: The rotation of the spinor part of the wavefunctions was missing which leads to incorrect results when computing the projections and overlaps written to the AMN and MMN files used by Wannier90 when LNONCOLLINEAR=.TRUE. and ISYM>=0 are set in the INCAR file. The fix for previous versions is to use ISYM=-1. |
6.4.3 | 6.4.2 | 2024-08-16 |
Hash codes in POSCAR and CONTCAR files: Hash codes are printed out to CONTCAR files. This does not affect the calculation but confuses some users. This has been fixed as of VASP 6.4.3, cf. forum posts: https://www.vasp.at/forum/viewtopic.php?f=4&t=19108 , https://www.vasp.at/forum/viewtopic.php?f=3&t=19113 , and https://vasp.at/forum/viewtopic.php?p=27108#p27108. |
6.4.3 | 6.4.2 | 2024-02-06 |
The combination of VCAIMAGES and |
6.4.3 | 6.2.1 | 2023-10-19 |
Phonon calculations: Thanks to barshab for the bug report. |
6.4.3 | 6.4.2 | 2023-09-20 |
Specific cases of SAXIS gave unexpected quantization axis: For sx=0 and sy<0, VASP falsely assumes alpha=pi/2. It should correctly yield alpha=-pi/2. This error has probably been present for a long time, but the setting is rarely chosen and, since the treatment is consistent within a calculation, the results should not be affected much. |
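The correct convention follows directly from a proper two-argument arctangent; a small Python sketch of the angles, assuming the standard spherical-angle definition used for SAXIS:

```python
import math

def saxis_angles(sx, sy, sz):
    """Azimuthal and polar angle of the quantization axis (sx, sy, sz)."""
    alpha = math.atan2(sy, sx)                 # handles sx = 0, sy < 0 correctly
    beta = math.atan2(math.hypot(sx, sy), sz)  # polar angle measured from z
    return alpha, beta

# For sx = 0, sy < 0 this yields alpha = -pi/2, not +pi/2.
```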
6.4.3 | 6.4.2 | 2023-08-21 |
Restarting a calculation from vaspwave.h5 when the number of k points changed crashes with a bug message: This can happen, e.g., because ISYM is changed. VASP should behave the same as restarting from WAVECAR. |
6.4.3 | 6.4.0 | 2023-04-06 |
LOCPOT file for vasp_ncl is not written correctly: LVTOT=T for vasp_ncl should write the potential in the "density, magnetization" representation, i.e., the scalar potential (v0), and magnetic field (Bx, By, Bz), to the LOCPOT file. However, VASP writes the potential in the (upup, updown, downup, downdown) representation to real numbers, which is incomplete. |
6.4.2 | 6.4.0 | 2023-05-31 |
Fast-mode predictions will crash together with finite difference (IBRION=5,6): At the end of the calculation the fast mode is supposed to deallocate important arrays using NSW. In the finite differences method NSW is not used and the fast mode can wrongly deallocate at an earlier stage. This results in an error if the code wants to access the deallocated arrays. Until a patch is released we suggest two possible quick fixes: (1) Avoid explicit deallocations at the end of the program and let the compiler deallocate when the code runs out of scope. For that, remove lines 568, 569, 570, and 572 in the ml_ff_ff2.F file. (2) Avoid the fast-prediction mode: retrain the MLFF without support for the fast mode. Thanks to Soungminbae for the bug report. |
6.4.2 | 6.4.0 | 2023-05-17 |
Incorrect MLFF fast-mode predictions for some triclinic geometries:
Due to an error in the cell-list algorithm, the MLFF predictions (energy, forces, and stress tensor) in the fast-execution mode can be incorrect for some triclinic geometries. Two possible workarounds: (1) Avoid using the cell-list algorithm for neighbor-list builds (recommended). (2) Avoid the fast-prediction mode: retrain the MLFF without support for the fast mode. Thanks to Johan for a very detailed bug report. |
6.4.2 | 6.4.1 | 2023-05-15 |
Bugs in interface to wannier90:
Thanks to guyohad for the bug report. |
6.4.1 | 6.4.0 | 2023-03-07 |
Output of memory estimate in machine learning force fields is wrong for SVD refitting: The SVD algorithm (ML_IALGO_LINREG=3, 4) uses the design matrix and two auxiliary arrays of the same size as the design matrix. The memory estimate does not account for these two arrays correctly: the entry "FMAT for basis" at the beginning of the ML_LOGFILE should be three times larger. The algorithm will be fixed such that it only requires twice the design-matrix storage instead of three times, and the estimates will then contain the correct values. |
6.4.1 | 6.4.0 | 2023-03-07 |
Bug in sparsification routine for machine learning force fields: This bug most severely affects calculations where the number of local reference configurations gets close to ML_MB. By setting ML_MB to a high value, this bug can be avoided in most cases (there are still some cases, especially where a small number of local reference configurations is picked and the structure contains many atoms per type, or where ML_MCONF_NEW is set to a high value). This bug can especially affect refitting runs, resulting in no ML_FFN file. |
6.4.1 | 6.4.0 | 2023-03-07 |
ML_ISTART=2 on sub element types broken for fast force field: When the force field is trained for multiple element types, but the production runs (ML_ISTART=2) are carried out for a subset of those types, the code most likely crashes. This bug will be fixed urgently. |
6.4.1 | 6.2.0 | 2023-02-20 |
INCAR reader issues:
|
6.4.1 | 6.4.0 | 2023-02-17 |
Corrupt ML_FFN files on some file systems: Insufficient protection against concurrent write statements may lead to corrupt ML_FFN files on some file systems. The broken files often remain unnoticed until they are used in a prediction-only run with ML_ISTART=2. Then, VASP is likely to exit with a misleading error message about incorrect types present in the ML_FF file. As a workaround, it may help to refit starting from the last ML_AB file with ML_MODE=refit, which may generate a working ML_FFN file (this is highly recommended anyway, to gain access to the fast execution mode in ML_ISTART=2). Alternatively, there is a patch for VASP.6.4.0 available (see attachment to this forum post). Thanks a lot to xiliang_lian and szurlle for reporting this and testing the patch. |
6.4.0 | 6.3.2 | 2023-01-18 |
makefile.include template does not work for AOCC 4.0.0:
The flang preprocessor explicitly requires specifying that the code is in free format |
6.4.0 | 6.1.0 | 2022-11-23 |
Memory leak in MD in OpenMP version compiled with AOCC and NV:
This problem originates from the |
6.3.2 | 5.4.4 | 2021-11-12 |
Ionic contributions to the macroscopic polarization with atoms at the periodic boundary: Removed a section of code from POINT_CHARGE_DIPOL that adds a copy of the atom when it is at the periodic boundary. This can lead to a different value of "Ionic dipole moment: p[ion]" being reported in the OUTCAR with respect to previous versions of VASP. Although numerically different, this result is still correct, since the polarization is defined only up to integer multiples of the polarization quantum. Thanks to Chengcheng Xiao for the bug report. |
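Whether two reported dipole values agree up to the polarization quantum can be checked numerically; a minimal Python sketch (the function name and tolerance are illustrative, not part of VASP):

```python
def equivalent_polarization(p1, p2, quantum, tol=1e-8):
    """True if p1 and p2 differ by an integer multiple of the polarization quantum."""
    n = round((p1 - p2) / quantum)
    return abs((p1 - p2) - n * quantum) < tol
```

Here `quantum` is the polarization quantum along the relevant lattice direction; values reported by different VASP versions for the same structure should pass this check.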
6.3.2 | 6.3.1 | 2022-05-11 |
ML_ISTART=1 fails for some scenarios: Due to a bug in the rearrangement of the structures found on the ML_AB file, restarting the training of a force field by means of ML_ISTART=1 fails in some cases. N.B.: this problem only occurs in a scenario where one repeatedly restarts the training and returns to training for a structure that was trained on before (that is, exactly the same element types and number of atoms per element), but not immediately before. Example: one starts training a force field for structure A, follows this by a continuation run to train for structure B, and then restarts a second time, returning to training for structure A again. |
6.3.1 | 6.2.0 | 2022-05-05 |
Treatment of the Coulomb divergence in hybrid-functional band-structure calculations is only correct for PBE0: The Coulomb divergence correction for states at and near the Γ-point in hybrid-functional band-structure calculations (see HFRCUT) was only correctly implemented for PBE0 and HFRCUT=-1. Note: HSE band-structure calculations are not expected to be (strongly) affected because this hybrid functional only includes “short-range” Fock exchange. |
6.3.1 | 6.2.0 | 2022-03-14 |
Bug in interface with Wannier90 for non-collinear spin calculations:
The spin axis for non-collinear spin calculations is not correctly read from the wannier90 input file. This is because this line in the |
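In array terms, the bug copied a single component where the whole 3-vector was intended; a Python illustration of the difference, with plain nested lists standing in for the Fortran array proj_s_qaxisx of shape (3, num_projections):

```python
# One 3-component spin axis per projection; rows are the x, y, z components,
# columns are the projections (mirroring the Fortran shape (3, num_proj)).
proj_s_qaxisx = [
    [0.0, 1.0],  # x components of projections 1 and 2
    [0.0, 0.0],  # y components
    [1.0, 0.0],  # z components
]

IS = 0  # first projection (Fortran index 1)
buggy = proj_s_qaxisx[3 - 1][IS]             # like (3,IS): only the z component
fixed = [row[IS] for row in proj_s_qaxisx]   # like (:,IS): the full axis vector
```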
6.3.1 | 6.3.0 | 2022-02-04 |
Incompatibility with Fujitsu compiler:
Fujitsu's Fortran compiler does not support overloaded internal subroutines. A simple workaround is to compile without machine-learned force field capabilities. Comment out the macro definition of ML_AVAILABLE on line 626 of src/symbol.inc by adding a ! in front, i.e., it should look like this: !#define ML_AVAILABLE. Then do a complete rebuild of VASP: run make veryclean followed by your desired build command. |
6.3.0 | 6.2.0 | 2021-05-28 |
Bug in interface with Wannier90 writing UNK when exclude_bands present: The UNK files generated by VASP include all bands, whereas the bands specified by `exclude_bands` should be excluded. The fix is to pass the `exclude_bands` array to `get_wave_functions` in mlwf.F. Thanks to Chengcheng Xiao for reporting this bug. |
6.2.0 | 6.1.0 | 2022-08-29 |
Inconsistent energy for fixed electron occupancies:
Rickard Armiento pointed out that the HF total energy for fixed electron occupancies was inconsistent when compared to 5.4.4 or older versions.
This bug was introduced in 6.1.0 in order to support IALGO=3 in combination with ISMEAR=-2 (for SPHPRO calculations as a post-processing step) but broke the CG algorithms (IALGO=53). The fix was added in src/main.F: the condition now reads IF (INFO%LONESW .OR. (INFO%IALGO==3 .AND. KPOINTS%ISMEAR/=-2)) THEN, followed by IF (INFO%LONESW) W_F%CELTOT = W%CELTOT. |
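The corrected control flow can be restated outside Fortran; a Python paraphrase of the condition (a sketch for clarity, not actual VASP code):

```python
def copy_celtot(lonesw, ialgo, ismear):
    """Mirror of the corrected IF condition in src/main.F: proceed for the
    all-band algorithms, or for IALGO=3 unless fixed occupancies (ISMEAR=-2)
    are requested."""
    return lonesw or (ialgo == 3 and ismear != -2)
```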
>=6 | <6 | 2023-10-31 |
For LORBIT >= 11 and ISYM = 2, the partial charge densities are not correctly symmetrized: This can result in different charges for symmetrically equivalent partial charge densities. For older versions of VASP, we recommend a two-step procedure:
1. Self-consistent calculation with symmetry switched on (ISYM=2)
2. Recalculation of the partial charge density with symmetry switched off (ISYM=0)
To avoid unnecessarily large WAVECAR files, we recommend setting LWAVE=.FALSE. in step 2. |
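The two-step procedure can be scripted; a minimal Python sketch that edits the INCAR text between the steps (set_incar_tags is a hypothetical helper, and the actual VASP invocation is left to your scheduler):

```python
import re

def set_incar_tags(text, tags):
    """Return INCAR text with the given tags set, replacing existing entries."""
    for key, value in tags.items():
        pattern = re.compile(rf"^\s*{key}\s*=.*$", re.MULTILINE)
        line = f"{key} = {value}"
        if pattern.search(text):
            text = pattern.sub(line, text)
        else:
            text += ("" if text.endswith("\n") else "\n") + line + "\n"
    return text

# Step 1: self-consistent run with symmetry switched on.
incar_step1 = set_incar_tags("LORBIT = 11\n", {"ISYM": 2})
# Step 2: recompute partial charge densities with symmetry off, no WAVECAR.
incar_step2 = set_incar_tags(incar_step1, {"ISYM": 0, "LWAVE": ".FALSE."})
```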
PBE.64 | - | 2024-10-25 |
Date in Nd POTCAR of release PBE.64 should be 25 May 2022 (not 2002): There is a typo in the first line of the Nd POTCAR for the PBE version 64 release. It reads "PAW_PBE Nd 25May2002" instead of "PAW_PBE Nd 25May2022". |