ML_CX
ML_CX = [real]
Default: ML_CX = 0.0
Description: The parameter determines how the threshold (ML_CTIFOR) is updated within the machine learning force field methods.
The usage of this tag in combination with the learning algorithms is described in Machine learning force field calculations: Basics (section "Threshold for error of forces").
If ML_ICRITERIA>0, ML_CTIFOR is set to the average of the Bayesian errors of the forces stored in a history. Note that ML_ICRITERIA=1 and ML_ICRITERIA=2 average over different data: for ML_ICRITERIA=1 the average is taken over the Bayesian errors recorded after updates of the force field, whereas for ML_ICRITERIA=2 it is taken over all Bayesian error estimates (see ML_ICRITERIA). In both cases, if ML_CTIFOR is updated, it is set to
ML_CTIFOR = (average of the stored Bayesian errors in the history) * (1.0 + ML_CX).
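For example, with purely illustrative numbers: if the average of the stored Bayesian errors is 0.002 eV/Å, then ML_CX=0.15 yields ML_CTIFOR = 0.002 * (1.0 + 0.15) = 0.0023 eV/Å, whereas ML_CX=-0.1 yields ML_CTIFOR = 0.002 * (1.0 - 0.1) = 0.0018 eV/Å, i.e. a tighter threshold that triggers first-principles calculations more often.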
Obviously, setting ML_CX to a positive value results in fewer first-principles calculations and fewer updates of the MLFF, whereas negative values result in more frequent first-principles calculations (as well as updates of the MLFF). Typical values of ML_CX are between -0.2 and 0.0 for ML_ICRITERIA=1, and between 0.0 and 0.3 for ML_ICRITERIA=2 (a good starting value is 0.15 for ML_ICRITERIA=2). For training runs using heating, the default usually results in very well balanced machine-learned force fields. When the training is performed at a fixed temperature, it is often desirable to decrease ML_CX to -0.1 in order to increase the number of first-principles calculations and thus the size of the training set (the default can result in too little training data).
The number of entries in the history is controlled by ML_MHIS for ML_ICRITERIA=1; for ML_ICRITERIA=2 it is currently fixed to 400.
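As a minimal sketch, an INCAR for on-the-fly training at a fixed temperature could combine the tags discussed here as follows (the specific values are only illustrative choices based on the recommendations above, not required settings):

ML_LMLFF     = .TRUE.   ! switch on the machine-learned force field machinery
ML_ICRITERIA = 1        ! update ML_CTIFOR from the history of Bayesian errors recorded after force-field updates
ML_CX        = -0.1     ! lowered threshold -> more first-principles steps and a larger training set
ML_MHIS      = 10       ! number of history entries used for the average (only relevant for ML_ICRITERIA=1)

ML_CTIFOR itself does not need to be specified in this case: with ML_ICRITERIA>0 it is adjusted automatically according to the formula above.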
Related tags and articles
ML_LMLFF, ML_ICRITERIA, ML_CTIFOR, ML_MHIS, ML_CSIG, ML_CSLOPE