Running Fleur
This section addresses the question of how to run Fleur "by hand". If you are interested in running Fleur in a scripting environment, the AiiDA plug-in should be considered.
There are several executables created in the build process:
- inpgen: The input generator used to construct the full Fleur input file using only basic structural input about the unit cell, given in an input file for inpgen itself. The generated Fleur input file features unit-cell-adapted default parameters.
and one of:
- fleur: A serial version (i.e. no MPI distributed-memory parallelism; multithreading can still be used)
- fleur_MPI: A parallel version of Fleur able to run on multiple nodes using MPI. It can also be started in a serial way without the respective MPI command.
In most cases you will first run the input generator to create an inp.xml file. Afterwards you will run fleur or fleur_MPI using this inp.xml file.
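A typical session might thus look like the following sketch. The inpgen input file name struct.inp and the -f switch for passing it are assumptions here; check inpgen -h for the exact invocation of your version.

inpgen -f struct.inp   # generate inp.xml from the basic structural input (file name is an assumption)
fleur                  # run Fleur using the inp.xml in the current directory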
Please note that the Fleur executable will always read its input from an inp.xml file in the current directory.
Command line options
The run-time behaviour of Fleur can be modified using command line switches. These switches modify the way Fleur operates or, in some cases, determine what Fleur actually does. If you want to change the calculation setup, you should modify the inp.xml file instead.
In the following, the most relevant command line options are listed. For a full list of available options, please run fleur -h.
General options:
- -h: Prints a help message listing all command line options.
- -check: Runs only the init part of Fleur; useful to check if the setup is correct.
- -debugtime: Prints out all starting/stopping of timers. Can be useful to monitor the progress of the run.
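For example, a quick sanity check of a new setup before submitting a long job could be done as follows (a minimal sketch):

fleur -check   # runs only the initialization: reads inp.xml and verifies the setup, no SCF iterations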
Options controlling the IO of eigenvectors/eigenvalues (not all are available if Fleur was not compiled with the required libraries):
- -eig mem: No IO; all eigenvectors are stored in memory. This can be a problem if you have little memory and many k-points. Default for the serial version of Fleur.
- -eig da: Write data to disk using Fortran direct-access files. Fastest disk-IO scheme. Only available in the serial version of Fleur.
- -eig mpi: No IO; all eigenvectors are stored in memory in a distributed fashion using MPI one-sided communication. Default for the MPI version of Fleur. Only available in the MPI version of Fleur.
- -eig hdf: Write data to disk using the HDF5 library. Can be used in the serial and MPI versions (if HDF5 was compiled for MPI-IO).
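If in-memory storage becomes a problem, switching the IO scheme is a pure command line change. A minimal sketch, assuming Fleur was built with HDF5 support:

fleur -eig hdf   # store eigenvectors/eigenvalues on disk via HDF5 instead of in memory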
Options controlling the diagonalization (not all are available if Fleur was not compiled with the required libraries):
- -diag lapack: Use standard LAPACK routines. Default in Fleur (if no eigenvector parallelization is used).
- -diag scalapack: Use ScaLAPACK for eigenvector parallelization.
- -diag elpa: Use ELPA for eigenvector parallelization.
- Further options might be available; check fleur -h for a list.
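As a sketch, an MPI run that keeps eigenvectors distributed in memory and diagonalizes with ELPA (assuming an MPI build with ELPA support; the rank count is a placeholder) might be started as:

mpirun -np 4 fleur_MPI -diag elpa -eig mpi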
Environment Variables
There are basically two environment variables you might want to adjust when using Fleur.
- OMP_NUM_THREADS: As Fleur uses OpenMP, it is generally a good idea to adjust OMP_NUM_THREADS so that all available cores are used. While this might happen automatically in your queuing system, you should check that appropriate values are used; Fleur reports the number of threads in its output to standard out. One might want to use
export OMP_NUM_THREADS=2
or something similar.
- juDFT: You can use the juDFT variable to set command line switches that do not require an additional argument. For example
export juDFT="-debugtime"
would run FLEUR with this command line switch.
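Both variables can be combined, e.g. in a job script. A minimal sketch (the thread count is a placeholder):

export OMP_NUM_THREADS=4    # OpenMP threads per process, adapt to your hardware
export juDFT="-debugtime"   # switches passed to Fleur without editing the command line
fleur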
Starting Fleur with MPI parallelization
While the OpenMP parallelization is controlled with the environment variable mentioned above or other external settings the degree of
MPI parallelization is controlled in the command line call to the MPI executable of Fleur. Typically fleur_MPI
is provided
as a command-line argument to the respective MPI executable mpirun
, mpiexec
, srun
, or similar. The degree of MPI
parallelization is also a command-line argument to the respective MPI executable. Please be aware that invoking such an executable with
the non-MPI-parallelized version of Fleur will yield Fleur errors. The distribution of the Fleur calculation onto different compute nodes
of a cluster is typically set by the respective variables in a Jobfile for the used queueing system.
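For instance, a hybrid MPI/OpenMP run might be launched as follows (a sketch; the rank and thread counts are placeholders to be adapted to your machine and job file):

export OMP_NUM_THREADS=6   # OpenMP threads per MPI rank
mpirun -np 4 fleur_MPI     # 4 MPI ranks; with srun: srun -n 4 fleur_MPI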
For a guide on how to choose good parallelization schemes, please have a look at the respective section.