1. Obtaining and Installing Fleur on the RWTH cluster
1.1. Setting up the system
To start the tutorial, log in to your RWTH cluster account with something like
ssh -X yourID@login.hpc.itc.rwth-aachen.de
A typical account on the RWTH cluster is set up to use the Z shell (zsh). In this tutorial we deviate from this by using bash. To switch to bash you have to add
if [[ -o login ]]; then bash; exit; fi
to the end of the file '.zshrc' in your home directory. Of course, the rest of this tutorial can also be adapted to any other shell.
The RWTH cluster uses a module system to load certain software environments. The currently loaded modules can be listed by typing the command 'module list'; the available modules are shown with 'module avail'. With 'module load ...' and 'module unload ...' you can load or unload specific modules.
To make Fleur compilable and executable on the cluster we have to load a certain software environment consisting of several modules; other modules have to be unloaded. The simplest way to make sure that all needed modules are loaded on every startup is to add the respective commands to the '.bashrc' file in your home directory. We have to add the following lines:
module switch intel intel/19.0
module switch openmpi intelmpi/2018.3
module load cmake/3.10.1
module load LIBRARIES
module load hdf5
Please note that the hdf5 library is optional. It is an I/O library that Fleur uses to write more user-friendly output files. Without this library the user obtains a slightly different set of output files and has to be careful to always keep a consistent set of files.
For the usage of Fleur we have to add some more lines to '.bashrc':
The memory of a computer is typically arranged in a so-called heap and a so-called stack. The size of the stack is usually rather limited and often not sufficient for running Fleur. To overcome this issue we add the line
ulimit -s unlimited
One peculiarity of the RWTH cluster is that every program started from the login node with MPI (Message Passing Interface) parallelization is moved to a different cluster node. These nodes feature a different architecture, and since we typically compile for the architecture on which the compilation takes place, Fleur will not run on them. To avoid this problem one has to use mpirun with the option '-host localhost'. We will first encounter this problem when we run the tests to check whether the Fleur compilation was successful, since these tests are partially MPI parallelized. The exact call with which the MPI-parallelized tests are started can be overridden by defining the environment variable juDFT_MPI. For this we add the line
export juDFT_MPI='mpirun -np 2 -host localhost'
to '.bashrc'.
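Taken together, the additions to '.bashrc' described above look like this:

```shell
# Software environment for compiling and running Fleur
module switch intel intel/19.0
module switch openmpi intelmpi/2018.3
module load cmake/3.10.1
module load LIBRARIES
module load hdf5   # optional, enables the more user-friendly HDF5 output

# Remove the stack size limit, which is often too small for Fleur
ulimit -s unlimited

# Make the MPI-parallelized tests run on the login node itself
export juDFT_MPI='mpirun -np 2 -host localhost'
```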
1.2. Obtaining Fleur
Different versions of the Fleur code are available at different websites. Official releases can be downloaded from www.flapw.de. You can also use Fleur in a virtual machine (e.g. if you want to use it on a Windows computer) as part of the Quantum Mobile package, which also contains other freely available DFT codes. For this tutorial we use a more up-to-date version of Fleur, namely a snapshot of the current development version. For this we create a directory 'fleur' (or something similar) and clone the Fleur git repository into this directory with:
git clone https://iffgit.fz-juelich.de/fleur/fleur.git
In general, the latest state of the Fleur development (development version, open issues, ...) is directly available at the Fleur Gitlab server.
In your fleur directory you now find a new subdirectory 'fleur' containing the Git repository. The snapshot is stored in the 'stable' branch of the repository. To switch to this branch invoke:
cd fleur
git checkout stable
cd ..
1.3. Installing Fleur
The general documentation on the installation of Fleur can be found on the Installation of FLEUR pages. For our case the installation is described below.
We assume a directory structure in which the source files are found in a directory '.../fleur/fleur'. In this directory you find a script 'configure.sh'. Invoking this script with the adequate options will generate a build directory in the current working directory in which a Fleur version can be compiled. We want to have this build directory in the parent directory '.../fleur'. To see the available options we first invoke the script with the '-h' option:
./fleur/configure.sh -h
There are switches to specify the paths to certain libraries, as well as a switch to automatically download libraries that are not yet available on the system. Fortunately, we don't need these for the RWTH cluster. To generate a build directory, the configure script has to be invoked with a specified machine. If you want to install Fleur on a notebook with a gfortran compiler of version > 6.3, you can use AUTO as the machine; the script then searches for compilers and libraries itself. For the RWTH cluster there is already a predefined machine specification that works: CLAIX. Invoke the script with
./fleur/configure.sh CLAIX
to finally obtain the build directory. You also get output informing you about which libraries have been found. Some libraries are mandatory; others are optional but enhance the capabilities of the Fleur code. If everything works, you are advised to change to the build directory and invoke the make command to compile Fleur.
'make' can be invoked either serially or in parallel (with '-j'). We don't want to block the whole login node by building Fleur, so we invoke either plain 'make' or at most 'make -j2' to allow a two-fold parallelization.
If everything compiles, you should now have three executables in the build directory: 'fleur', 'fleur_MPI', and 'inpgen'. 'fleur' and 'fleur_MPI' are two versions of Fleur with different degrees of parallelization: 'fleur' only uses an OpenMP parallelization, while 'fleur_MPI' additionally features an MPI parallelization. Calculations on complex structures have to be distributed over several nodes of a computing cluster, which requires the MPI parallelization. For most calculations we will not need this, but we can use 'fleur_MPI' anyway. 'inpgen' is the Fleur input generator. It converts simple text files describing the structure of a unit cell into a Fleur input file consisting of many parameters that are initially set to default values.
We check if the code works as expected by invoking the tests with:
ctest
2. A first calculation
For the first calculation we choose a perfect Si crystal in diamond structure as example system. The inpgen input for such a system is:
Si bulk
&lattice latsys='cF', a0=1.8897269, a=5.43 /
2
14 0.125 0.125 0.125
14 -0.125 -0.125 -0.125
The first line of this input is just a comment line. The '&lattice' line defines the lattice: 'latsys='cF'' specifies a face-centered cubic lattice and 'a=5.43' the lattice constant. inpgen and Fleur in general assume atomic units, but in this case we provide the lattice constant in Angstrom. Therefore we need a conversion factor from Angstrom to atomic units, specified by 'a0=1.8897269'.
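As a quick sanity check, the unit conversion can be reproduced on the command line (1.8897269 is the number of bohr per Angstrom):

```shell
# Convert the lattice constant a = 5.43 Angstrom to atomic units (bohr)
awk 'BEGIN { printf "%.4f\n", 5.43 * 1.8897269 }'
# prints 10.2612
```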
The next line contains the number of atoms in the unit cell. It is followed by a list containing for each atom the atomic number (14 for Si) and the relative position in the unit cell.
A documentation of the general layout of inpgen inputs is provided at the respective page.
Create a directory for this calculation and in it a text file 'inpSi.txt' (or similar) with the content above. To generate the Fleur input invoke in the new directory:
pathToInpgen/inpgen < inpSi.txt
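If you prefer to script this step, the input file can be written with a here-document. This is only a sketch: it is run inside the newly created calculation directory, and the inpgen call is shown commented out because the path depends on your build directory.

```shell
# Inside the calculation directory, write the inpgen input file
cat > inpSi.txt <<'EOF'
Si bulk
&lattice latsys='cF', a0=1.8897269, a=5.43 /
2
14 0.125 0.125 0.125
14 -0.125 -0.125 -0.125
EOF

# Then generate the Fleur input:
# pathToInpgen/inpgen < inpSi.txt
```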
Several files are created. Among these, the input files for the Fleur calculation are 'inp.xml', containing the full parametrization of the calculation, and 'sym.out', containing a list of the symmetry operations present in the crystal. The file 'struct.xsf' is an XCrySDen structure file that can be used to visualize the unit cell. 'out' is the general text output of inpgen (or of fleur, after fleur has been run in the directory). If something went wrong in the generation of the Fleur input, it is a good idea to have a look in that file to see whether there is a hint at what went wrong. 'FleurInputSchema.xsd' is not relevant to the user; it is a general specification of the inp.xml file format in terms of an XML Schema definition.
Have a closer look at 'inp.xml'. We will discuss the contents.
Next we invoke fleur in the directory with the 'inp.xml' and 'sym.out' files:
pathToFleur/fleur
In practice, DFT is implemented as an iterative algorithm that starts with a first guess for the electron density and ends after several iterations with a self-consistent density. In Fleur, by default, up to 9 iterations of the self-consistency loop are performed (this can be changed in inp.xml). You can observe the development of the distance between the input and output densities of each iteration in the terminal output. Alternatively, this can also be obtained after the calculation by invoking 'grep dist out' to find the respective entries in the generated out file. Nine iterations are not yet enough to obtain a self-consistent result for the example system we use here, so we start Fleur again to let it run for a few more iterations.
The output of the fleur calculation is available in the 'out' file and also in the 'out.xml' file. We will discuss the contents.