The official website for JDFTx is: https://jdftx.org/

Packages that you need:

We need to compile: cmake, GSL, FFTW3, OpenMPI (both CPU and GPU versions, built with different compilers), and finally JDFTx itself (both CPU and GPU versions).

Compile cmake

The procedure is quite straightforward:

tar -zxvf cmake-3.23.1.tar.gz # 3.23.1 could be different for your case
cd cmake-3.23.1
./bootstrap --prefix=/your_folder/cmake-3.23.1 -- -DCMAKE_USE_OPENSSL=OFF
make -j8 all
make install

Here -DCMAKE_USE_OPENSSL=OFF means we build cmake without OpenSSL support; if you do want it, OpenSSL is available at: https://www.openssl.org/

After the installation, you need to add /your_folder/cmake-3.23.1/bin to your PATH environment variable:

Add export PATH=$PATH:/your_folder/cmake-3.23.1/bin to ~/.bashrc, then run source ~/.bashrc.
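
For example (a minimal sketch, assuming the install prefix above):

echo 'export PATH=$PATH:/your_folder/cmake-3.23.1/bin' >> ~/.bashrc
source ~/.bashrc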

You can check whether your installation is correct with which cmake (it should point into the new prefix) and cmake -h.

Compile GSL

The compilation of GSL is similar to cmake (in fact, the installation of all these packages is similar, just with different settings for configure).

The procedures for compiling GSL are:

module load compiler/gcc/10.2

tar -zxvf gsl-latest.tar.gz
cd gsl-2.7.1/
./autogen.sh
./configure --prefix=/home/z.he/JDFTX/gsl-2.7.1
make -j8 all
make install

We need to load the module for the GCC compiler; you can use which gcc to check which compiler is currently in use.
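
For example, after loading the module:

which gcc        # should point to the module's gcc, not /usr/bin/gcc
gcc --version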

Compile FFTW3

The procedure for compiling FFTW3 is given below:

module load compiler/gcc/10.2 intel/compiler/latest intel/mkl/latest mpi/openmpi/4.0.5-gcc-10.2.0

tar -zxvf fftw-3.3.10.tar.gz
cd fftw-3.3.10/
./configure CC=icc CXX=icpc --prefix=/home/z.he/JDFTX/fftw-3.3.10 --enable-threads --enable-openmp
make -j8 CFLAGS=-fPIC all
make install

In the configure step, --enable-threads and --enable-openmp are important: they generate the *_threads.(l)a and *_omp.(l)a libraries, which need to be linked when compiling JDFTx.
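
After make install, a quick way to confirm that the threaded libraries were actually built (assuming the prefix used above):

ls /home/z.he/JDFTX/fftw-3.3.10/lib
# expect libfftw3.a, libfftw3_omp.a and libfftw3_threads.a (plus the .la files) in the listing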

Compile OpenMPI

OpenMPI is important for utilizing the multi-core nodes and multi-GPU architectures of modern supercomputers. I compiled it myself; if you already have a good version of OpenMPI (or another CUDA-aware MPI), you can load that instead. The compilation of the GPU version of OpenMPI is already described in my previous note about compiling the GPU version of Quantum ESPRESSO. (link) In the following we show how to compile the CPU version.

The procedures are:

module load compiler/gcc/10.2

tar -zxvf openmpi-4.1.3.tar.gz
cd openmpi-4.1.3/
./configure --prefix=/your_location/openmpi-4.1.3/ FCFLAGS=-fPIC
make -j8 all
make install

Then add /your_location/openmpi-4.1.3/lib to LD_LIBRARY_PATH and /your_location/openmpi-4.1.3/bin to PATH (using export, as in the cmake example above).
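
For example (again assuming the prefix above):

export PATH=$PATH:/your_location/openmpi-4.1.3/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your_location/openmpi-4.1.3/lib
which mpirun      # should point into the new prefix
mpirun --version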

Now we have: (1) cmake, (2) GSL, (3) FFTW3 and (4) OpenMPI. Everything is ready, so let's compile the CPU and GPU versions of JDFTx.

Compile JDFTx

Before we do the compilation, there are a few syntax errors in the source code that need to be patched.


For ~/jdftx-1.7.0/jdftx/core/Util.cpp

  • Change 1:

Replace

logPrintf("%s %s" PACKAGE_NAME " " VERSION_MAJOR_MINOR_PATCH " %s %s\n",                                                                                     
         deco.c_str(), prefix.c_str(), (strlen(VERSION_HASH) ? "(git hash " VERSION_HASH ")" : ""), deco.c_str());   

to

logPrintf("%s %s", PACKAGE_NAME, " ", VERSION_MAJOR_MINOR_PATCH, " %s %s\n", 
deco.c_str(), prefix.c_str(), (strlen(VERSION_HASH) ? "(git hash " 
VERSION_HASH ")" : ""), deco.c_str());
  • Change 2:

Add

#define VERSION_MAJOR_MINOR_PATCH "1.7.0"
#define VERSION_HASH ""

to the top of the file. (I know the real issue is that config.h is not being found; this is an ugly but effective workaround.)


For /home/z.he/JDFTX/jdftx-1.7.0/jdftx/electronic/DumpQMC.cpp

  • Change 1:

Add:

#define VERSION_MAJOR_MINOR_PATCH "1.7.0"
#define VERSION_HASH ""
  • Change 2:

Replace (break up the string-literal concatenation around the macros; I don't know whether this is a common bug or just my setup):

ofs <<                                                                                                                                                       
         "START HEADER\n"                                                                                                                                         
         " CASINO Blip external potential exported by " PACKAGE_NAME " " VERSION_MAJOR_MINOR_PATCH "\n"

to

ofs <<
	"START HEADER\n"
	" CASINO Blip external potential exported by " << PACKAGE_NAME << " " << VERSION_MAJOR_MINOR_PATCH << "\n"

(Note the << between the items rather than commas: in an ostream chain the comma operator compiles, but silently discards everything after the first comma.)
  • Change 3:

With the << form above, the rest of the stream statement needs no further changes; lines such as

"    " << gInfo.S[0] << " " << gInfo.S[1] << " " << gInfo.S[2] << "\n"

already use << and can stay as they are.

CPU version

Before compiling the code, check that you have the correct version of MPI loaded (in this case, the CPU version of OpenMPI).
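
A quick way to see which MPI is active and how it was built (ompi_info ships with OpenMPI):

which mpirun
ompi_info | grep -i 'configure command'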

The procedures are given below:

module load compiler/gcc/10.2 intel/mkl/latest

tar -zxvf jdftx-1.7.0.tar.gz
mkdir build
cd build
CC=gcc CXX=g++ cmake \
    -D EnableProfiling=yes \
    -D GSL_PATH=/your_location/gsl-2.7.1 \
    -D EnableMKL=yes \
    -D MKL_PATH=/your_location/mkl \
    -D ForceFFTW=yes \
    -D FFTW3_PATH=/your_location/fftw-3.3.10/lib \
    ../jdftx-1.7.0/jdftx
make -j8 all

At this step, you should already have the jdftx, phonon and wannier executables. Then check the cmake_install.cmake file to make sure that set(CMAKE_INSTALL_PREFIX "/your_jdftx_build_loc") points to the right place.

Then execute make install.
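
As a quick smoke test (assuming the install prefix above; jdftx should print its usage when given -h):

ls /your_jdftx_build_loc/bin        # should list jdftx, phonon and wannier
/your_jdftx_build_loc/bin/jdftx -h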

The submission script for CPU version is:

#!/bin/bash
#SBATCH --job-name="tmp"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=stderr
#SBATCH --partition=tmp
##SBATCH --account=tmp
#SBATCH --nodes=2
##SBATCH --ntasks-per-node=48
##SBATCH --gres=gpu:4
#SBATCH --time=1-00:00:00
#export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ml compiler/gcc/10.2 intel/mkl/latest

EXEC_MPIRUN=/your_location/openmpi-4.1.3-gcc/bin/mpirun # CPU version of MPI
EXEC_JDFTX=/your_build/build/bin/jdftx

$EXEC_MPIRUN -n 96 $EXEC_JDFTX -i jdftx.in -o jdftx.out

For the water example from the JDFTx website, the parallelization can be checked via the output file (shown below):

Output file for JDFTx (CPU version)

GPU version of JDFTx

Before compilation, check whether your MPI was built with the NVIDIA compilers (pgcc/nvcc and pgf90/nvfortran) and is CUDA-aware.
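
For OpenMPI, CUDA-awareness can be verified directly; this query comes from the OpenMPI documentation:

ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
# expect: mca:mpi:base:param:mpi_built_with_cuda_support:value:true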

We have 4 V100 GPUs per node, so the compute capability is 7.0.
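
If you are unsure of the compute capability of your GPUs, recent nvidia-smi versions can query it directly (on older drivers, look your card up at https://developer.nvidia.com/cuda-gpus instead):

nvidia-smi --query-gpu=name,compute_cap --format=csv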

The procedures are:

module load nvidia/cuda/11.2 nvidia/hpc-sdk/21.2

tar -zxvf jdftx-1.7.0.tar.gz
mkdir build
cd build
CC=gcc CXX=g++ cmake \
    -D EnableCUDA=yes \
    -D EnableCuSolver=yes \
    -D CudaAwareMPI=yes \
    -D CUDA_ARCH=compute_70 \
    -D CUDA_CODE=sm_70 \
    -D EnableProfiling=yes \
    -D GSL_PATH=/your_location/gsl-2.7.1 \
    -D EnableMKL=yes \
    -D MKL_PATH=/your_location/mkl \
    -D ForceFFTW=yes \
    -D FFTW3_PATH=/your_location/fftw-3.3.10/lib \
    ../jdftx-1.7.0/jdftx
make -j8 all

At this step, you should already have the jdftx(_gpu), phonon(_gpu) and wannier(_gpu) executables. Then check the cmake_install.cmake file to make sure that set(CMAKE_INSTALL_PREFIX "/your_jdftx_build_loc") points to the right place.

Then execute make install.

The submission script for GPU version is:

#!/bin/bash
#SBATCH --job-name="tmp"
#SBATCH --get-user-env
#SBATCH --output=_scheduler-stdout.txt
#SBATCH --error=stderr
#SBATCH --partition=tmp
##SBATCH --account=tmp
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
#SBATCH --time=1-00:00:00
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

ml compiler/gcc/10.2 nvidia/cuda/11.2 nvidia/hpc-sdk/21.2

EXEC_MPIRUN=/your_location/openmpi-4.1.3/bin/mpirun # GPU version MPI (CUDA-Aware)
EXEC_JDFTX=/your_location/bin/jdftx_gpu

$EXEC_MPIRUN -n 4 $EXEC_JDFTX -i jdftx.in -o jdftx.out

The output file shows:

Output file for JDFTx (GPU version)

which shows that JDFTx is able to utilize all 4 GPUs.
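
Independently of the log file, you can also run nvidia-smi on the compute node while the job is active; all four GPUs should show a jdftx_gpu process:

nvidia-smi
# or refresh every 2 seconds:
watch -n 2 nvidia-smi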

I hope this post helps you compile the code. If you run into any issues, please send me an email (zhengdahe.electrocatalysis@gmail.com).