Intro

We discussed the high-dimensional neural network potential (HDNNP) in detail in a previous post. In this post we will show how to compile the RuNNer code, which was developed by the creator of the HDNNP method, Prof. Jörg Behler.

You can email him (joerg.behler@uni-goettingen.de) to get the link to the GitLab page. Once there, you can download the software package and then upload it to your supercomputer.

In the following, I will use the supercomputer in our own lab to compile the parallel version of the RuNNer code.

Steps

First, we need to load some modules. All the modules I loaded are listed below:

  • intel/debugger/10.0.0
  • intel/dpl/2021.1.1
  • intel/compiler/latest
  • intel/compiler-rt/2021.1.1
  • intel/tbb/2021.1.1
  • intel/mkl/2021.1.1
  • intel/mpi/2021.1.1
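On a cluster that uses an environment-modules system (an assumption; module names and versions will differ between machines), the list above translates into a sequence of module load commands:

```shell
# Load the Intel oneAPI toolchain. The module names are taken from the
# list above; check `module avail intel` on your own cluster and adjust.
module load intel/compiler/latest
module load intel/compiler-rt/2021.1.1
module load intel/mpi/2021.1.1
module load intel/mkl/2021.1.1
module load intel/tbb/2021.1.1
module load intel/dpl/2021.1.1
module load intel/debugger/10.0.0
```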

Then we can cd into the source code folder "src-devel" and open the Makefile.

You need to modify the Makefile according to your needs; mine looks like this:

  • FC     = ifort
  • USE_MPI = yes # no (serial), yes (parallel)
  • MPIFC  = mpiifort # switch on for parallel
  • FFLAGS = -O3 -xHost -fp-model strict -132 -fopenmp -I/cluster/intel/oneapi/mkl/2021.1.1/include
  • FFLAGS_MPI = -O3 -xHost -fp-model strict -132 -fopenmp -I/cluster/intel/oneapi/mkl/2021.1.1/include # if everything is set up correctly, this links MKL properly; static linking would need to be checked separately
  • USE_C_FILES = yes # mandatory
  • CC = mpicc # icc (serial), mpicc (parallel)
  • CFLAGS = -O3 -xHost -fp-model strict
  • LIB = -L/cluster/intel/oneapi/mkl/2021.1.1/lib/intel64 -lmkl_gf_lp64 -lmkl_core -lmkl_intel_thread -lpthread -lm -ldl # mkl library will be used with the -mkl flag
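To switch between the serial and parallel builds without editing the Makefile by hand, the two variables that differ can be flipped with sed. This is only a sketch: it operates on a stand-in file (the name Makefile.demo is hypothetical), but the same substitutions apply to the real Makefile in "src-devel":

```shell
# Create a minimal stand-in Makefile fragment so the substitutions can be
# tried anywhere; on the cluster you would edit the real Makefile instead.
cat > Makefile.demo <<'EOF'
FC      = ifort
USE_MPI = no
MPIFC   = mpiifort
CC      = icc
EOF

# Flip the two variables that differ between the serial and parallel builds:
# USE_MPI goes from "no" to "yes", and CC goes from icc to mpicc.
sed -i -e 's/^USE_MPI[[:space:]]*=[[:space:]]*no/USE_MPI = yes/' \
       -e 's/^CC[[:space:]]*=[[:space:]]*icc/CC      = mpicc/' Makefile.demo

# Show the result of the substitutions.
grep -E '^(USE_MPI|CC)' Makefile.demo
```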

Some notes:

  • If you want to compile the parallel version, you need to set the FFLAGS_MPI variable
  • To use Intel MKL, you need to add the MKL include directory to FFLAGS(_MPI) and the lib/intel64 directory to LIB
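Before running make, it can save a failed build to check that the MKL paths referenced in FFLAGS(_MPI) and LIB actually exist. A small sketch of such a check (the fallback path is the one from my Makefile; the assumption that the mkl module exports MKLROOT holds on many, but not all, systems):

```shell
# Use $MKLROOT if the mkl module exported it, otherwise fall back to the
# path hard-coded in the Makefile above.
MKL_ROOT="${MKLROOT:-/cluster/intel/oneapi/mkl/2021.1.1}"

# Check both directories that the Makefile references.
for d in "$MKL_ROOT/include" "$MKL_ROOT/lib/intel64"; do
    if [ -d "$d" ]; then
        echo "found:   $d"
    else
        echo "missing: $d  (check your loaded modules and Makefile paths)"
    fi
done
```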

Results

Once you have finished modifying the Makefile, just run "make mpi" to compile. If everything is OK, you will see the compilation proceed through the source files without stopping.
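It is also handy to keep a log of the build output for inspection if something goes wrong. A sketch of that step (a stub Makefile stands in for the real one here so the commands are runnable anywhere; on the cluster you would simply run "make mpi" inside "src-devel"):

```shell
# Stub target standing in for RuNNer's real "mpi" target (an assumption:
# the real Makefile provides it, as the "make mpi" command above implies).
printf 'mpi:\n\t@echo "compiling parallel RuNNer..."\n' > Makefile.demo

# Run the parallel target and keep a copy of all output in build.log.
make -f Makefile.demo mpi 2>&1 | tee build.log
```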

If no errors appear, you have successfully compiled the parallel version of RuNNer. Congratulations!

A detailed introduction to using RuNNer will follow later, once I have used it extensively.

See you in the next post.

Best,

Zhengda