Installation

Installation of pre-compiled binaries

After registration at the Mrcc homepage, pre-built binaries are available in the download area. These binaries were compiled with the Intel compiler (and Intel MKL) version 2021.10.0 and should utilize the optimal instruction sets (e.g., AVX-512) of modern CPUs. If your CPU is not correctly identified, the MKL_ENABLE_INSTRUCTIONS environment variable enables you to select an architecture-specific code path of your choice for the Intel MKL routines. To install these executables, a Linux operating system and version 2.27 or later of the GNU C Library (glibc) are required. To use the MPI-parallel executables, Intel MPI 2021 also has to be installed. The precompiled binaries are linked against the Intel MPI 2021.10 library, and it is highly recommended to use this particular Intel MPI version with the precompiled binaries. It is also strongly suggested to install the newest stable version of Libfabric (1.9.0 or later) from the https://github.com/ofiwg/libfabric repository, as some of the previous versions provided via the Intel MPI package could cause irregular runtime behavior. The binaries are provided in a gzipped tar file, mrcc.YYYY-MM-DD.binary.tar.gz, where YYYY-MM-DD is the release date of the program. Note that you will find several program versions on the homepage. Unless there are overriding reasons not to do so, please always download the latest version. To unpack the file, type

tar xvzf mrcc*.binary.tar.gz

Please do not forget to add the name of the directory where the executables are placed to your PATH environment variable.
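The steps above can be sketched as a short shell session; the install directory $HOME/mrcc and the AVX512 value are hypothetical examples, so adjust them to your system:

```shell
# Hedged sketch: unpack the binaries and set up the environment.
# The install directory $HOME/mrcc is a hypothetical example.
MRCC_DIR=$HOME/mrcc
mkdir -p "$MRCC_DIR"
# Run on the real archive downloaded from the homepage:
# tar xvzf mrcc.YYYY-MM-DD.binary.tar.gz -C "$MRCC_DIR"
export PATH=$MRCC_DIR:$PATH
# Only needed if MKL misdetects the CPU; AVX512 is one possible value.
export MKL_ENABLE_INSTRUCTIONS=AVX512
```

Adding these export lines to your shell startup file (e.g., ~/.bashrc) makes the settings persistent across sessions.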

Installation from source code

To install Mrcc from source code, a Unix-like operating system, Fortran 90 and C compilers, as well as BLAS (basic linear algebra subprograms) and LAPACK (linear algebra package) libraries are required. The tested BLAS and LAPACK implementations are Intel MKL (oneMKL), AOCL-BLIS, and AOCL-libFLAME. Optionally, Mrcc can also be linked with the Libxc library of density functionals [85, 56] and the PCMSolver library for continuum solvation [17, 1]. For an MPI-parallel build, a working MPI installation is also required. Please make sure that the directories where the compilers are located are included in your PATH environment variable. Please also check your LD_LIBRARY_PATH environment variable, which must include the directories containing the BLAS and LAPACK and, if linked against, the Libxc, PCMSolver, and MPI libraries.
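For instance, with an Intel oneAPI installation the environment could be prepared as follows; the /opt/intel/oneapi prefix and the subdirectory layout are assumptions that vary between oneAPI versions:

```shell
# Hedged sketch: assuming Intel oneAPI under /opt/intel/oneapi (the layout
# varies between oneAPI versions; sourcing its setvars.sh script achieves
# the same effect in one step).
ONEAPI=/opt/intel/oneapi
export PATH=$ONEAPI/compiler/latest/bin:$PATH
export LD_LIBRARY_PATH=$ONEAPI/mkl/latest/lib/intel64:${LD_LIBRARY_PATH:-}
# If linking against Libxc, PCMSolver, or MPI, append their lib directories
# to LD_LIBRARY_PATH in the same way.
```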

After registration at the Mrcc homepage, the program can be downloaded as a gzipped tar file, mrcc.YYYY-MM-DD.tar.gz, where YYYY-MM-DD is the release date of the program. Note that you will find several program versions on the homepage. Unless there are overriding reasons not to do so, please always download the latest version. To unpack the file, type

tar xvzf mrcc*.tar.gz

To install Mrcc, add the current working directory to your PATH (e.g., export PATH=.:$PATH in bash) and run the build.mrcc script as

build.mrcc [<compiler>] [-i<option1>] [-p<option2>] [-g] [-d] [-s] [-f<folder>] [-l<library>] [-L<lspath>] [-nomkl]

<compiler> specifies the compiler to be used. The options are listed below. Note that currently only the Intel compiler system is supported. The freely available Intel oneAPI compilers (with the Base and HPC toolkits) are recommended if an existing Intel compiler installation is not available. Previously, the other listed compiler systems were also used to generate working binaries.

Intel      Intel compiler
GNU        GNU compiler (g77 or gfortran)
PGF        Portland Group Fortran compiler
G95        G95 Fortran 95 compiler
PATH       Pathscale compiler
HP         HP Fortran Compiler
DEC        Compaq Fortran Compiler (DEC machines)
XLF        XL Fortran Compiler (IBM machines)
Solaris10  Sun Solaris10 and Studio10 Fortran Compiler (AMD64)

If the build.mrcc script is invoked without specifying the <compiler> variable, a help message is displayed.

Optional arguments:

-i

specifies whether 32- or 64-bit integer variables are used. Accordingly, <option1> can take the value of 32 or 64. The 32-bit integer version is no longer supported.
Default: 64 for 64-bit machines, 32 otherwise.

-p

generates parallel code using the message passing interface (MPI) or open multi-processing (OpenMP) technologies. Accordingly, <option2> can take the MPI[=<MPI implementation>] or OMP values. The OpenMP and MPI parallelizations have been tested with the Intel compiler and the Intel MPI and Open MPI implementations. If -pMPI is specified, the library given by <MPI implementation> will be linked to Mrcc. The default value of <MPI implementation> is IntelMPI with the Intel compiler and OpenMPI for other compilers. Please note that currently the two parallelization schemes can only be combined for the scf, mrcc, and ccsd programs; other executables will be compiled with only OpenMP parallelization even if -pOMP and -pMPI are both set.
Default: no parallelization.

-g

the source code is compiled with debugging options (use this for development purposes).
Default: no debugging option.

-d

the source code is compiled for development; no optimization is performed (use this for development purposes).
Default: codes are compiled with the highest optimization level.

-f

specifies the installation folder. Executables, basis set libraries, and test jobs will be copied to directory <folder>. If this flag is not used, you will find the executables, etc. in the directory where you perform the installation.

-l

links Mrcc with an external library. The libxc and pcm options for <library> require the installation of the Libxc and PCMSolver libraries, respectively; see the notes below. Other values of -l<library> are passed to the linker unchanged.

-L

Adds <lspath> to the library search path while linking.

-nomkl

turns off linking with the Intel MKL library when using the Intel compiler. If this option is used, the BLAS and LAPACK libraries must be specified manually using -l; see the examples below. It is recommended to use the Intel MKL library for best performance.

-s

Mrcc is linked statically. Non-MPI-parallel executables are linked entirely statically, while for MPI-parallel executables only the Intel-provided libraries are linked statically.
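As an illustration of how these flags combine, consider the hypothetical command line below; the install folder /prog/mrcc is an example, and the real command must be run from the unpacked Mrcc source directory:

```shell
# Hypothetical combination of the flags above: 64-bit integers, OpenMP
# parallelization, install to /prog/mrcc, static linking. Stored in a
# variable here only for illustration; run it inside the Mrcc source tree.
BUILD_CMD="build.mrcc Intel -i64 -pOMP -f/prog/mrcc -s"
echo "$BUILD_CMD"
```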

Notes:

  1.

    After the installation please do not forget to add the directory where the Mrcc executables are located to your PATH environment variable. This is the <folder> directory if you used the -f flag at the installation, otherwise the directory where you executed the build.mrcc script.

  2.

    The build.mrcc script has been tested on several platforms with several versions of the supported compilers and libraries. Nevertheless, you may need to customize the compiler flags, names of libraries, etc. These settings can be found in the build.mrcc.config file; please edit this file if necessary. Please do not change build.mrcc.

  3.

    To ensure the best performance of the software, the use of the Intel compiler is recommended. The current release of Mrcc has been tested with the 2021.10 version of the Intel compiler (freely available from Intel).

  4.

    If you use Mrcc together with Molpro, you can also use the Molpro installer to install Mrcc; please follow the instructions in the Molpro manual (www.molpro.net).

  5.

    If Mrcc is linked with the Libxc library, Libxc must be installed before starting the installation of Mrcc. Note that you must compile Libxc with the same Fortran compiler as used for the installation of Mrcc. When installing Libxc, it is recommended to set the installation path of Libxc (--prefix=<install dir> option of configure) to the directory where the installation of Mrcc is carried out; otherwise, please set the LIBS_LIBXC environment variable to the installation path of Libxc (i.e., export LIBS_LIBXC=<install dir> in bash, where <install dir> contains lib/libxcf03.a and lib/libxc.a) before running build.mrcc. See the manual of the Libxc project for details [56], as well as the examples below. The current release of Mrcc has been tested with the 6.2.2 version of Libxc.

  6.

    If Mrcc is linked with the PCMSolver library, PCMSolver must be installed before starting the installation of Mrcc. When installing PCMSolver, it is recommended to set the installation path of PCMSolver (--prefix=<install dir> option of setup.py) to the directory where the installation of Mrcc is carried out; otherwise, please set the LIBS_PCM environment variable to the installation path of PCMSolver (i.e., export LIBS_PCM=<install dir> in bash, where <install dir> contains lib/libpcm.so, or lib/libpcm.a in the case of static linking) before running build.mrcc. See the manual of the PCMSolver project for details [1], as well as the examples below. The current release of Mrcc has been tested with the 1.3.0 version of PCMSolver.

  7.

    If the program is compiled for multi-node parallel execution, MPI-parallel executables (*_mpi) are generated. The compilation is performed by mpiifort when <MPI implementation> = IntelMPI is set; otherwise, the mpifort compiler wrapper is used.

  8.

    For the compilation of MPI-parallel executables, a working MPI installation is necessary. Currently, the Open MPI version 4 and Intel MPI (2017 or later) implementations are supported. Open MPI has to be patched with commits 07830d0 and 51acbf7 from https://github.com/open-mpi/ompi. Please consult Sect. 9.3 for additional MPI settings required at runtime.
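The LIBS_LIBXC and LIBS_PCM settings described in notes 5 and 6 amount to two environment variables read by build.mrcc; a minimal sketch, where the /prog/libxc and /prog/pcmsolver prefixes are hypothetical examples:

```shell
# Hedged sketch: hypothetical prefixes for separately installed libraries,
# set before running build.mrcc with -llibxc and/or -lpcm.
export LIBS_LIBXC=/prog/libxc        # must contain lib/libxcf03.a and lib/libxc.a
export LIBS_PCM=/prog/pcmsolver      # must contain lib/libpcm.so (or lib/libpcm.a)
```

These variables are unnecessary if both libraries were installed with --prefix set to the Mrcc build directory, as done in example 3 below.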

Examples:

  1.

    Compile Mrcc for OpenMP parallel execution with the Intel compiler (recommended):
    build.mrcc Intel -pOMP

  2.

    Compile Mrcc for OpenMP parallel execution with the Intel compiler and install it to the /prog/mrcc directory (recommended):
    build.mrcc Intel -pOMP -f/prog/mrcc

  3.

    Compile Mrcc for OpenMP and combined OpenMP-MPI parallel execution with the Intel compiler and Intel MPI, and link with the Libxc and PCMSolver libraries, assuming that Mrcc is compiled in the /prog/mrcc directory. This enables all features of Mrcc and is highly recommended.

    (a)

      Installation of the Libxc library:
      Download the Libxc library (libxc*.tar.gz) from the homepage of the Libxc project [56].
      tar xvzf libxc*.tar.gz
      cd libxc*
      ./configure --prefix=/prog/mrcc/ FC=ifort --enable-kxc
      make
      make check
      make install
      cd /prog/mrcc

    (b)

      Installation of the PCMSolver library:
      curl -L https://github.com/PCMSolver/pcmsolver/archive/v1.3.0.tar.gz | tar -xz
      cd pcmsolver-1.3.0
      Replace -openmp by -qopenmp in file cmake/downloaded/autocmake_omp.cmake
      cp cmake/custom/compilers/Intel/C.make cmake/custom/compilers/Intel/C.cmake
      ./setup.py --cxx=icpc --cc=icc --fc=ifort --int64 --omp --prefix=/prog/mrcc/
      cd build
      make
      make install
      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/prog/mrcc/lib
      cd /prog/mrcc
      cp pcmsolver-1.3.0/api/pcmsolver.f90 .

    (c)

      Compiling and linking Mrcc:
      build.mrcc Intel -pOMP -pMPI=IntelMPI -llibxc -lpcm

  4.

    Compile Mrcc for serial execution with the Intel compiler:
    build.mrcc Intel

  5.

    Compile Mrcc for parallel execution using MPI environment with the Intel compiler for 32-bit machines:
    build.mrcc Intel -i32 -pMPI

  6.

    Compile Mrcc with the Intel compiler for parallel execution using OpenMP and MPI parallelization through the Open MPI library:
    build.mrcc Intel -pOMP -pMPI=OpenMPI

  7.

    Compile Mrcc with the Intel compiler using BLAS and LAPACK libraries other than Intel MKL installed in a standard directory:
    build.mrcc Intel -nomkl -lblas -llapack

  8.

    Compile Mrcc with the Intel compiler using BLAS and LAPACK libraries other than Intel MKL, where the shared object files libblas.so and liblapack.so can be found in <path-to-blas> and <path-to-lapack>, respectively:
    build.mrcc Intel -nomkl -L<path-to-lapack> -llapack -L<path-to-blas> -lblas

Installation under Windows

Under the Windows operating system, the pre-built binaries cannot be directly executed, and the direct compilation of the source code has not been attempted so far. For Windows users, we recommend the use of virtualization software, such as VirtualBox, which allows running Linux as a guest operating system. In that environment, Mrcc can be installed in the usual way as described in the previous subsections.