If you encounter problems during the execution of MRCC, please attach the output together with an adequate description of your case, as well as the following (a few commands for collecting the version details are sketched below):
  • the way mrcc was invoked
  • the way build.mrcc was invoked
  • the output of build.mrcc
  • compiler version (for example: ifort -V, gfortran -v)
  • BLAS/LAPACK versions
  • gcc and glibc versions

This information really helps us during troubleshooting :)
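
A minimal sketch of commands for collecting these details, assuming the Intel and GNU toolchains mentioned above (use whichever compiler you actually built MRCC with):
Code:
# Compiler versions
ifort -V 2>&1 | head -n 2
gfortran -v 2>&1 | tail -n 1
gcc --version | head -n 1
# glibc version
ldd --version | head -n 1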

"Fatal error in exec scf" when running DF-HF in OpenMP parallelization

3 years 3 months ago - 3 years 3 months ago #1140 by benshi97
Dear MRCC forum,

I am getting "Fatal error in exec scf" when I try to run a calculation with OpenMP parallelization. I give the example MINP file below. This has been tested with both the provided binary and MRCC compiled from source with Intel compilers (see the module list provided below), and I get the error in both cases. When I run in serial or with MPI, the calculation works without a problem; it only fails when I run with OpenMP. Also, the calculations seem to work with the def2-SVP basis set. I would like to be able to utilise OpenMP parallelization since these calculations feed into LNO-CCSD(T) calculations, which only work with OpenMP parallelization.

MINP file:
Code:
basis=def2-tzvpp
calc=DF-HF
verbosity=3
mem=180GB
symm=off
unit=angs
geom=xyz
63

O 0.41149866 -3.54971760 -5.02622591
O -4.10245668 0.43920180 -3.30149767
O 2.12678527 1.18836288 -0.15965552
O -0.46521979 -2.62434118 3.46453371
Ti 2.75342486 1.91656851 4.23758392
Ti -3.65466742 1.50079535 -1.90899086
Ti -2.36333933 -2.24479374 -0.57161278
O 2.44176490 -2.29429454 3.24874923
O 2.11698590 0.15097409 -3.32256471
Ti 0.34878308 0.32307484 -3.70034396
Ti 0.10500208 1.15109776 4.95331706
O -1.98558870 -0.36162157 2.62105580
O -3.31599212 -2.32985355 0.97447693
Ti -1.16532671 2.31237542 2.16012229
Ti -0.67943339 -2.09742586 -4.69158776
Ti -1.58301716 -1.35931877 4.12120770
Ti 0.92328641 -2.59757049 2.30183120
Ti 3.75691297 3.14417612 1.93273894
Ti -0.47855835 2.48785705 -2.16106337
Ti 0.58820717 -0.07095477 -0.06290282
Ti -3.09862075 -1.10786150 -3.56012194
Ti 2.92119664 -1.05493993 -2.29413979
Ti 1.47020095 -3.46470046 -3.53593352
Ti 0.36694026 -3.11335456 -0.87882217
Ti 1.18155092 2.95536183 0.61706722
Ti -3.58861400 -1.18572230 2.33821198
Ti 3.87694090 0.66419537 0.11885077
Ti 2.71158588 -0.54834276 2.88761082
O -2.87836838 2.51420049 1.58563851
O -3.32055239 -1.90117824 3.99497339
O -1.46593142 -0.50256243 -3.39548001
O 2.60964489 4.06679390 0.90064513
O 4.74744767 2.12337585 0.78994506
O 0.30248915 -2.16944272 -2.99132259
O 1.50405956 2.18210780 5.56580142
O -3.07568820 0.49465007 -0.47332856
O 1.19677239 0.42546501 3.79545646
O -0.16942849 1.33133343 0.93984884
O -4.85586149 2.17245617 -0.71133418
O -0.03692702 0.78687025 -1.72673525
O 3.91367760 3.26907195 3.75176176
O 3.67000008 0.38434556 4.17335802
O -0.94676937 -0.30513420 5.44434993
O 0.52728895 3.44122604 -0.99656506
O -3.37820240 -2.04505324 -2.05174942
O 3.83027790 -0.50394052 1.45750002
O -0.74308697 -1.38035106 -0.30408063
O 2.41548687 1.96034626 2.30264135
O 1.18631986 -4.32406572 -1.98243106
O 0.04713925 2.09431512 -3.85309076
O 0.18060146 -0.64399198 -5.27450139
O -0.08346750 3.66574656 1.74981756
O 4.28996787 -0.23496556 -1.41233080
O 3.03946367 -2.55781136 -3.32600100
O -1.00413012 2.14979224 3.95123768
O 1.72479135 -1.44874777 -0.94165136
O -1.38074405 -3.68975095 -0.91385063
O -2.28204184 2.65568891 -2.15992444
O 0.86711976 -3.59287865 0.79084493
O 1.11351918 -0.92452821 1.63720098
Ti -4.05141286 1.43328133 0.74981270
O -2.50590495 -2.22680752 -4.89877439
O -4.72244110 0.16399352 1.87324268
 

The SLURM submission script used was:
Code:
#!/bin/bash
#SBATCH -J tio2
#SBATCH -A T2-CS146-CPU
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --time=2:00:00
#SBATCH --mail-type=NONE
#SBATCH --cpus-per-task=28
#SBATCH -p cclake

numnodes=$SLURM_JOB_NUM_NODES
numtasks=$SLURM_NTASKS

. /etc/profile.d/modules.sh
module purge
module load rhel7/default-peta4
module unload intel/impi/2017.4/intel

export PATH="/home/bxs21/Programs/mrcc/binary:$PATH"
export OMP_NUM_THREADS=2
export MKL_NUM_THREADS=2
export OMP_PLACES=cores
export OMP_PROC_BIND=spread,close

/home/bxs21/Programs/mrcc/binary/dmrcc > mrcc.out
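
For reference, the same OpenMP setup can also be exercised outside SLURM with a stripped-down interactive run. This is only a sketch: the binary path and thread settings are taken from the script above, and the calculation directory is a placeholder that must contain the MINP file.
Code:
# Minimal interactive OpenMP run (sketch); binary path as in the SLURM script above
export PATH="/home/bxs21/Programs/mrcc/binary:$PATH"
export OMP_NUM_THREADS=2
export MKL_NUM_THREADS=2
cd /path/to/calculation/directory   # placeholder: directory containing MINP
dmrcc > mrcc.out 2>&1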


MRCC was compiled with the following modules loaded: 
Code:
 1) dot
 2) slurm
 3) turbovnc/2.0.1
 4) vgl/2.5.1/64
 5) singularity/current
 6) rhel7/global
 7) intel/compilers/2017.4
 8) intel/mkl/2017.4
 9) intel/impi/2017.4/intel
10) intel/libs/idb/2017.4
11) intel/libs/tbb/2017.4
12) intel/libs/ipp/2017.4
13) intel/libs/daal/2017.4
14) intel/bundles/complib/2017.4
15) cmake/latest
16) rhel7/default-peta4
17) python/3.6


3 years 3 months ago #1141 by benshi97
I'm not sure whether this is the issue causing the problem, but the clusters I am using all have glibc version 2.17, and it seems that version 2.23 or higher is recommended.
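
A quick way to confirm the glibc version on a cluster node (standard commands, shown here only as an illustration):
Code:
# Either command reports the installed glibc version
getconf GNU_LIBC_VERSION
ldd --version | head -n 1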


3 years 3 months ago #1142 by kallay
It is probably caused by a compiler bug.
You will find a patch for the source file dfint.f, as well as a patched scf executable, in the download area. These will fix your problem.

Best regards,
Mihaly Kallay
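
A minimal sketch of how such a fix might be applied, assuming the install path mentioned earlier in this thread; the download location, the source directory, and any build.mrcc options are placeholders for your own setup:
Code:
# Drop in the patched files (download location is a placeholder)
cp ~/Downloads/dfint.f /path/to/mrcc/source/                # patched source file
cp ~/Downloads/scf     /home/bxs21/Programs/mrcc/binary/    # patched scf executable
chmod +x /home/bxs21/Programs/mrcc/binary/scf
# If you build from source, re-run build.mrcc with the same options you used
# originally (options intentionally omitted here)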


3 years 3 months ago #1144 by benshi97
Many thanks! It now works without a problem.
