If you have problems during the execution of MRCC, please attach the output together with an adequate description of your case, as well as the following:
- the way mrcc was invoked
- the way build.mrcc was invoked
- the output of build.mrcc
- compiler version (for example: ifort -V, gfortran -v)
- BLAS/LAPACK versions
- gcc and glibc versions
This information really helps us during troubleshooting
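On a typical Linux system, this version information can usually be collected with commands like the following sketch; the exact compiler command depends on how MRCC was built, and the BLAS/LAPACK details depend on which library (e.g. MKL or OpenBLAS) was linked:

ifort -V          # or: gfortran -v, mpiifort -V, depending on your compiler
gcc --version     # gcc version
ldd --version     # prints the glibc version on most Linux systems
# The BLAS/LAPACK version depends on the linked library; check the
# corresponding installation or environment-module information.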
Problems running MPI version of MRCC after MOLPRO
9 years 11 months ago #77
Replied by kallay on topic Problems running MPI version of MRCC after MOLPRO
Dear Valérie,
The problem is that only the mrcc executable should be run with mpirun; that is, the last line of your script should read
mpirun -np $NCPUS mrcc
Best regards,
Mihaly Kallay
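As a rough sketch only (the serial steps and their exact order depend on your own MOLPRO/MRCC setup), the end of such a script might then look like this:

goldstone                  # serial steps are run without mpirun
mpirun -np $NCPUS mrcc     # only the mrcc executable is launched with mpirun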
9 years 11 months ago #79
Replied by valerie_vallet on topic Problems running MPI version of MRCC after MOLPRO
I changed the script line to
mpirun -np $NCPUS mrcc
but the goldstone error remains! Can you help me solve this error?
Valérie
9 years 11 months ago #80
Replied by kallay on topic Problems running MPI version of MRCC after MOLPRO
Dear Valérie,
Unfortunately, I cannot reproduce the problem; the job runs smoothly on our machines. It is probably a compiler-related issue. What compiler do you use?
I would suggest using the precompiled goldstone binary that you can download from this homepage. Just overwrite your goldstone with this binary, and try to run the job. (Please do not change your mrcc executable since it must be compiled for MPI execution, and the mrcc binary provided here is not compiled for MPI.)
Best regards,
Mihaly Kallay
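A minimal sketch of how that replacement could look; the paths below are placeholders, not actual locations from this thread:

cd /path/to/your/mrcc/installation      # placeholder for your MRCC directory
mv goldstone goldstone.selfcompiled     # keep a backup of your own build
cp /path/to/downloaded/goldstone .      # precompiled binary from the homepage
chmod +x goldstone                      # make sure it is executable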
9 years 11 months ago #82
Replied by valerie_vallet on topic Problems running MPI version of MRCC after MOLPRO
Dear Mihály,
I have now used the goldstone binary from your website. It runs fine.
The MPI execution of mrcc, however, fails with an error:
Starting CC iteration...
======================================================================
forrtl: No such file or directory
forrtl: severe (28): CLOSE error, unit 21, file "Unknown"
I am using your latest tarball, and I compiled it with mpiifort version 14.0.3.174:
mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 14.0.3.174 Build 20140422
Copyright (C) 1985-2014 Intel Corporation. All rights reserved.
Can you please check that error?
Another question: do you have benchmark results for the scaling of MRCC with respect to OpenMP or MPI parallelization? How many processors can one use efficiently?
Thanks for your help,
Valérie
9 years 11 months ago #85
Replied by kallay on topic Problems running MPI version of MRCC after MOLPRO
Dear Valérie,
That is a strange error, and I could not reproduce it, although I could only try with another compiler since we do not have that mpiifort version. I am rather puzzled. Probably something is wrong with the read/write permissions on your compute nodes.
Concerning parallelization efficiency: the OpenMP parallelization is quite efficient, especially for perturbative methods. We have tested for up to 48 threads, and the parallelization efficiency was good. With MPI the parallelization efficiency is also fine for perturbative methods, but for the iterative ones it is only satisfactory for very large jobs. Thus for your calculation, which is relatively small, I would not try more than 4 cores with MPI.
Best regards,
Mihaly Kallay
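A simple, hypothetical way to test the read/write permissions mentioned above is to try creating and deleting a file in the directory where the mrcc job writes its files on a compute node; the directory below is only a placeholder:

cd /path/to/job/scratch/directory       # placeholder: where the mrcc job runs
touch testfile && rm testfile && echo "write/delete OK"
df -h .                                 # also check that the file system is not full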
9 years 11 months ago #86
Replied by valerie_vallet on topic Problems running MPI version of MRCC after MOLPRO
Dear Mihály,
Could you tell me which version of the Intel compiler you use that produces normal results? I can try it on my cluster.
Valérie