If you run into trouble, it is always a good habit to report the following information:
- the way build.mrcc was invoked
- the output of build.mrcc
- the compiler version (for example: ifort -V, gfortran -v)
- the BLAS/LAPACK versions
- the gcc and glibc versions
as well as the values of the relevant environment variables, such as OMP_NUM_THREADS.
This information helps us a lot when figuring out what is going on with your compilation.
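A minimal shell sketch of collecting these details into one file to attach to a report (the build.mrcc command line shown is the one from this thread, the output file name and the environment-variable pattern are my own choices; adjust all of them to your setup):

```shell
# Gather the requested diagnostics into report.txt (the build command line
# below is an example; replace it with the one you actually used)
{
  echo "build command: ./build.mrcc Intel -i64 -pOMP -pMPI=IntelMPI"
  echo "--- compiler version ---"
  (ifort -V || gfortran -v || gcc --version) 2>&1 | head -n 3
  echo "--- gcc / glibc ---"
  gcc --version 2>&1 | head -n 1
  ldd --version 2>&1 | head -n 1
  echo "--- environment ---"
  env | grep -E 'OMP_NUM_THREADS|MKL_NUM_THREADS' || echo "(none set)"
} > report.txt
cat report.txt
```

Attach report.txt (together with the full build.mrcc output) to your post.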
Compilation issues
- ddatta
- Topic Author
- Offline
- New Member
4 years 2 months ago #1000
by ddatta
Compilation issues was created by ddatta
Dear MRCC administrators,
I am writing to seek suggestions on resolving a compilation issue that I have come across. I am trying to compile MRCC with ifort version 17 and the Intel MPI library. To be specific, here are the options I chose when running build.mrcc:
./build.mrcc Intel -i64 -pOMP -pMPI=IntelMPI
I see the following error messages:
Space exceeded in Data Dependence Test in hgi_int_out_
Subdivide routine into smaller ones to avoid optimization loss
and
Space exceeded in Data Dependence Test in hgi_int_out_rangesep_
Subdivide routine into smaller ones to avoid optimization loss
I would also like to compile MRCC with the GNU compiler (gcc version 8.3.0, for example), but that compilation fails with several errors when I use the options mentioned above.
I would greatly appreciate suggestions from your side.
Thanks in advance,
Dipayan Datta
- Nike
- Offline
- Premium Member
- Posts: 97
- Thank you received: 3
4 years 2 months ago #1001
by Nike
Replied by Nike on topic Compilation issues
Hi Dipayan,
I frequently ran into space issues when compiling MRCC, until I started compiling on a development node (a node where I can actually run serious calculations, not just submit jobs). The space and time requirements to compile MRCC sometimes necessitate having access to a fair amount of resources.
I'm not sure how to solve your problem with the GNU compiler though.
By the way it's cool that your question is marked as issue #1000. A new milestone for the MRCC forum!
With best wishes,
Nike
- ddatta
- Topic Author
- Offline
- New Member
- Posts: 8
- Thank you received: 0
4 years 2 months ago #1002
by ddatta
Replied by ddatta on topic Compilation issues
Hello Nike,
I am trying to compile MRCC on a national supercomputing facility, so I do not have direct access to a compute node. This is a good suggestion, though, and given how slow the compilation process is, I will try compiling on a compute node of a local computer cluster.
Thanks,
Dipayan
- Nike
- Offline
- Premium Member
- Posts: 97
- Thank you received: 3
4 years 2 months ago #1003
by Nike
Replied by Nike on topic Compilation issues
Hi Dipayan,
Maybe that's a good idea: you can test it on your local cluster and see if it works. Then you can copy the binaries over to the national HPC centre, or at least find out what resources compiling on the local cluster required.
On a national HPC centre there might be a few development nodes available for testing, but they might be limited to around 30 minutes, which I think will not be enough.
You could also submit a job to the queue, which will give you access to a compute node, and compile it there. One day and 32 GB of RAM will be much more than enough.
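The batch route could look like the following SLURM script (the resource figures match the suggestion above; the job name, core count, and of course the build.mrcc options are my own placeholder assumptions, and partition/account directives will be site-specific):

```shell
#!/bin/bash
# Example SLURM batch script for compiling MRCC on a compute node.
# Resource requests follow the ~1 day / 32 GB suggestion in this thread;
# add site-specific --partition/--account lines as needed.
#SBATCH --job-name=mrcc-build
#SBATCH --time=24:00:00
#SBATCH --mem=32G
#SBATCH --cpus-per-task=8

cd "$SLURM_SUBMIT_DIR"
./build.mrcc Intel -i64 -pOMP -pMPI=IntelMPI > build.log 2>&1
```

Submit it with sbatch from the MRCC source directory and check build.log afterwards.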
Another thing you can do is submit an "interactive" job to the queue. On SLURM this can be done with the srun command: hpc-uit.readthedocs.io/en/latest/jobs/interactive.html and on PBS/TORQUE it can be done with qsub -I.
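For the interactive route, the commands might look like this (resource figures again follow the suggestion above; exact option support varies between sites and scheduler versions):

```shell
# SLURM: request an interactive shell on a compute node for compiling
srun --time=24:00:00 --mem=32G --cpus-per-task=8 --pty bash

# PBS/TORQUE: the equivalent interactive request
qsub -I -l walltime=24:00:00,mem=32gb
```

Once the shell opens on the compute node, run build.mrcc there as usual.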
With best wishes!
Nike