#37402 - 03/20/19 11:55 AM Free version of CHARMM in parallel
pmj Offline
Forum Member

Registered: 01/15/19
Posts: 44
Dear all,

I am using the free version of CHARMM (v43), which I compiled with:

./install.com gnu xlarge x86_64
./install.com gnu M

I am using the GBSW solvation model and I was doing some testing to see how fast the runs were. My system is also constrained with EMAP.

I got really strange results, so I assume CHARMM has not compiled properly.

When I run it on 4, 8, 12 and 16 cores, the CPU time shows it takes longer on 16 cores than on 4:

NP             4      8      12     16
CPU time/min   25.53  27.27  29.09  29.87

I also tried ./install.com gnu M MPICH AMD64 and some others like ./install.com gnu xlarge ifort x86_64, and then recompiling with the MPI option, but it does not work.

I also tried to recompile with em64t, but I could not do it successfully; I could only build the serial version.

Can anyone help me with this issue? I suppose I am just not doing it properly, so I would appreciate the help.

Thank you!

#37403 - 03/20/19 12:03 PM Re: Free version of CHARMM in parallel [Re: pmj]
rmv Online

Forum Member

Registered: 09/17/03
Posts: 8364
Loc: 39 03 48 N, 77 06 54 W
Have you run the test case suite?

Are the cores all on a single machine?

It's entirely possible that GBSW and EMAP are not as well optimized for parallel execution as other parts of the code.

The 'em64t' option requires a licensed Intel compiler; it does work with MPI libraries, but the setup may be a bit different, and an Intel-built version of the mpi.mod file is required.
_________________________
Rick Venable
computational chemist


#37404 - 03/20/19 12:23 PM Re: Free version of CHARMM in parallel [Re: pmj]
pmj
Dear Prof. Venable,

I ran the test cases and there were no failed, unfinished, or abnormally finished jobs (the number of test cases was 168). I am not sure if this is good or bad.

I tested the following:

./test.com M 8 gnu

When I tried to do the same with ./parallel-tests, it did not work.

I also tried ./parallel-test ../pathtocharmm/gnu_M and it showed no apparent error.

All the cores are on a single machine.

#37405 - 03/20/19 12:45 PM Re: Free version of CHARMM in parallel [Re: pmj]
rmv Online
There are a lot more than 168 test cases, close to 600 in all.

You should probably look at the output from test cases with the strings 'emap' and 'gbsw' in the file name.

The command "./test.com M 8 gnu" should be okay for testing the parallel code in CHARMM, although some tests will be skipped.

The ./parallel-tests script allows running multiple test cases at once, and is not specifically related to the parallel energy evaluation code in CHARMM.
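To scan the GBSW/EMAP test outputs, a small loop over the output files works; the sketch below uses mock files and an assumed output directory name, since the actual layout depends on the local CHARMM tree (the termination strings are the ones CHARMM normally prints):

```shell
# Mock test outputs standing in for real ones; in a real tree these would
# sit under the CHARMM test output directory with 'gbsw' or 'emap' in the name.
mkdir -p output
printf 'ENER> ...\nNORMAL TERMINATION BY NORMAL STOP\n' > output/gbsw_test.out
printf 'ENER> ...\nNORMAL TERMINATION BY NORMAL STOP\n' > output/emap_test.out

# Flag any GBSW/EMAP test that did not terminate normally.
for f in output/*gbsw* output/*emap*; do
  if grep -q 'ABNORMAL' "$f"; then
    echo "$(basename "$f"): FAILED"
  elif grep -q 'NORMAL TERMINATION' "$f"; then
    echo "$(basename "$f"): ok"
  else
    echo "$(basename "$f"): no termination line"
  fi
done
```

Checking for 'ABNORMAL' first avoids a false positive, since "ABNORMAL TERMINATION" contains the substring "NORMAL TERMINATION".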


#37406 - 03/20/19 12:47 PM Re: Free version of CHARMM in parallel [Re: pmj]
pmj
Dear Prof. Venable,

I made a mistake: there were 168 test cases that had not been run; everything else was okay.

I will check the files as you wrote.

Thank you

#37407 - 03/20/19 01:02 PM Re: Free version of CHARMM in parallel [Re: pmj]
pmj
Dear Prof. Venable,

I checked all the outputs for gbsw and emap and, except for a few warnings, there were no major errors (normal termination).

However, I ran my script without gbsw and emap and obtained the following results (two runs per core count):

NP       4          8          12         16
Run 1    1.12 min   37s 340ms  34s 210ms  37s 100ms
Run 2    57s 830ms  36s 600ms  46s 540ms  35s 480ms

It seems a bit odd that they are almost the same for all core counts, or even longer on 16 cores.

Thanks

#37408 - 03/20/19 01:51 PM Re: Free version of CHARMM in parallel [Re: pmj]
rmv Online
It should be noted that more cores is not always faster; that kind of behavior can be observed for systems with too few atoms. When the energy calculation done by each core is very fast, the communication between cores starts to dominate, and using more cores becomes slower.
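As a quick illustration, the relative speedup versus the 4-core run can be computed from the CPU times in the first post (numbers copied from the thread; this is just awk arithmetic, not a CHARMM tool):

```shell
# Speedup relative to the 4-core run: S(N) = T(4) / T(N).
# CPU times in minutes, taken from the first post of this thread.
awk 'BEGIN {
  t[4] = 25.53; t[8] = 27.27; t[12] = 29.09; t[16] = 29.87
  for (n = 4; n <= 16; n += 4)
    printf "NP=%2d  speedup vs NP=4: %.2f\n", n, t[4] / t[n]
}'
# NP= 4  speedup vs NP=4: 1.00
# NP= 8  speedup vs NP=4: 0.94
# NP=12  speedup vs NP=4: 0.88
# NP=16  speedup vs NP=4: 0.85
```

Speedups below 1 mean adding cores made the run slower, consistent with communication overhead dominating.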


#37409 - 03/20/19 01:57 PM Re: Free version of CHARMM in parallel [Re: pmj]
pmj
Dear Prof. Venable,

The system is rather big, approximately 120,000 atoms.

Is there any additional test I could perform to see whether parallel charmm is working properly?

Thank you so much

#37412 - 03/21/19 04:54 AM Re: Free version of CHARMM in parallel [Re: pmj]
SNOW001 Offline
Forum Member

Registered: 11/12/18
Posts: 16
Have you checked charmm/build/gnu_M/pref.dat? If it contains the keywords MPI, PARALLEL, and PARAFULL, I think CHARMM can run in parallel.
Then I recommend comparing the output files from runs with different core counts. If the output energies are exactly the same after some steps (such as 1 ns), I guess each core is doing the same thing repeatedly instead of each core doing part of the run.
Also, you can check the charmm/build/gnu_M/new.log file; if there are errors or warnings there, maybe you compiled it wrong.
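The two checks above can be sketched in shell; the files here are mock stand-ins (the real keyword list lives at charmm/build/gnu_M/pref.dat, and the run outputs would come from actual jobs with different core counts):

```shell
# Mock pref.dat standing in for charmm/build/gnu_M/pref.dat.
printf 'MPI\nPARALLEL\nPARAFULL\nGBSW\nEMAP\n' > pref.dat

# 1) A parallel build should list all three parallel keywords.
for kw in MPI PARALLEL PARAFULL; do
  grep -qx "$kw" pref.dat && echo "$kw present" || echo "$kw MISSING"
done

# 2) Compare energy lines from runs on different core counts (mock outputs).
printf 'ENER>  1  -1234.5678\n' > run_np4.out
printf 'ENER>  1  -1234.5678\n' > run_np8.out
grep '^ENER>' run_np4.out > e4.tmp
grep '^ENER>' run_np8.out > e8.tmp
if cmp -s e4.tmp e8.tmp; then
  echo "energies identical step-for-step (suspicious over a long run)"
else
  echo "energies diverge (expected from floating-point roundoff)"
fi
```

With different core counts, floating-point summation order changes, so long trajectories normally drift apart; step-for-step identical energies over many steps would suggest each rank is repeating the same serial job.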


Edited by SNOW001 (03/21/19 05:23 AM)

#37414 - 03/21/19 05:51 AM Re: Free version of CHARMM in parallel [Re: SNOW001]
pmj
Hi!

I checked the pref.dat file and it contains all three keywords. There are no errors in the new.log file either.

I will try and see if the energies are the same, as you said, when I compile with ./install.com gnu M. Perhaps it has not compiled properly.

I was also trying to compile CHARMM with ./install.com gnu M MPICH AMD64, but when I gave the location of the mpif.h file, it was not recognised (e.g. /path/mpi/intel64/lib or /path/mpi/intel64/).

Also, where do I change the number of cores in the test cases?

Thank you for your help.

Current operating system I am using: Linux-3.16.0-4-amd64(x86_64)


Edited by pmj (03/21/19 06:15 AM)
