CHARMM Development Project
Dear all,
I succeeded in compiling the serial and MPI-enabled versions of CHARMM c41b2 (nearly) out of the box with the GNU Compiler Collection.
However, I am having serious trouble compiling the GPU-accelerated version:
nvcc succeeds in building all requested CUDA kernels. However, the subsequent compilation of the CHARMM code itself fails.
gfortran complains about missing variable definitions because some preprocessor macros are not set. Finally, gfortran throws a strange
"Unclassifiable statement at (1)" error during the compilation of the file enbonda_cff.F90 (generated by preprocessing enbonda_cff.src)
as soon as the macro KEY_FLUCQ is set.

Is the GPU-enabled version of CHARMM supposed to compile with gcc/gfortran?

Thanks for your support and attention
Sebastian
Which build procedure, install.com or configure/cmake?

What command line options were used for the build procedure?

Which GPU code? OpenMM? DOMDEC_GPU? The GPU machine type?

What OS and version? Which GCC version?

The free version (charmm), or the paid academic license fee version (CHARMM)?

Hi Rick,

One small correction:
the error was not in the file enbonda_cff.F90/enbonda_cff.src, but rather in flucq.F90/flucq.src.

>> Which build procedure, install.com or configure/cmake?
>> What command line options were used for the build procedure?

Both methods:
1) ./install.com gpu (does not work)
2) ./install.com gnu (serial, works)
3) ./configure --without-domdec --without-mkl --without-mpi --without-openmm --with-gn && make (serial, works)

Is there any way to build CHARMM with CUDA support (no OpenMM/DOMDEC) via the configure/cmake method?


>> What OS and version? Which GCC version?

Manjaro 18 and gfortran/gcc 8.3.0.


>> The free version (charmm), or the paid academic license fee version (CHARMM)?

Academic license.

Regards
Sebastian
I'm not sure that install.com gpu actually works. Version 41 is two years old now, and you can get c43b1; the free version contains everything (including GPU support via OpenMM) except domdec.

For installation with GPU support using OpenMM you need this (see openmm.doc):
./install.com gnu openmm

For installation with GPU and domdec see domdec.doc.
I believe the GPU machine type is legacy code that probably should have been removed.

Our lab has made good use of domdec_gpu, but it is a hybrid method and requires a good balance of CPU, GPU, and a fast node interconnect such as Infiniband for optimal use.

OpenMM works well on GPUs, although I've come to prefer the Python interface. It can read CHARMM files, notably topology, parameter, psf, and coord files; it produces CHARMM compatible trajectory files, a.k.a. DCD format. I'm not sure the CHARMM implementation supports some newer features available with the OpenMM Python interface, such as Drude force fields, long range VDW via LJ-PME, or the Nose-Hoover thermostat.
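As a rough sketch of what that Python route looks like (the file names below are placeholders, the box size is an arbitrary assumption, and the API shown is the modern `openmm` package layout, OpenMM 7.5 or later), reading CHARMM psf/crd/parameter files and writing a CHARMM-compatible DCD trajectory goes roughly like this:

```python
# Hypothetical minimal OpenMM run from CHARMM input files.
# All file names (system.psf, system.crd, the rtf/prm files) are placeholders.
from openmm.app import (CharmmPsfFile, CharmmCrdFile, CharmmParameterSet,
                        Simulation, DCDReporter, PME, HBonds)
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, nanometer

psf = CharmmPsfFile('system.psf')                      # CHARMM topology (psf)
crd = CharmmCrdFile('system.crd')                      # CHARMM coordinates
params = CharmmParameterSet('top_all36_prot.rtf',      # example CHARMM36 files
                            'par_all36m_prot.prm')

psf.setBox(4*nanometer, 4*nanometer, 4*nanometer)      # PME needs a periodic box
system = psf.createSystem(params, nonbondedMethod=PME,
                          nonbondedCutoff=1.2*nanometer, constraints=HBonds)

integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond,
                                      0.002*picosecond)
sim = Simulation(psf.topology, system, integrator)
sim.context.setPositions(crd.positions)
sim.reporters.append(DCDReporter('traj.dcd', 1000))    # CHARMM-compatible DCD
sim.minimizeEnergy()
sim.step(10000)
```

The resulting traj.dcd can be post-processed with CHARMM or any DCD-aware analysis tool; this is only a usage sketch, not a ready-to-run script, since it depends on your own input files.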

OpenMM is good for producing ensembles, but needs further development to support computing time dependent properties such as relaxation phenomena, diffusion, and viscosity.
Dear Rick, Dear Lennart,
thanks for your suggestions.
I managed to build a charmm binary with OpenMM support from the free c43b2 release of CHARMM. However, both the GPU and OpenMM builds of release c41b2 seem to be horribly broken with both the install.com and configure/make installation methods.

Best
Sebastian
© CHARMM forums