domdec-gpu
#37255 01/01/19 07:57 PM
Fajer (OP)
Forum Member
Joined: Jan 2019
Posts: 1
I am trying to use SGLD on a workstation equipped with one 8-core CPU (an i9) and a GTX 1080 card. DOMDEC-GPU does accelerate the simulation as expected, and GPU usage is ~15% (per nvidia-smi) as long as splitting into subboxes is turned off. As soon as I turn splitting on (with mpirun -n 8), or if I try to add an independent domdec_gpu CHARMM run, nothing gets executed on the GPU; the job that was already executing on the GPU stops using it.

Is there a way to execute multiple jobs on a single GPU?
Thanks
Peter

Re: domdec-gpu
Fajer #37256 01/01/19 08:19 PM
rmv
Forum Member
Joined: Sep 2003
Posts: 8,500
DOMDEC-GPU is a hybrid method that uses the GPU for the non-bonded calculations (Coulomb and vdW) and the CPU cores for bonded terms and integration. As noted in domdec.doc, each MPI task requires its own GPU; the code was not written to share the GPU with other processes.
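
For a single-GPU workstation, that means launching a single MPI task. A minimal sketch (the executable name charmm_domdec_gpu and the input/output file names are placeholders for whatever your build and job actually use):

    # one GPU in the node, so one MPI task
    mpirun -n 1 charmm_domdec_gpu < sgld.inp > sgld.out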

Within-node parallelism uses OpenMP threads; if that is working properly, the machine should have a load of about 8.0 for 8 cores, with the one CHARMM process showing a load of about 800% in top. If that is not the case, try setting the environment variable OMP_NUM_THREADS to 8 before running CHARMM.
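
For example (again a sketch, with the same placeholder names as above):

    export OMP_NUM_THREADS=8    # one OpenMP thread per core
    mpirun -n 1 charmm_domdec_gpu < sgld.inp > sgld.out &
    top                         # the CHARMM process should show ~800% CPU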

On a single machine, the run with the GPU ought to be 2-3 times faster than a CPU-only run with 8 MPI tasks.


Rick Venable
computational chemist

