I'm not sure what is causing undefined symbols in your case, but I can tell you how I use OpenMM installed from anaconda when I compile charmm.
When I use OpenMM from anaconda, I set the environment variables to subdirectories of ~/anaconda3/pkgs/openmm-7.6.0-py39h8d72adf_0_khronos rather than to anaconda3/lib.
So you could try ls -d ~/miniconda3/pkgs/*openmm* to find your OpenMM directory and set your environment variables accordingly.
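As a minimal sketch of that idea (the ~/miniconda3 path matches the ls command above; OPENMM_PLUGIN_DIR is the variable OpenMM itself reads at runtime, but the exact names your CHARMM build expects may differ, so treat these as placeholders):

    # Hedged sketch: locate the conda OpenMM package and print suggested settings.
    import glob, os

    for pkg in glob.glob(os.path.expanduser("~/miniconda3/pkgs/*openmm*")):
        lib = os.path.join(pkg, "lib")
        plugins = os.path.join(lib, "plugins")
        if os.path.isdir(plugins):
            print(f"export OPENMM_PLUGIN_DIR={plugins}")
            print(f"export LD_LIBRARY_PATH={lib}:$LD_LIBRARY_PATH")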
(base) Thu Jan 06 15:41:00 [rossi@debian OpenMM]$ cat test.py
from openmm.app import *
from openmm import *
from openmm.unit import *
from sys import stdout
Here is the problem. In September 2021, I installed MDAnalysis (https://www.mdanalysis.org/), software that can be used to analyze trajectories from applications such as OpenMM, CHARMM, GROMACS, etc.
The CMake procedure used an OpenMM library at /usr/lib/x86_64-linux-gnu/libOpenMM.so, placed there by the earlier MDAnalysis install procedure.
That is the library the CMake procedure picked up, ALTHOUGH all the needed library locations had been defined in environment variables BEFORE running CMake.
I think that, if the CMake procedure had used only the environment variables defined above, this conflict with an incorrect libOpenMM.so would not have arisen.
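One quick way to hunt for stray copies of the library that CMake might find before the conda one (the directory list below is illustrative, not exhaustive; adjust for your own system):

    # Hedged sketch: look for stray copies of libOpenMM.so outside the conda tree.
    import glob, os

    candidates = ["/usr/lib/x86_64-linux-gnu", "/usr/local/lib",
                  os.path.expanduser("~/anaconda3/lib")]
    for d in candidates:
        for hit in glob.glob(os.path.join(d, "libOpenMM*")):
            print(hit)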
Off topic: is there any known compatibility problem between CHARMM and OpenMM versions? I am still using OpenMM 7.3 and am thinking of moving to a newer version (now 7.7). Thanks.
At least with the OpenMMFortranModule.f90 that gets generated for me when I build OpenMM 7.7.0, there are some errors during the CHARMM compile process.
I edited this file and replaced all occurrences of 'long long', which is not a valid Fortran type, with 'integer'.
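If it helps, the edit can be scripted; a minimal sketch (the file path is whatever your OpenMM build generated, and note this blindly replaces every occurrence):

    # Hedged sketch: replace the invalid 'long long' type in the generated
    # Fortran interface file with 'integer', as described above.
    from pathlib import Path

    src = Path("OpenMMFortranModule.f90")   # adjust to your build tree
    src.write_text(src.read_text().replace("long long", "integer"))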
I found the problem by looking at the files generated after running ../configure; going over them, I could see that CMake had assigned the incorrect plugins file.
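For anyone retracing this, the cache file CMake writes records which paths it settled on; a small sketch to scan it (the build directory name is an assumption):

    # Hedged sketch: list every OpenMM-related entry CMake cached.
    for line in open("build/CMakeCache.txt"):   # adjust to your build directory
        if "OpenMM" in line:
            print(line.rstrip())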
As far as the OpenMM/CHARMM compatibility issue goes, I don't think there should be much of a problem, although there is an on-going blending of these two applications as time goes by. By that I mean that, periodically, i.e. at least yearly, new options are added to CHARMM, and there are also continuing improvements and additions to OpenMM. It's possible that there might be a slight incompatibility for a brief period, but it will correct itself. For me, though, the bread-and-butter issue is the ability to perform very fast MD simulations using GPU hardware, and that won't change from one OpenMM version to another.
I did want to discuss conda/miniconda. On Debian Linux, conda isn't available as a package, so I first installed miniconda, which creates an anaconda3 directory. From there, I installed OpenMM, which also resides in the anaconda3 directory. The miniconda installation also places code in my .bashrc file (I don't like that). I believe that .bashrc code puts the conda environment first on the PATH, which would divert normal python commands to the interpreter in the anaconda3 directory, but I can't be sure.
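A two-line check of which interpreter actually runs (nothing here is specific to my setup):

    # Hedged sketch: confirm which python the shell and the script resolve to.
    import shutil, sys
    print(sys.executable)           # the interpreter running this script
    print(shutil.which("python"))   # the one a bare 'python' command would launch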
So, to test compatibility, I suppose one could rename the anaconda3 directory to preserve it, and then repeat the process: install miniconda, then OpenMM, creating a whole new OpenMM installation.
While I have used the CHARMM/OpenMM interface in the past, thanks to some help from a very talented post-doc I've moved on to using OpenMM entirely via the Python bindings, with conda environments and a custom package that allows the use of a dyn.py script bearing some resemblance to the typical dyn.inp CHARMM scripts I've used to run MD. The biggest reason is several newer features supported in OpenMM that do not appear to be supported in the CHARMM interface, at least according to the documentation for the latest code. Those features are:
These features are important for the work I've been doing, but the Fortran implementation in CHARMM for the first two listed above is exceedingly slow, especially the use of the Drude force field. Also, with help from the post-doc, I've worked out a simple means of including a boundary plane restraint, comparable to what one might do with MMFP if it actually worked with domdec (it doesn't); a sketch of the general idea follows.
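A minimal sketch of one way to build such a planar wall with OpenMM's CustomExternalForce (not necessarily what my package does; the force constant, plane position, and atom choice are placeholders):

    # Hedged sketch: a one-sided harmonic wall on z via CustomExternalForce.
    from openmm import CustomExternalForce, System
    from openmm.unit import kilocalorie_per_mole, angstrom, nanometer

    system = System()           # stand-in; in practice, the System built from your PSF
    system.addParticle(14.0)    # one dummy particle so the sketch runs on its own

    wall = CustomExternalForce('k*max(0, z - z0)^2')   # zero below the plane
    wall.addGlobalParameter('k', 10.0*kilocalorie_per_mole/angstrom**2)
    wall.addGlobalParameter('z0', 3.0*nanometer)       # plane position (placeholder)
    wall.addParticle(0, [])     # repeat for each restrained atom index
    system.addForce(wall)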
OpenMM can read native charmm topology and parameter files, PSF files, and coordinate files on its own; the DCD files produced can be analyzed with charmm. So other than the actual MD engine, my workflow for most projects has not substantially changed, except for the use of Python to run simulations. I am by no means a Python maven, but have learned to accept its usefulness and flexibility.
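For anyone curious what that looks like, here is a bare-bones sketch (file names, box size, and run settings are placeholders; this is not my production dyn.py):

    # Hedged sketch: run MD in OpenMM directly from CHARMM-format input files.
    from openmm.app import (CharmmPsfFile, CharmmCrdFile, CharmmParameterSet,
                            Simulation, DCDReporter, PME, HBonds)
    from openmm import LangevinMiddleIntegrator
    from openmm.unit import kelvin, nanometer, picosecond, picoseconds

    psf = CharmmPsfFile('system.psf')                    # placeholder file names
    crd = CharmmCrdFile('system.crd')
    params = CharmmParameterSet('top_all36_prot.rtf', 'par_all36m_prot.prm')

    psf.setBox(6.0*nanometer, 6.0*nanometer, 6.0*nanometer)   # needed for PME
    system = psf.createSystem(params, nonbondedMethod=PME,
                              nonbondedCutoff=1.2*nanometer, constraints=HBonds)

    integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
    sim = Simulation(psf.topology, system, integrator)
    sim.context.setPositions(crd.positions)
    sim.reporters.append(DCDReporter('traj.dcd', 1000))  # DCD analyzable with charmm
    sim.minimizeEnergy()
    sim.step(10000)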
Thanks so much for the very important/useful information. I like!!
Rick, would you consider posting an example of the dyn.py file mentioned in your post above into the scripts category? All of the scripts written by you and Lennart have been so helpful, not only to me but to many, many others.
Thanks so much.
Warm regards,
Angelo
P.S. I forgive you for your fractured loyalty to CHARMM.
Well, I continue to use CHARMM for model building, equilibration, analysis, visualization, and simulations that require pressure calculations using the atomic virial, something many MD programs lack. I believe GROMACS and LAMMPS offer such pressure calculations, but I'm not aware of others; OpenMM, NAMD, and AMBER do not. Plus, I continue to contribute small code fixes and enhancements, since I'm quite comfortable working in Fortran.
I'm not sure the Script Archive forum is the best place for OpenMM python examples, though; I may need to consider some other venue.
I hope you realize that I was only kidding about the CHARMM comment.
But, I think you have a point. It is not clear where OpenMM posts should be placed. Since the CHARMM-GUI output yields input for OpenMM, perhaps that might be a place.
Below is a problem with respect to OpenMM that I can't solve; if this is an inappropriate place to ask, please delete it.
/global/u2/a/angelor
/global/homes/a/angelor/openmm
/usr/lib/python36.zip
/usr/lib64/python3.6
/usr/lib64/python3.6/lib-dynload
/usr/lib64/python3.6/site-packages
/usr/lib64/python3.6/_import_failed
/usr/lib/python3.6/site-packages
/global/homes/a/angelor/openmm
Traceback (most recent call last):
  File "test2.py", line 5, in <module>
    from openmm.app import *
  File "/global/u2/a/angelor/openmm/__init__.py", line 19, in <module>
    from openmm.openmm import *
  File "/global/u2/a/angelor/openmm/openmm.py", line 13, in <module>
    from . import _openmm
ImportError: cannot import name '_openmm'
The openmm directory is in my home directory and also in the python path, as seen from the output above:
[Sat Jan 15 10:20:24 angelor@cori04:~ ] $ ls openmm
__init__.py  __pycache__/  _openmm.cpython-39-x86_64-linux-gnu.so*  amd.py  app/
mtsintegrator.py  openmm.py  testInstallation.py  unit/  version.py
and
[Sat Jan 15 10:20:29 angelor@cori04:~ ] $ env | grep PYTHONPATH
PYTHONPATH=/global/homes/a/angelor/openmm:/global/common/cori/software/nwchem/6.6/contrib/python
Why can't the python interpreter find these files? I have had so many problems understanding python. It's like there is this big gap between two extremes: I can compile code and know Linux system stuff; and I can run simulations and analyze results, but I don't know what to do with the python gap in the middle! DUH!
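One detail I noticed but don't know how to interpret: the extension file in the listing above is tagged cpython-39, while the paths in the traceback are all python3.6. In case it is relevant, here is a small check of what the running interpreter can actually import (only a diagnostic, not a fix):

    # Hedged sketch: compare the running interpreter with the extension's tag.
    import sys, importlib.machinery
    print(sys.version)                             # interpreter actually in use
    print(importlib.machinery.EXTENSION_SUFFIXES)  # .so suffixes it can import
    # _openmm.cpython-39-x86_64-linux-gnu.so only matches a CPython 3.9 interpreter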
If you can help, I would really appreciate this. Otherwise delete the python component.
There are some conda setup examples on the Software wiki page for our lab cluster. I'll probably post a series of dyn.py examples on that page, but it may be a week or more until I can do that.