Note that we have made changes to our systems that affect the way Gaussian jobs are run.
To facilitate the execution of such jobs, we have dedicated an M9000 server (m9k0002) to Gaussian and equipped it with fast local scratch space. This is expected to speed up Gaussian program runs that depend on I/O to/from scratch, and at the same time take some pressure off our shared NFS file system. In addition, the new scheme has the advantage of automatically cleaning up scratch space, even if the associated Gaussian run fails at some point.
Please modify your Gaussian execution scripts for Grid Engine to include the following lines:
  #$ -pe gaussian.pe N
  . /opt/gaussian/setup.sh
before calling Gaussian. "N" in the -pe line stands for the number of parallel processes you request. The crucial difference is the use of the new parallel environment "gaussian.pe" in place of the standard "shm.pe" used previously. Sourcing the "setup.sh" script is also new; it redirects scratch to a local disk area. Note that there is a space between "." and "/" in that line.
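Putting the two required lines together, a complete submission script might look like the sketch below. The input/output file names, the "g09" executable name, and the slot count of 4 are assumptions for illustration only; adapt them to your own job.

```shell
#!/bin/sh
# Sample Grid Engine script for a Gaussian job (sketch only).
#$ -cwd
# Request the new Gaussian parallel environment with 4 slots
# (replace 4 with the number of processes you need).
#$ -pe gaussian.pe 4

# Redirect Gaussian scratch to the fast local disk.
# The space between "." and "/" is required: "." sources the
# script into the current shell rather than executing it.
. /opt/gaussian/setup.sh

# Run Gaussian on a hypothetical input file.
g09 < water.com > water.log
```

Submit the script as usual, e.g. with "qsub myjob.sh"; scratch files are then created on, and automatically cleaned from, the local disk even if the run fails.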
For a transition period, the old submission scheme will still work, but we are going to "retire" it eventually, at which point job scripts that do not use the above approach will no longer be scheduled.
The latest version of "Gaussian" on our systems is Revision D.01. The program executables are installed in /opt/gaussian/ and may be accessed by registered Gaussian users. To learn details about Gaussian usage on our systems, please read our FAQ.
The most recent version of Gaussian can be set up by typing:
If you need to access an earlier version (for instance, the previous default G09 Rev B1), you can do so by using a usepackage command such as:
For a list of available older versions, use the listing command:
For a list of capabilities of G09, please see http://gaussian.com/g_prod/g09_glance.htm
For a list of new features of this current version, see http://www.gaussian.com/g_tech/rel_notes.pdf
If you encounter any problems or need assistance, please contact us.