How do I submit jobs to queues other than the default queue?

Please note: The FAQ pages at the HPCVL website are continuously being revised. Some pages might pertain to an older configuration of the system. Please let us know if you encounter problems or inaccuracies, and we will correct the entries.

Our main production environment consists of 8 Sun Enterprise M9000 servers. By default, jobs you submit run on this set of machines. The associated queue is production.q.
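As a sketch, a minimal submission script for the default queue might look as follows. This assumes the scheduler is (Sun) Grid Engine, as the `.q` queue names suggest; the script name, job name, and executable are hypothetical placeholders:

```shell
#!/bin/bash
# myjob.sh -- hypothetical job script; with no -q option,
# the job is placed on the default queue (production.q).
#$ -S /bin/bash        # interpret the job script with bash
#$ -cwd                # start the job in the submission directory
#$ -N myjob            # job name (hypothetical)
./my_program           # your executable (hypothetical)
```

The script would then be submitted with `qsub myjob.sh`.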

However, we operate two other clusters, namely the Victoria Falls cluster and the Software (sw) cluster. Each has its own queue: vf.q and abaqus.q, respectively.
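To direct a job to one of these queues, the queue can be named at submission time. The following sketch assumes a (Sun) Grid Engine scheduler and uses its standard `-q` option; the script name is hypothetical:

```shell
# Submit a job to the Victoria Falls cluster:
qsub -q vf.q myjob.sh

# Submit a job to the software (Linux) cluster:
qsub -q abaqus.q myjob.sh
```

Alternatively, the same request can be embedded in the job script itself as a `#$ -q vf.q` directive, so that plain `qsub myjob.sh` suffices.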

The Victoria Falls cluster consists of highly multithreaded nodes with 16 cores and up to 128 hardware-supported threads each. It runs on the Solaris/SPARC platform.

The SW (Linux) cluster consists of x86-based Dell and IBM nodes with 12 or 40 compute cores and between 32 GB and 1 TB of shared memory. These nodes run Linux, and therefore require re-compilation of user software, or specific versions of pre-compiled applications.

It is possible that code compiled on the login node (and therefore optimized for the UltraSPARC IV+ chip) will not run efficiently on the Niagara 2 (UltraSPARC T2) chips of the VF cluster, or on the SPARC64 VII chips of the M9000 cluster. See our Parallel Programming FAQ for suggestions on how to optimize code for architectures other than the UltraSPARC IV+.
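With the Sun Studio compilers, the target chip can be selected explicitly at compile time via the `-xtarget` option. The sketch below assumes Sun Studio's documented `-xtarget` values for these chips; the source and output file names are hypothetical:

```shell
# Optimize for the Niagara 2 (UltraSPARC T2) chips of the VF cluster:
cc -fast -xtarget=ultraT2 -o prog_vf prog.c

# Optimize for the SPARC64 VII chips of the M9000 cluster:
cc -fast -xtarget=sparc64vii -o prog_m9000 prog.c
```

Compiling a separate binary per target architecture in this way avoids running login-node-optimized code on the other clusters.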