Execution limits for the three main clusters at HPCVL have been implemented to provide greater flexibility in scheduling user applications.
Each user is limited to twelve (12) executing jobs at any one time: across the production clusters (M9000 and VictoriaFalls), up to 12 production jobs in total may run simultaneously. Submitted jobs above that limit remain in the queue until a job slot becomes free.
To reflect the differences in processor slots, number of machines, CPU speed, and available memory, the maximum total number of processes (threads) that can run at a given time is as follows:
Thus, with up to 12 executing jobs, a total of 624 threads/processes are possible at any given time when spread over the queues.
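As an illustration only (this is not an official HPCVL tool), the two per-user limits stated above can be checked for a planned batch of jobs with a short sketch; the function name and interface here are hypothetical, while the limit values come directly from this page:

```python
MAX_JOBS = 12        # executing jobs per user at any one time
MAX_THREADS = 624    # total threads/processes per user across the queues

def within_limits(threads_per_job):
    """Check a planned set of jobs against the per-user limits.

    threads_per_job: list with one entry (thread count) per planned job.
    Returns True if both the job-count and total-thread limits are met.
    """
    return (len(threads_per_job) <= MAX_JOBS
            and sum(threads_per_job) <= MAX_THREADS)
```

For example, 12 jobs of 52 threads each (624 in total) would be at the limit, while 10 jobs of 64 threads each (640 in total) would exceed the thread cap even though the job count is fine.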
Important Note: Some users require more resources, especially in terms of thread/process counts, than these limits allow. We will work with such users to ensure appropriate access. Requests for enhanced access should specify the number of processors required, the total amount of memory, and the expected maximum runtime(s). Contact the User Support group to ensure that this is arranged using the most appropriate resource.
If extended resources are required over a substantial time period, an application to the Resource Allocation Committee via Compute Canada is required. Calls for such applications are issued once a year in the fall for the upcoming calendar year. A short explanation can be found here. To get a further idea, you can check out last year's Call for Proposals.
Please note that scheduling of jobs using the commercial software packages Fluent and Abaqus involves a license check and must therefore remain subject to additional limits: 4 jobs per user, with 36 processes per job for Fluent on Solaris and 20 processes per job for Abaqus version 6.5 on Solaris. Users of Abaqus version 6.7 (or higher) on the Abaqus Mini Cluster (sw0001-4) are limited to 12 threads/processes, as that is the number of single-threaded processors available on the standard nodes. The same restrictions hold for Fluent runs on the SW Linux cluster.
These changes make the utilization of our resources more efficient, while allowing researchers to complete their work, expand their research, and address new problems.