<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
Hi Amirhossein,<br>
<br>
Did you specify the memory usage in your job script or at the command
line, and which parameters did you use exactly?<br>
<br>
From a quick search, I believe the following will solve the
problem, although I haven't tested it myself:<br>
<br>
$ qsub -l mem_free=MEM_NEEDED,h_vmem=MEM_MAX yourjob.sh<br>
<br>
Here, MEM_NEEDED (mem_free) tells SGE to schedule the job only on a
node reporting at least that much free memory, and MEM_MAX (h_vmem)
is a hard upper bound on the job's virtual memory use.<br>
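<br>
Equivalently (untested on my end, and the 2G/4G values below are just
placeholders, not recommendations), the same requests can be embedded in
the job script itself as SGE directives, so a plain <code>qsub
yourjob.sh</code> picks them up:<br>
<br>
<span style="white-space: pre;">#!/bin/sh
# Hypothetical SGE job script -- adjust the values to your job's real footprint.
#$ -l mem_free=2G   # only schedule on a node with at least 2G of memory free
#$ -l h_vmem=4G     # hard cap: the job is killed if it exceeds 4G virtual memory
#$ -cwd

./your_analysis     # replace with your actual command</span><br>
<br>
One caveat: for SGE to account for h_vmem per node rather than per job,
it typically has to be marked as a consumable attribute in the complex
configuration (see <code>qconf -mc</code>) -- worth checking if jobs
still oversubscribe a node.<br>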
<br>
HTH,<br>
<br>
~Justin<br>
<br>
On 7/22/64 2:59 PM, Amirhossein Kiani wrote:<br>
<span style="white-space: pre;">> Dear StarCluster users,<br>
><br>
> I'm using StarCluster to set up an SGE cluster, and when I ran my
> job list, although I had specified the memory usage for each job, it
> submitted too many jobs on my instance, and my instance started
> running out of memory and swapping.<br>
> I wonder if anyone knows how I could tell SGE the maximum memory to
> consider when scheduling jobs on each node, so that it doesn't run
> jobs when there is not enough memory available on a node.<br>
><br>
> I'm using the Cluster GPU Quadruple Extra Large instances.<br>
><br>
> Many thanks,<br>
> Amirhossein Kiani</span><br>
<br>
<br>
</body>
</html>