[StarCluster] AWS instance runs out of memory and swaps

Justin Riley jtriley at MIT.EDU
Tue Nov 8 08:47:40 EST 2011


Hi Amirhossein,

Did you specify the memory usage in your job script or on the command
line, and exactly which parameters did you use?

From a quick search, I believe the following will solve the problem,
although I haven't tested it myself:

$ qsub -l mem_free=MEM_NEEDED,h_vmem=MEM_MAX yourjob.sh

Here, MEM_NEEDED (mem_free) is how much memory must be free on a node
before the job will be scheduled there, and MEM_MAX (h_vmem) is a hard
upper limit on the job's virtual memory; SGE kills the job if it
exceeds that limit.
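
For example, on the cg1.4xlarge instances you mention (around 22GB of
RAM each), something along these lines should work, with the values
adjusted to your job's actual footprint:

$ qsub -l mem_free=8G,h_vmem=10G yourjob.sh

One caveat: mem_free only checks how much memory is free at the moment
a job is scheduled, so a batch of jobs submitted together can still
pile onto the same node. For SGE to actually reserve memory per job,
h_vmem has to be configured as a consumable resource. Roughly (again
untested, and the 22G figure is just the cg1.4xlarge total, so adjust
as needed):

$ qconf -mc
  (edit the h_vmem line so that consumable is YES and it has a
   sensible default, e.g.: h_vmem  h_vmem  MEMORY  <=  YES  YES  1G  0)
$ qconf -rattr exechost complex_values h_vmem=22G node001

With that in place, the scheduler subtracts each job's h_vmem request
from the node's total and won't oversubscribe the node.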

HTH,

~Justin

Amirhossein Kiani wrote:
> Dear Star Cluster users,
>
> I'm using StarCluster to set up an SGE cluster, and when I submitted
> my job list, it scheduled too many jobs on my instance even though I
> had specified the memory usage for each job, and the instance started
> running out of memory and swapping.
> I wonder if anyone knows how I could tell SGE the maximum memory to
> consider when submitting jobs to each node, so that it doesn't run a
> job if there is not enough memory available on the node.
>
> I'm using the Cluster GPU Quadruple Extra Large instances.
>
> Many thanks,
> Amirhossein Kiani
