[StarCluster] Many jobs stuck in "t state"

Ying Sonia Ting sonia810 at uw.edu
Tue Feb 24 15:08:44 EST 2015


Hi all,

This might be more of an SGE issue than a StarCluster issue, but I'd really
appreciate any comments.

I have a bunch of jobs running on AWS spot instances using StarCluster. Most of
them get stuck in the "t" state for hours before finally executing (moving to
the "r" state). For instance, 50% of the jobs that are not in "qw" right now
are in the "t" state.

The same program/script/AMI has been used frequently, and this is the worst it
has ever been. The only difference is that this time the jobs are processing
bigger files (~6 GB each, 90 of them) located on an NFS-shared gp2 volume. Jobs
were divided into tasks to ensure that only 4-5 jobs process the same file at
once. Memory is not even close to overloaded (only ~5 GB used out of 240 GB on
each node). The long wait in the "t" state is wasting money and CPU hours.
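
I have been poking at individual stuck jobs with something like the commands
below (the job ID is just a placeholder), in case anyone wants to suggest
something specific to look for in the output:

    qstat -j 1234    # scheduling/diagnostic details for one stuck job
    qhost -j         # which jobs each exec host thinks it is running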

Have any of you seen this issue before? Is there any way I can fix or work
around it?

Thanks a lot,
Sonia



-- 
Ying S. Ting
Ph.D. Candidate, MacCoss Lab
Department of Genome Sciences, University of Washington