[StarCluster] Easy way to delete more than 100k jobs

Rayson Ho raysonlogin at gmail.com
Mon Feb 23 03:03:51 EST 2015


Is your local cluster using classic or BerkeleyDB spooling? If it is
classic over NFS, then qdel can be very slow.

One quick workaround is to hide the job spooling files manually: just move
the spooled jobs from $SGE_ROOT/$SGE_CELL/spool/qmaster/jobs to a private
backup directory.
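A minimal sketch of that move (the backup directory name and the demo
defaults for SGE_ROOT/SGE_CELL are illustrative, not part of any SGE
standard; on a real cluster, stop sge_qmaster before moving the files and
restart it afterwards, and use your actual spool path):

```shell
# Sketch only: hide classic-spooled job files instead of qdel'ing each job.
# Demo defaults below are placeholders; on a real cluster SGE_ROOT and
# SGE_CELL are already set in the environment.
SGE_ROOT="${SGE_ROOT:-/tmp/sge_demo}"
SGE_CELL="${SGE_CELL:-default}"
SPOOL_JOBS="$SGE_ROOT/$SGE_CELL/spool/qmaster/jobs"
BACKUP_DIR="$SGE_ROOT/jobs_backup"        # private backup directory (any name)

mkdir -p "$SPOOL_JOBS"                    # demo setup; a real spool already exists
touch "$SPOOL_JOBS/1" "$SPOOL_JOBS/2"     # stand-ins for spooled job files

mkdir -p "$BACKUP_DIR"
mv "$SPOOL_JOBS"/* "$BACKUP_DIR"/         # the workaround: move aside, don't delete
```

Because the files are moved rather than deleted, they can be restored by
moving them back before restarting the qmaster if anything goes wrong.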

Rayson

==================================================
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html



On Sun, Feb 22, 2015 at 8:31 PM, Jacob Barhak <jacob.barhak at gmail.com>
wrote:

> Hi to SGE experts,
>
> This is an SGE question rather than a StarCluster one. I am actually
> having this issue on a local cluster, and I raised this issue a while
> ago, so sorry for the repetition. If you know of another list that can
> help, please direct me there.
>
> The qdel command does not cope well with a large number of jobs; more
> than 100k jobs makes things intolerable.
>
> It takes a long time and consumes too much memory when trying to delete
> all jobs.
>
> Is anyone aware of a shortcut to clear the entire queue without waiting
> many hours or running the server out of memory?
>
> Would removing the StarCluster server and reinstalling it work? If so, how
> can I bypass the long configuration? Are there a few files that can do the
> trick if handled properly?
>
> I hope someone has a quick solution.
>
>           Jacob
>
> _______________________________________________
> StarCluster mailing list
> StarCluster at mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster
>
>

