[StarCluster] SGE clarification and 2 questions
Justin Riley
jtriley at MIT.EDU
Mon Dec 20 20:54:13 EST 2010
On 12/20/10 7:46 PM, Boris Fain wrote:
> For SGE we tried AMI ami-a5c42dcc, but only once, so we don't know
> whether the error is reproducible.
Hmmm, OK, not sure what the issue might have been. Let me know if it
happens again.
> I noticed you had an s3fs wrapper in the repository. Is this something
> that will be controllable in the future
> from the config file? Next release?
I hadn't planned on it; it's just a convenience script for when I use
s3fs, and I haven't used s3fs in a while, so I'm not sure whether it's
generally useful or not. I would like to add a put/get-style command for
transferring files to/from clusters, so maybe that could also
upload/download from S3?
> And also the load balancing. It says it's already available in the
> newest snapshot. Do you have an example config file
> that controls the max/min number?
The config options haven't been implemented yet, but you can control the
max/min number of nodes at the command line:
$ starcluster loadbalance --help
StarCluster - (http://web.mit.edu/starcluster) (v. 0.9999)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster at mit.edu

Usage: loadbalance <cluster_tag>

Start the SGE Load Balancer.

Options:
  -h, --help            show this help message and exit
  -p, --plot            Plot usage data at each iteration
  -i INTERVAL, --interval=INTERVAL
                        Polling interval for load balancer
  -m MAX_NODES, --max_nodes=MAX_NODES
                        Maximum # of nodes in cluster
  -w WAIT_TIME, --job_wait_time=WAIT_TIME
                        Maximum wait time for a job before adding
                        nodes (seconds)
  -a ADD_PI, --add_nodes_per_iter=ADD_PI
                        Number of nodes to add per iteration
  -k KILL_AFTER, --kill_after=KILL_AFTER
                        Minutes after which a node can be killed
  -s STAB, --stabilization_time=STAB
                        Seconds to wait before cluster stabilizes
  -l LOOKBACK_WIN, --lookback_window=LOOKBACK_WIN
                        Minutes to look back for past job history
  -n MIN_NODES, --min_nodes=MIN_NODES
                        Minimum number of nodes in cluster
$ starcluster loadbalance mycluster -m 10 -n 3
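To make those flags concrete, here is a rough, hypothetical Python sketch (not StarCluster's actual code) of the kind of per-iteration scale-up decision that -m/--max_nodes, -w/--job_wait_time, and -a/--add_nodes_per_iter imply; all names and defaults here are illustrative:

```python
def nodes_to_add(queued_jobs, oldest_wait_secs, current_nodes,
                 max_nodes=10, job_wait_time=900, add_nodes_per_iter=1):
    """Illustrative guess at one polling iteration's add-node decision.

    Returns how many nodes to add: grow only when jobs are queued,
    the oldest job has waited past the threshold, and the cluster is
    still below the ceiling.
    """
    if queued_jobs == 0:
        return 0  # nothing waiting, no reason to grow
    if oldest_wait_secs <= job_wait_time:
        return 0  # jobs haven't waited long enough yet
    # Grow by the configured step, but never past max_nodes.
    return min(add_nodes_per_iter, max_nodes - current_nodes)
```

Under this sketch, running with -m 10 would mean that once a job has waited past the -w threshold, the cluster grows by the -a step each iteration until it reaches 10 nodes.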
HTH,
~Justin