<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hi Chris,<br>
<br>
Cheers - "schedule on demand" should give us an improvement, I'll
look into that. As to bringing down nodes 15 mins after the
peak, these are nodes that have already run for 50 mins. The
trick is to empty nodes that are between 50 mins (or the default
of 45 mins) past the hour and the hour. I don't have any magic
for this, but the load formula below (host_rank-load_short*256) is
pretty good at emptying nodes when they can be and I'm relying on
a random distribution of start times to find some that can be
brought down. It would be great if gridEngine had a binary flag
to say if had been up for 0-50 or 50-60 mins which could then go
in load_formula, but I'm not going to hack it in.<br>
<br>
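(Thinking about it, a custom load sensor could report such a
flag as a complex without patching the scheduler. Below is a
rough, untested sketch; the complex name uptime_window is
invented and would need registering with qconf -mc before it
could appear in load_formula.)<br>
<br>
<pre>
#!/usr/bin/env python
# Hypothetical SGE load sensor: reports 1 while this host is in
# the first 50 minutes of its current uptime hour, 0 in the last
# 10. The complex name "uptime_window" is made up and must be
# registered (qconf -mc) before use in load_formula.
import socket
import sys

def minutes_into_hour():
    with open("/proc/uptime") as f:
        uptime_mins = float(f.read().split()[0]) / 60.0
    return uptime_mins % 60.0

host = socket.gethostname()
while True:
    # execd writes a line per load interval, "quit" to terminate
    line = sys.stdin.readline()
    if not line or line.strip() == "quit":
        break
    print("begin")
    print("%s:uptime_window:%d" % (host, int(minutes_into_hour() &lt; 50.0)))
    print("end")
    sys.stdout.flush()
</pre>
<br>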
Thanks very much for your feedback.<br>
<br>
<br>
Tony<br>
<br>
On 14/06/16 13:38, Chris Dagdigian wrote:<br>
</div>
<blockquote cite="mid:575FFAD3.4060701@bioteam.net" type="cite">
<br>
<br>
Looks like you are using the Grid Engine scheduler setting that
invokes a sort/schedule/dispatch cycle every 5 seconds
<br>
<br>
That is an improvement on the default SGE setting of 15 seconds,
but there may be a better setting for you if you really do have a
ton of bursty jobs that run for short periods of time (90 seconds).
<br>
<br>
The SGE feature is called "schedule on demand" and what it does is
instead of a cyclical loop every N seconds (which can be
overhead/wasteful for very short jobs) the new setting will
trigger a sort/dispatch run every time a new job is submitted or
a running job transitions state and exits.
<br>
<br>
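From memory this is switched on via the flush parameters in the
scheduler configuration (qconf -msconf); something along these
lines, though do check sched_conf(5) for your Grid Engine
version:<br>
<br>
<pre>
schedule_interval       0:0:15
flush_submit_sec        1
flush_finish_sec        1
</pre>
<br>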
Most people with high numbers of short jobs tend to use on-demand
scheduling with Grid Engine rather than the periodic cycle that is
the default.
<br>
<br>
Like your work with tuning and the load formula stuff!
<br>
<br>
And remember -- it can be "wasteful" to bring down nodes in 15
minutes if you have already paid for a full hour of EC2 instance
time!
<br>
<br>
-Chris
<br>
<br>
<br>
<blockquote type="cite">Tony Robinson
<a class="moz-txt-link-rfc2396E" href="mailto:tonyr@speechmatics.com"><mailto:tonyr@speechmatics.com></a>
<br>
June 13, 2016 at 10:17 PM
<br>
I'm mostly there with my load balancer, so I thought that it
would be worth a write-up for those that are interested.
<br>
<br>
Just to restate my aims: I have a lot of jobs with a duration of
about 90s, and these are very bursty. So I can easily have 400
jobs queued to run and *not* want to bring up any new nodes.
I've somewhat arbitrarily picked 240s as the maximum time I want
any job to wait before running.
<br>
<br>
The first thing I needed to do was to reduce the poll time. As
my jobs take about 90s I need to poll much more frequently than
that (basic sampling theorem); I picked 30s, although I am
tempted to reduce this.
<br>
<br>
So the first improvement was to make the polling time a true
polling period, not a wait time. This is just setting a timer
at the start of the polling loop and sleeping for the remaining
time at the end.
<br>
<br>
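A minimal sketch of that loop (run_balancer_pass is a stand-in
for the real body):<br>
<br>
<pre>
import time

POLL_INTERVAL = 30  # seconds between loop starts

def run_balancer_pass():
    """Stand-in for the real work: poll the queue, add/remove nodes."""
    pass

while True:
    loop_start = time.time()
    run_balancer_pass()
    # Sleep only for the remainder of the period, so iterations
    # start every POLL_INTERVAL seconds however long the pass took.
    time.sleep(max(0.0, POLL_INTERVAL - (time.time() - loop_start)))
</pre>
<br>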
Also, to speed things up I eliminated all settle time. We
already know how many nodes are up and how many jobs are
running, so it was a simple matter to assume that the difference
will start running soon and to ignore that many jobs at the head
of the queue.
<br>
<br>
As noted earlier in the thread, the existing code doesn't
properly sample the past job durations, so I simply reload the
whole lookback_window (default 3 hours) every time. It doesn't
take long, and if speed were an issue there's a lot of code that
converts between date formats which could work on plain integers
on reading.
<br>
<br>
I also read the job name. This allows me to calculate the mean
and variance of each job type. I am estimating the job
duration as mean + 0.5 * sqrt(var / njob), so once I have
hundreds of jobs it's pretty much the mean.
<br>
<br>
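In code this is roughly the following (a sketch; the qacct
parsing that produces the (name, duration) pairs is elided):<br>
<br>
<pre>
from collections import defaultdict
from math import sqrt

def duration_estimates(history):
    """history: iterable of (job_name, duration_secs) pairs."""
    by_name = defaultdict(list)
    for name, secs in history:
        by_name[name].append(secs)
    estimates = {}
    for name, durations in by_name.items():
        n = len(durations)
        mean = sum(durations) / float(n)
        var = sum((d - mean) ** 2 for d in durations) / float(n)
        # Pad the mean by half a standard error; with hundreds of
        # jobs this converges to the plain mean.
        estimates[name] = mean + 0.5 * sqrt(var / n)
    return estimates
</pre>
<br>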
With all of this in place I can estimate how long it's going to
take to run the queue. At the moment I'm ignoring the timings
of running jobs and assuming queued jobs come from the same
distribution, so the time taken to run is the job duration
(calculated above) divided by the number of slots available. I
know how long each job has been waiting and how long I expect
each job to wait, so I can see if any job would exceed my
maximum job wait time. If one does, I assume that I've got
another node up instantly (clearly false; they take more than my
240s to come up) and rerun the calculation until I know how many
nodes I need to add. In practice it takes so long to boot nodes
that there's no point trying to bring up more than 2 or 4; they
come up serially, and all of this will change when I update
StarCluster.
<br>
<br>
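The calculation looks roughly like this (a sketch with
illustrative names; the drain model is the simple one described
above, and slots is assumed to be non-zero):<br>
<br>
<pre>
def nodes_to_add(waits, est_duration, slots, slots_per_node,
                 max_wait=240.0, max_new_nodes=4):
    """waits: seconds each queued job has waited, in queue order."""
    extra = 0
    while extra &lt; max_new_nodes:
        s = slots + extra * slots_per_node
        # Job i starts once the i jobs ahead have drained through
        # the s available slots, i.e. after about i // s durations.
        if all(w + (i // s) * est_duration &lt;= max_wait
               for i, w in enumerate(waits)):
            break
        extra += 1  # pretend another node is up instantly and retry
    return extra
</pre>
<br>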
The other important change is to always load the most loaded
machine, with a slight preference for low node numbers when all
are empty (so that the master gets loaded first). This is really
important: with such variable load you've got to bring nodes
down at the end of the hour, so you need as many empty nodes as
possible. The scheduler configuration is:
<br>
<pre>
algorithm                    default
schedule_interval            0:0:05
maxujobs                     0
queue_sort_method            load
job_load_adjustments         np_load_avg=0.50
load_adjustment_decay_time   0:7:30
load_formula                 host_rank-load_short*256
</pre>
<br>
<br>
So that's about it. Just recently I had 496 slots running; the
above algorithm brought up just enough nodes to cope with the
load and brought some of them down 15 mins later when the load
decreased.
<br>
<br>
Overall we've doubled our efficiency using this algorithm; that
is, we can provide the same quality of service at half the
variable cost.
<br>
<br>
The code is not in a state to be shared publicly, but I'm happy
to share it privately. It really needs a StarCluster upgrade
(thanks Mich for the email) so that nodes can be brought up in
parallel; we should do this shortly and then I'll work on this
some more.
<br>
<br>
<br>
Tony
<br>
<br>
On 03/04/16 13:41, Tony Robinson wrote:
<br>
<br>
<br>
_______________________________________________
<br>
StarCluster mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a>
<br>
<a class="moz-txt-link-freetext" href="http://mailman.mit.edu/mailman/listinfo/starcluster">http://mailman.mit.edu/mailman/listinfo/starcluster</a>
<br>
Tony Robinson <a class="moz-txt-link-rfc2396E" href="mailto:tonyr@speechmatics.com"><mailto:tonyr@speechmatics.com></a>
<br>
April 3, 2016 at 8:41 AM
<br>
Okay, I've found another bug in the load balancer which
explains why avg_job_duration() was getting shorter and shorter.
<br>
<br>
get_qatime() initially loads the whole (3 hours of) history, but
after that sets temp_lookback_window = self.polling_interval.
<br>
<br>
The problem with this is that self.polling_interval has to be
much shorter than a job duration (it's got to be able to keep
up), and the -b option to qacct sets "the earliest start time
for jobs to be summarized", so it only selects jobs that started
recently and have already finished (otherwise they wouldn't be
in qacct) - which means only the very short jobs. So the cache
is initially populated quite reasonably but then only gets
updated with very short jobs; the long ones never get into the
cache.
<br>
<br>
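(For reference, the fix I have in mind is just to always hand
qacct -b a start time computed from the full window; a sketch:)<br>
<br>
<pre>
from datetime import datetime, timedelta

def qatime(lookback_hours=3):
    """Start time for qacct -b covering the whole lookback window."""
    begin = datetime.now() - timedelta(hours=lookback_hours)
    return begin.strftime("%y%m%d%H%M")  # qacct -b takes [[CC]YY]MMDDhhmm
</pre>
<br>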
As I say below, I don't think any of this code is used anyway so
it doesn't matter too much that it's all broken.
<br>
<br>
I'll progress with my (weekend and part-time) clean-up and
implementation of a true predictive load balancer. I have both
(a) mean and variance for all job types and (b) working code
assuming that avg_job_duration() is correct, so it's probably
only another day's work to get it solid (or a month or two of
elapsed time; I'm done for this weekend).
<br>
<br>
<br>
Tony
<br>
<br>
On 01/04/16 17:01, Tony Robinson wrote:
<br>
<br>
<br>
_______________________________________________
<br>
StarCluster mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a>
<br>
<a class="moz-txt-link-freetext" href="http://mailman.mit.edu/mailman/listinfo/starcluster">http://mailman.mit.edu/mailman/listinfo/starcluster</a>
<br>
Tony Robinson <a class="moz-txt-link-rfc2396E" href="mailto:tonyr@speechmatics.com"><mailto:tonyr@speechmatics.com></a>
<br>
April 1, 2016 at 12:01 PM
<br>
On 01/04/16 16:22, Rajat Banerjee wrote:
<br>
<blockquote type="cite">Regarding:
<br>
How about we just call qacct every 5 mins, or if the qacct
buffer is empty.
<br>
Calling qacct and getting the job stats is the first part of
the load balancer's loop to see what the cluster is up to. I
prioritized knowing the current state, and keeping the LB
running its loop as fast as possible (2-10 seconds), so it
could run in a 1-minute loop and stay roughly on schedule.
It's easy to run the whole LB loop with 5 minutes between
loops with the command line arg polling_interval, if that
suits your workload better. I do not mean to sound dismissive,
but the command line options (with reasonable defaults) are
there so you can test and tweak to your workload.
<br>
</blockquote>
<br>
Ah, I wasn't very clear. What I mean is that we only update
the qacct stats every 5 minutes. I run the main loop every
30s.
<br>
<br>
But calling qacct doesn't take any time - we could do it every
polling interval:
<br>
<br>
<pre>
root@master:~# date
Fri Apr  1 16:54:31 BST 2016
root@master:~# echo qacct -j -b `date +%y%m%d`$((`date +%H` - 3))`date +%M`
qacct -j -b 1604011354
root@master:~# time qacct -j -b `date +%y%m%d`$((`date +%H` - 3))`date +%M` | wc
  99506  224476 3307423

real    0m0.588s
user    0m0.560s
sys     0m0.076s
root@master:~#
</pre>
<br>
<br>
<br>
If calling qacct is slow then the update could be run at the end
of the loop so it would have all of the loop wait time to
complete in.
<br>
<br>
<blockquote type="cite">Regarding:
<br>
Three sorts of jobs, all of which should occur in the same
numbers,
<br>
Have you tried testing your call to qacct to see if it's
returning what you want? You could modify it in your source if
it's not representative of your jobs:
<br>
<a class="moz-txt-link-freetext" href="https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528">https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528</a>
<br>
qacct_cmd = 'qacct -j -b ' + qatime
<br>
</blockquote>
<br>
Yes, thanks, I'm comparing to running qacct outside of the load
balancer.
<br>
<br>
<blockquote type="cite">Obviously one size doesn't fit all here,
but if you find a set of args for qacct that work better for
you, let me know.
<br>
</blockquote>
<br>
At the moment I don't think that the output of qacct is used at
all, is it? I thought it was only used to give job stats; I
don't think it's really used to bring nodes up/down.
<br>
<br>
<br>
Tony
<br>
<br>
_______________________________________________
<br>
StarCluster mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a>
<br>
<a class="moz-txt-link-freetext" href="http://mailman.mit.edu/mailman/listinfo/starcluster">http://mailman.mit.edu/mailman/listinfo/starcluster</a>
<br>
Rajat Banerjee <a class="moz-txt-link-rfc2396E" href="mailto:rajatb@post.harvard.edu"><mailto:rajatb@post.harvard.edu></a>
<br>
April 1, 2016 at 11:22 AM
<br>
Regarding:
<br>
How about we just call qacct every 5 mins, or if the qacct
buffer is empty.
<br>
Calling qacct and getting the job stats is the first part of the
load balancer's loop to see what the cluster is up to. I
prioritized knowing the current state, and keeping the LB
running its loop as fast as possible (2-10 seconds), so it
could run in a 1-minute loop and stay roughly on schedule. It's
easy to run the whole LB loop with 5 minutes between loops with
the command line arg polling_interval, if that suits your
workload better. I do not mean to sound dismissive, but the
command line options (with reasonable defaults) are there so you
can test and tweak to your workload.
<br>
<br>
Regarding:
<br>
Three sorts of jobs, all of which should occur in the same
numbers,
<br>
Have you tried testing your call to qacct to see if it's
returning what you want? You could modify it in your source if
it's not representative of your jobs:
<br>
<a class="moz-txt-link-freetext" href="https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528">https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528</a>
<br>
qacct_cmd = 'qacct -j -b ' + qatime
<br>
<br>
Obviously one size doesn't fit all here, but if you find a set
of args for qacct that work better for you, let me know.
<br>
<br>
Thanks,
<br>
Raj
<br>
<br>
<br>
_______________________________________________
<br>
StarCluster mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a>
<br>
<a class="moz-txt-link-freetext" href="http://mailman.mit.edu/mailman/listinfo/starcluster">http://mailman.mit.edu/mailman/listinfo/starcluster</a>
<br>
Tony Robinson <a class="moz-txt-link-rfc2396E" href="mailto:tonyr@speechmatics.com"><mailto:tonyr@speechmatics.com></a>
<br>
April 1, 2016 at 11:08 AM
<br>
Hi Raj and all,
<br>
<br>
I think that there is another problem as well, one that I
haven't tracked down yet. I have three sorts of jobs, all of
which should occur in the same numbers, but when I measure
what's in the cache one job name is massively
under-represented.
<br>
<br>
We have:
<br>
<br>
lookback_window = 3
<br>
<br>
which means we pull in three hours of history (by default).
How about we just call qacct every 5 mins, or whenever the qacct
buffer is empty? I don't think every 5 mins is a big overhead,
and the "if empty" check means that we can power up a new
cluster and it'll just be a bit slower before it populates the
job stats (but not that much slower, as it's parsing an empty
buffer). Also, I don't see the need to continually recalculate
the stats - they could be computed every time qacct is called
and stored. If this is going to break something then do let me
know.
<br>
<br>
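Something like this is what I have in mind (a sketch;
parse_qacct and compute_stats stand in for the existing parsing
and the stored per-job stats):<br>
<br>
<pre>
import time

QACCT_REFRESH = 300  # seconds between qacct reloads

def parse_qacct():
    """Stand-in: run qacct and return the parsed job records."""
    return []

def compute_stats(records):
    """Stand-in: mean/variance per job name, computed per reload."""
    return {}

class JobStatsCache(object):
    def __init__(self):
        self.stats = {}
        self.last_refresh = 0.0

    def get(self):
        # Reload if the cache is empty or the refresh interval passed.
        if not self.stats or time.time() - self.last_refresh > QACCT_REFRESH:
            self.stats = compute_stats(parse_qacct())
            self.last_refresh = time.time()
        return self.stats
</pre>
<br>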
I don't know when I'll next get time for this but when I get it
working I'll report back my findings (I have an AWS cluster
where nodes are brought up or down every few minutes so there is
plenty of data to try this out on).
<br>
<br>
<br>
Tony
<br>
<br>
On 01/04/16 15:44, Rajat Banerjee wrote:
<br>
<br>
<br>
_______________________________________________
<br>
StarCluster mailing list
<br>
<a class="moz-txt-link-abbreviated" href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a>
<br>
<a class="moz-txt-link-freetext" href="http://mailman.mit.edu/mailman/listinfo/starcluster">http://mailman.mit.edu/mailman/listinfo/starcluster</a>
<br>
</blockquote>
<br>
</blockquote>
<br>
<br>
<div class="moz-signature">-- <br>
Speechmatics is a trading name of Cantab Research Limited<br>
We are hiring: <a href="http://www.speechmatics.com/careers">www.speechmatics.com/careers</a><br>
Dr A J Robinson, Founder, Cantab Research Ltd<br>
Phone direct: 01223 794096, office: 01223 794497<br>
Company reg no GB 05697423, VAT reg no 925606030<br>
51 Canterbury Street, Cambridge, CB4 3QG, UK<br>
</div>
</body>
</html>