<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">I'm mostly there with my load balancer,
      so I thought it would be worth a write-up for those who are
      interested.<br>
      <br>
      Just to restate my aims: I have a lot of jobs with a duration of
      about 90s, and these are very bursty. So I can easily have 400
      jobs queued to run and *not* want to bring up any new nodes.
      I've somewhat arbitrarily picked 240s as the maximum time I want
      any job to wait before running.<br>
<br>
      The first thing I needed to do was to reduce the poll time. As
      my jobs take about 90s I need to poll much more frequently than
      that (basic sampling theorem) - I picked 30s, although I am
      tempted to reduce this further.<br>
<br>
      So the first improvement was to make the polling time a real
      polling time, not a wait time. This just means setting a timer
      at the start of the polling loop and sleeping for whatever
      remains of the interval at the end.<br>
<br>
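      In sketch form it looks like this (a minimal illustration, not
      the actual balancer code; <tt>do_polling_work</tt> stands in for
      the real per-pass work):<br>

```python
import time

POLL_INTERVAL = 30.0  # seconds between the *starts* of successive passes


def remaining_sleep(start, now, interval=POLL_INTERVAL):
    """Time left to sleep so one pass takes `interval` seconds overall.
    Never negative: if the work overran, the next pass starts at once."""
    return max(0.0, interval - (now - start))


def run(do_polling_work, passes=None):
    """Poll at a fixed interval rather than waiting a fixed time after
    each pass: time the work and sleep only for the remainder."""
    while passes is None or passes > 0:
        start = time.time()
        do_polling_work()
        time.sleep(remaining_sleep(start, time.time()))
        if passes is not None:
            passes -= 1
```

      The difference matters because a pass that takes 20s would turn a
      30s "wait" into a 50s effective period; timing the pass keeps the
      period at 30s.<br>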
      Also, to speed things up I eliminated all settle time. We
      already know how many nodes are up and how many jobs are running,
      so it is a simple matter to assume that the difference will start
      running soon and to ignore that many jobs at the head of the
      queue.<br>
<br>
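      That is, only the jobs beyond the free-slot count actually need
      new capacity (a sketch; names are mine, not StarCluster's):<br>

```python
def jobs_needing_capacity(queued_jobs, slots_up, jobs_running):
    """Skip jobs at the head of the queue that will start as soon as
    the scheduler catches up: we already know how many slots are up
    and how many jobs are running, so the difference will run soon."""
    free_slots = max(0, slots_up - jobs_running)
    return queued_jobs[free_slots:]
```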
      As noted earlier in the thread, the existing code doesn't properly
      sample the past job durations, so I simply reload the whole
      lookback_window (default 3 hours) every time. It doesn't take
      long, and if speed were an issue there is plenty of code that
      converts between date formats which could instead parse straight
      to integers on reading.<br>
<br>
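      For example, timestamps could be converted to epoch seconds once
      on reading, after which duration arithmetic is plain integer
      subtraction (a sketch assuming the C locale and a qacct-style
      timestamp; the actual format can vary):<br>

```python
import calendar
import time


def qacct_time_to_epoch(stamp):
    """Parse a timestamp such as 'Fri Apr  1 16:54:31 2016' straight
    to integer epoch seconds, so later calculations never re-parse
    date strings."""
    return calendar.timegm(time.strptime(stamp, "%a %b %d %H:%M:%S %Y"))


def duration(start_stamp, end_stamp):
    """Job duration in seconds from two parsed timestamps."""
    return qacct_time_to_epoch(end_stamp) - qacct_time_to_epoch(start_stamp)
```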
      I also read the job name. This allows me to calculate the mean
      and variance of each job type. I am estimating the job duration
      as mean + 0.5 * sqrt(var / njob), so as I accumulate hundreds of
      jobs it's pretty much the mean.<br>
<br>
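      A minimal version of that estimator (my sketch, not the real
      code) might look like:<br>

```python
import math
from collections import defaultdict


class DurationEstimator:
    """Per-job-type mean and variance of observed durations.  The
    estimate mean + 0.5 * sqrt(var / njob) adds a small safety margin
    that shrinks as observations accumulate, so with hundreds of jobs
    it is essentially the mean."""

    def __init__(self):
        self.durations = defaultdict(list)

    def observe(self, job_name, seconds):
        self.durations[job_name].append(seconds)

    def estimate(self, job_name):
        d = self.durations[job_name]
        n = len(d)
        mean = sum(d) / n
        var = sum((x - mean) ** 2 for x in d) / n  # population variance
        return mean + 0.5 * math.sqrt(var / n)
```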
      With all of this in place I can estimate how long it's going to
      take to run the queue. At the moment I'm ignoring the timings of
      running jobs and assuming queued jobs come from the same
      distribution, so the time taken to run is the job duration
      (calculated above) divided by the number of slots available. I
      know how long each job has been waiting and how long I expect
      each job to wait, so I can see if any job would exceed my
      maximum job wait time. If one does, I assume that I've got
      another node up instantly (clearly false - they take more than my
      240s to come up) and rerun the calculation until I know how many
      nodes I need to add. In practice, it takes so long to boot nodes
      that there's no point trying to bring up more than 2 or 4. This
      is because they come up serially; all of this will change when I
      update StarCluster.<br>
<br>
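      The calculation can be sketched like this (my illustration of the
      idea; <tt>SLOTS_PER_NODE</tt> and <tt>MAX_NEW_NODES</tt> are
      assumed values, and the instant-node assumption is the same
      deliberate simplification as above):<br>

```python
MAX_WAIT = 240.0    # maximum acceptable wait per job, in seconds
SLOTS_PER_NODE = 8  # assumed slots gained per new node (hypothetical)
MAX_NEW_NODES = 4   # booting is serial, so adding more is pointless


def nodes_to_add(waits, est_duration, slots):
    """How many nodes to bring up so no queued job's total wait should
    exceed MAX_WAIT.  `waits` holds each queued job's wait so far (head
    of queue first); jobs drain at one per est_duration / slots
    seconds.  Each extra node is assumed (falsely) to be up instantly,
    and the calculation is rerun with the larger slot count."""
    for added in range(MAX_NEW_NODES + 1):
        per_job = est_duration / (slots + added * SLOTS_PER_NODE)
        if all(waited + (i + 1) * per_job <= MAX_WAIT
               for i, waited in enumerate(waits)):
            return added
    return MAX_NEW_NODES
```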
      The other important change is to always load the most loaded
      machine, with a slight preference for low node numbers when all
      are empty (so that the master gets loaded first). This is really
      important: with such a variable load you've got to bring nodes
      down at the end of the hour, and so you need as many empty nodes
      as possible.<br>
<pre>algorithm default
schedule_interval 0:0:05
maxujobs 0
queue_sort_method load
job_load_adjustments np_load_avg=0.50
load_adjustment_decay_time 0:7:30
load_formula host_rank-load_short*256
</pre>
<br>
      So that's about it. Just recently I had 496 slots running; the
      above algorithm brought up just enough nodes to cope with the load
      and brought some of them down 15 mins later when the load
      decreased.<br>
      <br>
      Overall we've doubled our efficiency using this algorithm - that
      is, we can provide the same quality of service at half the
      variable cost.<br>
<br>
      The code is not in a state to be shared publicly, but I'm happy to
      share privately. It really needs a StarCluster upgrade (thanks
      Mich for the email) so that nodes can be brought up in parallel;
      we should do this shortly and then I'll work on this some more.<br>
<br>
<br>
      Tony<br>
<br>
On 03/04/16 13:41, Tony Robinson wrote:<br>
</div>
<blockquote cite="mid:57010F83.6020907@speechmatics.com" type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <div class="moz-cite-prefix">Okay, I've found another bug in the
      load balancer, which explains why avg_job_duration() was getting
      shorter and shorter.<br>
<br>
get_qatime() initially loads the whole (3 hours) history, but
after that sets temp_lookback_window = self.polling_interval<br>
<br>
      The problem with this is that self.polling_interval has to be much
      shorter than a job duration (it's got to be able to keep up), and
      the -b option to qacct sets "the earliest start time for jobs to
      be summarized", so it only selects jobs that started recently
      and have already finished (so that they appear in qacct) - hence
      they must be the very short jobs. The cache is therefore
      populated quite reasonably at first, but afterwards only gets
      updated with very short jobs; the long ones never make it into
      the cache.<br>
<br>
As I say below, I don't think any of this code is used anyway so
it doesn't matter too much that it's all broken.<br>
<br>
      I'll progress with my (weekend and part-time) clean-up and
      implementation of a true predictive load balancer. I have both
      (a) mean and variance for all job types and (b) working code
      assuming that avg_job_duration() is correct, so it's probably
      only another day's work to get solid (or a month or two of
      elapsed time - I'm done for this weekend).<br>
<br>
<br>
Tony<br>
<br>
On 01/04/16 17:01, Tony Robinson wrote:<br>
</div>
<blockquote cite="mid:56FE9B54.6080207@speechmatics.com"
type="cite">
<div class="moz-cite-prefix">On 01/04/16 16:22, Rajat Banerjee
wrote:<br>
</div>
<blockquote
cite="mid:CAAEsPuc=aOO75vH=yVrXd0HGaP=PQsShPf6Lfg+FOR_05wjezQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>Regarding:<br>
How about we just call qacct every 5 mins, or if
the qacct buffer is empty. <br>
</div>
                  <div>calling qacct and getting the job stats is
                    the first part of the load balancer's loop to see
                    what the cluster is up to. I prioritized knowing
                    the current state, and keeping the LB running
                    its loop as fast as possible (2-10 seconds), so
                    it could run in a 1-minute loop and stay roughly
                    on schedule. It's easy to run the whole LB loop
                    with 5 minutes between loops with the command
                    line arg <span class="">polling_interval, if
                      that suits your workload better. I do not mean
                      to sound dismissive, but the command line
                      options (with reasonable defaults) are there so
                      you can test and tweak to your workload.<br>
                    </span></div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
      Ah, I wasn't very clear. What I mean is that we only update
      the qacct stats every 5 minutes. I run the main loop every
      30s.<br>
      <br>
      But calling qacct doesn't take any time - we could do it every
      polling interval:<br>
<br>
root@master:~# date<br>
Fri Apr 1 16:54:31 BST 2016<br>
root@master:~# echo qacct -j -b `date +%y%m%d`$((`date +%H` -
3))`date +%m`<br>
qacct -j -b 1604011304<br>
root@master:~# time qacct -j -b `date +%y%m%d`$((`date +%H` -
3))`date +%m` | wc<br>
99506 224476 3307423<br>
<br>
real 0m0.588s<br>
user 0m0.560s<br>
sys 0m0.076s<br>
root@master:~# <br>
<br>
<br>
      If calling qacct were slow then the update could be run at the
      end of the loop, so that it would have all of the loop wait time
      in which to complete.<br>
<br>
<blockquote
cite="mid:CAAEsPuc=aOO75vH=yVrXd0HGaP=PQsShPf6Lfg+FOR_05wjezQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>Regarding:<br>
</div>
Three sorts of jobs, all of which should occur in
the same numbers,<br>
</div>
Have you tried testing your call to qacct to see if
it's returning what you want? You could modify it in
your source if it's not representative of your jobs:<br>
<a moz-do-not-send="true"
href="https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528">https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528</a><br>
qacct_cmd <span class="">=</span> <span class=""><span
class="">'</span><tt>qacct -j -b </tt><span
class="">'</span></span> <span class="">+</span>
qatime<br>
</div>
</div>
</div>
</div>
</blockquote>
<br>
Yes, thanks, I'm comparing to running qacct outside of the load
balancer.<br>
<br>
<blockquote
cite="mid:CAAEsPuc=aOO75vH=yVrXd0HGaP=PQsShPf6Lfg+FOR_05wjezQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div>Obviously one size doesn't fit all here, but if you
find a set of args for qacct that work better for you,
let me know.<br>
</div>
</div>
</div>
</div>
</blockquote>
<br>
      At the moment I don't think the output of qacct is used at
      all, is it? I thought it was only used to report job stats; I
      don't think it's actually used to bring nodes up/down.<br>
<br>
<br>
Tony<br>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
<br>
<div class="moz-signature">-- <br>
Speechmatics is a trading name of Cantab Research Limited<br>
We are hiring: <a href="https://www.speechmatics.com/careers">www.speechmatics.com/careers</a><br>
Dr A J Robinson, Founder, Cantab Research Ltd<br>
Phone direct: 01223 794096, office: 01223 794497<br>
Company reg no GB 05697423, VAT reg no 925606030<br>
51 Canterbury Street, Cambridge, CB4 3QG, UK</div>
</body>
</html>