[StarCluster] Is StarCluster still under active development?

Tony Robinson tonyr at speechmatics.com
Thu Jun 16 08:40:10 EDT 2016


Hi Chris,

Cheers - "schedule on demand" should give us an improvement, I'll look 
into that.   As to bringing down nodes 15 mins after the peak, these are 
nodes that have already run for 50 mins.   The trick is to empty nodes 
that are between 50 mins (or the default of 45 mins) past the hour and 
the hour.   I don't have any magic for this, but the load formula below 
(host_rank-load_short*256) is pretty good at emptying nodes when they 
can be and I'm relying on a random distribution of start times to find 
some that can be brought down.  It would be great if gridEngine had a 
binary flag to say if had been up for 0-50 or 50-60 mins which could 
then go in load_formula, but I'm not going to hack it in.
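
For what it's worth, the check I'd like is roughly the following - just a 
minimal sketch, assuming we can get each instance's (naive UTC) launch time 
from boto; the helper name and the 50-minute threshold are illustrative:

import datetime

BILLING_HOUR = 60   # EC2 instances are billed by the whole hour
DRAIN_AFTER = 50    # only try to empty a node in the last 10 mins of its hour

def in_drain_window(launch_time, now=None):
    """True if the node is 50-60 minutes into its current billed hour."""
    now = now or datetime.datetime.utcnow()
    minutes_up = (now - launch_time).total_seconds() / 60.0
    return (minutes_up % BILLING_HOUR) >= DRAIN_AFTER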

Thanks very much for your feedback.


Tony

On 14/06/16 13:38, Chris Dagdigian wrote:
>
>
> Looks like you are using the Grid Engine scheduler setting that 
> invokes a sort/schedule/dispatch cycle every 5 seconds
>
> That is an improvement on the default SGE setting of 15 seconds, but 
> there may be a better setting for you if you really do have a ton of 
> bursty jobs that run for short periods of time (90 seconds).
>
> The SGE feature is called "schedule on demand": instead of a cyclical 
> loop every N seconds (which can be overhead/wasteful for very short 
> jobs), the new setting triggers a sort/dispatch run every time a new 
> job is submitted or a running job transitions state and exits.
>
> Most people with high numbers of short jobs tend to use on-demand 
> scheduling with Grid Engine rather than the periodic cycle that is the 
> default.
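>
> (For reference - and do check the sched_conf man page for your Grid 
> Engine version - the scheduler parameters I have in mind are the flush 
> timers, set via "qconf -msconf"; something like the following should 
> approximate on-demand behaviour:)
>
> flush_submit_sec                  1
> flush_finish_sec                  1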
>
> Like your work with tuning and the load formula stuff!
>
> And remember -- it can be "wasteful" to bring down nodes in 15 minutes 
> if you have already paid for a full hour of EC2 instance time!
>
> -Chris
>
>
>> Tony Robinson <tonyr at speechmatics.com>
>> June 13, 2016 at 10:17 PM
>> I'm mostly there with my load balancer, so I thought it would be 
>> worth a write-up for those who are interested.
>>
>> Just to restate my aims, I have a lot of jobs with duration of about 
>> 90s and these are very bursty.   So I can easily have 400 jobs queued 
>> to run and *not* want to bring up any new nodes. I've somewhat 
>> arbitrarily picked 240s as the maximum time I want any job to wait 
>> before running.
>>
>> The first thing I needed to do was to reduce the poll time.   As my 
>> jobs take about 90s I need to poll much more frequently than that 
>> (basic sampling theorem) - I picked 30s, although I am tempted to 
>> reduce this further.
>>
>> So the first improvement was to make the polling time a real polling 
>> time, not a wait time.    This is just setting a timer at the start 
>> of the polling loop and sleeping for the appropriate time at the end.
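>>
>> In code terms it's roughly this (a sketch, not the actual StarCluster 
>> loop; run_load_balancer_pass is a placeholder for the real work):
>>
>> import time
>>
>> POLL_INTERVAL = 30.0              # seconds between polls
>>
>> while True:
>>     start = time.time()
>>     run_load_balancer_pass()      # placeholder: poll SGE, add/remove nodes
>>     elapsed = time.time() - start
>>     time.sleep(max(0.0, POLL_INTERVAL - elapsed))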
>>
>> Also, to speed things up I eliminated all settle time.   We already 
>> know how many nodes are up and how many jobs are running, so it was a 
>> simple matter to assume that the difference will start running soon 
>> and to ignore that many jobs at the head of the queue.
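>>
>> As a sketch (the names are illustrative, not the real variables):
>>
>> # slots we expect to be filled imminently; assume that many queued jobs
>> # will start without needing any new nodes
>> free_slots = max(0, total_slots - running_jobs)
>> jobs_to_consider = queued_jobs[free_slots:]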
>>
>> As noted earlier in the thread, the existing code doesn't properly 
>> sample the past job durations, so I simply reload the whole 
>> lookback_window (default 3 hours) every time.   It doesn't take long, 
>> and if speed were an issue there's lots of code that converts between 
>> date formats which could instead work on integers parsed at read time.
>>
>> I also read the job name.   This allows me to calculate the mean and 
>> variance of each job type.    I am estimating the job duration as 
>> mean + 0.5 * sqrt(var / njob), so as I get hundreds of jobs it's 
>> pretty much the mean.
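>>
>> As a sketch (the job fields and list names are illustrative, not the 
>> actual qacct parser):
>>
>> import math
>> from collections import defaultdict
>>
>> durations = defaultdict(list)     # job name -> past durations in seconds
>> for job in finished_jobs:         # placeholder: jobs parsed from qacct
>>     durations[job.name].append(job.end_time - job.start_time)
>>
>> def estimated_duration(name):
>>     """Mean plus half a standard error, so rare job types get padded a bit."""
>>     d = durations[name]
>>     n = len(d)
>>     mean = sum(d) / n
>>     var = sum((x - mean) ** 2 for x in d) / n
>>     return mean + 0.5 * math.sqrt(var / n)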
>>
>> With all of this in place I can estimate how long it's going to take 
>> to run the queue.   At the moment I'm ignoring the timings of running 
>> jobs and assuming queued jobs come from the same distribution, so the 
>> time taken to drain the queue is the summed job duration estimate 
>> (calculated above) divided by the number of slots available.   I know 
>> how long each job has been waiting and how long I expect each job to 
>> wait, so I can see whether any job would exceed my maximum job wait 
>> time.   If one would, I assume that I've got another node up instantly 
>> (clearly false - they take more than my 240s to come up) and rerun the 
>> calculation until I know how many nodes I need to add.   In practice, 
>> it takes so long to boot nodes that there's no point trying to bring 
>> up more than 2 or 4: they come up serially, and all of this will 
>> change when I update StarCluster.
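>>
>> In outline the decision looks something like this - a sketch under the 
>> assumptions above, with illustrative names and a fixed slots-per-node:
>>
>> MAX_WAIT = 240.0             # max seconds any job is allowed to wait
>> MAX_NODES_TO_ADD = 4         # booting is serial, so adding more is pointless
>> SLOTS_PER_NODE = 8           # assumption: fixed slot count per instance
>>
>> def nodes_to_add(queued, slots):
>>     """queued: list of (already_waited_secs, estimated_duration_secs)."""
>>     extra = 0
>>     while extra < MAX_NODES_TO_ADD:
>>         capacity = slots + extra * SLOTS_PER_NODE
>>         drain_time = 0.0     # pretend the extra nodes are up instantly
>>         ok = True
>>         for waited, duration in queued:
>>             start = drain_time           # crude serial model of the queue
>>             drain_time += duration / capacity
>>             if waited + start > MAX_WAIT:
>>                 ok = False
>>                 break
>>         if ok:
>>             break
>>         extra += 1
>>     return extra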
>>
>> The other important change is to always load the most loaded machine, 
>> with a slight preference for low node numbers when all are empty (so 
>> that master gets loaded first).   This really matters with such 
>> variable load: you've got to bring nodes down at the end of the hour, 
>> so you need as many completely empty nodes as possible.   These are 
>> the scheduler settings I'm using - the load formula makes busier hosts 
>> sort as less loaded, so jobs pack onto them, with host_rank breaking 
>> ties towards the master:
>>
>> algorithm                      default
>> schedule_interval              0:0:05
>> maxujobs                       0
>> queue_sort_method              load
>> job_load_adjustments           np_load_avg=0.50
>> load_adjustment_decay_time     0:7:30
>> load_formula                   host_rank-load_short*256
>>
>> So that's about it.   Just recently I had 496 slots running: the 
>> above algorithm brought up just enough nodes to cope with the load 
>> and brought some of them down 15 mins later when the load decreased.
>>
>> Overall we've doubled our efficiency using this algorithm; that is, we 
>> can provide the same quality of service at half the variable cost.
>>
>> The code is not in a state to be shared publicly, but I'm happy to 
>> share it privately.   It really needs a StarCluster upgrade (thanks 
>> Mich for the email) so that nodes can be brought up in parallel; we 
>> should do this shortly, and then I'll work on this some more.
>>
>>
>> Tony
>>
>> On 03/04/16 13:41, Tony Robinson wrote:
>>
>>
>> Tony Robinson <tonyr at speechmatics.com>
>> April 3, 2016 at 8:41 AM
>> Okay, I've found another bug with the load balancer which explains 
>> why avg_job_duration() was getting shorter and shorter.
>>
>> get_qatime() initially loads the whole (3 hours) history, but after 
>> that sets  temp_lookback_window = self.polling_interval
>>
>> The problem with this is that self.polling_interval has to be much 
>> shorter than a job duration (it's got to be able to keep up), and the 
>> -b option to qacct sets "the earliest start time for jobs to be 
>> summarized", so it only selects jobs that both started recently and 
>> have already finished (otherwise they aren't in qacct yet) - in other 
>> words, only the very short jobs.   So the cache is originally 
>> populated quite reasonably but then only gets updated with very short 
>> jobs; the long ones never get into the cache.
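>>
>> The shape of the fix, as a sketch (not the actual StarCluster code):
>>
>> import datetime
>>
>> LOOKBACK_HOURS = 3                   # same as the lookback_window default
>>
>> def qatime(now=None):
>>     """Always ask qacct for the full lookback window, never just the last poll."""
>>     now = now or datetime.datetime.utcnow()
>>     start = now - datetime.timedelta(hours=LOOKBACK_HOURS)
>>     return start.strftime("%Y%m%d%H%M")  # SGE date_time format [[CC]YY]MMDDhhmm
>>
>> qacct_cmd = "qacct -j -b " + qatime()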
>>
>> As I say below, I don't think any of this code is used anyway so it 
>> doesn't matter too much that it's all broken.
>>
>> I'll progress with my (weekend and part-time) clean-up and 
>> implementation of a true predictive load balancer.  I have both (a) 
>> mean and variance for all job types and (b) working code assuming 
>> that avg_job_duration() is correct, so it's probably only another 
>> day's work to get solid (or a month or two of elapsed time - I'm done 
>> for this weekend).
>>
>>
>> Tony
>>
>> On 01/04/16 17:01, Tony Robinson wrote:
>>
>>
>> Tony Robinson <tonyr at speechmatics.com>
>> April 1, 2016 at 12:01 PM
>> On 01/04/16 16:22, Rajat Banerjee wrote:
>>> Regarding:
>>> How about we just call qacct every 5 mins, or if the qacct buffer is 
>>> empty.
>>> Calling qacct and getting the job stats is the first part of the 
>>> load balancer's loop to see what the cluster is up to. I prioritized 
>>> knowing the current state, and keeping the LB running its loop as 
>>> fast as possible (2-10 seconds), so it could run in a 1-minute loop 
>>> and stay roughly on schedule. It's easy to run the whole LB loop 
>>> with 5 minutes between loops with the command line arg 
>>> polling_interval, if that suits your workload better. I do not mean 
>>> to sound dismissive, but the command line options (with reasonable 
>>> defaults) are there so you can test and tweak to your workload.
>>
>> Ah, I wasn't very clear.   What I mean is that we only update the 
>> qacct stats every 5 minutes.   I run the main loop every 30s.
>>
>> But calling qacct doesn't take any time - we could do it every 
>> polling interval:
>>
>> root@master:~# date
>> Fri Apr  1 16:54:31 BST 2016
>> root@master:~# echo qacct -j -b `date +%y%m%d`$((`date +%H` - 3))`date +%m`
>> qacct -j -b 1604011304
>> root@master:~# time qacct -j -b `date +%y%m%d`$((`date +%H` - 3))`date +%m` | wc
>>   99506  224476 3307423
>>
>> real    0m0.588s
>> user    0m0.560s
>> sys     0m0.076s
>> root@master:~#
>>
>>
>> If calling qacct is slow then the update could be run at the end of 
>> the loop, so that it has the whole loop wait time in which to complete.
>>
>>> Regarding:
>>> Three sorts of jobs, all of which should occur in the same numbers,
>>> Have you tried testing your call to qacct to see if it's returning 
>>> what you want? You could modify it in your source if it's not 
>>> representative of your jobs:
>>> https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528 
>>>
>>> qacct_cmd = 'qacct -j -b ' + qatime
>>
>> Yes, thanks, I'm comparing to running qacct outside of the load 
>> balancer.
>>
>>> Obviously one size doesn't fit all here, but if you find a set of 
>>> args for qacct that work better for you, let me know.
>>
>> At the moment I don't think the output of qacct is used at all, is 
>> it?   I thought it was only used to report job stats; I don't think 
>> it's really used to bring nodes up/down.
>>
>>
>> Tony
>>
>> Rajat Banerjee <rajatb at post.harvard.edu>
>> April 1, 2016 at 11:22 AM
>> Regarding:
>> How about we just call qacct every 5 mins, or if the qacct buffer is 
>> empty.
>> Calling qacct and getting the job stats is the first part of the load 
>> balancer's loop to see what the cluster is up to. I prioritized 
>> knowing the current state, and keeping the LB running its loop as 
>> fast as possible (2-10 seconds), so it could run in a 1-minute loop 
>> and stay roughly on schedule. It's easy to run the whole LB loop with 
>> 5 minutes between loops with the command line arg polling_interval, 
>> if that suits your workload better. I do not mean to sound 
>> dismissive, but the command line options (with reasonable 
>> defaults) are there so you can test and tweak to your workload.
>>
>> Regarding:
>> Three sorts of jobs, all of which should occur in the same numbers,
>> Have you tried testing your call to qacct to see if it's returning 
>> what you want? You could modify it in your source if it's not 
>> representative of your jobs:
>> https://github.com/jtriley/StarCluster/blob/develop/starcluster/balancers/sge/__init__.py#L528 
>>
>> qacct_cmd = 'qacct -j -b ' + qatime
>>
>> Obviously one size doesn't fit all here, but if you find a set of 
>> args for qacct that work better for you, let me know.
>>
>> Thanks,
>> Raj
>>
>>
>> Tony Robinson <tonyr at speechmatics.com>
>> April 1, 2016 at 11:08 AM
>> Hi Raj and all,
>>
>> I think that there is another problem as well, one that I haven't 
>> tracked down yet.  I have three sorts of jobs, all of which should 
>> occur in the same numbers, but when I measure what's in the cache 
>> one job name is massively under-represented.
>>
>> We have:
>>
>> lookback_window = 3
>>
>> which means we pull in three hours of history (by default). How about 
>> we just call qacct every 5 mins, or whenever the qacct buffer is 
>> empty?   I don't think every 5 mins is a big overhead, and the "if 
>> empty" case means that we can power up a new cluster and it'll just 
>> be a bit slower before it populates the job stats (but not that much 
>> slower, as it's parsing an empty buffer). Also I don't see the need 
>> to continually recalculate the stats - these could be recalculated 
>> every time qacct is called and then stored. If this is going to break 
>> something then do let me know.
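>>
>> Roughly what I have in mind, as a sketch (parse_qacct and summarise 
>> are placeholders, not existing functions):
>>
>> import time
>>
>> QACCT_REFRESH = 300                      # refresh job history every 5 minutes
>> _cache = {"jobs": [], "stats": {}, "last": 0.0}
>>
>> def job_stats():
>>     now = time.time()
>>     if not _cache["jobs"] or now - _cache["last"] > QACCT_REFRESH:
>>         _cache["jobs"] = parse_qacct()                 # placeholder: run and parse qacct
>>         _cache["stats"] = summarise(_cache["jobs"])    # placeholder: mean/var per job name
>>         _cache["last"] = now
>>     return _cache["stats"]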
>>
>> I don't know when I'll next get time for this but when I get it 
>> working I'll report back my findings (I have an AWS cluster where 
>> nodes are brought up or down every few minutes so there is plenty of 
>> data to try this out on).
>>
>>
>> Tony
>>
>> On 01/04/16 15:44, Rajat Banerjee wrote:
>>
>>
>


-- 
Speechmatics is a trading name of Cantab Research Limited
We are hiring: www.speechmatics.com/careers
Dr A J Robinson, Founder, Cantab Research Ltd
Phone direct: 01223 794096, office: 01223 794497
Company reg no GB 05697423, VAT reg no 925606030
51 Canterbury Street, Cambridge, CB4 3QG, UK