[StarCluster] GPU load sensor support
François-Michel L'Heureux
fmlheureux at datacratic.com
Tue Dec 18 11:40:15 EST 2012
Hi Mark!
This is more a question for the OGS mailing list (users-request at gridengine.org) than the StarCluster one, but here is my answer anyway.
I don't think it can be done out of the box. I would handle it by adding your
gpus as a consumable complex variable. First, do
qconf -mc
There you can define your gpu resource as a complex value. I would go with
gpu     gpu        INT    <=      YES           YES          {default}   0
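For reference, the columns in that line should be, in this order (I'm going
from memory here, so double-check against the comment header shown in the
editor):

#name   shortcut   type   relop   requestable   consumable   default     urgency

The two YES fields make the resource requestable and consumable, which is
what lets OGS count gpus down as jobs claim them.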
Replace {default} with the number of gpus you want a request to consume by
default (i.e. when it doesn't ask for any explicitly). Then, for each of your
nodes, you must do
qconf -me {node name}
Then, on the "complex_values" line, add "gpu={num_gpus}" (remove the NONE
string if it's present). Of course, replace "num_gpus" with that node's
actual number of gpus.
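As an illustration (the host name here is made up), a node with 2 gpus would
end up with a line like

complex_values        gpu=2

in its exec host configuration. If you want to avoid the interactive editor,
I believe something like

qconf -mattr exechost complex_values gpu=2 node001

does the same thing in one shot, but verify the exact syntax on your OGS
version.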
Then, when making your requests, you can add "-l gpu={num_gpus_requested}" to
your command. Taking your question as an example, you would do "qsub -l
gpu=2 ...". IMPORTANT: if you set 0 as the default in qconf -mc, every
request that uses gpus must pass the "-l gpu=" option, otherwise OGS will
assume it doesn't use any.
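To put it all together (the script name is just an example), something like

qsub -l gpu=2 my_gpu_job.sh

should only be dispatched to a node that still has 2 unclaimed gpus, and you
should be able to see how many gpus remain on each host with

qhost -F gpu

(or "qstat -F gpu" for a per-queue view; again, I'm going from memory, so
check the man pages).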
I hope this helps!
Mich
From: Mark Ebersole <markeber at gmail.com>
To: starcluster at MIT.EDU
Cc:
Date: Mon, 17 Dec 2012 15:27:59 -0700
Subject: [StarCluster] GPU load sensor support
Does the Sun Grid Engine used in StarCluster support the use of load sensors
for the GPUs in a GPU-compute AMI? For example, can I do a qsub and request a
node (or nodes) that has two free GPUs?
I searched through the features/bugs/mail archive, but don't see any info
on this.
Thanks!