<div dir="ltr">Rayson,<div><br></div><div>Iīve finally got the development version of StarCluster working. Now itīs time to move on to VPCīs tests.</div><div>If someone wants to have the development version to play with that, thatīs the steps needed:</div>
<div><br></div><div>1. Provision an Ubuntu Server t1.micro instance</div><div>2. Update the package index - $ sudo apt-get update</div><div>3. Install the gcc compiler - $ sudo apt-get install gcc</div><div>4. Install the Python dev libs - $ sudo apt-get install python-dev</div>
<div>5. Install git - $ sudo apt-get install git</div><div>6. Clone StarCluster with git - $ git clone git://<a href="http://github.com/jtriley/StarCluster.git">github.com/jtriley/StarCluster.git</a></div><div>7. Build and install it:</div>
<div>7.1 - $ cd StarCluster</div><div>7.2 - $ sudo python distribute_setup.py</div><div>7.3 - $ sudo python setup.py install</div><div>8. You should now have StarCluster 0.999 running - $ starcluster --version</div><div><br>
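</div><div>For anyone who wants to paste it all in one go, here is the same sequence as a single snippet (a sketch assuming a fresh Ubuntu Server instance; 0.999 is simply the version string the development branch reports right now):</div><div><br></div><div>$ sudo apt-get update<br>$ sudo apt-get install -y gcc python-dev git<br>$ git clone git://github.com/jtriley/StarCluster.git<br>$ cd StarCluster<br>$ sudo python distribute_setup.py<br>$ sudo python setup.py install<br>$ starcluster --version   # should report 0.999<br></div><div><br>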
</div><div>Regarding running StarCluster on a VPC - Do you have any tutorial or doc to help me on that?<br></div><div><br></div><div>All best.</div><div><br>Sergio</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Sat, Nov 30, 2013 at 6:09 PM, Rayson Ho <span dir="ltr"><<a href="mailto:raysonlogin@gmail.com" target="_blank">raysonlogin@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
If you are using the release version, then VPC should work.<br>
<br>
I think I referred you to issue 21<br>
(<a href="https://github.com/jtriley/StarCluster/issues/21" target="_blank">https://github.com/jtriley/StarCluster/issues/21</a>) a while back, but<br>
IIRC, it has not been merged into 0.94.x yet.<br>
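<br>
For reference, once that lands, I'd expect a VPC cluster template to look roughly like this (a sketch only; the SUBNET_ID and PUBLIC_IPS keys are my guess from the issue 21 / development branch discussion, so double-check them against the merged code):<br>
<br>
[cluster vpccluster]<br>
KEYNAME = mykey<br>
CLUSTER_SIZE = 2<br>
NODE_IMAGE_ID = ami-xxxxxxxx        # any StarCluster-compatible AMI<br>
NODE_INSTANCE_TYPE = c3.8xlarge<br>
SUBNET_ID = subnet-xxxxxxxx         # guess: the VPC subnet to launch into<br>
PUBLIC_IPS = True                   # guess: request public IPs so you can SSH in<br>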
<div class="im HOEnZb"><br>
Rayson<br>
<br>
==================================================<br>
Open Grid Scheduler - The Official Open Source Grid Engine<br>
<a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
<a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
<br>
<br>
</div><div class="HOEnZb"><div class="h5">On Sat, Nov 30, 2013 at 2:26 PM, Sergio Mafra <<a href="mailto:sergiohmafra@gmail.com">sergiohmafra@gmail.com</a>> wrote:<br>
> Rayson,<br>
><br>
> Got a little confused by what you said: Does StarCluster run in a VPC or not?<br>
> There seem to be a lot of discussions going on, and I saw a change in the config file<br>
> (on GitHub) that seems to bring this capability to StarCluster.<br>
> Can you explain the status of this?<br>
><br>
><br>
> On Mon, Nov 25, 2013 at 3:42 PM, Rayson Ho <<a href="mailto:raysonlogin@gmail.com">raysonlogin@gmail.com</a>> wrote:<br>
>><br>
>> Thanks Sergio, I checked the docs the day c3 was announced, and there was<br>
>> no mention of placement group support for c3. I think I was reading<br>
>> an older version of the doc.<br>
>><br>
>> Yuichi, we can't take advantage of SR-IOV (yet) as we don't run in a VPC.<br>
>><br>
>> Rayson<br>
>><br>
>> ==================================================<br>
>> Open Grid Scheduler - The Official Open Source Grid Engine<br>
>> <a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
>> <a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
>><br>
>><br>
>> On Mon, Nov 25, 2013 at 9:39 AM, Yoshiara, Yuichi <<a href="mailto:yoshiara@amazon.co.jp">yoshiara@amazon.co.jp</a>><br>
>> wrote:<br>
>> ><br>
>> > Hi Sergio,<br>
>> ><br>
>> ><br>
>> ><br>
>> > There is a new feature called Enhanced Networking which is available in<br>
>> > C3 instances.<br>
>> ><br>
>> > Please read through the links below and see if they help you.<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html" target="_blank">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html</a><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > <a href="http://aws.amazon.com/ec2/faqs/#What_networking_capabilities_are_included_in_this_feature" target="_blank">http://aws.amazon.com/ec2/faqs/#What_networking_capabilities_are_included_in_this_feature</a><br>
>> ><br>
>> ><br>
>> ><br>
>> > Yuichi Yoshiara<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> > From: <a href="mailto:starcluster-bounces@mit.edu">starcluster-bounces@mit.edu</a> [mailto:<a href="mailto:starcluster-bounces@mit.edu">starcluster-bounces@mit.edu</a>]<br>
>> > On Behalf Of Sergio Mafra<br>
>> > Sent: Monday, November 25, 2013 8:16 PM<br>
>> > To: Rayson Ho; <a href="mailto:starcluster@mit.edu">starcluster@mit.edu</a><br>
>> > Subject: Re: [StarCluster] MIT StarCluster and the new C3 instances<br>
>> ><br>
>> ><br>
>> ><br>
>> > Hi Rayson,<br>
>> ><br>
>> ><br>
>> ><br>
>> > I noticed that StarCluster didn't allocate a placement group for C3<br>
>> > instances, so I did it myself in the code.<br>
>> ><br>
>> > But this was not very helpful, since C3 instances get "High" network<br>
>> > performance instead of the 10 Gigabit network.<br>
>> ><br>
>> > In my tests, using C3 or CC2 in a cluster config where MPI is required<br>
>> > gives roughly the same processing time.<br>
>> ><br>
>> > Hope that AWS puts them on the 10 Gigabit network in the near future.<br>
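>> ><br>
>> > For what it's worth, the manual equivalent of the change I made is roughly this<br>
>> > (just a sketch with the AWS CLI; the group name and AMI id are placeholders):<br>
>> ><br>
>> > $ aws ec2 create-placement-group --group-name c3-pg --strategy cluster<br>
>> > $ aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type c3.8xlarge \<br>
>> >       --count 2 --placement GroupName=c3-pg<br>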
>> ><br>
>> > Compute optimized / c3.8xlarge:  64-bit, Intel Xeon E5-2680, 32 vCPU, 108 ECU, 60.00 GiB, 640 GB (2 x 320 SSD), -, 240, High<br>
>> > Compute optimized / cc2.8xlarge: 64-bit, Intel Xeon E5-2670, 32 vCPU, 88 ECU, 60.50 GiB, 3370 GB (4 x 840), -, 240, 10 Gigabit<br>
>> ><br>
>> > On Mon, Nov 18, 2013 at 7:13 PM, Sergio Mafra <<a href="mailto:sergiohmafra@gmail.com">sergiohmafra@gmail.com</a>><br>
>> > wrote:<br>
>> ><br>
>> > Hi Rayson,<br>
>> ><br>
>> ><br>
>> ><br>
>> > I'll try to do that.<br>
>> ><br>
>> ><br>
>> ><br>
>> > All best,<br>
>> ><br>
>> ><br>
>> ><br>
>> > Sergio<br>
>> ><br>
>> ><br>
>> ><br>
>> > On Mon, Nov 18, 2013 at 1:27 PM, Rayson Ho <<a href="mailto:raysonlogin@gmail.com">raysonlogin@gmail.com</a>><br>
>> > wrote:<br>
>> ><br>
>> > It's in the development branch:<br>
>> ><br>
>> > <a href="https://github.com/jtriley/StarCluster/pull/325" target="_blank">https://github.com/jtriley/StarCluster/pull/325</a><br>
>> ><br>
>> > It's a few lines of code, and you can merge the changes into your<br>
>> > local version very easily.<br>
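>> ><br>
>> > For example, something like this should bring it into your clone (assuming<br>
>> > origin points at the jtriley/StarCluster repo on GitHub; "pr-325" is just a<br>
>> > local branch name):<br>
>> ><br>
>> > $ cd StarCluster<br>
>> > $ git fetch origin pull/325/head:pr-325<br>
>> > $ git merge pr-325<br>
>> > $ sudo python setup.py install<br>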
>> ><br>
>> > Rayson<br>
>> ><br>
>> > ==================================================<br>
>> > Open Grid Scheduler - The Official Open Source Grid Engine<br>
>> > <a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
>> > <a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
>> ><br>
>> ><br>
>> ><br>
>> > On Mon, Nov 18, 2013 at 8:29 AM, Sergio Mafra <<a href="mailto:sergiohmafra@gmail.com">sergiohmafra@gmail.com</a>><br>
>> > wrote:<br>
>> > > Hi everyone,<br>
>> > ><br>
>> > > I attended AWS re:Invent this year and liked it a lot. The right place to go.<br>
>> > > AWS announced the new C3 family of HPC instances there.<br>
>> > > I'm anxious to test them in my models, but I understand that<br>
>> > > StarCluster must<br>
>> > > be made aware of them in the code.<br>
>> > > So when will the C3s be available?<br>
>> > ><br>
>> > > All the best,<br>
>> > ><br>
>> > > Sergio Mafra<br>
>> > ><br>
>> ><br>
>> > > _______________________________________________<br>
>> > > StarCluster mailing list<br>
>> > > <a href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a><br>
>> > > <a href="http://mailman.mit.edu/mailman/listinfo/starcluster" target="_blank">http://mailman.mit.edu/mailman/listinfo/starcluster</a><br>
>> > ><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
><br>
><br>
</div></div></blockquote></div><br></div>