[StarCluster] MIT StarCluster and the new C3 instances

Sergio Mafra sergiohmafra at gmail.com
Tue Dec 10 08:06:47 EST 2013


Rayson,

I've finally got the development version of StarCluster working. Now it's
time to move on to VPC tests.
If anyone wants the development version to play with, these are the steps
needed (a one-shot version of the same commands is sketched after the list):

1. Provision an Ubuntu Server t1.micro instance
2. Update the package lists - $ sudo apt-get update
3. Install the gcc compiler - $ sudo apt-get install gcc
4. Install the Python dev libs - $ sudo apt-get install python-dev
5. Install git - $ sudo apt-get install git
6. Clone the StarCluster repository - $ git clone git://github.com/jtriley/StarCluster.git
7. Build and install it:
7.1 - $ cd StarCluster
7.2 - $ sudo python distribute_setup.py
7.3 - $ sudo python setup.py install
8. You should now have StarCluster 0.999 installed - $ starcluster --version
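
For convenience, here are the same steps as one copy-and-paste block (a
rough sketch, not tested as a single script; it assumes a fresh Ubuntu
instance with sudo access):

$ sudo apt-get update
$ sudo apt-get install -y gcc python-dev git
$ git clone git://github.com/jtriley/StarCluster.git
$ cd StarCluster
$ sudo python distribute_setup.py
$ sudo python setup.py install
$ starcluster --version    # confirm the development version is installed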

Regarding running StarCluster on a VPC: do you have any tutorial or doc to
help me with that?

All best.

Sergio


On Sat, Nov 30, 2013 at 6:09 PM, Rayson Ho <raysonlogin at gmail.com> wrote:

> If you are using the release version, then VPC should work.
>
> I think I referred you to issue 21
> (https://github.com/jtriley/StarCluster/issues/21) a while back, but
> IIRC, it has not been merged into 0.94.x yet.
>
> Rayson
>
> ==================================================
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>
>
> On Sat, Nov 30, 2013 at 2:26 PM, Sergio Mafra <sergiohmafra at gmail.com>
> wrote:
> > Rayson,
> >
> > I got a little confused by what you said: does StarCluster run in a VPC or not?
> > There seem to be a lot of discussions going on, and I saw a change in the config
> > file (on GitHub) that seems to bring this capability to StarCluster.
> > Can you explain the status of this?
> >
> >
> > On Mon, Nov 25, 2013 at 3:42 PM, Rayson Ho <raysonlogin at gmail.com>
> wrote:
> >>
> >> Thanks Sergio, I checked the docs the day C3 was announced and there was
> >> no mention of placement group support for C3. I think I was reading
> >> an older version of the doc.
> >>
> >> Yuichi, we can't take advantage of SR-IOV (yet) as we don't run in a VPC.
> >>
> >> Rayson
> >>
> >> ==================================================
> >> Open Grid Scheduler - The Official Open Source Grid Engine
> >> http://gridscheduler.sourceforge.net/
> >> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
> >>
> >>
> >> On Mon, Nov 25, 2013 at 9:39 AM, Yoshiara, Yuichi <
> yoshiara at amazon.co.jp>
> >> wrote:
> >> >
> >> > Hi Sergio,
> >> >
> >> >
> >> >
> >> > There is a new feature called Enhanced Networking which is available in
> >> > C3 instances.
> >> >
> >> > Please read through the links below and see if they help you.
> >> >
> >> > http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
> >> >
> >> > http://aws.amazon.com/ec2/faqs/#What_networking_capabilities_are_included_in_this_feature
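> >> >
> >> > As a quick check (a rough sketch; the instance ID is a placeholder and
> >> > the AWS CLI is assumed to be installed and configured), you can ask EC2
> >> > whether SR-IOV is enabled for an instance, and confirm on the instance
> >> > itself that the ixgbevf driver is in use:
> >> >
> >> > $ aws ec2 describe-instance-attribute --instance-id i-xxxxxxxx \
> >> >       --attribute sriovNetSupport
> >> > $ ethtool -i eth0    # "driver: ixgbevf" means enhanced networking is active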
> >> >
> >> >
> >> >
> >> > Yuichi Yoshiara
> >> >
> >> > From: starcluster-bounces at mit.edu [mailto:starcluster-bounces at mit.edu
> ]
> >> > On Behalf Of Sergio Mafra
> >> > Sent: Monday, November 25, 2013 8:16 PM
> >> > To: Rayson Ho; starcluster at mit.edu
> >> > Subject: Re: [StarCluster] MIT StarCluster and the new C3 instances
> >> >
> >> >
> >> >
> >> > Hi Rayson,
> >> >
> >> >
> >> >
> >> > I noticed that StarCluster didn't allocate a placement group for C3
> >> > instances, so I did it myself in the code.
> >> >
> >> > But this was not helpful, since C3 instances use the "High" network
> >> > performance tier instead of the 10 Gigabit network.
> >> >
> >> > In my tests, using C3 or CC2 in a cluster config where MPI is required
> >> > gives about the same processing time.
> >> >
> >> > Hope that AWS puts them on the 10 Gigabit network in the near future.
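> >> >
> >> > As a sanity check that the patched code really put the C3 nodes into a
> >> > placement group (a rough sketch, assuming the AWS CLI is installed and
> >> > configured):
> >> >
> >> > $ aws ec2 describe-instances \
> >> >       --filters "Name=instance-type,Values=c3.8xlarge" \
> >> >       --query "Reservations[].Instances[].[InstanceId,Placement.GroupName]" \
> >> >       --output text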
> >> >
> >> > Compute optimized | c3.8xlarge  | 64-bit | Intel Xeon E5-2680 | 32 vCPU | 108 ECU | 60.00 GiB | 640 GB (2 x 320 SSD) | - | 240 | High
> >> > Compute optimized | cc2.8xlarge | 64-bit | Intel Xeon E5-2670 | 32 vCPU | 88 ECU | 60.50 GiB | 3370 GB (4 x 840) | - | 240 | 10 Gigabit
> >> >
> >> > On Mon, Nov 18, 2013 at 7:13 PM, Sergio Mafra <sergiohmafra at gmail.com
> >
> >> > wrote:
> >> >
> >> > Hi Rayson,
> >> >
> >> >
> >> >
> >> > I'll try to do that.
> >> >
> >> >
> >> >
> >> > All best,
> >> >
> >> >
> >> >
> >> > Sergio
> >> >
> >> >
> >> >
> >> > On Mon, Nov 18, 2013 at 1:27 PM, Rayson Ho <raysonlogin at gmail.com>
> >> > wrote:
> >> >
> >> > It's in the development branch:
> >> >
> >> > https://github.com/jtriley/StarCluster/pull/325
> >> >
> >> > It's a few lines of code, and you can merge the changes into your
> >> > local version very easily.
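> >> >
> >> > For example (a rough, untested sketch, assuming your clone's origin
> >> > points at the jtriley/StarCluster repository on GitHub), you can fetch
> >> > the pull request ref and merge it locally:
> >> >
> >> > $ cd StarCluster
> >> > $ git fetch origin pull/325/head:pr-325
> >> > $ git merge pr-325
> >> > $ sudo python setup.py install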
> >> >
> >> > Rayson
> >> >
> >> > ==================================================
> >> > Open Grid Scheduler - The Official Open Source Grid Engine
> >> > http://gridscheduler.sourceforge.net/
> >> > http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
> >> >
> >> >
> >> >
> >> > On Mon, Nov 18, 2013 at 8:29 AM, Sergio Mafra <sergiohmafra at gmail.com
> >
> >> > wrote:
> >> > > Hi everyone,
> >> > >
> >> > > I attended AWS re:Invent this year and liked it a lot. The right place to go.
> >> > > AWS announced the new C3 family of HPC instances there.
> >> > > I'm anxious to test them in my models, but I understand that StarCluster
> >> > > must be made aware of them in the code.
> >> > > So when will the C3s be available?
> >> > >
> >> > > All the best,
> >> > >
> >> > > Sergio Mafra
> >> > >
> >> >
> >> > > _______________________________________________
> >> > > StarCluster mailing list
> >> > > StarCluster at mit.edu
> >> > > http://mailman.mit.edu/mailman/listinfo/starcluster
> >> > >
> >> >
> >> >
> >> >
> >> >
> >
> >
>