<div dir="ltr">Hi Rayson,<div><br></div><div><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px">Is there any AMI with the Intel </span><tt style="color:rgb(51,51,51);font-size:14px;line-height:20px">ixgbevf </tt><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px">Virtual Function driver ready for StarCluster?</span></div>
<div><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px">I think that this should be part of StarCluster's public AMIs.</span></div><div>
<span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px"><br></span></div><div><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px">All best,</span></div>
<div><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px"><br></span></div><div><span style="color:rgb(51,51,51);font-family:Georgia,Utopia,'Palatino Linotype',Palatino,serif;font-size:14px;line-height:20px">Sergio</span></div>
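[Editor's note: for anyone wanting to check whether a given AMI already has the Virtual Function driver active, here is a minimal sketch (Linux sysfs only, run on the instance itself) that lists which kernel driver each network interface is bound to. An interface bound to ixgbevf means Enhanced Networking is in use. This is an illustrative helper, not part of StarCluster.]

```python
import os

def iface_drivers():
    """Map each network interface to the kernel driver it is bound to."""
    drivers = {}
    for iface in os.listdir('/sys/class/net'):
        link = os.path.join('/sys/class/net', iface, 'device', 'driver')
        if os.path.islink(link):  # virtual devices like 'lo' have no driver
            drivers[iface] = os.path.basename(os.readlink(link))
    return drivers

if __name__ == '__main__':
    for iface, driver in sorted(iface_drivers().items()):
        # On an Enhanced Networking instance, eth0 shows 'ixgbevf'
        print('%s -> %s' % (iface, driver))
```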
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jan 8, 2014 at 3:52 PM, Rayson Ho <span dir="ltr"><<a href="mailto:raysonlogin@gmail.com" target="_blank">raysonlogin@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The Enhanced Networking feature provides much better latency for all<br>
instance types that support it. For the c3.8xlarge instance type,<br>
Enhanced Networking allows the instance to utilize over 95% of the<br>
10Gbit Ethernet bandwidth:<br>
<br>
<a href="http://blogs.scalablelogic.com/2013/12/enhanced-networking-in-aws-cloud.html" target="_blank">http://blogs.scalablelogic.com/2013/12/enhanced-networking-in-aws-cloud.html</a><br>
<a href="http://blogs.scalablelogic.com/2014/01/enhanced-networking-in-aws-cloud-part-2.html" target="_blank">http://blogs.scalablelogic.com/2014/01/enhanced-networking-in-aws-cloud-part-2.html</a><br>
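[Editor's note: a rough sketch for confirming that a running instance actually has the feature enabled, using boto 2 (the library StarCluster itself builds on). The instance ID i-xxxxxxxx and the region are placeholders; EC2 credentials must already be configured, so this is illustrative rather than runnable as-is.]

```python
import boto.ec2

# Credentials are read from the environment or ~/.boto
conn = boto.ec2.connect_to_region('us-east-1')

# 'sriovNetSupport' is the EC2 attribute behind Enhanced Networking;
# it reports 'simple' when SR-IOV is enabled for the instance
attr = conn.get_instance_attribute('i-xxxxxxxx', 'sriovNetSupport')
print(attr)
```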
<br>
Rayson<br>
<br>
==================================================<br>
Open Grid Scheduler - The Official Open Source Grid Engine<br>
<a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
<a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
<br>
<br>
<br>
On Mon, Nov 25, 2013 at 12:42 PM, Rayson Ho <<a href="mailto:raysonlogin@gmail.com">raysonlogin@gmail.com</a>> wrote:<br>
> Thanks Sergio, I checked the docs the day C3 was announced; there was<br>
> no mention of placement group support for C3. I think I was reading<br>
> an older version of the doc.<br>
><br>
> Yuichi, we can't take advantage of SR-IOV (yet) as we don't run in a VPC.<br>
><br>
> Rayson<br>
><br>
> ==================================================<br>
> Open Grid Scheduler - The Official Open Source Grid Engine<br>
> <a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
> <a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
><br>
><br>
> On Mon, Nov 25, 2013 at 9:39 AM, Yoshiara, Yuichi <<a href="mailto:yoshiara@amazon.co.jp">yoshiara@amazon.co.jp</a>> wrote:<br>
>><br>
>> Hi Sergio,<br>
>><br>
>><br>
>><br>
>> There is a new feature called Enhanced Networking, which is available on C3 instances.<br>
>><br>
>> Please read through the links below and see if they help you.<br>
>><br>
>><br>
>><br>
>> <a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html" target="_blank">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html</a><br>
>><br>
>><br>
>><br>
>> <a href="http://aws.amazon.com/ec2/faqs/#What_networking_capabilities_are_included_in_this_feature" target="_blank">http://aws.amazon.com/ec2/faqs/#What_networking_capabilities_are_included_in_this_feature</a><br>
>><br>
>><br>
>><br>
>> Yuichi Yoshiara<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> From: <a href="mailto:starcluster-bounces@mit.edu">starcluster-bounces@mit.edu</a> [mailto:<a href="mailto:starcluster-bounces@mit.edu">starcluster-bounces@mit.edu</a>] On Behalf Of Sergio Mafra<br>
>> Sent: Monday, November 25, 2013 8:16 PM<br>
>> To: Rayson Ho; <a href="mailto:starcluster@mit.edu">starcluster@mit.edu</a><br>
>> Subject: Re: [StarCluster] MIT StarCluster and the new C3 instances<br>
>><br>
>><br>
>><br>
>> Hi Rayson,<br>
>><br>
>><br>
>><br>
>> I noticed that StarCluster didn't allocate a placement group for C3 instances, so I added it myself in the code.<br>
>><br>
>> But this was not helpful, since C3 instances use "High" network throughput instead of the 10 Gigabit network.<br>
>><br>
>> In my tests, using C3 or CC2 in a cluster config where MPI is required results in roughly the same processing time.<br>
>><br>
>> I hope AWS puts them on the 10 Gigabit network in the near future.<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> Compute optimized | c3.8xlarge | 64-bit | Intel Xeon E5-2680 | 32 vCPU | 108 ECU | 60.00 GiB | 640 GB (2 x 320 SSD) | - | 240 | High<br>
>><br>
>> Compute optimized | cc2.8xlarge | 64-bit | Intel Xeon E5-2670 | 32 vCPU | 88 ECU | 60.50 GiB | 3370 GB (4 x 840) | - | 240 | 10 Gigabit<br>
>><br>
>><br>
>><br>
>> On Mon, Nov 18, 2013 at 7:13 PM, Sergio Mafra <<a href="mailto:sergiohmafra@gmail.com">sergiohmafra@gmail.com</a>> wrote:<br>
>><br>
>> Hi Rayson,<br>
>><br>
>><br>
>><br>
>> I'll try to do that.<br>
>><br>
>><br>
>><br>
>> All best,<br>
>><br>
>><br>
>><br>
>> Sergio<br>
>><br>
>><br>
>><br>
>> On Mon, Nov 18, 2013 at 1:27 PM, Rayson Ho <<a href="mailto:raysonlogin@gmail.com">raysonlogin@gmail.com</a>> wrote:<br>
>><br>
>> It's in the development branch:<br>
>><br>
>> <a href="https://github.com/jtriley/StarCluster/pull/325" target="_blank">https://github.com/jtriley/StarCluster/pull/325</a><br>
>><br>
>> It's a few lines of code, and you can merge the changes into your<br>
>> local version very easily.<br>
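[Editor's note: for the curious, the effect of that change can be sketched with boto 2. The group name and AMI ID below are placeholders, and the names are illustrative rather than StarCluster's actual code, so treat this as a sketch that assumes configured EC2 credentials.]

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# A 'cluster' placement group puts instances on the same low-latency
# network fabric, which is what MPI workloads want
conn.create_placement_group('c3-demo', strategy='cluster')

# Launch C3 instances into the group ('ami-xxxxxxxx' is a placeholder)
reservation = conn.run_instances(
    'ami-xxxxxxxx',
    instance_type='c3.8xlarge',
    min_count=2,
    max_count=2,
    placement_group='c3-demo',
)
```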
>><br>
>> Rayson<br>
>><br>
>> ==================================================<br>
>> Open Grid Scheduler - The Official Open Source Grid Engine<br>
>> <a href="http://gridscheduler.sourceforge.net/" target="_blank">http://gridscheduler.sourceforge.net/</a><br>
>> <a href="http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html" target="_blank">http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html</a><br>
>><br>
>><br>
>><br>
>> On Mon, Nov 18, 2013 at 8:29 AM, Sergio Mafra <<a href="mailto:sergiohmafra@gmail.com">sergiohmafra@gmail.com</a>> wrote:<br>
>> > Hi everyone,<br>
>> ><br>
>> > I attended AWS re:Invent this year and liked it a lot. It was the right place to go.<br>
>> > AWS announced the new C3 family of HPC instances there.<br>
>> > I'm eager to test them with my models, but I understand that StarCluster must<br>
>> > be aware of them in its code.<br>
>> > So when will the C3s be available?<br>
>> ><br>
>> > All the best,<br>
>> ><br>
>> > Sergio Mafra<br>
>> ><br>
>><br>
>> > _______________________________________________<br>
>> > StarCluster mailing list<br>
>> > <a href="mailto:StarCluster@mit.edu">StarCluster@mit.edu</a><br>
>> > <a href="http://mailman.mit.edu/mailman/listinfo/starcluster" target="_blank">http://mailman.mit.edu/mailman/listinfo/starcluster</a><br>
>> ><br>
>><br>
>><br>
>><br>
>><br>
</blockquote></div><br></div>