[StarCluster] Using a hi-I/O instance as the master node in a StarCluster Cluster

Oppe, Thomas C ERDC-RDE-ITL-MS Contractor Thomas.C.Oppe at erdc.dren.mil
Fri Jan 4 18:33:32 EST 2013


Dustin,

As always, thank you very much for your advice.  I had been wanting to know the command for finding out which volumes are available to be mounted, and I did not know about "fdisk -l".  I will use that command to see if the SSDs are available.  I did not see the "/mnt" disk when I ran "df -h", but it may show up when I do "fdisk -l".

I tried running HYCOM using the "/mnt" disk on the master node.  Since I do not know how to redirect output files to a different filesystem, I had to put all of the input files on the master node's "/mnt"; HYCOM reads its input files from and writes its output files to the same directory.  I also had to put four of the smaller input files on each node's "/mnt" disk.  The executable could be placed either on every node's "/mnt" disk or on a shared EBS volume.  This approach works fine for the serial I/O runs, where the big output files are written by MPI rank 0, which resides on the master node.  Unfortunately, I did not see an improvement in I/O performance over using a standard EBS volume.  I would have thought that a local disk would be faster than an NFS-shared EBS volume, so I am experimenting with other ideas.  Actually, I am overdrawn on my AWS account, so it may be a while before I can experiment again.
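
One idea I want to try once the account is back in order is a small plug-in that stripes the two ephemeral SSDs on the master into a RAID-0 array with mdadm and mounts it as a scratch directory, in the spirit of the striped-volume plug-in I asked about.  This is only a rough, untested sketch; the device names (/dev/xvdb, /dev/xvdc), the mount point, and the assumption of an Ubuntu-based AMI are all guesses I would confirm with "fdisk -l" first:

# Rough, untested sketch: stripe the master's two ephemeral SSDs into a
# RAID-0 array with mdadm and mount it.  Device names, mount point, and the
# assumption of an Ubuntu-based AMI (apt-get) should all be verified first.
from starcluster.clustersetup import ClusterSetup

class StripeEphemeralSSDs(ClusterSetup):
    def __init__(self, devices='/dev/xvdb,/dev/xvdc', mount_point='/scratch'):
        self.devices = devices.split(',')
        self.mount_point = mount_point

    def run(self, nodes, master, user, user_shell, volumes):
        devs = ' '.join(self.devices)
        # Make sure mdadm is present (assumes an apt-based AMI).
        master.ssh.execute('apt-get -y install mdadm')
        # Build the RAID-0 array, put a filesystem on it, and mount it.
        master.ssh.execute('mdadm --create /dev/md0 --run --level=0 '
                           '--raid-devices=%d %s' % (len(self.devices), devs))
        master.ssh.execute('mkfs.ext4 -F /dev/md0')
        master.ssh.execute('mkdir -p %s' % self.mount_point)
        master.ssh.execute('mount /dev/md0 %s' % self.mount_point)
        master.ssh.execute('chown %s %s' % (user, self.mount_point))

The striped directory could then be NFS-exported to the compute nodes in place of the EBS volume, though whether that helps the parallel I/O case is another question.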

Best wishes.

Tom Oppe

________________________________________
From: Dustin Machi [dmachi at vbi.vt.edu]
Sent: Friday, January 04, 2013 10:44 AM
To: Oppe, Thomas C ERDC-RDE-ITL-MS Contractor
Cc: starcluster at mit.edu
Subject: Re: [StarCluster] Using a hi-I/O instance as the master node in a StarCluster Cluster

I've never messed with them on AWS, but I assume that these get mounted
wherever the local disks normally get mounted: /mnt.  The hi1.4xlarge
should have two 1 TB SSD volumes either already mounted
(http://aws.typepad.com/aws/2012/07/new-high-io-ec2-instance-type-hi14xlarge.html)
or available to mount (check "fdisk -l").  I don't think IOPS
provisioning will matter much here (for the reads/writes to those
volumes, anyway), though it may of course affect the performance of the
EBS volume itself.
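
If they turn out not to be mounted, a small StarCluster plug-in run at
cluster start could format and mount one for you.  Untested sketch only; the
device name (/dev/xvdb) and mount point are guesses you'd want to check
against the fdisk output first:

# Untested sketch of a StarCluster plug-in that formats and mounts one of
# the master's ephemeral SSDs.  The device name and mount point are
# assumptions; check "fdisk -l" on the running instance before using this.
from starcluster.clustersetup import ClusterSetup

class MountEphemeralSSD(ClusterSetup):
    def __init__(self, device='/dev/xvdb', mount_point='/scratch'):
        self.device = device
        self.mount_point = mount_point

    def run(self, nodes, master, user, user_shell, volumes):
        # All commands run on the master over SSH.  mkfs wipes the device.
        master.ssh.execute('mkfs.ext4 -F %s' % self.device)
        master.ssh.execute('mkdir -p %s' % self.mount_point)
        master.ssh.execute('mount %s %s' % (self.device, self.mount_point))
        master.ssh.execute('chown %s %s' % (user, self.mount_point))

You'd point a [plugin ...] section of your config at the class and add it to
the cluster template's plugins list.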

Dustin

On 4 Jan 2013, at 11:06, Oppe, Thomas C ERDC-RDE-ITL-MS Contractor
wrote:

> Dear Sir:
>
>
>
> I was wondering if anyone has tried using a high-performance I/O
> instance (e.g., "hi1.4xlarge") as the master node in a StarCluster
> cluster, with the other nodes being Sandy Bridge "cc2.8xlarge"
> instances.  When I bring up a single "hi1.4xlarge" instance outside of
> StarCluster, there is an option to attach one or two 1-TB SSDs, but
> when I bring up a cluster with "hi1.4xlarge" as the master node, the
> SSDs are nowhere to be found.  Is a plug-in necessary to make the SSDs
> available?  I have a code that needs the fastest I/O available; its
> writes have a combined size of 100 GB during the run.  I have tried
> single Standard and Provisioned IOPS EBS volumes, but the I/O
> performance to these volumes is poor, even with "pre-warming".  Has
> anyone written a plug-in for using striped volumes in StarCluster?  I
> would appreciate any comments or pointers to information.
>
>
>
> Tom Oppe
> _______________________________________________
> StarCluster mailing list
> StarCluster at mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster

