[StarCluster] need scratch space

Jian Feng freedafeng at yahoo.com
Tue Dec 9 12:42:54 EST 2014


Thanks Jennifer and Jin!

You are definitely right. /dev/xvdaa is one of the two SSDs and it is mounted at /mnt. The other SSD (/dev/xvdab) shows up in the device list, but it is not mounted anywhere. I am wondering if there is a way to add the other xvdab disk to /mnt to make a 2*37 GB volume, as Jin mentioned. I really appreciate your help.
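For anyone reading this thread later, here is a minimal sketch of Jin's RAID 0 approach. The device names /dev/xvdaa and /dev/xvdab are assumptions based on the df output in this thread; confirm yours with lsblk first. This requires root, destroys any data on both disks, and assumes mdadm is installed (e.g. via apt-get install mdadm):

```shell
# ASSUMPTION: the two ephemeral SSDs are /dev/xvdaa and /dev/xvdab.
# Verify with: lsblk

# Free the ephemeral disk that StarCluster auto-mounted at /mnt.
umount /mnt

# Stripe the two disks into one RAID 0 array (~74 GB usable).
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdaa /dev/xvdab

# Put a filesystem on the array and mount it as scratch space.
mkfs.ext4 /dev/md0
mkdir -p /scratch
mount /dev/md0 /scratch
```

Note that instance/ephemeral storage is wiped when the instance stops, so /scratch built this way is suitable only for temporary data, and the array must be recreated after each stop/start.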



________________________________
 From: Jennifer Staab <jstaab at cs.unc.edu>
To: Jian Feng <freedafeng at yahoo.com> 
Cc: "starcluster at mit.edu" <starcluster at mit.edu> 
Sent: Tuesday, December 9, 2014 8:24 AM
Subject: Re: [StarCluster] need scratch space
 


Run the command "lsblk" and you will see the instance/ephemeral storage.  In my experience the instance/ephemeral storage is slightly smaller than advertised (for me a 40 GB disk typically shows as 37-37.5 GB), and usually one disk is automatically mounted at "/mnt".  In your case "/dev/xvdaa" is likely one of the instance/ephemeral disks.  If you don't see the other one in the "lsblk" output, you probably forgot to select both instance-store disks when you added storage while creating the EC2 instance.
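As a quick illustration of the check Jennifer describes, the column options below are a convenient (but optional) way to see each disk's size and mount point at a glance:

```shell
# List block devices (one row per disk) with size and mount point,
# so both ephemeral SSDs should appear if they were attached.
lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT
```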

Good Luck.

- Jennifer 

On 12/9/14 9:48 AM, Jin Yu wrote:

You can use these two 40 GB SSDs to make a RAID 0 volume of 80 GB, and then mount it at /scratch. 
>
>
>
>-Jin
>
>
>On Mon, Dec 8, 2014 at 1:55 PM, Jian Feng <freedafeng at yahoo.com> wrote:
>
>Dear starcluster community,
>>
>>
>>I created an EC2 cluster using m3.xlarge instances (2*40 GB SSD). I did not see any scratch space or a /scratch folder at all. Here is the disk space layout. 
>>
>>
>>root at node001:~# df -h
>>Filesystem        Size  Used Avail Use% Mounted on
>>/dev/xvda1         20G  5.6G   14G  30% /
>>udev              7.4G  8.0K  7.4G   1% /dev
>>tmpfs             3.0G  176K  3.0G   1% /run
>>none              5.0M     0  5.0M   0% /run/lock
>>none              7.4G     0  7.4G   0% /run/shm
>>/dev/xvdaa         37G  177M   35G   1% /mnt
>>master:/home       20G  5.6G   14G  30% /home
>>master:/opt/sge6   20G  5.6G   14G  30% /opt/sge6
>>
>>
>>In my application, I need a scratch folder on each node with about 50 GB of space. Is there a way to do that? I don't really need the /home or /opt/sge6 mounts, and I don't run MPI applications. 
>>
>>
>>Maybe I should recreate an AMI?
>>
>>
>>Thanks!
>>
>>_______________________________________________
>>StarCluster mailing list
>>StarCluster at mit.edu
>>http://mailman.mit.edu/mailman/listinfo/starcluster