[StarCluster] need scratch space

Jian Feng freedafeng at yahoo.com
Tue Dec 9 15:39:58 EST 2014


That is exactly what I wanted. Thank you Jin! :)


________________________________
 From: Jin Yu <yujin2004 at gmail.com>
To: Jian Feng <freedafeng at yahoo.com> 
Cc: Jennifer Staab <jstaab at cs.unc.edu>; "starcluster at mit.edu" <starcluster at mit.edu> 
Sent: Tuesday, December 9, 2014 12:29 PM
Subject: Re: [StarCluster] need scratch space
 


You are welcome.

You can refer to the following commands to make a RAID 0 volume:

# List the available block devices
lsblk
# (RAID 0 only) Create a RAID 0 array (note the --level=stripe option to stripe the array);
# replace $number_of_volumes and the device names with your own values
sudo mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=$number_of_volumes $device_name1 $device_name2
# For example, to create an ext4 file system on the new array:
sudo mkfs.ext4 /dev/md0
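
If it helps, here is a minimal follow-up sketch for putting the array to use (assuming the /dev/md0 device created above and a /scratch mount point; adjust the names to your setup):

# Create a mount point and mount the new array on it
sudo mkdir -p /scratch
sudo mount /dev/md0 /scratch
# Check the array status and the available space
cat /proc/mdstat
df -h /scratch

Note that instance-store (ephemeral) disks do not survive a stop/start, so the array would need to be recreated on each fresh launch.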



On Tue, Dec 9, 2014 at 11:42 AM, Jian Feng <freedafeng at yahoo.com> wrote:


>
>
>Thanks Jennifer and Jin!
>
>
>You are definitely right. The xvdaa device is one of the two SSDs and it is mounted at /mnt. The other SSD (/dev/xvdab) is in the list, but not mounted anywhere. I am wondering if there is a way to add that other xvdab disk to /mnt to make a 37*2 GB disk (as Jin mentioned). I really appreciate your help.
>
>
>
>
>
>________________________________
> From: Jennifer Staab <jstaab at cs.unc.edu>
>To: Jian Feng <freedafeng at yahoo.com> 
>Cc: "starcluster at mit.edu" <starcluster at mit.edu> 
>Sent: Tuesday, December 9, 2014 8:24 AM
>Subject: Re: [StarCluster] need scratch space
> 
>
>
>Run the command "lsblk" and you will see the instance/ephemeral storage.  In my experience the instance/ephemeral storage is smaller than advertised (for me a 40 GB disk typically shows up as 37.5/37 GB), and usually one disk is automatically mounted as "/mnt".  In your case it is likely that "/dev/xvdaa" is one of the instance/ephemeral disks.  If you don't see the other one with the "lsblk" command, it is likely that when you created the EC2 instance you forgot to indicate that you wanted to use both instance storage disks when you added storage.
>
>Good Luck.
>
>- Jennifer 
>
>On 12/9/14 9:48 AM, Jin Yu wrote:
>
>>You can use these two 40G SSDs to make a RAID 0 volume of 80G, and then mount it to /scratch.
>>
>>
>>
>>-Jin
>>
>>
>>On Mon, Dec 8, 2014 at 1:55 PM, Jian Feng <freedafeng at yahoo.com> wrote:
>>
>>Dear starcluster community,
>>>
>>>
>>>I created an EC2 cluster using m3.xlarge instances (2 x 40 GB SSD). I did not see any scratch space or a /scratch folder at all. Here is the disk space layout.
>>>
>>>
>>>root at node001:~# df -h
>>>Filesystem        Size  Used Avail Use% Mounted on
>>>/dev/xvda1         20G  5.6G   14G  30% /
>>>udev              7.4G  8.0K  7.4G   1% /dev
>>>tmpfs             3.0G  176K  3.0G   1% /run
>>>none              5.0M     0  5.0M   0% /run/lock
>>>none              7.4G     0  7.4G   0% /run/shm
>>>/dev/xvdaa         37G  177M   35G   1% /mnt
>>>master:/home       20G  5.6G   14G  30% /home
>>>master:/opt/sge6   20G  5.6G   14G  30% /opt/sge6
>>>
>>>
>>>In my application, I need a scratch folder on each node with about 50G of space. Is there a way to do that? I don't really need the /home or /opt/sge6 stuff, and I don't run MPI applications.
>>>
>>>
>>>Maybe I should recreate an AMI?
>>>
>>>
>>>Thanks!
>_______________________________________________
>StarCluster mailing list
>StarCluster at mit.edu
>http://mailman.mit.edu/mailman/listinfo/starcluster
>
>

