[StarCluster] newbie problems

Manal Helal manal.helal at gmail.com
Tue May 22 14:14:16 EDT 2012


Thank you CB, Ron, and Rayson,

I deleted the previous volume I created from the AWS online console, and
used the steps in:

http://web.mit.edu/star/cluster/docs/latest/manual/volumes.html#create-and-format-a-new-ebs-volume

as Rayson suggested, and this time I managed to attach the volume from
the AWS online console after using starcluster to start the instance. I
have no idea why it was not attached automatically, but it finally worked
this way.
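
In case it helps anyone else, once the volume showed as attached in the
console, checking and mounting it on the master node was roughly the
following (the device name is whatever the AMI exposes, e.g. /dev/xvdf
rather than /dev/sdf, and the mount point is just a placeholder):

# cat /proc/partitions
# mkdir -p /mydata
# mount /dev/xvdf /mydata

(mkfs -t ext4 /dev/xvdf is only needed for a brand-new, unformatted
volume; the docs page above covers that step.)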

thanks again very much,

Kind Regards,

On 22 May 2012 06:34, CB <cbalways at gmail.com> wrote:

> Hi Manal,
>
> I tried a similar thing last week and experienced a similar failure to
> mount an EBS disk via StarCluster.
>
> If you take a look at the log file, you will see what happened. In my
> case, the debug log was located at ~/.starcluster/logs/debug.log
>
> It appears that when StarCluster attached the EBS volume, the actual
> partition name it got was different from the one StarCluster expected to
> mount.
>
> For example, if you look at the /proc/partitions table, you will see the
> actual partition.
> Then take a look at debug.log and see which device StarCluster was
> trying to mount.
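>
> Concretely, something like this (device names will vary) makes the
> mismatch easy to spot:
>
> # cat /proc/partitions
> # grep -i xvd ~/.starcluster/logs/debug.log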
>
> I think this issue happens if you manually try to mount an EBS volume
> several times and then switch to StarCluster to make it automated.
> I haven't tried it, but if you start from a fresh image it should work.
>
> Regards,
> - Chansup
>
> On Mon, May 21, 2012 at 3:44 PM, Ron Chen <ron_chen_123 at yahoo.com> wrote:
>
>> Manal,
>>
>> Note: I am only a developer of Open Grid Scheduler, the open source Grid
>> Engine. I am not exactly an EC2 developer yet, and maybe there are better
>> ways to do it in StarCluster.
>>
>> Did you format your EBS volume? Like a new hard drive, you need to fdisk
>> and format it before you can use it.
>>
>> - So first, log on to the EC2 Management Console, then go to your EBS
>> Volumes.
>>
>> - Then check the state: if it is "in-use", the volume is already attached
>> to an instance; if it is "available", StarCluster has not attached it yet.
>>
>> - After you are sure it is attached, the Attachment section should show
>> something similar to the following:
>>
>>    Attachment: i-39586e5f (master):/dev/sdf1 (attached)
>>
>>
>>
>> And now you need to partition the disk.
>>
>> - If you see /dev/sdf1 above, you need to partition /dev/xvdf as the AMIs
>> have the xvd drivers:
>>
>> # fdisk /dev/xvdf
>>
>>
>> Then you can format the disk using mkfs.
>>
>> # mkfs -t ext4 /dev/xvdf1
>>
>>
>> So finally, you can mount the disk, and if you specify the volume in the
>> StarCluster config correctly, then it will be mounted next time you boot
>> StarCluster.
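>>
>> For example (the mount point is just a placeholder, and the exact config
>> syntax is described in the StarCluster docs):
>>
>> # mount /dev/xvdf1 /mydata
>>
>> and in the StarCluster config, something like:
>>
>> [volume mydata]
>> VOLUME_ID = vol-xxxxxxxx
>> MOUNT_PATH = /mydata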
>>
>>  -Ron
>>
>>
>>
>>
>>
>> ________________________________
>> From: Manal Helal <manal.helal at gmail.com>
>> To: Justin Riley <jtriley at mit.edu>
>> Cc: starcluster at mit.edu
>> Sent: Monday, May 21, 2012 7:41 AM
>> Subject: Re: [StarCluster] newbie problems
>>
>>
>> Hello,
>> I hate being a headache, but this didn't go as smoothly as I was hoping,
>> and I appreciate your support in getting moving.
>>
>> I finally managed to attach the volume I created, but I didn't see where
>> it ended up on the cluster, or how my data will be saved from session to
>> session.
>>
>> The volume I created is 30 GB. I first set it to mount at /mydata, but
>> didn't see it when I started the cluster; this is what I get:
>>
>> root@ip-10-16-3-102:/dev# fdisk -l
>>
>> Disk /dev/sda: 8589 MB, 8589934592 bytes
>> 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00000000
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/sda1   *       16065    16771859     8377897+  83  Linux
>>
>> Disk /dev/xvdb: 901.9 GB, 901875499008 bytes
>> 255 heads, 63 sectors/track, 109646 cylinders, total 1761475584 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00000000
>>
>> Disk /dev/xvdb doesn't contain a valid partition table
>>
>> Disk /dev/xvdc: 901.9 GB, 901875499008 bytes
>> 255 heads, 63 sectors/track, 109646 cylinders, total 1761475584 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x00000000
>>
>> Disk /dev/xvdc doesn't contain a valid partition table
>>
>> root@ip-10-16-3-102:/dev# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sda1             7.9G  5.1G  2.5G  68% /
>> udev                   12G  4.0K   12G   1% /dev
>> tmpfs                 4.5G  216K  4.5G   1% /run
>> none                  5.0M     0  5.0M   0% /run/lock
>> none                   12G     0   12G   0% /run/shm
>> /dev/xvdb             827G  201M  785G   1% /mnt
>>
>>
>>
>> No 30 GB volume is attached. I then terminated the cluster and followed
>> the suggestions on this page:
>>
>> http://web.mit.edu/star/cluster/docs/latest/manual/configuration.html
>>
>> I set it to mount at /home, thinking it would be used in place of the
>> /home folder so that all my installations and downloads would be saved
>> after I terminate the session.
>>
>> However, when I started the cluster this is what I get:
>>
>> root@ip-10-16-24-98:/home# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sda1             7.9G  5.1G  2.5G  68% /
>> udev                   12G  4.0K   12G   1% /dev
>> tmpfs                 4.5G  216K  4.5G   1% /run
>> none                  5.0M     0  5.0M   0% /run/lock
>> none                   12G     0   12G   0% /run/shm
>> /dev/xvdb             827G  201M  785G   1% /mnt
>>
>>
>> Again there is no 30 GB volume, and neither / nor /mnt has gotten any
>> bigger.
>>
>> Here is what I have in my config file:
>>
>> [cluster mycluster]
>> VOLUMES = mydata
>>
>>
>> [volume mydata]
>> # attach vol-c9999999 to /home on master node and NFS-share to worker nodes
>> VOLUME_ID = vol-c9999999 #(used the volume ID I got from the AWS console)
>> MOUNT_PATH = /home  #(not sure if this is right; I used /mydata in the
>> first run and it didn't work either)
>>
>> Also, before attaching the volume, the starcluster put and starcluster get
>> commands were working very well. After attaching the volume, they still run
>> and report 100% complete on my local machine, but when I log in to the
>> cluster, the paths I was uploading the files to are empty; no files went
>> through. I am not sure if this is related to attaching the volume, or
>> whether there is anything I need to do.
>> P.S. I noticed that with the ec2 command line tools, attaching a volume to
>> an instance requires the volume ID, the instance ID, and a device ID
>> (/dev/sdf), the same as in the AWS online console. However, the mount path
>> in the StarCluster configuration file doesn't seem to be a device ID like
>> /dev/sdf, as far as I understand. I am not sure where to define this in
>> StarCluster, if that is the missing piece.
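>>
>> For reference, the attach command I mean looks something like this (using
>> the placeholder volume and instance IDs already mentioned in this thread):
>>
>> $ ec2-attach-volume vol-c9999999 -i i-39586e5f -d /dev/sdf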
>>
>> I appreciate your help very much,
>>
>> thanks again,
>>
>> Manal
>>
>>
>> _______________________________________________
>> StarCluster mailing list
>> StarCluster at mit.edu
>> http://mailman.mit.edu/mailman/listinfo/starcluster
>>
>
>