[StarCluster] newbie problems
Justin Riley
jtriley at MIT.EDU
Tue May 22 14:49:55 EDT 2012
Manal,
StarCluster automatically chooses the device on which to attach external
EBS volumes - you do not and should not need to specify this in your
config. Assuming you use 'createvolume' and update your config correctly,
things should "just work".
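For reference, the usual workflow is something like the following (the
volume name, size, and zone here are just placeholders - see the
'createvolume' docs for the full set of options):

    $ starcluster createvolume --name=mydata 30 us-east-1c

This launches a temporary volume-host instance, creates and formats the
new volume, and prints the volume id that goes in your config.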
You should not have to use the AWS console to attach volumes manually,
and if you're having to do this then I'd like to figure out why so we
can fix it. This is a core feature of StarCluster, and many users are
using external EBS with StarCluster without issue, so I'm extremely
curious why you're having issues...
With that said, I'm having trouble pulling out all of the details I need
from this long thread, so I'll ask direct questions instead:
1. Which AMI are you using? Did you create the AMI yourself? If so, how
did you go about creating it, and did you have any external EBS
volumes attached while creating the AMI?
2. How did you create the volume you were having issues mounting with
StarCluster? StarCluster expects your volume either to be completely
unpartitioned (i.e., the entire device is formatted) or to contain only a
single partition. If this isn't the case, you should see an error when
starting a cluster.
3. Did you add your volume to your cluster config correctly according to
the docs? (i.e., did you add your volume to the VOLUMES setting in your
cluster config? See the minimal example after these questions.)
4. StarCluster should be spitting out errors when creating the cluster
if it fails to attach/mount/NFS-share any external EBS volumes - did you
notice any errors? Can you please attach the complete screen output of a
failed StarCluster run? It would also be extremely useful if you could
send me your ~/.starcluster/logs/debug.log for a failed run so that I can
take a look.
5. Would you mind sending me a copy of your config with all of the
sensitive data removed? I just want to make sure you've configured
things as expected.
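For reference, a correctly configured external volume looks something
like this (the volume id and mount path below are placeholders):

    [volume mydata]
    VOLUME_ID = vol-xxxxxxxx
    MOUNT_PATH = /mydata

    [cluster mycluster]
    # ...your existing cluster settings...
    VOLUMES = mydata

One caveat: keep comments on their own lines - a trailing '#comment' on
the same line as a value may be read as part of the value by the config
parser.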
Thanks,
~Justin
On Wed, May 23, 2012 at 04:14:16AM +1000, Manal Helal wrote:
> Thank you CB, Ron, and Rayson,
> I deleted the previous volume I created from the AWS online console, and
> used the steps in:
> [1]http://web.mit.edu/star/cluster/docs/latest/manual/volumes.html#create-and-format-a-new-ebs-volume
>
> as Rayson suggested, and this time I managed to attach the volume from
> the AWS online console after using starcluster to start the instance. I
> have no idea why it is not attached automatically, but it finally worked
> this way,
> thanks again very much,
> Kind Regards,
> On 22 May 2012 06:34, CB <[2]cbalways at gmail.com> wrote:
>
> Hi Manal,
> I tried a similar thing last week and experienced a similar
> failure to mount an EBS disk via StarCluster.
> If you take a look at the log file, you will see what happened. In
> my case, the debug log was located at ~/.starcluster/logs/debug.log
> It appears that when StarCluster attached an EBS volume, the actual
> partition name it got was different from the one StarCluster expected
> to mount.
> For example, if you look at the /proc/partitions table, you will see the
> actual partitions.
> Then take a look at debug.log and see which device StarCluster was
> trying to mount.
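> Something like the following should show the mismatch (the exact log
> wording may differ from what grep finds on your setup):
>
> $ cat /proc/partitions    # partitions the kernel actually sees
> $ grep -iE 'mount|/dev/' ~/.starcluster/logs/debug.log    # device StarCluster tried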
> I think this issue happens if you manually try to mount an EBS volume
> several times and then switch to StarCluster to make it automated.
> I haven't tried it, but if you start from a fresh image, it should work.
> Regards,
> - Chansup
>
> On Mon, May 21, 2012 at 3:44 PM, Ron Chen <[3]ron_chen_123 at yahoo.com>
> wrote:
>
> Manal,
>
> Note: I am only a developer of Open Grid Scheduler, the open source
> Grid Engine. I am not exactly an EC2 developer yet, and maybe there
> are better ways to do it in StarCluster.
>
> Did you format your EBS volume? Like a new hard drive, you need to
> partition (fdisk) and format it before you can use it.
>
> - So first, log on to the EC2 Management Console. Then go to your EBS
> Volumes.
>
> - Then check the state: if it is "in-use", it is already attached to
> an instance. If it is "available", then StarCluster has not attached it
> yet.
>
> - After you are sure it is attached, the Attachment section should
> show something similar to the following:
>
> Attachment: i-39586e5f (master):/dev/sdf1 (attached)
>
> And now you need to partition the disk.
>
> - If you see /dev/sdf1 above, you need to partition /dev/xvdf, as the
> AMIs use the xvd drivers:
>
> # fdisk /dev/xvdf
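> (Inside fdisk, a single partition spanning the disk is typically
> created with: n, p, 1, accept the default start/end sectors, then w
> to write the table and exit.)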
>
> Then you can format the disk using mkfs.
>
> # mkfs -t ext4 /dev/xvdf1
>
> Finally, you can mount the disk, and if you specify the volume in
> the StarCluster config correctly, it will be mounted automatically the
> next time you boot StarCluster.
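>
> For a one-off manual test, something like this should work (the mount
> point here is just an example):
>
> # mkdir -p /mydata
> # mount /dev/xvdf1 /mydata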
>
> -Ron
>
> ________________________________
> From: Manal Helal <[4]manal.helal at gmail.com>
> To: Justin Riley <[5]jtriley at mit.edu>
> Cc: [6]starcluster at mit.edu
> Sent: Monday, May 21, 2012 7:41 AM
> Subject: Re: [StarCluster] newbie problems
>
> Hello,
> I hate being a headache, but this didn't go as smoothly as I was hoping,
> and I appreciate your support in getting me moving,
>
> I finally managed to attach the volume I created, but I didn't see
> where it appears on the cluster, or how my data will be saved from
> session to session,
>
> The volume I created is 30 GB. I first set it to mount at /mydata, but
> didn't see it when I started the cluster; this is what I get:
>
> root@ip-10-16-3-102:/dev# fdisk -l
>
> Disk /dev/sda: 8589 MB, 8589934592 bytes
> 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sda1 * 16065 16771859 8377897+ 83 Linux
>
> Disk /dev/xvdb: 901.9 GB, 901875499008 bytes
> 255 heads, 63 sectors/track, 109646 cylinders, total 1761475584
> sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/xvdb doesn't contain a valid partition table
>
> Disk /dev/xvdc: 901.9 GB, 901875499008 bytes
> 255 heads, 63 sectors/track, 109646 cylinders, total 1761475584
> sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> Disk /dev/xvdc doesn't contain a valid partition table
>
> root@ip-10-16-3-102:/dev# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 7.9G 5.1G 2.5G 68% /
> udev 12G 4.0K 12G 1% /dev
> tmpfs 4.5G 216K 4.5G 1% /run
> none 5.0M 0 5.0M 0% /run/lock
> none 12G 0 12G 0% /run/shm
> /dev/xvdb 827G 201M 785G 1% /mnt
>
> No 30 GB volume is attached. I then terminated the cluster and followed
> the suggestions on this page:
>
> [8]http://web.mit.edu/star/cluster/docs/latest/manual/configuration.html
>
> setting it to mount at /home, thinking it would be used in place of the
> /home folder, and that this way all my installations and downloads would
> be saved after I terminate the session,
>
> however, when I started the cluster, this is what I got:
>
> root@ip-10-16-24-98:/home# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 7.9G 5.1G 2.5G 68% /
> udev 12G 4.0K 12G 1% /dev
> tmpfs 4.5G 216K 4.5G 1% /run
> none 5.0M 0 5.0M 0% /run/lock
> none 12G 0 12G 0% /run/shm
> /dev/xvdb 827G 201M 785G 1% /mnt
>
> Again there is no 30 GB volume, and neither / nor /mnt has gotten
> bigger,
>
> Here is what I have in my config file:
>
> [cluster mycluster]
> VOLUMES = mydata
>
> [volume mydata]
> # attach vol-c9999999 to /home on master node and NFS-share to worker
> # nodes
> VOLUME_ID = vol-c9999999 #(the volume ID I got from the AWS console)
> MOUNT_PATH = /home #(not sure if this is right; I used /mydata in the
> first run and it didn't work either)
>
> Also, before attaching the volume, the starcluster put and starcluster
> get commands worked very well. After attaching the volume, they still
> run and report 100% complete on my local machine, but when I log in to
> the cluster, I find the paths I was uploading the files to empty - no
> files went through! I am not sure whether this is related to attaching
> the volume and whether there is anything I need to do
> P.S. I noticed that with the ec2 command line tools, to attach a volume
> to an instance I should specify the volume ID, the instance ID, and the
> device (/dev/sdf), the same as in the AWS online console.
> However, the mount path in the starcluster configuration file doesn't
> seem to be a device ID like /dev/sdf on Linux, as far as I understand.
> I'm not sure where to define this in starcluster, if this is the
> missing point,
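> For reference, the ec2 tools command I mean is something like this
> (using the volume and instance IDs from earlier in this thread as
> placeholders):
>
> $ ec2-attach-volume vol-c9999999 -i i-39586e5f -d /dev/sdf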
>
> I appreciate your help very much,
>
> thanks again,
>
> Manal
>
> _______________________________________________
> StarCluster mailing list
> [9]StarCluster at mit.edu
> [10]http://mailman.mit.edu/mailman/listinfo/starcluster
>
> _______________________________________________
> StarCluster mailing list
> [11]StarCluster at mit.edu
> [12]http://mailman.mit.edu/mailman/listinfo/starcluster
>
> References
>
> Visible links
> 1. http://web.mit.edu/star/cluster/docs/latest/manual/volumes.html#create-and-format-a-new-ebs-volume
> 2. mailto:cbalways at gmail.com
> 3. mailto:ron_chen_123 at yahoo.com
> 4. mailto:manal.helal at gmail.com
> 5. mailto:jtriley at mit.edu
> 6. mailto:starcluster at mit.edu
> 8. http://web.mit.edu/star/cluster/docs/latest/manual/configuration.html
> 9. mailto:StarCluster at mit.edu
> 10. http://mailman.mit.edu/mailman/listinfo/starcluster
> 11. mailto:StarCluster at mit.edu
> 12. http://mailman.mit.edu/mailman/listinfo/starcluster
> _______________________________________________
> StarCluster mailing list
> StarCluster at mit.edu
> http://mailman.mit.edu/mailman/listinfo/starcluster