[StarCluster] New Option for Extra EBS Volumes?

Lyn Gerner schedulerqueen at gmail.com
Fri Oct 4 14:26:39 EDT 2013


Hi Developers, All,

I am struggling operationally with the snapshots and volumes that
StarCluster creates when I attach extra (non-root) EBS volumes to the
cluster nodes that I launch.

I have read everything I can find that's relevant in the mailing list -- in
particular, this thread <http://star.mit.edu/cluster/mlarchives/1368.html> --
and understand that this is just how it's designed to work.

As background for anyone unfamiliar with this: currently, StarCluster will
only NFS-share volumes that are specified in the cluster template.  It does
not share volumes that already exist as block devices in an AMI.
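
For anyone following along, a template-specified volume looks roughly like
this in the config (the volume ID, names, and mount path below are just
placeholders):

    [volume mydata]
    VOLUME_ID = vol-xxxxxxxx
    MOUNT_PATH = /data

    [cluster smallcluster]
    # ...other cluster template settings...
    VOLUMES = mydata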

I have also experimented with the Delete on Termination setting for volumes
associated with my AMIs.  For the deletion to take place, the volume has to
be built into the AMI (as a block device) and has to still be attached to an
instance when that instance is terminated.
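
For reference, that flag gets set in the AMI's block device mapping when the
image is registered; roughly, with boto, something like the following (the
snapshot IDs, device names, and region here are placeholders, and I may be
glossing over details):

    import boto.ec2
    from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

    conn = boto.ec2.connect_to_region('us-east-1')

    # Block device mapping for the new AMI: the root device plus one extra
    # EBS device, both flagged to be deleted when the instance terminates.
    bdm = BlockDeviceMapping()
    bdm['/dev/sda1'] = BlockDeviceType(snapshot_id='snap-root0000',
                                       delete_on_termination=True)
    bdm['/dev/sdf'] = BlockDeviceType(snapshot_id='snap-data0000',
                                      delete_on_termination=True)

    conn.register_image(name='my-cluster-ami',
                        description='cluster AMI with extra EBS volume',
                        architecture='x86_64',
                        root_device_name='/dev/sda1',
                        block_device_map=bdm)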

StarCluster's terminate operation unmounts devices and detaches volumes
before terminating the cluster instances, which negates the Delete on
Termination setting for such volumes.

I would like StarCluster to be able to share such additional volumes that
are already present as block devices in the AMI.  That would let me set the
delete-on-termination flag in the AMI's block device mapping so the volume
is cleaned up automatically.

Could we have a new option, something like "--share-also=<dev>:<mount_point>"?
I think you would also need to *not* detach such extra volumes during
cluster termination (though you would of course unmount them) in order
for the Delete on Termination setting to take effect.
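
To make the proposal concrete, I'm picturing an invocation along these
lines (the option, device, and mount point are all hypothetical, of course):

    $ starcluster start mycluster --share-also=/dev/xvdf:/scratch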

I acknowledge that it's more complex than I've described, because given the
way StarCluster (and AWS) work, the extra "volumes" attached to the
instances are actually created from snapshots.  I haven't found anything in
AWS-land that auto-deletes snapshots on termination.

This is all pretty confusing and frustrating.

I'd appreciate it if Justin and others would discuss the feasibility of
such a "--share-also" option: any reasons why it could not work correctly,
or ways it would foul up the StarCluster design or operations.

And if anyone has overcome the operational issue of volumes and snapshots
piling up -- that is, if you've automated recognition and deletion of
temporary copies of volumes and snapshots -- I'd love to hear about your
approach.
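
For comparing notes, the kind of cleanup I have in mind would look roughly
like this with boto (the description match is only an illustration -- you'd
adapt it to however your own temporary copies are identifiable, and the
region is a placeholder):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Remove leftover snapshots whose description marks them as temporary
    # copies.  The 'temp' match is only an illustration -- adapt it to
    # however your own temporary snapshots are identifiable.
    for snap in conn.get_all_snapshots(owner='self'):
        if 'temp' in (snap.description or ''):
            print('deleting snapshot %s (%s)' % (snap.id, snap.description))
            snap.delete()

    # Remove detached ("available") volumes.  Careful: as written this hits
    # every detached volume in the region, so filter further before running.
    for vol in conn.get_all_volumes(filters={'status': 'available'}):
        print('deleting volume %s' % vol.id)
        vol.delete()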

Thanks to all,
Lyn