[StarCluster] root volumes after termination

Steve Heistand steve.heistand at nasa.gov
Wed Sep 4 12:23:07 EDT 2013


Yeah, it's marked as 'no' for delete on termination.
I'll change that.
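In case it helps anyone searching the archives later: a minimal sketch of flipping that flag on an already-running instance, assuming the boto3 SDK (not the boto 2 library StarCluster itself uses); the region, instance ID, and device name below are placeholders (the instance ID is the master node from this thread).

    import boto3

    # Placeholders: adjust the region, instance ID, and the root device name
    # shown in the console (e.g. /dev/sda1) to match your cluster.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.modify_instance_attribute(
        InstanceId="i-42d97475",
        BlockDeviceMappings=[
            {"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": True}},
        ],
    )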

thanks

steve

On 09/04/2013 09:12 AM, Rayson Ho wrote:
> OK, start a 1-node EBS StarCluster (t1.micro is good) and make sure to use your AMI
> instead of the standard StarCluster AMI. Then go to the AWS Management Console,
> select the instance of the 1-node cluster, and click the "Root Device:" link
> (e.g. Root Device: sda1), and you will see information about the root device like:
> 
> Root device type: ebs
> Attachment time: 2012-12-12
> Block device status: attached
> Delete on termination: Yes
> 
> Rayson
> 
> ================================================== Open Grid Scheduler - The Official
> Open Source Grid Engine http://gridscheduler.sourceforge.net/ 
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
> 
> 
> On Wed, Sep 4, 2013 at 11:54 AM, Steve Heistand <steve.heistand at nasa.gov> wrote:
> I'm on StarCluster v0.94. I started with a public AMI from AWS that was CentOS/EBS/HVM
> based. I have since tweaked it and saved it as one of my own AMIs, so I'm not sure which
> image it was originally. I'm looking at their list of public images, but none of them
> seem like the one I started with.
> 
> Could these AMIs have lost the 'DeleteOnTermination' flag? I'm trying to find where this
> is set.
> 
> steve
> 
> 
> On 09/04/2013 08:41 AM, Rayson Ho wrote:
>>>> The standard EBS-based StarCluster AMIs should be fine - I have never
>>>> encountered cleanup issues. Note that by default the root EBS volumes are
>>>> marked as "DeleteOnTermination", so when the instances terminate, AWS
>>>> automatically cleans them up.
>>>> 
>>>> Which AMI & StarCluster versions are you using, BTW?
>>>> 
>>>> Rayson
>>>> 
>>>> ================================================== Open Grid Scheduler - The
>>>> Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ 
>>>> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>>>> 
>>>> 
>>>> 
>>>> On Tue, Sep 3, 2013 at 3:00 PM, Steve Heistand <steve.heistand at nasa.gov>
>>>> wrote:
>>>>> Hi folks,
>>>>> 
>>>>> So it seems that when I go and terminate any of the EC2 instances, the root
>>>>> volumes that the instances use don't get deleted. Starting up new instances
>>>>> always creates new volumes, so I get lots of leftover unused volumes wasting
>>>>> money. Is this normal? The termination process doesn't mention anything about
>>>>> getting rid of volumes:
>>>>> 
>>>>> .. Terminate EBS cluster hos (y/n)? y
>>>>> Running plugin starcluster.clustersetup.DefaultClusterSetup
>>>>> Detaching volume vol-0b450162 from master
>>>>> Terminating node: master (i-42d97475)
>>>>> Terminating node: node001 (i-41d97476)
>>>>> Waiting for cluster to terminate...
>>>>> Removing @sc-hos placement group
>>>>> Removing @sc-hos security group...
>>>>> 
>>>>> thanks
>>>>> 
>>>>> steve
>>>>> 
> 
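For reference, the "Delete on termination" value Rayson describes checking in the console can also be read programmatically, both on the running instance and in the saved AMI's block device mapping, which is where a custom image can quietly carry a value of false. A rough sketch, assuming the boto3 SDK rather than the boto 2 library StarCluster itself uses; the region, instance ID, and image ID are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Root-device flag on a running instance (same information as the
    # console's "Root Device:" popup). Instance ID is a placeholder.
    reservations = ec2.describe_instances(InstanceIds=["i-42d97475"])["Reservations"]
    instance = reservations[0]["Instances"][0]
    root = instance["RootDeviceName"]
    for mapping in instance["BlockDeviceMappings"]:
        if mapping["DeviceName"] == root:
            print(root, "DeleteOnTermination =", mapping["Ebs"]["DeleteOnTermination"])

    # The same flag baked into a saved AMI's block device mapping.
    # Image ID is a placeholder for the custom CentOS-based AMI.
    image = ec2.describe_images(ImageIds=["ami-12345678"])["Images"][0]
    for mapping in image["BlockDeviceMappings"]:
        if "Ebs" in mapping:
            print(mapping["DeviceName"], "DeleteOnTermination =",
                  mapping["Ebs"].get("DeleteOnTermination"))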

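To see how many leftover root volumes are already sitting around accruing charges, one option is to list volumes in the "available" (unattached) state; a rough sketch, again assuming boto3 and a placeholder region.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # EBS volumes no longer attached to any instance, i.e. the leftover
    # root volumes that keep costing money after cluster termination.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    for vol in volumes:
        print(vol["VolumeId"], vol["Size"], "GiB, created", vol["CreateTime"])
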
-- 
************************************************************************
 Steve Heistand                          NASA Ames Research Center
 email: steve.heistand at nasa.gov          Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369                      Bldg. 258, Rm. 232-5
 Scientific & HPC Application            P.O. Box 1
 Development/Optimization                Moffett Field, CA 94035-0001
************************************************************************
 "Any opinions expressed are those of our alien overlords, not my own."


