<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
<br>
-----BEGIN PGP SIGNED MESSAGE-----<br>
Hash: SHA1<br>
<br>
Hi Adam,<br>
<br>
Sorry for the late response.<br>
<br>
There is some magic that occurs when creating users in order to avoid<br>
having to chmod /home folders which might contain hundreds of<br>
gigabytes of data. Basically, StarCluster inspects the top-level<br>
folders under /home. If the CLUSTER_USER's home folder already<br>
exists, then the CLUSTER_USER is created with the same uid/gid as the<br>
existing home folder to avoid a recursive chmod. Otherwise,<br>
StarCluster looks at the uid/gid of the other directories in /home<br>
and chooses the highest uid/gid combo plus 1 to be the uid/gid for<br>
the CLUSTER_USER. If that calculation ends up with a uid/gid less<br>
than 1000, it defaults to 1000 for the uid/gid of the CLUSTER_USER.<br>
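<br>
In rough pseudo-Python, the selection logic goes something like this<br>
(just a sketch of the behavior described above, not the actual<br>
StarCluster source; the function name and arguments are made up):<br>
<br>
<pre>
import os

def choose_user_ids(cluster_user, home="/home", minimum=1000):
    # Hypothetical sketch of the uid/gid selection described above,
    # not the real StarCluster implementation.
    user_home = os.path.join(home, cluster_user)

    # If the user's home folder already exists on the mounted volume,
    # reuse its uid/gid so no recursive chmod is needed.
    if os.path.isdir(user_home):
        st = os.stat(user_home)
        return st.st_uid, st.st_gid

    # Otherwise take the highest uid/gid found under /home plus one.
    highest_uid = highest_gid = 0
    for entry in os.listdir(home):
        path = os.path.join(home, entry)
        if os.path.isdir(path):
            st = os.stat(path)
            highest_uid = max(highest_uid, st.st_uid)
            highest_gid = max(highest_gid, st.st_gid)
    uid, gid = highest_uid + 1, highest_gid + 1

    # If that calculation lands below 1000, fall back to 1000.
    if uid < minimum or gid < minimum:
        uid = gid = minimum
    return uid, gid
</pre>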
<br>
A couple of questions that might help me understand what happened:<br>
<br>
1. I'm assuming you must have had MOUNT_PATH=/home for the volume in<br>
your cluster template's VOLUMES list, correct? (i.e. something along<br>
the lines of the config snippet after these questions)<br>
<br>
2. Did your volume already contain an 'sgeadmin' folder at the root<br>
of the volume?<br>
<br>
3. What does "ls -l" look like on the root of the volume that<br>
exhibits this behavior?<br>
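<br>
For reference, a volume mounted over /home would look roughly like<br>
this in the config (the volume name, volume id, and cluster template<br>
name below are just placeholders):<br>
<br>
<pre>
[volume mydata]
# id of the EBS volume holding the existing home folders
VOLUME_ID = vol-xxxxxxxx
# mount the volume over /home
MOUNT_PATH = /home

[cluster smallcluster]
# ... other cluster settings ...
CLUSTER_USER = sgeadmin
VOLUMES = mydata
</pre>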
<br>
Also, you will find useful information about the uid/gid chosen by<br>
StarCluster for the CLUSTER_USER in your debug file:<br>
<br>
/tmp/starcluster-debug-&lt;your_username&gt;.log<br>
<br>
(if you're on a Mac, this file will be in the directory returned by<br>
"python -c 'import tempfile; print tempfile.gettempdir()'")<br>
<br>
Just grepping for gid or uid in your log file(s) should print out the<br>
relevant messages: "grep -ri gid /tmp/starcluster-*"<br>
<br>
~Justin<br>
<br>
<br>
On 10/3/10 5:19 PM, Adam Marsh wrote:<br>
<span style="white-space: pre;">><br>
> I've had some challenges getting SC running correctly when EBS<br>
> volumes are mounted to the head node during configuration. I<br>
> initially set up the EBS volumes with database files by first<br>
> configuring and mounting the volume on any available EC2 VM I had<br>
> running at the time. By default, most of the time I was working as<br>
> user 'ubuntu'. However, whenever an EBS volume with files or folders<br>
> having 'ubuntu' as the owner and group was included in the VOLUMES<br>
> list of the SC config file and was mounted during setup to the head<br>
> node, two odd things occurred:<br>
> 1. when the cluster_user account was set up by SC (like 'sgeadmin'),<br>
> the owner and group of the 'sgeadmin' folder under /home were<br>
> 'ubuntu';<br>
> 2. connecting via ssh to the sgeadmin account always defaulted to<br>
> logging in to the 'ubuntu' user account.<br>
><br>
> I worked around the problem by changing owner/group settings on all<br>
> EBS folders/files to the cluster_user name used in the config file.<br>
> All works fine now.<br>
><br>
> Is this just a rare instance of SC system behavior? If not, is there<br>
> a better way to prepare EBS volumes for use with SC to avoid<br>
> owner/group conflicts?<br>
><br>
> Thanks,<br>
><br>
> &lt;Adam<br>
> <br>
></span><br>
<br>
-----BEGIN PGP SIGNATURE-----<br>
Version: GnuPG v2.0.16 (GNU/Linux)<br>
Comment: Using GnuPG with Mozilla - <a class="moz-txt-link-freetext" href="http://enigmail.mozdev.org/">http://enigmail.mozdev.org/</a><br>
<br>
iEYEARECAAYFAky3bvoACgkQ4llAkMfDcrnhgQCeNx/PPR9pg01D626krxXQcv8L<br>
M9cAn2vXyBmjMUMHqGU0PT94+ffR2xm4<br>
=VX9F<br>
-----END PGP SIGNATURE-----<br>
<br>
</body>
</html>