I tried submitting a batch of jobs using qsub with a script that works fine on another (non-Amazon) cluster's configuration of SGE. But on a StarCluster-configured cluster of 8 c1.xlarge nodes (so 8 cores each), only the first 8 jobs enter the queue without error, and all of those are immediately executed on the master node. Even if I delete one of the jobs running on the master node, another one never takes its place.
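For illustration, the submissions look roughly like this (run.sh and the job count are placeholders; the real script is unchanged from the cluster where it works):

    #!/bin/bash
    # Submit a batch of independent jobs; run.sh stands in for my actual job script.
    for i in $(seq 1 64); do
        qsub -cwd -o "job_$i.out" -e "job_$i.err" run.sh "$i"
    done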
Here is the output of qconf -ssconf:

    algorithm                         default
    schedule_interval                 0:0:15
    maxujobs                          0
    queue_sort_method                 load
    job_load_adjustments              np_load_avg=0.50
    load_adjustment_decay_time        0:7:30
    load_formula                      np_load_avg
    schedd_job_info                   false
    flush_submit_sec                  0
    flush_finish_sec                  0
    params                            none
    reprioritize_interval             0:0:0
    halftime                          168
    usage_weight_list                 cpu=1.000000,mem=0.000000,io=0.000000
    compensation_factor               5.000000
    weight_user                       0.250000
    weight_project                    0.250000
    weight_department                 0.250000
    weight_job                        0.250000
    weight_tickets_functional         0
    weight_tickets_share              0
    share_override_tickets            TRUE
    share_functional_shares           TRUE
    max_functional_jobs_to_schedule   200
    report_pjob_tickets               TRUE
    max_pending_tasks_per_job         50
    halflife_decay_list               none
    policy_hierarchy                  OFS
    weight_ticket                     0.010000
    weight_waiting_time               0.000000
    weight_deadline                   3600000.000000
    weight_urgency                    0.100000
    weight_priority                   1.000000
    max_reservation                   0
    default_duration                  INFINITY

I can't figure out how to change schedd_job_info to true to find out more about the error message...
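Based on the qconf man page, I'd expect something along these lines to flip that flag (untested here; I assume it has to run as root on the master, which is the SGE admin host under StarCluster):

    # Dump the scheduler config, set schedd_job_info to true, and load it back.
    # (qconf -msconf would do the same thing interactively via $EDITOR.)
    qconf -ssconf > sconf.txt
    sed -i 's/^schedd_job_info.*/schedd_job_info true/' sconf.txt
    qconf -Msconf sconf.txt

    # With it enabled, the "scheduling info" section of qstat -j should say
    # why a pending job isn't being dispatched.
    qstat -j <job_id>

Is that the right approach, and why would the jobs beyond the first 8 be rejected outright instead of just waiting in the queue?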