[StarCluster] Fast shared or local storage? (Cedar McKay)
Steve Darnell
darnells at dnastar.com
Mon May 12 17:12:02 EDT 2014
Hi Cedar,
I completely agree with David. We routinely use blast in a software pipeline built on top of EC2. We started with an NFS share, but we are currently transitioning to ephemeral storage.
Our plan is to put the nr database (and other file-based data libraries) on local SSD ephemeral storage for each node in the cluster. You may want to consider pre-packaging the compressed libraries on a custom StarCluster AMI, then using a plug-in to mount the ephemeral storage and decompress the blast libraries onto it. This avoids downloading from S3 each time you start a node, which added 10-20 minutes in our case. Plus, it eliminates one more possible point of failure during cluster initialization. To us, it is worth the extra cost of maintaining a custom AMI and the extra size of the AMI itself.
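As a rough sketch, a plug-in along these lines would do it. The class name, archive location, and paths are just placeholders, and it assumes the ephemeral drive is already mounted at /mnt; if your AMI does not do that, the plug-in would also need to format and mount the device before the tar step.

from starcluster.clustersetup import ClusterSetup
from starcluster.logger import log

class BlastDbSetup(ClusterSetup):
    """Unpack the blast libraries pre-packaged on the AMI onto each
    node's ephemeral storage (paths are illustrative)."""

    ARCHIVE = "/opt/blastdb.tar.gz"  # compressed libraries baked into the AMI
    DEST = "/mnt/blastdb"            # on the ephemeral drive mounted at /mnt

    def _setup_node(self, node):
        log.info("Unpacking blast libraries on %s" % node.alias)
        node.ssh.execute("mkdir -p %s" % self.DEST)
        node.ssh.execute("tar -xzf %s -C %s" % (self.ARCHIVE, self.DEST))

    def run(self, nodes, master, user, user_shell, volumes):
        # runs once at cluster start, for every node
        for node in nodes:
            self._setup_node(node)

    def on_add_node(self, node, nodes, master, user, user_shell, volumes):
        # runs each time a node is added later
        self._setup_node(node)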
Best regards,
Steve
From: starcluster-bounces at mit.edu [mailto:starcluster-bounces at mit.edu] On Behalf Of David Stuebe
Sent: Friday, May 09, 2014 12:49 PM
To: starcluster at mit.edu
Subject: Re: [StarCluster] Fast shared or local storage? (Cedar McKay)
Hi Cedar
Beware of using NFS - it may not be POSIX compliant in ways that seem minor but that have caused problems for HDF5 files. I don't know what the blast db file structure is or how its writes are organized, but it can be a problem in some circumstances.
I really like the suggestion of using the ephemeral storage. I suggest you create a plugin that copies the data from S3 to the drive on startup when you add a node. That should be simpler than on-demand caching, which, although elegant, may take you some time to implement.
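A rough sketch of that kind of plug-in might look like the following. The bucket name and paths are made up, and it assumes the nodes have the AWS CLI installed plus credentials (e.g. an IAM role or configured keys).

from starcluster.clustersetup import ClusterSetup

class FetchBlastDbFromS3(ClusterSetup):
    """Copy the blast databases from S3 onto a node's ephemeral drive."""

    def _fetch(self, node):
        node.ssh.execute("mkdir -p /mnt/blastdb")
        # aws s3 sync only transfers files that are missing or changed
        node.ssh.execute("aws s3 sync s3://my-blast-bucket/blastdb/ /mnt/blastdb")

    def run(self, nodes, master, user, user_shell, volumes):
        for node in nodes:
            self._fetch(node)

    def on_add_node(self, node, nodes, master, user, user_shell, volumes):
        self._fetch(node)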
David
Thanks for the very useful reply. I think I'm going to go with the s3fs option and cache to local ephemeral drives. A big blast database is split into many parts, and I'm pretty sure that not every file in a blast db is read every time, so this way blasting can proceed immediately. The parts of the blast database are downloaded from s3 on demand and cached locally. If there were much writing, I'd probably be reluctant to use this approach, because the s3 eventual-consistency model seems to require tolerating write failures at the application level. I'll write my results to a shared nfs volume.
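Something like this is what I have in mind for driving the mount from a plug-in. The bucket name and paths are invented, and it assumes s3fs-fuse and credentials are already set up on the AMI; the use_cache option keeps downloaded pieces on the ephemeral drive.

from starcluster.clustersetup import ClusterSetup

class MountBlastDbS3FS(ClusterSetup):
    """Mount the blast bucket read-only with s3fs, caching parts locally."""

    def run(self, nodes, master, user, user_shell, volumes):
        for node in nodes:
            # cache directory lives on the ephemeral drive under /mnt
            node.ssh.execute("mkdir -p /mnt/s3cache /blastdb")
            node.ssh.execute(
                "s3fs my-blast-bucket /blastdb -o ro -o use_cache=/mnt/s3cache")

    def on_add_node(self, node, nodes, master, user, user_shell, volumes):
        self.run([node], master, user, user_shell, volumes)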
I thought about mpiBlast and will probably explore it, but I read some reports that its XML output isn't exactly the same as the official NCBI blast output and may break biopython parsing. I haven't confirmed this, and will probably compare the two techniques.
Thanks again!
Cedar