MARS
Massive Archive and Restore System
(Pilot usage phase to start in May 2011 with a select group of users. Deployment and configuration are still a work in progress.)
The MARS deployment at SciNet is an effort to offer a more efficient way to offload/archive data from the most active file systems (scratch and project) than our current TSM-HSM solution, without requiring users to deal directly with the tape library or "tape commands".
The system is a combination of the underlying hardware infrastructure, three software components (HPSS, HSI and HTAR), plus some environment customization.
- HPSS: the main component, best described as a very scalable "black box" running in the background to support the archive and restore operations. High Performance Storage System (HPSS) is the result of over a decade of collaboration among five Department of Energy laboratories and IBM, with significant contributions by universities and other laboratories worldwide. For now, the best way for SciNet users to understand HPSS may be to compare it with our existing TSM-HSM implementation.
- HSI: perhaps best understood as a supercharged FTP interface, specially designed by Gleicher Enterprises to act as a front-end for HPSS, combining some of the best features of a shell, rsync and GridFTP (and a few more). It enables users to transfer whole directory trees from /project and /scratch, thereby freeing up space. HSI is most suitable when those directory trees do not contain too many small files to start with, or when you already have a series of tarballs.
- HTAR: similarly, HTAR is a sort of "super-tar" application, also designed by Gleicher Enterprises to interact with HPSS, allowing users to build tarballs and transfer them to HPSS on the fly. HTAR is most suitable for aggregating whole directory trees. When HTAR creates the tar file, it also builds an index file, with a ".idx" suffix added, which is stored in the same directory as the tar file.
General guidelines
- IN/OUT transfers to and from HPSS using HSI are bound to a maximum of about 4 files/second. Therefore do not attempt to transfer directories containing very many small files. Not only will it take "forever", it will also induce a lot of wear and tear on the library's robot mechanism, as well as on the tapes themselves in case of recalls. Instead use HTAR, so the files are aggregated while being sent to HPSS.
- The maximum size an individual file may have inside an HTAR archive is 68GB. Please be sure to identify and "fish out" any files larger than 68GB from the directories and transfer them with HSI (a quick way to spot these is sketched after this list).
- The maximum size of a tar file that HPSS will take is 1TB. Please do not generate tarballs that large.
- The maximum number of files in an HTAR archive is 1 million. Please break up your HTAR segments as required.
- These guidelines may seem strict, but as long as they are followed the system will perform reasonably well.
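- As a quick pre-flight check before choosing between HSI and HTAR, the sketch below (illustrative only; it assumes GNU find and a hypothetical script name check-archive-dir.sh) counts the files in a directory tree, flags any file larger than 68GB that must be sent individually with HSI, and reports how many files are smaller than 1MB and therefore better aggregated with HTAR.
#!/bin/bash
# Illustrative pre-flight check of a directory you intend to archive.
# Usage: ./check-archive-dir.sh /scratch/$USER/workarea
DIR=${1:-.}
# Total number of files (HTAR limit: 1 million files per archive)
echo "Total files: $(find "$DIR" -type f | wc -l)"
# Files larger than 68GB cannot go inside an HTAR archive -- transfer these with HSI
echo "Files > 68GB (transfer individually with HSI):"
find "$DIR" -type f -size +68G -exec ls -lh {} \;
# Small files (< 1MB) are better aggregated with HTAR than sent one by one with HSI
echo "Files < 1MB: $(find "$DIR" -type f -size -1M | wc -l)"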
Performance considerations
- Files are kept on disk-cache for as long as possible, so as to avoid tape operations during recalls.
- Average transfer rates with HSI
No small files, average > 1MB/file:
* write: 100-130MB/s
* read: 450-600MB/s (IF no recall from tapes required)
- NOTE: do not use HSI with small files (< 1MB/file). At roughly 4 files/second, 1 TB of sub-1MB files amounts to millions of individual transfers, and it would take over 1 week to transfer 1 TB. If we find that you are abusing the system we will suspend your privileges.
- Average transfer rates with HTAR
Average file size > 1MB:
* write: 120MB/s
* read: 480MB/s (IF no recall from tapes required)
Not too many small files, average > 100KB/file:
* write: 64MB/s
* read: 170MB/s (IF no recall from tapes required)
- Average transfer rates from tapes, if staging is required (add to the above estimates)
* read: 80MB/s per tape drive
* maximum of 4 drives may be used per HSI/HTAR session
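- For planning purposes, a back-of-the-envelope estimate like the minimal sketch below (the rate is the average HTAR write rate quoted above; the directory path is a placeholder) can help decide whether a given offload fits comfortably inside a single batch job.
#!/bin/bash
# Rough estimate of HTAR write time for a directory, using the ~120MB/s
# average write rate quoted above. Purely illustrative.
DIR=${1:-/scratch/$USER/workarea}
RATE_MB_PER_S=120
SIZE_MB=$(du -sm "$DIR" | awk '{print $1}')
SECONDS_NEEDED=$(( SIZE_MB / RATE_MB_PER_S ))
echo "$DIR is ${SIZE_MB}MB; at ~${RATE_MB_PER_S}MB/s expect roughly $(( SECONDS_NEEDED / 3600 ))h $(( (SECONDS_NEEDED % 3600) / 60 ))m"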
Quick Reference
- To use HSI or HTAR and access HPSS, please log in to the gpc-archive01 node.
- AUTHENTICATION: done automatically
- To browse the contents of your HPSS archive, just type hsi in a shell to get the HSI prompt. Then use simple commands such as ls, pwd and cd to navigate your way around. You may also type help at the HSI prompt (a sample session is sketched at the end of this list).
- Files are organized inside HPSS in the same fashion as in /project. Users in the same group have read permissions to each other's archives.
[HSI]/archive/<group>/<user>
- There is also provision to have files migrated to a "mirrored set of tapes" (the 2nd set may be kept off-site). You'll have to request this and justify the need. The default is a "single tape set".
[HSI]/archive-dual-copy/<group>/<user>
- Pilot users: DURING THE TESTING PHASE DO NOT DELETE THE ORIGINAL FILES FROM /scratch OR /project
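- For illustration, a quick browsing session might look like the following (the directory name is a placeholder, not real output):
gpc-archive01 $ hsi
[HSI] pwd
/archive/<group>/<user>
[HSI] ls -l
[HSI] cd put-away-and-forget
[HSI] ls
[HSI] quit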
Using HSI
- Interactively put a subdirectory subdirb and all its contents recursively. You may use the '-u' option to resume a previously interrupted session (as rsync would do).
hsi <RETURN>
[HSI] mput -R -u subdirb
- Interactively descend into the "Source" directory and move all files which end in ".h" into a sibling directory (ie, a directory at the same level in the tree as "Source") named "Include":
hsi <RETURN>
[HSI] cd Source
[HSI] mv *.h ../Include
- Delete all files beginning with "m" and ending with "9101" (note that this is an interactive request, not a one-liner, so the wildcard path does not need quotes to preserve it):
hsi <RETURN>
[HSI] delete m*9101
- Interactively delete all files beginning with H and ending with a digit, and ask for verification before deleting each such file.
hsi <RETURN>
[HSI] mdel H*[0-9]
- Save your local files that begin with the letter "c" (let the UNIX shell resolve the wildcard pattern in terms of your local files by not enclosing it in quotes):
hsi put c*
- Get all files in the subdirectory subdira which begin with the letters "b" or "c" (surrounding the wildcard path in single quotes prevents shells on UNIX systems from processing the wild card pattern):
hsi get 'subdira/[bc]*'
- Save a "tar file" of C source programs and header files:
tar cf - *.[ch] | hsi put - : source.tar
Note: the ":" operator which separates the local and HPSS pathnames must be surrounded by whitespace (one or more space characters)
- Restore the tar file saved above and extract all files:
hsi get - : source.tar | tar xf -
- The commands below are equivalent (the default HSI directory placement is /archive/<group>/<user>/):
hsi put source.tar
hsi put source.tar : /archive/<group>/<user>/source.tar
- Using the "mirrored set of tapes" provision (you'll need authorization and a directory placement in /archive-dual-copy):
hsi put source.tar : /archive-dual-copy/<group>/<user>/source.tar
OR
hsi <RETURN>
[HSI] cd /archive-dual-copy/<group>/<user>
[HSI] put source.tar
- For more details please check the HSI Introduction or the HSI Man Page online
Using HTAR
- To write the file1 and file2 files to a new archive called "files.tar" in the default HPSS home directory, enter:
htar -cf files.tar file1 file2
OR
htar -cf /archive/<group>/<user>/files.tar file1 file2
- To write the file1 and file2 files to a new archive called "files.tar" on a remote FTP server called "blue.pacific.llnl.gov", creating the tar file in the user’s remote FTP home directory, enter (bonus HTAR functionality to sites outside SciNet):
htar -cf files.tar -F blue.pacific.llnl.gov file1 file2
- To extract all files from the project1/src directory in the Archive file called proj1.tar, and use the time of extraction as the modification time, enter:
htar -xm -f proj1.tar project1/src
- To display the names of the files in the out.tar archive file within the HPSS home directory, enter:
htar -vtf out.tar
- Using the "mirrored set of tapes" provision (you'll need authorization and a directory placement in /archive-dual-copy)
htar -cf /archive-dual-copy/<group>/<user>/files.tar file1 file2
For more details please check the HTAR - Introduction or the HTAR Man Page online
Using the batch queue
- gpc-archive01 is part of the GPC queuing system under Torque/Moab
- Currently it is set up to share the node among up to 12 jobs at a time
- default parameters: -l nodes=1:ppn=1,walltime=48:00:00
qsub -I -q archive
showq -w class=archive
- sample data offload
#!/bin/bash
#PBS -q archive
#PBS -N offload
#PBS -j oe
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
date
/usr/local/bin/hsi -v <<EOF
mkdir put-away-and-forget
cd put-away-and-forget
put /scratch/$USER/workarea/finished-job1.tar.gz : finished-job1.tar.gz
put /scratch/$USER/workarea/finished-job2.tar.gz : finished-job2.tar.gz
EOF
date
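- sample data offload using HTAR (a minimal sketch, not an official template: the work-area directory and archive name are placeholders, and htar is assumed to be on the default PATH on gpc-archive01; it aggregates a whole work area into a single tarball on HPSS instead of transferring pre-made tarballs with HSI)
#!/bin/bash
#PBS -q archive
#PBS -N htar_offload
#PBS -j oe
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
date
cd /scratch/$USER
# Aggregate the whole work area into one HTAR archive (plus its .idx index) on HPSS
htar -cf /archive/<group>/<user>/workarea.tar workarea
date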
- sample data list
- Very painful without interactive browsing. Tentative solution: dump all user files to a log file and use that as a file index.
#!/bin/bash
#PBS -q archive
#PBS -N hpss_dump
#PBS -j oe
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
date
echo ===========
echo
/usr/local/bin/hsi -v <<EOF
ls -lUR
EOF
echo
echo ===========
date
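- Once such a dump job has run, the resulting log can be searched locally to locate files without opening an interactive HSI session; for example (the log file name below is hypothetical and depends on the job name and ID):
grep Jan-2010 hpsslogs/hpss_dump.*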
- sample data restore
#!/bin/bash
# This script is named: data-restore.sh
#PBS -q archive
#PBS -N restore
#PBS -j oe
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
date
mkdir -p /scratch/knecht/restored-from-MARS
/usr/local/bin/hsi -v << EOF
get /scratch/knecht/restored-from-MARS/Jan-2010-jobs.tar.gz : forgotten-from-2010/Jan-2010-jobs.tar.gz
get /scratch/knecht/restored-from-MARS/Feb-2010-jobs.tar.gz : forgotten-from-2010/Feb-2010-jobs.tar.gz
EOF
date
- sample analysis (depends on previous data-restore.sh execution)
gpc04 $ qsub $(qsub data-restore.sh | awk -F '.' '{print "-W depend=afterok:"$1}') job-to-work-on-restored-data.sh
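- The dependent job itself is an ordinary compute job; a minimal sketch of what job-to-work-on-restored-data.sh might look like (job parameters are illustrative, the paths match the restore example above, and run-analysis.sh is a placeholder for the user's own analysis step):
#!/bin/bash
# This script is named: job-to-work-on-restored-data.sh
#PBS -l nodes=1:ppn=8,walltime=2:00:00
#PBS -N analysis
#PBS -j oe
cd /scratch/knecht/restored-from-MARS
# Unpack the tarballs staged back by data-restore.sh, then run the analysis
tar xzf Jan-2010-jobs.tar.gz
tar xzf Feb-2010-jobs.tar.gz
./run-analysis.sh   # placeholder for the actual analysis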