=== '''Massive Archive and Restore System''' ===
{| style="border-spacing: 8px; width:100%"
| valign="top" style="cellpadding:1em; padding:1em; border:2px solid; background-color:#f6f674; border-radius:5px"|
'''WARNING: SciNet is in the process of replacing this wiki with a new documentation site. For current information, please go to [https://docs.scinet.utoronto.ca https://docs.scinet.utoronto.ca]'''
|}
  
(Pilot usage phase to start in Jun/2011 with a select group of users. Deployment and configuration are still a work in progress)
MARS is no more. [https://support.scinet.utoronto.ca/wiki/index.php/HPSS Follow this link]
 
 
The '''MARS''' deployment at SciNet is an effort to offer a more efficient way to offload/archive data from the most active file systems (scratch and project) than our current TSM-HSM solution, without having to deal directly with the tape library or "tape commands".
 
 
 
The system combines the underlying hardware infrastructure with three software components, ''HPSS'', ''HSI'' and ''HTAR'', plus some environment customization.
 
 
 
* '''HPSS''': the main software component, best described as a very scalable engine running on a "blackbox" made of disks and tapes, to support the Archive and Restore operations. [http://www.hpss-collaboration.org/index.shtml High Performance Storage System - HPSS] is the result of over a decade of collaboration among five Department of Energy laboratories and IBM, with significant contributions by universities and other laboratories worldwide. For now the best way for SciNet users to [https://support.scinet.utoronto.ca/wiki/index.php/HPSS_compared_to_HSM-TSM understand HPSS] may be to compare it with our existing HSM-TSM implementation.
 
 
 
* '''HSI''': it may be best understood as a supercharged FTP interface, specially designed by [http://www.mgleicher.us/GEL/hsi/ Gleicher Enterprises] to act as a front-end for HPSS, combining some of the best features of a shell, rsync and GridFTP (and a few more). It enables users to transfer whole directory trees from /project and /scratch, thereby freeing up space. HSI is most suitable when those directory trees do not contain too many small files to start with, or when you already have a series of tarballs.
 
 
 
* '''HTAR''': similarly, htar is sort of a "super-tar" application, also specially designed by [http://www.mgleicher.us/GEL/htar/ Gleicher Enterprises] to interact with HPSS, allowing users to build and automatically transfer tarballs to HPSS on the fly. HTAR is most suitable to aggregate whole directory trees. When HTAR creates the TAR file, it also builds an index file, with a ".idx" suffix added, which is stored in the same directory as the TAR file.
 
 
 
=== '''General guidelines''' ===
 
* IN/OUT transfers to HPSS using HSI are bound to a maximum of about '''4 files/second'''. Therefore do not attempt to transfer directories containing many small files. Not only will it take "forever", it will also induce a lot of wear and tear on the library's robot mechanism, as well as on the tapes themselves in case of recalls. Instead use HTAR, so the files are aggregated while being sent to HPSS
 
* The maximum size that an individual file can have inside an HTAR archive is '''68GB'''. Please be sure to identify and "fish out" files larger than 68GB from the directories and transfer them with HSI
 
* The maximum size of a tar file that HPSS will take is '''1TB'''. Please do not generate tarballs that large.
 
* The maximum number of files in an HTAR archive is '''1 million'''. Please break up your htar segments as required.
 
* These guidelines may be strict, but as long as they are followed the system will perform reasonably well
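A directory can be checked against these limits before archiving. Below is a minimal sketch; the <code>check_archive_dir</code> helper and the demo directory are illustrative, not part of MARS:

```shell
#!/bin/bash
# Check a directory tree against the MARS guidelines before archiving:
#   - files larger than 68GB cannot go inside an htar archive (use HSI)
#   - an htar archive must stay under 1TB and under 1 million members
check_archive_dir() {
    local dir="$1"
    local oversized count bytes
    # Files too large for HTAR (>68GB) -- send these with HSI instead
    oversized=$(find "$dir" -type f -size +68G | wc -l)
    # Member count, against the 1-million-file htar limit
    count=$(find "$dir" -type f | wc -l)
    # Total size in bytes, against the 1TB tar-file limit
    bytes=$(du -sb "$dir" | cut -f1)
    echo "files=$count oversized=$oversized bytes=$bytes"
}

# Demo on a small throwaway directory
demo=$(mktemp -d)
echo "sample data A" > "$demo/a.dat"
echo "sample data B" > "$demo/b.dat"
result=$(check_archive_dir "$demo")
echo "$result"
rm -rf "$demo"
```
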
 
 
 
=== '''Performance considerations''' ===
 
* Files are kept on disk-cache for as long as possible, so as to avoid tape operations during recalls.
 
* Average transfer rates with '''HSI''' (no small files, average > 1MB/file):
** write: 100-130MB/s
** read: 450-600MB/s (if no recall from tapes is required)
 
 
 
* <span style="color:#CC0000">'''NOTE: do not use HSI with small files (< 1MB/file). It would take over 1 week to transfer 1 TB. If we find that you are abusing the system we'll suspend your privileges'''</span>
 
 
 
* Average transfer rates with '''HTAR'''
** average file size > 1MB:
*** write: 120MB/s
*** read: 480MB/s (if no recall from tapes is required)
** not too many small files, average > 100KB/file:
*** write: 64MB/s
*** read: 170MB/s (if no recall from tapes is required)
 
 
 
* Average transfer rates from '''tapes''', if staging is required (add to the above estimates):
** read: 80-100MB/s per tape drive
** a maximum of 4 drives may be used per hsi/htar session
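Because the ~4 files/second HSI limit dominates small-file transfers regardless of bandwidth, a back-of-envelope estimate shows why HSI on small files is discouraged (the file count and average size below are illustrative):

```shell
#!/bin/bash
# Back-of-envelope: HSI moves at most ~4 files/second, so for small
# files the file count, not the bandwidth, sets the transfer time.
# Example: 1TB of files averaging 256KB each -> ~4 million files.
nfiles=4000000
rate=4                          # files per second (HSI ceiling)
seconds=$(( nfiles / rate ))    # = 1000000 s
days=$(( seconds / 86400 ))     # ~ 11 days
echo "$nfiles files -> $seconds s (~$days days)"
```
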
 
 
 
=== '''Quick Reference''' ===
 
 
 
* Users must request authorization to access MARS@SciNet. To run HSI or HTAR with HPSS please login to the '''gpc-archive01''' node.
 
* To browse the contents of your HPSS archive just type '''hsi''' on a shell to get the hsi prompt. Then use simple commands such as '''ls''', '''pwd''', '''cd''' to navigate your way around. You may also use [https://support.scinet.utoronto.ca/wiki/index.php/HSI_help '''help'''] from the hsi prompt.
 
* Files are organized inside HPSS in the same fashion as in /project. Users in the same group have read permissions to each other's archives.
 
<pre>
 
[HSI]/archive/<group>/<user>
 
</pre>
 
* '''Pilot users:''' <span style="color:#CC0000">DURING THE TESTING PHASE DO NOT DELETE THE ORIGINAL FILES FROM /scratch OR /project</span>
 
 
 
=== '''Using HSI''' ===
 
 
 
* Interactively put a subdirectory ''LargeFiles'' and all its contents recursively. You may use the '-u' option to resume a previously disrupted session (as rsync would do).
 
<pre>
 
    hsi <RETURN>
 
    [HSI] prompt
 
    [HSI] mput -R -u LargeFiles</pre>
 
 
 
* Same as above, but from a shell
 
<pre>
 
    hsi "prompt; mput -R -u LargeFiles"
 
</pre>
 
 
 
* Interactively descend into the ''Source'' directory and move all files which end in ".h" into a sibling directory (ie, a directory at the same level in the tree as "Source") named "Include":
 
<pre>
 
    hsi <RETURN>
 
    [HSI] cd Source
 
    [HSI] mv *.h ../Include
 
</pre>
 
 
 
* Delete all files beginning with "m" and ending with 9101 (note that this is an interactive request, not a one-liner request, so the wildcard path does not need quotes to preserve it):
 
<pre>
 
    hsi <RETURN>
 
    [HSI] delete m*9101
 
</pre>
 
 
 
* Interactively delete all files beginning with H and ending with a digit, and ask for verification before deleting each such file.
 
<pre>
 
    hsi <RETURN>
 
    [HSI] mdel H*[0-9]
 
</pre>
 
 
 
* From a shell, save your local files that begin with the letter "c" (let the UN*X shell resolve the wild-card path pattern in terms of your local files by not enclosing it in quotes):
 
<pre>
 
    hsi put c*
 
</pre>
 
 
 
* From a shell, get all files in the subdirectory ''subdirA'' which begin with the letters "b" or "c" (surrounding the wildcard path in single quotes prevents shells on UNIX systems from processing the wild card pattern):
 
<pre>
 
    hsi get 'subdirA/[bc]*'
 
</pre>
 
 
 
* Save a "tar file" of C source programs and header files:
 
<pre>
 
  tar cf - *.[ch] | hsi put - : source.tar
 
</pre>
 
Note: the ":" operator which separates the local and HPSS pathnames must be surrounded by whitespace (one or more space characters)
 
 
 
* Restore the tar file stored above and extract all files:
 
<pre>
 
    hsi get - : source.tar | tar xf -
 
</pre>
 
 
 
* The commands below are equivalent (the default HSI directory placement is /archive/<group>/<user>/):
 
<pre>
 
    hsi put source.tar
 
    hsi put source.tar : /archive/<group>/<user>/source.tar
 
</pre>
 
 
 
* For more details please check the [http://www.mgleicher.us/GEL/hsi/ HSI Introduction] or the [http://www.mgleicher.us/GEL/hsi/hsi_man_page.html HSI Man Page] online
 
 
 
=== '''Using HTAR''' ===
 
 
 
* To write the ''file1'' and ''file2'' files to a new archive called ''files.tar'' in the default HPSS home directory, enter:
 
<pre>
 
    htar -cf files.tar file1 file2
 
OR
 
    htar -cf /archive/<group>/<user>/files.tar file1 file2
 
</pre>
 
 
 
*  To write the ''file1'' and ''file2'' files to a new archive called ''files.tar'' on a remote FTP server called "blue.pacific.llnl.gov", creating the tar file in the user’s remote FTP home directory, enter (bonus HTAR functionality to sites outside SciNet):
 
<pre>
 
    htar -cf files.tar -F blue.pacific.llnl.gov file1 file2
 
</pre>
 
 
 
* To extract all files from the ''project1/src'' directory in the Archive file called ''proj1.tar'', and use the time of extraction as the modification time, enter:
 
<pre>
 
    htar -xm -f proj1.tar project1/src
 
</pre>
 
 
 
* To display the names of the files in the ''out.tar'' archive file within the HPSS home directory, enter:
 
<pre>
 
    htar -vtf out.tar
 
</pre>
 
 
 
For more details please check the [http://www.mgleicher.us/GEL/htar/ HTAR - Introduction] or the [http://www.mgleicher.us/GEL/htar/htar_man_page.html HTAR Man Page] online
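To stay under the 1TB / 1-million-file limits, large trees can be archived one subdirectory at a time. The sketch below uses plain tar so it runs anywhere; with access to MARS the tar invocation would be replaced by the corresponding <code>htar -cf</code> call. The <code>archive_per_subdir</code> helper and paths are illustrative:

```shell
#!/bin/bash
# Archive each top-level subdirectory as its own tarball so that no
# single archive exceeds the htar limits (1TB, 1 million members).
# Uses plain tar locally; substitute "htar -cf" when running on MARS.
archive_per_subdir() {
    local src="$1" dest="$2"
    local d name
    for d in "$src"/*/; do
        name=$(basename "$d")
        # On gpc-archive01 this would be: htar -cf "$name.tar" "$name"
        tar cf "$dest/$name.tar" -C "$src" "$name"
        echo "archived $name"
    done
}

# Demo with two small subdirectories
demo=$(mktemp -d)
mkdir -p "$demo/src/run1" "$demo/src/run2" "$demo/out"
echo x > "$demo/src/run1/a.txt"
echo y > "$demo/src/run2/b.txt"
log=$(archive_per_subdir "$demo/src" "$demo/out")
echo "$log"
archives=$(ls "$demo/out")
echo "$archives"
rm -rf "$demo"
```
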
 
 
 
=== '''Using the batch queue''' ===
 
* gpc-archive01 is part of the gpc queuing system under torque/moab
 
* Currently it is set up to share the node with up to 12 jobs at a time
 
* default parameters ( -l nodes=1:ppn=1,walltime=48:00:00)
 
<pre>
 
showq -w class=archive
 
 
 
qsub -I -q archive
 
</pre>
 
 
 
* sample '''data offload'''
 
<pre>
 
#!/bin/bash
 
 
 
# This script is named: data-offload.sh
 
 
 
#PBS -q archive
 
#PBS -N offload
 
#PBS -j oe
 
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
 
 
 
date
 
 
 
# individual tarballs already exist
 
/usr/local/bin/hsi  -v <<EOF
 
mkdir put-away-and-forget
 
cd put-away-and-forget
 
put /scratch/$USER/workarea/finished-job1.tar.gz : finished-job1.tar.gz
 
put /scratch/$USER/workarea/finished-job2.tar.gz : finished-job2.tar.gz
 
EOF
 
 
 
# create a tarball on-the-fly of the finished-job3 directory
 
/usr/local/bin/htar -cf finished-job3.tar /scratch/$USER/workarea/finished-job3/
 
 
 
date
 
</pre>
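Before offloading, it can be worth verifying gzipped tarballs locally, since a corrupt archive is far more painful to discover after a tape recall. A minimal sketch (the <code>verify_tarballs</code> helper is illustrative, not part of MARS):

```shell
#!/bin/bash
# Verify gzipped tarballs before offloading them to HPSS.
# "gzip -t" tests integrity without extracting anything.
verify_tarballs() {
    local f status=0
    for f in "$@"; do
        if gzip -t "$f" 2>/dev/null; then
            echo "OK  $f"
        else
            echo "BAD $f"
            status=1
        fi
    done
    return $status
}

# Demo with one good and one deliberately corrupt tarball
demo=$(mktemp -d)
echo "finished job output" > "$demo/data.txt"
tar czf "$demo/good.tar.gz" -C "$demo" data.txt
echo "not really gzip data" > "$demo/bad.tar.gz"
report=$(verify_tarballs "$demo/good.tar.gz" "$demo/bad.tar.gz")
echo "$report"
rm -rf "$demo"
```
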
 
* sample '''data list'''
 
** Very painful without interactive browsing
** Tentative solution: dump all user files to a log file and use that as a file index
 
<pre>
 
#!/bin/bash
 
 
 
# This script is named: data-list.sh
 
 
 
#PBS -q archive
 
#PBS -N hpss_dump
 
#PBS -j oe
 
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
 
 
 
date
 
echo ===========
 
echo
 
/usr/local/bin/hsi  -v <<EOF
 
ls -lUR
 
EOF
 
echo
 
echo ===========
 
date
 
</pre>
 
 
 
* sample '''data restore'''
 
<pre>
 
#!/bin/bash
 
 
 
# This script is named: data-restore.sh
 
 
 
#PBS -q archive
 
#PBS -N restore
 
#PBS -j oe
 
#PBS -o hpsslogs/$PBS_JOBNAME.$PBS_JOBID
 
 
 
date
 
 
 
mkdir -p /scratch/$USER/restored-from-MARS
 
 
 
/usr/local/bin/hsi  -v << EOF
 
get /scratch/$USER/restored-from-MARS/Jan-2010-jobs.tar.gz : forgotten-from-2010/Jan-2010-jobs.tar.gz
 
get /scratch/$USER/restored-from-MARS/Feb-2010-jobs.tar.gz : forgotten-from-2010/Feb-2010-jobs.tar.gz
 
EOF
 
 
 
cd /scratch/$USER/restored-from-MARS
 
/usr/local/bin/htar -xf finished-job3.tar
 
 
 
date
 
</pre>
 
 
 
* sample '''analysis''' (depends on previous data-restore.sh execution)
 
<pre>
 
gpc04 $ qsub $(qsub data-restore.sh | awk -F '.' '{print "-W depend=afterok:"$1}') job-to-work-on-restored-data.sh
 
</pre>
 

Latest revision as of 19:37, 31 August 2018
