MARS


Massive Archive and Restore System

(Pilot usage phase to start in May 2011 with a select group of users. Deployment and configuration are still a work in progress.)

The MARS deployment at SciNet is an effort to offer a much more efficient way to offload/archive data from the most active file systems (scratch and project) than our current TSM-HSM solution, without necessarily having to deal directly with the tape library or "tape commands".

The system is a combination of three software components (HPSS, HSI and HTAR), plus the underlying hardware infrastructure and some environment customization.

HPSS: the main component, best described as a very scalable "black box" engine running in the background to support the archive and restore operations. High Performance Storage System (HPSS) is the result of over a decade of collaboration among five Department of Energy laboratories and IBM, with significant contributions by universities and other laboratories worldwide. For now, the best way for SciNet users to understand HPSS may be to compare it with our existing TSM-HSM implementation.

HSI: best understood as a supercharged FTP interface, specially designed by Gleicher Enterprises to act as a front-end for HPSS, combining some of the best features you would encounter in a shell, rsync and GridFTP (and a few more). It enables users to transfer whole directory trees from /project and /scratch into HPSS, thereby freeing up space on those file systems. HSI is most suitable when the directory trees do not contain too many small files to start with, or when you already have a series of tarballs.
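
A minimal sketch of a typical HSI session is shown below; the HPSS-side directory name (my_archive) and the local path are placeholders, and the exact set of options available should be checked against the HSI documentation for the SciNet setup.

 # create a directory inside HPSS and recursively (conditionally) copy a tree into it
 hsi "mkdir my_archive; cd my_archive; cput -R /scratch/$USER/results"

 # list what is stored, then retrieve the tree back at a later date
 hsi "ls -l my_archive"
 hsi "cd my_archive; cget -R results"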

HTAR: similarly, htar is a sort of "super-tar" application, also specially designed by Gleicher Enterprises to interact with HPSS, allowing users to build tarballs and transfer them to HPSS automatically, on the fly. HTAR is most suitable for aggregating whole directory trees, provided that no individual file exceeds 68 GB. The total size of any htar archive should not exceed 1 TB either.
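
As a rough illustration (the archive name results.tar and the source path are placeholders; htar follows the familiar tar option letters, but consult the HTAR documentation for the options supported here):

 # bundle a directory tree into a tar archive created directly inside HPSS
 htar -cvf results.tar /scratch/$USER/results

 # list the contents of the archive stored in HPSS
 htar -tvf results.tar

 # extract the archive back into the current working directory
 htar -xvf results.tar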