MARS

Massive Archive and Restore System

(Pilot testing project starting in May 2010 with a select group of users; still a work in progress)

The MARS deployment at SciNet is a combination of three software components, HPSS, HSI and HTAR, plus some customization of our environment. What we want to offer users is a way to offload/archive data from the most active file systems (scratch and project) without necessarily having to deal directly with the tape library or "tape commands".

HPSS: the main component, best described as a very scalable "black box" engine running in the background to support the archive and restore operations. High Performance Storage System - HPSS is the result of over a decade of collaboration among five Department of Energy laboratories and IBM, with significant contributions by universities and other laboratories worldwide. For now, the best way to understand HPSS is to compare it with our existing HSM-TSM implementation.

HSI: it may be best understood as a supercharged ftp interface, designed by Gleicher Enterprises to act as a front-end for HPSS, combining some of the best features of a shell, rsync and GridFTP. It enables users to transfer whole directory trees from /project and /scratch, freeing up space on the most active file systems. HSI is most suitable when those directory trees do not contain too many small files to start with, or when you already have a series of tarballs.
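
As an illustration only, a minimal sketch of a typical hsi session (assuming hsi is in your path and already configured to authenticate against SciNet's HPSS; the directory name run42 is hypothetical):

 # list your HPSS archive space
 hsi ls -l
 # recursively archive a directory tree from /scratch
 # (cput is a conditional put: files already stored in HPSS are skipped)
 cd /scratch/$USER
 hsi "cput -R run42"
 # later, restore the whole tree into the current working directory
 hsi "get -R run42"

Run interactively (just typing hsi), you get an ftp-like prompt where the same commands (ls, cd, put, get, rm, ...) can be issued one at a time.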

HTAR: similarly, htar is sort of a "super-tar" application, also designed by Gleicher Enterprises to interact with HPSS, allowing users to auto-magically build and transfer larger tarballs to/from HPSS. HTAR is most suitable for aggregating whole directory trees, provided that no individual file exceeds 68 GB. The maximum size of any htar file should not exceed 1 TB either.
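
For illustration, a hedged sketch of typical htar usage (the archive and member names below are hypothetical). htar creates the tar file directly in HPSS, together with an index file that allows fast listing and selective extraction:

 # bundle a directory tree from the current directory into a tar file in HPSS
 htar -cvf run42.tar run42
 # list the members of an archive already stored in HPSS
 htar -tvf run42.tar
 # extract a single member without retrieving the whole archive
 htar -xvf run42.tar run42/results/summary.dat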