User Ramdisk


Ram Disk

On the GPC nodes, a "ram disk" is available. Up to half of the memory on a node may be used as a temporary file system. This is particularly useful in the early stages of migrating desktop-computing codes to a High Performance Computing platform such as the GPC, especially for codes that perform a lot of I/O, such as Blast. Heavy I/O easily becomes a bottleneck in large-scale computing, and the performance penalty is especially severe on parallel file systems (such as the GPFS used at SciNet), since files are synchronized across the whole network.

A ramdisk is much faster than a real disk, and is especially beneficial for codes that perform many small I/O operations, since the ramdisk requires no network traffic. However, each node sees only its own ramdisk and cannot access files on the ramdisks of other nodes.

To use the ramdisk, create files in, write to and read from /dev/shm/$USER just as you would in, e.g., /scratch/$USER. Only the amount of RAM needed to store the files is taken up by the temporary file system. Thus, if you have 8 serial jobs each requiring 1 GB of RAM, and 1 GB is taken up by various OS services, you would still have approximately 7 GB available to use as ramdisk on a 16 GB node. However, if you were to write 8 GB of data to the ramdisk, this would exceed the available memory and your job would likely crash.
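
As a quick illustration of this, the following sketch (run interactively on a compute or development node) writes a small test file to the ramdisk and checks how much memory it consumes; the file name test.dat is just a placeholder:

mkdir -p /dev/shm/$USER                                      # keep your files in your own subdirectory
dd if=/dev/zero of=/dev/shm/$USER/test.dat bs=1M count=100   # write a 100 MB test file
df -h /dev/shm                                               # total size and current usage of the ramdisk
du -sh /dev/shm/$USER                                        # memory consumed by your own files
rm -rf /dev/shm/$USER                                        # always clean up afterwards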

Note that when using the ramdisk:

  • At the start of your job, copy frequently accessed files to the ramdisk. If there are many such files, it is beneficial to pack them into a single tar file.
  • Periodically copy the output files to /scratch or /project so that they are available after the job has completed (see the sketches below).
  • It is very important to delete your files from the ramdisk at the end of your job. If you do not, the next user of that node will have less RAM available than they might expect, which may kill their jobs.
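
For a simple serial job, a minimal sketch of this copy-in / run / copy-out / clean-up pattern might look as follows (myprog, data.in and results.out are placeholder names):

#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -N ramdisk-minimal
mkdir -p /dev/shm/$USER
cp $PBS_O_WORKDIR/myprog $PBS_O_WORKDIR/data.in /dev/shm/$USER
cd /dev/shm/$USER
./myprog data.in > results.out        # all I/O happens in memory
cp results.out $PBS_O_WORKDIR         # save the output to disk before the job ends
rm -rf /dev/shm/$USER                 # free the memory for the next user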

A more complete script, using the ramdisk in a one-hour OpenMP job and saving intermediate results periodically, might look like this:

#!/bin/bash
#MOAB/Torque submission script for SciNet GPC 
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -N ramdisk-test

#Job parameters:
execname=job          # name of the executable
input_tar=input.tar   # tar file with input files and executables
output_tar=out.tar    # file in which to store output
input_subdir=indir    # sub-directory (within input_tar) with input files
output_subdir=outdir  # sub-directory to contain the output files
poll_period=60        # how often to check for job completion (in seconds)
save_period=120       # how often to save output (in minutes)

#Copy to ramdisk
echo "Setting up files on ramdisk directory /dev/shm/$USER"
mkdir -p /dev/shm/$USER
mkdir -p /dev/shm/$USER/$output_subdir
cd /dev/shm/$USER
cp $PBS_O_WORKDIR/$input_tar .
tar xf $input_tar
rm -rf $input_tar

#Run on ramdisk
echo "Starting job"
./$execname $input_subdir $output_subdir &
pid=$!

function save_results {    
    echo "Copying from directory $output_subdir to file $PBS_O_WORKDIR/$output_tar"
    tar cf $output_tar $output_subdir/*
    cp $output_tar $PBS_O_WORKDIR
}

function cleanup_ramdisk {
    echo "Cleaning up ramdisk directory /dev/shm/$USER"
    rm -rf /dev/shm/$USER
    echo "done"
}

function trap_term {
    echo "Trapped term (soft kill) signal"
    save_results
    cleanup_ramdisk
    exit
}

function interruptible_sleep {
    # a sleep $1 would not be interruptible
    for m in `seq $1`; do  
        sleep 1
    done
}

function is_running {
    #check if a process is running
    ps -p $1 -o pid= | wc -l
}

trap "trap_term" TERM

#number of pollings per save period (rounded down):
npoll=$(($save_period*60/$poll_period))

#polling and saving loop
running=$(is_running $pid)
while [ $running -gt 0 ]
do
    for n in `seq $npoll`
    do
        interruptible_sleep $poll_period
        running=$(is_running $pid)
        if [ $running -eq 0 ]; then
            break
        fi
    done
    save_results
done

#Done
cleanup_ramdisk

echo "Clean end of job"

Notes:

  • The script assumes that the tar file input.tar contains the executable job and the input files in a subdirectory called indir (a sketch of how to create and submit such a job follows these notes).
  • The executable is expected to take the locations of the input and output directories as its arguments.
  • The trap command makes sure that the results get saved and the ramdisk gets cleaned up even when the job gets killed before the end of the script is reached. trap is a bash construct that executes the given command when the job receives, in this case, a TERM signal. The TERM signal is sent by the scheduler 30 seconds before your time is up.
  • All files are kept in a subdirectory of /dev/shm. This makes the clean up simpler, and keeps things tidy when doing small test jobs on the development nodes.
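
As a rough sketch of how one might prepare and submit such a job (the job script name ramdisk_job.sh is a placeholder):

tar cf input.tar job indir/    # package the executable and the input sub-directory
qsub ramdisk_job.sh            # submit to the queue with the Torque/Moab qsub command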