Introduction To Performance

==The Concepts of Parallel Performance==

Parallel computing used to be a very specialized domain, but now even making the best use of your laptop, which almost certainly has multiple independent computing cores, requires understanding the basic concepts of performance in a parallel environment.

Most fundamentally, parallel programming allows three possible ways of getting more and better science done:

* '''Running many copies of the same program.''' If you have a program that works in serial, having many processors available to you allows you to run many copies of the same program at once, improving your '''''throughput'''''. This can be a fairly trivial use of parallel computing and doesn't require very specialized hardware, but it can be extremely useful for, for instance, running parameter studies or sensitivity studies. Best of all, this is essentially guaranteed to run efficiently if your serial code runs efficiently! Because this doesn't require fancy hardware, it is a waste of resources to use the [[TCS_Quickstart|Tightly Coupled System]] for these sorts of tasks; instead they must be run on the [[GPC_Quickstart|General Purpose Cluster]].
* '''Running the same program on many processors.''' This is what most people think of as parallel computing. It can take a lot of work to make an existing code run efficiently on many processors, or to design a new code to make use of these resources, but when it works, one can achieve a substantial '''''speedup''''' of individual jobs. This might mean the difference between a computation running in a feasible length of time for a research project or taking years to complete, so while it may be a lot of work, it may be your only option. To determine whether your code runs well on many processors, you need to measure ''speedup'' and ''efficiency''; to see how many processors one should use for a given problem, you must run '''''strong scaling tests'''''.
* '''Running larger problems.''' One achieves speedup by using more processors on the same problem, but by running your job in parallel you may also have access to more resources than just processors, for instance more memory or more disks. In this case, you may be able to run problems that simply wouldn't be possible on a single processor or a single computer; one can achieve significant '''''sizeup'''''. To find how large a problem one can efficiently run, one measures ''efficiency'' and runs '''''weak scaling tests'''''.

Of course, these aren't exclusive; one can take advantage of any combination of the above. It may be that your problem runs efficiently on 8 cores but no more; however, you may be able to make use of more processors by running many jobs to explore parameter space, and already on 8 cores you may be able to consider larger problems than you could with just one!

===Throughput===

===Speedup===

If <math>t(N,P)</math> is the time to solve a problem of size <math>N</math> on <math>P</math> processors, the speedup is

<math>S(N,P) = \frac{t(N,P=1)}{t(N,P)}</math>
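For example, with purely illustrative numbers: if a job takes 1000 seconds on one processor and 150 seconds on 8 processors for the same problem size, the speedup is

<math>S(N,8) = \frac{1000\ \mathrm{s}}{150\ \mathrm{s}} \approx 6.7</math>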

===Efficiency===

<math>E = \frac{S}{P}</math>
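Continuing the illustrative numbers above, the efficiency of that 8-processor run would be

<math>E = \frac{6.7}{8} \approx 0.83</math>

that is, roughly 83% of ideal scaling; efficiencies near 1 indicate the processors are being used effectively.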

===Strong Scaling===

==Serial Performance==

Worrying about parallel performance before the code performs well with a single task doesn't make much sense! Profiling your code when running with one task allows you to spot serial "hot spots" for optimization, as well as giving you a more detailed understanding of where your program spends its time. Some useful tools for this include:

* /bin/time
* gprof
* vtune (Intel)
* peekperf, hpmcount (p6)
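As a minimal sketch of how two of these might be used (assuming a GNU toolchain; the file name and function names below are invented for illustration), one can compile with profiling support, time the run as a whole, and then ask gprof where the time went:

<pre>
/* hotspot.c -- a toy example for illustrating /bin/time and gprof.
 * Compile with profiling enabled:   gcc -O2 -pg hotspot.c -o hotspot -lm
 * Time the whole run:               /bin/time ./hotspot
 * Then examine the profile:         gprof ./hotspot gmon.out | less
 */
#include <math.h>
#include <stdio.h>

/* A deliberately expensive routine -- the "hot spot". */
double expensive_sum(int n) {
    double s = 0.0;
    for (int i = 1; i <= n; i++)
        s += sin(i) * cos(i);
    return s;
}

/* A cheap routine, for contrast in the profile. */
double cheap_sum(int n) {
    double s = 0.0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

int main(void) {
    printf("%g %g\n", expensive_sum(50000000), cheap_sum(50000000));
    return 0;
}
</pre>

gprof's flat profile should attribute most of the run time to expensive_sum(), which is where optimization effort is best spent.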


===Weak Scaling===

==Parallel Performance Tools==

===Common OpenMP Performance Problems===

===Common MPI Performance Problems===

====Overuse of MPI_BARRIER====

====Many Small Messages====

Typically, the time it takes for a message of size n to get from one node to another can be expressed in terms of a latency l and a bandwidth b: <math>t_c = l + \frac{n}{b}</math>. For small messages, the latency can dominate the cost of sending (and processing!) the message. By bundling many small messages into one, you can amortize that cost over many messages, reducing the time spent communicating.
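As a rough sketch of the difference (the message count, tags, and variable names below are invented for illustration), sending m values one at a time costs roughly <math>m\,l + \frac{m n}{b}</math>, while bundling them into a single message costs roughly <math>l + \frac{m n}{b}</math>:

<pre>
/* bundle.c -- illustrative comparison of many small sends vs. one bundled send.
 * Build and run with two processes:  mpicc bundle.c -o bundle && mpirun -np 2 ./bundle
 */
#include <mpi.h>
#include <stdio.h>

#define M 1000   /* number of small messages / values */

int main(int argc, char **argv) {
    int rank;
    double vals[M];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < M; i++) vals[i] = (double)i;

        /* Slow way: M separate messages, each paying the latency. */
        for (int i = 0; i < M; i++)
            MPI_Send(&vals[i], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

        /* Faster way: one bundled message of M values, paying the latency once. */
        MPI_Send(vals, M, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Matching receives for the small messages... */
        for (int i = 0; i < M; i++)
            MPI_Recv(&vals[i], 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ...and for the single bundled message. */
        MPI_Recv(vals, M, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d values twice\n", M);
    }

    MPI_Finalize();
    return 0;
}
</pre>

On a network where the latency l is large compared to n/b, the bundled version can be dramatically faster.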

====Non-overlapping of computation and communications====
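A common way to address this, sketched below with invented array names and sizes, is to use nonblocking sends and receives: start the communication, do whatever computation does not depend on the incoming data, and only then wait for the messages to complete.

<pre>
/* overlap.c -- sketch of overlapping computation with communication using
 * nonblocking MPI calls.  Array names and sizes are purely illustrative.
 * Build and run with exactly two processes:
 *   mpicc overlap.c -o overlap && mpirun -np 2 ./overlap
 */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    int rank, other;
    static double mine[N], theirs[N];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;          /* assumes exactly two ranks */

    for (int i = 0; i < N; i++) mine[i] = rank + i;

    /* Start the exchange, but don't wait for it yet. */
    MPI_Isend(mine,   N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(theirs, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Do work that does not depend on the incoming data while the
     * messages are in flight. */
    double local = 0.0;
    for (int i = 0; i < N; i++) local += mine[i];

    /* Only now wait for the communication to finish, then use the data. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    double remote = 0.0;
    for (int i = 0; i < N; i++) remote += theirs[i];

    printf("rank %d: local sum %g, remote sum %g\n", rank, local, remote);
    MPI_Finalize();
    return 0;
}
</pre>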