Gromacs

Download and general information: http://www.gromacs.org

Search the mailing list archives: http://oldwww.gromacs.org/swish-e/search/search2.php

=Peculiarities of running single-node GROMACS jobs on SciNet=

This is '''very important!''' Please read the [https://support.scinet.utoronto.ca/wiki/index.php/User_Tips#Running_single_node_MPI_jobs relevant user tips section] for information that is essential for your single-node (up to 8-core) MPI GROMACS jobs.
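
As a concrete illustration, a minimal single-node script might look like the sketch below. It assumes Torque-style directives and a GROMACS module on GPC; the module name, walltime, and file names (the <tt>md</tt> passed to -deffnm) are placeholders, so treat the user tips page as the authoritative recipe.

<pre>
#!/bin/bash
# Sketch of a single-node (8-core) GPC job. The module name, walltime,
# and input names are placeholders -- adapt them to your own setup.
#PBS -l nodes=1:ppn=8,walltime=12:00:00
#PBS -N gromacs_1node

cd $PBS_O_WORKDIR

# Hypothetical module name; check "module avail" on GPC.
module load gromacs

# One MPI process per core, all on a single node.
mpirun -np 8 mdrun_mpi -deffnm md
</pre>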

-- [[User:Cneale|cneale]] 14 September 2009

=Compiling GROMACS on SciNet=

Please refer to the [[Compiling_Gromacs|GROMACS compilation page]].
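
For orientation only: GROMACS 4.0.x builds with the usual autoconf pattern, roughly as sketched below. The install prefix and FFTW paths are placeholders; the compilation page above remains the authoritative reference.

<pre>
# Rough sketch of a GROMACS 4.0.5 autoconf build (paths are placeholders).
# FFTW3 must be visible to the preprocessor and linker:
export CPPFLAGS=-I/path/to/fftw3/include
export LDFLAGS=-L/path/to/fftw3/lib

./configure --prefix=$HOME/gromacs-4.0.5 \
            --enable-mpi --with-fft=fftw3 \
            --program-suffix=_mpi
make
make install
</pre>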

=Submitting GROMACS jobs on SciNet=

Please refer to the [[Running_Gromacs|GROMACS submission page]].
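
As a rough illustration of a multi-node run (for example, the 56-core IB benchmark mentioned below, i.e. 7 nodes x 8 cores), a script might look like the following sketch. The ":ib" node property and the module name are assumptions; see the submission page above for the exact GPC syntax.

<pre>
#!/bin/bash
# Sketch of a 7-node (56-core) InfiniBand job. The ":ib" node property
# and module name are assumptions -- consult the submission page.
#PBS -l nodes=7:ib:ppn=8,walltime=24:00:00
#PBS -N gromacs_ib

cd $PBS_O_WORKDIR
module load gromacs

mpirun -np 56 mdrun_mpi -deffnm md
</pre>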

=Things still left to do for GROMACS=

Intel has its own fast Fourier transform library, which we expect to yield improved performance over FFTW. We have not yet attempted such a compilation.
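
When we do attempt it, the build would plausibly be the FFTW recipe with the FFT backend switched over, since GROMACS 4.0's configure script accepts --with-fft=mkl. The MKL paths below are placeholders and this is an untested sketch, not a verified recipe.

<pre>
# Untested sketch: the same autoconf build, with Intel's MKL FFT backend.
# MKL include/library paths are placeholders for the actual install location.
export CPPFLAGS=-I/path/to/mkl/include
export LDFLAGS=-L/path/to/mkl/lib

./configure --prefix=$HOME/gromacs-4.0.5-mkl \
            --enable-mpi --with-fft=mkl \
            --program-suffix=_mpi
make
make install
</pre>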

-- [[User:Cneale|cneale]] 18 August 2009

=GROMACS benchmarks on SciNet=

This is a rudimentary collection of scaling information.

I have a 50K-atom system running on GPC right now. On 56 cores connected with InfiniBand I am getting 55 ns/day. I set up 50 such simulations, each with 2 proteins in a bilayer, and I'm getting a total of 5.5 μs per day. I am using GROMACS 4.0.5 and a 5 fs timestep, achieved by fixing the bond lengths and all angles involving hydrogen.
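
In .mdp terms, that constraint setup corresponds to the fragment below (constraints = h-angles converts all bonds, plus the angles involving hydrogen, into constraints). The LINCS settings shown are illustrative choices for a 5 fs timestep, not the exact values used in these runs.

<pre>
; Illustrative .mdp fragment for the 5 fs setup described above
dt                    = 0.005     ; 5 fs timestep
constraints           = h-angles  ; constrain all bonds + angles involving H
constraint-algorithm  = lincs
lincs-order           = 6         ; illustrative; a stiffer LINCS helps at 5 fs
</pre>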

I can get about 12 ns/day on 8 cores of the non-IB part of GPC -- also excellent.

As for larger systems, my speedup over saw.sharcnet.ca for a 1e6-atom system is only 1.2x running on 128 cores in single precision. Although saw.sharcnet.ca is composed of Xeons, they run at 2.83 GHz (https://www.sharcnet.ca/my/systems/show/41), a faster clock speed than the 2.5 GHz of SciNet's newer-generation Intel x86 CPUs. While GROMACS generally does not scale well to 128 cores or beyond (even for large systems), our benchmarking of this system on saw.sharcnet.ca indicated that it was running at about 65% efficiency there. Benchmarking was also done on SciNet for this system, but was not recorded, as we were mostly tinkering with the -npme option to mdrun in an attempt to optimize it. My recollection, though, is that the scaling was similar on SciNet.
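
For reference, that tinkering amounts to scanning runs of the following form, varying the number of dedicated PME processes; the value of 32 below is just one illustrative point in such a scan.

<pre>
# Example of scanning mdrun's -npme option on 128 cores;
# -npme 32 is an illustrative value, not a recommendation.
mpirun -np 128 mdrun_mpi -npme 32 -deffnm md
</pre>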

-- [[User:Cneale|cneale]] 19 August 2009

=Strong scaling for GROMACS on GPC=

Requested, and on our list to complete, but not yet available in a complete chart form.

-- [[User:Cneale|cneale]] 19 August 2009

=Scientific studies being carried out using GROMACS on GPC=

Requested, but not yet available.

-- [[User:Cneale|cneale]] 19 August 2009