PARALLEL COMPUTING - Details and intro to MPI
The first email tells you the job began. The second tells you it finished and gives an Exit_status: 0 means you got results;
anything else means a problem - e.g. I got 127 when I forgot to compile the program, so the executable did not exist. A list of the meanings of the error codes is at:
- The TAMNUN homepage is
here. Press the ENGLISH button if you prefer! Use it to get a personal account, join the mailing list, etc. Please note that the links below to phycomp will work from anywhere; those to the help webpages on TAMNUN work only from within the Technion.
- The students attending the computational physics lecture in 2015 have been allocated an account on TAMNUN. Let's call yours course0x (6 < x < 20); see the list handed around in class to find out which one you got.
- Now we detour to the PBS link, if we did not cover it last week.
I would like to do this on aluf, so if you are an undergrad please share a screen or
look at the projector.
- At the moment it is possible to use TAMNUN in scalar mode
(one core at a time - this option may be removed if the computer gets full),
with software that uses threads or OpenMP on
multiple cores of a single node, or with
the MPI message passing interface. We will learn MPI only, although
one example of MATLAB on multiple cores prepared by Adam Levi will be
shown later. I will also give an example of using MPI to submit multiple
jobs with different parameters to different cores.
We do not have a license for parallel MATLAB for codes that span more than a single node; since there are 12 cores on each node, this was not a priority.
CONTENTS of CLASS PROGRAM PACKAGE:
- Programs you will need include -
a .cshrc example to put in your main directory:
# This is .cshrc example
if (! $?prompt) exit
# If you need PATH additions, please refer to the TAMNUN team hpc@tx
set path = ( $path $HOME/bin . )
set prompt="%Stamnun [%/] %h >%s "
alias cp 'cp -i'
alias mv 'mv -i'
alias rm 'rm -i'
(Install this file with the command
cp cshrc-example .cshrc
and it will work)
and some files to put in a working directory - we will call it test for the c files and programs for the fortran files -
plus two modifiable queue submission scripts
for submission to the regular job queue and the graphics queue. Scripts for submission to fancier queues are on the page.
Several c and fortran programs that will be explained later are also included
(the directory will appear when you untar the file).
Today we will use simphony_q only, so you have to edit the script accordingly.
If you get ``permission denied'' on a .sh file (e.g. on cat.sh), you need to write chmod a+x cat.sh to make it executable.
A good PDF on MPI
from the Ohio Supercomputer Center can help you get started
[local version]. We will work through part of this together to start, then go to our examples and perhaps return to advanced MPI later.
- More links to MPI courses are here. Dr Anne Weill's NANCO MPI course has a good MPI summary, although the machine details are outdated. Finally, the ultimate references are the Intel getting started
and reference manual guides.
Let's get started!
- Have your account name and password ready, as well as a window to the computer where you are running your browser and one to TAMNUN.
- Go into your TAMNUN account with ssh -X tamnun -l course0x
(passwords to be given in class). For 2015/6 these accounts are ready for the transfer of the tar file tamprog.tar with the programs you will need.
- Download the tar file (take the save-to-disk option)
with the programs you need for our practice session on TAMNUN. Download it to your local account, then do
scp tamprog.tar course0x@tamnun:
to move it over.
- Go into your open tamnun window and do
tar -xvf tamprog.tar
to open up the tar archive. You will get some scripts and c/fortran programs.
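The download-unpack workflow above can be rehearsed end-to-end without a TAMNUN account; the sketch below builds a throwaway archive in /tmp as a stand-in for tamprog.tar (the demo file names and path are placeholders, and the scp step is commented out since it needs your course0x account):

```shell
# Stand-in for the tamprog.tar workflow, using a demo archive in /tmp.
mkdir -p /tmp/tamdemo/src && cd /tmp/tamdemo
echo 'int main(void){return 0;}' > src/hello_world.c
tar -cf tamprog_demo.tar -C src hello_world.c   # build the demo archive
# scp tamprog_demo.tar course0x@tamnun:         # copy to TAMNUN (needs account)
tar -xvf tamprog_demo.tar                       # unpack, as you would on TAMNUN
ls hello_world.c                                # the program file appears
```

On TAMNUN itself you only need the scp and the tar -xvf steps, run on the real tamprog.tar.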
- Write (this is pre-done for 2015/6 class accounts)
cp cshrc-example .cshrc to install the startup file.
- The sample codes practice asking the different cores to send messages saying ``Hello''. The steps you need are:
- Selecting your language and code - I will use c for this example and the code called hello_world.c
- Compiling it
mpicc hello_world.c -o hello_world.ex
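For reference, a minimal hello_world.c follows the standard MPI hello-world pattern; this is a generic sketch, and the file shipped in tamprog.tar may differ in its details:

```c
/* Minimal MPI "Hello" sketch - each core reports its rank.
   Compile with: mpicc hello_world.c -o hello_world.ex       */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of cores  */

    printf("Hello from core %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down cleanly      */
    return 0;
}
```

When run on N cores, each prints one "Hello" line; the order of the lines is not guaranteed, which is itself a useful first lesson about parallel output.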
- Selecting your queue (we will all use training_q or simphony_q
for now) and the number of nodes, and then editing the submission script
mpi_pbs_class.sh - you could start with
cp mpi_pbs_class.sh mpi_pbs_class_0x.sh
and then fill in your email, queue, number of nodes, and the correct location of your program on your account. YOU MUST change the mail address, and I recommend a tx/t2 account; it may not work with gmail. Please also change x to your number!!!
I like to work in a new directory called e.g. test
and move all files except .cshrc there.
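A submission script along the lines of mpi_pbs_class.sh could look like the sketch below. This is a hedged example using common PBS directives, not the actual class script: the resource-request syntax (-l select=...) is PBS Pro style, and the job name, mail address, core count, and executable path are placeholders you must replace with your own:

```shell
#!/bin/sh
#PBS -N hello_class                   # job name (placeholder)
#PBS -q training_q                    # queue: training_q or simphony_q today
#PBS -l select=1:ncpus=12             # one node, 12 cores (TAMNUN nodes have 12)
#PBS -M course0x@tx.technion.ac.il    # YOUR mail - tx/t2 recommended, not gmail
#PBS -m be                            # send mail at beginning and end of the job

cd $PBS_O_WORKDIR                     # run in the directory you submitted from
mpirun -np 12 ./hello_world.ex        # launch the program on the requested cores
```

Compare this sketch against the real mpi_pbs_class.sh from tamprog.tar; the directives there are authoritative for TAMNUN.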
- Submitting the script with the command qsub mpi_pbs_class_0x.sh
Now go to tamclass_c.html - the c example for TAMNUN.
You will see the program files and find out how to monitor the jobs and see some results.
Then look at the tamclass_f.html - FORTRAN examples for TAMNUN, which are run with the same general instructions but a different compilation command.
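For the FORTRAN codes the only change in the steps above is the compiler wrapper. With a typical MPI installation the compile line would be something like the following (the wrapper name mpif90 and the source file name are assumptions - some installations, e.g. Intel MPI, use mpiifort instead):

```shell
mpif90 hello_world.f90 -o hello_world_f.ex   # Fortran analogue of the mpicc line
```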
The final stage is to return to the MPI slides and continue reading further.
The links here are helpful for the question in Targil 5.
There are many other sample programs in the pages about MPI mentioned above.
Websites with projects using MPI include:
- How to submit multiple realizations with a single submission script - Polina Pine
- Spatial division of a sample for Molecular Dynamics - Pavel Bavli
- Spatial division of an Ising model Monte Carlo simulation - Michael Refaelovich
- Parallelization of the equation solver in a Molecular Dynamics simulation - Ofer Filiba
- A new spatial decomposition for a simulation - David Mazvovsky
The first three projects were carried out on NANCO, TAMNUN's predecessor.
The differences are the queue names, queue scripts, and compiler commands, but the same fortran/c code runs on TAMNUN. I have adapted the first and third sites to TAMNUN and can help if asked.
The ambitious can look at
tamclass_g.html - GPU example for TAMNUN, with thanks to Igal Rasin.
Cuda_bonus_question.pdf - GPU/MATLAB example for TAMNUN, with thanks to Adam Levi. This is interactive use, and today it should be combined with submission to the GPU queue. A regular MATLAB submission script is linked here, but a bit more work is needed to combine all this.
This page was last updated in