If any material below contradicts material put out by the HPC
group, theirs is correct - these notes were prepared before TAMNUN was
completely operational.
..............................Joan

These notes will be part of Joan Adler's Computational Physics
notes for her course, to be given in Spring Semester 2012. See the links
here for the updated versions.
--------------------------------------------------------------
Two useful references are the Intel getting started
and reference manual guides.
----------------------------------------------------------------------------
Before attempting this page you should make sure you either know LINUX or
have gone through the files of:

the LINUX preliminary course

If you get stuck following the instructions below, please check the
LINUX preliminary course before asking for help.
----------------------------------------------------------------------------
If your account is called course0xy then your location is /u/course0xy,
and you should replace xy
with your number if you need full paths.
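For example, with the hypothetical number 07 the account is course007 and
full paths begin with /u/course007.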
----------------------------------------------------------------------------
The compilers for real work on TAMNUN are the Intel ones,
mpiicc and mpiifort, but in case of
license issues, the GNU compilers mpif90 and mpicc work for toy problems.
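For example, the C compile lines would look like this (a sketch; the exact
environment setup may differ once TAMNUN is fully configured):
------------------------------------------
mpiicc hello_world.c -o hello_world.ex     # Intel wrapper
mpicc  hello_world.c -o hello_world.ex     # GNU wrapper, for toy problems
------------------------------------------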
----------------------------------------------------------------------------

Sample file hello_world.c

------------------------------------------
/*
 *	Hewlett-Packard Co., High Performance Systems Division
 *
 *	Function:	- example: simple "hello world"
 *
 *	$Revision: 1.1.2.1 $
 */

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
	int		rank, size, len;
	char            name[MPI_MAX_PROCESSOR_NAME];
	int to_wait = 0, sleep_diff = 0, max_limit = 0;
        double sleep_start = 0.0, sleep_now = 0.0;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	MPI_Comm_size(MPI_COMM_WORLD, &size);
	
	MPI_Get_processor_name(name, &len);

	if (argc > 1)
        {
                to_wait = atoi(argv[1]);
        }

	// busy-wait loop for debugging needs
	if (to_wait)
	{	
    		sleep_start=MPI_Wtime();
    		while(1)
    		{
        		max_limit++;
        		if(max_limit > 100000000)
        		{
                		fprintf(stdout,"--------  exit loop, to_wait: %d, \n", to_wait);
                		break;
        		}
                                                                                                                            
        		sleep_now = MPI_Wtime();
        		sleep_diff = (int)(sleep_now - sleep_start);
        		if(sleep_diff >= to_wait)
        		{
                		break;
        		}
    		}
	}

//	if (rank == 0)	// uncomment so that only the first rank prints
		printf ("Hello world! I'm %d of %d on %s\n", rank, size, name);

	MPI_Finalize();
	return 0;
}

------------------------------------------

compile command:
------------------------------------------
mpicc hello_world.c -o hello_world.ex
-----------------------------------------
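The executable takes an optional argument: the number of seconds each rank
should busy-wait before printing (handy for keeping the job visible in qstat
while testing). A sketch, reusing the mpirun line from the script below; the
30 seconds is an arbitrary choice:
------------------------------------------
mpirun -np 48 ./hello_world.ex 30
------------------------------------------
---------------------------------------------------------------------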
A script for running this file in batch mode on 48 cores is
try.sh
(Script prepared by Suzi and Julia)
---------------------------------------------------------------------

Batch submission to the queue nano_h_p is done with the command:

qsub try.sh

You get a response like:
435.admin
-------------------------------------------------------------------
The script reads:
#!/bin/sh
#PBS -q  nano_h_p
#PBS -N job_name

#PBS -l select=4:ncpus=12:mpiprocs=12 -l place=scatter

## comment: use 4 chunks, 12 cpus for 12 mpi tasks on each of 4 nodes
## if scatter is not used, PBS will put 24 mpi processes on each of 2 nodes

. /usr/local/intel/icsxe/2012.0.032/ictvars.sh intel64

## comment: temporary source line for Intel libraries

PBS_O_WORKDIR=$HOME/test
cd $PBS_O_WORKDIR

## comment: working directory definition


mpirun -np 48  ./hello_world.ex

## comment: "np" must equal the number of chunks multiplied by the number of "ncpus"
## for MPI programs "mpiprocs" must equal "ncpus"
--------------------------------------------------------------------
This script submits ./hello_world.ex to the nano_h_p queue,
to run on 48 cores (4 nodes with 12 cores each)
--------------------------------------------------------------------
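As a check of the rule in the script comments ("np" = chunks x ncpus), a
hypothetical 24-core variant on 2 nodes would change only these two lines:
------------------------------------------
#PBS -l select=2:ncpus=12:mpiprocs=12 -l place=scatter
mpirun -np 24 ./hello_world.ex
------------------------------------------
--------------------------------------------------------------------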
"435" is the job number and
you can view its progress with qstat
to get a response like this one (this example is for a script called
pbs_mpi.sh submitted to the queue workq):

Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
25.admin          test             gkoren            00:01:00 E workq           
27.admin          test             gkoren            00:01:00 E workq           
29.admin          pbs_mpi.sh       phr76ja           00:00:00 R workq           
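(In the S column, R means the job is running and E that it is exiting,
i.e. finished or finishing.)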
------------------------------------------------------------------
When your 435 job is finished you will get 2 new files,
job_name.e435 and job_name.o435
------------------------------------------------------------------
The results for the try.sh script can be viewed on the screen with
more job_name.o435
and you can view these results here.
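Each rank prints one line, in arbitrary order; a sketch of the output (the
node names here are made up for illustration):
------------------------------------------
Hello world! I'm 0 of 48 on n001
Hello world! I'm 13 of 48 on n002
...
------------------------------------------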
-------------------------------------------------
A help page for the graphics processors is here.
This page was prepared with help from Dr. Igal Raisin.
-------------------------------------------------
Revised up to here.
The Fortran part still needs to be revised for TAMNUN.

There are Fortran commands in the file
tamlearn_fortran.html,
still in preparation on 23/5/12.
--------------------------------------------------