OK - grand-daughter gone home and Beowulf restored to two working nodes, so here goes.

The following assumes that you have followed the U. of Southampton instructions - including installing the Fortran compiler - and got as far as running the cpi program on two nodes.

I'm focussing on Fortran in this post because I've been working more with that language than C. I'll do a separate post for C later.

1) Go to the folder where you downloaded and extracted the mpich2 sources from Argonne.

2) Go to the extracted folder. In my case, this is mpich2-1.5, which is a later version than Southampton's.

3) Go to the "examples" folder within this. Here you will find the source program for various versions of cpi, in C and Fortran, and many other source code examples. (The Fortran sources are in sub-directories f77 and f90.)

Here is the code for pi3f90.f90, in case you can't find it:

```
!*****************************************************************
! pi3f90.f90 - compute pi by integrating f(x) = 4/(1 + x**2)
!
! (C) 2001 by Argonne National Laboratory.
! See COPYRIGHT in top-level directory.
!
! Each node:
!  1) receives the number of rectangles used in the approximation.
!  2) calculates the areas of its rectangles.
!  3) synchronizes for a global summation.
! Node 0 prints the result.
!
! Variables:
!
!  pi      the calculated result
!  n       number of points of integration.
!  x       midpoint of each rectangle's interval
!  f       function to integrate
!  sum,pi  area of rectangles
!  tmp     temporary scratch space for global summation
!  i       do loop index
!****************************************************************************
program main
  use mpi
  double precision PI25DT
  parameter (PI25DT = 3.141592653589793238462643d0)
  double precision mypi, pi, h, sum, x, f, a
  integer n, myid, numprocs, i, rc, ierr
  ! statement function to integrate
  f(a) = 4.d0 / (1.d0 + a*a)

  call MPI_INIT( ierr )
  call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
  call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
  print *, 'Process ', myid, ' of ', numprocs, ' is alive'

  do
     if ( myid .eq. 0 ) then
        write(6,98)
98      format('Enter the number of intervals: (0 quits)')
        read(5,99) n
99      format(i10)
     endif
     ! broadcast n from node 0 to all the other nodes
     call MPI_BCAST(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
     ! check for quit signal
     if ( n .le. 0 ) exit
     ! calculate the interval size
     h = 1.0d0/n
     sum = 0.0d0
     do i = myid+1, n, numprocs
        x = h * (dble(i) - 0.5d0)
        sum = sum + f(x)
     enddo
     mypi = h * sum
     ! collect all the partial sums
     call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0, &
          MPI_COMM_WORLD,ierr)
     ! node 0 prints the answer.
     if (myid .eq. 0) then
        write(6, 97) pi, abs(pi - PI25DT)
97      format(' pi is approximately: ', F18.16, &
             ' Error is: ', F18.16)
     endif
  enddo

  call MPI_FINALIZE(rc)
  stop
end
```
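As a quick sanity check of the maths (nothing to do with MPI itself), the same midpoint rule can be run serially from the shell. This awk one-liner mirrors the Fortran loop with n = 1000 intervals:

```shell
# Serial midpoint rule, mirroring the Fortran: sum f(h*(i-0.5))*h
# for i = 1..n, with f(x) = 4/(1+x*x); n = 1000 intervals here.
awk 'BEGIN {
    n = 1000; h = 1.0 / n; s = 0.0
    for (i = 1; i <= n; i++) { x = h * (i - 0.5); s += 4.0 / (1.0 + x*x) }
    printf "pi is approximately: %.6f\n", s * h
}'
# prints: pi is approximately: 3.141593
```

The parallel version simply deals these n rectangles out round-robin - rank myid takes i = myid+1, myid+1+numprocs, and so on - then MPI_REDUCE adds the partial sums on node 0.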

4) Copy the source code for pi3f90.f90 to a folder of your choice. You can compile from the original folder, of course, but I prefer to make a copy to tinker with, while preserving the original.

5) Don't waste time trying to compile with the standard Fortran compiler - it won't work, because plain gfortran doesn't know where the MPI module and libraries live. (This misunderstanding cost me a day or two!) mpif90 is a wrapper that invokes the underlying compiler with those paths added - running mpif90 -show prints the exact command it would issue.

6) cd to the mpi_testing folder that was set up during the Southampton installation process.

7) Do: mpif90 -o pi3f90 ~/*(your-directory)*/pi3f90.f90

8) A return to your prompt without error messages indicates that the compiler found the MPI libraries and the program compiled successfully.

9) Edit machinefile to contain the ip address of your master node.
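For this single-node test, machinefile is just a plain text file with one host per line. A sketch - 192.168.1.10 is a made-up address, so substitute your master node's actual IP:

```shell
# Create a one-line machinefile; the address is an example only.
cat > machinefile <<'EOF'
192.168.1.10
EOF
cat machinefile
```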

10) Do: mpiexec -f machinefile -n 1 ~/mpi_testing/pi3f90

11) Enter the number of intervals when prompted and it runs!
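For reference, a successful single-node run looks roughly like this - the exact spacing of the Fortran list-directed output will differ, and the digits shown are elided rather than real results:

```
Process 0 of 1 is alive
Enter the number of intervals: (0 quits)
100
 pi is approximately: 3.14159... Error is: 0.00000...
0
```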

12) To run on multiple nodes, you need to copy the compiled executable to your various RPi's. Assuming that each has the same folder structure and SSH is running, do:

scp pi3f90 (*node's ip address*):/home/pi/mpi_testing

(You could also copy over the source code, but I prefer to keep a single copy on the master node - with local backup - to manage version control of the code.)
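If you have more than one worker, a small loop saves retyping the scp line. This is a dry-run sketch - the addresses are invented, and the echo just prints each command rather than running it:

```shell
# Invented worker addresses - replace with your own nodes' IPs.
NODES="192.168.1.11 192.168.1.12"
for ip in $NODES; do
    # Drop the leading echo to perform the copies for real.
    echo "scp pi3f90 ${ip}:/home/pi/mpi_testing"
done
```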

13) Edit machinefile appropriately and set n in the mpiexec command line to calculate pi with parallel processes!
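Putting step 13 together: with, say, two nodes, machinefile gains a line per host and -n rises to match. Both addresses below are examples only, and the mpiexec line is echoed here as a dry run - drop the echo on the real cluster:

```shell
# Two-host machinefile - both addresses are examples only.
cat > machinefile <<'EOF'
192.168.1.10
192.168.1.11
EOF
# Echoed as a dry run; remove the echo to launch for real.
echo 'mpiexec -f machinefile -n 2 ~/mpi_testing/pi3f90'
```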

Let me know if the foregoing procedure works and I'll replicate it for the C version of the program - and also explain what I've learned about MPICH2 procedure calls!

Regards,

Alan.

IT Background: Honeywell H2000 ... CA Naked Mini ... Sinclair QL ... WinTel ... Linux ... Raspberry Pi.