SGI_MPI training material
description
Transcript of SGI_MPI training material
-
MPI
sgi 2003.7
sgi
-
1.1
sgi
-
Workload characteristics (compute, data, I/O):
Web serving: small, integrated system
Media streaming: access storage and networking
Signal processing: networking and compute
Database/CRM/ERP: storage
Genomics: compute cycles
Traditional supercomputer: compute, networking, storage
sgi
-
sgi
-
sgi
1.
2.
3.
-
1.2
MPP (Massively Parallel Processors): distributed memory, individual address spaces
NUMA (Non-Uniform Memory Access): distributed memory, shared address space
SMP (Symmetric Multiprocessor): centralized memory, shared address space
sgi
-
Coherent shared memory: CPU 1 and CPU 2 access one memory; easy to program, hard to scale hardware.
Distributed memory: each CPU has its own memory; hard to program, easy to scale hardware.
sgi
-
[Diagram: SMP node, four processors each with a cache, connected by a bus or crossbar switch to shared memory and I/O]
sgi
-
[Diagram: distributed-memory system, several nodes (processor, cache, bus, memory, I/O) joined by an interconnection network]
-
[Diagram: two SGI nodes, each with four MIPS processors, a Bedrock ASIC, XIO+ and 8 GB physical memory, joined by NUMA 3 links into 16 GB of shared physical memory]
sgi
-
sgi
-
sgi
-
1.3
[Diagram: a program with serial sections S1 and S2 and parallel sections P1 to P4.
Thread model: a single process runs S1 and S2, and P1 to P4 execute as threads.
Message-passing model: process0 to process3 each run S1, one of P1 to P4, and S2.]
-
sgi
-
SPMD: all processes run the same executable a.out.
MPMD (master/slave): processes run different executables p1, p2, p3.
MPMD (coupled analysis): several distinct programs cooperate on one job.
-
A process (serial version, a.out):
  Read array a() from the input file
  Set is=1 and ie=6
  Process from a(is) to a(ie)
  Write array a() to the output file
-
Parallel version 1: every rank computes its bounds from a formula. Each of processes 0, 1, 2 runs the same a.out:
  Read array a() from the input file
  Get my rank
  is = 2*rank+1, ie = 2*rank+2
  Process from a(is) to a(ie)
  Gather the result to process 0
  If rank = 0 then write array a() to the output file
-
Parallel version 2: the bounds are selected by explicit rank tests. Each process runs the same a.out:
  Read array a() from the input file
  Get my rank
  If (rank.eq.0) is=1, ie=2
  If (rank.eq.1) is=3, ie=4
  If (rank.eq.2) is=5, ie=6
  Process from a(is) to a(ie)
  Gather the result to process 0
  If rank = 0 then write array a() to the output file
-
1.4 Parallel programming interfaces: HPF, OpenMP, MPI
sgi
-
1.5
-
sgi
-
-
sgi
-
MPI (Message Passing Interface) is a standard library interface for message-passing programs, first released in May 1994.
sgi
-
MPI1.
2.
3.
4.
-
MPI programs may be written in SPMD or MPMD style, and MPI defines bindings for both C and Fortran.
sgi
-
MPI-1 was released in 1994; MPI-2 followed in 1997.
sgi
-
1.
2.
3.
4.
5.
6.
7.
sgiMPI
-
sgi
Type                     | Subroutines                                   | Number
Point-to-Point           | MPI_SEND, MPI_RECV, MPI_WAIT, ...             | 35
Collective Communication | MPI_BCAST, MPI_GATHER, MPI_REDUCE, ...        | 30
Derived Data Type        | MPI_TYPE_CONTIGUOUS, MPI_TYPE_COMMIT, ...     | 21
Topology                 | MPI_CART_CREATE, MPI_GRAPH_CREATE, ...        | 16
Communicator             | MPI_COMM_SIZE, MPI_COMM_RANK, ...             | 17
Process Group            | MPI_GROUP_SIZE, MPI_GROUP_RANK, ...           | 13
Environment Management   | MPI_INIT, MPI_FINALIZE, MPI_ABORT, ...        | 18
-
Six basic MPI routines:
MPI_INIT: initialize the MPI environment
MPI_FINALIZE: terminate the MPI environment
MPI_COMM_SIZE: number of processes
MPI_COMM_RANK: rank of the calling process
MPI_SEND: send a message
MPI_RECV: receive a message
sgi
-
2.1 A first MPI program

#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int myrank;
    MPI_Status status;
    char msg[20];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        strcpy(msg, "Hello there");
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        MPI_Recv(msg, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("Received message = %s\n", msg);
    }
    MPI_Finalize();
}
sgi
-
2.2
sgi
-
2.2.1
-
2.2.2
-
2.2.3
-
2.2.4
-
2.2.5
-
2.2.6
-
2.2.7
-
2.2.8
-
2.2.9
-
2.2.10
sgi
-
2.3 MPI program conventions
sgi
-
2.3.1 Header files
Every MPI program must include the MPI header file:
  C:       #include "mpi.h"
  Fortran: include 'mpif.h'
sgi
-
2.3.2 Format of MPI calls
C (names are case-sensitive):
  rc = MPI_Xxxxx(parameter, ...)
  e.g. rc = MPI_Bsend(&buf, count, type, dest, tag, comm)
  rc equals MPI_SUCCESS on success.
Fortran (names are case-insensitive):
  CALL MPI_XXXXX(parameter, ..., ierr)
  e.g. CALL MPI_BSEND(buf, count, type, dest, tag, comm, ierr)
  ierr equals MPI_SUCCESS on success.
sgi
-
In C, MPI functions return MPI_SUCCESS or an error code defined in mpi.h.
In Fortran, the ierr argument returns MPI_SUCCESS or an error code defined in mpif.h.
sgi
-
2.3.3
sgi
-
2.3.4 Communicators
An MPI communicator identifies a group of communicating processes; the predefined communicator MPI_COMM_WORLD contains all processes.
sgi
-
2.3.5 Rank
Within a communicator each process has a unique ID, its rank, numbered from 0. Ranks are typically used to divide the work:

if (rank == 0) {
    /* work of process 0 */
} else if (rank == 1) {
    /* work of process 1 */
}
sgi
-
MPI environment management routines
sgi
-
3.1 Environment management routines
MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Abort, MPI_Get_processor_name, MPI_Initialized, MPI_Wtime, MPI_Wtick, MPI_Finalize
sgi
-
3.1.1 MPI_Init()
Initializes the MPI execution environment; it must be called before any other MPI routine.
C:       int MPI_Init(int *argc, char ***argv)
Fortran: MPI_INIT(ierr)
         INTEGER ierr
-
3.1.2 MPI_Finalize()
Terminates the MPI environment; every program that calls MPI_Init must call MPI_Finalize as its last MPI routine.
C:       int MPI_Finalize(void)
Fortran: MPI_FINALIZE(ierr)
         INTEGER ierr
sgi
-
      program init
      include 'mpif.h'
      integer ierr
      call MPI_INIT(ierr)
      print *, 'hello world'
      call MPI_FINALIZE(ierr)
      end
-
3.1.3 MPI_Comm_size()
Returns the number of processes in a communicator (e.g. MPI_COMM_WORLD).
MPI_Comm_size(comm, size)
  IN  comm  communicator
  OUT size  number of processes in comm
C:       int MPI_Comm_size(MPI_Comm comm, int *size)
Fortran: MPI_COMM_SIZE(comm, size, ierr)
         INTEGER comm, size, ierr
-
3.1.4 MPI_Comm_rank()
Returns the rank of the calling process, from 0 to size-1.
MPI_Comm_rank(comm, rank)
  IN  comm  communicator
  OUT rank  rank of the calling process
C:       int MPI_Comm_rank(MPI_Comm comm, int *rank)
Fortran: MPI_COMM_RANK(comm, rank, ierr)
         INTEGER comm, rank, ierr
sgi
-
Sample output from two runs (the ordering is nondeterministic):
  0: nprocs = 3 myrank = 0
  1: nprocs = 3 myrank = 1
  2: nprocs = 3 myrank = 2

  1: nprocs = 3 myrank = 1
  2: nprocs = 3 myrank = 2
  0: nprocs = 3 myrank = 0
-
3.1.5 MPI_Abort()
Terminates all MPI processes associated with a communicator.
C:       int MPI_Abort(MPI_Comm comm, int errorcode)
Fortran: MPI_ABORT(comm, errorcode, ierr)
sgi
-
3.1.6 MPI_Wtime()
Returns the elapsed wall-clock time in seconds on the calling processor.
C:       double MPI_Wtime(void)
Fortran: DOUBLE PRECISION MPI_WTIME()
sgi
-
3.2 Example MPI programs
C and Fortran versions of the same program follow.
sgi
-
3.2.1 C example

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, rc;

    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Number of tasks= %d My rank= %d\n", numtasks, rank);
    /******* do some work *******/
    MPI_Finalize();
}
sgi
-
3.2.2 Fortran example

      program simple
      include 'mpif.h'
      integer numtasks, rank, ierr, rc
      call MPI_INIT(ierr)
      if (ierr .ne. MPI_SUCCESS) then
         print *, 'Error starting MPI program. Terminating.'
         call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
      end if
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      print *, 'Number of tasks=', numtasks, ' My rank=', rank
C     ****** do some work ******
      call MPI_FINALIZE(ierr)
      end
sgi
-
MPI point-to-point communication
sgi
-
Standard mode:
MPI_SEND, MPI_RECV, MPI_ISEND, MPI_IRECV
sgi
-
Buffered mode:
MPI_BSEND, MPI_IBSEND
sgi
-
Synchronous mode:
MPI_SSEND, MPI_ISSEND
sgi
-
Ready mode:
MPI_RSEND, MPI_IRSEND
sgi
-
MPI
MPI
sgi
-
4.1 Message-passing call arguments: buffer, count, type, dest, source, tag, comm, status, request
sgi
-
4.1.1 Standard MPI send/receive calls
Blocking send:        MPI_Send(buffer, count, type, dest, tag, comm)
Non-blocking send:    MPI_Isend(buffer, count, type, dest, tag, comm, request)
Blocking receive:     MPI_Recv(buffer, count, type, source, tag, comm, status)
Non-blocking receive: MPI_Irecv(buffer, count, type, source, tag, comm, request)
sgi
-
4.1.2 buffer
The application buffer: the address of the data to be sent or received.
sgi
-
4.1.3 count
The number of data elements (of the given type) to be sent or received.
sgi
-
4.1.4 type
One of the predefined MPI data types (including MPI_BYTE and MPI_PACKED); C and Fortran have separate sets.
sgi
-
MPI C types                              | MPI Fortran types
MPI_CHAR (signed char)                   | MPI_CHARACTER (character(1))
MPI_SHORT (signed short int)             |
MPI_INT (signed int)                     | MPI_INTEGER (integer)
MPI_LONG (signed long int)               |
MPI_UNSIGNED_CHAR (unsigned char)        |
MPI_UNSIGNED_SHORT (unsigned short int)  |
MPI_UNSIGNED (unsigned int)              |
MPI_UNSIGNED_LONG (unsigned long int)    |
MPI_FLOAT (float)                        | MPI_REAL (real)
MPI_DOUBLE (double)                      | MPI_DOUBLE_PRECISION (double precision)
MPI_LONG_DOUBLE (long double)            |
                                         | MPI_COMPLEX (complex)
                                         | MPI_LOGICAL (logical)
MPI_BYTE (8 binary digits)               | MPI_BYTE (8 binary digits)
MPI_PACKED (data packed or unpacked with MPI_Pack()/MPI_Unpack()) | MPI_PACKED
sgi
-
MPI Data Type (Fortran bindings)   | Description
MPI_INTEGER1                       | 1-byte integer
MPI_INTEGER2                       | 2-byte integer
MPI_INTEGER4, MPI_INTEGER          | 4-byte integer
MPI_REAL4, MPI_REAL                | 4-byte floating point
MPI_REAL8, MPI_DOUBLE_PRECISION    | 8-byte floating point
MPI_REAL16                         | 16-byte floating point
MPI_COMPLEX8, MPI_COMPLEX          | 4-byte float real, 4-byte float imaginary
MPI_COMPLEX16, MPI_DOUBLE_COMPLEX  | 8-byte float real, 8-byte float imaginary
-
MPI Data Type (Fortran bindings, continued) | Description
MPI_COMPLEX32                      | 16-byte float real, 16-byte float imaginary
MPI_LOGICAL1                       | 1-byte logical
MPI_LOGICAL2                       | 2-byte logical
MPI_LOGICAL4, MPI_LOGICAL          | 4-byte logical
MPI_CHARACTER                      | 1-byte character
MPI_BYTE, MPI_PACKED               | N/A
-
1)
2)
3)
-
Example: the receive count (15) may be larger than the message actually sent (10).

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_RECV(b(1), 15, MPI_REAL, 0, tag, comm, status, ierr)
      END IF
-
4.1.5 dest
The rank of the destination process.
sgi
-
4.1.6 source
The rank of the sending process; MPI_ANY_SOURCE matches a message from any sender.
sgi
-
4.1.7 tag
A non-negative message identifier; MPI guarantees at least the range 0-32767. A receive may use MPI_ANY_TAG to match any tag.
sgi
-
4.1.8 comm
The communicator; commonly MPI_COMM_WORLD.
sgi
-
4.1.9 status
Carries the source and tag of the received message. In C it is a structure of type MPI_Status with fields status.MPI_SOURCE and status.MPI_TAG; in Fortran it is an integer array of size MPI_STATUS_SIZE with elements status(MPI_SOURCE) and status(MPI_TAG). The number of elements actually received is obtained with MPI_Get_count.
sgi
-
4.1.10 request
A handle identifying a non-blocking operation: MPI_Request in C, an INTEGER in Fortran.
sgi
-
4.2 Blocking communication routines
MPI_Send, MPI_Recv, MPI_Ssend, MPI_Bsend, MPI_Rsend
MPI_Buffer_attach, MPI_Buffer_detach
MPI_Sendrecv
MPI_Wait, MPI_Waitany, MPI_Waitall, MPI_Waitsome
MPI_Probe
sgi
-
4.2.1 MPI_Send()
Blocking standard-mode send.
MPI_SEND(buf, count, datatype, dest, tag, comm)
  IN buf       starting address of the send buffer
  IN count     number of elements to send
  IN datatype  data type of each element
  IN dest      rank of the destination process
  IN tag       message tag
  IN comm      communicator
sgi
-
C:       int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Fortran: MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
sgi
-
4.2.2 MPI_Recv()
Blocking receive.
MPI_RECV(buf, count, datatype, source, tag, comm, status)
  OUT buf       starting address of the receive buffer
  IN  count     maximum number of elements to receive
  IN  datatype  data type of each element
  IN  source    rank of the sending process
  IN  tag       message tag
  IN  comm      communicator
  OUT status    status object
sgi
-
C:       int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
Fortran: MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
         INTEGER STATUS(MPI_STATUS_SIZE), IERROR
sgi
-
MPI_SEND and MPI_RECV sgi
-
In C, MPI_Status is a structure containing the three fields MPI_SOURCE, MPI_TAG, and MPI_ERROR:
  status.MPI_SOURCE
  status.MPI_TAG
  status.MPI_ERROR
In Fortran, status is an integer array of size MPI_STATUS_SIZE, indexed by the constants MPI_SOURCE, MPI_TAG, and MPI_ERROR:
  status(MPI_SOURCE)
  status(MPI_TAG)
  status(MPI_ERROR)
sgi
-
MPI_GET_COUNT(status, datatype, count)
  IN  status    status returned by the receive
  IN  datatype  data type of each receive-buffer element
  OUT count     number of elements actually received
MPI_GET_COUNT returns the actual length of the message received, e.g. by MPI_RECV.
sgi
-
4.3 Blocking examples
C and Fortran versions of a ping-pong exchange between two processes.
sgi
-
4.3.1 C example

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, dest, source, rc, tag = 1;
    char inmsg, outmsg = 'x';
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        dest = 1;
        source = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    } else if (rank == 1) {
        dest = 0;
        source = 0;
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();
}
sgi
-
4.3.2 Fortran example

      program ping
      include 'mpif.h'
      integer numtasks, rank, dest, source, tag, ierr
      integer stat(MPI_STATUS_SIZE)
      character inmsg, outmsg
      tag = 1
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      if (rank .eq. 0) then
         dest = 1
         source = 1
         outmsg = 'x'
         call MPI_SEND(outmsg, 1, MPI_CHARACTER, dest, tag,
     &                 MPI_COMM_WORLD, ierr)
         call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag,
     &                 MPI_COMM_WORLD, stat, ierr)
      else if (rank .eq. 1) then
         dest = 0
         source = 0
         call MPI_RECV(inmsg, 1, MPI_CHARACTER, source, tag,
     &                 MPI_COMM_WORLD, stat, ierr)
         call MPI_SEND(outmsg, 1, MPI_CHARACTER, dest, tag,
     &                 MPI_COMM_WORLD, ierr)
      endif
      call MPI_FINALIZE(ierr)
      end
sgi
-
4.4 Deadlock
Blocking sends and receives must be ordered so that no cycle of processes waits on each other.
sgi
-
A safe exchange: the send and receive are ordered oppositely on the two ranks.

      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
      END IF
sgi
-
Both ranks receive first: each blocks in MPI_RECV waiting for the other's send, so this always deadlocks.

      IF (rank .EQ. 0) THEN
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
      END IF
sgi
-
Both ranks send first: this works only if the system buffers the messages, and deadlocks otherwise.

      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
         CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
      END IF
sgi
-
4.5 Non-blocking communication routines
MPI_Isend, MPI_Irecv
MPI_Issend, MPI_Ibsend, MPI_Irsend
MPI_Test, MPI_Testany, MPI_Testall, MPI_Testsome
MPI_Iprobe
sgi
-
4.5.1 MPI_Isend()
Starts a non-blocking send; completion is checked with MPI_Wait or MPI_Test.
MPI_ISEND(buf, count, datatype, dest, tag, comm, request)
  IN buf, count, datatype, dest, tag, comm
  OUT request
-
sgi
C:       int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
Fortran: MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
-
4.5.2 MPI_Irecv()
Starts a non-blocking receive; completion is checked with MPI_Wait or MPI_Test.
MPI_IRECV(buf, count, datatype, source, tag, comm, request)
  OUT buf
  IN  count, datatype, source, tag, comm
  OUT request
-
C:       int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
Fortran: MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR
sgi
-
sgi
-
4.5.3 MPI_Wait and MPI_Test
Completion of a non-blocking operation is checked with MPI_WAIT (blocking) or MPI_TEST (non-blocking):
MPI_WAIT(request, status)
  INOUT request
  OUT   status
MPI_TEST(request, flag, status)
  INOUT request
  OUT   flag
  OUT   status
-
4.5.4 Completing any, all, or some requests
MPI_WAITANY, MPI_TESTANY, MPI_WAITALL, MPI_TESTALL, MPI_WAITSOME, MPI_TESTSOME
MPI_WAITANY(count, array_of_requests, index, status)
  IN    count
  INOUT array_of_requests
  OUT   index
  OUT   status
C:       int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index, MPI_Status *status)
Fortran: MPI_WAITANY(COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)
         INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX, STATUS(MPI_STATUS_SIZE), IERROR
-
MPI_WAITALL(count, array_of_requests, array_of_statuses)
  IN    count
  INOUT array_of_requests
  OUT   array_of_statuses
C:       int MPI_Waitall(int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses)
Fortran: MPI_WAITALL(COUNT, ARRAY_OF_REQUESTS, ARRAY_OF_STATUSES, IERROR)
         INTEGER COUNT, ARRAY_OF_REQUESTS(*)
         INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR
MPI_WAITALL blocks until all the requests complete. The i-th entry of array_of_statuses is set to the status of the i-th operation, and each completed request is set to MPI_REQUEST_NULL.
-
If one or more operations fail, MPI_WAITALL returns MPI_ERR_IN_STATUS, and the error field of each status is set to MPI_SUCCESS, a specific error code, or MPI_ERR_PENDING for a request that has not completed. MPI_WAITALL returns MPI_SUCCESS only when every operation succeeded.
-
MPI_WAITSOME(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)
  IN    incount            length of array_of_requests
  INOUT array_of_requests
  OUT   outcount
  OUT   array_of_indices
  OUT   array_of_statuses
MPI_WAITSOME waits until at least one request completes; outcount gives the number of completed operations, whose indices are returned in the first outcount entries of array_of_indices and whose statuses in the first outcount entries of array_of_statuses.
-
4.6 Non-blocking examples
C and Fortran versions of a ring exchange with nearest neighbors.
sgi
-
4.6.1 C example

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = numtasks - 1;
    if (rank == (numtasks - 1)) next = 0;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, stats);

    MPI_Finalize();
}
sgi
-
4.6.2 Fortran example

      program ringtopo
      include 'mpif.h'
      integer numtasks, rank, next, prev, buf(2), tag1, tag2, ierr
      integer stats(MPI_STATUS_SIZE,4), reqs(4)
      tag1 = 1
      tag2 = 2
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      prev = rank - 1
      next = rank + 1
      if (rank .eq. 0) then
         prev = numtasks - 1
      endif
      if (rank .eq. numtasks - 1) then
         next = 0
      endif
      call MPI_IRECV(buf(1), 1, MPI_INTEGER, prev, tag1,
     &               MPI_COMM_WORLD, reqs(1), ierr)
      call MPI_IRECV(buf(2), 1, MPI_INTEGER, next, tag2,
     &               MPI_COMM_WORLD, reqs(2), ierr)
      call MPI_ISEND(rank, 1, MPI_INTEGER, prev, tag2,
     &               MPI_COMM_WORLD, reqs(3), ierr)
      call MPI_ISEND(rank, 1, MPI_INTEGER, next, tag1,
     &               MPI_COMM_WORLD, reqs(4), ierr)
      call MPI_WAITALL(4, reqs, stats, ierr)
      call MPI_FINALIZE(ierr)
      end
sgi
-
4.7 Probe and cancel
MPI_PROBE and MPI_IPROBE check an incoming message without receiving it, returning its envelope in status.
MPI_IPROBE(source, tag, comm, flag, status)
  IN  source
  IN  tag
  IN  comm
  OUT flag
  OUT status
C:       int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
Fortran: MPI_IPROBE(SOURCE, TAG, COMM, FLAG, STATUS, IERROR)
         LOGICAL FLAG
         INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
sgi
-
MPI_IPROBE returns flag = true if a message matching source, tag and comm can be received, and sets status as a matching receive would; otherwise it returns flag = false and status is undefined. source may be MPI_ANY_SOURCE and tag may be MPI_ANY_TAG, but comm cannot be wildcarded.
sgi
-
C:       int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
Fortran: MPI_PROBE(SOURCE, TAG, COMM, STATUS, IERROR)
         INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
MPI_PROBE behaves like MPI_IPROBE except that it blocks until a matching message arrives.
sgi
-
Probing to choose the receive buffer by source:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      a = rank*10.0
      IF (rank.EQ.0) THEN
         CALL MPI_SEND(a, 1, MPI_REAL, 2, 0, comm, ierr)
      ELSE IF (rank.EQ.1) THEN
         CALL MPI_SEND(a, 1, MPI_REAL, 2, 0, comm, ierr)
      ELSE IF (rank.EQ.2) THEN
         DO i = 1, 2
            CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
            IF (status(MPI_SOURCE) .EQ. 0) THEN
100            CALL MPI_RECV(a, 1, MPI_REAL, 0, 0, comm, status, ierr)
            ELSE
200            CALL MPI_RECV(b, 1, MPI_REAL, 1, 0, comm, status, ierr)
            END IF
         END DO
         print *, rank, 'a=', a, 'b=', b
      END IF

Output: 2 a=0.0 b=10.0
-
An incorrect variant: the receives use MPI_ANY_SOURCE, so a receive may match a different message than the one just probed.

      CALL MPI_COMM_RANK(comm, rank, ierr)
      a = rank*10.0
      IF (rank.EQ.0) THEN
         CALL MPI_SEND(a, 1, MPI_REAL, 2, 0, comm, ierr)
      ELSE IF (rank.EQ.1) THEN
         CALL MPI_SEND(a, 1, MPI_REAL, 2, 0, comm, ierr)
      ELSE IF (rank.EQ.2) THEN
         DO i = 1, 2
            CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
            IF (status(MPI_SOURCE) .EQ. 0) THEN
100            CALL MPI_RECV(a, 1, MPI_REAL, MPI_ANY_SOURCE, 0, comm,
     &                       status, ierr)
            ELSE
200            CALL MPI_RECV(b, 1, MPI_REAL, MPI_ANY_SOURCE, 0, comm,
     &                       status, ierr)
            END IF
         END DO
         print *, rank, 'a=', a, 'b=', b
      END IF

Possible outputs: 2 a=0.0 b=10.0, or 2 a=10.0 b=0.0
-
Because the receives at labels 100 and 200 specify MPI_ANY_SOURCE, each can match a message from either sender, not necessarily the one just probed. To receive exactly the probed message, call MPI_RECV with the source and tag values returned in status by MPI_PROBE.
sgi
-
MPI_CANCEL(request)
  IN request
C:       int MPI_Cancel(MPI_Request *request)
Fortran: MPI_CANCEL(REQUEST, IERROR)
         INTEGER REQUEST, IERROR
MPI_CANCEL marks a pending non-blocking operation for cancellation. The operation must still be completed with MPI_WAIT or MPI_TEST; MPI_TEST_CANCELLED then reports from status whether the cancellation succeeded.
sgi
-
MPI_TEST_CANCELLED(status, flag)
  IN  status
  OUT flag
C:       int MPI_Test_cancelled(MPI_Status *status, int *flag)
Fortran: MPI_TEST_CANCELLED(STATUS, FLAG, IERROR)
         LOGICAL FLAG
         INTEGER STATUS(MPI_STATUS_SIZE), IERROR
MPI_TEST_CANCELLED returns flag = true if the operation described by status was successfully cancelled, otherwise flag = false.
sgi
-
Example use of MPI_CANCEL:

MPI_Comm_rank(comm, &rank);
if (rank == 0)
    MPI_Send(a, 1, MPI_CHAR, 1, tag, comm);
else if (rank == 1) {
    MPI_Irecv(a, 1, MPI_CHAR, 0, tag, comm, &req);
    MPI_Cancel(&req);
    MPI_Wait(&req, &status);
    MPI_Test_cancelled(&status, &flag);
    if (flag) { /* cancel succeeded -- need to post new receive */
        MPI_Recv(a, 1, MPI_CHAR, 0, tag, comm, &status);
    }
}
sgi
-
sgi
-
4.8
sgi
-
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
CALL MPI_ISEND(sendbuf, count, MPI_REAL, 1, tag, comm, req, ierr)
CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
CALL MPI_WAIT(req, status, ierr)
ELSE ! rank.EQ.1
CALL MPI_ISEND(sendbuf, count, MPI_REAL, 0, tag, comm, req, ierr)
CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
CALL MPI_WAIT(req, status, ierr)
END IF
sgi
-
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
CALL MPI_SEND(sendbuf1, count, MPI_REAL, 1, 1, comm, ierr)
CALL MPI_SEND(sendbuf2, count, MPI_REAL, 1, 2, comm, ierr)
ELSE ! rank.EQ.1
CALL MPI_IRECV(recvbuf2, count, MPI_REAL, 0, 2, comm, req1, ierr)
CALL MPI_IRECV(recvbuf1, count, MPI_REAL, 0, 1, comm, req2, ierr)
CALL MPI_WAIT(req1, status, ierr)
CALL MPI_WAIT(req2, status, ierr)
END IF
sgi
-
Here rank 1 posts its receives in the opposite order from rank 0's sends. Because the two messages carry different tags they match unambiguously, and MPI's non-overtaking rule between a sender-receiver pair guarantees that the receive with tag 1 obtains sendbuf1 from rank 0. The program is therefore safe regardless of posting order.
-
5
Three additional send modes: buffered, synchronous, and ready.
-
4.7.1 Buffered mode
MPI_BSEND(buf, count, datatype, dest, tag, comm)
  IN buf, count, datatype, dest, tag, comm
C:       int MPI_Bsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Fortran: MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
sgi
-
sgi
-
MPI_BUFFER_ATTACH(buffer, size)
  IN buffer
  IN size
C:       int MPI_Buffer_attach(void *buffer, int size)
Fortran: MPI_BUFFER_ATTACH(BUFFER, SIZE, IERROR)
         <type> BUFFER(*)
         INTEGER SIZE, IERROR
MPI_BUFFER_ATTACH provides MPI with a buffer in the user's memory to be used for buffered-mode sends.
sgi
-
MPI_BUFFER_DETACH(buffer, size)
  OUT buffer
  OUT size
C:       int MPI_Buffer_detach(void *buffer, int *size)
Fortran: MPI_BUFFER_DETACH(BUFFER, SIZE, IERROR)
         <type> BUFFER(*)
         INTEGER SIZE, IERROR
MPI_BUFFER_DETACH detaches the current buffer from MPI, blocking until all buffered messages have been transmitted.
sgi
-
#define BUFFSIZE 10000
int size;
char *buff;
buff = (char *)malloc(BUFFSIZE);
MPI_Buffer_attach(buff, BUFFSIZE);
/* a buffer of 10000 bytes can now be used by MPI_Bsend */
MPI_Buffer_detach(&buff, &size);
/* Buffer size reduced to zero */
MPI_Buffer_attach(buff, size);
/* Buffer of 10000 bytes available again */
sgi
-
4.7.2 Synchronous mode
MPI_SSEND(buf, count, datatype, dest, tag, comm)
  IN buf, count, datatype, dest, tag, comm
C:       int MPI_Ssend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Fortran: MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
-
4.7.3 Ready mode
MPI_RSEND(buf, count, datatype, dest, tag, comm)
  IN buf, count, datatype, dest, tag, comm
C:       int MPI_Rsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
Fortran: MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
         <type> BUF(*)
         INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
sgi
-
Collective communication involves all processes in a communicator (e.g. MPI_COMM_WORLD).
Operations: barrier, broadcast, scatter, gather, alltoall, reduction.
sgi
-
Collective operations carry no message tag and must be called by every process in the communicator.
sgi
-
5.1 Collective communication routines
MPI_Barrier; MPI_Bcast, MPI_Scatter; MPI_Gather, MPI_Allgather; MPI_Reduce, MPI_Allreduce; MPI_Reduce_scatter; MPI_Alltoall; MPI_Scan
sgi
-
5.1.1 MPI_Barrier()
Blocks until all processes in the communicator have reached the barrier.
C:       MPI_Barrier(comm)
Fortran: MPI_BARRIER(comm, ierr)
comm: communicator
sgi
-
5.1.2 MPI_Bcast()
Broadcasts a message from the root process to all processes in the communicator.
C:       MPI_Bcast(*buffer, count, datatype, root, comm)
Fortran: MPI_BCAST(buffer, count, datatype, root, comm, ierr)
sgi
-
sgi
-
5.1.3 MPI_Scatter()
Distributes distinct chunks of the root's send buffer to each process in the communicator.
C:       MPI_Scatter(*sendbuf, sendcnt, sendtype, *recvbuf, recvcnt, recvtype, root, comm)
Fortran: MPI_SCATTER(sendbuf, sendcnt, sendtype, recvbuf, recvcnt, recvtype, root, comm, ierr)
sgi
-
sgi
-
MPI_SCATTERV extends MPI_SCATTER with per-process counts (sendcounts) and displacements (displs).
MPI_SCATTERV(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)
  IN sendbuf, sendcounts, displs, sendtype
  OUT recvbuf
  IN recvcount, recvtype, root, comm
sgi
-
Example: scatter 100 ints to each process, with a stride between the chunks in sendbuf (stride >= 100):

MPI_Comm comm;
int gsize, *sendbuf;
int root, rbuf[100], i, *displs, *scounts;

MPI_Comm_size(comm, &gsize);
sendbuf = (int *)malloc(gsize*stride*sizeof(int));
displs = (int *)malloc(gsize*sizeof(int));
scounts = (int *)malloc(gsize*sizeof(int));
for (i = 0; i < gsize; ++i) {
    displs[i] = i*stride;
    scounts[i] = 100;
}
MPI_Scatterv(sendbuf, scounts, displs, MPI_INT, rbuf, 100, MPI_INT, root, comm);
-
sgi
100
-
5.1.4 MPI_Gather()
The inverse of MPI_Scatter: gathers distinct messages from each process into the root's receive buffer.
C:       MPI_Gather(*sendbuf, sendcnt, sendtype, *recvbuf, recvcount, recvtype, root, comm)
Fortran: MPI_GATHER(sendbuf, sendcnt, sendtype, recvbuf, recvcount, recvtype, root, comm, ierr)
sgi
-
sgi
-
5.1.5 MPI_Allgather()
Gathers data from all processes and distributes the concatenated result to all processes.
C:       MPI_Allgather(*sendbuf, sendcount, sendtype, *recvbuf, recvcount, recvtype, comm)
Fortran: MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierr)
sgi
-
sgi
-
MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)
  IN sendbuf, sendcount, sendtype
  OUT recvbuf
  IN recvcounts, displs, recvtype, comm
sgi
-
5.1.6 MPI_Reduce()
Applies a reduction operation to data from all processes and places the result in the root process.
C:       MPI_Reduce(*sendbuf, *recvbuf, count, datatype, op, root, comm)
Fortran: MPI_REDUCE(sendbuf, recvbuf, count, datatype, op, root, comm, ierr)
Besides the predefined MPI reduction operations, users can define their own with MPI_Op_create.
-
Predefined reduction operations:
MPI Op     | C types                    | Fortran types
MPI_MAX    | integer, float             | integer, real, complex
MPI_MIN    | integer, float             | integer, real, complex
MPI_SUM    | integer, float             | integer, real, complex
MPI_PROD   | integer, float             | integer, real, complex
MPI_LAND   | integer                    | logical
MPI_BAND   | integer, MPI_BYTE          | integer, MPI_BYTE
MPI_LOR    | integer                    | logical
MPI_BOR    | integer, MPI_BYTE          | integer, MPI_BYTE
MPI_LXOR   | integer                    | logical
MPI_BXOR   | integer, MPI_BYTE          | integer, MPI_BYTE
MPI_MAXLOC | float, double, long double | real, complex, double precision
MPI_MINLOC | float, double, long double | real, complex, double precision
sgi
-
5.1.7 MPI_Allreduce()
Equivalent to MPI_Reduce followed by MPI_Bcast: every process receives the reduction result.
C:       MPI_Allreduce(*sendbuf, *recvbuf, count, datatype, op, comm)
Fortran: MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm, ierr)
sgi
-
sgi
-
5.1.8 MPI_Reduce_scatter()
Equivalent to MPI_Reduce followed by MPI_Scatter: the result vector is split across the processes.
C:       MPI_Reduce_scatter(*sendbuf, *recvbuf, recvcount, datatype, op, comm)
Fortran: MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcount, datatype, op, comm, ierr)
sgi
-
sgi
-
5.1.9 MPI_Alltoall()
Each process sends a distinct message to every other process, a scatter performed from every rank.
C:       MPI_Alltoall(*sendbuf, sendcount, sendtype, *recvbuf, recvcnt, recvtype, comm)
Fortran: MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcnt, recvtype, comm, ierr)
sgi
-
sgi
-
5.1.10 MPI_Scan()
Computes a prefix reduction: process i receives the reduction of the values from processes 0 through i.
C:       MPI_Scan(*sendbuf, *recvbuf, count, datatype, op, comm)
Fortran: MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm, ierr)
sgi
-
sgi
-
sgi
-
5.2 Correctness of collective calls

Example 1:

switch (rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Bcast(buf2, count, type, 1, comm);
    break;
case 1:
    MPI_Bcast(buf2, count, type, 1, comm);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}
-
In Example 1 the two members of {0, 1} call the broadcasts rooted at 0 and at 1 in opposite orders. All processes of a communicator must invoke collective operations in the same order, so this program is erroneous and may deadlock.
-
-
Example 2:

switch (rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm0);
    MPI_Bcast(buf2, count, type, 2, comm2);
    break;
case 1:
    MPI_Bcast(buf1, count, type, 1, comm1);
    MPI_Bcast(buf2, count, type, 0, comm0);
    break;
case 2:
    MPI_Bcast(buf1, count, type, 2, comm2);
    MPI_Bcast(buf2, count, type, 1, comm1);
    break;
}
-
In Example 2, comm0 = {0,1}, comm1 = {1,2}, and comm2 = {2,0}. Each broadcast can complete only after the matching call on the other member of its communicator: rank 0's comm2 call waits on rank 2, whose comm1 call waits on rank 1, whose comm0 call waits on rank 0. This cyclic dependency may deadlock.
-
Example 3:

switch (rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Send(buf2, count, type, 1, tag, comm);
    break;
case 1:
    MPI_Recv(buf2, count, type, 0, tag, comm, &status);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}
-
In Example 3, rank 1 posts its receive before the broadcast while rank 0 sends after it. If the broadcast is implemented synchronously, rank 0 blocks in MPI_Bcast waiting for rank 1, and rank 1 blocks in MPI_Recv waiting for rank 0's send: deadlock. Collective and point-to-point calls must be ordered so that no such cycle can arise.
-
Example 4:

switch (rank) {
case 0:
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Send(buf2, count, type, 1, tag, comm);
    break;
case 1:
    MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, &status);
    MPI_Bcast(buf1, count, type, 0, comm);
    MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, &status);
    break;
case 2:
    MPI_Send(buf2, count, type, 1, tag, comm);
    MPI_Bcast(buf1, count, type, 0, comm);
    break;
}

Rank 1 receives from MPI_ANY_SOURCE both before and after the broadcast; the sends from ranks 0 and 2 can match either receive, so the outcome is nondeterministic.
-
[Diagram: for Example 4, two possible matchings of the sends from processors 0 and 2 with the two receives on processor 1]
-
MPI's predefined data types describe contiguous buffers of a single basic type. Derived data types let a single message carry non-contiguous data or a mix of types.
A derived type is built at run time from existing types using type constructors, and must be committed before use.
sgi
-
Type constructors:
MPI_TYPE_CONTIGUOUS
MPI_TYPE_VECTOR
MPI_TYPE_INDEXED
MPI_TYPE_COMMIT
sgi
-
7.1 Example: sending the upper-triangular part of a matrix (derived type "upper")
-
double a[100][100];
int disp[100], blocklen[100], i;
MPI_Datatype upper;
...
/* compute start and size of each row */
for (i = 0; i < 100; ++i) {
    disp[i] = 100*i + i;
    blocklen[i] = 100 - i;
}
/* create datatype for upper triangular part */
MPI_Type_indexed(100, blocklen, disp, MPI_DOUBLE, &upper);
MPI_Type_commit(&upper);
/* ... and send it */
MPI_Send(a, 1, upper, dest, tag, MPI_COMM_WORLD);
-
sgi
-
A general data type is described by a type map, Typemap = {(type0, disp0), ..., (type(n-1), disp(n-1))}, a sequence of basic types typei with displacements dispi. Its type signature is Typesig = (type0, ..., type(n-1)). A send with buffer address buf accesses n entries, the i-th at address buf + dispi with type typei.
Example: MPI_INT has type map {(int, 0)}, one int at displacement 0.
-
sgi
For Typemap = {(type0, disp0), ..., (type(n-1), disp(n-1))}:
  lb(Typemap) = min_j disp_j
  ub(Typemap) = max_j (disp_j + sizeof(type_j)) + pad,  j = 0, ..., n-1
  extent(Typemap) = ub(Typemap) - lb(Typemap)
where the padding rounds the extent up to the strictest alignment requirement max_i k_i among the member types.
Example: Type = {(double, 0), (char, 8)}, a double at displacement 0 and a char at displacement 8. With doubles aligned on 8-byte boundaries, lb = 0 and the raw upper bound 9 rounds up to 16, so the extent is 16.
-
sgi
MPI provides MPI_TYPE_EXTENT and MPI_TYPE_SIZE to query a data type.
MPI_TYPE_EXTENT(datatype, extent)
  IN  datatype
  OUT extent
C:       int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint *extent)
Fortran: MPI_TYPE_EXTENT(DATATYPE, EXTENT, IERROR)
         INTEGER DATATYPE, EXTENT, IERROR
MPI_TYPE_EXTENT returns the extent of a data type; for MPI_INT it returns the extent of an int.
-
MPI_TYPE_SIZE(datatype, size)
  IN  datatype
  OUT size
C:       int MPI_Type_size(MPI_Datatype datatype, int *size)
Fortran: MPI_TYPE_SIZE(DATATYPE, SIZE, IERROR)
         INTEGER DATATYPE, SIZE, IERROR
MPI_TYPE_SIZE returns the number of bytes of actual data in datatype, excluding the alignment gaps that MPI_TYPE_EXTENT counts.
-
sgi
Example: for Type = {(double, 0), (char, 8)} (double at 0, char at 8, 8-byte alignment), MPI_TYPE_EXTENT(datatype, i) returns i = 16 while MPI_TYPE_SIZE(datatype, i) returns i = 9.
-
5.2 MPI_TYPE_CONTIGUOUS
MPI_TYPE_CONTIGUOUS(count, oldtype, newtype)
  IN  count
  IN  oldtype
  OUT newtype
C:       int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype *newtype)
Fortran: MPI_TYPE_CONTIGUOUS(COUNT, OLDTYPE, NEWTYPE, IERROR)
         INTEGER COUNT, OLDTYPE, NEWTYPE, IERROR
-
sgi
MPI_TYPE_CONTIGUOUS is the simplest constructor: newtype consists of count copies of oldtype laid out contiguously, each copy offset by the extent of oldtype.
-
Example: oldtype = {(double, 0), (char, 8)} with extent 16 and count = 3 gives
newtype = {(double, 0), (char, 8), (double, 16), (char, 24), (double, 32), (char, 40)}.
-
5.2 MPI_TYPE_VECTOR
MPI_TYPE_VECTOR(count, blocklength, stride, oldtype, newtype)
  IN  count
  IN  blocklength
  IN  stride
  IN  oldtype
  OUT newtype
C:       int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
Fortran: MPI_TYPE_VECTOR(COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR)
         INTEGER COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR
-
sgi
MPI_TYPE_VECTOR builds a type of count equally spaced blocks, each holding blocklength copies of oldtype, with a stride (measured in extents of oldtype) between block starts.
-
Example: oldtype = {(double, 0), (char, 8)}, extent 16.
MPI_TYPE_VECTOR(2, 3, 4, oldtype, newtype) gives newtype =
{(double, 0), (char, 8), (double, 16), (char, 24), (double, 32), (char, 40),
 (double, 64), (char, 72), (double, 80), (char, 88), (double, 96), (char, 104)}
Two blocks of three elements each, with block starts 4 x 16 = 64 bytes apart.
A negative stride is allowed: MPI_TYPE_VECTOR(3, 1, -2, oldtype, newtype) gives
{(double, 0), (char, 8), (double, -32), (char, -24), (double, -64), (char, -56)}.
-
sgi In general, if oldtype has typemap { (type0, disp0), ..., (type_{n-1}, disp_{n-1}) } and extent ex, and blocklength is bl, the new type has count * bl * n entries:
{ (type0, disp0), ..., (type_{n-1}, disp_{n-1}),
  (type0, disp0 + ex), ..., (type_{n-1}, disp_{n-1} + ex), ...,
  (type0, disp0 + (bl-1)*ex), ..., (type_{n-1}, disp_{n-1} + (bl-1)*ex), ...,
  (type0, disp0 + (count-1)*stride*ex), ..., (type_{n-1}, disp_{n-1} + ((count-1)*stride + bl - 1)*ex) }
-
sgi MPI_TYPE_CONTIGUOUS(count, oldtype, newtype) is equivalent to
MPI_TYPE_VECTOR(count, 1, 1, oldtype, newtype),
and also to MPI_TYPE_VECTOR(1, count, num, oldtype, newtype) for any num (with a single block the stride does not matter).
-
5.3 MPI_TYPE_HVECTOR
sgi MPI_TYPE_HVECTOR(count, blocklength, stride, oldtype, newtype)
IN  count        number of blocks
IN  blocklength  number of elements in each block
IN  stride       spacing between the starts of consecutive blocks, in bytes
IN  oldtype      old datatype
OUT newtype      new datatype
MPI_TYPE_HVECTOR(COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE,IERROR)
INTEGER COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR
int MPI_Type_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
-
sgi
MPI_TYPE_HVECTOR
MPI_TYPE_HVECTOR is identical to MPI_TYPE_VECTOR except that stride is given in bytes rather than in units of the extent of oldtype (the "H" stands for heterogeneous).
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16.
What does MPI_TYPE_HVECTOR(2, 3, 4, oldtype, newtype) produce?
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16.
MPI_TYPE_HVECTOR(2, 3, 4, oldtype, newtype) gives newtype =
{ (double,0), (char,8), (double,16), (char,24), (double,32), (char,40),
(double,4), (char,12), (double,20), (char,28), (double,36), (char,44) }
The second block starts only 4 bytes after the first, so the two blocks overlap and the doubles at displacements 4, 20 and 36 are not aligned on 8-byte boundaries.
-
sgi In general, for oldtype { (type0, disp0), ..., (type_{n-1}, disp_{n-1}) } with extent ex and blocklength bl, the new type has count * bl * n entries, as for MPI_TYPE_VECTOR, except that block i starts at byte offset i * stride (stride is not multiplied by ex).
-
5.3 MPI_TYPE_INDEXED
sgi MPI_TYPE_INDEXED(count, array_of_blocklengths, array_of_displacements,
oldtype, newtype)
IN  count                   number of blocks
IN  array_of_blocklengths   number of elements in each block
IN  array_of_displacements  displacement of each block, in units of the extent of oldtype
IN  oldtype                 old datatype
OUT newtype                 new datatype
-
sgi
MPI_TYPE_INDEXED
MPI_TYPE_INDEXED builds a datatype of count blocks; block i consists of array_of_blocklengths[i] contiguous copies of oldtype and starts at a displacement of array_of_displacements[i] extents of oldtype.
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16,
B = (3, 1), D = (4, 0).
What does MPI_TYPE_INDEXED(2, B, D, oldtype, newtype) produce?
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16,
B = (3, 1), D = (4, 0).
MPI_TYPE_INDEXED(2, B, D, oldtype, newtype) gives newtype =
{ (double,64), (char,72), (double,80), (char,88), (double,96), (char,104), (double,0), (char,8) }: the first block (3 elements) starts at 4 x 16 = 64, the second (1 element) at 0.
-
sgi In general, with oldtype typemap { (type0, disp0), ..., (type_{n-1}, disp_{n-1}) }, extent ex, B = array_of_blocklengths and D = array_of_displacements, block i contributes B[i] copies of the typemap at displacements D[i]*ex, D[i]*ex + ex, ..., and the new type has
n * sum_{i=0}^{count-1} B[i] entries.
-
sgi MPI_TYPE_VECTOR(count, blocklength, stride, oldtype, newtype) is equivalent to
MPI_TYPE_INDEXED(count, B, D, oldtype, newtype) with D[j] = j * stride and B[j] = blocklength for j = 0, ..., count-1.
The use of MPI_TYPE_INDEXED is illustrated in Fig. 3.1.
-
5.4 MPI_TYPE_HINDEXED
sgi MPI_TYPE_HINDEXED(count, array_of_blocklengths, array_of_displacements,
oldtype, newtype)
IN  count                   number of blocks
IN  array_of_blocklengths   number of elements in each block
IN  array_of_displacements  displacement of each block, in bytes
IN  oldtype                 old datatype
OUT newtype                 new datatype
(H: the displacements are given in bytes)
-
sgi
MPI_TYPE_HINDEXED
MPI_TYPE_HINDEXED is identical to MPI_TYPE_INDEXED except that the displacements in array_of_displacements are given in bytes rather than in units of the extent of oldtype.
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16,
B = (3, 1), D = (4, 0).
What does MPI_TYPE_HINDEXED(2, B, D, oldtype, newtype) produce?
-
sgi Example: oldtype = { (double,0), (char,8) }, extent 16,
B = (3, 1), D = (4, 0).
MPI_TYPE_HINDEXED(2, B, D, oldtype, newtype) gives newtype =
{(double,4),(char,12),(double,20),(char,28),(double,36),(char,44),(double,0),(char,8)}
The doubles at displacements 4, 20 and 36 are misaligned; to reproduce the MPI_TYPE_INDEXED result of Fig. 3.1, the byte displacements would have to be D = (64, 0).
-
sgi In general, with oldtype typemap { (type0, disp0), ..., (type_{n-1}, disp_{n-1}) }, extent ex, B = array_of_blocklengths and D = array_of_displacements, block i contributes B[i] copies of the typemap at byte displacements D[i], D[i] + ex, ..., and the new type has
n * sum_{i=0}^{count-1} B[i] entries.
-
5.5 MPI_TYPE_STRUCT
sgi MPI_TYPE_STRUCT(count, array_of_blocklengths, array_of_displacements,
array_of_types, newtype)
IN  count                   number of blocks
IN  array_of_blocklengths   number of elements in each block
IN  array_of_displacements  displacement of each block, in bytes
IN  array_of_types          datatype of the elements in each block
OUT newtype                 new datatype
int MPI_Type_struct(int count, int *array_of_blocklengths, MPI_Aint *array_of_displacements, MPI_Datatype *array_of_types, MPI_Datatype *newtype)
MPI_TYPE_STRUCT(COUNT, ARRAY_OF_BLOCKLENGTHS, ARRAY_OF_DISPLACEMENTS, ARRAY_OF_TYPES, NEWTYPE, IERROR)
INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*),
ARRAY_OF_DISPLACEMENTS(*), ARRAY_OF_TYPES(*), NEWTYPE,
IERROR
-
sgi
MPI_TYPE_STRUCT
MPI_TYPE_STRUCT generalizes MPI_TYPE_HINDEXED: in addition to its own byte displacement and block length, each block may have its own datatype, given in array_of_types.
-
sgi Example: type1 = { (double,0), (char,8) }, extent 16; B = (2, 1, 3), D = (0, 16, 26), T = (MPI_FLOAT, type1, MPI_CHAR). What does MPI_TYPE_STRUCT(3, B, D, T, newtype) produce? (Assume a float occupies 4 bytes.)
-
sgi Example: type1 = { (double,0), (char,8) }, extent 16; B = (2, 1, 3), D = (0, 16, 26), T = (MPI_FLOAT, type1, MPI_CHAR). MPI_TYPE_STRUCT(3, B, D, T, newtype) (float size 4) gives newtype =
{ (float,0), (float,4), (double,16), (char,24), (char,26), (char,27), (char,28) }: two MPI_FLOATs starting at 0, one copy of type1 at 16, and three MPI_CHARs starting at 26.
-
sgi In general, with T = array_of_types, where T[i] has a typemap of n_i entries and extent ex_i, B = array_of_blocklengths, D = array_of_displacements and C = count, block i contributes B[i] copies of T[i]'s typemap at byte displacements D[i] + j * ex_i (j = 0, ..., B[i]-1); the new type has sum_{i=0}^{C-1} n_i * B[i] entries.
-
5.6 MPI_TYPE_COMMIT
sgi MPI_TYPE_COMMIT(datatype)
INOUT datatype  datatype to be committed
A derived datatype must be committed with MPI_TYPE_COMMIT before it can be used in communication.
int MPI_Type_commit(MPI_Datatype *datatype)
MPI_TYPE_COMMIT(DATATYPE, IERROR)
INTEGER DATATYPE, IERROR
-
5.7 MPI_TYPE_FREE
sgi MPI_TYPE_FREE(datatype)
INOUT datatype  datatype to be freed
int MPI_Type_free(MPI_Datatype *datatype)
MPI_TYPE_FREE(DATATYPE, IERROR)
INTEGER DATATYPE, IERROR
MPI_TYPE_FREE marks datatype for deallocation and sets datatype to MPI_DATATYPE_NULL; pending communication that uses the type completes normally, and derived types defined in terms of it are unaffected.
-
5.7 Example
sgi INTEGER type1, type2
CALL MPI_TYPE_CONTIGUOUS(5, MPI_REAL, type1, ierr)
! new type object created
CALL MPI_TYPE_COMMIT(type1, ierr)
! now type1 can be used for communication
type2 = type1
! type2 can be used for communication
! (it is a handle to same object as type1)
CALL MPI_TYPE_VECTOR(3, 5, 4, MPI_REAL, type1, ierr)
! new uncommitted type object created
-
sgiCALL MPI_TYPE_COMMIT(type1, ierr)
! now type1 can be used for communication
CALL MPI_TYPE_FREE(type2, ierr)
! free before overwrite handle
type2 = type1
! type2 can be used for communication
CALL MPI_TYPE_FREE(type2, ierr)
! both type1 and type2 are unavailable; type2
! has value MPI_DATATYPE_NULL and type1 is
! undefined
-
5.8 Communication with derived datatypes
sgi MPI_SEND(buf, count, datatype, dest, tag, comm), where datatype has typemap
{ (type0, disp0), ..., (type_{n-1}, disp_{n-1}) } and extent extent, sends n * count entries: entry (i,j) is taken from address addr_{i,j} = buf + extent*i + disp_j and has type type_j, for i = 0, ..., count-1 and j = 0, ..., n-1.
MPI_RECV(buf, count, datatype, source, tag, comm, status) likewise receives up to n * count entries, entry (i,j) going to address buf + extent*i + disp_j with type type_j.
-
sgiCALL MPI_TYPE_CONTIGUOUS( 2, MPI_REAL, type2, ...) CALL MPI_TYPE_CONTIGUOUS( 4, MPI_REAL, type4, ...)
CALL MPI_TYPE_CONTIGUOUS( 2, type2, type22, ...)
...
CALL MPI_SEND( a, 4, MPI_REAL, ...)
CALL MPI_SEND( a, 2, type2, ...)
CALL MPI_SEND( a, 1, type22, ...)
CALL MPI_SEND( a, 1, type4, ...)
...
CALL MPI_RECV( a, 4, MPI_REAL, ...)
CALL MPI_RECV( a, 2, type2, ...)
CALL MPI_RECV( a, 1, type22, ...)
CALL MPI_RECV( a, 1, type4, ...)
-
u2/31/3
sgi
-
u A group is an ordered set of processes; each process in a group of N processes has a rank 0 ~ N-1. MPI predefines
MPI_GROUP_EMPTY, a valid handle to an empty group, and MPI_GROUP_NULL, the invalid handle returned when a group is freed; MPI_GROUP_EMPTY must not be confused with MPI_GROUP_NULL.
sgi
-
u All MPI communication takes place in the context of a communicator; the predefined communicator MPI_COMM_WORLD contains all processes of the parallel program.
sgi
-
sgi
MPI
ii
-
sgi
-
u
u4 .4 .4 .4 .
u
sgi
-
u
uMPI40
u
sgi
-
1) Get the group of MPI_COMM_WORLD with MPI_Comm_group;
2) form the new group with MPI_Group_incl; 3) create the new communicator with MPI_Comm_create;
4) query the rank in the new communicator with MPI_Comm_rank; 5) communicate through the new communicator with the usual MPI calls; 6) free the communicator and group with MPI_Comm_free and MPI_Group_free.
sgi
-
sgi
-
sgi 8.1 Group management
1. Group accessors
2. Group constructors
3. Group destructors
MPI groups are manipulated through these three families of MPI calls.
-
sgi 8.1.1 Group accessors: MPI_GROUP_SIZE,
MPI_GROUP_RANK and
MPI_GROUP_COMPARE
-
sgi MPI_GROUP_SIZE(group, size)
IN  group  group to query
OUT size   number of processes in group
int MPI_Group_size(MPI_Group group, int *size)
MPI_GROUP_SIZE(GROUP, SIZE, IERROR)
INTEGER GROUP, SIZE, IERROR
MPI_GROUP_SIZE returns the number of processes in group; for group = MPI_GROUP_EMPTY, size = 0. Calling it with group = MPI_GROUP_NULL is erroneous.
-
sgi MPI_GROUP_RANK(group, rank)
IN  group  group to query
OUT rank   rank of the calling process in group
int MPI_Group_rank(MPI_Group group, int *rank)
MPI_GROUP_RANK(GROUP, RANK, IERROR)
INTEGER GROUP, RANK, IERROR
MPI_GROUP_RANK returns the rank of the calling process in group, or MPI_UNDEFINED if the process is not a member of group.
-
sgi MPI_GROUP_COMPARE(group1, group2, result)
IN  group1  first group
IN  group2  second group
OUT result  comparison result
int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int *result)
MPI_GROUP_COMPARE(GROUP1, GROUP2, RESULT, IERROR)
INTEGER GROUP1, GROUP2, RESULT, IERROR
MPI_GROUP_COMPARE returns MPI_IDENT if group1 and group2 have the same members in the same order, MPI_SIMILAR if the members are the same but the order differs, and MPI_UNEQUAL otherwise.
-
sgi 8.1.2 Group constructors
New groups are built from existing ones; the base group is usually obtained from MPI_COMM_WORLD via MPI_COMM_GROUP
(see 5.4.2).
-
sgi MPI_COMM_GROUP(comm, group)
IN  comm   communicator
OUT group  group of comm
int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)
MPI_COMM_GROUP(COMM, GROUP, IERROR)
INTEGER COMM, GROUP, IERROR
-
sgi MPI_GROUP_UNION(group1, group2, newgroup)
IN  group1    first group
IN  group2    second group
OUT newgroup  union of group1 and group2
int MPI_Group_union(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
MPI_GROUP_UNION(GROUP1, GROUP2, NEWGROUP, IERROR)
INTEGER GROUP1, GROUP2, NEWGROUP, IERROR
-
sgi MPI_GROUP_INTERSECTION(group1, group2, newgroup)
IN  group1    first group
IN  group2    second group
OUT newgroup  intersection of group1 and group2
int MPI_Group_intersection(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
MPI_GROUP_INTERSECTION(GROUP1, GROUP2, NEWGROUP, IERROR)
INTEGER GROUP1, GROUP2, NEWGROUP, IERROR
-
sgi MPI_GROUP_DIFFERENCE(group1, group2, newgroup)
IN  group1    first group
IN  group2    second group
OUT newgroup  difference of group1 and group2
int MPI_Group_difference(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
MPI_GROUP_DIFFERENCE(GROUP1,GROUP2, NEWGROUP, IERROR)
INTEGER GROUP1, GROUP2, NEWGROUP, IERROR
-
sgi
union: all elements of group1, followed by all elements of group2 that are not in group1;
intersection: all elements of group1 that are also in group2, in their order in group1;
difference: all elements of group1 that are not in group2, in their order in group1.
The operations are not commutative: the result is ordered by group1 first, then group2.
If the resulting group is empty, it is MPI_GROUP_EMPTY.
-
sgi Example:
group1 = {a,b,c,d}, group2 = {d,a,e}
group1 ∪ group2 = {a,b,c,d,e}
group1 ∩ group2 = {a,d}
group1 \ group2 = {b,c}
-
sgi MPI_GROUP_INCL(group, n, ranks, newgroup)
IN  group     existing group
IN  n         number of entries in ranks (and size of newgroup)
IN  ranks     ranks of the processes in group to appear in newgroup
OUT newgroup  new group
int MPI_Group_incl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
MPI_GROUP_INCL(GROUP, N, RANKS, NEWGROUP, IERROR)
INTEGER GROUP, N, RANKS(*), NEWGROUP, IERROR
MPI_GROUP_INCL creates newgroup from the n processes with ranks ranks[0], ..., ranks[n-1] in group; process i of newgroup is the process with rank ranks[i] in group. Each entry of ranks must be a valid, distinct rank in group. With n = 0, newgroup is MPI_GROUP_EMPTY.
-
sgi MPI_GROUP_EXCL(group, n, ranks, newgroup)
IN  group     existing group
IN  n         number of entries in ranks
IN  ranks     ranks of the processes in group that are not to appear in newgroup
OUT newgroup  new group
int MPI_Group_excl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
MPI_GROUP_EXCL(GROUP, N, RANKS, NEWGROUP, IERROR)
INTEGER GROUP, N, RANKS(*), NEWGROUP, IERROR
MPI_GROUP_EXCL creates newgroup by deleting from group the processes with ranks ranks[0], ..., ranks[n-1]; the remaining processes keep their relative order. Each entry of ranks must be a valid, distinct rank in group. With n = 0, newgroup is identical to group.
Example: for group = {a,b,c,d,e,f} and ranks = (3,1,2), MPI_GROUP_EXCL yields newgroup = {a,e,f}.
-
sgi Example:
group = {a,b,c,d,e,f}, ranks = (3,1,2):
MPI_GROUP_INCL yields {d,b,c}; MPI_GROUP_EXCL yields {a,e,f}.
-
sgi 8.1.3 Group destructor: MPI_GROUP_FREE(group)
INOUT group  group to be freed
int MPI_Group_free(MPI_Group *group)
MPI_GROUP_FREE(GROUP, IERROR)
INTEGER GROUP, IERROR
MPI_GROUP_FREE marks group for deallocation and sets group to MPI_GROUP_NULL.
Operations in progress that use the group, such as MPI_COMM_CREATE and MPI_COMM_DUP, complete normally. Groups and communicators are freed independently: MPI_GROUP_FREE does not free communicators, and MPI_COMM_FREE does not free groups.
-
sgi 8.2 Communicator management
1. Communicator accessors
2. Communicator constructors
3. Communicator destructors
These MPI calls manage communicators.
-
sgi MPI_COMM_RANK(comm, rank)
IN  comm  communicator
OUT rank  rank of the calling process in comm
int MPI_Comm_rank(MPI_Comm comm, int *rank)
MPI_COMM_RANK(COMM, RANK, IERROR)
INTEGER COMM, RANK, IERROR
The same information can be obtained via MPI_COMM_GROUP followed by MPI_GROUP_RANK and MPI_GROUP_FREE.
8.2.1
-
sgi MPI_COMM_COMPARE(comm1, comm2, result)
IN  comm1   first communicator
IN  comm2   second communicator
OUT result  comparison result
int MPI_Comm_compare(MPI_Comm comm1,MPI_Comm comm2, int *result)
MPI_COMM_COMPARE(COMM1, COMM2, RESULT, IERROR)
INTEGER COMM1, COMM2, RESULT, IERROR
MPI_COMM_COMPARE returns:
MPI_IDENT if comm1 and comm2 are handles to the same object;
MPI_CONGRUENT if the underlying groups have the same members in the same order but the contexts differ;
MPI_SIMILAR if the members are the same but the order differs;
MPI_UNEQUAL otherwise.
-
sgi MPI_COMM_DUP(comm, newcomm)
IN  comm     communicator
OUT newcomm  duplicate of comm
int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
MPI_COMM_DUP(COMM, NEWCOMM, IERROR)
INTEGER COMM, NEWCOMM, IERROR
MPI_COMM_DUP creates a new communicator with the same group as comm but a new communication context, so that traffic in the two communicators cannot interfere.
8.2.2
-
sgi MPI_COMM_CREATE(comm, group, newcomm)
IN  comm     existing communicator
IN  group    group, a subset of the group of comm
OUT newcomm  new communicator
int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm*newcomm)
MPI_COMM_CREATE(COMM, GROUP, NEWCOMM, IERROR)
INTEGER COMM, GROUP, NEWCOMM, IERROR
MPI_COMM_CREATE creates a new communicator newcomm for the processes in group; it must be called by all processes in comm with the same group argument, and processes not in group receive MPI_COMM_NULL. This supports MIMD-style computation in which subsets of processes communicate separately through newcomm.
-
-
sgi
An intracommunicator is used within a single group; an intercommunicator connects two groups A and B.
In intercommunicator communication, a process of A addresses its partner by the partner's rank in B, and vice versa;
for example, process i of A can exchange messages with process i of B.
8.3 Intercommunicators
-
sgi MPI_COMM_TEST_INTER(comm, flag)
IN  comm  communicator
OUT flag  true if comm is an intercommunicator
int MPI_Comm_test_inter(MPI_Comm comm, int *flag)
MPI_COMM_TEST_INTER(COMM, FLAG, IERROR)
INTEGER COMM, IERROR
LOGICAL FLAG
MPI_COMM_TEST_INTER returns flag = true if comm is an intercommunicator and false otherwise.
8.3.1
-
8.3.2
-
sgi MPI_COMM_COMPARE may also be applied to intercommunicators: if one argument is an intercommunicator and the other is not, the result is MPI_UNEQUAL; MPI_CONGRUENT and MPI_SIMILAR require that both the local and the remote groups compare accordingly.
MPI_COMM_REMOTE_SIZE(comm, size)
IN  comm  intercommunicator
OUT size  number of processes in the remote group of comm
int MPI_Comm_remote_size(MPI_Comm comm, int *size)
MPI_COMM_REMOTE_SIZE(COMM, SIZE, IERROR)
INTEGER COMM, SIZE, IERROR
MPI_COMM_REMOTE_SIZE returns the size of the remote group, whereas MPI_COMM_SIZE returns the size of the local group.
-
sgi MPI_COMM_REMOTE_GROUP(comm, group)
IN  comm   intercommunicator
OUT group  remote group of comm
int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)
MPI_COMM_REMOTE_GROUP(COMM, GROUP, IERROR)
INTEGER COMM, GROUP, IERROR
MPI_COMM_REMOTE_GROUP returns the remote group, whereas MPI_COMM_GROUP returns the local group.
-
sgi MPI_INTERCOMM_CREATE(local_comm, local_leader, bridge_comm, remote_leader,
tag, newintercomm)
IN  local_comm     local intracommunicator
IN  local_leader   rank of the leader in local_comm
IN  bridge_comm    communicator through which the two leaders can communicate
IN  remote_leader  rank of the other group's leader in bridge_comm
IN  tag            tag used by the leaders in bridge_comm
OUT newintercomm   new intercommunicator
8.3.3
-
sgi MPI_INTERCOMM_CREATE joins two intracommunicators into an intercommunicator. It is called collectively by both groups; local_leader is the rank of the local leader in local_comm, and
remote_leader is the rank of the other group's leader in bridge_comm. The two leaders exchange information
in bridge_comm using tag, so the tag must not conflict with other communication in bridge_comm.
MPI_INTERCOMM_MERGE(intercomm, high, newintracomm)
IN  intercomm     intercommunicator
IN  high          ordering flag for the two groups
OUT newintracomm  new intracommunicator merging the two groups of intercomm
8.3.3
-
Using SGI MPI
u SGI MPI installation  u Compiling MPI programs on IRIX  u Running MPI programs on IRIX
sgi
-
9.1 SGI MPI installation
u SGI MPI is installed with inst from the SGI Application CD.
u versions output (example):
  mpi          07/29/2003  MPI 4.3 (MPT 1.8)
  mpi.books    07/29/2003  MPI InSight documentation (4.3)
  mpi.hdr      07/29/2003  MPI 4.3 headers
  mpi.hdr.lib  07/29/2003  MPI 4.3 library headers
sgi
-
u The MPI libraries are installed under /usr/lib32 and /usr/lib64.
u The MPI headers are installed under /usr/include.
u MPI man pages are available, e.g.:
$ man MPI_Send
sgi
-
9.2 Compiling MPI programs on IRIX
u 64-bit MPI programs:
C++: CC -64 compute.C -lmpi++ -lmpi
C:   cc -64 compute.c -lmpi
F77: f77 -64 compute.f -lmpi
F90: f90 -64 compute.f -lmpi
sgi
-
u 32-bit (n32) MPI programs:
C++: CC -n32 compute.C -lmpi++ -lmpi
C:   cc -n32 compute.c -lmpi
F77: f77 -n32 compute.f -lmpi
F90: f90 -n32 compute.f -lmpi
sgi
-
9.3 Running MPI programs on IRIX
u Run an MPI program:
mpirun -np 4 ./a.out
u Run several executables in one MPI job (see the mpirun man page for the entry syntax):
% mpirun -np 2 prog1 : -np 8 prog2
u Checkpoint/restart (cpr) support:
mpirun -cpr -np 4 ./a.out
sgi
-
sgi
-
#include <stdio.h>
int main(int argc, char *argv[])
{
    int A, B, C, D, E, F, G;
    B = 1; C = 2; F = 3; G = 4;
    A = B + C;
    E = F + G;
    D = A - E;
    printf("D=%d\n", D);
    return 0;
}
sgi
-
(Version for 3 processes)
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
    int A, B, C, D, E, F, G;
    int numprocs, myid;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (numprocs != 3) {
        printf("numprocs must equal 3!\n");
        MPI_Finalize();
        return 1;
    }
sgi
-
(continued)
    if (myid == 1) {
        B = 1; C = 2; A = B + C;
        MPI_Send(&A, 1, MPI_INT, 0, 80, MPI_COMM_WORLD);
    } else if (myid == 2) {
        F = 3; G = 4; E = F + G;
        MPI_Send(&E, 1, MPI_INT, 0, 80, MPI_COMM_WORLD);
    } else if (myid == 0) {
        MPI_Recv(&A, 1, MPI_INT, 1, 80, MPI_COMM_WORLD, &status);
        MPI_Recv(&E, 1, MPI_INT, 2, 80, MPI_COMM_WORLD, &status);
        D = A - E;
        printf("D = %d\n", D);
    }
    MPI_Finalize();
    return 0;
}
sgi
-
(Version for 2 processes)
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
    int A, B, C, D, E, F, G;
    int numprocs, myid;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (numprocs != 2) {
        printf("numprocs must equal 2!\n");
        MPI_Finalize();
        return 1;
    }
sgi
-
(continued)
    if (myid == 0) {
        B = 1; C = 2; A = B + C;
        MPI_Recv(&E, 1, MPI_INT, 1, 80, MPI_COMM_WORLD, &status);
        D = A - E;
        printf("D = %d\n", D);
    } else if (myid == 1) {
        F = 3; G = 4; E = F + G;
        MPI_Send(&E, 1, MPI_INT, 0, 80, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
sgi
-
Computing pi by numerical integration:
pi = integral from 0 to 1 of 4/(1+x^2) dx
Divide [0,1] into N subintervals of width 1/N and apply the midpoint rule:
pi ≈ (1/N) * sum_{i=0}^{N-1} f((i+0.5)/N),  where f(x) = 4/(1+x^2)
sgi
-
-
C version (serial)
#include <stdio.h>
#define N 1000000
int main(void)
{
    double local, pi = 0.0, w;
    long i;
    w = 1.0 / N;
    for (i = 0; i < N; i++) {
        local = (i + 0.5) * w;
        pi = pi + 4.0 / (1.0 + local * local);
    }
    printf("pi is %f\n", pi * w);
    return 0;
}
sgi
-
C version (1)
#include "mpi.h"
#include <stdio.h>
#include <math.h>
double f(double a)
{
    return (4.0 / (1.0 + a * a));
}
sgi
-
C version (2)
int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    if (myid == 0) {
        n = 1000000;
    }
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
sgi
-
C version (3)
    h = 1.0 / (double) n;    /* interval width */
    sum = 0.0;
    for (i = myid; i < n; i += numprocs) {
        x = h * ((double)i + 0.5);
        sum += f(x);
    }
    mypi = h * sum;
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (myid == 0) {
        printf("pi is approximately %.16f, Error is %.16f\n",
               pi, fabs(pi - PI25DT));
    }
    MPI_Finalize();
    return 0;
}
Output: pi is approximately 3.1415926535897643, Error is 0.0000000000000289
sgi
-
Fortran version (1)
      program main
      include 'mpif.h'
      double precision PI25DT
      parameter (PI25DT = 3.141592653589793238462643d0)
      double precision mypi, pi, h, sum, x, f, a
      integer n, myid, numprocs, i, rc
c     function to integrate
      f(a) = 4.d0 / (1.d0 + a*a)
sgi
-
Fortran version (2)
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      if ( myid .eq. 0 ) then
         n = 1000000
      endif
      call MPI_BCAST(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
      h = 1.0d0/n
      sum = 0.0d0
      do 20 i = myid, n, numprocs
         x = h * (dble(i) + 0.5d0)
         sum = sum + f(x)
 20   continue
      mypi = h * sum
sgi
-
Fortran version (3)
c     collect the partial sums on process 0
      call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,
     $                MPI_SUM,0,MPI_COMM_WORLD,ierr)
c     process 0 prints the answer
      if (myid .eq. 0) then
         write(6, 97) pi, abs(pi - PI25DT)
 97      format(' pi is approximately: ', F18.16,
     $          ' Error is: ', F18.16)
      endif
      call MPI_FINALIZE(rc)
      stop
      end
Output: pi is approximately 3.1415926535897643, Error is 0.0000000000000289
sgi
-
Jacobi iteration: each point (i,j) is updated from its four neighbors
(i-1,j), (i+1,j), (i,j-1) and (i,j+1).
-
REAL A(0:n+1,0:n+1),B(1:n,1:n)
...
! Main Loop
DO WHILE(.NOT. converged)
! perform 4 point stencil
DO j = 1, n
DO i = 1, n
B(i, j) = 0.25 * (A(i-1,j) + A(i+1, j) + A(i, j-1) + A(i, j+1))
END DO
END DO
-
! copy result back into array A
DO j = 1, n
DO i = 1, n
A(i, j) = B(i, j)
END DO
END DO
...
! Convergence test omitted
END DO