
Defining a group

Not all of the nodes running the program need take part in a parallelization routine. MPI's collective communication routines normally send information to every node in a communicator, so a group containing only the required number of nodes must be defined; information is then sent only to this group. The MPI_Comm_split function creates a new communicator by splitting an existing one, in this case MPI_COMM_WORLD, the communicator containing all nodes. MPI_Comm_split is a collective operation, so it must be called on every node in the old communicator at the same time. The typical way a node determines whether it is to be included in the group communication is by checking whether its rank is less than the dimension of the matrix. If it is not, the node takes no further part in the communication.


\begin{lstlisting}[frame=trbl,caption=Extract from receive.cc]{}
// Define new group of nodes for the communication
MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group1);
if(rank >= dim)
    return;
\end{lstlisting}


When the parallelization of the matrix-vector multiplication is complete, the communicator should be freed to reclaim its resources. Since group1 was created by MPI_Comm_split it is a communicator, and is freed with the MPI_Comm_free function.
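Putting the pieces together, the full lifecycle of the sub-communicator might be sketched as follows. This is a minimal illustration rather than the actual contents of receive.cc; it assumes group1 is declared as an MPI_Comm and that rank and dim have already been obtained.

\begin{lstlisting}[frame=trbl,caption=Sketch of the communicator lifecycle]{}
MPI_Comm group1;
int color = (rank < dim) ? 0 : 1;  // nodes with rank < dim form the group
MPI_Comm_split(MPI_COMM_WORLD, color, rank, &group1);

if(rank >= dim)
    return;                        // surplus nodes take no further part

// ... parallel matrix-vector multiplication over group1 ...

MPI_Comm_free(&group1);            // free the communicator afterwards
\end{lstlisting}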


Colm O hEigeartaigh 2003-05-30