
MPI Routines used for communicating between nodes

The Message Passing Interface (MPI) is a standard for message passing. It is typically used in conjunction with a C or C++ program to farm out computation to the nodes of a cluster. The implementation of MPI used in this project was the open-source MPICH library. Two types of MPI operations were used in this project: collective and non-collective operations. Only two non-collective operations were used: MPI_Send sends data from one node to another, and MPI_Recv receives data from a particular node. Both operations are blocking, meaning that the node which calls the operation pauses until the operation is complete.
\begin{lstlisting}[frame=trbl]{}
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
         MPI_Comm comm)
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag,
         MPI_Comm comm, MPI_Status *status)
\end{lstlisting}
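
A minimal sketch of how these two calls pair up is given below. It is not taken from the project code; the buffer, tag and rank values are illustrative, and the program assumes it is run with at least two processes.

\begin{lstlisting}[frame=trbl]{}
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double value = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Node 0 sends a single double to node 1; MPI_Send blocks
           until the send buffer can safely be reused. */
        value = 3.14;
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Node 1 blocks here until the matching message arrives. */
        MPI_Status status;
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("Node 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}
\end{lstlisting}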


The other operations used are all collective. The MPI_Bcast operation broadcasts a message from the root node to all other nodes/processes in the specified group. This is used to broadcast the dimension of the matrix to all nodes, and also to broadcast an "exit" matrix to each node.


\begin{lstlisting}[frame=trbl]{}
MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root,
MPI_Comm comm)
\end{lstlisting}
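
As a rough illustration of broadcasting the matrix dimension from the root, the following sketch (with an illustrative variable name and size, not the project's actual code) shows that every node must make the same MPI_Bcast call, after which each node holds the root's value.

\begin{lstlisting}[frame=trbl]{}
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, dimension = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        dimension = 1024;   /* only the root knows the size initially */

    /* All nodes call MPI_Bcast; on return every node holds the
       root's value of 'dimension'. */
    MPI_Bcast(&dimension, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
\end{lstlisting}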


When the group of nodes that are to work on the matrix-vector multiplication has been set up, the root node must give out a portion of the matrix to each node. This could be achieved with MPI_Send, but it is much more efficient to use the MPI_Scatter operation, which farms out pieces of an array to the different nodes. Thus, the decomposition of the matrix can be achieved in just one command!


\begin{lstlisting}[frame=trbl]{}
MPI_Scatter(void *sendbuf, int sendcnt, MPI_Datatype sendtype,
            void *recvbuf, int recvcnt, MPI_Datatype recvtype,
            int root, MPI_Comm comm)
\end{lstlisting}
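
The sketch below shows how rows of a row-major matrix might be scattered from the root. It is an illustration only, assuming the dimension divides evenly by the number of nodes; the names and sizes are not taken from the project code.

\begin{lstlisting}[frame=trbl]{}
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int dim = 8;              /* illustrative matrix dimension */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = dim / size;          /* assumes dim divides evenly */
    double *matrix = NULL;
    double *local = malloc(rows * dim * sizeof(double));

    if (rank == 0) {
        /* Only the root holds the full matrix, stored row-major. */
        matrix = calloc(dim * dim, sizeof(double));
    }

    /* Each node receives 'rows' consecutive rows of the matrix. */
    MPI_Scatter(matrix, rows * dim, MPI_DOUBLE,
                local,  rows * dim, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(local);
    if (rank == 0) free(matrix);
    MPI_Finalize();
    return 0;
}
\end{lstlisting}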


There is also a function, MPI_Gather, that performs the inverse operation of MPI_Scatter. When called on the root node, it gathers data of a fixed size from all the nodes in the specified group into a single array. This is used to gather in the newly calculated qubit vector from the nodes when the calculation is finished.


\begin{lstlisting}[frame=trbl]{}
MPI_Gather(void *sendbuf, int sendcnt, MPI_Datatype sendtype,
           void *recvbuf, int recvcount, MPI_Datatype recvtype,
           int root, MPI_Comm comm)
\end{lstlisting}
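
A corresponding sketch for collecting the pieces of the result vector on the root is shown below; again the names and sizes are illustrative, and the vector length is assumed to divide evenly among the nodes.

\begin{lstlisting}[frame=trbl]{}
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int dim = 8;              /* illustrative vector length */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = dim / size;         /* assumes dim divides evenly */
    double *local_result = calloc(chunk, sizeof(double));
    double *vector = NULL;

    if (rank == 0)
        vector = malloc(dim * sizeof(double));

    /* ... each node fills local_result with its part of the product ... */

    /* The root collects the pieces into 'vector' in rank order. */
    MPI_Gather(local_result, chunk, MPI_DOUBLE,
               vector, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0) free(vector);
    free(local_result);
    MPI_Finalize();
    return 0;
}
\end{lstlisting}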


Although the collective operations detailed above must be called on every node in the group, they do not guarantee that all nodes are synchronized when they return. To synchronize all the nodes, the MPI_Barrier operation is called after a collective function. This ensures that every node in the group has reached the same point before any node continues.


\begin{lstlisting}[frame=trbl]{}
MPI_Barrier(MPI_Comm comm)
\end{lstlisting}
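
The pattern is simply a barrier placed directly after the collective call, as in the following illustrative fragment (variable names are not from the project code).

\begin{lstlisting}[frame=trbl]{}
#include <mpi.h>

int main(int argc, char **argv)
{
    int dimension = 0;

    MPI_Init(&argc, &argv);

    /* Broadcast, then wait: no node continues past MPI_Barrier until
       every node in the communicator has reached it. */
    MPI_Bcast(&dimension, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
\end{lstlisting}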


