Peano
Namespaces
- namespace tests
Data Structures
- class BooleanSemaphore: Boolean semaphore across MPI ranks.
- struct DoubleMessage
- struct IntegerMessage
- class Lock: Create a lock around a boolean semaphore region.
- class Rank: Represents a program instance within a cluster.
- struct StringMessage: The string message looks like the other messages which I generate through DaStGen 2, but I wrote it manually to support dynamic string lengths.
Functions
- std::string MPIStatusToString(const MPI_Status &status): Returns a string representation of the MPI status.
- std::string MPIReturnValueToString(int result)
- void wait(MPI_Request &request, const std::string &fullQualifiedClassName, const std::string &functionName): Wrapper around MPI_Wait.
With DaStGen2, we have two variants of sends and receives: a blocking one and a non-blocking one. Furthermore, we can use the MPI datatype that the generator yields directly. With the blocking variant, the usage is straightforward:

```cpp
IntegerMessage::receive(message, rank, BarrierTag, tarch::mpi::Rank::getInstance().getCommunicator());
```
With the option to insert a wait functor, you can do more complicated stuff. The minimalist version is:
```cpp
tarch::mpi::IntegerMessage::receive(
  message, rank, BarrierTag,
  []() {},
  [&]() {
    tarch::services::ServiceRepository::getInstance().receiveDanglingMessages();
  },
  tarch::mpi::Rank::getInstance().getCommunicator()
);
```
It ensures that MPI progress is made, as it actively polls the MPI subsystem for further incoming messages. I prefer the more sophisticated version, which also has a timeout mechanism:
```cpp
tarch::mpi::IntegerMessage::receive(
  message, rank, BarrierTag,
  [&]() {
    tarch::mpi::Rank::getInstance().setDeadlockWarningTimeStamp();
    tarch::mpi::Rank::getInstance().setDeadlockTimeOutTimeStamp();
  },
  [&]() {
    tarch::mpi::Rank::getInstance().writeTimeOutWarning( "tarch::mpi::IntegerMessage", "receive()", rank, BarrierTag );
    tarch::mpi::Rank::getInstance().triggerDeadlockTimeOut( "tarch::mpi::IntegerMessage", "receive()", rank, BarrierTag );
    tarch::services::ServiceRepository::getInstance().receiveDanglingMessages();
  },
  tarch::mpi::Rank::getInstance().getCommunicator()
);
```
As inserting the code from above is cumbersome, I decided to create my own Peano4 aspect and to inject it into DaStGen. So just invoke the sendAndPoll or receiveAndPoll operation and you basically get the above aspect.
std::string tarch::mpi::MPIReturnValueToString(int result)
Referenced by peano4::stacks::STDVectorStack< T >::startReceive(), and peano4::stacks::STDVectorStack< T >::startSend().

std::string tarch::mpi::MPIStatusToString(const MPI_Status &status)
Returns a string representation of the MPI status.
For a detailed description of the MPI status itself, see the file mpi.h.
void tarch::mpi::wait(MPI_Request &request, const std::string &fullQualifiedClassName, const std::string &functionName)
Wrapper around MPI_Wait.
This routine wraps around MPI_Wait and implements some deadlock detection. That's not the only thing: the routine also invokes the receiveDanglingMessages() routine.
- request: Valid MPI request.
- fullQualifiedClassName: The fully qualified class name of the code invoking wait(). This one is used to report on deadlocks.
- functionName: The name of the function invoking wait(), likewise used in deadlock reports.