Peano
Namespaces
namespace tests
Data Structures
class BooleanSemaphore
    Boolean semaphore across MPI ranks. More...
struct DoubleMessage
struct IntegerMessage
class Lock
    Create a lock around a boolean semaphore region. More...
class Rank
    Represents a program instance within a cluster. More...
struct StringMessage
    The string message looks like the other messages which I generate through DaStGen 2, but I wrote it manually to support dynamic string lengths. More...
Functions
std::string MPIStatusToString (const MPI_Status &status)
    Returns a string representation of the MPI status.
std::string MPIReturnValueToString (int result)
With DaStGen 2, we have two variants of sends and receives: a blocking one and a non-blocking one. Furthermore, we can obviously use the MPI datatype that the generator yields as well. With the blocking variant, the usage is straightforward:
IntegerMessage::receive(message, rank, BarrierTag, tarch::mpi::Rank::getInstance().getCommunicator());
With the option to insert a wait functor, you can do more complicated stuff. The minimalist version is:
tarch::mpi::IntegerMessage::receive(
  message, rank, BarrierTag,
  []() {},
  [&]() {
    tarch::services::ServiceRepository::getInstance().receiveDanglingMessages();
  },
  tarch::mpi::Rank::getInstance().getCommunicator()
);
This variant ensures that MPI progress is made, as it actively polls the MPI subsystem for further incoming messages while it waits. I prefer the more sophisticated version, which also has a timeout mechanism:
tarch::mpi::IntegerMessage::receive(
  message, rank, BarrierTag,
  [&]() {
    tarch::mpi::Rank::getInstance().setDeadlockWarningTimeStamp();
    tarch::mpi::Rank::getInstance().setDeadlockTimeOutTimeStamp();
  },
  [&]() {
    tarch::mpi::Rank::getInstance().writeTimeOutWarning( "tarch::mpi::DoubleMessage", "sendAndPollDanglingMessages()", destination, tag );
    tarch::mpi::Rank::getInstance().triggerDeadlockTimeOut( "tarch::mpi::DoubleMessage", "sendAndPollDanglingMessages()", destination, tag );
    tarch::services::ServiceRepository::getInstance().receiveDanglingMessages();
  },
  tarch::mpi::Rank::getInstance().getCommunicator()
);
As inserting the code from above is cumbersome, I decided to create my own Peano4 aspect and to inject it into DaStGen. So just invoke the sendAndPoll or receiveAndPoll operation and you basically get the above aspect.
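For illustration, a minimal sketch of what such a call might look like. The operation name receiveAndPollDanglingMessages and its signature are my assumptions, inferred from the time-out strings above and the plain receive signature; message, rank and BarrierTag are placeholders:

tarch::mpi::IntegerMessage message;
// Hypothetical generated operation: it wraps the wait/poll functors from
// above, so the call site stays as short as the blocking variant.
tarch::mpi::IntegerMessage::receiveAndPollDanglingMessages(
  message, rank, BarrierTag,
  tarch::mpi::Rank::getInstance().getCommunicator()
);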
std::string tarch::mpi::MPIReturnValueToString ( int result )
Definition at line 209 of file Rank.cpp.
Referenced by tarch::mpi::Rank::init(), MPIStatusToString(), tarch::mpi::Rank::setCommunicator(), tarch::mpi::Rank::shutdown(), peano4::stacks::STDVectorStack< T >::startReceive(), peano4::stacks::STDVectorStackOverSmartPointers< T >::startReceive(), peano4::stacks::STDVectorStack< T >::startReceive(), peano4::stacks::STDVectorStack< T >::startSend(), peano4::stacks::STDVectorStackOverSmartPointers< T >::startSend(), and peano4::stacks::STDVectorStack< T >::startSend().
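For illustration, a small usage sketch of my own (the header path is an assumption; the function and the communicator call are taken from this page): translate the return code of a raw MPI call into a readable string before reporting it.

#include <mpi.h>
#include <iostream>

#include "tarch/mpi/Rank.h" // assumed location of the declaration

void sendValue(int value, int destination, int tag) {
  // Plain MPI call whose return code we want to report in readable form.
  int result = MPI_Send(
    &value, 1, MPI_INT, destination, tag,
    tarch::mpi::Rank::getInstance().getCommunicator()
  );
  if (result != MPI_SUCCESS) {
    std::cerr << "MPI_Send failed: "
              << tarch::mpi::MPIReturnValueToString(result) << std::endl;
  }
}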
std::string tarch::mpi::MPIStatusToString ( const MPI_Status & status )
Returns a string representation of the MPI status.
For a detailed description of the MPI status itself, see the file mpi.h.
Definition at line 250 of file Rank.cpp.
References MPIReturnValueToString().
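For illustration, a small usage sketch of my own (header path assumed as above): dump the status of a completed receive for debugging.

#include <mpi.h>
#include <iostream>

#include "tarch/mpi/Rank.h" // assumed location of the declaration

void receiveAndDump(int tag) {
  int        value;
  MPI_Status status;
  // Blocking receive; afterwards the status holds source, tag and error code.
  MPI_Recv(
    &value, 1, MPI_INT, MPI_ANY_SOURCE, tag,
    tarch::mpi::Rank::getInstance().getCommunicator(), &status
  );
  std::cout << tarch::mpi::MPIStatusToString(status) << std::endl;
}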