Peano 4
peano4::parallel::StartTraversalMessage Struct Reference

#include <StartTraversalMessage.h>

Public Types

enum  ObjectConstruction { NoData }
 

Public Member Functions

 StartTraversalMessage ()
 
 StartTraversalMessage (int __stepIdentifier)
 
int getStepIdentifier () const
 
void setStepIdentifier (int value)
 
 StartTraversalMessage (const StartTraversalMessage &copy)=default
 
int getSenderRank () const
 
 StartTraversalMessage (ObjectConstruction)
 
std::string toString () const
 

Static Public Member Functions

static MPI_Datatype getForkDatatype ()
 Hands out MPI datatype if we work without the LLVM MPI extension.
 
static MPI_Datatype getJoinDatatype ()
 
static MPI_Datatype getBoundaryExchangeDatatype ()
 
static MPI_Datatype getMultiscaleDataExchangeDatatype ()
 
static MPI_Datatype getGlobalCommunciationDatatype ()
 
static void initDatatype ()
 Wrapper around getDatatype() to trigger lazy evaluation if we use the lazy initialisation.
 
static void shutdownDatatype ()
 Free the underlying MPI datatype.
 
static void send (const peano4::parallel::StartTraversalMessage &buffer, int destination, int tag, MPI_Comm communicator)
 In DaStGen (the first version), I had a non-static version of the send as well as the receive.
 
static void receive (peano4::parallel::StartTraversalMessage &buffer, int source, int tag, MPI_Comm communicator)
 
static void send (const peano4::parallel::StartTraversalMessage &buffer, int destination, int tag, std::function< void()> startCommunicationFunctor, std::function< void()> waitFunctor, MPI_Comm communicator)
 Alternative to the other send() where I trigger a non-blocking send and then invoke the functor until the corresponding MPI_Test tells me that the message went through.
 
static void receive (peano4::parallel::StartTraversalMessage &buffer, int source, int tag, std::function< void()> startCommunicationFunctor, std::function< void()> waitFunctor, MPI_Comm communicator)
 
static void sendAndPollDanglingMessages (const peano4::parallel::StartTraversalMessage &message, int destination, int tag, MPI_Comm communicator=tarch::mpi::Rank::getInstance().getCommunicator())
 
static void receiveAndPollDanglingMessages (peano4::parallel::StartTraversalMessage &message, int source, int tag, MPI_Comm communicator=tarch::mpi::Rank::getInstance().getCommunicator())
 

Private Attributes

int _stepIdentifier
 
int _senderDestinationRank
 

Static Private Attributes

static MPI_Datatype Datatype
 Whenever we use LLVM's MPI extension (DaStGen), we rely on lazy initialisation of the datatype.
 

Detailed Description

Definition at line 27 of file StartTraversalMessage.h.

Member Enumeration Documentation

◆ ObjectConstruction

Constructor & Destructor Documentation

◆ StartTraversalMessage() [1/4]

peano4::parallel::StartTraversalMessage::StartTraversalMessage ( )

Definition at line 29 of file StartTraversalMessage.h.

◆ StartTraversalMessage() [2/4]

peano4::parallel::StartTraversalMessage::StartTraversalMessage ( int __stepIdentifier)

Definition at line 10 of file StartTraversalMessage.cpp.

References setStepIdentifier().

◆ StartTraversalMessage() [3/4]

peano4::parallel::StartTraversalMessage::StartTraversalMessage ( const StartTraversalMessage & copy)
default

◆ StartTraversalMessage() [4/4]

peano4::parallel::StartTraversalMessage::StartTraversalMessage ( ObjectConstruction )

Definition at line 104 of file StartTraversalMessage.h.

Member Function Documentation

◆ getBoundaryExchangeDatatype()

MPI_Datatype peano4::parallel::StartTraversalMessage::getBoundaryExchangeDatatype ( )
static

Definition at line 68 of file StartTraversalMessage.cpp.

◆ getForkDatatype()

MPI_Datatype peano4::parallel::StartTraversalMessage::getForkDatatype ( )
static

Hands out MPI datatype if we work without the LLVM MPI extension.

If we work with this additional feature, this is the routine where the lazy initialisation is done and the datatype is also cached.

Definition at line 50 of file StartTraversalMessage.cpp.

◆ getGlobalCommunciationDatatype()

MPI_Datatype peano4::parallel::StartTraversalMessage::getGlobalCommunciationDatatype ( )
static

Definition at line 56 of file StartTraversalMessage.cpp.

Referenced by peano4::parallel::tests::PingPongTest::testDaStGenArray().

◆ getJoinDatatype()

MPI_Datatype peano4::parallel::StartTraversalMessage::getJoinDatatype ( )
static

Definition at line 62 of file StartTraversalMessage.cpp.

◆ getMultiscaleDataExchangeDatatype()

MPI_Datatype peano4::parallel::StartTraversalMessage::getMultiscaleDataExchangeDatatype ( )
static

Definition at line 74 of file StartTraversalMessage.cpp.

◆ getSenderRank()

int peano4::parallel::StartTraversalMessage::getSenderRank ( ) const
Returns
The rank of the sender of an object. It only makes sense to call this routine after you've invoked receive with MPI_ANY_SOURCE.

Definition at line 79 of file StartTraversalMessage.cpp.

◆ getStepIdentifier()

int peano4::parallel::StartTraversalMessage::getStepIdentifier ( ) const

Definition at line 28 of file StartTraversalMessage.cpp.

Referenced by peano4::parallel::Node::continueToRun(), and peano4::parallel::tests::PingPongTest::testDaStGenTypeStartTraversalMessage().

◆ initDatatype()

void peano4::parallel::StartTraversalMessage::initDatatype ( )
static

Wrapper around getDatatype() to trigger lazy evaluation if we use the lazy initialisation.

Definition at line 84 of file StartTraversalMessage.cpp.

Referenced by peano4::parallel::Node::initMPIDatatypes().

◆ receive() [1/2]

void peano4::parallel::StartTraversalMessage::receive ( peano4::parallel::StartTraversalMessage & buffer,
int source,
int tag,
MPI_Comm communicator )
static

Definition at line 163 of file StartTraversalMessage.cpp.

References _senderDestinationRank.

Referenced by receiveAndPollDanglingMessages(), and peano4::parallel::tests::PingPongTest::testDaStGenTypeStartTraversalMessage().

◆ receive() [2/2]

void peano4::parallel::StartTraversalMessage::receive ( peano4::parallel::StartTraversalMessage & buffer,
int source,
int tag,
std::function< void()> startCommunicationFunctor,
std::function< void()> waitFunctor,
MPI_Comm communicator )
static

Definition at line 190 of file StartTraversalMessage.cpp.

References _senderDestinationRank.

◆ receiveAndPollDanglingMessages()

void peano4::parallel::StartTraversalMessage::receiveAndPollDanglingMessages ( peano4::parallel::StartTraversalMessage & message,
int source,
int tag,
MPI_Comm communicator = tarch::mpi::Rank::getInstance().getCommunicator() )
static

◆ send() [1/2]

void peano4::parallel::StartTraversalMessage::send ( const peano4::parallel::StartTraversalMessage & buffer,
int destination,
int tag,
MPI_Comm communicator )
static

In DaStGen (the first version), I had a non-static version of the send as well as the receive.

However, this did not work with newer C++11 compilers: a member function invoked through the this pointer does not necessarily see the vtable, whereas the initialisation sees the object from outside, i.e. including its vtable. So this routine is now basically an alias for a blocking MPI_Send.

Definition at line 158 of file StartTraversalMessage.cpp.

Referenced by sendAndPollDanglingMessages(), and peano4::parallel::tests::PingPongTest::testDaStGenTypeStartTraversalMessage().

◆ send() [2/2]

void peano4::parallel::StartTraversalMessage::send ( const peano4::parallel::StartTraversalMessage & buffer,
int destination,
int tag,
std::function< void()> startCommunicationFunctor,
std::function< void()> waitFunctor,
MPI_Comm communicator )
static

Alternative to the other send() where I trigger a non-blocking send and then invoke the functor until the corresponding MPI_Test tells me that the message went through.

In systems with heavy MPI usage, this can help to avoid deadlocks.

Definition at line 170 of file StartTraversalMessage.cpp.

◆ sendAndPollDanglingMessages()

void peano4::parallel::StartTraversalMessage::sendAndPollDanglingMessages ( const peano4::parallel::StartTraversalMessage & message,
int destination,
int tag,
MPI_Comm communicator = tarch::mpi::Rank::getInstance().getCommunicator() )
static

◆ setStepIdentifier()

void peano4::parallel::StartTraversalMessage::setStepIdentifier ( int value)

◆ shutdownDatatype()

void peano4::parallel::StartTraversalMessage::shutdownDatatype ( )
static

Free the underlying MPI datatype.

Definition at line 138 of file StartTraversalMessage.cpp.

Referenced by peano4::parallel::Node::shutdownMPIDatatypes().

◆ toString()

std::string peano4::parallel::StartTraversalMessage::toString ( ) const

Definition at line 16 of file StartTraversalMessage.cpp.

Referenced by peano4::parallel::Node::continueToRun(), and peano4::parallel::tests::PingPongTest::testDaStGenTypeStartTraversalMessage().

Field Documentation

◆ _senderDestinationRank

int peano4::parallel::StartTraversalMessage::_senderDestinationRank
private

Definition at line 123 of file StartTraversalMessage.h.

Referenced by both receive() overloads.

◆ _stepIdentifier

int peano4::parallel::StartTraversalMessage::_stepIdentifier
private

Definition at line 118 of file StartTraversalMessage.h.

◆ Datatype

MPI_Datatype peano4::parallel::StartTraversalMessage::Datatype
staticprivate

Whenever we use LLVM's MPI extension (DaStGen), we rely on lazy initialisation of the datatype.

However, Peano calls init explicitly in most cases. Without the LLVM extension, which caches the MPI datatype once constructed, this field stores the type.

Definition at line 132 of file StartTraversalMessage.h.


The documentation for this struct was generated from the following files:

StartTraversalMessage.h
StartTraversalMessage.cpp