Peano
peano4 Namespace Reference

Namespaces

namespace  dastgen2
 
namespace  datamanagement
 
namespace  datamodel
 
namespace  grid
 The grid namespace is Peano's core.
 
namespace  maps
 The maps namespace hosts various map implementations.
 
namespace  output
 
namespace  parallel
 
namespace  Project
 
namespace  runner
 
namespace  solversteps
 
namespace  stacks
 
namespace  toolbox
 
namespace  utils
 
namespace  visualisation
 

Data Structures

struct  SplitInstruction
 Instruction to split.
 

Typedefs

typedef std::map< int, SplitInstruction > SplitSpecification
 

Functions

void writeCopyrightMessage ()
 You can invoke this operation manually, but it will also implicitly be triggered by the init routines.
 
void fillLookupTables ()
 Fill Lookup Tables.
 
int initParallelEnvironment (int *argc, char ***argv)
 Init Parallel Environment.
 
void shutdownParallelEnvironment ()
 Shut down the whole parallel environment, i.e. free all MPI datatypes and close down MPI.
 
void initSingletons (const tarch::la::Vector< Dimensions, double > &offset, const tarch::la::Vector< Dimensions, double > &width, const std::bitset< Dimensions > &periodicBC=0)
 Fire up all the singletons.
 
void shutdownSingletons ()
 The very first thing I have to do is to shut down Node.
 
tarch::tests::TestCase * getUnitTests ()
 Please destroy after usage.
 

Typedef Documentation

◆ SplitSpecification

Definition at line 44 of file grid.h.

Function Documentation

◆ fillLookupTables()

void peano4::fillLookupTables ( )

Fill Lookup Tables.

Fill all the lookup tables used within the application. As lookup tables are used by many operations, I suggest calling this operation as soon as possible.

No error shall occur in this operation, so it does not return an error code.
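
A minimal usage sketch inside main(int argc, char** argv); the ordering shown here follows the suggestion above and is an assumption, not mandated by the API.

peano4::fillLookupTables();                     // populate the lookup tables first
peano4::initParallelEnvironment(&argc,&argv);   // later init steps may already rely on the tables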

Definition at line 87 of file peano.cpp.

References peano4::utils::setupLookupTableForDDelinearised(), and peano4::utils::setupLookupTableForDLinearised().

Referenced by main().


◆ getUnitTests()

tarch::tests::TestCase * peano4::getUnitTests ( )

Please destroy after usage.
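
A usage sketch, assuming the returned tarch::tests::TestCase exposes run() and getNumberOfErrors() as in the tarch test framework; ownership passes to the caller, hence the delete.

tarch::tests::TestCase* tests = peano4::getUnitTests();
tests->run();                                    // execute the whole test collection
const int errors = tests->getNumberOfErrors();   // assumption: error count accessor
delete tests;                                    // caller destroys the object after usage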

Definition at line 12 of file UnitTests.cpp.

References tarch::tests::TreeTestCaseCollection::addTestCase().

Referenced by main(), and runTests().


◆ initParallelEnvironment()

int peano4::initParallelEnvironment ( int * argc,
char *** argv )

Init Parallel Environment.

Inits the parallel environment. If the parallel mode is not set, the operation degenerates to a nop. The function returns 0 if everything is o.k.; it returns -2 otherwise. Please call this operation before you call any other operation that could result in an error. I suggest calling it right after fillLookupTables().

Please note that Peano 4 considers both shared memory and distributed memory to be a parallel environment.

initParallelEnvironment() might change the variables passed. If you want to parse the command line arguments, use the values of argc and argv as they are after the call. If you use the arguments directly without calling initParallelEnvironment(), they might contain MPI-specific values that are of no relevance to the program.

Usage/implementation details

You may not use the trace macros before this operation has been invoked. Otherwise, the getRank() assertion fails, as the node has not been configured correctly.

Pass argc and argv with the address operator:

peano4::initParallelEnvironment(&argc,&argv);
   

This has to be done as one of the very first things, i.e. before you init the logging, or run tests, or ...
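
A hedged sketch of the call at the very top of main(), checking the documented return codes (0 on success, -2 otherwise); the include path and the error message are assumptions.

#include <iostream>
#include "peano4/peano.h"   // assumption: declares peano4::initParallelEnvironment()

int main(int argc, char** argv) {
  if (peano4::initParallelEnvironment(&argc,&argv) != 0) {
    std::cerr << "unable to initialise the parallel environment" << std::endl;
    return -2;
  }
  // initialise logging, run tests, ... only after this point
}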

Definition at line 101 of file peano.cpp.

References tarch::mpi::Rank::getInstance(), tarch::multicore::Core::getInstance(), peano4::parallel::Node::initMPIDatatypes(), tarch::mpi::Rank::setDeadlockTimeOut(), tarch::mpi::Rank::setTimeOutWarning(), and writeCopyrightMessage().

Referenced by main().


◆ initSingletons()

void peano4::initSingletons ( const tarch::la::Vector< Dimensions, double > & offset,
const tarch::la::Vector< Dimensions, double > & width,
const std::bitset< Dimensions > & periodicBC = 0 )

Fire up all the singletons.

Singletons that I don't touch are:

Note that there is a bug when passing in initializer lists to this function. If the number of elements in "offset" is greater than Dimensions, then "width" will borrow some elements that were intended for "offset". Best practice is to initialize vectors that are to be passed in before the call to initSingletons. That way the compiler will throw an error if the sizes are not correct.
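
A sketch of the best practice above, with the vectors constructed before the call so the compiler can check their sizes; a 2d build (Dimensions == 2), the unit-square values, and initializer-list construction of tarch::la::Vector are assumptions.

tarch::la::Vector<Dimensions, double> offset = {0.0, 0.0};  // domain offset, one entry per dimension
tarch::la::Vector<Dimensions, double> width  = {1.0, 1.0};  // domain width, one entry per dimension
std::bitset<Dimensions>               periodicBC = 0;       // no periodic boundary conditions
peano4::initSingletons(offset, width, periodicBC);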

Definition at line 133 of file peano.cpp.

References peano4::parallel::Node::getInstance(), peano4::parallel::SpacetreeSet::getInstance(), tarch::mpi::BooleanSemaphore::BooleanSemaphoreService::getInstance(), tarch::services::ServiceRepository::getInstance(), peano4::parallel::Node::init(), tarch::mpi::BooleanSemaphore::BooleanSemaphoreService::init(), tarch::services::ServiceRepository::init(), peano4::parallel::SpacetreeSet::init(), periodicBC, and writeCopyrightMessage().

Referenced by main(), runParallel(), and runSerial().


◆ shutdownParallelEnvironment()

void peano4::shutdownParallelEnvironment ( )

Shut down the whole parallel environment, i.e. free all MPI datatypes and close down MPI.

This also turns off the shared memory environment. Before this happens, you have to shut down the node so that everybody knows that we are going down, i.e. you have to call Node::shutdown() before you trigger this operation. This is your responsibility.

The routine first adds a barrier. This barrier is necessary: if the very last activity of all ranks is, for example, to plot stuff, they typically use global semaphores as well. To make these semaphores work, we still require that all nodes call receiveDanglingMessages(). Only after everyone has done their dump can we shut down the shared memory system. This is the reason the barrier has to come after the node's shutdown, and why it has to be a Peano 4 barrier which still invokes receiveDanglingMessages() on all services.

Once all shared memory tasks have terminated, we free the MPI datatypes.

Eventually, we shut down the MPI rank.

Once this routine has terminated, do not add any barrier() anymore!
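
A sketch of the shutdown order described above: shutdownSingletons() takes down the Node (and the remaining singletons) first, then the parallel environment is torn down, and nothing that needs a barrier follows.

peano4::shutdownSingletons();            // shuts down Node first, so all ranks know we are going down
peano4::shutdownParallelEnvironment();   // frees MPI datatypes, closes MPI and the shared memory environment
return 0;                                // no further barrier() calls after this point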

See also
peano4::parallel::Node::shutdown()

Definition at line 127 of file peano.cpp.

References tarch::mpi::Rank::getInstance(), tarch::multicore::Core::getInstance(), tarch::mpi::Rank::shutdown(), tarch::multicore::Core::shutdown(), and peano4::parallel::Node::shutdownMPIDatatypes().

Referenced by main().


◆ shutdownSingletons()

void peano4::shutdownSingletons ( )

The very first thing I have to do is to shut down Node.

This shutdown will tell all the other ranks to go down as well.

Definition at line 150 of file peano.cpp.

References peano4::parallel::Node::getInstance(), peano4::parallel::SpacetreeSet::getInstance(), tarch::mpi::BooleanSemaphore::BooleanSemaphoreService::getInstance(), tarch::services::ServiceRepository::getInstance(), peano4::parallel::Node::shutdown(), peano4::parallel::SpacetreeSet::shutdown(), tarch::mpi::BooleanSemaphore::BooleanSemaphoreService::shutdown(), and tarch::services::ServiceRepository::shutdown().

Referenced by main(), runParallel(), and runSerial().


◆ writeCopyrightMessage()

void peano4::writeCopyrightMessage ( )

You can invoke this operation manually, but it will also implicitly be triggered by the init routines.

The output includes the machine name. If determining the machine name does not work on your system, switch it off in the file CompilerSpecificSettings.h.

Definition at line 26 of file peano.cpp.

References tarch::mpi::Rank::getGlobalMasterRank(), tarch::accelerator::Device::getInstance(), tarch::mpi::Rank::getInstance(), and tarch::multicore::Core::getInstance().

Referenced by initParallelEnvironment(), and initSingletons().
