Peano
peano4::grid::TraversalObserver Class Reference [abstract]

#include <TraversalObserver.h>

Inheritance diagram for peano4::grid::TraversalObserver:

Public Types

enum class  SendReceiveContext {
  BoundaryExchange , MultiscaleExchange , ForkDomain , JoinDomain ,
  PeriodicBoundaryDataSwap
}
 There are three different scenarios when we merge data: More...
 

Public Member Functions

virtual ~TraversalObserver ()
 
virtual void loadCell (const GridTraversalEvent &event)=0
 
virtual void enterCell (const GridTraversalEvent &event)=0
 Event is invoked per cell.
 
virtual void leaveCell (const GridTraversalEvent &event)=0
 
virtual void storeCell (const GridTraversalEvent &event)=0
 
virtual TraversalObserver * clone (int spacetreeId)=0
 
virtual std::vector< GridControlEvent > getGridControlEvents () const =0
 
virtual void beginTraversal (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h)=0
 Begin the traversal.
 
virtual void endTraversal (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h)=0
 
virtual void exchangeAllVerticalDataExchangeStacks (int)
 Send local data from top level of local mesh to master and receive its top-down information in return.
 
virtual void exchangeAllHorizontalDataExchangeStacks (bool)
 Exchange all the data along the domain boundaries.
 
virtual void exchangeAllPeriodicBoundaryDataStacks ()
 Exchange all periodic boundary data.
 
virtual void streamDataFromSplittingTreeToNewTree (int)
 Stream data from current tree on which this routine is called to the new worker.
 
virtual void streamDataFromJoiningTreeToMasterTree (int)
 
virtual void finishAllOutstandingSendsAndReceives ()
 Wrap up all sends and receives, i.e.
 
virtual void sendVertex (int, int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void sendFace (int, int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void sendCell (int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void receiveAndMergeVertex (int, int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void receiveAndMergeFace (int, int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void receiveAndMergeCell (int, SendReceiveContext, const GridTraversalEvent &)
 
virtual void deleteAllStacks ()
 

Static Public Attributes

static constexpr int NoRebalancing = -1
 
static constexpr int NoData = -1
 Can this grid entity hold data?
 
static constexpr int CreateOrDestroyPersistentGridEntity = -2
 Implies that the data will then be local or had been local.
 
static constexpr int CreateOrDestroyHangingGridEntity = -3
 Implies that the data will then be local or had been local.
 

Detailed Description

Copy behaviour

There is one observer per grid traversal thread and per rank. The observers are generated from the original observer via the clone() operator, so you should never need to write a copy constructor. If you run without multithreading but with MPI, you still have to use a SpacetreeSet; the code therefore continues to copy observers. A minimal skeleton of such an observer is sketched below.
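
The following is a hedged sketch, not code from the Peano sources: MyObserver is a hypothetical class that merely satisfies the pure virtual interface listed above and implements clone() by creating one instance per spacetree.

  #include <vector>

  #include <TraversalObserver.h>

  // Hypothetical user observer: one instance per spacetree, created via clone(),
  // so no copy constructor is required.
  class MyObserver: public peano4::grid::TraversalObserver {
    private:
      const int _spacetreeId;
    public:
      MyObserver(int spacetreeId = -1): _spacetreeId(spacetreeId) {}

      void loadCell (const peano4::grid::GridTraversalEvent& event) override {}
      void enterCell(const peano4::grid::GridTraversalEvent& event) override {}
      void leaveCell(const peano4::grid::GridTraversalEvent& event) override {}
      void storeCell(const peano4::grid::GridTraversalEvent& event) override {}

      void beginTraversal(const tarch::la::Vector<Dimensions, double>& x,
                          const tarch::la::Vector<Dimensions, double>& h) override {}
      void endTraversal  (const tarch::la::Vector<Dimensions, double>& x,
                          const tarch::la::Vector<Dimensions, double>& h) override {}

      std::vector<peano4::grid::GridControlEvent> getGridControlEvents() const override {
        return {};
      }

      // One new observer per traversal thread/spacetree; freed via the destructor.
      peano4::grid::TraversalObserver* clone(int spacetreeId) override {
        return new MyObserver(spacetreeId);
      }
  };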

Definition at line 36 of file TraversalObserver.h.

Member Enumeration Documentation

◆ SendReceiveContext

There are three different scenarios when we merge data:

  • We exchange data because we are at a proper parallel boundary. As we work with a non-overlapping domain decomposition and handle the individual levels separately, this can only happen for vertices and faces; it never happens for cells.
  • We exchange data because we are at a periodic boundary. Again, this happens only for vertices and faces.
  • We exchange data between different levels of the tree, where the parent cell is hosted on another tree than the child. In this case, we also exchange cell data.

There are two more modes: join and fork. It is important to note that these differ from the aforementioned modes, as forks and joins are realised as copies: we do not merge existing data structures, we copy them. The fork and join contexts therefore arise when we discuss data exchange, but never when we speak about data merges. If you have a case distinction within a merge, these two modes should be left out (or guarded by an assertion, as sketched below); they should never be entered.
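
As a hedged illustration of that rule (not code from the Peano sources), a merge helper might branch over the context like this; the function name and the omitted data arguments are assumptions:

  #include <cassert>

  #include <TraversalObserver.h>

  // Hypothetical merge helper: forks and joins are realised as copies elsewhere,
  // so those two contexts must never reach a merge and are guarded by an assertion.
  void mergeVertexData(peano4::grid::TraversalObserver::SendReceiveContext context /* , vertex data ... */) {
    using SendReceiveContext = peano4::grid::TraversalObserver::SendReceiveContext;
    switch (context) {
      case SendReceiveContext::BoundaryExchange:
      case SendReceiveContext::PeriodicBoundaryDataSwap:
        // merge with the copy from the (periodic) boundary neighbour
        break;
      case SendReceiveContext::MultiscaleExchange:
        // merge data exchanged between parent and child level hosted on different trees
        break;
      case SendReceiveContext::ForkDomain:
      case SendReceiveContext::JoinDomain:
        assert(false && "fork/join data is copied, never merged");
        break;
    }
  }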

Enumerator
BoundaryExchange 
MultiscaleExchange 
ForkDomain 
JoinDomain 
PeriodicBoundaryDataSwap 

Definition at line 231 of file TraversalObserver.h.

Constructor & Destructor Documentation

◆ ~TraversalObserver()

virtual peano4::grid::TraversalObserver::~TraversalObserver ( )
virtual

Definition at line 38 of file TraversalObserver.h.

Member Function Documentation

◆ beginTraversal()

virtual void peano4::grid::TraversalObserver::beginTraversal ( const tarch::la::Vector< Dimensions, double > & x,
const tarch::la::Vector< Dimensions, double > & h )
pure virtual

Begin the traversal.

This routine is called per spacetree instance, i.e. per subtree (thread) per rank. Within the usual implementation, everything will reside on the call stack anyway. If the routine is called on tree no 0, this operation has to establish the master data of the global root tree, i.e. ensure that the data of level -1 is technically there for the subsequent enterCell event, though this data is ill-defined.

Parameters
x    Root cell coordinates
h    Root cell size
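
A hedged sketch of a user override, assuming a hypothetical observer class MyObserver with a _spacetreeId member set in clone(); combined with the global-master test documented under clone(), it yields an action that runs exactly once globally per sweep.

  // Sketch only: MyObserver and _spacetreeId are assumptions. The guarded block
  // runs once globally per grid sweep; the rest runs once per subtree per rank.
  void MyObserver::beginTraversal(
    const tarch::la::Vector<Dimensions, double>& x,
    const tarch::la::Vector<Dimensions, double>& h
  ) {
    if (peano4::parallel::Node::isGlobalMaster(_spacetreeId)) {
      // e.g. reset global counters or write a time-step header once per sweep
    }
    // per-subtree preparation of the traversal goes here
  }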

Implemented in examples::grid::MyObserver, examples::regulargridupscaling::MyObserver, peano4::grid::EmptyTraversalObserver, and peano4::grid::TraversalVTKPlotter.

Referenced by peano4::grid::Spacetree::traverse().


◆ clone()

virtual TraversalObserver * peano4::grid::TraversalObserver::clone ( int spacetreeId)
pure virtual
I use the clone to create one observer object per traversal thread, so between the different spacetrees of one spacetree set there can be no race condition. Yet, clone() itself could be called in parallel.

Global per-sweep actions

If you want to implement an operation once per sweep in a parallel environment, you can exploit the fact that the spacetree set also creates an observer for the global master thread, i.e. tree no 0. So if you add a statement like

  if (peano4::parallel::Node::isGlobalMaster(spacetreeId)) { ... }

then you can be sure that the branch body is executed only once globally per grid sweep.

The counterpart of the clone operation is the destructor.

Implemented in examples::grid::MyObserver, examples::regulargridupscaling::MyObserver, peano4::grid::EmptyTraversalObserver, and peano4::grid::TraversalVTKPlotter.

Referenced by peano4::parallel::SpacetreeSet::createObserverCloneIfRequired().


◆ deleteAllStacks()

virtual void peano4::grid::TraversalObserver::deleteAllStacks ( )
virtual

Definition at line 247 of file TraversalObserver.h.

◆ endTraversal()

virtual void peano4::grid::TraversalObserver::endTraversal ( const tarch::la::Vector< Dimensions, double > & x,
const tarch::la::Vector< Dimensions, double > & h )
pure virtual

◆ enterCell()

virtual void peano4::grid::TraversalObserver::enterCell ( const GridTraversalEvent & event)
pure virtual

Event is invoked per cell.

It is however not called for the root cell, i.e. for the cell with level 0 that does not have a parent.

Implemented in peano4::grid::EmptyTraversalObserver, peano4::grid::TraversalVTKPlotter, examples::grid::MyObserver, and examples::regulargridupscaling::MyObserver.

Referenced by peano4::grid::Spacetree::descend().


◆ exchangeAllHorizontalDataExchangeStacks()

virtual void peano4::grid::TraversalObserver::exchangeAllHorizontalDataExchangeStacks ( bool )
virtual

Exchange all the data along the domain boundaries.

If the bool is set, we send out exactly as many elements per face or vertex as we expect to receive. The boundary exchange can therefore optimise the data exchange.

The SpacetreeSet class provides some generic routines for this that you can use. Simply invoke them for every data container that you use. If you trigger non-blocking MPI, you don't have to wait until they are finished. You can rely on the calling routine to invoke finishAllOutstandingSendsAndReceives() later on.

Definition at line 178 of file TraversalObserver.h.

◆ exchangeAllPeriodicBoundaryDataStacks()

virtual void peano4::grid::TraversalObserver::exchangeAllPeriodicBoundaryDataStacks ( )
virtual

Exchange all periodic boundary data.

Periodic boundary values are always handled by tree 0, i.e. there is no need to distinguish ranks here. On all trees other than tree 0, this operation should return immediately.
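
A minimal sketch of the early-return pattern, assuming a hypothetical observer MyObserver whose _spacetreeId member was set in clone():

  // Sketch only: tree 0 handles all periodic boundary data; every other tree
  // returns straight away.
  void MyObserver::exchangeAllPeriodicBoundaryDataStacks() {
    if (_spacetreeId != 0) {
      return;
    }
    // exchange the periodic boundary stacks of each data container held by tree 0
  }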

Definition at line 185 of file TraversalObserver.h.

◆ exchangeAllVerticalDataExchangeStacks()

virtual void peano4::grid::TraversalObserver::exchangeAllVerticalDataExchangeStacks ( int )
virtual

Send local data from top level of local mesh to master and receive its top-down information in return.

The SpacetreeSet class provides some generic routines for this that you can use. Simply invoke them for every data container that you use. If you trigger non-blocking MPI, you don't have to wait until they are finished. You can rely on the calling routine to invoke finishAllOutstandingSendsAndReceives() later on.

Definition at line 164 of file TraversalObserver.h.

◆ finishAllOutstandingSendsAndReceives()

virtual void peano4::grid::TraversalObserver::finishAllOutstandingSendsAndReceives ( )
virtual

Wrap up all sends and receives, i.e.

invoke wait() on the MPI requests. The SpacetreeSet provides a generic routine for this that you can call per data container in use.

Definition at line 207 of file TraversalObserver.h.

◆ getGridControlEvents()

virtual std::vector< GridControlEvent > peano4::grid::TraversalObserver::getGridControlEvents ( ) const
pure virtual
The tree traversal invokes this operation before beginTraversal().

Dynamic AMR is controlled via a sequence of grid control events. Each event spans a certain region and prescribes an h resolution over this region. Depending on the type of the event (erase or refine), the grid adapts. A simple snippet that just creates a refined area in a square is

  std::vector< peano4::grid::GridControlEvent > applications4::grid::MyObserver::getGridControlEvents() {
    std::vector< peano4::grid::GridControlEvent > controlEvents;
    peano4::grid::GridControlEvent newEvent;
    newEvent.setRefinementControl( peano4::grid::GridControlEvent::RefinementControl::Refine );
    newEvent.setOffset( {0.0, 0.0} );
    newEvent.setWidth( {0.5, 0.5} );
    newEvent.setH( {0.05, 0.05} );
    controlEvents.push_back(newEvent);
    return controlEvents;
  }

The entries are logically ordered. The later the entry, the more important it is. So entry 2 overrules entry 1.
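
To illustrate the ordering, a hypothetical variation of the snippet above returns a coarse refinement over the whole unit square plus a finer patch; in the overlap, the later (finer) entry wins:

  std::vector< peano4::grid::GridControlEvent > applications4::grid::MyObserver::getGridControlEvents() {
    std::vector< peano4::grid::GridControlEvent > controlEvents;

    // entry 1: coarse refinement over the whole unit square
    peano4::grid::GridControlEvent coarseEvent;
    coarseEvent.setRefinementControl( peano4::grid::GridControlEvent::RefinementControl::Refine );
    coarseEvent.setOffset( {0.0, 0.0} );
    coarseEvent.setWidth( {1.0, 1.0} );
    coarseEvent.setH( {0.1, 0.1} );
    controlEvents.push_back(coarseEvent);

    // entry 2: finer patch near the origin; overrules entry 1 where the regions overlap
    peano4::grid::GridControlEvent fineEvent;
    fineEvent.setRefinementControl( peano4::grid::GridControlEvent::RefinementControl::Refine );
    fineEvent.setOffset( {0.0, 0.0} );
    fineEvent.setWidth( {0.3, 0.3} );
    fineEvent.setH( {0.01, 0.01} );
    controlEvents.push_back(fineEvent);

    return controlEvents;
  }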

Implemented in peano4::grid::EmptyTraversalObserver, and peano4::grid::TraversalVTKPlotter.

Referenced by peano4::grid::Spacetree::traverse().


◆ leaveCell()

virtual void peano4::grid::TraversalObserver::leaveCell ( const GridTraversalEvent & event)
pure virtual

◆ loadCell()

virtual void peano4::grid::TraversalObserver::loadCell ( const GridTraversalEvent & event)
pure virtual

Implemented in peano4::grid::EmptyTraversalObserver, and peano4::grid::TraversalVTKPlotter.

Referenced by peano4::grid::Spacetree::descend().


◆ receiveAndMergeCell()

virtual void peano4::grid::TraversalObserver::receiveAndMergeCell ( int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 245 of file TraversalObserver.h.

◆ receiveAndMergeFace()

virtual void peano4::grid::TraversalObserver::receiveAndMergeFace ( int ,
int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 244 of file TraversalObserver.h.

Referenced by peano4::grid::Spacetree::receiveAndMergeUserData().


◆ receiveAndMergeVertex()

virtual void peano4::grid::TraversalObserver::receiveAndMergeVertex ( int ,
int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 243 of file TraversalObserver.h.

Referenced by peano4::grid::Spacetree::receiveAndMergeUserData().


◆ sendCell()

virtual void peano4::grid::TraversalObserver::sendCell ( int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 241 of file TraversalObserver.h.

◆ sendFace()

virtual void peano4::grid::TraversalObserver::sendFace ( int ,
int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 240 of file TraversalObserver.h.

◆ sendVertex()

virtual void peano4::grid::TraversalObserver::sendVertex ( int ,
int ,
SendReceiveContext ,
const GridTraversalEvent &  )
virtual

Definition at line 239 of file TraversalObserver.h.

◆ storeCell()

virtual void peano4::grid::TraversalObserver::storeCell ( const GridTraversalEvent & event)
pure virtual

Implemented in peano4::grid::EmptyTraversalObserver, and peano4::grid::TraversalVTKPlotter.

Referenced by peano4::grid::Spacetree::descend().


◆ streamDataFromJoiningTreeToMasterTree()

virtual void peano4::grid::TraversalObserver::streamDataFromJoiningTreeToMasterTree ( int )
virtual

Definition at line 200 of file TraversalObserver.h.

◆ streamDataFromSplittingTreeToNewTree()

virtual void peano4::grid::TraversalObserver::streamDataFromSplittingTreeToNewTree ( int )
virtual

Stream data from current tree on which this routine is called to the new worker.

Todo
Not clear how this works on the worker side.

The SpacetreeSet class provides some generic routines for this that you can use. Simply invoke them for every data container that you use. If you trigger non-blocking MPI, you don't have to wait until they are finished. You can rely on the calling routine to invoke finishAllOutstandingSendsAndReceives() later on.

Definition at line 199 of file TraversalObserver.h.

Field Documentation

◆ CreateOrDestroyHangingGridEntity

constexpr int peano4::grid::TraversalObserver::CreateOrDestroyHangingGridEntity = -3
static constexpr

◆ CreateOrDestroyPersistentGridEntity

constexpr int peano4::grid::TraversalObserver::CreateOrDestroyPersistentGridEntity = -2
static constexpr

◆ NoData

constexpr int peano4::grid::TraversalObserver::NoData = -1
static constexpr

Can this grid entity hold data?

I use this one to indicate that no data is associated with a grid entity because the grid entity is outside of the local computational domain. The term refers explicitly to the domain decomposition, i.e. this value flags grid entities which exist in the tree (we always have to work with full trees) but which cannot really hold any data, as the user never sees them.
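
As an illustration only, a user observer would typically compare the data index carried by the traversal event against NoData before touching any user data; the accessor getCellData() and the class MyObserver are assumptions:

  // Sketch only: getCellData() is assumed to return the cell's data index from
  // the GridTraversalEvent.
  void MyObserver::enterCell(const peano4::grid::GridTraversalEvent& event) {
    if (event.getCellData() == peano4::grid::TraversalObserver::NoData) {
      // the cell only keeps the tree "full"; it lies outside the local domain
      // and never carries user data
      return;
    }
    // ... operate on the local cell's user data ...
  }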

Definition at line 52 of file TraversalObserver.h.

Referenced by peano4::grid::GridTraversalEventGenerator::createEnterCellTraversalEvent(), peano4::grid::GridTraversalEventGenerator::createLeaveCellTraversalEvent(), and peano4::grid::Spacetree::getNeighbourTrees().

◆ NoRebalancing

constexpr int peano4::grid::TraversalObserver::NoRebalancing = -1
static constexpr

Definition at line 40 of file TraversalObserver.h.


The documentation for this class was generated from the following files: