Observer which pipes the automaton transitions into a VTK file.
#include <TraversalVTKPlotter.h>
Public Member Functions

TraversalVTKPlotter (const std::string &filename, int treeId=-1)
 You have to invoke startNewSnapshot() if you want to have a pvd file immediately after you have created this observer in the main code.
virtual ~TraversalVTKPlotter ()
virtual void beginTraversal (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h) override
 Begin the traversal.
virtual void endTraversal (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h) override
virtual void loadCell (const GridTraversalEvent &event) override
virtual void storeCell (const GridTraversalEvent &event) override
virtual void enterCell (const GridTraversalEvent &event) override
 Event is invoked per cell.
virtual void leaveCell (const GridTraversalEvent &event) override
virtual TraversalObserver * clone (int spacetreeId) override
virtual std::vector< GridControlEvent > getGridControlEvents () const override
 Obviously empty for this particular observer.
Public Member Functions inherited from peano4::grid::TraversalObserver
virtual ~TraversalObserver ()
virtual void exchangeAllVerticalDataExchangeStacks (int)
 Send local data from the top level of the local mesh to the master and receive its top-down information in return.
virtual void exchangeAllHorizontalDataExchangeStacks (bool)
 Exchange all the data along the domain boundaries.
virtual void exchangeAllPeriodicBoundaryDataStacks ()
 Exchange all periodic boundary data.
virtual void streamDataFromSplittingTreeToNewTree (int)
 Stream data from the current tree on which this routine is called to the new worker.
virtual void streamDataFromJoiningTreeToMasterTree (int)
virtual void finishAllOutstandingSendsAndReceives ()
 Wrap up all sends and receives.
virtual void sendVertex (int, int, SendReceiveContext, const GridTraversalEvent &)
virtual void sendFace (int, int, SendReceiveContext, const GridTraversalEvent &)
virtual void sendCell (int, SendReceiveContext, const GridTraversalEvent &)
virtual void receiveAndMergeVertex (int, int, SendReceiveContext, const GridTraversalEvent &)
virtual void receiveAndMergeFace (int, int, SendReceiveContext, const GridTraversalEvent &)
virtual void receiveAndMergeCell (int, SendReceiveContext, const GridTraversalEvent &)
virtual void deleteAllStacks ()
Protected Member Functions

void plotCell (const GridTraversalEvent &event)
 Does the actual plotting; all checks and decision-making happen beforehand.
Protected Attributes

const std::string _filename
const int _spacetreeId

Static Protected Attributes

static tarch::logging::Log _log

Static Private Attributes

static tarch::mpi::BooleanSemaphore _sempahore
Additional Inherited Members

Public Types inherited from peano4::grid::TraversalObserver

enum class SendReceiveContext { BoundaryExchange, MultiscaleExchange, ForkDomain, JoinDomain, PeriodicBoundaryDataSwap }
 There are three different scenarios when we merge data.

Static Public Attributes inherited from peano4::grid::TraversalObserver

static constexpr int NoRebalancing = -1
static constexpr int NoData = -1
 Can this grid entity hold data?
static constexpr int CreateOrDestroyPersistentGridEntity = -2
 Implies that the data will then be local or had been local.
static constexpr int CreateOrDestroyHangingGridEntity = -3
 Implies that the data will then be local or had been local.
Observer which pipes the automaton transitions into a VTK file.
While we use the up-to-date VTK format, the observer plots the whole thing as a discontinuous unstructured mesh. It is not particularly sophisticated.

The plotter can write whole time series. For this, you have to invoke startNewSnapshot() prior to each plot. It is the latter which also ensures that parallel plots in an MPI environment work.

Each tree dumps its own vtk file. That is, each thread and each rank may, in theory, write its file in parallel to all the others. VTK/VTU allows us to define a meta file (pvtu) which collates the various dumps. As we create one observer per thread through clone(), every thread on every rank has its own instance and pipes its data into its own file. getFilename() ensures that no file is overwritten: it combines the tree number with a counter. This counter, _counter, is static and is incremented through endTraversalOnRank(), which I expect the user to call once after each traversal.

As the MPI domain decomposition creates temporary observers for the master of a local rank when that rank is created, we will have multiple entries for forking ranks in the meta file.
Definition at line 48 of file TraversalVTKPlotter.h.
peano4::grid::TraversalVTKPlotter::TraversalVTKPlotter (const std::string &filename, int treeId = -1)

You have to invoke startNewSnapshot() if you want to have a pvd file immediately after you have created this observer in the main code.

If this observer is run on the global master,
Definition at line 22 of file TraversalVTKPlotter.cpp.
~TraversalVTKPlotter() (virtual)
Definition at line 32 of file TraversalVTKPlotter.cpp.
beginTraversal() (override, virtual)
Begin the traversal.
This routine is called per spacetree instance, i.e. per subtree (thread) per rank. In the usual implementation, everything resides on the call stack anyway. If the routine is called on tree no. 0, this operation has to establish the master data of the global root tree, i.e. ensure that the data of level -1 is technically there for the subsequent enterCell event, even though this data is ill-defined.

Parameters:
 x: Root cell coordinates
 h: Root cell size
Implements peano4::grid::TraversalObserver.
Definition at line 34 of file TraversalVTKPlotter.cpp.
References tarch::mpi::Rank::getInstance(), tarch::mpi::Rank::getRank(), and tarch::plotter::PVDTimeSeriesWriter::NoIndexFile.
clone() (override, virtual)
I use the clone to create one observer object per traversal thread, so between different spacetrees of one spacetree set there can be no race condition. Yet, clone() itself could be called in parallel.

Global per-sweep actions

If you want to implement an operation once per sweep in a parallel environment, you can exploit the fact that the spacetree set also creates an observer for the global master thread, i.e. tree no. 0. So if you add a statement like

    if (peano4::parallel::Node::isGlobalMaster(spacetreeId)) { ... }

then you can be sure that the branch body is executed only once globally per grid sweep.
The counterpart of the clone operation is the destructor.
Implements peano4::grid::TraversalObserver.
Definition at line 142 of file TraversalVTKPlotter.cpp.
endTraversal() (override, virtual)
Implements peano4::grid::TraversalObserver.
Definition at line 63 of file TraversalVTKPlotter.cpp.
References assertion.
enterCell() (override, virtual)
Event is invoked per cell.
It is however not called for the root cell, i.e. for the cell with level 0 that does not have a parent.
Implements peano4::grid::TraversalObserver.
Definition at line 100 of file TraversalVTKPlotter.cpp.
getGridControlEvents() (override, virtual)
Obviously empty for this particular observer.
Implements peano4::grid::TraversalObserver.
Definition at line 149 of file TraversalVTKPlotter.cpp.
leaveCell() (override, virtual)
Implements peano4::grid::TraversalObserver.
Definition at line 138 of file TraversalVTKPlotter.cpp.
loadCell() (override, virtual)
Implements peano4::grid::TraversalObserver.
Definition at line 92 of file TraversalVTKPlotter.cpp.
plotCell() (protected)
Does the actual plotting, i.e. all checks and decision-making have already been done before this routine is invoked.
Definition at line 109 of file TraversalVTKPlotter.cpp.
References assertion, dfor2, enddforx, peano4::grid::GridTraversalEvent::getH(), tarch::multicore::Core::getInstance(), peano4::grid::GridTraversalEvent::getX(), logError, tarch::la::multiplyComponents(), and TwoPowerD.
storeCell() (override, virtual)
Implements peano4::grid::TraversalObserver.
Definition at line 96 of file TraversalVTKPlotter.cpp.
private |
Definition at line 65 of file TraversalVTKPlotter.h.
private |
Definition at line 67 of file TraversalVTKPlotter.h.
_filename (protected)
Definition at line 52 of file TraversalVTKPlotter.h.
Referenced by exahype2.tracer.InsertParticlesFromFile.InsertParticlesFromFile::get_constructor_body(), and swift2.input.InsertParticlesFromHDF5File.InsertParticlesFromHDF5File::get_constructor_body().
_log (static, protected)
Definition at line 50 of file TraversalVTKPlotter.h.
_sempahore (static, private)
Definition at line 69 of file TraversalVTKPlotter.h.
_spacetreeId (protected)
Definition at line 53 of file TraversalVTKPlotter.h.
private |
Definition at line 66 of file TraversalVTKPlotter.h.
private |
Definition at line 64 of file TraversalVTKPlotter.h.
private |
Definition at line 63 of file TraversalVTKPlotter.h.