- Class api.actionsets.ImposeDirichletBoundaryConditions.ImposeDirichletBoundaryConditions
This version is not yet implemented, and maybe we will never need it!
We need the projection matrix for the solution!
We have to add the corresponding matrix entries, as well as the right-hand side. A generic sketch of the classic approach follows below.
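A minimal sketch of how Dirichlet rows are typically imposed on an assembled PETSc system, assuming a recent PETSc (PetscCall/PETSC_SUCCESS); the function name and arguments are illustrative, not the actionset's actual code:

```cpp
#include <petscmat.h>
#include <petscvec.h>

// Classic row elimination: zero each boundary row, put a 1 on the
// diagonal, and write the prescribed boundary value into the right-hand
// side, so the solve reproduces the boundary data exactly.
PetscErrorCode imposeDirichletRows(
  Mat               A,
  Vec               b,
  PetscInt          numBoundaryRows,
  const PetscInt    boundaryRows[],
  const PetscScalar boundaryValues[]
) {
  PetscCall(VecSetValues(b, numBoundaryRows, boundaryRows, boundaryValues, INSERT_VALUES));
  PetscCall(VecAssemblyBegin(b));
  PetscCall(VecAssemblyEnd(b));
  // Zero the boundary rows and place a unit diagonal. The x and b
  // arguments are NULL since the right-hand side is already fixed above.
  PetscCall(MatZeroRows(A, numBoundaryRows, boundaryRows, 1.0, NULL, NULL));
  return PETSC_SUCCESS;
}
```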
- Class api.actionsets.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod
- Add comments which document the discussion from Slack
- Global api.solvers.Solver.Solver.create_readme_descriptor (self)
- Has to be implemented properly, i.e. it should at least report the type and the name of the solver
- Global applications::exahype2::ccz4::maxEigenvalue (const double *const Q, int normal, const double CCZ4e, const double CCZ4ds, const double CCZ4GLMc, const double CCZ4GLMd) InlineMethod
Han: please add documentation here.
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::DGPoisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::initNode (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs, double &exactSol) override
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::~DGPoisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::Poisson::initVertex (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs) override
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::Poisson::Poisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::Poisson::~Poisson ()
- Please add your documentation here.
- Page Blockstructured
- This documentation should move closer to the source code
- Page Boundary Conditions
- Yet to be written/sketched
- Page Boundary conditions and solver coupling
- Yet to be written
- Page Building your project from scratch
Document the units here: time and spatial.
Go into the plotter details. Maybe wait for SWIFT-like HDF5 output.
Elaborate.
Document the parameters either here or in the .py file itself. If you document them there, make sure to tell users here to look there for the documentation.
Document all of this once it converges. I want to move the majority of parameters away from users and from individual project .py files. There is no point in users specifying KERNEL_GAMMAs etc.
What are the minimal required particle fields? Shall we hardcode them? If hardcoded, document which fields are always present. If not, document which fields need to be present.
- Global ccz4_archived.add_constraint_RefinementFlag_verification (self)
- legacy code
- Global ccz4_archived.add_derivative_calculation (self)
- legacy code
- Global ccz4_archived.add_PT_ConsW_Psi4W_AMRFlag (self)
- legacy code
- Page Checkpointing
- Yet to be written
- Page Continuous Galerkin for the Poisson equation
- If someone feels the need to write something here, please do so
- Page Continuous Galerkin for the Poisson equation with PETSc
- Sean: can you write something down here?
- Page Creating new particle types (solvers)
- There is a conflict of concurrent data access here, but it is not really a conflict: it is a simple critical section. See the sketch below.
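A minimal sketch of such a critical section, assuming Peano's tarch::multicore semaphore/lock pair; the shared particle container and the insert routine are hypothetical stand-ins:

```cpp
#include "tarch/multicore/BooleanSemaphore.h"
#include "tarch/multicore/Lock.h"

#include <vector>

namespace {
  tarch::multicore::BooleanSemaphore particleListSemaphore;
  std::vector<double>                particleList; // hypothetical shared container
}

void insertParticle(double particle) {
  // The lock acquires the semaphore in its constructor and releases it
  // in its destructor, so the push_back below is a plain critical section.
  tarch::multicore::Lock lock(particleListSemaphore);
  particleList.push_back(particle);
}
```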
- Page Discontinuous Galerkin for the Poisson equation with PETSc
Tobias: continue writing.
This section needs to be revisited or moved.
Change the arguments' names to the "from" notation.
- Page Discontinuous Galerkin Solver with PETSc
Different polynomial degrees are simple to realise; this is something for the future. Note that the +/- values are taken from the same space as the cell, as they are projections, but maybe we want to alter this and even provide completely different degrees for the result of the Riemann solve.
We need a nice illustration here of all the function spaces.
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_equidistant_grid_projector (self)
- Not used at the moment, but definitely useful
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_fine_grid_projector (self, j)
- Definitely useful, but not used at the moment
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_K1 (self)
- Not yet updated, but it also seems not to be used anywhere
- Global exahype2.solvers.rkdg.RungeKuttaDG.RungeKuttaDG.add_actions_to_create_grid (self, step, evaluate_refinement_criterion)
- The boundary information is set only once. It is therefore important that we use the face label and initialise it properly.
- Class exahype2.solvers.rkfd.actionsets.PreprocessSolution.PreprocessReconstructedSolutionWithHalo
- This is wrong
- Global exahype2::dg::internal::copySolution (const double *__restrict__ cellQin, const int order, const int unknowns, const int auxiliaryVariables, const tarch::la::Vector< Dimensions, int > &node, double *__restrict__ cellQout)
- Still to be converted completely to the micro-kernel notation which Dominic introduced in ADER. But only do this once we have the enumerators in place.
- Global exahype2::fv::rusanov::loopbodies::updateSolutionWithFlux (const double *__restrict__ tempFluxX, const double *__restrict__ tempFluxY, const double *__restrict__ tempFluxZ, const FluxEnumeratorType &fluxEnumerator, const ::tarch::la::Vector< Dimensions, double > &patchCentre, const ::tarch::la::Vector< Dimensions, double > &patchSize, int patchIndex, const ::tarch::la::Vector< Dimensions, int > &volumeIndex, int unknown, double dt, double *__restrict__ QOut, const QOutEnumeratorType &QOutEnumerator) InlineMethod
- Remove flux in the centre, as it is eliminated.
- Page ExaSeis with rupture coupling
- Maybe we can model this differently using the FV infrastructure.
- Page Explore data with PyVista
- This is yet to be written.
- Page Extend the visualisation (and checkpointing)
- More documentation here, please.
- Page Finite Volumes
- This should contain many more projections, i.e. how they are realised
- Page GPU support (and aggressive vectorisation)
Needs revision. Some of this material is already covered by the tutorials, where it belongs. This page should focus more on the internals.
More details on the maturity and inner workings of the code.
Discuss how to introduce your own GPU-specific kernels.
This will change completely as we switch to xDSL. Notably, we will not support multiple target GPUs in one source code anymore.
- Page Hybrid Galerkin for the Poisson equation with PETSc
Eike, Alex: maybe you can assist here.
Continue writing.
- Page Matrix-free mixed DG for the Poisson equation
Tobias, please write.
Tobias and Alex should discuss.
- Page Merging/coupling various applications
We need to figure out how to toggle between the projects, i.e. how to decide whose step to run and how to switch back and forth in arbitrary order.
Dmitry: if we find that attribute names clash, we have to rename them, too.
- Global mghype.api.matrixgenerators.DLinear.DLinear._cell_dofs (self)
- Should this be 2**D or (p+1)**D? Presumably it is not relevant: for a d-linear solver p=1, so (p+1)**D equals 2**D anyway.
- Global mghype.api.matrixgenerators.DLinearMassIdentity.DLinearMassIdentity._cell_dofs (self)
- Should this be 2**D or (p+1)**D? Presumably it is not relevant: for a d-linear solver p=1, so (p+1)**D equals 2**D anyway.
- Page Noh implosion test
We have to extend this list
This is all to be rewritten
- Page Parallelisation
A lot to write here
Scatter is missing
- Page Peano 4
Add links to the Python and C++ documentation which has already been written.
Tidy this up. This page was just added to introduce methods which need to be written for multigrid, such as touchVertexFirstTime.
Specify each of these, and how they can be given extra functionality.
Clear up the wording here!
- Class peano4.visualisation.input.Patch.Patch
- update
- Global peano4.visualisation.input.PatchFileParser.PatchFileParser.parse_file (self)
- Clean up the loop below. We can separate it into reading the metadata first and then reading the patches.
- Global peano4::grid::GridTraversalEventGenerator::createEnterCellTraversalEvent (GridVertex coarseGridVertices[TwoPowerD], GridVertex fineGridVertices[TwoPowerD], const AutomatonState &state, const SplitSpecification &splitTriggered, const std::set< int > &splitting, const std::set< int > &joinTriggered, const std::set< int > &joining, const std::set< int > &hasSplit, const tarch::la::Vector< Dimensions, int > &relativePositionToFather, bool spacetreeStateIsRunning) const
- Joining is not discussed or implemented yet.
- Global peano4::grid::Spacetree::getAdjacentRanksForNewVertex (GridVertex coarseGridVertices[TwoPowerD], const tarch::la::Vector< Dimensions, int > &vertexPositionWithin3x3Patch) const
-
- Global peano4::grid::Spacetree::sendUserData (const AutomatonState &state, TraversalObserver &observer, const GridTraversalEvent &enterCellTraversalEvent, GridVertex fineGridVertices[TwoPowerD])
- What happens with dynamic AMR?
- Global peano4::grid::TraversalObserver::streamDataFromSplittingTreeToNewTree (int newWorker)
- Not clear how this works on the worker side.
- Global peano4::parallel::SpacetreeSet::answerQuestions ()
- The routine should be called replyToUnansweredMessages()
- Page Performance optimisation
Does the code use enclave tasks and yield many tasks? In that case, you might have encountered a scheduler flaw. Change the multicore scheduler on the command line.
This has to be worked out. It usually does not happen with OpenMP, but TBB and SYCL are different stories.
- Class ProjectOntoFaces.ProjectOntoFaces
- There's documentation missing
- Page Single black hole test
The calls above are no longer valid. See the benchmark documentation further down, which works.
Remove the KO terms here
- Page SWIFT's task graph compiler
- The skip mechanism does not exist yet, I think. Not sure if we need it at all, or whether the disadvantage is still there.
- Global swift2::committedGridControlEvents
- Write some documentation
- Global swift2::kernels::adoptInteractionRadiusAndTriggerRerun (const std::list< Particle * > &localParticles, const std::list< Particle * > &activeParticles, int targetNumberOfNeighbourParticles, double maxGrowthPerSweep=2.0, double shrinkingFactor=0.8)
- Mladen: this documentation is messed up, and we have to take into account that we have to accommodate multiscale effects here. A hedged single-scale sketch follows below.
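A reconstruction of what the signature suggests, not the actual SWIFT 2 kernel; names are illustrative and the multiscale effects mentioned above are deliberately ignored:

```cpp
// Grow a particle's search radius when it sees too few neighbours,
// shrink it when it sees too many, and report whether another sweep
// (a rerun) is required.
bool adoptInteractionRadiusSketch(
  double& searchRadius,
  int     numberOfNeighbours,
  int     targetNumberOfNeighbourParticles,
  double  maxGrowthPerSweep = 2.0,
  double  shrinkingFactor   = 0.8
) {
  if (numberOfNeighbours < targetNumberOfNeighbourParticles) {
    searchRadius *= maxGrowthPerSweep;  // bounded growth per sweep
    return true;                        // radius changed, so rerun
  }
  if (numberOfNeighbours > targetNumberOfNeighbourParticles) {
    searchRadius *= shrinkingFactor;    // gentle shrinking towards the target
    return true;
  }
  return false;
}
```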
- Global swift2::kernels::flagBoundaryParticles (const ParticleContainer &localParticles, const double nparts, const tarch::la::Vector< Dimensions, double > &domainSize, const tarch::la::Vector< Dimensions, double > &domainOffset)
- This is wrong
- Global swift2::kernels::forAllParticlePairsVectorised (const peano4::datamanagement::CellMarker &marker, LocalParticleContainer &localParticles, ActiveParticleContainer &activeParticles, const std::vector< int > &numberOfLocalParticlesPerVertex, const std::vector< int > &numberOfActiveParticlesPerVertex, ParticleBinaryOperator< PCParticle< LocalParticleContainer >, PCParticle< ActiveParticleContainer > > auto f)
- Pawel: I think this is a second place where we can use a bitset in the second variant.
- Global swift2::kernels::forAllParticlePairsVectoriseWithCheckPreamble (const peano4::datamanagement::CellMarker &marker, LocalParticleContainer &localParticles, ActiveParticleContainer &activeParticles, const std::vector< int > &numberOfLocalParticlesPerVertex, const std::vector< int > &numberOfActiveParticlesPerVertex, ParticleBinaryOperator< PCParticle< LocalParticleContainer >, PCParticle< ActiveParticleContainer > > auto f)
- Dummy function at the moment; it only wraps around forAllParticlePairsVectorised()
- Global swift2::kernels::legacy::densityKernelWithMasking (const peano4::datamanagement::CellMarker &marker, Particle &localParticle, const Particle &activeParticle)
- No masking yet
- Global swift2::kernels::legacy::endHydroForceCalculationWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::forceKernelWithMasking (const peano4::datamanagement::CellMarker &marker, Particle &localParticle, const Particle &activeParticle)
This assumption is currently not correct (given how the iterator is programmed). We either have to alter the iterator (and can then remove this additional check), or we have to update the documentation.
This routine does not support boundary particles, but we wanted to remove this flag anyway.
- Global swift2::kernels::legacy::hydroPredictExtraWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::leapfrogDriftWithGlobalTimeStepSizeWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking in here yet
- Global swift2::kernels::legacy::leapfrogKickWithGlobalTimeStepSizeWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::prepareDensityWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::prepareHydroForceWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::resetAccelerationWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::resetPredictedValuesWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet
- Global swift2::kernels::legacy::updateSmoothingLengthAndRerunIfRequiredWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- No masking yet. This one is tricky, as it might alter the global state. It is very likely that we have to introduce a new iterator which handles the reduction for us.
- Global swift2::statistics::reduceVelocityAndSearchRadiusWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &localParticle)
- This version is not yet available and likely will never vectorise anyway
- Global swift2::timestepping::resetMovedParticleMarkerWithMasking (const peano4::datamanagement::VertexMarker &marker, Particle &particle)
- Mladen: I think we can skip the if here, but I'm not sure.
- Global tarch::logging::CommandLineLogger::NumberOfIndentSpaces
-
- Global tarch::logging::CommandLineLogger::NumberOfStandardColumnSpaces
-
- Global tarch::mpi::BooleanSemaphore::_localRankLockRequestSemaphore
- Explain why we lock locally first. A hypothetical sketch of the pattern is given below.
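One plausible explanation, sketched with standard C++ primitives rather than the actual tarch types (all names here are hypothetical): threads within one rank first serialise through a process-local lock, so at most one MPI-level lock request per rank is in flight at any time.

```cpp
#include <mutex>

std::mutex localRankLockRequestMutex;  // stand-in for _localRankLockRequestSemaphore

void acquireGlobalLock() {
  // Lock locally first: only one thread per rank proceeds to the MPI
  // hand-shake, so the global master never sees competing requests from
  // the same rank. The MPI calls themselves are elided in this sketch.
  std::lock_guard<std::mutex> localGuard(localRankLockRequestMutex);
  // sendLockRequestToGlobalMaster();  // hypothetical helper
  // waitForLockGrant();               // hypothetical helper
}
```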
- Global tarch::multicore::orchestration::Hardcoded::Hardcoded (int numberOfTasksToHoldBack, int minTasksToFuse, int maxTasksToFuse, int deviceForFusedTasks, bool fuseTasksImmediatelyWhenSpawned, int maxNestedConcurrency)
- Documentation is missing
- Global tarch::multicore::orchestration::Strategy::ExecutionPolicy
- I would like to have a flag which tells the actual multicore runtime not (!) to continue with further ready tasks. Such a feature does not exist in OpenMP, for example, and therefore we do not use such a flag.
- Global tarch::multicore::orchestration::Strategy::fuse (int taskType)=0
- Documentation is missing
- Global tarch::multicore::spawnAndWait (const std::vector< Task * > &tasks)
Talk to the OpenMP people: we would like a taskyield() which does not (!) continue with a sibling. This is important for producer-consumer patterns; see the sketch below.
Talk to the OpenMP people: it would be great if we could specify that a taskwait shall not (!) issue a new scheduling point. We would like to distinguish taskwaits which prioritise throughput from taskwaits which prioritise algorithmic latency.
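A minimal sketch of the producer-consumer situation in question; the function names are illustrative. At the taskyield, the OpenMP runtime may resume any ready task, including another consumer sibling, which is exactly what we would like to be able to forbid:

```cpp
#include <atomic>

std::atomic<bool> dataReady{false};

// Meant to run inside an OpenMP task.
void consumerTask() {
  while (!dataReady.load()) {
    // Scheduling point: the runtime may pick ANY ready task here,
    // including a sibling consumer, rather than the producer whose
    // progress this loop is waiting for.
    #pragma omp taskyield
  }
  // ... process the data ...
}

void producerTask() {
  // ... produce the data ...
  dataReady.store(true);
}
```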
- Global tbb::dynamic_task_graph_spawned_node::run ()
- Write something about lazy deletion
- Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const ComplexStencil &complexStencil)
- The mapping should be built once in the constructor and then merely applied here. That is, after all, why this thing is a method and not something static.
- Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const Stencil &stencil)
- The mapping should be built once in the constructor and then merely applied here. That is, after all, why this thing is a method and not something static. A sketch of that refactoring follows below.
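A minimal sketch of that refactoring, with hypothetical types and sizes standing in for the toolbox ones: the mapping is built once in the constructor, and the method reduces to applying it.

```cpp
#include <array>

// _map[i][j] holds the contribution of stencil entry j to element-matrix
// entry i. Building it is the expensive step to hoist into the
// constructor; applying it is a cheap matrix-vector product.
template <int StencilSize, int MatrixSize>
class ElementMatrixFactory {
  private:
    std::array<std::array<double, StencilSize>, MatrixSize> _map{};
  public:
    ElementMatrixFactory() {
      // Fill _map once here (the coefficients are omitted in this sketch).
    }

    std::array<double, MatrixSize> getElementWiseAssemblyMatrix(
      const std::array<double, StencilSize>& stencil
    ) const {
      std::array<double, MatrixSize> result{};
      for (int i = 0; i < MatrixSize; i++)
        for (int j = 0; j < StencilSize; j++)
          result[i] += _map[i][j] * stencil[j];
      return result;
    }
};
```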
- Page Tracers
- Requires update/revision
- Page Tutorial 02: Matrix-free Discontinuous Galerkin single-level solver on a regular grid
Do we need some documentation here?
Alex: can you add something on the numerical flux employed and fix the equations above? I think we can copy and paste from the paper draft.
- Class UpdateFaceSolution.UpdateFaceSolution
- Documentation missing
- Class UpdateResidual.UpdateResidual
- Documentation missing
- Page Visualisation
- Yet to be written