- Page 2D Noh implosion test
We have to extend this list
This is all to be rewritten
- Class api.actionsets.ImposeDirichletBoundaryConditions.ImposeDirichletBoundaryConditions
We need the projection matrix for the solution!
We have to add the corresponding matrix entries below, as well as the right-hand side
This version is not yet implemented, and maybe we don't need it ever!
- Class api.actionsets.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod
- Add comments which document from Slack
- Global api.solvers.Solver.Solver.create_readme_descriptor (self)
- Has to be implemented properly, i.e. it should at least report the type and the name of the solver
- Global applications::exahype2::ccz4::maxEigenvalue (const double *const Q, int normal, const double CCZ4e, const double CCZ4ds, const double CCZ4GLMc, const double CCZ4GLMd) InlineMethod
Han
- Page Assigning data to mesh entities
Something on store/load predicates
Something on send and receives
Something on specialisations (heap, smartpointers, ...)
- Global benchmarks::mghype::poisson::DGPoissonPrecompute::DGPoissonPrecompute ()
- Please add your documentation here.
- Global benchmarks::mghype::poisson::DGPoissonPrecompute::~DGPoissonPrecompute ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::DGPoisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::initNode (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs, double &exactSol) override
- Please add your documentation here
- Global benchmarks::multigrid::petsc::poisson::DGPoisson::~DGPoisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::Poisson::initVertex (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs) override
- Please add your documentation here
- Global benchmarks::multigrid::petsc::poisson::Poisson::Poisson ()
- Please add your documentation here.
- Global benchmarks::multigrid::petsc::poisson::Poisson::~Poisson ()
- Please add your documentation here.
- Page Blockstructured
- This docu should move closer to the source code
- Page Boundary Conditions
- Yet to be written/sketched
- Page Boundary conditions and solver coupling
- Yet to be written
- Page Building your project from scratch
this
Go into plotter details. Maybe wait for swift-like hdf5 output.
elaborate
document units here: time and spatial.
all of this, once it converges. I want to move the majority of parameters away from users and out of the individual project .py files; there is no point in users specifying KERNEL_GAMMAs etc.
what are the minimal required particle fields? Shall we hardcode them? If hardcoded, document which fields are always present; if not, document which fields have to be present.
Document the parameters either here or in the .py file itself. If you document them there, make sure to note here that users should look there for the documentation.
- Global ccz4_archived.add_constraint_RefinementFlag_verification (self)
- legacy code
- Global ccz4_archived.add_derivative_calculation (self)
- legacy code
- Global ccz4_archived.add_PT_ConsW_Psi4W_AMRFlag (self)
- legacy code
- Page Continuous Galerkin for the Poisson equation
- If someone feels the need to write something here, please do so
- Page Continuous Galerkin for the Poisson equation with PETSc
- Sean, can you write something down here?
- Page Creating new particle types (solvers) with new algorithmic steps
The conflict of concurrent data access arises here, but it is not really a conflict; it is a simple critical section.
Yet to be written, but in principle it is just a switch to another iterator.
- Page Discontinuous Galerkin for the Poisson equation with PETSc
Change the arguments' names to the "from" notation.
Tobias. Continue writing.
This section needs to be revisited or moved.
- Page Discontinuous Galerkin Solver with PETSc
Different polynomial degrees are simple to realise; this is something to do in the future. Note that the +/- projections are taken from the same space as the cell, since they are projections, but maybe we want to alter this and even provide completely different degrees for the result of the Riemann solve.
We need a nice illustration here of all the function spaces
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_equidistant_grid_projector (self)
- Not used atm, but definitely useful
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_fine_grid_projector (self, j)
- Definitely useful, but not used atm
- Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_K1 (self)
- Not yet updated, but it also seems not to be used anywhere
- Global exahype2.solvers.rkdg.RungeKuttaDG.RungeKuttaDG.add_actions_to_create_grid (self, step, evaluate_refinement_criterion)
- The boundary information is set only once. It is therefore important that we use the face label and initialise it properly.
- Class exahype2.solvers.rkfd.actionsets.PreprocessSolution.PreprocessReconstructedSolutionWithHalo
- This is wrong
- Global exahype2::dg::internal::copySolution (const double *__restrict__ cellQin, const int order, const int unknowns, const int auxiliaryVariables, const tarch::la::Vector< Dimensions, int > &node, double *__restrict__ cellQout)
- Still convert everything to the micro-kernel notation that Dominic introduced in ADER. But only do this once we have the enumerators in place.
- Page ExaSeis with rupture coupling
- Maybe we can model this differently using the FV infrastructure.
- Page Explore data with PyVista
- This is yet to be written.
- Page Extend the visualisation (and checkpointing)
- More docu here please.
- Page Finite Volumes
- This should contain many more projections, i.e. how they are realised
- Page GPU support (and aggressive vectorisation)
This will change completely as we switch to xDSL. Notably, we will not support multiple target GPUs in one source code anymore.
More details on the maturity and workings of the code
Discussion how to introduce your own GPU-specific kernels
- Page Hybrid Galerkin for the Poisson equation with PETSc
Eike, Alex maybe you can assist here
Continue to write
- Page Matrix-free mixed DG for the Poisson equation
Tobias, please write.
Tobias and Alex should discuss.
- Page Merging/coupling various applications
Dmitry, if we find that attributes' names clash, we have to rename them too
We need to figure out how to toggle and decide whether we want to run a step from either project, and how to switch back and forth in arbitrary order.
- Global mghype.api.matrixgenerators.DLinear.DLinear._cell_dofs (self)
- Should this be 2**D or (p+1)**D? Presumably it is not relevant, given that this is a linear solver
- Global mghype.api.matrixgenerators.DLinearMassIdentity.DLinearMassIdentity._cell_dofs (self)
- Should this be 2**D or (p+1)**D? Presumably it is not relevant, given that this is a linear solver
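A quick sanity check for the question above (hypothetical snippet; `cell_dofs` is an illustrative helper, not part of the mghype API): for d-linear elements the polynomial degree is p = 1, so 2**D and (p+1)**D coincide, and the two expressions only diverge for higher-order elements.

```python
def cell_dofs(p: int, dimensions: int) -> int:
    """Degrees of freedom per cell for tensor-product Lagrange elements
    of polynomial degree p in the given number of dimensions."""
    return (p + 1) ** dimensions

for D in (2, 3):
    # d-linear case (p = 1): 2**D and (p+1)**D are identical
    assert cell_dofs(1, D) == 2**D

# For p = 2 in 3d the expressions would differ: (p+1)**D = 27 while 2**D = 8
assert cell_dofs(2, 3) == 27
```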
- Page Parallelisation
A lot to write here
Scatter is missing
- Page Peano 4
add links to the Python and C++ documentation which has already been captured.
tidy this up; this page was only added to introduce the methods which need to be written for multigrid, such as touchVertexFirstTime
Specify each of these, and how they can be given extra functionality.
clear up the wording here!
- Class peano4.toolbox.particles.UpdateParticle_MultiLevelInteraction_StackOfLists_ContiguousParticles.UpdateParticle_MultiLevelInteraction_StackOfLists_ContiguousParticles
- Some of the syntax here might be outdated and the recipes might be redundant. Compare with the general toolbox docu.
- Class peano4.visualisation.input.Patch.Patch
- update
- Global peano4.visualisation.input.PatchFileParser.PatchFileParser.parse_file (self)
- Clean up the loop below. We can separate it into getting the metadata and then reading the patches.
- Global peano4::grid::GridTraversalEventGenerator::createEnterCellTraversalEvent (GridVertex coarseGridVertices[TwoPowerD], GridVertex fineGridVertices[TwoPowerD], const AutomatonState &state, const SplitSpecification &splitTriggered, const std::set< int > &splitting, const std::set< int > &joinTriggered, const std::set< int > &joining, const std::set< int > &hasSplit, const tarch::la::Vector< Dimensions, int > &relativePositionToFather, bool spacetreeStateIsRunning) const
- Joining is not discussed or implemented yet.
- Global peano4::grid::Spacetree::getAdjacentRanksForNewVertex (GridVertex coarseGridVertices[TwoPowerD], const tarch::la::Vector< Dimensions, int > &vertexPositionWithin3x3Patch) const
-
- Global peano4::grid::Spacetree::sendUserData (const AutomatonState &state, TraversalObserver &observer, const GridTraversalEvent &enterCellTraversalEvent, GridVertex fineGridVertices[TwoPowerD])
- What happens with dynamic AMR?
- Global peano4::grid::TraversalObserver::streamDataFromSplittingTreeToNewTree (int)
- Not clear how this works on the worker side.
- Global peano4::parallel::SpacetreeSet::answerQuestions ()
- replyToUnansweredMessages() would be the better name
- Page Performance optimisation
Does the code use enclave tasks and yield many tasks? In this case, you might have encountered a scheduler flaw. Change the multicore scheduler on the command line.
This has to be worked out. It usually does not happen with OpenMP, but TBB/SYCL are a different story.
- Page Poisson tests
- Page Solver coupling (Single Schwarzschild black hole)
Remove the KO terms here
The calls above are not valid anymore. See the benchmark docu further down, which works.
- Global solvers.api.actionsets.DGCGCoupling.MultiplicativeDGCGCoupling.__init__ (self, dg_solver, cg_solver, prolongation_matrix, prolongation_matrix_scaling, restriction_matrix, restriction_matrix_scaling, injection_matrix, injection_matrix_scaling, use_fas, smoothing_steps_DG=4, smoothing_steps_CG=-1, vcycles=1)
- Implement ignoring the tolerance when the number of cycles is set as the stopping criterion.
- Class solvers.api.actionsets.DGSolver.ProjectOntoFaces.ProjectOntoFaces
- There's documentation missing
- Class solvers.api.actionsets.DGSolver.UpdateFaceSolution.UpdateFaceSolution
- Docu missing
- Class solvers.api.actionsets.DGSolver.UpdateResidual.UpdateResidual
- Documentation missing
- Class solvers.api.actionsets.DGSolver.UpdateResidualWithTasks.UpdateResidualWithTasks
- Documentation missing
- Page SWIFT's task graph compiler
- The skip mechanism does not exist yet, I think. Not sure if we need it at all, and not sure if the disadvantage is still there.
- Global swift2::committedGridControlEvents
- write some docu
- Global swift2::kernels::adoptInteractionRadiusAndTriggerRerun (const std::list< Particle * > &localParticles, const std::list< Particle * > &activeParticles, int targetNumberOfNeighbourParticles, double maxGrowthPerSweep=2.0, double shrinkingFactor=0.8)
- Mladen: this docu is messed up, and we have to take into account here that we have to accommodate multiscale effects.
- Global swift2::kernels::flagBoundaryParticles (const ParticleContainer &localParticles, const double nparts, const tarch::la::Vector< Dimensions, double > &domainSize, const tarch::la::Vector< Dimensions, double > &domainOffset)
- This is wrong
- Global tarch::logging::CommandLineLogger::NumberOfIndentSpaces
-
- Global tarch::logging::CommandLineLogger::NumberOfStandardColumnSpaces
-
- Global tarch::mpi::BooleanSemaphore::_localRankLockRequestSemaphore
- explain why we lock locally first
- Global tarch::multicore::spawnAndWait (const std::vector< Task * > &tasks)
Speak to OpenMP. It would be totally great if we could say that the task wait shall not(!) issue a new scheduling point. We would like to distinguish task waits which prioritise throughput vs algorithmic latency.
Speak to OpenMP about a taskyield() which does not(!) continue with a sibling. This is important for producer-consumer patterns.
- Global tbb::dynamic_task_graph_spawned_node::run ()
- Write something about lazy deletion
- Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const ComplexStencil &complexStencil)
- The mapping should be built once in the constructor and then only applied here. That is why this thing is a method and not something static.
- Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const Stencil &stencil)
- The mapping should be built once in the constructor and then only applied here. That is why this thing is a method and not something static.
- Page Tracers
- Requires update/revision
- Page Tutorial 3: Matrix-free Discontinuous Galerkin single-level solver on a regular grid
- Alex, can you add something on the numerical flux employed and fix the equations above? I think we can copy and paste from the paper draft.
- Page Visualisation
- Yet to be written