Peano
Todo List
Page 2D Noh implosion test

This is all to be rewritten

We have to extend this list

Class api.actionsets.ImposeDirichletBoundaryConditions.ImposeDirichletBoundaryConditions

This version is not yet implemented, and maybe we don't need it ever!

We need the projection matrix for the solution!

We have to add the corresponding matrix entries below, as well as the right-hand side

Class api.actionsets.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod.ImposeDirichletBoundaryConditionsWithInteriorPenaltyMethod
Add comments which document the discussion from Slack
Class api.Project.Project

I think all projects should share certain commonalities

I want to have some feature to set the MPI timeout

Global api.solvers.Solver.Solver.create_readme_descriptor (self)
Has to be implemented properly, i.e. should at least report on the type and the name of the solver
Global applications::exahype2::ccz4::maxEigenvalue (const double *const Q, int normal, const double CCZ4e, const double CCZ4ds, const double CCZ4GLMc, const double CCZ4GLMd) InlineMethod

Han

Han

Page Assigning data to mesh entities

Something on store/load predicates

Something on send and receives

Something on specialisations (heap, smartpointers, ...)

Global benchmarks::mghype::poisson::DGPoissonPrecompute::DGPoissonPrecompute ()
Please add your documentation here.
Global benchmarks::mghype::poisson::DGPoissonPrecompute::~DGPoissonPrecompute ()
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::DGPoisson::DGPoisson ()
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::DGPoisson::initNode (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs, double &exactSol) override
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::DGPoisson::~DGPoisson ()
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::Poisson::initVertex (const tarch::la::Vector< Dimensions, double > &x, const tarch::la::Vector< Dimensions, double > &h, double &value, double &rhs) override
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::Poisson::Poisson ()
Please add your documentation here.
Global benchmarks::multigrid::petsc::poisson::Poisson::~Poisson ()
Please add your documentation here.
Page Blockstructured
This docu should move closer to the source code
Page Boundary Conditions
Yet to be written/sketched
Page Boundary conditions and solver coupling
Yet to be written
Page Building your project from scratch

Document the parameters either here or in the .py file itself. If you document them there, make sure to note here that users should look there for the documentation.

What are the minimal required particle fields? Shall we hardcode them? If hardcoded, document which fields are always present. If not, document which fields need to be present.

Revise all of this once it converges. I want to move the majority of parameters away from users and from individual project .py files. There is no point in users specifying KERNEL_GAMMAs etc.

Document the units here: time and spatial.

Elaborate.

Go into plotter details. Maybe wait for SWIFT-like HDF5 output.

this

Global ccz4_archived.add_constraint_RefinementFlag_verification (self)
legacy code
Global ccz4_archived.add_derivative_calculation (self)
legacy code
Global ccz4_archived.add_PT_ConsW_Psi4W_AMRFlag (self)
legacy code
Page Continuous Galerkin for the Poisson equation
If someone feels the need to write something here, please do so
Page Continuous Galerkin for the Poisson equation with PETSc
Sean, can you write something down here?
Page Creating new particle types (solvers) with new algorithmic steps

A conflict of concurrent data access occurs here, but it is not really a conflict; it is a simple critical section.

Yet to be written, but, in principle, it is just a switch to another iterator.

Class dastgen2.aspects.MPI.MPI
I think we should introduce an Aspects interface, which makes it clear which routines are to be implemented
Page Discontinuous Galerkin for the Poisson equation with PETSc

Change the arguments' names to the "from" notation.

Tobias. Continue writing.

This section needs to be revisited or moved.

Page Discontinuous Galerkin Solver with PETSc

We need a nice illustration here of all the function spaces

Different polynomial degrees are simple to realise; this is something to do in the future. Note that the +/- values are taken from the same space as the cell, as they are projections, but maybe we want to alter this and even provide completely different degrees for the result of the Riemann solve.

Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_equidistant_grid_projector (self)
Not used at the moment, but definitely useful
Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_fine_grid_projector (self, j)
Definitely useful, but not used at the moment
Global exahype2.solvers.LagrangeBasisWithDiagonalMassMatrix.LagrangeBasisWithDiagonalMassMatrix.__compute_K1 (self)
Not yet updated, but it also seems not to be used anywhere
Global exahype2.solvers.rkdg.RungeKuttaDG.RungeKuttaDG.add_actions_to_create_grid (self, step, evaluate_refinement_criterion)
The boundary information is set only once. It is therefore important that we use the face label and initialise it properly.
Class exahype2.solvers.rkfd.actionsets.PreprocessSolution.PreprocessReconstructedSolutionWithHalo
This is wrong.
Global exahype2::dg::internal::copySolution (const double *__restrict__ cellQin, const int order, const int unknowns, const int auxiliaryVariables, const tarch::la::Vector< Dimensions, int > &node, double *__restrict__ cellQout)
Still to be converted completely to the micro-kernel notation that Dominic introduced in ADER. Do this only after we have the enumerators in place.
Page ExaSeis with rupture coupling
Maybe we can model this differently using the FV infrastructure.
Page Explore data with PyVista
This is yet to be written.
Page Extend the visualisation (and checkpointing)
More docu here please.
Page GPU support
We have to write this
Page GPU support

Some of the content below should go into the Jupyter notebook

More details on the maturity and workings of the code

Discussion how to introduce your own GPU-specific kernels

This will change completely as we switch to xDSL. Notably, we will not support multiple target GPUs in one source code anymore.


Page Hybrid Galerkin for the Poisson equation with PETSc

Continue to write

Eike, Alex, maybe you can assist here

Page Mask out where and when data are used
Consolidate this behaviour.
Page Matrix-free mixed DG for the Poisson equation

Tobias and Alex should discuss.

Tobias, please write.

Page Merging/coupling various applications

We need to figure out how to toggle between the two projects and decide which one runs a given step, and how to switch back and forth in arbitrary order.

Dmitry, if we find that attributes' names clash, we have to rename them too.

Page MGHyPE
Needs cleanup in line with other bigger software packages
Global mghype.api.matrixgenerators.DLinear.DLinear._cell_dofs (self)
Should this be 2**D or (p+1)**D? Presumably it is not relevant, given that this is a linear solver.
Global mghype.api.matrixgenerators.DLinearMassIdentity.DLinearMassIdentity._cell_dofs (self)
Should this be 2**D or (p+1)**D? Presumably it is not relevant, given that this is a linear solver.
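The 2**D versus (p+1)**D question above is just the tensor-product degree-of-freedom count. A tiny sketch (the function name cell_dofs is illustrative, not part of the Peano API) shows that the two expressions coincide exactly in the d-linear case p=1, which is why the distinction does not matter for this linear solver:

```python
def cell_dofs(p: int, dimensions: int) -> int:
    # A tensor-product Lagrange basis of degree p has (p+1) nodes per
    # coordinate direction, hence (p+1)**D nodes per cell.
    return (p + 1) ** dimensions

# For p=1 (d-linear elements), (p+1)**D collapses to 2**D.
for D in (2, 3):
    assert cell_dofs(1, D) == 2 ** D

print(cell_dofs(1, 2), cell_dofs(1, 3), cell_dofs(2, 2))
```

For any higher degree the counts diverge, e.g. p=2 in 2D gives 9 DoFs per cell rather than 4.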
Page Parallelisation

Scatter is missing.

A lot to write here

Class peano4.dastgen2.MPIAndStorageAspect.MPIAndStorageAspect
A nicer implementation would introduce an Aspect interface. See MPI aspect in Peano 4. There's a similar comment in there.
Class peano4.toolbox.particles.UpdateParticle_MultiLevelInteraction_StackOfLists_ContiguousParticles.UpdateParticle_MultiLevelInteraction_StackOfLists_ContiguousParticles
Some of the syntax here might be outdated and recipes might be redundant. Compare general toolbox docu.
Class peano4.visualisation.input.Patch.Patch
update
Global peano4.visualisation.input.PatchFileParser.PatchFileParser.parse_file (self)
Clean up the loop below: we can separate it into reading the metadata first and then reading the patches.
Global peano4::grid::GridTraversalEventGenerator::createEnterCellTraversalEvent (GridVertex coarseGridVertices[TwoPowerD], GridVertex fineGridVertices[TwoPowerD], const AutomatonState &state, const SplitSpecification &splitTriggered, const std::set< int > &splitting, const std::set< int > &joinTriggered, const std::set< int > &joining, const std::set< int > &hasSplit, const tarch::la::Vector< Dimensions, int > &relativePositionToFather, bool spacetreeStateIsRunning) const
Joining is not discussed or implemented yet.
Global peano4::grid::Spacetree::getAdjacentRanksForNewVertex (GridVertex coarseGridVertices[TwoPowerD], const tarch::la::Vector< Dimensions, int > &vertexPositionWithin3x3Patch) const
Global peano4::grid::Spacetree::sendUserData (const AutomatonState &state, TraversalObserver &observer, const GridTraversalEvent &enterCellTraversalEvent, GridVertex fineGridVertices[TwoPowerD])
What happens with dynamic AMR?
Global peano4::grid::TraversalObserver::streamDataFromSplittingTreeToNewTree (int)
Not clear how this works on the worker side.
Global peano4::parallel::SpacetreeSet::answerQuestions ()
The name should be replyToUnansweredMessages().
Page Poisson tests

Page Solver coupling (Single Schwarzschild black hole)

The calls above are not valid anymore. See benchmark docu further down which works.

Remove the KO terms here

Global solvers.api.actionsets.DGCGCoupling.MultiplicativeDGCGCoupling.__init__ (self, dg_solver, cg_solver, prolongation_matrix, prolongation_matrix_scaling, restriction_matrix, restriction_matrix_scaling, injection_matrix, injection_matrix_scaling, use_fas, smoothing_steps_DG=4, smoothing_steps_CG=-1, vcycles=1)
Implement ignoring the tolerance when the number of cycles is set as the stopping criterion.
Class solvers.api.actionsets.DGSolver.ProjectOntoFaces.ProjectOntoFaces
There's documentation missing
Class solvers.api.actionsets.DGSolver.UpdateFaceSolution.UpdateFaceSolution
Docu missing
Class solvers.api.actionsets.DGSolver.UpdateResidual.UpdateResidual
Documentation missing
Class solvers.api.actionsets.DGSolver.UpdateResidualWithTasks.UpdateResidualWithTasks
Documentation missing
Page SWIFT's task graph compiler
The skip mechanism does not exist yet, I think. It is not clear whether we need it at all, or whether the disadvantage is still there.
Class swift2.particle.SPHLeapfrogFixedSearchRadius.SPHLeapfrogFixedSearchRadius
Attribute missing
Global swift2::committedGridControlEvents
Write some docu
Global swift2::kernels::adoptInteractionRadiusAndTriggerRerun (const std::list< Particle * > &localParticles, const std::list< Particle * > &activeParticles, int targetNumberOfNeighbourParticles, double maxGrowthPerSweep=2.0, double shrinkingFactor=0.8)
Mladen: this docu is messed up, and we have to take into account here that we have to accommodate multiscale effects.
Global swift2::kernels::flagBoundaryParticles (const ParticleContainer &localParticles, const double nparts, const tarch::la::Vector< Dimensions, double > &domainSize, const tarch::la::Vector< Dimensions, double > &domainOffset)
This is wrong.
Global tarch::logging::CommandLineLogger::NumberOfIndentSpaces
Global tarch::logging::CommandLineLogger::NumberOfStandardColumnSpaces
Global tarch::mpi::BooleanSemaphore::_localRankLockRequestSemaphore
Explain why we lock locally first.
Class tarch::multicore::taskfusion::LogReadyTask
Global tbb::dynamic_task_graph_spawned_node::run ()
Write something about lazy deletion.
Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const Stencil &stencil)
The mapping should be built once in the constructor and then merely applied here. That is, after all, why this is a method and not a static entity.
Global toolbox::finiteelements::getElementWiseAssemblyMatrix (const ComplexStencil &complexStencil)
The mapping should be built once in the constructor and then merely applied here. That is, after all, why this is a method and not a static entity.
Page Tracers
Requires update/revision
Page Tutorial 3: Matrix-free Discontinuous Galerkin single-level solver on a regular grid
Alex, can you add something on the numerical flux employed and fix the equations above? I think we can copy and paste from the paper draft.
Page Visualisation
Yet to be written
Page Working with solvers that support tasking
We have to write this