Peano

A fundamental technique employed in our solver is the unfolding of compute steps over the mesh entities. Discontinuous Galerkin solvers and their variants combine different integrals and different solution representations: some integrals run over the faces, some over the cells, some take a function representation and map it back onto the cells, and so forth. To reduce this complexity, to be able to combine different solver flavours quickly, and to parallelise on an exascale machine, we unfold the individual compute steps over the mesh entities.
The term unfolding for us thus comprises the introduction of auxiliary variables on some mesh entities (faces and cells) plus the subsequent decomposition of the compute steps over these entities.
Before we start discussing individual solver steps, we establish a few guidelines:
- The conventions can be generalised.
- Some auxiliary variables can be eliminated or need not be stored explicitly, as we can feed them directly into subsequent compute steps.
- All conventions are directly reflected in our terminology.
We now decompose the compute steps into substeps. This decomposition or splitting follows one fundamental principle: each substep takes input from only one mesh entity and maps it onto dofs of exactly one other mesh entity. We may never write to two mesh entities at any point, no matter whether they are of the same type or of different types.
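As a minimal sketch of this splitting rule (all class and function names here are hypothetical, not Peano API), every substep reads dofs from exactly one mesh entity and writes dofs to exactly one other mesh entity:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    u: float = 0.0          # cell solution dof

@dataclass
class Face:
    u_left: float = 0.0     # auxiliary variable fed by the left cell
    u_right: float = 0.0    # auxiliary variable fed by the right cell

def project_onto_face(cell: Cell, face: Face, side: str) -> None:
    """Reads one entity (the cell), writes one entity (the face)."""
    if side == "left":
        face.u_left = cell.u
    else:
        face.u_right = cell.u

# Two separate substeps, each with one input entity and one output entity:
cell_l, cell_r, face = Cell(u=1.0), Cell(u=3.0), Face()
project_onto_face(cell_l, face, "left")    # left cell  -> face
project_onto_face(cell_r, face, "right")   # right cell -> face
```

No substep ever touches two cells or two faces at once; the face merely accumulates the auxiliary variables written by the two independent projections.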
If your Riemann solver combines the left and the right cell's values and returns an output over the face, you first have to decompose it into three steps:
\( \mathcal{R}(u^+, u^-) = \mathcal{\tilde R} \left( P_{f \gets c} u^-, P_{f \gets c} u^+ \right) \)
The Riemann solver takes the left plus the right solution and hence accepts data from two input entities. This is forbidden. Therefore, we introduce two auxiliary operations. They take the solution in a cell and map it onto auxiliary variables on the face (cf. the discussion of auxiliary variables above). In the formula above, I use \( P_{f \gets c} \) as a generic symbol for this operation. The actual Riemann solver then takes these projections (they now both sit on the face) and writes its result back onto some (auxiliary) face data. It may not write directly back into the cells (as we integrate over the outcome in DG subject to a test function), as this would once again violate our constraints.
This methodology resembles techniques used in task-based programming. In the code, we might slightly weaken the constraint above: if we have, for example, cell data, we might project it onto all \( 2d \) adjacent faces in one go. Formally, we still have \( 2d \) operations, but we might combine them into one for convenience.
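The three-step decomposition of the Riemann solve can be sketched as follows. This is a hypothetical illustration: the projection is the identity (as for a piecewise-constant representation) and the Riemann kernel is a placeholder central average, not an actual flux function.

```python
def project(cell_u: float) -> float:
    """P_{f<-c}: map a cell's dofs onto an auxiliary face variable.
    Identity here, standing in for a real cell-to-face projection."""
    return cell_u

def riemann(u_minus: float, u_plus: float) -> float:
    """The actual Riemann kernel: reads only face-local projections,
    writes only (auxiliary) face data. Never touches the cells."""
    return 0.5 * (u_minus + u_plus)  # placeholder central average

u_minus = project(1.0)              # step 1: left cell  -> face auxiliary variable
u_plus  = project(3.0)              # step 2: right cell -> face auxiliary variable
flux    = riemann(u_minus, u_plus)  # step 3: face -> face
```

Each of the three steps respects the one-input-entity, one-output-entity rule; only the projections ever read cell data.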
\( \left( \begin{array}{cccc} A^{cc} & & & P_{c \gets f} \\ P_{f \gets c} & id & & \\ P_{f \gets c} & & id & \\ & id & id & A^{ff} \end{array} \right) \left( \begin{array}{c} u^c \\ u^+ \\ u^- \\ q \end{array} \right) = \left( \begin{array}{c} M b \\ 0 \\ 0 \\ 0 \end{array} \right) \)
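With illustrative scalar stand-ins for the block operators (all numeric values below are hypothetical, chosen only to make the system concrete), the unfolded system can be assembled and solved directly:

```python
import numpy as np

# Scalar stand-ins for the blocks:
# A^{cc} = 2, P_{f<-c} = 1, P_{c<-f} = 1, A^{ff} = 3, M b = 5.
A = np.array([
    [2.0, 0.0, 0.0, 1.0],   # A^{cc} u^c                 + P_{c<-f} q = M b
    [1.0, 1.0, 0.0, 0.0],   # P_{f<-c} u^c + u^+                      = 0
    [1.0, 0.0, 1.0, 0.0],   # P_{f<-c} u^c         + u^-              = 0
    [0.0, 1.0, 1.0, 3.0],   #                u^+   + u^-  + A^{ff} q  = 0
])
rhs = np.array([5.0, 0.0, 0.0, 0.0])
u_c, u_plus, u_minus, q = np.linalg.solve(A, rhs)

# Eliminating the auxiliary variables by hand confirms the result:
# u^+ = u^- = -u^c and q = 2 u^c / 3, so (2 + 2/3) u^c = 5, i.e. u^c = 15/8.
```

The rows mirror the unfolding: the middle two rows are the cell-to-face projections onto the auxiliary variables \( u^\pm \), and the last row is the face-local solve that produces \( q \), which the first row feeds back into the cell equation.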