Peano
Discontinuous Galerkin Solver with PETSc

The weak formulation

To translate a PDE into an equation system, we express the solution

\( u = \sum _i u_i \phi _i \)

as a sum over shape functions \( \phi _i \). The shape functions in this class are continuous within each cell, but their linear combination is not continuous across cell boundaries. We test this setup against test functions \( v \), where each test function is one of the shape functions \( \phi _i \). Drawing both shape and test functions from the same function space means that we follow a Ritz-Galerkin approach.
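As a small illustration of the expansion \( u = \sum _i u_i \phi _i \), the sketch below evaluates a 1d solution from its nodal weights using Lagrange shape functions. The node layout and weights are made up for illustration; nothing here is Peano API.

```python
import numpy as np

def lagrange_basis(nodes, i, x):
    """Evaluate the i-th Lagrange shape function phi_i over the given nodes at x."""
    phi = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            phi *= (x - xj) / (nodes[i] - xj)
    return phi

def evaluate_solution(weights, nodes, x):
    """u(x) = sum_i u_i phi_i(x): linear combination of shape functions."""
    return sum(u_i * lagrange_basis(nodes, i, x) for i, u_i in enumerate(weights))

# Hypothetical p=2 setup: three equispaced nodes within one 1d cell
nodes = np.array([0.0, 0.5, 1.0])
weights = np.array([1.0, 2.0, 4.0])

# At a node, u reproduces the nodal weight (Lagrange property)
print(evaluate_solution(weights, nodes, 0.5))  # 2.0
```

Since the shape functions have support only within their cell, the same evaluation per cell yields a globally discontinuous function.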

Let \( \mathcal{L}(u) \) be the differential operator on the left-hand side of the PDE, i.e. let's assume that we solve

\( \mathcal{L}(u) = f. \)

A weak formulation means that we multiply both sides with a test function and then integrate over the whole thing. As we take all these test functions from the same function space as the shape functions, we know that they all have local support. Consequently, our test integral "shrinks" down to the support of the test function

\( \int _\Omega \mathcal{L}(u) \cdot v dx = \int_{\text{cell}_v} \mathcal{L}(u) \cdot v dx \)

as the product disappears everywhere outside of the cell that hosts the test function of interest.

Once again, we can integrate by parts to get rid of the highest derivative in our partial differential operator \( \mathcal{L}(u) \), but this time, we will get an expression of the form

\begin{eqnarray*} \int _\Omega \mathcal{L}(u) \cdot v dx &=& - \int _\Omega \left( \tilde{\mathcal{L}}(u), \nabla v \right) dx + \int _{\partial \text{cell}_v} (n,\tilde{\mathcal{M}}(u)) v dS(x) \\ &=& - \int _{\text{cell}_v} \left( \tilde{\mathcal{L}}(u), \nabla v \right) dx + \int _{\partial \text{cell}_v} (n,\tilde{\mathcal{M}}(\hat u )) v dS(x) \\ & = & \int _{\text{cell}_v} f \cdot v dx. \end{eqnarray*}

We reduced the order of the derivative within \( \mathcal{L}(u) \) and got a new operator \( \tilde{\mathcal{L}} \). If \( \mathcal{L}(u) \) is dominated by a second derivative, \( \tilde{\mathcal{L}} \) hosts only first-order derivatives. But this is a Pyrrhic victory: We now get an additional term over the cell faces.
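As a concrete instance of the formula above, consider the Poisson equation (a sketch; the Poisson example is picked up again in the examples at the end). With \( \mathcal{L}(u) = -\Delta u \), integration by parts yields \( \tilde{\mathcal{L}}(u) = \tilde{\mathcal{M}}(u) = -\nabla u \), and the weak form specialises to

```latex
% Poisson: \mathcal{L}(u) = -\Delta u, hence
% \tilde{\mathcal{L}}(u) = \tilde{\mathcal{M}}(u) = -\nabla u
\begin{eqnarray*}
  \int _{\text{cell}_v} -\Delta u \cdot v \, dx
    &=& \int _{\text{cell}_v} \left( \nabla u, \nabla v \right) dx
        - \int _{\partial \text{cell}_v} (n, \nabla \hat u) \, v \, dS(x) \\
    &=& \int _{\text{cell}_v} f \cdot v \, dx.
\end{eqnarray*}
```

The two minus signs from \( \tilde{\mathcal{L}}(u) = -\nabla u \) cancel in the volume term, while the face term picks up the (yet to be defined) \( \hat u \).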

We conclude that

  1. the jump terms along the faces do not disappear, i.e. they do not cancel out from left and right as they do in a Continuous Galerkin formulation.
  2. Even worse, they refer to a value \( u \) which technically does not exist, as there is no (unique) solution along the face. We only have a left \( u^- \) and a right \( u^+ \) solution. Therefore, I explicitly write \( \hat u \) in the equation above. It denotes that we haven't yet decided what the function looks like along the face.
  3. We now have cell integrals to evaluate, and there are additional face integrals. The latter ones couple neighbouring cells.

We end up with a linear equation system

\( A x = b \)

where \( x \) has \( |\mathbb{C}| \cdot K \cdot (p+1)^d \) entries. Here, \( \mathbb{C} \) is the set of cells in the mesh, \( K \) the number of unknowns per node, \( p \) the polynomial degree, and \( d \) the spatial dimension.

An operator language and temporary variables

Before we dive into examples or discuss how to map this formalism onto code, we introduce an operator language. That is, we break down the overall computation into small tasks. These tasks will be found 1:1 in the code later on. Furthermore, we gain some freedom by breaking things down into tiny bits and pieces: We can alter individual ones or puzzle them together in various ways.

We introduce a few operators (functions) to break down the overall computational scheme into a sequence/set of computations:

  • The operator \( A_{cc} \) takes the dofs within a cell and computes the term \( - \int _{\text{cell}_v} \left( \tilde{\mathcal{L}}(u), \nabla v \right) dx \). It takes the \( K (p+1)^d \) values per cell and yields \( K (p+1)^d \) equations/updates from the cell. In classes like petsc.solvers.DiscontinuousGalerkinDiscretisation this matrix is called cell_cell_lhs_matrix.
  • Let's introduce temporary variables on the face. Those are the \( u^+ \) and \( u^- \) values, i.e. we store \( 2K (p+1)^{d-1} \) degrees of freedom on the face. Those are not really degrees of freedom: they will be mere projections, i.e. they simply encode what the solution along the face is.
  • The operator \( A_{cf} \) takes the solution from a cell and projects it onto a face. Let \( u^+ \) and \( u^- \) denote the solution left or right of a face (from the face's point of view). For each of the \( 2d \) faces of a cell, the operator \( A_{cf} \) yields either the \( u^+ \) or the \( u^- \).
  • The operator \( A_{ff} \) operates on a face. It takes \( u^+ \) and \( u^- \) and yields something that "reconstructs" the solution and applies the flux operator on it. This is the Riemann solver and hence linear algebraises the \( \tilde{\mathcal{M}}(\hat u ) \) from above.
  • The operator \( A_{fc} \) takes the \( \hat u \) on the face and yields the updates for the cell. It evaluates \( \int _{\partial \text{cell}_v} (n,\tilde{\mathcal{M}}(\hat u )) v dS(x) \).
Within Peano, it is important to carefully distinguish cell data from face data. Furthermore, (i) we can read data associated with faces within cells; (ii) we can never read data from neighbouring cells within a cell; (iii) we can never access adjacent cell data from a face.

The overall scheme in operator notation thus reads

\( A_{cc} + A_{fc} \circ A_{ff} \circ A_{cf} \)

i.e. we evaluate the cell contributions and add the face contributions, which in turn result from a concatenation of three operators: project the solution onto the face, solve a Riemann problem there, and then take the result and project it back onto the cell. The projection onto the face is a purely geometric thing: Once we know the polynomials chosen within the cell, we can evaluate what the solution looks like depending on the weights (the unknowns held within the cell). With that knowledge, we can determine what the solution on the face looks like. We use the same polynomial space there (restricted to the submanifold) and therefore can express the face representation through polynomial weights (left and right of the face), too. For each pair of polynomial weights, we can solve the Riemann problem and store the outcome in further weights. That is, we represent the outcome of the Riemann problem again in the polynomial space on the faces. Finally, we take this representation, test it against the test functions, integrate over the face, and write the contributions back into the cell.
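The operator concatenation \( A_{cc} + A_{fc} \circ A_{ff} \circ A_{cf} \) can be sketched in plain linear algebra. All matrix dimensions and entries below are made up; only the shape compatibility of the chain matters:

```python
import numpy as np

rng = np.random.default_rng(42)

n_cell = 8   # dofs per cell (hypothetical)
n_proj = 4   # projected face values x^+/x^- seen by one cell (hypothetical)
n_face = 2   # reconstructed face values x_f seen by one cell (hypothetical)

A_cc = rng.standard_normal((n_cell, n_cell))  # cell-local volume term
A_cf = rng.standard_normal((n_proj, n_cell))  # project cell solution onto faces
A_ff = rng.standard_normal((n_face, n_proj))  # Riemann solve on the face
A_fc = rng.standard_normal((n_cell, n_face))  # test face outcome back into cell

x_c = rng.standard_normal(n_cell)

# Evaluate (A_cc + A_fc . A_ff . A_cf) applied to the cell dofs
residual = A_cc @ x_c + A_fc @ (A_ff @ (A_cf @ x_c))
assert residual.shape == (n_cell,)
```

The point of the factorisation is exactly this shape chain: cell dofs are projected down to face values, transformed there, and tested back into cell-sized contributions.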

Effectively, our approach blows up the equation system to solve into

\( Ax = \begin{bmatrix} A_{cc} & A_{fc} & 0 & 0 \\ 0 & id & -A_{ff}|_+ & -A_{ff}|_- \\ -id & 0 & A_{cf}|_+ & 0 \\ -id & 0 & 0 & A_{cf}|_- \end{bmatrix} \begin{bmatrix} x_{c} \\ x_f \\ x^+ \\ x^- \end{bmatrix} = \begin{bmatrix} b_{c} \\ 0 \\ 0 \\ 0 \end{bmatrix} \)

Todo
We need a nice illustration here of all the function spaces

where we have to specify the involved matrices further. The \( x_c \) are the real degrees of freedom, i.e. the weights within the cells which effectively span the solution. The \( x_\pm \) encode the projection of these polynomials onto the faces. These are not real degrees of freedom, as they always result directly from the \( x_c \). The \( x_f \) encode the reconstructed solution on the faces and hold the average of the \( x_\pm \) values. This setup means our total matrix is no longer square. In the counts below, we drop the factor \( K \) of unknowns per node for brevity. \( A_{cc} \) is square, with \( |\mathbb{C}| \cdot (p+1)^d \) rows, where \( |\mathbb{C}| \) is the number of cells in the mesh. \( A_{fc} \) has the same number of rows, but \( |\mathbb{F}| \cdot (p+1)^{d-1} \) columns, since this part has to "hit" \( x_f \). \( A_{ff} \) has \( |\mathbb{F}| \cdot (p+1)^{d-1} \) rows, and twice as many columns (note that \( | x_\pm| \ = \ 2 |x_f| \) ). Finally, \( A_{cf} \) has \( |\mathbb{C}| \cdot (p+1)^d \) rows and \( 2|\mathbb{F}| \cdot (p+1)^{d-1} \) columns, since this part "hits" \( x_\pm \).

Our global matrix is no longer square. We have \( 2 |\mathbb{C}| \cdot (p+1)^d + |\mathbb{F}| \cdot (p+1)^{d-1} \) rows and \( |\mathbb{C}| \cdot (p+1)^d + 3|\mathbb{F}| \cdot (p+1)^{d-1} \) columns.
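These row and column counts can be spelled out as a small helper (a sketch; the factor \( K \) of unknowns per node is again omitted, and the concrete mesh sizes are hypothetical):

```python
def global_matrix_shape(num_cells: int, num_faces: int, p: int, d: int):
    """Rows/columns of the blown-up, non-square system:
    2|C|(p+1)^d + |F|(p+1)^{d-1} rows and
    |C|(p+1)^d + 3|F|(p+1)^{d-1} columns (factor K omitted)."""
    cell = (p + 1) ** d
    face = (p + 1) ** (d - 1)
    rows = 2 * num_cells * cell + num_faces * face
    cols = num_cells * cell + 3 * num_faces * face
    return rows, cols

# Hypothetical 2d mesh: 4 cells, 12 faces, linear elements (p=1)
print(global_matrix_shape(4, 12, 1, 2))  # (56, 88)
```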

The write-down as one big matrix as above works if and only if the chosen Riemann solver yields a linear combination of the \( u^-/u^+ \) or \( x^-/x^+ \) values, respectively. In many sophisticated solvers, it will not be linear. In this case, the system becomes a non-linear equation system.

Data storage

The data storage within a cell is not prescribed by Peano, i.e. you can, in principle, decide to reorder the nodes within the cell, or decide to order the degrees of freedom as SoA (Struct of Arrays) as opposed to AoS (Array of Structs). At the end of the day, Peano maintains an array of doubles per cell and does not really care about the semantics of those. However, we strongly recommend that you stick to the default configuration where possible:

  • Nodes are enumerated following our dimension-generic conventions.
  • Data are stored as an Array of Structs (AoS), i.e. first all the unknowns of the first node, then all of the second, and so forth.

If you change any ordering, you have to take this into account for all associated operators (such as projections onto the faces), but you also will have to amend the plotters which are tied to these conventions.
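The difference between the two orderings boils down to an index computation. The helper names below are hypothetical, but the AoS formula matches the default convention described above (all unknowns of node 0 first, then node 1, and so forth):

```python
def aos_index(node: int, unknown: int, unknowns_per_node: int) -> int:
    """Array of Structs: all unknowns of node 0 first, then node 1, ..."""
    return node * unknowns_per_node + unknown

def soa_index(node: int, unknown: int, number_of_nodes: int) -> int:
    """Struct of Arrays: all first unknowns first, then all second unknowns, ..."""
    return unknown * number_of_nodes + node

K, nodes = 3, 4                # hypothetical: 3 unknowns on each of 4 nodes
print(aos_index(2, 1, K))      # 7: second unknown of node 2
print(soa_index(2, 1, nodes))  # 6
```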

Data on the faces are

  • stored as AoS by default;
  • laid out such that, per face, we first store the left projection \( x^- \), then the right projection \( x^+ \), and then the outcome of the flux or associated reconstruction. That is, all the \( x^- \) are held en bloc before we hold \( x^+ \);
  • enumerated, per \( x^\pm \), such that the storage of the nodes and unknowns follows again the dimension-generic conventions.
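The face layout above translates into three block offsets within the per-face array of doubles (a sketch; the function name is hypothetical):

```python
def face_block_offsets(nodes_per_face: int, unknowns_per_node: int):
    """Per face we store x^- first, then x^+, then the flux/reconstruction
    outcome; each block holds nodes_per_face * unknowns_per_node doubles (AoS)."""
    block = nodes_per_face * unknowns_per_node
    return {"x_minus": 0, "x_plus": block, "outcome": 2 * block}

# Hypothetical: p=3 in 2d, i.e. 4 nodes per face, 2 unknowns per node
print(face_block_offsets(4, 2))  # {'x_minus': 0, 'x_plus': 8, 'outcome': 16}
```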

It is important to recognise that we store three types of quantities per unknown: its left projection, its right projection, and its outcome, which can be either the flux or a reconstructed value. The semantics of the outcome depend upon your operators, i.e. they are not baked into the formalism. Furthermore, there is no assumption about the spacing along the faces. Naturally, you might assume that a Gauss-Lobatto layout within the cell implies a Gauss-Lobatto distribution on the faces. However, no such thing is encoded within the code, as we solely work with operators. They have to encode the correct spacing.

Another assumption is baked into the realisation as a default: the polynomial degree of the facet result \( x_f \) equals the polynomial degree within the cell.

Todo
Different polynomial degrees are simple to realise. This is something to do in the future. Note that the \( \pm \) projections are to be taken from the same space as the cell, as they are projections, but maybe we want to alter this, and even provide completely different degrees for the result of the Riemann solve.

Examples

Multiple examples are shipped with Peano which demonstrate how to use the plain Discontinuous Galerkin solver:

  • benchmarks/multigrid/petsc/poisson/dg.py hosts a simple solver for the Poisson equation. All the rationale is discussed in Poisson equation.