To translate a PDE into an equation system, we express the solution
\( u = \sum _i u_i \phi _i \)
as a sum over shape functions \( \phi _i \). The shape functions in this class are locally continuous, but their linear combination is not. We test this setup against test functions \( v \), where each test function is again one of the shape functions \( \phi _i \). Drawing both shape and test functions from the same function space means that we follow a Ritz-Galerkin approach.
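To make the representation concrete, the following minimal Python sketch (illustrative only, not Peano code) evaluates \( u = \sum _i u_i \phi _i \) on a single 1d unit cell, assuming Lagrange shape functions over equidistant nodes:

```python
# Minimal sketch (not Peano API): evaluate u(x) = sum_i u_i * phi_i(x) on one
# 1d cell [0,1], using Lagrange shape functions over p+1 equidistant nodes.
import numpy as np

p = 3                                   # polynomial degree (assumed)
nodes = np.linspace(0.0, 1.0, p + 1)    # nodal points carrying the weights u_i

def lagrange_basis(i, x):
    """phi_i(x): 1 at nodes[i], 0 at all other nodes."""
    phi = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            phi *= (x - xj) / (nodes[i] - xj)
    return phi

def evaluate_solution(weights, x):
    """u(x) = sum_i u_i phi_i(x) within one cell."""
    return sum(u_i * lagrange_basis(i, x) for i, u_i in enumerate(weights))

weights = np.array([0.0, 1.0, 0.5, 2.0])   # illustrative nodal weights u_i
print(evaluate_solution(weights, 0.25))
```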
Let \( \mathcal{L}(u) \) be the differential operator on the left-hand side of the PDE, i.e. let's assume that we solve
\( \mathcal{L}(u) = f. \)
A weak formulation means that we multiply both sides with a test function and then integrate over the whole domain. As we take all these test functions from the same function space as the shape functions, we know that they all have local support. Consequently, our test integral "shrinks" down to the support of the test function
\( \int _\Omega \mathcal{L}(u) \cdot v dx = \int_{\text{cell}_v} \mathcal{L}(u) \cdot v dx \)
as the product disappears everywhere outside of the cell that hosts the test function of interest.
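In practice, such cell-local integrals are evaluated by quadrature. The sketch below is illustrative only and not Peano's implementation: it approximates \( \int _{\text{cell}_v} f \cdot v dx \) on a unit cell with Gauss-Legendre quadrature, where f and v are placeholder callables.

```python
# Illustrative only: approximate the cell-local integral int_cell f(x) v(x) dx
# on [0,1] with Gauss-Legendre quadrature; f and v are placeholder callables.
import numpy as np

def integrate_over_cell(f, v, order=5):
    # Gauss-Legendre points/weights on [-1,1], mapped to the unit cell [0,1].
    xi, w = np.polynomial.legendre.leggauss(order)
    x = 0.5 * (xi + 1.0)
    return 0.5 * np.sum(w * f(x) * v(x))

f = lambda x: np.ones_like(x)        # right-hand side, assumed constant here
v = lambda x: x * (1.0 - x)          # some test function supported in the cell
print(integrate_over_cell(f, v))     # exact value is 1/6
```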
Once again, we can integrate by parts to get rid of the highest derivative in our partial differential operator \( \mathcal{L}(u) \), but this time, we will get an expression of the form
\begin{eqnarray*} \int _\Omega \mathcal{L}(u) \cdot v dx &=& - \int _\Omega \left( \tilde{\mathcal{L}}(u), \nabla v \right) dx + \int _{\partial \text{cell}_v} (n,\tilde{\mathcal{M}}(u)) v dS(x) \\ &=& - \int _{\text{cell}_v} \left( \tilde{\mathcal{L}}(u), \nabla v \right) dx + \int _{\partial \text{cell}_v} (n,\tilde{\mathcal{M}}(\hat u )) v dS(x) \\ & = & \int _{\text{cell}_v} f \cdot v dx. \end{eqnarray*}
We reduced the order of the derivative within \( \mathcal{L}(u) \) and got a new operator \( \tilde{\mathcal{L}} \). If \( \mathcal{L}(u) \) is dominated by a second derivative, \( \tilde{\mathcal{L}} \) hosts only first-order derivatives. But this is a Pyrrhic victory: We now get an additional PDE term over the cell faces.
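As a concrete example, assume the PDE is the Poisson equation \( -\Delta u = f \), i.e. \( \mathcal{L}(u) = -\Delta u \). Integrating by parts over the cell of the test function yields
\begin{eqnarray*} \int _{\text{cell}_v} -\Delta u \cdot v dx &=& \int _{\text{cell}_v} \left( \nabla u, \nabla v \right) dx - \int _{\partial \text{cell}_v} (n, \nabla \hat u) v dS(x), \end{eqnarray*}
i.e. with the sign convention from above \( \tilde{\mathcal{L}}(u) = \tilde{\mathcal{M}}(u) = -\nabla u \). The second derivative is gone, but the face term with \( \hat u \) remains, and \( \hat u \) on the face is not uniquely defined since the discrete solution jumps across the face.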
Collecting these contributions over all test functions, we end up with a linear equation system
\( A x = b \)
where \( x \) has \( |\mathbb{C}| \cdot K \cdot (p+1)^d \) entries. Here, \( \mathbb{C} \) is the set of cells in the mesh, \( p \) the polynomial degree per cell, and \( d \) the spatial dimension.
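To get a feeling for the sizes involved, the entry count of \( x \) follows directly from these parameters. The numbers below are purely illustrative, and we assume here that \( K \) denotes the number of unknowns carried per node:

```python
# Illustrative only: total size of x, i.e. |C| * K * (p+1)^d, for some example
# numbers. K is assumed here to be the number of unknowns stored per node.
num_cells = 729      # |C|: number of cells in the mesh (example value)
K = 5                # unknowns per node (assumption for this example)
p = 3                # polynomial degree
d = 3                # spatial dimension

dofs = num_cells * K * (p + 1) ** d
print(dofs)          # 729 * 5 * 4^3 = 233280
```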
Before we dive into examples or discuss how to map this formalism onto code, we introduce an operator language. That is, we break down the overall computation into small tasks. These tasks will be found 1:1 in the code later on. Furthermore, we gain some freedom by breaking things down into tiny bits and pieces: We can alter individual ones or piece them together in various ways.
We introduce a few operators (functions) to break down the overall computational scheme into a sequence/set of computations:

- \( A_{cc} \): evaluate the cell-local (volume) contributions of a cell's weights.
- \( A_{cf} \): project the solution within a cell onto its faces.
- \( A_{ff} \): solve the Riemann problem on a face, given the projections from both sides.
- \( A_{fc} \): test the face outcome against the test functions, integrate over the face, and write the contributions back into the cell.
The overall scheme in operator notation thus reads
\( A_{cc} + A_{fc} \circ A_{ff} \circ A_{cf} \)
i.e. we evaluate the cell contributions and add the face contributions, which in turn result from a concatenation of three operators: project the solution onto the face, solve a Riemann problem there, and then take the result and project it back onto the cell. The projection onto the face is a purely geometric thing: Once we know the polynomials chosen within the cell, we can evaluate what the solution looks like as a function of the weights, i.e. the unknowns or degrees of freedom. With that knowledge, we can determine what the solution on the face looks like. We use the same polynomial space there (restricted to the submanifold) and therefore can express the face representation through polynomial weights (left and right of the face), too. For each pair of polynomial weights, we can solve the Riemann problem and store the outcome in further weights. That is, we represent the outcome of the Riemann problem again in the polynomial space on the faces. Finally, we take this representation, test it against the test functions, integrate over the face, and write the contributions back into the cell.
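The sketch below mimics this operator concatenation for a single cell, with random matrices standing in for the actual projection, Riemann, and integration operators (illustrative only, not Peano code; contributions from neighbouring cells are omitted):

```python
# Illustrative only (not Peano code): one cell update in the operator language
#   r = A_cc x_c  +  A_fc( A_ff( A_cf x_c ) ).
import numpy as np

rng = np.random.default_rng(0)
n_cell = 16        # dofs per cell, e.g. (p+1)^d with p=3, d=2
n_face = 16        # face dofs coupled to one cell, e.g. 2*d faces with (p+1)^{d-1} nodes each

A_cc = rng.standard_normal((n_cell, n_cell))   # cell (volume) contributions
A_cf = rng.standard_normal((n_face, n_cell))   # project cell solution onto the faces
A_ff = rng.standard_normal((n_face, n_face))   # face/Riemann operator on the projections
A_fc = rng.standard_normal((n_cell, n_face))   # test + integrate face outcome back into cell

x_c = rng.standard_normal(n_cell)              # weights of the cell polynomial
r   = A_cc @ x_c + A_fc @ (A_ff @ (A_cf @ x_c))
print(r.shape)                                 # (n_cell,)
```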
Effectively, our approach blows up the equation system to solve into
\( Ax = \begin{bmatrix} A_{cc} & A_{fc} & 0 & 0 \\ 0 & id & -A_{ff}|_+ & -A_{ff}|_- \\ -id & 0 & A_{cf}|_+ & 0 \\ -id & 0 & 0 & A_{cf}|_- \end{bmatrix} \begin{bmatrix} x_{c} \\ x_f \\ x^+ \\ x^- \end{bmatrix} = \begin{bmatrix} b_{c} \\ 0 \\ 0 \\ 0 \end{bmatrix} \)
where we have to specify the involved matrices further. The \( x_c \) are the real degrees of freedom, i.e. the weights within the cells which effectively span the solution. The \( x_\pm \) encode the projection of these polynomials onto the faces. These are not real degrees of freedom, as they always result directly from the \( x_c \). The \( x_f \) encode the reconstructed solution on the faces and hold the average of the \( x_\pm \) values. This setup means our total matrix is no longer square. \( A_{cc} \) is square, with \( |\mathbb{C}| \cdot (p+1)^d \) rows, where \( |\mathbb{C}| \) is the number of cells in the mesh. \( A_{fc} \) has the same number of rows, but \( |\mathbb{F}| \cdot (p+1)^{d-1} \) columns, since this part has to "hit" \( x_f \). \( A_{ff} \) has \( |\mathbb{F}| \cdot (p+1)^{d-1} \) rows, and twice as many columns (note that \( | x_\pm| \ = \ 2 |x_f| \) ). Finally, \( A_{cf} \) has \( |\mathbb{C}| \cdot (p+1)^d \) rows and \( 2|\mathbb{F}| \cdot (p+1)^{d-1} \) columns, since this part "hits" \( x_\pm \).
Our global matrix is no longer square. We have \( 2 |\mathbb{C}| \cdot (p+1)^d + |\mathbb{F}| \cdot (p+1)^{d-1} \) rows and \( |\mathbb{C}| \cdot (p+1)^d + 3|\mathbb{F}| \cdot (p+1)^{d-1} \) columns.
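The block shapes from the discussion above can be written down explicitly. The cell and face counts below are example values only; the shapes follow the sizes stated for the individual matrices:

```python
# Illustrative bookkeeping of the block sizes stated above (not Peano code).
num_cells = 729                       # |C| (example value)
num_faces = 2916                      # |F| (example value)
p, d = 3, 3

n_c  = num_cells * (p + 1) ** d          # size of x_c
n_f  = num_faces * (p + 1) ** (d - 1)    # size of x_f
n_pm = 2 * n_f                           # size of (x^+, x^-) together

shape_A_cc = (n_c, n_c)      # square cell block
shape_A_fc = (n_c, n_f)      # couples x_f into the cell equations
shape_A_ff = (n_f, n_pm)     # maps the projections x^+/x^- to x_f
shape_A_cf = (n_c, n_pm)     # block that "hits" x^+/x^-
print(shape_A_cc, shape_A_fc, shape_A_ff, shape_A_cf)
```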
The data storage within a cell is not prescribed by Peano, i.e. you can, in principle, decide whether you want to reorder the nodes within the cell, or you can decide to store the degrees of freedom as SoA (structure of arrays) as opposed to AoS (array of structs). At the end of the day, Peano maintains an array of doubles per cell and does not really care about the semantics of those. However, we strongly recommend that you stick to the default configuration where possible:
If you change any ordering, you have to take this into account for all associated operators (such as the projections onto the faces), and you will also have to amend the plotters, which are tied to these conventions.
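The following sketch (illustrative, not Peano's actual default convention) shows what AoS versus SoA indexing means for the flat per-cell array of doubles:

```python
# Illustrative only: the same flat per-cell array of doubles interpreted as
# AoS (all unknowns of node 0, then node 1, ...) vs SoA (all nodes of unknown
# 0, then unknown 1, ...). Which ordering the operators and plotters expect is
# a convention; if you deviate from it, every operator has to follow suit.
nodes_per_cell = 16        # e.g. (p+1)^d
unknowns       = 5         # quantities stored per node (example value)

def aos_index(node, unknown):
    return node * unknowns + unknown

def soa_index(node, unknown):
    return unknown * nodes_per_cell + node

cell_data = [0.0] * (nodes_per_cell * unknowns)   # flat array of doubles per cell
cell_data[aos_index(node=2, unknown=1)] = 1.0     # same logical entry, ...
print(soa_index(node=2, unknown=1))               # ... different position under SoA
```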
Data on the faces is organised as follows.
It is important to recognise that we store three types of quantities per unknown: its left projection, its right projection, and its outcome, which can be either a flux or a reconstructed incoming value. The semantics of the outcome depend upon your operators, i.e. they are not baked into the formalism. Furthermore, there is no assumption about the spacing along the faces. Naturally, you might assume that a Gauss-Lobatto layout within the cell implies a Gauss-Lobatto distribution on the faces. However, no such thing is encoded within the code, as we work solely with operators; they have to encode the correct spacing.
Another assumption is baked into the realisation as a default: The polynomial degree of the face result \( x_f \) equals the polynomial degree within the cell.
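A hedged sketch of such a face data layout: per unknown we keep the left projection, the right projection, and the outcome. The face operator is realised here as plain averaging, mirroring the description of \( x_f \) above; names and sizes are illustrative, and a real solver may store a flux instead.

```python
# Illustrative only: per face we keep three blocks per unknown - the projection
# from the left cell, the projection from the right cell, and the outcome of
# the face operator. Here the outcome is the average of both projections.
import numpy as np

nodes_per_face = 4          # e.g. (p+1)^{d-1}
unknowns       = 5          # example value

left  = np.zeros((unknowns, nodes_per_face))   # projection from the left cell
right = np.zeros((unknowns, nodes_per_face))   # projection from the right cell

def face_outcome(left, right):
    """One possible realisation of the face operator: plain averaging."""
    return 0.5 * (left + right)

outcome = face_outcome(left, right)
print(outcome.shape)
```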
Multiple examples shipped with Peano demonstrate how to use the plain Discontinuous Galerkin solver: