Peano
We explain all of the numerics in a hands-on style for the toy problem
\( -\Delta u = f. \)
We don't solve this equation directly (pointwise); instead, we test it against a test function \( \phi \) and integrate:
\( \int (-\Delta u, \ \phi) dx \ = \ \int (f, \ \phi) dx \ \ \ \forall \phi. \)
If this holds for all test functions \( \phi \) we can think of, then the solution also satisfies the original PDE exactly. Otherwise, it is a weak solution. As the left-hand side is an integral over a product of two functions, we can integrate by parts and get rid of the second derivative: The original solution to the PDE has to have a second derivative everywhere which matches \( f \). The weak solution only has to have a first derivative almost everywhere, as we roll one derivative over to the test function and integrate. Small sets (nullsets) of points where there is no first derivative then don't count anymore.
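Concretely, integration by parts moves one derivative onto the test function and leaves a surface term behind:
\( \int (-\Delta u, \ \phi) dx \ = \ \int (\nabla u, \ \nabla \phi) dx \ - \ \int_{\partial \Omega} (\nabla u \cdot \hat{n}, \ \phi) dS(x). \)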
We illustrate the next steps in one dimension: Let the domain be split into cells. We create shape functions over the domain. Each shape function is a hat, i.e. it is one in one vertex and then decreases linearly to zero over the two adjacent cells. So we end up with functions like the green ones below:
Any weighted linear combination of these hat functions yields a global function. We try to construct a weak solution this way. For this, we also have to decide where the test functions come from. A natural choice, leading to Ritz-Galerkin Finite Elements, is to use the same test and shape functions, i.e. the global linear combination spanning the solution has to fulfil the weak formulation for every single test against one of its shape functions.
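Written out, the ansatz for the weak solution reads
\( u(x) \ = \ \sum_i u_i \ \phi_i(x), \)
where the \( u_i \) are the unknown weights of the shape functions.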
\begin{eqnarray*} \int_\text{cell} ( \nabla u, \ \nabla \phi ) dx - \int_\text{face} (\nabla u \cdot \hat{n}, \ \phi) dS(x) \ &=& \ \int (f, \ \phi) dx \ \ \ \forall \phi \\ \sum_i u_i \int_\text{cell} (\nabla \phi_i, \ \nabla \phi) dx - \int_\text{face} (\nabla u \cdot \hat{n}, \ \phi) dS(x) \ &=& \ \int (f, \ \phi) dx \ \ \ \forall \phi \end{eqnarray*}
where in the second line we replaced \( u \) by its expansion in basis functions.
From here onwards, we drop the surface integral term: all the surface integrals drop out for the boundary conditions we consider here. They don't drop out if we have boundary conditions that prescribe a derivative. But that's a different story for a more advanced textbook.
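With the surface terms gone and the test functions drawn from the same set as the shape functions, testing against each \( \phi_j \) yields one equation per shape function:
\( \sum_i u_i \int (\nabla \phi_i, \ \nabla \phi_j) dx \ = \ \int (f, \ \phi_j) dx \ \ \ \forall j. \)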
The basis functions that we use in this example are piecewise linear, defined per cell (each cell is an interval of width \( h \)):
\begin{eqnarray*} \phi_i(x) \ = \ \begin{cases} \frac{1}{h}(x - x_{i-1}) &\text{if} \ x_{i-1} \ < \ x < \ x_i \\ \frac{1}{h}(x_{i+1} - x) &\text{if} \ x_{i} \ < \ x < \ x_{i+1} \\ 0 &\text{otherwise} \end{cases} \end{eqnarray*}
And from this we can write down \( \nabla \phi_i \):
\begin{eqnarray*} \nabla \phi_i(x) \ = \ \begin{cases} \frac{1}{h} &\text{if} \ x_{i-1} \ < \ x < \ x_i \\ - \frac{1}{h} &\text{if} \ x_{i} \ < \ x < \ x_{i+1} \\ 0 &\text{otherwise} \end{cases} \end{eqnarray*}
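As a small illustration, the two formulas translate directly into code. This is a sketch assuming a uniform grid with \( x_i = ih \); the function names are made up.

```cpp
// Hat function phi_i on a uniform grid with x_i = i*h.
double phi(int i, double x, double h) {
  const double xi = i * h;
  if (x > xi - h && x <= xi) return (x - (xi - h)) / h;  // rising flank over cell c_i
  if (x > xi && x < xi + h)  return ((xi + h) - x) / h;  // falling flank over cell c_{i+1}
  return 0.0;
}

// Its derivative, defined almost everywhere (the kinks form a nullset).
double gradPhi(int i, double x, double h) {
  const double xi = i * h;
  if (x > xi - h && x < xi) return  1.0 / h;
  if (x > xi && x < xi + h) return -1.0 / h;
  return 0.0;
}
```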
So now we can finally compute the integrals. Here we integrate over the entire domain (every cell):
\begin{eqnarray*} \int (\nabla \phi_i, \ \nabla \phi_i) dx \ &=& \ \int_{c_{i}} \frac{1}{h} \cdot \frac{1}{h} dx + \int_{c_{i+1}} \frac{-1}{h} \cdot \frac{-1}{h} dx \ = \ \frac{2}{h} \\ \int (\nabla \phi_i, \ \nabla \phi_{i+1}) dx \ &=& \ \int_{c_{i+1}} \frac{-1}{h} \cdot \frac{1}{h} dx \ = \ -\frac{1}{h} \end{eqnarray*}
All other integrals are 0, due to the lack of overlap of the supports of the shape functions. We used implicitly that each cell has a width of \( h \). One beautiful property of all of this is that we can precompute all of these integrals, as we know exactly what the shape functions look like.
We end up with a linear equation system \( Au \ = \ f \). The entries of \( u \) are the weights of the shape functions.
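To make the whole pipeline concrete, here is a minimal, self-contained sketch for \( -u'' = f \) on the unit interval with homogeneous Dirichlet conditions. It uses the tridiagonal stiffness entries \( 2/h \) and \( -1/h \) derived above and runs a plain Jacobi iteration; all names and the choice of \( f \) are illustrative.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Solve -u'' = f on (0,1) with u(0) = u(1) = 0, using n interior hat
// functions on a uniform grid of cell width h = 1/(n+1).
int main() {
  const int    n  = 9;
  const double h  = 1.0 / (n + 1);
  const double pi = 3.14159265358979323846;

  // Manufactured example: f = pi^2 sin(pi x), so the exact solution is sin(pi x).
  auto f = [pi](double x) { return pi * pi * std::sin(pi * x); };

  // Load vector: b_i = int f * phi_i dx, approximated by h * f(x_i)
  // (each hat function integrates to h).
  std::vector<double> b(n), u(n, 0.0), uNew(n);
  for (int i = 0; i < n; i++) b[i] = h * f((i + 1) * h);

  // A is tridiagonal: A_ii = 2/h, A_{i,i-1} = A_{i,i+1} = -1/h.
  // Jacobi: u_i <- (b_i - sum_{j != i} A_ij u_j) / A_ii.
  for (int it = 0; it < 2000; it++) {
    for (int i = 0; i < n; i++) {
      double offDiag = 0.0;
      if (i > 0)     offDiag -= u[i - 1] / h;
      if (i < n - 1) offDiag -= u[i + 1] / h;
      uNew[i] = (b[i] - offDiag) / (2.0 / h);
    }
    u.swap(uNew);
  }

  // The nodal weights should be close to sin(pi x_i).
  for (int i = 0; i < n; i++)
    std::cout << u[i] << " vs " << std::sin(pi * (i + 1) * h) << "\n";
}
```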
Higher-order CG (continuous Galerkin) solvers are solvers where each cell of the domain hosts a polynomial. Along the faces, these polynomials are glued together with their neighbours such that they are continuous. That means: the derivative along the face normal may not exist, but the function does not jump either.
In the illustration above, two adjacent cells host a polynomial each. The combined solution is continuous as we cross the connecting face along its normal (the face is marked in blue), but it is not differentiable along this direction. We only have a continuous function!
The sketch kind of suggests that continuous higher order is intrinsically tied to Gauss-Lobatto nodal points, i.e. an arrangement of the unknowns where nodes are placed on the faces. This is not true. There's no restriction on the faces, even though it is obviously easier to realise continuous Galerkin if we have Gauss-Lobatto points.
The sketch shows an example with quadratic shape functions, as we have three nodes per cell per direction. If we had only two, we'd end up with d-linear shape functions. That is, higher-order CG is a more general formalism, which degenerates to d-linear shape functions once we have \( p=1 \). In this special case, it is reasonable to forget about all the realisation details below and to use the description from above.
If we store the weights of the shape functions within the vertices and faces, we naturally get a continuous global function: The weight of a function on a face determines the height of this function at this point both from the left cell's and the right cell's point of view.
The block structure yields a global equation system that resembles
\( \left( \begin{array}{ccccc} A_{11} & A_{12} & A_{13} & A_{14} & \cdots \\ A_{21} & A_{22} & A_{23} & A_{24} & \cdots \\ A_{31} & A_{32} & A_{33} & A_{34} & \cdots \\ A_{41} & A_{42} & A_{43} & A_{44} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right) \left( \begin{array}{c} u_1 \\ u_2 \\ u_3 \\ \vdots \end{array} \right) = M \left( \begin{array}{c} b_1 \\ b_2 \\ b_3 \\ \vdots \end{array} \right) \)
where each entry \( u_i \) is a small vector in itself that hosts all weights from its respective cell \( i \). As long as we stick to Gauss-Lobatto, only the shape functions directly on the facets span multiple cells. All other shape functions have cell-local support. Therefore, the matrix blocks \( A_{ii} \) are dense, but only those \( A_{ij} \) hold entries where cell \( i \) and cell \( j \) are connected through a facet. All arguments hold for the right-hand matrix \( M \), too.
We can evaluate \( Au \) in a matrix-free fashion, and for all nodal points within the cell, we can do this within touchCellFirstTime(). The points on the faces are a little bit more difficult: Here, we need to accumulate the residual as well as the diagonal, and then run a Jacobi update. A direct evaluation in one sweep does not work. We can, obviously, also use Jacobi everywhere.
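To make the face handling concrete, here is a framework-agnostic sketch. All types and names are hypothetical illustrations, not Peano's actual API: each adjacent cell adds its share to a face point's residual and diagonal, and once all cells have contributed, one Jacobi update is applied.

```cpp
// Hypothetical bookkeeping per face-located unknown: several adjacent cells
// contribute to the same matrix row, so we accumulate before we update.
struct FaceDof {
  double residual = 0.0;  // accumulates b_k - sum_j A_kj u_j
  double diagonal = 0.0;  // accumulates A_kk
  double value    = 0.0;  // current iterate u_k
};

// Called once per adjacent cell (e.g. from within touchCellFirstTime()):
// add this cell's share of the residual and of the diagonal entry.
void accumulateFromCell(FaceDof& dof, double residualShare, double diagonalShare) {
  dof.residual += residualShare;
  dof.diagonal += diagonalShare;
}

// Once all adjacent cells have contributed, apply one damped Jacobi step.
void jacobiUpdate(FaceDof& dof, double omega = 0.8) {
  dof.value    += omega * dof.residual / dof.diagonal;
  dof.residual  = 0.0;  // reset the accumulators for the next sweep
  dof.diagonal  = 0.0;
}
```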
Per cell, we define a matrix \( \chi \) which gives us only the interior points of a cell. That is, we have \( \chi \in \mathbb{R}^{(p-2)^d \times p^d} \). Here, p is the number of nodes per axis. d is the spatial dimension. For \( p = 4 \) and \( d = 2 \), we obtain
\( \left( \begin{array}{cccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{array} \right) \)
When we enter the cell, we evaluate
\( u_i^{(n+0.5)} = u_i^{(n)} + \chi ^T \left( \chi A_i \chi^T \right) ^{-1} \chi (b_i - A_i u_i^{(n)}) \)
which updates all the interior points of a cell. After that, we can update the remaining points on the faces with a standard Jacobi iteration to obtain \( u_i^{(n+1)} \).
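Expressed with a dense linear-algebra library, one interior update could look as follows. This is a sketch only: it uses Eigen (an assumption, not mandated by the text), fixes \( d=2 \) with lexicographic node ordering, and takes the cell-local system \( A_i \), \( b_i \), \( u_i \) as given.

```cpp
#include <Eigen/Dense>
#include <vector>

// Build the selection matrix chi that picks the interior nodes of a cell
// with p nodes per axis in d = 2 (lexicographic node ordering assumed).
Eigen::MatrixXd buildChi(int p) {
  std::vector<int> interior;
  for (int y = 1; y < p - 1; y++)
    for (int x = 1; x < p - 1; x++)
      interior.push_back(y * p + x);
  Eigen::MatrixXd chi = Eigen::MatrixXd::Zero(interior.size(), p * p);
  for (std::size_t row = 0; row < interior.size(); row++)
    chi(row, interior[row]) = 1.0;
  return chi;
}

// One interior update: u_i <- u_i + chi^T (chi A_i chi^T)^{-1} chi (b_i - A_i u_i).
void updateInterior(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                    Eigen::VectorXd& u, int p) {
  const Eigen::MatrixXd chi  = buildChi(p);
  const Eigen::VectorXd r    = chi * (b - A * u);            // restricted residual
  const Eigen::MatrixXd Aint = chi * A * chi.transpose();    // interior block of A_i
  u += chi.transpose() * Aint.ldlt().solve(r);               // prolong the correction
}
```

For \( p = 4 \), buildChi() reproduces exactly the \( 4 \times 16 \) matrix shown above: the interior nodes sit at the lexicographic positions 6, 7, 10, and 11.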