Peano 4 PETSc projects

A PETSc project is a wrapper around a Peano 4 project, i.e. it holds a Peano 4 Python project. Each call to the PETSc API builds up an internal data representation which is eventually mapped onto a native Peano 4 project. To use PETSc with Peano, you need to configure the code with

./configure --enable-finiteelements --with-petsc

and you will need to make additions to CXXFLAGS, LDFLAGS and LIBS to ensure that we can compile with PETSc and link appropriately:

./configure CXXFLAGS="... -I$PETSC_DIR/include" LDFLAGS="... -Wl,-rpath,$PETSC_DIR/lib -L$PETSC_DIR/lib" LIBS="... -lpetsc"

Here, PETSC_DIR points to your PETSc installation, i.e. the directory passed into --prefix (see remarks below). Alternatively, replace PETSC_DIR with the actual path directly. Please do not omit the -Wl,... parts: if you do, your code will link, but you will get a segmentation fault the moment you try to allocate any PETSc object. Once the flags are in place, any Peano Python script will pick up the right paths automatically.

Please note that most off-the-shelf PETSc packages in mainstream Linux distributions such as Ubuntu are compiled with MPI. In this case, you have to build Peano with MPI as well, as PETSc will otherwise complain.

In principle, that's all you have to do. In practice, getting PETSc to work can be cumbersome. The flags above assume that you have PETSc ready to use on your system (e.g. through a module file). If this is not the case, or if you want to use a different software stack than the one PETSc was built against (a different compiler, say, or no MPI), you will need to follow some of the steps below.

Preparing PETSc on your system

Downloading PETSc for your distribution

Most Linux distributions offer PETSc off-the-shelf through their package managers. These versions should be absolutely sufficient for our work within Peano.

  • Ubuntu: Install
        apt install petsc-dev
    
    and then set
        export PETSC_DIR=/usr/lib/petscdir/petsc3.15/x86_64-linux-gnu-real
    
    or similar.

The problem with these pre-built PETSc installations is that they are compiled against one specific MPI implementation picked by the distribution. If you want to use a different MPI (or no MPI at all), you will likely run into problems and have to build your own PETSc version at some point.

Building PETSc locally

Alternatively, you can build PETSc directly from the sources. The steps are described on their website (https://petsc.org/release/install/download/), but we summarise the important ones here.

  1. Clone their repo. It's best to do this somewhere away from the Peano directory.

    git clone -b release https://gitlab.com/petsc/petsc.git petsc
  2. Change into this directory.

  3. Run the configure script inside the PETSc directory:

    ./configure --with-cc=(C-COMPILER) --with-cxx=(C++-COMPILER) --with-fc=0 --with-mpi=0 --prefix=/opt/petsc

    The command above disables the Fortran support and also MPI. The prefix determines where make install will later place the library; in this example, PETSc ends up in /opt/petsc.

  4. Compile the actual code:

    make
  5. Install PETSc:

    make install

PETSc's documentation makes a lot of fuss about various architectures and systems, but I usually keep things simple. You might, however, prefer more sophisticated calls such as

make PETSC_DIR=/path/to/petsc PETSC_ARCH=system_architecture all

but most of the time, I don't need these.

From here on, we will use a variable PETSC_DIR which holds the prefix directory. In the example above, we would have

export PETSC_DIR=/opt/petsc

Whenever we use PETSC_DIR, you can replace it with the real path if you don't want to work with additional variables. However, the PETSc guidelines recommend that you stick to the variable. As we need it all the time, it might even be worth adding PETSC_DIR and PETSC_ARCH to .bashrc / .bash_profile.

Amending libraries and search paths

On most systems, the simple manipulation of the three flags CXXFLAGS, LDFLAGS and LIBS should be sufficient. However, you might need some more complex manipulations. The key insight here is that, depending on how PETSc itself was built, it will need further libraries. You can find a lot of these in the file $PETSC_DIR/lib/petsc/conf/variables. The build also reports on them just after the compile has finished.

If you need further flags, you typically add

CXXFLAGS="(...) -I$PETSC_DIR/include -I$PETSC_DIR/$PETSC_ARCH/include"
LDFLAGS="(...) -lpetsc -Wl,-rpath,$PETSC_DIR/$PETSC_ARCH/lib -L$PETSC_DIR/$PETSC_ARCH/lib -L/path/to/gcc/libraries "

The include and library paths here come from PETSc itself and require PETSC_DIR and PETSC_ARCH to be set already. The last -L flag is the path to the gcc runtime libraries. As an example, on a local machine, this library path looks like:

-L/usr/lib/gcc/x86_64-linux-gnu

We can then put these together:

./configure ... --enable-finiteelements --with-petsc CXXFLAGS="$CXXFLAGS" LDFLAGS="$LDFLAGS"

The following is an example call to the configure script that works on cosma (assuming the environment variables are set). We use the Intel compiler.

./configure CC=icc CXX=icpx --enable-loadbalancing --enable-exahype --enable-blockstructured --enable-finiteelements --with-mpi=mpiicpc --with-multithreading=omp --with-petsc CXXFLAGS="--std=c++20 -w -fopenmp -g -funroll-loops -O3 -g3 -I$PETSC_DIR/include -I$PETSC_DIR/$PETSC_ARCH/include" LDFLAGS="-fopenmp -Wl,-rpath,$PETSC_DIR/$PETSC_ARCH/lib -L$PETSC_DIR/$PETSC_ARCH/lib -L/cosma/local/gcc/11.1.0/lib64 -lpetsc"

Typical PETSc code structure

Algorithmic steps

Each PETSc project consists of the following steps (a sketch of a corresponding driver script follows the list):

  1. Create the mesh. The mesh construction (and the associated load balancing) can require multiple mesh sweeps, so this logical step can be realised through multiple grid traversals.
  2. Enumerate all the grid entities, i.e. assign each vertex/face/cell (depending on solver type) a unique number. This step is realised through petsc.actionsets.EnumerateDoFs, which also takes the boundary conditions into account. Once complete, we know the dimensions of the global matrices and vectors.
  3. Initialise the mesh, i.e. assign each vertex/face/cell either a fixed value or a right-hand side and material parameters. Depending on the discretisation, this step is realised by different action sets such as petsc.actionsets.InitVertexDoFs.
  4. Initialise PETSc, i.e. inform the library of the size of the required matrices and vectors. This step does not require a mesh traversal of its own.
  5. Assemble the linear equation system and the right-hand side. This is the tricky part and depends strongly on the solver type chosen. Therefore, the assembly is typically tied to the particular solver class, i.e. it can be found in the same source code file.
  6. Invoke PETSc's solver. This step does not require a mesh traversal.
  7. Run over the mesh and copy the outcome of the PETSc solve into the mesh, so we can create a geometric model with the results.
  8. Visualise the outcome.
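
To make these steps concrete, here is a minimal sketch of a driver script. The constructor argument and the solver class MySolver are illustrative assumptions rather than the real API; the class description of petsc.Project documents the actual signatures.

import petsc

# Create the PETSc project. The constructor argument shown here is a
# hypothetical placeholder; see petsc.Project's class description.
project = petsc.Project(namespace=["examples", "poisson"])

# Register a PDE solver. MySolver stands for whatever solver class you
# use. add_solver() also lets the solver hook its action sets into the
# observers behind the algorithmic steps above.
project.add_solver(MySolver())

# Map the whole thing onto a native Peano 4 project (details below).
peano4_project = project.generate_Peano4_project()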

Details on these steps are given in the class description of petsc.Project. It maps each step onto an observer, even though two of the steps do not require a mesh traversal.

Code structure

  • petsc.Project Base class behind any PETSc project. This class holds one observer per algorithmic step and some global properties (such as the domain size), and it administers a peano4.Project. It also holds an instance of petsc.PETScMain (see the sketch after this list).
  • petsc.PETScMain represents the main C++ file. It is responsible for invoking the right sequence of calculation steps.
  • The project holds an arbitrary number of PDE solvers.
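
This composition can be summarised in a simplified, purely illustrative sketch (the attribute names are made up; the real class is documented in petsc.Project):

class Project:  # simplified stand-in for petsc.Project, not the real class
    def __init__(self):
        self._peano4_project = peano4.Project(...)    # the administered Peano 4 project
        self._main           = petsc.PETScMain(self)  # represents the main C++ file
        self._observers      = {}                     # one observer per algorithmic step
        self._solvers        = []                     # arbitrary number of PDE solvers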

When we add a solver to the PETSc project via petsc.Project.add_solver(), the project memorises this solver, but it also asks the solver to add its action sets to each observer. The project holds, for example, an observer for the plotting, and its main will invoke this plotting as the very last step. However, the project cannot know what to plot. When you add a solver, the project takes this solver object and asks it: "hey, can you please add whatever you want to do to this observer so we get meaningful output". The solver then adds its action set to the observer, as sketched below.
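
A minimal sketch of this hand-shake, with purely illustrative method and class names (the real hooks are defined by the solver base class):

class MySolver:  # illustrative skeleton, not the real solver base class
    def add_to_plot(self, observer):
        # The project hands us its plotting observer; we append whatever
        # action set produces meaningful output for this discretisation.
        observer.add_action_set(MyPlotActionSet(self))

add_solver() triggers the analogous delegation for every observer, not only the one behind the plotting step.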

When we invoke petsc.Project.generate_Peano4_project(), we obtain one Peano 4 project which hosts all of these observers and action sets, the main, and so forth. The original hierarchical information on the number of solvers and so forth is lost: this is a realisation of our project, not a logical representation.
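
A short sketch of this final hand-over; the generate() and build() calls on the resulting Peano 4 project are assumptions about peano4.Project's interface:

# After this call, only the flat Peano 4 realisation remains; the
# PETSc project's solver hierarchy is no longer visible.
peano4_project = project.generate_Peano4_project()
peano4_project.generate()   # assumed: write out the generated C++ code
peano4_project.build()      # assumed: compile the generated code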