Peano
tarch::multicore Namespace Reference

This page describes Peano 4's multithreading namespace. More...

Namespaces

namespace  native
 
namespace  orchestration
 
namespace  taskfusion
 Task fusion.
 
namespace  tbb
 

Data Structures

class  BooleanSemaphore
 
class  Core
 Core. More...
 
class  Lock
 Create a lock around a boolean semaphore region. More...
 
class  MultiReadSingleWriteLock
 Create a lock around a boolean semaphore region. More...
 
class  MultiReadSingleWriteSemaphore
 Read/write Semaphore. More...
 
class  RecursiveLock
 Create a lock around a boolean semaphore region. More...
 
class  RecursiveSemaphore
 Recursive Semaphore. More...
 
class  Task
 Abstract super class for a job. More...
 
class  TaskComparison
 Helper class if you want to administer tasks within a queue. More...
 
class  TaskWithCopyOfFunctor
 Frequently used implementation for a job with a functor. More...
 
class  TaskWithoutCopyOfFunctor
 Frequently used implementation for a job with a functor. More...
 

Typedefs

using TaskNumber = int
 

Functions

int getNumberOfUnmaskedThreads ()
 This routine runs through the Unix thread mask and counts how many threads SLURM allows a code to use.
 
std::string printUnmaskedThreads ()
 Creates a string representation of those threads which are available to the processes.
 
void initSmartMPI ()
 Switch on SmartMPI.
 
void shutdownSmartMPI ()
 
void setOrchestration (tarch::multicore::orchestration::Strategy *realisation)
 
tarch::multicore::orchestration::Strategy * swapOrchestration (tarch::multicore::orchestration::Strategy *realisation)
 Swap the active orchestration.
 
tarch::multicore::orchestration::Strategy & getOrchestration ()
 
void spawnTask (Task *task, const std::set< TaskNumber > &inDependencies=tarch::multicore::NoInDependencies, const TaskNumber &taskNumber=tarch::multicore::NoOutDependencies)
 Spawns a single task in a non-blocking fashion.
 
void waitForTasks (const std::set< TaskNumber > &inDependencies)
 Wait for set of tasks.
 
void waitForTask (const int taskNumber)
 Wrapper around waitForTasks() with a single-element set.
 
void spawnAndWait (const std::vector< Task * > &tasks)
 Fork-join task submission pattern.
 
void waitForAllTasks ()
 

Variables

constexpr TaskNumber NoOutDependencies = -1
 
const std::set< TaskNumber > NoInDependencies = std::set<TaskNumber>()
 

Detailed Description

This page describes Peano 4's multithreading namespace.

A more high-level overview is provided through Multicore programming.

Writing your own code with multithreading features

If you want to distinguish between multicore and non-multicore variants in your code, please guard the corresponding code sections with

#if defined(SharedMemoryParallelisation)

With the symbol SharedMemoryParallelisation, you make your code independent of OpenMP, TBB or C++ threading.
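
A minimal sketch of such a guard. MyTask and doMyWork() are purely illustrative placeholders and not part of the namespace:

#if defined(SharedMemoryParallelisation)
  // Shared memory build: hand the work over to the tasking layer.
  // MyTask is a hypothetical subclass of tarch::multicore::Task.
  tarch::multicore::spawnTask( new MyTask() );
#else
  // Serial build: execute the work directly without any tasking overhead.
  doMyWork();   // hypothetical plain routine doing the same work
#endif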

Multicore architecture

The multithreading environment is realised through a small set of classes. User codes work with these classes. Each type/function has an implementation within src/multicore. This implementation is a dummy that ensures that all code works properly without any multithreading support. Subdirectories hold alternative implementations (backends) which are enabled once the user selects a certain multithreading implementation variant, i.e. depending on the ifdefs set, one of the subdirectories is used. Some implementations introduce further headers, but user code is never supposed to work against functions or classes held within subdirectories.

Backends

OpenMP

If you want to use the OpenMP backend, you have to embed your whole main loop within an

#pragma omp parallel
#pragma omp single
{
  // ... your main loop ...
}

environment. Furthermore, you will have to use

export OMP_NESTED=true

on some systems, as we rely heavily on nested parallelism.
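
A minimal sketch of how the main loop can be embedded; runMainLoop() is a placeholder for your actual traversal loop and the initialisation/shutdown calls are omitted:

int main(int argc, char** argv) {
  // ... initialise MPI, cores, logging, ...
  #pragma omp parallel
  #pragma omp single
  {
    runMainLoop();   // placeholder: all of Peano's tasking is issued from within this single region
  }
  // ... shutdown ...
  return 0;
}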

Statistics

If the Peano statistics are enabled, the tasking back-end will sample several quantities such as "tarch::multicore::bsp-concurrency-level" or "tarch::multicore::spawned-tasks". Depending on the chosen back-end, you might get additional counters on top. Please start to look into tarch/multicore/Tasks.cpp for an overview of the most generic counters.

Tasking model in Peano

Peano models all of its internals as tasks. Each Peano 4 task is a subclass of tarch::multicore::Task. However, these classes might not be mapped 1:1 onto native tasks. In line with other APIs such as oneTBB, we distinguish different task types or task graph types, respectively:

  • Tasks. The most generic type of tasks is submitted via spawnTask(). Each task can be assigned a unique number and incoming dependencies. The number in return can be used to specify outgoing dependencies.
  • Fork-join tasks. These are created via tarch::multicore::spawnAndWait() which accepts a sequence of tasks. They are all run in parallel but then wait for each other, i.e. we define a tree (sub)task graph. With fork-join calls, we mirror the principles behind bulk-synchronous programming (BSP).
  • Fusable tasks. A subtype of the normal tasks.
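
The fork-join pattern from the second bullet translates into code roughly as follows; MyTask is a hypothetical subclass of tarch::multicore::Task:

std::vector< tarch::multicore::Task* > tasks;
tasks.push_back( new MyTask(/* ... */) );
tasks.push_back( new MyTask(/* ... */) );
// All tasks run in parallel; the call returns once every task has completed.
// Ownership of the task objects goes over to the runtime.
tarch::multicore::spawnAndWait( tasks );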

Tasks with dependencies

In Peano, task DAGs are built up along the task workflow. That is, each task that is not used within a fork-join region, i.e. each totally free task, is assigned a unique number when we spawn it.

Whenever we define a task, we can also define its dependencies. These are pure completion dependencies: you tell the task system which tasks have to be completed before the currently submitted one is allowed to start. A DAG can thus be built up layer by layer. We start with the first task. This task might be immediately executed - we do not care - and then we continue to work our way down through the graph, adding node by node.

In line with OpenMP and TBB - where we significantly influenced the development of the dynamic task API - outgoing dependencies should be declared before we use them.
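
A sketch of such a layered construction; the task numbers are arbitrary and MyTask is again a hypothetical subclass of tarch::multicore::Task:

const tarch::multicore::TaskNumber taskA = 1;
const tarch::multicore::TaskNumber taskB = 2;

// A has no incoming dependencies, but later tasks may refer to its number.
tarch::multicore::spawnTask( new MyTask(/* ... */), tarch::multicore::NoInDependencies, taskA );

// B may only start once A has completed.
tarch::multicore::spawnTask( new MyTask(/* ... */), { taskA }, taskB );

// Block until B (and therefore A) has terminated.
tarch::multicore::waitForTask( taskB );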

Orchestration and (auto-)tuning

The task orchestration is controlled via an implementation of tarch::multicore::orchestration::Strategy that you set via tarch::multicore::setOrchestration(). With the strategy, you can control the fusion of tasks and their shipping onto GPUs. Whenever we hit a fork-join section, i.e. encounter tarch::multicore::spawnAndWait(), you can also pick from different scheduling variants:

  1. Execute the tasks serially. Do not exploit any concurrency.
  2. Handle the tasks in parallel. Any fusion is done within the runtime.
  3. Run through the forked tasks in parallel and afterwards fuse those tasks created manually, i.e. you enforce when the fusion happens.

The orchestration of choice can control this behaviour via tarch::multicore::orchestration::Strategy::paralleliseForkJoinSection().
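
A usage sketch, assuming MyStrategy is a user-defined implementation of tarch::multicore::orchestration::Strategy:

// Install a custom orchestration; the tasking runtime consults it from now on.
tarch::multicore::setOrchestration( new MyStrategy() );

// Inspect the strategy that is currently active.
tarch::multicore::orchestration::Strategy& active = tarch::multicore::getOrchestration();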

Typedef Documentation

◆ TaskNumber

Definition at line 157 of file multicore.h.

Function Documentation

◆ getNumberOfUnmaskedThreads()

int tarch::multicore::getNumberOfUnmaskedThreads ( )

This routine runs through the Unix thread mask and counts how many threads SLURM allows a code to use.

It returns this count. If you use multiple MPI ranks per node, each rank usually gets the permission to access the same number of cores exclusively.

Definition at line 32 of file Core.cpp.

References u.

◆ getOrchestration()

tarch::multicore::orchestration::Strategy & tarch::multicore::getOrchestration ( )

Definition at line 75 of file multicore.cpp.

Referenced by tarch::multicore::taskfusion::ProcessReadyTask::run().


◆ initSmartMPI()

void tarch::multicore::initSmartMPI ( )

Switch on SmartMPI.

If you use SmartMPI, then the bookkeeping registers the local scheduling. If you don't use SmartMPI, this operation becomes a nop, i.e. you can always call it and configure will decide whether it does something useful.

Definition at line 33 of file multicore.cpp.

References tarch::mpi::Rank::getInstance(), and tarch::mpi::Rank::setCommunicator().

Referenced by main().


◆ printUnmaskedThreads()

std::string tarch::multicore::printUnmaskedThreads ( )

Creates a string representation of those threads which are available to the processes.

You get a string similar to

  0000xxxx0000xxxx00000000000000

The example above means that cores 4-7 and 12-15 are available to the process, while the other cores are not.
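
A minimal diagnostic sketch using these two routines at startup, e.g. to cross-check the SLURM pinning:

#include <iostream>

std::cout << tarch::multicore::printUnmaskedThreads() << std::endl;
std::cout << tarch::multicore::getNumberOfUnmaskedThreads() << " thread(s) available" << std::endl;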

Definition at line 11 of file Core.cpp.

References u.

◆ setOrchestration()

◆ shutdownSmartMPI()

void tarch::multicore::shutdownSmartMPI ( )

Definition at line 49 of file multicore.cpp.

Referenced by main().


◆ spawnAndWait()

void tarch::multicore::spawnAndWait ( const std::vector< Task * > & tasks)

Fork-join task submission pattern.

The realisation is relatively straightforward:

  • Maintain nestedSpawnAndWaits which is incremented for every fork-join section that we enter.
  • Tell the orchestration that a BSP section starts.
  • Ask the orchestration which realisation to pick.
  • Either run through the task set sequentially or invoke the native parallel implementation.
  • If there are tasks pending and the orchestration instructs us to do so, map them onto native tasks.
  • Tell the orchestration that the BSP section has terminated.
  • Maintain nestedSpawnAndWaits which is decremented whenever we leave a fork-join section.

Scheduling variants

The precise behaviour of the implementation is controlled through the orchestration. At the moment, we support three different variants:

  1. The serial variant tarch::multicore::orchestration::Strategy::ExecutionPolicy::RunSerially runs through all the tasks one by one. Our rationale is that a good orchestration picks this variant for very small task sets where the overhead of the fork-join makes parallelisation counterproductive.
  2. The parallel variant tarch::multicore::orchestration::Strategy::ExecutionPolicy::RunParallel runs through all the tasks in parallel. Once all tasks are completed, the code commits all the further tasks that have been spawned into a global queue and then decides whether to fuse them further or to map them onto native tasks. This behaviour has to be studied in the context of tarch::multicore::spawnTask(), which might already have mapped tasks onto native tasks or GPU tasks, i.e. at this point there might be no free subtasks left in the local queues even though there had been some. It is important to be careful with this "commit all tasks after the traversal" approach: in OpenMP, it can lead to deadlocks if the taskwait is realised via busy polling. See the bug description below.
  3. The parallel variant tarch::multicore::orchestration::Strategy::ExecutionPolicy::RunParallelAndIgnoreWithholdSubtasks runs through all the tasks in parallel. Different to tarch::multicore::orchestration::Strategy::ExecutionPolicy::RunParallel, it does not try to commit any further subtasks or to fuse them. This variant allows the scheduler to run task sets in parallel but to avoid the overhead introduced by the postprocessing.

I would appreciate if we could distinguish busy polling from task scheduling in the taskwait, but such a feature is not available within OpenMP, and we haven't studied TBB in this context yet.

Implementation flaws in OpenMP and bugs buried within the sketch

In OpenMP, the taskwait pragma allows the scheduler to process other tasks, as it is a scheduling point. This way, it should keep cores busy all the time as long as there are enough tasks in the system. If a fork-join task spawns a lot of additional subtasks, and if the orchestration does not tell Peano to hold them back, the OpenMP runtime might switch to the free tasks rather than continue with the actual fork-join tasks. This is not what we want and introduces runtime flaws further down the line. This phenomenon is described in our 2021 IWOMP paper by H. Schulz et al.

A more severe problem arises the other way round: Several groups have reported that the taskwait does not continue with other tasks. See in particular

Jones, Christopher Duncan (Fermilab): Using OpenMP for HEP Framework Algorithm Scheduling. http://cds.cern.ch/record/2712271

Their presentation slides can be found at https://zenodo.org/record/3598796#.X6eVv8fgqV4.

This paper clarifies that some OpenMP runtimes do (busy) waits within the taskwait construct to be able to continue immediately. They do not process other tasks meanwhile. Our own ExaHyPE 2 POP review came to the same conclusion.

This can lead to a deadlock in applications such as ExaHyPE which spawn bursts of enclave tasks and then later on wait for their results to drop in. The consuming tasks will issue a taskyield(), but this will not help if the taskyield() merely permutes through all the other traversal tasks.

If you suffer from that, you have to ensure that all enclave tasks have finished prior to the next traversal.

Statistics

It is important to know how many BSP sections are active at a point. I therefore use the stats interface to maintain the BSP counters. However, I disable any statistics sampling, so I get a spot-on overview of the number of forked subtasks at any point.

Todo
Speak to OpenMP. It would be great if we could say that the taskwait shall not(!) issue a new scheduling point. We would like to distinguish taskwaits which prioritise throughput from those which prioritise algorithmic latency.
Todo
Speak to OpenMP: we would like a taskyield() which does not(!) continue with a sibling. This is important for producer-consumer patterns.

Definition at line 91 of file multicore.cpp.

References _log, tarch::multicore::Lock::free(), tarch::logging::Statistics::getInstance(), tarch::logging::Statistics::inc(), tarch::multicore::Lock::lock(), tarch::multicore::orchestration::Strategy::RunParallel, and tarch::multicore::orchestration::Strategy::RunSerially.

Referenced by peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSendsAndReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSendsAndReceives(), and peano4::parallel::SpacetreeSet::traverse().


◆ spawnTask()

void tarch::multicore::spawnTask ( Task * task,
const std::set< TaskNumber > & inDependencies = tarch::multicore::NoInDependencies,
const TaskNumber & taskNumber = tarch::multicore::NoOutDependencies )

Spawns a single task in a non-blocking fashion.

Ownership goes over to Peano's job namespace, i.e. you don't have to delete the pointer.

Handling tasks without outgoing dependencies

If taskNumber equals NoDependency, we know that no one is (directly) waiting for this task, i.e. we won't add dependencies to the task graph afterwards. In this case, the realisation is straightforward:

  1. If SmartMPI is enabled and the task should be sent away, do so.
  2. If the current orchestration strategy (an implementation of tarch::multicore::orchestration::Strategy) says that we should hold back tasks, but the number of tasks in the thread-local queue already exceeds this threshold, invoke the native tarch::multicore::native::spawnTask(task).
  3. If none of these ifs apply, enqueue the task in the thread-local queue.
  4. If we came through route (3), double-check whether we should fuse tasks onto GPUs.

spawnTask() will never commit a task to the global task queue and therefore is inherently thread-safe.

Tasks with a task number and incoming dependencies

Spawn a task that depends on one or more other tasks. Alternatively, pass in NoDependency. In this case, the task can kick off immediately. You have to specify a task number. This number allows other, follow-up tasks to become dependent on this very task. Please note that the tasks have to be spawned in order, i.e. if B depends on A, then A has to be spawned before B. Otherwise, you introduce a so-called anti-dependency. This is OpenMP jargon which we adopted ruthlessly.

You may pass NoDependency as taskNumber. In this case, you have a fire-and-forget task which is just pushed out there without anybody ever waiting for it later on (at least not via task dependencies).
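
A fire-and-forget sketch relying solely on the defaults; MyTask is a hypothetical subclass of tarch::multicore::Task:

// No incoming dependencies, no task number: nobody will ever wait for this task via task dependencies.
tarch::multicore::spawnTask( new MyTask(/* ... */) );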

See also
tarch::multicore and the section "Tasks with dependencies" therein for further documentation.
tarch::multicore::spawnAndWait() for details what happens with tasks that have no outgoing dependencies.
processPendingTasks(int) describing how we handle pending tasks.
Parameters
task: Pointer to a task. The responsibility for this task is handed over to the tasking system, i.e. you are not allowed to delete it.
inDependencies: Set of incoming tasks that have to finish before the present task is allowed to run. You can pass the alias tarch::multicore::NoInDependencies to make clear what's going on.
taskNumber: Allows the runtime to track outgoing dependencies. Only numbers handed in here may appear in inDependencies in an upcoming call. If you do not expect to construct any follow-up in-dependencies, you can pass in the default, i.e. NoOutDependencies.

Definition at line 136 of file multicore.cpp.

References assertion, tarch::multicore::Task::canFuse(), tarch::logging::Statistics::getInstance(), and tarch::logging::Statistics::inc().


◆ swapOrchestration()

◆ waitForAllTasks()

void tarch::multicore::waitForAllTasks ( )

Definition at line 131 of file multicore.cpp.

◆ waitForTask()

void tarch::multicore::waitForTask ( const int taskNumber)

Wrapper around waitForTasks() with a single-element set.

Definition at line 85 of file multicore.cpp.

References waitForTasks().

Referenced by exahype2::EnclaveBookkeeping::waitForTaskToTerminateAndReturnResult().


◆ waitForTasks()

void tarch::multicore::waitForTasks ( const std::set< TaskNumber > & inDependencies)

Wait for set of tasks.

Entries in inDependencies can be NoDependency. This is a trivial implementation, as we basically run through each task in inDependencies and invoke waitForTask() for it. We don't have to rely on some backend-specific implementation.

Serial code

This routine degenerates to a nop, as no task can be pending: spawnTask() always executes the task straightaway.

Definition at line 80 of file multicore.cpp.

Referenced by waitForTask().


Variable Documentation

◆ NoInDependencies

const std::set<TaskNumber> tarch::multicore::NoInDependencies = std::set<TaskNumber>()

Definition at line 161 of file multicore.h.

◆ NoOutDependencies

constexpr TaskNumber tarch::multicore::NoOutDependencies = -1
constexpr