tarch::multicore Namespace Reference
This page describes Peano 4's multithreading namespace.
Namespaces

  namespace native
  namespace orchestration
  namespace taskfusion - Task fusion.
  namespace tbb
Data Structures

  class BooleanSemaphore
  class Core - Core.
  class Lock - Create a lock around a boolean semaphore region.
  class MultiReadSingleWriteLock - Create a lock around a multi-read/single-write semaphore region.
  class MultiReadSingleWriteSemaphore - Read/write semaphore.
  class RecursiveLock - Create a lock around a recursive semaphore region.
  class RecursiveSemaphore - Recursive semaphore.
  class Task - Abstract superclass for a job.
  class TaskComparison - Helper class if you want to administer tasks within a queue.
  class TaskWithCopyOfFunctor - Frequently used implementation for a job with a functor.
  class TaskWithoutCopyOfFunctor - Frequently used implementation for a job with a functor.
Typedefs

  using TaskNumber = int
Functions

  int getNumberOfUnmaskedThreads()
      This routine runs through the Unix thread mask and counts how many threads SLURM allows a code to use.
  std::string printUnmaskedThreads()
      Creates a string representation of those threads which are available to the process.
  void initSmartMPI()
      Switch on SmartMPI.
  void shutdownSmartMPI()
  void setOrchestration(tarch::multicore::orchestration::Strategy* realisation)
  tarch::multicore::orchestration::Strategy* swapOrchestration(tarch::multicore::orchestration::Strategy* realisation)
      Swap the active orchestration.
  tarch::multicore::orchestration::Strategy& getOrchestration()
  void spawnTask(Task* task, const std::set<TaskNumber>& inDependencies = tarch::multicore::NoInDependencies, const TaskNumber& taskNumber = tarch::multicore::NoOutDependencies)
      Spawns a single task in a non-blocking fashion.
  void waitForTasks(const std::set<TaskNumber>& inDependencies)
      Wait for a set of tasks.
  void waitForTask(const int taskNumber)
      Wrapper around waitForTasks() with a single-element set.
  void spawnAndWait(const std::vector<Task*>& tasks)
      Fork-join task submission pattern.
  void waitForAllTasks()
Variables

  constexpr TaskNumber NoOutDependencies = -1
  const std::set<TaskNumber> NoInDependencies = std::set<TaskNumber>()
Detailed Description

This page describes Peano 4's multithreading namespace.
A more high-level overview is provided through Multicore programming.
If you want to distinguish between multicore and no-multicore variants in your code, guard the multicore branches with the SharedMemoryParallelisation preprocessor symbol. With this symbol, you make your code independent of OpenMP, TBB or C++ threading.
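A minimal sketch (only the symbol name is documented above; the two branches are purely illustrative):

  #ifdef SharedMemoryParallelisation
    // code path for the multithreaded build (OpenMP, TBB or C++ threads)
  #else
    // serial fallback without any multithreading support
  #endif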
The multithreading environment is realised through a small set of classes. User codes work with these classes. Each type/function has an implementation within src/multicore. This implementation is a dummy that ensures that all code works properly without any multithreading support. Subdirectories hold alternative implementations (backends) which are enabled once the user selects a certain multithreading implementation variant, i.e. depending on the ifdefs set, one of the subdirectories is used. Some implementations introduce further headers, but user code is never supposed to work against functions or classes held within subdirectories.
If you want to use the OpenMP backend, you have to embed your whole main loop within an OpenMP parallel environment. Furthermore, you will have to enable nested parallelism explicitly on some systems, as we rely heavily on it.
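A sketch of the embedding. The exact pragmas are not preserved on this page, so the snippet below assumes the standard idiom for task-based OpenMP codes: a parallel region in which a single thread executes the main loop. Nested parallelism is typically enabled through environment variables such as OMP_NESTED=true or OMP_MAX_ACTIVE_LEVELS.

  int main() {
    #pragma omp parallel
    #pragma omp single
    {
      runMainLoop();  // hypothetical entry point for the application's main loop
    }
    return 0;
  }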
If the Peano statistics are enabled, the tasking backend will sample several quantities such as "tarch::multicore::bsp-concurrency-level" or "tarch::multicore::spawned-tasks". Depending on the chosen backend, you might get additional counters on top. Start with tarch/multicore/Tasks.cpp for an overview of the most generic counters.
Peano models all of its internals as tasks. Each Peano 4 task is a subclass of tarch::multicore::Task. However, these classes might not be mapped 1:1 onto native tasks. In line with other APIs such as oneTBB, we distinguish different task types or task graph types, respectively.
In Peano, task DAGs are built up along the task workflow. That is, each task that is not used within a fork-join region, i.e. each totally free task, is assigned a unique number when we spawn it.
Whenever we define a task, we can also define its dependencies. These are pure completion dependencies: you tell the task system which tasks have to be completed before the currently submitted one is allowed to start. A DAG can thus be built up layer by layer: we start with the first task, which might be executed immediately - we do not care - and then work our way down through the graph, adding node after node.
In line with OpenMP and TBB - where we significantly influenced the development of the dynamic task API - outgoing dependencies should be declared before we use them: a task's number must have been assigned via spawnTask() before any later task may list it among its in-dependencies.
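A small sketch of such a layer-by-layer construction, using the spawnTask() and waitForTasks() signatures documented below. MyTask stands for a hypothetical subclass of tarch::multicore::Task, and the include is an assumption based on the Tasks.cpp reference on this page:

  #include <set>
  #include "tarch/multicore/Tasks.h"   // assumed header

  void buildSmallTaskGraph() {
    // Task 1 has no incoming dependencies; task 2 may only start once task 1 has completed.
    tarch::multicore::spawnTask( new MyTask(), tarch::multicore::NoInDependencies, 1 );
    tarch::multicore::spawnTask( new MyTask(), {1}, 2 );
    tarch::multicore::waitForTasks( {2} );
  }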
The task orchestration is controlled via an implementation of tarch::multicore::orchestration::Strategy that you set via tarch::multicore::setOrchestration(). With the strategy you can control the fusion of tasks and their shipping onto GPUs. Whenever we hit a fork-join section, i.e. encounter tarch::multicore::spawnAndWait(), you can also pick from different scheduling variants.
The orchestration of choice can control this behaviour via tarch::multicore::orchestration::Strategy::paralleliseForkJoinSection().
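A minimal configuration sketch, assuming a hypothetical MyStrategy subclass that implements the Strategy interface (including paralleliseForkJoinSection() as mentioned above):

  // Hand the strategy over; the tasking layer disposes of the previously
  // active strategy (compare setOrchestration() vs. swapOrchestration() below).
  tarch::multicore::setOrchestration( new MyStrategy() );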
using tarch::multicore::TaskNumber = int

Definition at line 157 of file multicore.h.
int tarch::multicore::getNumberOfUnmaskedThreads()
This routine runs through the Unix thread mask and counts how many threads SLURM allows a code to use.
It returns this count. If you use multiple MPI ranks per node, each rank usually gets the permission to access the same number of cores exclusively.
Definition at line 32 of file Core.cpp.
References u.
tarch::multicore::orchestration::Strategy& tarch::multicore::getOrchestration()
Definition at line 75 of file multicore.cpp.
Referenced by tarch::multicore::taskfusion::ProcessReadyTask::run().
void tarch::multicore::initSmartMPI()
Switch on SmartMPI.
If you use SmartMPI, then the bookkeeping registers the local scheduling. If you don't use SmartMPI, this operation becomes a nop, i.e. you can always call it, and configure will decide whether it does something useful.
Definition at line 33 of file multicore.cpp.
References tarch::mpi::Rank::getInstance(), and tarch::mpi::Rank::setCommunicator().
Referenced by main().
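A typical call site, sketched from the cross-references (both routines are invoked from main()); the surrounding code is illustrative:

  int main( int argc, char** argv ) {
    tarch::multicore::initSmartMPI();      // nop unless SmartMPI has been enabled by configure
    // ... set up and run the simulation ...
    tarch::multicore::shutdownSmartMPI();
    return 0;
  }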
std::string tarch::multicore::printUnmaskedThreads()
Creates a string representation of those threads which are available to the process.
You get a string similar to
0000xxxx0000xxxx00000000000000
The example above means that cores 4-7 and 12-15 are available to the process, the other cores are not.
Definition at line 11 of file Core.cpp.
References u.
void tarch::multicore::setOrchestration(tarch::multicore::orchestration::Strategy* realisation)
Definition at line 56 of file multicore.cpp.
References assertion.
Referenced by peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSendsAndReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSends(), and peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSendsAndReceives().
void tarch::multicore::shutdownSmartMPI()
Definition at line 49 of file multicore.cpp.
Referenced by main().
void tarch::multicore::spawnAndWait(const std::vector<Task*>& tasks)

Fork-join task submission pattern.
The realisation is relatively straightforward.
The precise behaviour of the implementation is controlled through the orchestration. At the moment, we support three different variants.
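A usage sketch of the fork-join pattern; MyTask again stands for a hypothetical tarch::multicore::Task subclass:

  // Spawn a set of tasks and block until all of them have terminated.
  std::vector<tarch::multicore::Task*> tasks;
  tasks.push_back( new MyTask() );
  tasks.push_back( new MyTask() );
  tarch::multicore::spawnAndWait( tasks );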
I would appreciate it if we could distinguish busy polling from task scheduling in the taskwait, but such a feature is not available within OpenMP, and we haven't studied TBB in this context yet.
In OpenMP, the taskwait pragma allows the scheduler to process other tasks, as it is a scheduling point. This way, it should keep cores busy all the time as long as there are enough tasks in the system. If a fork-join task spawns a lot of additional subtasks, and if the orchestration does not tell Peano to hold them back, the OpenMP runtime might switch to the free tasks rather than continue with the actual fork-join tasks, which is not what we want and introduces runtime flaws further down the line. This phenomenon is described in our 2021 IWOMP paper by H. Schulz et al.
A more severe problem arises the other way round: Several groups have reported that the taskwait does not continue with other tasks. See in particular
Jones, Christopher Duncan (Fermilab): Using OpenMP for HEP Framework Algorithm Scheduling. http://cds.cern.ch/record/2712271
Their presentation slides can be found at https://zenodo.org/record/3598796#.X6eVv8fgqV4.
This paper clarifies that some OpenMP runtimes do (busy) waits within the taskwait construct to be able to continue immediately. They do not process other tasks meanwhile. Our own ExaHyPE 2 POP review came to the same conclusion.
This can lead to a deadlock in applications such as ExaHyPE which spawn bursts of enclave tasks and then later on wait for their results to drop in. The consuming tasks will issue a taskyield(), but this will not help if the taskyield() merely cycles through the other traversal tasks.
If you suffer from that, you have to ensure that all enclave tasks have finished prior to the next traversal.
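One blunt way to achieve this, given here as an assumption rather than as ExaHyPE's documented solution, is to flush the whole task system in-between:

  // Returns only once every previously spawned task has terminated.
  tarch::multicore::waitForAllTasks();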
It is important to know how many BSP sections are active at any point in time. I therefore use the stats interface to maintain the BSP counters. However, I disable any statistics sampling, so I get a spot-on overview of the number of forked subtasks at any point.
Definition at line 91 of file multicore.cpp.
References _log, tarch::multicore::Lock::free(), tarch::logging::Statistics::getInstance(), tarch::logging::Statistics::inc(), tarch::multicore::Lock::lock(), tarch::multicore::orchestration::Strategy::RunParallel, and tarch::multicore::orchestration::Strategy::RunSerially.
Referenced by peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSendsAndReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSendsAndReceives(), and peano4::parallel::SpacetreeSet::traverse().
void tarch::multicore::spawnTask(Task* task, const std::set<TaskNumber>& inDependencies = tarch::multicore::NoInDependencies, const TaskNumber& taskNumber = tarch::multicore::NoOutDependencies)
Spawns a single task in a non-blocking fashion.
Ownership goes over to Peano's job namespace, i.e. you don't have to delete the pointer.
If taskNumber equals NoOutDependencies, we know that no one is (directly) waiting for this task, i.e. we won't add dependencies to the task graph afterwards. In this case, the realisation is straightforward.
spawnTask() will never commit a task to the global task queue and therefore is inherently thread-safe.
To spawn a task that depends on other tasks, pass in their numbers via inDependencies. Alternatively, pass in NoInDependencies; in this case, the task can kick off immediately. You have to specify a task number if follow-up tasks are to become dependent on this very task. Please note that the tasks have to be spawned in order, i.e. if B depends on A, then A has to be spawned before B. Otherwise, you introduce a so-called anti-dependency. This is OpenMP jargon which we adopted ruthlessly.
You may pass NoOutDependencies as taskNumber. In this case, you have a fire-and-forget task which is just pushed out there without anybody ever waiting for it later on (at least not via task dependencies).
Parameters
  task            Pointer to a task. The responsibility for this task is handed over to the tasking system, i.e. you are not allowed to delete it.
  inDependencies  Set of incoming tasks that have to finish before the present task is allowed to run. You can pass the alias tarch::multicore::NoInDependencies to make clear what's going on.
  taskNumber      Allows the runtime to track outgoing dependencies. Only numbers handed in here may appear in inDependencies in an upcoming call. If you do not expect to construct any follow-up in-dependencies, you can pass in the default, i.e. NoOutDependencies.
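A fire-and-forget sketch relying on both defaults; MyTask is a hypothetical Task subclass:

  // Nobody will ever wait for this task via dependencies; the tasking
  // system owns the pointer and disposes of it after execution.
  tarch::multicore::spawnTask( new MyTask() );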
Definition at line 136 of file multicore.cpp.
References assertion, tarch::multicore::Task::canFuse(), tarch::logging::Statistics::getInstance(), and tarch::logging::Statistics::inc().
tarch::multicore::orchestration::Strategy* tarch::multicore::swapOrchestration(tarch::multicore::orchestration::Strategy* realisation)
Swap the active orchestration.
Different to setOrchestration(), this operation does not delete the current orchestration. It swaps the two, so you can use setOrchestration() with the result afterwards and thereby re-obtain the original strategy.
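A sketch of this swap-and-restore pattern, again with a hypothetical MyStrategy subclass:

  tarch::multicore::orchestration::Strategy* previous =
    tarch::multicore::swapOrchestration( new MyStrategy() );
  // ... run some code under MyStrategy ...
  tarch::multicore::setOrchestration( previous );   // deletes MyStrategy, reinstates the original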
Definition at line 65 of file multicore.cpp.
References assertion.
Referenced by peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSendsAndReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSends(), and peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSendsAndReceives().
void tarch::multicore::waitForAllTasks()
Definition at line 131 of file multicore.cpp.
void tarch::multicore::waitForTask(const int taskNumber)

Wrapper around waitForTasks() with a single-element set.
Definition at line 85 of file multicore.cpp.
References waitForTasks().
Referenced by exahype2::EnclaveBookkeeping::waitForTaskToTerminateAndReturnResult().
void tarch::multicore::waitForTasks(const std::set<TaskNumber>& inDependencies)
Wait for set of tasks.
Entries in inDependencies can be NoOutDependencies. This is a trivial implementation: we basically run through each task in inDependencies and invoke waitForTask() for it. We don't have to rely on some backend-specific implementation.
Within the serial fallback, this routine degenerates to a nop, as no task can be pending: spawnTask() always executes the task straightaway.
Definition at line 80 of file multicore.cpp.
Referenced by waitForTask().
const std::set<TaskNumber> tarch::multicore::NoInDependencies = std::set<TaskNumber>() |
Definition at line 161 of file multicore.h.
constexpr TaskNumber tarch::multicore::NoOutDependencies = -1
Definition at line 159 of file multicore.h.
Referenced by swift2::TaskNumber::flatten(), tarch::multicore::native::processFusedTask(), swift2::TaskNumber::TaskNumber(), and swift2::TaskNumber::toString().