Peano 4
tarch::multicore::Core Class Reference

Core.

#include <Core.h>


Public Member Functions

 ~Core ()
void configure (int numberOfThreads=UseDefaultNumberOfThreads)
 Configure the whole node, i.e. all cores available on a node.
void shutdown ()
 Shutdown parallel environment.
bool isInitialised () const
int getNumberOfThreads () const
 Returns the number of threads that are used.
int getCoreNumber () const
int getThreadNumber () const
void yield ()
 Wrapper around backend-specific yield.

Static Public Member Functions

static Core & getInstance ()

Static Public Attributes

static constexpr int UseDefaultNumberOfThreads = 0
 The default is what the system management typically gives you.

Private Member Functions

 Core ()

Private Attributes

int _numberOfThreads

Static Private Attributes

static tarch::logging::Log _log
 Logging device.

Detailed Description


Any shared memory implementation has to provide a singleton Core. Its full qualified name is tarch::multicore::Core. If no shared memory variant is switched on, Peano provides a default Core implementation that does nothing.

If you don't configure the core explicitly, it will try to use some meaningful default.

Author
Tobias Weinzierl

Definition at line 50 of file Core.h.

Constructor & Destructor Documentation

◆ Core()

tarch::multicore::Core::Core ( )

Definition at line 51 of file Core.cpp.

References tarch::multicore::internal::configureInternalTaskQueues().


◆ ~Core()

tarch::multicore::Core::~Core ( )


Definition at line 53 of file Core.cpp.

Member Function Documentation

◆ configure()

void tarch::multicore::Core::configure ( int numberOfThreads = UseDefaultNumberOfThreads)

Configure the whole node, i.e. all cores available on a node.

If numberOfThreads equals the default, the routine uses the hardware concurrency to determine the number of threads that should be used. On SLURM-based HPC platforms, this will be wrong if multiple MPI ranks are placed on one node. It is also a bad choice if hyperthreading should not or cannot be used. In these cases, use the helper function getNumberOfUnmaskedThreads().

Parameters
numberOfThreads	Number of threads that shall be used. This parameter is either greater than zero (which defines the number of threads) or it equals UseDefaultNumberOfThreads, which means that the code should use the default number of threads.

Definition at line 60 of file Core.cpp.

References tarch::multicore::internal::configureInternalTaskQueues().

Referenced by main(), and swift2::parseCommandLineArguments().


◆ getCoreNumber()

int tarch::multicore::Core::getCoreNumber ( ) const
Physical core the process is running on

Definition at line 68 of file Core.cpp.

◆ getInstance()

tarch::multicore::Core & tarch::multicore::Core::getInstance ( )
Singleton instance

Definition at line 55 of file Core.cpp.

Referenced by tarch::multicore::orchestration::GeneticOptimisation::createInitialConfigurations(), examples::regulargridupscaling::MyObserver::enterCell(), tarch::logging::Log::error(), toolbox::loadbalancing::strategies::Hardcoded::finishStep(), toolbox::loadbalancing::strategies::SpreadOutHierarchically::getAction(), toolbox::loadbalancing::strategies::SpreadOutHierarchically::getNumberOfSplitsOnLocalRank(), tarch::multicore::orchestration::GeneticOptimisation::getNumberOfTasksToFuseAndTargetDevice(), toolbox::loadbalancing::strategies::SpreadOut::getNumberOfTreesPerRank(), toolbox::loadbalancing::strategies::SpreadOutOnceGridStagnates::getNumberOfTreesPerRank(), toolbox::loadbalancing::strategies::SplitOversizedTree::getTargetTreeCost(), tarch::logging::CommandLineLogger::indent(), tarch::logging::Log::info(), peano4::initParallelEnvironment(), main(), swift2::parseCommandLineArguments(), peano4::grid::TraversalVTKPlotter::plotCell(), runParallel(), peano4::shutdownParallelEnvironment(), tarch::multicore::spawnAndWait(), tarch::multicore::native::spawnAndWaitAsTaskLoop(), tarch::multicore::spawnTask(), step(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithBlockingSendsAndReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingReceives(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSends(), peano4::parallel::tests::PingPongTest::testMultithreadedPingPongWithNonblockingSendsAndReceives(), exahype2::LoadBalancingConfiguration::translateSetMaxNumberOfTreesIntoRealNumberOfTrees(), toolbox::loadbalancing::strategies::SpreadOutHierarchically::updateLoadBalancing(), toolbox::loadbalancing::strategies::SpreadOutHierarchically::updateState(), exahype2::EnclaveBookkeeping::waitForTaskToTerminateAndReturnResult(), 
tarch::logging::Log::warning(), and peano4::writeCopyrightMessage().


◆ getNumberOfThreads()

int tarch::multicore::Core::getNumberOfThreads ( ) const

Returns the number of threads that are used.

◆ getThreadNumber()

int tarch::multicore::Core::getThreadNumber ( ) const
Logical thread number

Definition at line 77 of file Core.cpp.

Referenced by tarch::multicore::spawnTask().


◆ isInitialised()

bool tarch::multicore::Core::isInitialised ( ) const
Shared memory environment is up and running. Most shared memory implementations work properly with the defaults; they simply always return true.

Definition at line 64 of file Core.cpp.

◆ shutdown()

void tarch::multicore::Core::shutdown ( )

Shutdown parallel environment.

Definition at line 62 of file Core.cpp.

Referenced by peano4::shutdownParallelEnvironment().


◆ yield()

void tarch::multicore::Core::yield ( )

Wrapper around backend-specific yield.

For most backends, this should not really be a yield(). It should interrupt the current task, not the current thread, and tell the runtime to continue with another task.

Definition at line 79 of file Core.cpp.

Referenced by tarch::multicore::spawnAndWait(), tarch::multicore::native::spawnAndWaitAsTaskLoop(), and exahype2::EnclaveBookkeeping::waitForTaskToTerminateAndReturnResult().


Field Documentation

◆ _log

tarch::logging::Log tarch::multicore::Core::_log

Logging device.

Definition at line 55 of file Core.h.

◆ _numberOfThreads

int tarch::multicore::Core::_numberOfThreads

Definition at line 59 of file Core.h.

◆ UseDefaultNumberOfThreads

constexpr int tarch::multicore::Core::UseDefaultNumberOfThreads = 0

The default is what the system management typically gives you.

So if you run four ranks on a 24-core node, each MPI rank will get six threads if you choose this constant.

Multiply by two to exploit hyperthreading.

Definition at line 69 of file Core.h.

Referenced by main(), and swift2::parseCommandLineArguments().

The documentation for this class was generated from the following files:

Core.h
Core.cpp