Peano

tarch::accelerator::Device Class Reference

Core. More...

#include <Device.h>

Public Member Functions | |
| ~Device () | |
| Destructor. | |
| void | configure (const std::set< int > &devicesToUse={}) |
| void | shutdown () |
| Shutdown parallel environment. | |
| bool | isInitialised () const |
| int | getNumberOfDevices () const |
| Return the number of GPUs that are available. | |
| int | getLocalDeviceId () const |
Static Public Member Functions | |
| static Device & | getInstance () |
Static Public Attributes | |
| static constexpr int | HostDevice = -1 |
| Accelerator devices (GPUs) are enumerated starting from 0. | |
Private Member Functions | |
| Device () | |
Private Attributes | |
| int | _numberOfDevices |
| std::set< int > | _devicesToUse |
Static Private Attributes | |
| static tarch::logging::Log | _log |
| Logging device. | |
| static int | _localDeviceId |
Core.
Any shared memory implementation has to provide a singleton Core. Its fully qualified name is tarch::multicore::Core. If no shared memory variant is switched on, Peano provides a default Core implementation that does nothing.
If you don't configure the core explicitly, it will try to use some meaningful default.
| private |
| tarch::accelerator::Device::~Device | ( | ) |
Destructor.
| void tarch::accelerator::Device::configure | ( | const std::set< int > & | devicesToUse = {} | ) |
In the TBB context, the GPU settings are ignored. If, however, you combine TBB with SYCL, then the SYCL policies (see below) hold.
By default, OpenMP makes all devices visible to the user.
In SYCL, you cannot alter the core count on the node at the moment. Obviously, we could split the queue on the host according to this spec, but then we'd end up with some cores idling. So we do not support resetting the thread count on the host with SYCL.
By default, SYCL makes no device visible to the user. You have to call configure() manually to make GPUs available to Peano.
If Peano is configured with SYCL as multithreading back end, we know that the multicore component has already created a SYCL queue on the CPU. See tarch::multicore::getHostSYCLQueue for details. In this case, we expect the accelerator package to use this queue if there are tasks for the CPU. We don't want to have two CPU queues. If we use another threading back end in combination with SYCL, then we have to construct an explicit CPU queue to be prepared to handle kernels that shall go onto the host. tarch::accelerator::getSYCLQueue() provides more information on that.
| static |
| Device & tarch::accelerator::Device::getInstance | ( | ) |
| int tarch::accelerator::Device::getLocalDeviceId | ( | ) | const |
| int tarch::accelerator::Device::getNumberOfDevices | ( | ) | const |
Return the number of GPUs that are available.
It is then Peano's policy that you access these GPUs with numbers from 0 to ... We work with logical GPU numbers, i.e. they might be mapped onto different numbers internally. How this mapping is realised, however, depends strongly on the GPU backend in use.
| bool tarch::accelerator::Device::isInitialised | ( | ) | const |
| void tarch::accelerator::Device::shutdown | ( | ) |
Shutdown parallel environment.