template< typename inType = double, typename outType = double >
static constexpr int HostDevice
Accelerator devices (GPUs) are enumerated starting from 0, while HostDevice identifies the host itself.
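targetDevice therefore either holds a non-negative accelerator number or this HostDevice constant. The following self-contained sketch only illustrates that convention; the sentinel value -1 and the sketch namespace are assumptions for the example, not the actual tarch definitions.

#include <iostream>

// Illustrative stand-in for the device numbering: GPUs are enumerated
// 0, 1, 2, ..., while a separate sentinel means "stay on the host".
namespace sketch {
  constexpr int HostDevice = -1;   // assumed value; the real constant is defined by tarch
}

int main() {
  int targetDevice = sketch::HostDevice;   // default: keep the data on the host
  // int targetDevice = 0;                 // would select the first accelerator instead

  if (targetDevice == sketch::HostDevice) {
    std::cout << "allocate and compute on the host" << std::endl;
  } else {
    std::cout << "offload to accelerator " << targetDevice << std::endl;
  }
  return 0;
}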
It is the data representation that the generic kernels use most of the time.
Heap
Create data on the heap of the local device.
Representation of a number of cells which contains all information that's required to process the sto...
outType ** QOut
Out values.
const tarch::MemoryLocation memoryLocation
We might want to allocate data on the heap or on an accelerator, so we store the memory location of the data.
inType ** QIn
QIn may not be const, as some kernels delete it straightaway once the input data has been handled.
const int numberOfCells
As we store the data as a structure of arrays (SoA), we have to know how long the individual arrays are.
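Due to this SoA layout, every per-cell array (QIn, QOut, cellCentre, cellSize, maxEigenvalue, ...) holds exactly numberOfCells entries, and entry i belongs to cell i. The following self-contained sketch illustrates that invariant; the SoACells type and the largestEigenvalue helper are made up for the example and are not part of the class.

#include <algorithm>
#include <vector>

// Illustrative SoA stand-in: all per-cell arrays share the same length.
struct SoACells {
  std::vector<double*> QIn;            // one block of input unknowns per cell
  std::vector<double*> QOut;           // one block of output unknowns per cell
  std::vector<double>  maxEigenvalue;  // one out value per cell
  int                  numberOfCells = 0;
};

// Hypothetical reduction over the per-cell out values.
double largestEigenvalue(const SoACells& data) {
  double result = 0.0;
  for (int cell = 0; cell < data.numberOfCells; cell++) {
    result = std::max(result, data.maxEigenvalue[cell]);
  }
  return result;
}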
CellData(inType *QIn_, const tarch::la::Vector< Dimensions, double > &cellCentre_, const tarch::la::Vector< Dimensions, double > &cellSize_, double t_, double dt_, outType *QOut_, tarch::MemoryLocation memoryLocation_=tarch::MemoryLocation::Heap, int targetDevice_=tarch::accelerator::Device::HostDevice)
Construct a patch data object for one single cell.
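A hedged usage sketch for this single-cell constructor: only the argument list above is taken from the declaration, whereas the include path, the exahype2 namespace and the surrounding runSingleCell function are assumptions made for the example.

#include "exahype2/CellData.h"   // assumed header location

void runSingleCell(
  double*                                      QIn,        // unknowns of this one cell
  double*                                      QOut,       // result buffer for the same cell
  const tarch::la::Vector<Dimensions,double>&  cellCentre,
  const tarch::la::Vector<Dimensions,double>&  cellSize,
  double                                       t,
  double                                       dt
) {
  // Rely on the defaults: data is held on the heap and stays on the host device.
  ::exahype2::CellData<double,double> cellData(
    QIn, cellCentre, cellSize, t, dt, QOut
  );
  // cellData now describes exactly one cell (numberOfCells==1); hand it to a kernel next.
}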
std::string toString() const
CellData(const CellData< inType, outType > &copy)=delete
int * id
Id of the underlying task.
const int targetDevice
We might want to allocate data on an accelerator, so we store the id of the target device.
double * maxEigenvalue
Out values: the maximum eigenvalue per cell.
tarch::la::Vector< Dimensions, double > * cellCentre
Centre of each cell.
tarch::la::Vector< Dimensions, double > * cellSize
Size of each cell.
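For orientation, the members above assemble roughly into the following shape. This is a condensed, self-contained sketch, not the verbatim declaration: the tarch vector and enumeration types are replaced by plain stand-ins, member order is not authoritative, and members that are not documented in this listing (such as the time stamp and time step size) are left out.

// Stand-ins so that the sketch compiles without the tarch headers.
using VectorSketch = double*;                // replaces tarch::la::Vector< Dimensions, double >
enum class MemoryLocationSketch { Heap };    // only the value documented above

template< typename inType = double, typename outType = double >
struct CellDataSketch {
  inType**      QIn;              // input data; not const, kernels may free it early
  VectorSketch* cellCentre;       // centre of each cell
  VectorSketch* cellSize;         // size of each cell
  const int     numberOfCells;    // length of every per-cell (SoA) array
  outType**     QOut;             // out values
  double*       maxEigenvalue;    // out values, one maximum eigenvalue per cell
  int*          id;               // id of the underlying task
  const MemoryLocationSketch memoryLocation;  // heap vs. accelerator allocation
  const int     targetDevice;     // accelerator id, or HostDevice for the host
};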