Peano 4
toolbox::particles::memorypool::GlobalContinuousMemoryPool< T > Class Template Reference

Memory pool offering continuous global memory for a particle species. More...

#include <GlobalContinuousMemoryPool.h>


Data Structures

struct  GlobalMemory
 Represents the global memory. More...
 

Public Types

typedef std::list< T * > Container
 

Public Member Functions

 GlobalContinuousMemoryPool ()
 
void scatter ()
 Scatter the data.
 
Container::iterator scatterAndUpdateIterator (const typename Container::iterator &p)
 
void gather ()
 Gather the particle.
 
bool isGathered () const
 Is the vertex data gathered.
 
void replace (typename Container::iterator p, T *newCopy)
 Replace particle.
 
void clearAndReset ()
 Clear dataset tied to one vertex.
 

Static Public Member Functions

static bool requestCompleteScatter ()
 Recommend complete scatter.
 

Data Fields

Container container
 List of pointers.
 

Private Attributes

T * _gatheredDataPointer
 Is nullptr as long as data is not gathered.
 
int _globalMemoryPage
 Equals UndefinedPoolReference as long as data is not gathered.
 

Static Private Attributes

static tarch::logging::Log _log
 
static constexpr int UndefinedMemoryPage = -1
 
static GlobalMemory _globalMemory
 This class attribute represents the global data space for all of the particles.
 

Detailed Description

template<class T>
class toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >

Memory pool offering continuous global memory for a particle species.

Consult the generic description of memory pools before you continue to read this documentation. This class is used by every "vertex" object, i.e. each vertex holds one instance of GlobalContinuousMemoryPool. Its field container points to all the particles, which are scattered over the heap after we have created them. However, whenever we call gather(), we take all of these particles from the heap, move them into one continuous chunk of memory, and then make the pointers within container point to those copies.

Originally, I had planned to work with one big memory chunk indeed, and then to subdivide this memory into chunks to serve the memory requests. Unfortunately, such an approach is doomed to fail: we do not know the particle distribution a priori, so we have to grow the memory dynamically. However, once a vertex points to a continuous memory location for its particles, these particles may not move under any circumstances. Such moves would be unavoidable if we worked with one huge memory chunk that grows. At the same time, every gather() has to succeed: we cannot return something like "sorry, wasn't able to reserve that continuous chunk of memory". The only solution to this problem, at the moment, is to make the global continuous memory model a strict extension of VertexWiseContinousMemory:

We hold a global memory pool with pages that we try to fill with the memory requests. If this does not succeed, we fall back to VertexWiseContinousMemory's strategy, i.e. we hand out a new continuous piece of memory on the heap. However, we learn from such failures: the class keeps track of requests that it has not been able to host locally and then grows its memory accordingly at the next opportunity. Next time, it should be able to serve all requests.

The global memory is a static object of type GlobalMemory. It is a wrapper around an std::vector which holds the data, so this vector can grow on demand. On top of this vector, we maintain a list of (int, bool) tuples. They divide the memory into pages, where the first entry defines the size of a page. Per page, we also note whether the page is occupied at the moment.
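The bookkeeping described above can be sketched as follows. This is an illustrative, self-contained re-enactment with hypothetical names (PagedMemorySketch, acquirePage, appendPage), not the actual Peano implementation:

```cpp
#include <cassert>
#include <list>
#include <utility>
#include <vector>

// Sketch: one flat std::vector holds all particle data, while a list of
// (size, occupied) tuples divides it into pages.
template <class T>
struct PagedMemorySketch {
  std::vector<T>                  data;   // the one global chunk
  std::list<std::pair<int, bool>> pages;  // (page size, occupied?)

  // First fit: return the offset of a free page of exactly `size` entries,
  // or -1 if no existing page can serve the request. A failed request is
  // what triggers the fallback to plain heap allocations.
  int acquirePage(int size) {
    int offset = 0;
    for (auto& page : pages) {
      if (!page.second && page.first == size) {
        page.second = true;
        return offset;
      }
      offset += page.first;
    }
    return -1;
  }

  // Append a new occupied page. Growing the vector moves its data, so this
  // is only legitimate while no page is handed out, i.e. while all
  // particles are scattered.
  int appendPage(int size) {
    int const offset = static_cast<int>(data.size());
    data.resize(data.size() + size);
    pages.emplace_back(size, true);
    return offset;
  }
};
```

The first-fit search and the grow-only-while-scattered restriction mirror the constraints from the class description: handed-out pages must never move, so the vector may only be resized after a complete scatter.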

Overall, this memory pool is more complex than its vertex-wise counterpart. It might also consume slightly more memory in total, as it can overestimate the total memory footprint. However, there are a few advantages, too:

  • Memory is less scattered compared to other approaches, as we try to store everything in one huge chunk of memory.
  • We don't need a lot of allocs and frees anymore, as most of the memory requests can be served from one huge chunk of memory.

It is very important that you read through the implications for the particle sorting that arise if you use a global memory pool.

Definition at line 73 of file GlobalContinuousMemoryPool.h.

Member Typedef Documentation

◆ Container

template<class T >
typedef std::list<T*> toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::Container

Definition at line 75 of file GlobalContinuousMemoryPool.h.

Constructor & Destructor Documentation

◆ GlobalContinuousMemoryPool()

template<class T >
toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::GlobalContinuousMemoryPool ( )

Definition at line 17 of file GlobalContinuousMemoryPool.cpph.

Member Function Documentation

◆ clearAndReset()

template<class T >
void toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::clearAndReset ( )

Clear dataset tied to one vertex.

This routine is used for shallow clears. It does not actually free any data; it merely resets the local vertex and unties it from any data it might reference.

Definition at line 58 of file GlobalContinuousMemoryPool.cpph.

◆ gather()

template<class T >
void toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::gather ( )

Gather the particle.

If we invoke this routine on a memory pool that is already gathered, it degenerates to a no-op. Otherwise, the routine's implementation is rather straightforward:

  • Allocate one big chunk of memory
  • Run over the list of pointers. This traversal has to be done with a reference, as we change the pointers along the way. So we use auto& as the type.
  • Copy the current iterator element over into the big chunk of memory and increment the counter in there.
  • Delete the original piece of data on the global heap.
  • Make the list pointer point to the new fragment within the memory.

Please note that it makes no sense to gather an empty set, so we explicitly take that into account as well.
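The steps above, together with the inverse scatter, can be re-enacted in a minimal, self-contained sketch. This is illustrative only (the real implementation allocates through tarch::Heap and uses the global memory pages):

```cpp
#include <list>

// Minimal sketch of the gather/scatter life cycle described above.
template <class T>
class TinyPoolSketch {
  public:
    std::list<T*> container;

    bool isGathered() const { return _gathered != nullptr; }

    void gather() {
      // No-op if already gathered; gathering an empty set makes no sense.
      if (isGathered() || container.empty()) return;
      _gathered = new T[container.size()];  // one big chunk of memory
      int i = 0;
      for (auto& p : container) {  // auto&: we redirect the pointers
        _gathered[i] = *p;         // copy into the contiguous chunk
        delete p;                  // free the scattered original
        p = &_gathered[i];         // list entry now points into the chunk
        ++i;
      }
    }

    void scatter() {
      if (!isGathered()) return;
      for (auto& p : container) {
        p = new T(*p);             // copy each particle back onto the heap
      }
      delete[] _gathered;
      _gathered = nullptr;
    }

  private:
    T* _gathered = nullptr;
};
```

After gather(), the list entries index a contiguous sequence; after scatter(), they again point to individual heap objects.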

Definition at line 120 of file GlobalContinuousMemoryPool.cpph.

References assertion, tarch::Heap, and logDebug.

◆ isGathered()

template<class T >
bool toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::isGathered ( ) const

Is the vertex data gathered.

This routine returns false if the underlying container is empty.

Definition at line 167 of file GlobalContinuousMemoryPool.cpph.

References assertion.

◆ replace()

template<class T >
void toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::replace ( typename Container::iterator p,
T * newCopy )

Replace particle.

Takes the particle identified via p and copies the content of newCopy over.

If our data is gathered, we copy the content over; after that, newCopy is deleted. If our data is scattered, we can simply delete the original one and let the container entry point to the new copy.
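The two branches can be sketched as a hypothetical free function (the real routine is a member and additionally runs assertions):

```cpp
#include <list>

// Sketch of replace(): if the data is gathered, the particle has to stay
// inside the contiguous chunk, so we copy the content over and delete
// newCopy; if it is scattered, we delete the original heap object and let
// the iterator's entry point at the new copy instead.
template <class T>
void replaceSketch(bool gathered, typename std::list<T*>::iterator p, T* newCopy) {
  if (gathered) {
    **p = *newCopy;   // overwrite in place within the gathered chunk
    delete newCopy;
  }
  else {
    delete *p;        // scattered: simply swap the heap pointer
    *p = newCopy;
  }
}
```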

Definition at line 174 of file GlobalContinuousMemoryPool.cpph.

References assertion.

◆ requestCompleteScatter()

template<class T >
bool toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::requestCompleteScatter ( )
static

Recommend complete scatter.

A memory pool can recommend a complete scatter. It should do so if the memory it manages becomes too scattered, or if a pre-allocated memory region is too small.
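Given the bookkeeping mentioned above, the heuristic might look like the following sketch. The parameter name mirrors the additionalEntriesRequested field referenced below; the function itself is hypothetical:

```cpp
// Sketch: the pool keeps track of requests it could not serve from its
// pre-allocated pages. If any request had to fall back to the heap, a
// complete scatter gives the pool a chance to grow and subsequently
// re-gather everything contiguously.
bool requestCompleteScatterSketch(int additionalEntriesRequested) {
  return additionalEntriesRequested > 0;
}
```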

Definition at line 251 of file GlobalContinuousMemoryPool.cpph.

References toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::_globalMemory, and toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::GlobalMemory::additionalEntriesRequested.

◆ scatter()

template<class T >
void toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::scatter ( )

Scatter the data.

If the data is not gathered yet, nothing is to be done: container already points to particles all over the heap. However, if _globalMemoryPage points to a valid page, we put all the particles of this page onto the heap, make the entries of container point to these heap locations, and then mark the page as freed. Eventually, we call the garbage collection.
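The scatter path for a valid page can be sketched as follows, reusing the (size, occupied) page representation from the class description (hypothetical names; the real code frees memory through tarch::freeMemory()):

```cpp
#include <list>
#include <utility>

// Sketch of scatter for an occupied page: every particle is copied back
// onto the heap, the container entries are redirected to those heap
// copies, and the page is marked as free so the garbage collection may
// reclaim it.
template <class T>
void scatterPageSketch(std::list<T*>& container, std::pair<int, bool>& page) {
  for (auto& p : container) {
    p = new T(*p);      // copy the particle back onto the heap
  }
  page.second = false;  // page no longer occupied
}
```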

Definition at line 24 of file GlobalContinuousMemoryPool.cpph.

References assertion1, tarch::freeMemory(), tarch::Heap, and logDebug.


◆ scatterAndUpdateIterator()

template<class T >
toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::Container::iterator toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::scatterAndUpdateIterator ( const typename Container::iterator & p)

Definition at line 67 of file GlobalContinuousMemoryPool.cpph.

References assertion, tarch::freeMemory(), tarch::Heap, and logDebug.


Field Documentation

◆ _gatheredDataPointer

template<class T >
T* toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::_gatheredDataPointer
private

Is nullptr as long as data is not gathered.

We either store this pointer or we make _globalMemoryPage hold something meaningful.

Definition at line 232 of file GlobalContinuousMemoryPool.h.

◆ _globalMemory

This class attribute represents the global data space for all of the particles.

I could initialise the global space with a certain size and hence ensure that the first few particles fit in. However, this is not clever: it is better to let the storage scheme use scattered heap accesses in the first time step. After that, we know exactly how much space we need and can allocate the heap accordingly.

Definition at line 225 of file GlobalContinuousMemoryPool.h.

Referenced by toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::requestCompleteScatter().

◆ _globalMemoryPage

template<class T >
int toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::_globalMemoryPage
private

Equals UndefinedPoolReference as long as data is not gathered.

Definition at line 237 of file GlobalContinuousMemoryPool.h.

◆ _log

Definition at line 162 of file GlobalContinuousMemoryPool.h.

◆ container

List of pointers.

If data is scattered, that's our "only" link to the particles on the heap. Otherwise, it indexes a continuous sequence of particles from one page.

Definition at line 82 of file GlobalContinuousMemoryPool.h.

◆ UndefinedMemoryPage

template<class T >
constexpr int toolbox::particles::memorypool::GlobalContinuousMemoryPool< T >::UndefinedMemoryPage = -1
staticconstexprprivate
