last collision. Because this algorithm is “asynchronous”, in the sense that the next event can occur anywhere in the system, it is difficult to parallelize [57]. For the serial algorithm, a double-buffering data structure is implemented,
which contains the ‘old’ status and the ‘new’ status, each consisting of: time of
event, positions, velocities, and event partners. When a collision occurs, the ‘old’
and ‘new’ status of the participating particles are exchanged. Thus, the former ‘new’
status becomes the actual ‘old’ one, while the former ‘old’ status becomes the ‘new’
one and is then free for the calculation and storage of possible future events. This
seemingly complicated exchange of information is carried out simply and very fast by exchanging only the pointers to the ‘new’ and ‘old’ status, respectively. Note
that the ‘old’ status of particle i has to be kept in memory, in order to update the time
of the next contact, $t_{ij}$, of particle i with any other object j if the latter, independently,
changed its status due to a collision with yet another particle. During the simulation
such updates may be necessary several times so that the predicted ‘new’ status has
to be modified.
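To illustrate the pointer exchange, here is a minimal C++ sketch of such a double-buffered status; the type and member names (Status, Particle, swapStatus) are illustrative choices, not taken from Ref. [28].

#include <utility>

// Hypothetical double-buffered event status of one particle.
struct Status {
    double t_event;   // time of the stored event
    double pos[3];    // position at t_event
    double vel[3];    // velocity at t_event
    int    partner;   // index of the event partner
};

struct Particle {
    Status  buf[2];          // storage for 'old' and 'new' status
    Status* oldS = &buf[0];
    Status* newS = &buf[1];

    // After a collision, the former 'new' status becomes the actual
    // 'old' one; the former 'old' slot is reused for future events.
    // Only the two pointers are exchanged; no particle data is copied.
    void swapStatus() { std::swap(oldS, newS); }
};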
The minimum of all $t_{ij}$ is stored in the ‘new’ status of particle i, together with the
corresponding partner j. Depending on the implementation, positions and velocities after the collision could also be computed at this point; this would, however, be a waste of computer time, since before the time $t_{ij}$, the predicted partners i and j might be involved in several
collisions with other particles, so that we apply a delayed update scheme [28]. The
minimum times of event, i.e. the times that indicate the next event for a certain particle, are stored in an ordered heap tree, such that the next event is found at
the top of the heap with a computational effort of O(1); changing the position of
one particle in the tree from the top to a new position needs O(log N) operations.
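The following C++ sketch shows one possible form of such an indexed binary min-heap over per-particle event times; the interface (top, reposition) and all names are assumptions for illustration, not the data structure of Ref. [28].

#include <cstddef>
#include <utility>
#include <vector>

// Indexed min-heap: heap[] maps heap slots to particle indices and
// pos[] maps particles back to slots, so a particle whose event time
// changed can be repositioned in O(log N); the global next event sits
// at the top and is read in O(1).
class EventHeap {
public:
    explicit EventHeap(std::vector<double> tEvent)
        : t(std::move(tEvent)), heap(t.size()), pos(t.size()) {
        for (std::size_t i = 0; i < t.size(); ++i) { heap[i] = i; pos[i] = i; }
        for (std::size_t i = t.size() / 2; i-- > 0; ) siftDown(i);
    }
    std::size_t top() const { return heap[0]; }    // next event, O(1)
    void reposition(std::size_t i, double tNew) {  // update, O(log N)
        t[i] = tNew;
        siftUp(pos[i]);
        siftDown(pos[i]);
    }
private:
    std::vector<double> t;          // next-event time per particle
    std::vector<std::size_t> heap;  // heap slot -> particle index
    std::vector<std::size_t> pos;   // particle index -> heap slot
    void swapSlots(std::size_t a, std::size_t b) {
        std::swap(heap[a], heap[b]);
        pos[heap[a]] = a;
        pos[heap[b]] = b;
    }
    void siftUp(std::size_t k) {
        while (k > 0 && t[heap[k]] < t[heap[(k - 1) / 2]]) {
            swapSlots(k, (k - 1) / 2);
            k = (k - 1) / 2;
        }
    }
    void siftDown(std::size_t k) {
        for (;;) {
            std::size_t l = 2 * k + 1, r = l + 1, m = k;
            if (l < heap.size() && t[heap[l]] < t[heap[m]]) m = l;
            if (r < heap.size() && t[heap[r]] < t[heap[m]]) m = r;
            if (m == k) break;
            swapSlots(k, m);
            k = m;
        }
    }
};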
The search for possible collision partners is accelerated by the use of a standard
linked-cell data structure and consumes O(1) of numerical resources per particle. In
total, this results in a numerical effort of O(N log N) for N particles. For a detailed
description of the algorithm see Ref. [28]. Using all these algorithmic tricks, we
are able to simulate about $10^5$ particles within reasonable time on a low-end PC
[45], where the particle number is more limited by memory than by CPU power.
Parallelization, however, is a means to overcome the limits of one processor [57].
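A minimal 2D sketch of the linked-cell structure mentioned above is given here, assuming a periodic square box of side L with particle coordinates in [0, L) and cells no smaller than the particle diameter; all names are illustrative.

#include <algorithm>
#include <cstddef>
#include <vector>

// head[c] holds one particle of cell c (-1 if empty) and next[i] chains
// the remaining particles of the same cell; building the lists and
// scanning the 3x3 block of cells around one particle both cost O(1)
// per particle, since the occupation per cell is bounded.
struct CellList {
    int nc;                    // cells per direction
    double cellSize;           // edge length of one cell
    std::vector<int> head;     // cell index -> first particle
    std::vector<int> next;     // particle -> next particle in its cell

    CellList(double L, double minCellEdge, std::size_t nParticles)
        : nc(static_cast<int>(L / minCellEdge)),
          cellSize(L / nc),
          head(static_cast<std::size_t>(nc) * nc, -1),
          next(nParticles, -1) {}

    void build(const std::vector<double>& x, const std::vector<double>& y) {
        std::fill(head.begin(), head.end(), -1);
        for (std::size_t i = 0; i < x.size(); ++i) {
            int cx = static_cast<int>(x[i] / cellSize) % nc;
            int cy = static_cast<int>(y[i] / cellSize) % nc;
            int c  = cy * nc + cx;
            next[i] = head[c];               // prepend i to its cell's list
            head[c] = static_cast<int>(i);
        }
    }

    // Visit all candidate collision partners of a particle at (x, y):
    // only the 3x3 block of neighboring cells has to be searched.
    template <typename Visit>
    void forNeighbors(double x, double y, Visit visit) const {
        int cx = static_cast<int>(x / cellSize);
        int cy = static_cast<int>(y / cellSize);
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int c = ((cy + dy + nc) % nc) * nc + ((cx + dx + nc) % nc);
                for (int j = head[c]; j != -1; j = next[j]) visit(j);
            }
    }
};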
As a final remark concerning ED, one should note that the very assumptions that make an event-driven algorithm possible also limit the
applicability of this method. Within their range of applicability, ED simulations are
typically much faster than DEM simulations, since the former account for a collision in one basic operation (collision matrix), whereas the latter require about one
hundred basic steps (integration time steps). Note that this statement is also true in
the dense regime. In the dilute regime, both methods give equivalent results, because
collisions are mostly binary [41]. When the system becomes denser, multi-particle
collisions can occur and the rigidity assumption within the ED hard sphere approach
becomes invalid.
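To make the “one basic operation” explicit, the following sketch applies the standard instantaneous collision rule for two equal-mass smooth hard spheres with normal restitution coefficient e (e = 1 is the elastic case); this is the generic textbook rule, not necessarily the exact collision matrix used in Refs. [28, 41].

#include <array>

using Vec3 = std::array<double, 3>;

// Instantaneous velocity update of two equal-mass smooth hard spheres
// at contact; nHat is the unit normal from particle i to particle j.
void collide(Vec3& vi, Vec3& vj, const Vec3& nHat, double e) {
    // Relative velocity component along the contact normal.
    double vn = 0.0;
    for (int k = 0; k < 3; ++k) vn += (vi[k] - vj[k]) * nHat[k];
    // Exchange of normal momentum; tangential components are unchanged
    // (smooth spheres: no tangential restitution in this sketch).
    const double dp = 0.5 * (1.0 + e) * vn;
    for (int k = 0; k < 3; ++k) {
        vi[k] -= dp * nHat[k];
        vj[k] += dp * nHat[k];
    }
}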
The most striking difference between hard and soft spheres is the fact that soft
particles dissipate less energy when they are in contact with many others of their
kind. In the following chapter, the so-called TC model is discussed as a means to account for the contact duration $t_c$ in the hard sphere model.