
Given the general definition of a symbol, "a figurative sign which stands for
certain values", the question of its computational implementation can be
considered beyond the von Neumann architecture framework. In von Neumann com-
puters, a symbol can be represented by a given memory register filled with one
of several possible values. For binding symbols, this content-based coding re-
quires an extra dimension, aimed at allowing their mutual addressing. Initially
restricted to the management of data by the processor, pointers have been in-
cluded in the data on which symbolic programming languages work, so as to
temporarily bind memory registers. The spatial relations, or virtual connectiv-
ity of these registers in working memory, constitute a topological coding which
has been added to the content-based one. Accordingly, an architecture such as
a GPN, which is already based on the topological relations of its memory units,
can be complemented with the content of these units for the same symbolic programming
purpose. These two coding dimensions are used in inverse ways: whereas a symbol
is represented in von Neumann computers by a given memory location receiving a
variable content, the same symbol is represented in GPNs by a given signal that
carries a particular content and is propagated towards variable memory locations
(Beroule, 1990). The main advantage of this alternative representation is that it
allows several values to be assigned simultaneously to a given symbol, since a
signal can be propagated in parallel towards several locations.
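
To make this contrast concrete, a minimal Python sketch is given below. It is only
an illustration of the two coding schemes, not an implementation of a GPN: the
Location class, the propagate function and the use of a string tag as signal
content are assumptions introduced here for exposition.

    # Von Neumann style: a symbol is a fixed memory location whose content
    # varies, and which can hold only one value at a time.
    registers = {"COLOUR": None}
    registers["COLOUR"] = "red"
    registers["COLOUR"] = "green"   # overwrites "red": one value at a time

    # GPN style (illustrative): a symbol is a signal with a fixed content (a
    # tag), propagated towards variable memory locations; several locations
    # can receive the same signal in parallel.
    class Location:
        """A memory unit; it stores the tags of the signals it has received."""
        def __init__(self, value):
            self.value = value
            self.received = set()

        def receive(self, signal_tag):
            self.received.add(signal_tag)

    def propagate(signal_tag, locations):
        """Broadcast the same signal content towards several locations at once."""
        for loc in locations:
            loc.receive(signal_tag)

    red, green, round_ = Location("red"), Location("green"), Location("round")
    propagate("COLOUR", [red, green])   # COLOUR bound to two values in one step

    # The values currently bound to the symbol can then be collected together:
    print([loc.value for loc in (red, green, round_) if "COLOUR" in loc.received])
    # ['red', 'green']
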
The adequacy of every symbol in the current context can thus be evaluated in one
shot, provided the architecture is appropriate. By contrast, a register in the
sequential computer cannot receive more than one value at a time. Another advantage
of the proposed representation is that it does not require any change to the data
structure, whereas a new combination of symbols to be memorized must be stored in
the knowledge base of von Neumann computers. In contrast with classical AI
approaches such as conceptual graphs (Sowa, 1983), symbols are here bound through
the propagation of the same signal content rather than through the construction of
specific graphs. But then a new signal content must be dynamically allocated whenever
a new combination of symbols occurs. There should also be inferential mecha-
nisms inherent to the architecture for retrieving specific symbol combinations in
working memory. This manifestation of semantic compositionality is addressed
now, together with the type of 'content' an internal signal could convey.
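
Before turning to the temporal format of these signals, the binding scheme just
outlined can be summarised by a rough functional analogy in Python. The sketch
replaces actual signal propagation by a plain dictionary standing for working
memory, a freshly allocated integer tag stands for a new signal content, and the
names bind and combinations_of are hypothetical.

    from itertools import count

    _new_content = count(1)   # dynamic allocation of new signal contents
    working_memory = {}       # symbol -> set of signal contents it carries

    def bind(symbols):
        """Bind a new combination of symbols by a freshly allocated content."""
        tag = next(_new_content)
        for s in symbols:
            working_memory.setdefault(s, set()).add(tag)
        return tag

    def combinations_of(symbol):
        """Retrieve every memorized combination in which the symbol takes part."""
        tags = working_memory.get(symbol, set())
        return [{s for s, contents in working_memory.items() if tag in contents}
                for tag in sorted(tags)]

    bind({"AGENT", "JOHN"})
    bind({"AGENT", "MARY"})
    print(combinations_of("AGENT"))
    # two combinations are recovered: {AGENT, JOHN} and {AGENT, MARY}

In the architecture discussed here, no such table needs to be built: the shared
content is carried by the propagated signal itself, which is why no change of
data structure is required.
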
Although GPN internal signals can be specified in several ways for represent-
ing symbols, the precise location in time of a pulse has been demonstrated to
be the most convenient format (Martin, 1995). First, with regard to the man-
agement of symbols, this temporal coding can be processed through the basic
computational mechanism of GPNs: coincidence detection. Second, this solu-
tion allows several symbols (pulses) to share the same value (memory location)
without interfering, since each symbol occupies a specific time slice. Third, it
is compatible with the GPN management of noisy and partial data, as shown
in the previous section. Because of coincidence detection, a pulse signal propa-
gating through a frame structure for data-retrieval purposes will activate all the
previously assigned slot values with which this pulse is synchronized (synchrony
coding).
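
The following sketch illustrates this synchrony-based retrieval under simplifying
assumptions: symbols are reduced to pulse times, the frame is modelled as a plain
dictionary mapping slots to (value, pulse time) bindings, and coincidence detection
is approximated by a fixed tolerance. The names frame, coincident and activate, as
well as the TIME_SLICE value, are hypothetical.

    TIME_SLICE = 1.0   # width of a time slice (arbitrary unit)

    # Frame structure in working memory: slot -> (value, pulse time) bindings.
    frame = {
        "agent":  [("JOHN", 3.0), ("MARY", 7.0)],
        "action": [("GIVES", 3.1), ("TAKES", 7.2)],
        "object": [("BOOK", 2.9)],
    }

    def coincident(t1, t2, tolerance=TIME_SLICE / 2):
        """Coincidence detection: the two pulses fall within the same time
        slice; the tolerance also accommodates noisy or imprecise timing."""
        return abs(t1 - t2) <= tolerance

    def activate(frame, probe_pulse_time):
        """Propagate a probe pulse through the frame: every slot value whose
        assigned pulse is synchronized with the probe is activated."""
        return {slot: [value for value, t in bindings
                       if coincident(t, probe_pulse_time)]
                for slot, bindings in frame.items()}

    print(activate(frame, 3.0))
    # {'agent': ['JOHN'], 'action': ['GIVES'], 'object': ['BOOK']}

As shown in Fig. 16, a semantic frame could be represented by a tree-like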