17.16 Neural Network
These are some of the areas where artificial neural networks can be, and are being, used. The subject as such is vast and lies outside the scope of this book. Here, only a few basic points regarding the use of neural networks for global optimization of geophysical data are highlighted briefly.
There are millions of structural elements within the brain called neurons. Millions of synapses connect these neurons, so that the brain becomes a parallel computing system. Synapses are also structural units, responsible for the interaction between neurons. These neurons, with variable synaptic connections, function as information-processing units in the human brain. In an artificial neural network (ANN), the structure is made of artificial neurons interconnected with one another. These artificial neural networks are designed to function the way the brain does for a particular job, although the human brain is an immensely superior system. An ANN is trained for a particular job and attempts to do that job only. Hopfield's (1975) procedure is discussed briefly.
A neural network can be defined as follows: a neural network is a system composed of many simple processing elements operating in parallel, whose function is determined by the network structure and the strengths of the connecting links.
The processing is performed at the nodes, or neurons. In a global optimization problem, the structure of an ANN consists of an input layer, one or more hidden layers, and an output layer. The number of hidden layers can vary from 1 to n, where the value of n is problem dependent. For geophysical inverse problems, the input layer contains the data and the output layer contains all the model parameters obtained. Each node of the input layer is connected to each node of the first hidden layer; nodes within the input layer are not connected to each other.
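The layered, fully connected structure described above can be sketched in NumPy. This is a minimal illustration, not a complete inversion scheme: the layer sizes, the two hidden layers, and the tanh activation are all assumptions chosen for the example, with input nodes standing in for data points and output nodes for model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n_data input nodes (one per data point),
# one hidden-layer width, and n_params output nodes (one per
# model parameter to be retrieved).
n_data, n_hidden, n_params = 8, 5, 3

# Every node of one layer is connected to every node of the next;
# there are no connections between nodes within a layer. The
# connection weights are initially selected at random.
W1 = rng.standard_normal((n_hidden, n_data))   # input -> hidden 1
W2 = rng.standard_normal((n_hidden, n_hidden)) # hidden 1 -> hidden 2
W3 = rng.standard_normal((n_params, n_hidden)) # hidden 2 -> output

def forward(d):
    """Propagate a data vector through the net to model parameters."""
    h1 = np.tanh(W1 @ d)   # first hidden layer
    h2 = np.tanh(W2 @ h1)  # second hidden layer
    return W3 @ h2         # output layer: estimated parameters

d = rng.standard_normal(n_data)  # one synthetic data vector
m = forward(d)                   # one value per model parameter
```

Here `m` has length `n_params`, mirroring the statement that the number of output nodes equals the number of parameters to be retrieved.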
Each node of the output layer is connected to each node of the nth hidden layer. That is how a neural net is constructed. The number of data points of an inverse problem dictates the number of nodes in the input layer, and the number of parameters to be retrieved from a set of data equals the number of nodes in the output layer. How many hidden layers, and how many nodes each hidden layer requires, are decided through repeated experiments while trying to solve a particular problem in a particular field; the number of nodes in a hidden layer and the number of hidden layers are highly variable parameters. Each connection between the nodes is assigned a certain weight. These weights are initially selected at random. Then the learning, or training, process starts. It begins with known synthetic models for which both inputs and outputs are known. The discrepancy between the actual output and the output obtained with the randomly chosen synaptic weights is backpropagated through the perceptrons, and the weights are changed in successive iterations until the actual output and the computed output agree to within a prescribed minimum error level. This learning process is repeated several times, first with noise-free synthetic data and then with data mixed with Gaussian noise.
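The training process above can be sketched as a simple gradient-descent loop. Everything specific here is invented for illustration: the linear "forward operator" `G` that generates synthetic data, the single hidden layer, the learning rate, and the 5% noise level are assumptions, not the book's prescription. The essential features match the text: weights start random, the output discrepancy is backpropagated to adjust them over successive iterations, and the run is repeated with Gaussian noise mixed into the data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_hidden, n_params, n_train = 6, 10, 2, 200

# Known synthetic models and a toy linear forward operator
# (hypothetical); the resulting d_clean is noise-free synthetic data.
m_true = rng.standard_normal((n_train, n_params))
G = rng.standard_normal((n_data, n_params))
d_clean = m_true @ G.T

# Synaptic weights, initially selected at random.
W1 = 0.1 * rng.standard_normal((n_data, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, n_params))

def train(d, m, W1, W2, lr=1e-2, n_iter=500):
    """Backpropagate the output discrepancy, adjusting weights in place."""
    history = []
    for _ in range(n_iter):
        h = np.tanh(d @ W1)       # hidden layer
        out = h @ W2              # computed output (model parameters)
        err = out - m             # discrepancy with the actual output
        history.append(float(np.mean(err ** 2)))
        # gradients of the mean squared error, propagated backwards
        gW2 = h.T @ err / len(d)
        gh = (err @ W2.T) * (1 - h ** 2)   # through the tanh
        gW1 = d.T @ gh / len(d)
        W1 -= lr * gW1            # successive weight adjustments
        W2 -= lr * gW2
    return history

# First train with noise-free synthetic data ...
hist_clean = train(d_clean, m_true, W1, W2)
# ... then repeat with Gaussian noise mixed into the data.
d_noisy = d_clean + 0.05 * rng.standard_normal(d_clean.shape)
hist_noisy = train(d_noisy, m_true, W1, W2)
```

In a fuller implementation the loop would stop once the error dropped below the prescribed minimum level rather than after a fixed number of iterations; the recorded `history` shows the error shrinking as the weights are corrected.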