Random Neural Networks (RNN) are a distinct class of Neural Networks (NN),
proposed by E. Gelenbe in the early 1990s, with appealing mathematical
properties and very good performance. Another very specific property of
these tools is that they are also Markovian open queueing networks with
two different types of customers: positive and negative ones (negative
customers actually behave as signals in the model; they are not observable,
only their effects are). In other words, the same mathematical object can
be seen either as a particular NN or as an open network of queues. When
RNN are viewed as networks of queues, they are called G-networks.
A critical feature of these models is that they belong to
the product form family, that is, their steady-state distribution is a
product of factors, one per node in the system. This is the key to
their nice mathematical properties, which allow for very efficient analysis.
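To make the product-form property concrete, the sketch below solves the standard signal-flow (traffic) equations of Gelenbe's model for a tiny network: each neuron i fires at rate r[i], and its stationary load is q[i] = lambda_plus[i] / (r[i] + lambda_minus[i]), where lambda_plus and lambda_minus collect external and internal positive/negative signal rates. The two-neuron topology and all numerical parameter values are invented for illustration; this is a minimal sketch of the model's equations, not code from the tutorial.

```python
import numpy as np

def solve_rnn(r, p_plus, p_minus, Lam, lam, iters=1000, tol=1e-12):
    """Fixed-point iteration for the steady-state loads q of a G-network.

    r        : firing rate of each neuron
    p_plus   : p_plus[j, i] = prob. a spike from j reaches i as a positive signal
    p_minus  : p_minus[j, i] = prob. a spike from j reaches i as a negative signal
    Lam, lam : external positive / negative arrival rates at each neuron
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        # Total positive and negative signal rates entering each neuron.
        lam_plus = Lam + (q * r) @ p_plus
        lam_minus = lam + (q * r) @ p_minus
        q_new = lam_plus / (r + lam_minus)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Hypothetical two-neuron example: neuron 0 excites neuron 1,
# neuron 1 inhibits neuron 0. All values are made up for illustration.
r = np.array([1.0, 1.0])
p_plus = np.array([[0.0, 0.5], [0.0, 0.0]])
p_minus = np.array([[0.0, 0.0], [0.3, 0.0]])
Lam = np.array([0.4, 0.1])
lam = np.array([0.0, 0.0])

q = solve_rnn(r, p_plus, p_minus, Lam, lam)

def steady_state_prob(k, q):
    """Product form: P(k_1,...,k_n) = prod_i (1 - q_i) * q_i**k_i, valid if all q_i < 1."""
    return np.prod((1 - q) * q ** k)
```

When every q[i] < 1 the network is stable, and the joint stationary distribution factorizes node by node exactly as the product-form property states; this is what makes the analysis so efficient.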
Like other NN, RNN have been used in many fields, which can be roughly
classified into two very different types: learning (pattern recognition,
pattern synthesis, classification problems, etc.) and optimization. In the
first type of application, the RNN is trained to perform some task
(formally, a sequence of RNN is iteratively built that converges towards
an RNN that "knows", in some way, how to perform the task). In the second,
the RNN is tuned to find a pseudo-optimum of some complex
function f() (formally, again, a sequence of RNN is iteratively built in
such a way that each RNN is associated with a new point x of f()'s
domain for which f(x) improves on previously obtained values).
This tutorial will present the mathematical model and, in some detail,
one example of each of the two types of applications. Both examples come
from the networking area and have been developed by the author and
co-workers; both address performance issues and illustrate the excellent
performance of these tools.
- The first application example (in learning) describes how to use RNN
to quantify the quality of a multimedia stream as perceived by
the final user, after the stream has (perhaps) been perturbed by an IP
network. In other words, it deals with the performance of a streaming
communication, focusing on the perceived final quality (and not on
performance metrics that are indirect with respect to the user, such as losses).
- The second example (in optimization) shows how to use the tool to
design the topology of the access sub-network of a communication network,
subject to different constraints. In other words, it deals with
designing a communication network having good performance or
dependability properties at minimal cost.