Alt-BEAM Archive

Message #00746



To: Noam Rudnick rudnick1@cwix.com, beam beam@corp.sgi.com
From: Sean Rigter rigter@cafe.net
Date: Sun, 21 Feb 1999 15:51:38 -0800
Subject: [alt-beam] Re: associative memories



Wilf Rigter wrote:

>What's the question?

Noam Rudnick wrote:

>Good question!

Hi Noam,

I'm a bit out of my element when it comes to the details of associative
memory networks but I can sketch out requirements and a possible
mechanism for such networks based on intuition and common sense (maybe
others higher up on the learning curve could help):

> At what point does it become impossible to hold any more
> memories? Never?

The amount of data you can store will depend on the configuration
(number of nodes and layers) of the associative memory (AM) network.
The larger the number of nodes, the higher the resolution of the
decision process.

>I was thinking more like visual memories.

Let's say you have a bot with an AM network whose inputs are connected
to a visual matrix of 16 photo sensors. In this simple case the sensors
are either on or off depending on the light level at each sensor and
its threshold. In addition there is a "learn" input which is connected
to a "danger" sensor. The "danger" signal occurs, for example, when
sensing a high current drain indicating someone's robbing your energy
stores (or you've just fallen into the ocean or your motors are
stalled). To make the AM useful and do something in response to danger,
an output is connected to the bot's reverse (backup) motion control.
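Just to pin down the wiring, here's a rough Python sketch of the bot's
I/O as I imagine it (the thresholds, the current drain limit and the
function names are all made up for illustration, not a real design):

# Hypothetical sketch: 16 photo sensors -> 16 bit visual word, plus a
# "danger" flag from a current drain sensor.

LIGHT_THRESHOLD = 0.5    # made-up per-sensor light threshold
CURRENT_LIMIT = 2.0      # made-up current drain (amps) that means trouble

def read_visual_word(sensor_levels):
    # turn 16 analog light levels into a 16 bit on/off pattern
    return [1 if level > LIGHT_THRESHOLD else 0 for level in sensor_levels]

def danger(current_drain):
    # "danger": someone's robbing the energy stores (or the motors stalled)
    return current_drain > CURRENT_LIMIT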

Anyway the idea is to have a circuit which can be trained with visual
patterns which represent danger conditions and which can be avoided by
backing up. After sufficient "experience", a visual pattern similar to
the danger condition produces an output which backs up the bot before
the danger signal itself is triggered (anticipation/avoidance).

The AM network's association of a visual pattern with a "danger
response" (run away!) comes from the learned "pathways" between input
and the output layers.

So imagine an AM network 16 bits (nodes) wide and 16 layers deep for a
total of 256 AM nodes. The visual input is a 16 bit word that is
applied to the network input layer. Whenever a "danger" condition
occurs, each network layer is "updated": the input layer with the
output of the 16 bit visual matrix and each subsequent layer with the
state of the prior layer. This means that the nodes in each layer
change state depending on the state of the inputs to the node,
including their current state (feedback), the state of their "lateral
neighbours" (weight/threshold), and the state of the upstream layer
nodes (feedforward). Therefore the network nodes in the various layers
are turned on or off depending on the history of all previous "danger"
events, establishing a pathway for matching patterns to reach the
output layer and turn on the "reverse" control signal.

So the network now responds to patterns which closely match previous
patterns associated with danger and produces a corresponding "reverse"
output signal.
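To make the mechanism concrete, here's a toy software model of the
16x16 idea in Python. The update rule (feedforward counting double, a
fixed threshold of 2, simple AND propagation on recall) is just my
guess at one way the feedback / lateral / feedforward inputs could
combine, so treat it as a sketch, not the circuit:

class AMNet:
    # Toy 16 wide x 16 deep associative memory: 256 on/off nodes.
    def __init__(self, width=16, depth=16):
        self.width, self.depth = width, depth
        self.nodes = [[0] * width for _ in range(depth)]

    def _update_layer(self, layer, upstream):
        # Each node looks at its own state (feedback), its lateral
        # neighbours, and the matching upstream node (feedforward).
        new = []
        for i in range(self.width):
            left = layer[i - 1] if i > 0 else 0
            right = layer[i + 1] if i < self.width - 1 else 0
            # made-up weighting: feedforward counts double, fire if sum >= 2
            total = 2 * upstream[i] + layer[i] + left + right
            new.append(1 if total >= 2 else 0)
        return new

    def learn(self, visual_word):
        # Called whenever "danger" is on: the input layer takes the visual
        # word, each deeper layer takes the prior layer's previous state,
        # so the pattern works its way deeper with every danger event.
        old = [row[:] for row in self.nodes]
        upstream = visual_word
        for d in range(self.depth):
            self.nodes[d] = self._update_layer(old[d], upstream)
            upstream = old[d]

    def recall(self, visual_word):
        # A pattern that matches the learned pathway reaches the output
        # layer; any output node on means "reverse".
        signal = visual_word
        for layer in self.nodes:
            signal = [s & n for s, n in zip(signal, layer)]
        return any(signal)

With enough "experience" the danger pattern reaches the output layer
and similar patterns trigger the backup before the danger signal
itself does:

net = AMNet()
danger_pattern = [1, 1, 1, 1] + [0] * 12
for _ in range(20):                  # repeated danger events
    net.learn(danger_pattern)
print(net.recall(danger_pattern))    # True -> back up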

>(to paraphrase Noam): In order to associate two visual patterns, is this
>just a matter of adjusting the resistances between each neuron?

Yes and no. Adjusting weighting factors between AM nodes is part of the
filtering process that sorts the information from the noise (or tailors
the response of the AM to information), but after filtering, the
information itself is learned and stored in the network node memories.

The remembered pathway becomes the "function" of the network: to
propagate visual patterns to the output layer when those patterns
closely match the patterns remembered at "danger" time in the previous
example.

To associate 2 patterns and initiate the same response, the "danger"
signal can be replaced with the output of another network which
remembers a different pattern with a response. So when the pattern in
network A is present at its inputs, the pattern at the network B inputs
is propagated into the B network, associating the B pattern with the A
pattern and output.
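In terms of the toy model above (same caveats), the association is just
cross-wiring one net's recall output into the other net's learn input:

def associate(net_a, net_b, a_inputs, b_inputs):
    # Net A already responds to its own learned pattern; when it fires,
    # net B "learns" whatever is on its inputs, so pattern B ends up
    # driving the same "reverse" response as pattern A.
    if net_a.recall(a_inputs):
        net_b.learn(b_inputs)
        return True
    return False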

The filtering (weighting) is needed to concentrate information content.
Since there may be random visual patterns (noise) at danger time which
don't add useful information to the AM network, additional input layer
processing can be used to increase the "information to noise ratio".
For example, a logical function (weighting factor) at the pattern
inputs can be used to permit "mostly significant" patterns such as
visual edges or repeating patterns like stripes to propagate pathways
to the output layer.
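An edge filter is about the simplest version of that kind of input
weighting; a rough sketch (again, just one possible "logical function",
my choice for illustration):

def edge_filter(visual_word):
    # Pass only the bits where adjacent sensors disagree (edges, stripes);
    # a uniform flood of light or dark carries no information and is dropped.
    edges = [a ^ b for a, b in zip(visual_word, visual_word[1:])]
    return edges + [0]               # pad back to 16 bits

Feeding edge_filter(read_visual_word(...)) into the AM instead of the
raw word would keep a plain change in ambient light from building
pathways, while stripes and edges still get through.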

The end result is an AM network which can recognize visual input
patterns through a process of learning and initiates avoidance control
based on the patterns' association with "danger".

> I hope these examples cleared up any misunderstandings about my original
> question.

Yes! The answers are as good as the examples and questions of the guy
who is asking and the knowledge of the guy who is answering.

So: excellent examples and questions but warning: I'll now go and read
the literature and find out if I'm even close 8^)

> > From: Sean Rigter
> > To: Noam Rudnick ; beam
> > Subject: Re: associative memories
> > Date: Sunday, February 21, 1999 3:49 PM
> >
> > Hi Noam,
> >
> > It's Wilf, Sean is my oldest (1 of 5) offspring who is kind enough to
> > let me use his fast internet access computer while I'm home.
> >
> > The question is : what is the question? Memory of what?
>

------------------------------------------------------------------------
eGroup home: http://www.eGroups.com/list/alt-beam
Free Web-based e-mail groups by eGroups.com
