Memory: retrieval, learning and forgetting


Copyright George Christos 2003





In the brain, memory is a representation of the past. 

Memory clearly serves an important biological function: it gives animals a survival edge, since they can draw on previously acquired knowledge.

In the brain, when a particular channel between two neurons is used often, it becomes ‘enlarged’ so that it is even more accessible in the future; conversely, if a channel is not used, its capacity diminishes.  This is how learning takes place in a nervous system.  By directing the flow of electricity through the same channels as before, the brain is able to reinitiate (or recall) the same electrical patterns, or memory states, that caused the change in the first place.
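
To make this concrete, here is a minimal sketch of such use-dependent change in the spirit of Hebbian learning, assuming binary (+1/-1) neuron activities: connections between co-active neurons are strengthened, and all connections slowly decay with disuse.  The function name hebbian_step and the learning and decay rates are illustrative choices, not values taken from any particular brain model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 8
    weights = np.zeros((n_neurons, n_neurons))   # synaptic 'channel' strengths

    def hebbian_step(weights, activity, eta=0.1, decay=0.01):
        """Strengthen channels used together; let unused channels fade."""
        # Co-active pairs reinforce their connection (Hebb's rule), while a
        # slow decay models the diminishing capacity of unused channels.
        weights = (1 - decay) * weights + eta * np.outer(activity, activity)
        np.fill_diagonal(weights, 0.0)           # no self-connections
        return weights

    # Re-evoking the same firing pattern repeatedly 'enlarges' its channels.
    pattern = rng.choice([-1, 1], size=n_neurons)
    for _ in range(20):
        weights = hebbian_step(weights, pattern)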

In the human brain there are on the order of one million billion (10^15) connections between neurons that can be varied in this way, a figure consistent with roughly 10^11 neurons each forming on the order of 10^4 synapses.  The flow of information (or electrical current) across the junctions where two neurons (almost) touch each other, called synapses, is controlled by the transfer of chemicals (called neurotransmitters) across a small gap (the synaptic cleft).  The amount of neurotransmitter transferred across this gap is variable, and this is how memory is actually stored in the brain.  Without synaptic gaps and variable neurotransmitter flow, memory could not be stored.
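
Neural network models summarise this by a single adjustable weight per synapse: the neuron fires when its total weighted input exceeds a threshold.  The sketch below assumes that standard abstraction; the function name and the particular weights and threshold are illustrative, not drawn from any specific model.

    import numpy as np

    def postsynaptic_response(presynaptic_firing, synaptic_weights, threshold=0.0):
        """Fire (+1) if the total weighted input exceeds the threshold, else -1."""
        # Each weight stands in for the variable amount of neurotransmitter
        # released at one synapse; changing the weights changes the response.
        total_input = np.dot(synaptic_weights, presynaptic_firing)
        return 1 if total_input > threshold else -1

    # The same input produces different responses under different synaptic
    # strengths, which is what makes the weights a substrate for memory.
    inputs = np.array([1, 1, -1])
    print(postsynaptic_response(inputs, np.array([0.2, 0.1, 0.9])))  # prints -1
    print(postsynaptic_response(inputs, np.array([0.9, 0.8, 0.1])))  # prints 1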

A memory is represented by a certain stabilised pattern of firing neurons.  When the brain receives a meaningful input, it processes the input until it settles into a stationary state (also called an attractor), where the activation states of the neurons collectively stabilise, each persisting in the same state of excitation (or quiescence).  This is how a memory is recalled, or how we recognise that an input is familiar to us.  Memories are not stored in any one particular neuron, but are distributed over a wide area of the brain, involving many neurons, possibly many millions.  Different memories may also share active neurons in their representations, so in this sense memories overlap each other.  This is quite unlike the way memory is stored in a computer, where each item has a unique address and a separate location where it is stored.  When a programmer wants to retrieve that item, they simply use its address.  In the brain, memory is instead retrieved by the content of the input (it is 'content addressable'). 

If sufficient information or cues are provided, the brain will be able to retrieve the memory.  This is a useful attribute of brain function because one can retrieve a memory with only part of its ‘address’, whereas a computer would not respond unless it was given the entire address precisely.  This explains why we are able to recognise someone we ‘know’, even though they may look quite different to when we last saw them.  Because neurons work together, or collectively, the brain is quite robust to errors in the input and to noise in general.  If a small number of neurons are in the incorrect firing mode for a particular memory, the other neurons collectively correct them.  A computer, on the other hand, stops executing what it was meant to do if a single instruction, or even a single bit, is wrong.
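
Both properties, recall from a partial or noisy cue and collective error correction, can be illustrated with a toy attractor network of the Hopfield type mentioned below.  This is a minimal sketch assuming binary (+1/-1) neurons and the standard outer-product (Hebbian) storage rule; the network size, the number of stored patterns and the corruption level are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100                                        # number of neurons
    patterns = rng.choice([-1, 1], size=(3, n))    # three stored 'memories'

    # Store the memories in the synaptic weights (outer-product rule).
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)                       # no self-connections

    def recall(cue, W, steps=10):
        """Let the network settle towards an attractor from a given cue."""
        state = cue.astype(float)
        for _ in range(steps):
            state = np.sign(W @ state)             # collective update sweep
            state[state == 0] = 1.0
        return state

    # Corrupt 20% of one memory's neurons, then present it as the cue.
    cue = patterns[0].copy()
    flipped = rng.choice(n, size=20, replace=False)
    cue[flipped] *= -1

    retrieved = recall(cue, W)
    print("fraction of neurons correct:", np.mean(retrieved == patterns[0]))

With only three patterns stored across a hundred neurons the memory load is low, so the corrupted cue typically relaxes back into the correct attractor: the majority of correctly firing neurons collectively pull the flipped ones back into line.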

Most of my actual research work in this area involves 'attractor neural networks', like the Hopfield model.  It is quite amazing that such simple models offer heuristic explanations of some of our most important brain functions.  Many of my own ideas about how the brain works have originated from studying simple models like this.