Why do some neurons respond so selectively to words, objects and faces?
Press release issued: 26 February 2014
Some neurons in the brain respond to words, objects and faces in a highly selective manner, consistent with the so-called 'grandmother cell' theory, whereby a particular neuron activates when a person sees, hears or otherwise senses a specific entity, such as his or her grandmother. For example, a single neuron in a human patient was found to respond to images of Jennifer Aniston but not to other people, objects or scenes.
So why do neurons respond in this remarkable way? A new study by Professor Jeff Bowers and colleagues at the University of Bristol argues that highly selective neural representations are well suited to co-activating multiple things, such as words, objects and faces, at the same time in short-term memory.
The researchers trained an artificial neural network to remember words in short-term memory. Like a brain, the network was composed of a set of interconnected units that activated in response to inputs; the network 'learnt' by changing the strength of connections between units. The researchers then recorded the activation of the units in response to a number of different words.
When the network was trained to store one word at a time in short-term memory, it learned highly distributed codes such that each unit responded to many different words. However, when it was trained to store multiple words at the same time in short-term memory it learned highly selective ('grandmother cell') units – that is, after training, single units responded to one word but not any other. This is much like the neurons in the cortex that respond to one face amongst many.
Why did the network learn such highly specific representations when trained to co-activate multiple words at the same time? Professor Bowers and colleagues argue that non-selective representations can support memory for a single word, given that a pattern of activation across many non-selective units can uniquely represent a specific word. However, when multiple such patterns are mixed together, the resulting blend is often ambiguous (the so-called 'superposition catastrophe').
This ambiguity is easily avoided, however, when the network learns to represent words in a highly selective manner. For example, if one unit codes for the word RACHEL, another for MONICA, and yet another for JOEY, there is no ambiguity when the three units are co-activated.
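The contrast between the two coding schemes can be illustrated with a toy example. The patterns below are invented for illustration and are not taken from the study: each word is a pattern of activation over four units, and 'storing' several words at once is modelled as superimposing their patterns.

```python
# Hypothetical distributed codes: each word activates several of 4 units.
# (Illustrative patterns only, not the representations learned in the paper.)
distributed = {
    "RACHEL": (1, 1, 0, 0),
    "MONICA": (0, 0, 1, 1),
    "JOEY":   (1, 0, 1, 0),
    "PHOEBE": (0, 1, 0, 1),
}

def blend(patterns):
    # Superimpose patterns: a unit is active if it is active in any pattern.
    return tuple(max(bits) for bits in zip(*patterns))

# Two different word pairs blend into the same pattern - the
# 'superposition catastrophe': the memory cannot say which pair was stored.
a = blend([distributed["RACHEL"], distributed["MONICA"]])  # (1, 1, 1, 1)
b = blend([distributed["JOEY"], distributed["PHOEBE"]])    # (1, 1, 1, 1)
assert a == b  # ambiguous blend

# Highly selective ('grandmother cell') codes: one dedicated unit per word.
selective = {
    "RACHEL": (1, 0, 0, 0),
    "MONICA": (0, 1, 0, 0),
    "JOEY":   (0, 0, 1, 0),
    "PHOEBE": (0, 0, 0, 1),
}

# Now each active unit names exactly one stored word, so any blend is
# unambiguous - different word sets always produce different patterns.
c = blend([selective["RACHEL"], selective["MONICA"]])  # (1, 1, 0, 0)
d = blend([selective["JOEY"], selective["PHOEBE"]])    # (0, 0, 1, 1)
assert c != d
```

With the distributed codes, the blend of RACHEL and MONICA is identical to the blend of JOEY and PHOEBE, so the stored words cannot be recovered; with the one-unit-per-word codes, reading off the active units recovers the stored set exactly.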
Professor Bowers said: "Our research provides a possible explanation for the discovery that single neurons in the cortex respond to information in a highly selective manner. It's possible that the cortex learns highly selective codes in order to support short-term memory."
The study is published in Psychological Review.
'Neural Networks Learn Highly Selective Representations in Order to Overcome the Superposition Catastrophe' by Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian, and Colin J. Davis in Psychological Review.