
Publication - Dr Ella Gale

    Neuromorphic computation with spiking memristors

    Habituation, experimental instantiation of logic gates and a novel sequence-sensitive perceptron model

    Citation

    Gale, EM, 2019, ‘Neuromorphic computation with spiking memristors: Habituation, experimental instantiation of logic gates and a novel sequence-sensitive perceptron model’. Faraday Discussions, vol. 213, pp. 521–551.

    Abstract

    Memristors have been compared to neurons and synapses, suggesting they would be good for neuromorphic computing. A change in voltage across a memristor causes a current spike which imparts a short-term memory to the memristor, allowing for through-time computation, which can perform arithmetical operations and sequential logic, or model short-term habituation to a stimulus. Using simple physical rules, simple logic gates such as XOR, and novel, more complex, gates such as the arithmetic full adder (AFA) can be instantiated in sol-gel TiO2 plastic memristors. The adder makes use of the memristor's short-term memory to add together three binary values and outputs the sum, the carry digit and even the order they were input in, allowing for logically (but not physically) reversible computation. Only a single memristor is required to instantiate each gate, as additional input/output ports can be replaced with extra time-steps, allowing a single memristor to do a hitherto unexpectedly large amount of computation; this may mitigate the memristor's slow operation speed and may relate to how neurons do similarly large computations at slow operation speeds. These logic gates can be understood by modelling the memristors as a novel type of perceptron: one which is sensitive to input order. The memristor's short-term memory can change the input weights applied to later inputs, and thus the memristor gates cannot be accurately described by a single perceptron, requiring either a network of time-invariant perceptrons or a sequence-sensitive self-reprogrammable perceptron. Thus, the AFA is best described as a sequence-sensitive perceptron that sorts binary inputs into classes corresponding to the arithmetical sum of the inputs. Co-development of memristor hardware alongside software (sequence-sensitive perceptron) models in trained neural networks would allow the porting of modern deep neural-network architectures to low-power hardware neural-net chips.
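    The order-sensitivity idea can be illustrated with a toy model (a hypothetical sketch, not the paper's model or code): a perceptron whose effective weight at each time-step is modulated by a decaying short-term memory trace of earlier inputs, so the same multiset of binary inputs produces different responses, and a different final memory state, depending on the order in which they arrive.

    ```python
    def sequence_sensitive_perceptron(inputs, base_weight=1.0, decay=0.5):
        """Toy order-sensitive perceptron (illustrative only).

        Binary inputs arrive one per time-step. A decaying memory trace of
        earlier inputs is added to the weight applied to later inputs, so
        output and final state depend on input order, not just input values.
        The parameters base_weight and decay are assumptions of this sketch.
        """
        memory = 0.0          # short-term memory trace, decays each step
        outputs = []
        for x in inputs:
            weight = base_weight + memory   # earlier inputs boost later weights
            outputs.append(weight * x)
            memory = decay * (memory + x)   # update and decay the trace
        return outputs, memory

    # Same multiset of inputs {1, 1, 0}, different orders:
    out_a, mem_a = sequence_sensitive_perceptron([1, 1, 0])
    out_b, mem_b = sequence_sensitive_perceptron([0, 1, 1])
    print(out_a, mem_a)   # [1.0, 1.5, 0.0] 0.375
    print(out_b, mem_b)   # [0.0, 1.0, 1.5] 0.75
    ```

    The per-step outputs and the final memory state both differ between the two orderings even though the inputs sum to the same value, which is the sense in which a single time-invariant perceptron cannot describe such a device.
    
    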

    Full details in the University publications repository