Extracting invariant features in an unsupervised manner is crucial for complex computations such as object recognition, analysing music or understanding speech. While various algorithms have been proposed for this task, Slow Feature Analysis (SFA) uses time as a means of detecting these invariants, extracting the slowly time-varying components of the input signals. In this work, we address the question of how such an algorithm can be implemented by neurons, and apply it in the context of audio stimuli. We propose a projected-gradient implementation of SFA that can be adapted to a Hebbian-like learning rule for biologically plausible neuron models. Furthermore, we show that a Spike-Timing-Dependent Plasticity (STDP) learning rule, shaped as a smoothed second derivative, implements SFA for spiking neurons with plastic synapses. The theory is supported by numerical simulations, and to illustrate a simple use of SFA we apply it to auditory signals. We show that a single SFA neuron can learn to recognize the pitch and the tempo in sound recordings.
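To make the core idea concrete, here is a minimal sketch of linear SFA in NumPy: whiten the input so every candidate output has unit variance, then pick the projection whose temporal derivative has the smallest variance. This is a standard batch formulation for illustration only, not the neural or STDP-based implementation described in the abstract; the function name `linear_sfa` and the toy two-channel signal are hypothetical.

```python
import numpy as np

def linear_sfa(x):
    """x: array of shape (T, d). Returns the weights of the slowest linear feature."""
    # Centre and whiten the input so all candidate outputs have unit variance.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whitening = evecs / np.sqrt(evals)   # columns whiten x
    z = x @ whitening
    # Covariance of the temporal derivative of the whitened signal.
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    # The slowest feature is the eigenvector with the smallest eigenvalue
    # (eigh returns eigenvalues in ascending order).
    _, devecs = np.linalg.eigh(dcov)
    return whitening @ devecs[:, 0]

# Toy example: a slow sinusoid mixed with fast noise across two channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
slow = np.sin(0.05 * t)                  # slowly varying source
fast = rng.standard_normal(t.size)       # quickly varying source
x = np.stack([slow + 0.1 * fast, fast + 0.1 * slow], axis=1)
w = linear_sfa(x)
y = (x - x.mean(axis=0)) @ w
# The extracted output should correlate strongly with the slow source.
corr = abs(np.corrcoef(y, slow)[0, 1])
```

On this toy mixture the recovered output `y` tracks the slow sinusoid rather than the noise, which is the behaviour the abstract's neural learning rules are designed to reproduce online with spiking neurons.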
Guillaume Bellec obtained a master's degree in Mathematical Optimization at ENSTA ParisTech (Paris, France) and completed the highly competitive MVA master's program (Maths, Learning and Vision) at ENS Cachan (France). These were opportunities to complete several lab rotations in machine learning and music information retrieval with Tillman Weyde at City University (London, UK) and Anders Friberg at KTH (Stockholm, Sweden). Before starting his PhD, he spent 9 months in the lab of Computational Neuroscience of the Sensory Systems with Romain Brette (Paris, France).