Computing with large recurrent neuronal networks
Our research is focused on understanding how cognition and neuronal real-time information processing can arise from the collective self-organization of components interacting across spatial and temporal scales. I firmly believe that this requires combining data analytics and time-series analysis of cognitive processes, behaviour, and neuronal activity with machine learning and computational models to achieve a unifying understanding.
(a) A stimulus pattern is defined, and uncorrelated random spike trains are sent to the subset of cells specified by the pattern (black cells, b). (b) Activity in the excitatory network is measured at various locations, here two fixed measurement sites shown in blue. (c) The degree of zero-time-lag synchrony of the measured locations varies with the stimulus pattern. (cf: Neural Computation, Volume 29, Issue 9, 'Cortical Spike Synchrony as a Measure of Input Familiarity')
To that end, we develop and use analytical and canonical computational models of perception and motor control. This research fuses machine learning and complex systems science. The aim is to identify and understand principles of emergent network dynamics and computation in the presence of plasticity, neuronal delays, and noise.
While in the traditional view noise, and especially delays, are often ignored or minimized to reduce their impact, we are interested in principles of neuronal information processing that can take advantage of these properties. To study such canonical computations, we use reservoir computing, which allows a clean separation of computational aspects such as supervised versus unsupervised learning, nonlinear computation, and memory. Recently we started to extend this work by using real-time reservoir computing to control the flight of AR drones, demonstrating the real-world applicability of our model-based research. As an outreach to engineering, we cooperate with engineers to inspire the development of neuro-inspired hardware. In this respect, we co-developed over the last years delay-coupled reservoir computing, which emulates the computations of a recurrent network using just a single delay-coupled nonlinear neuronal component, and which has been built as photonic and electronic delay-coupled reservoirs.
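The idea behind delay-coupled reservoir computing can be illustrated with a minimal sketch: a single nonlinear node whose delay line is divided into "virtual nodes", driven by a time-multiplexed, masked input, with a linear readout trained by ridge regression. All parameter values, the inertia-style coupling between neighbouring virtual nodes, and the memory task below are illustrative assumptions, not the exact setup of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumed, not taken from the papers) ---
N = 50        # number of virtual nodes along the delay line
eta = 0.5     # delayed-feedback strength
gamma = 0.2   # input scaling
alpha = 0.6   # inertia coupling between neighbouring virtual nodes
mask = rng.uniform(-1, 1, N)   # fixed random input mask (time multiplexing)

def step(u, old):
    """One pass of the input sample u through the delay line.

    Each virtual node combines its own state one delay earlier
    (delayed feedback through the tanh nonlinearity) with the state of
    the preceding virtual node (inertia of the single physical node).
    """
    new = np.empty_like(old)
    prev = old[-1]            # last node of the previous delay period
    for i in range(N):
        prev = alpha * prev + (1 - alpha) * np.tanh(eta * old[i] + gamma * mask[i] * u)
        new[i] = prev
    return new

# drive the reservoir with a scalar signal and collect virtual-node states
T = 300
inputs = np.sin(0.2 * np.arange(T))
state = np.zeros(N)
states = np.empty((T, N))
for t, u in enumerate(inputs):
    state = step(u, state)
    states[t] = state

# supervised linear readout (ridge regression) on a 5-step memory task:
# reconstruct the input from 5 samples ago out of the current state
delay, washout = 5, 50
X = states[washout:]
y = inputs[washout - delay:T - delay]
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
mse = np.mean((X @ W - y) ** 2)
```

Only the readout weights `W` are trained; the delay line itself stays fixed, which is what makes a single physical nonlinearity (photonic or electronic) sufficient as the "network".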
Synchrony reflects the match between input patterns and network structure. (a) Subpopulations with different average path lengths on the given network are driven by random spike inputs (black dots) while the synchrony of the activated cells is measured. (b) Synchrony measured in growing windows. Middle lines denote the median, colored bands the two middle and two outer quartiles, of 50 trials where synchrony was measured in intervals from t = 0 to the corresponding time on the lower axis. (c) Speed of synchrony divergence between stimuli A and C. The histogram shows in how many trials these conditions could be successfully classified by synchrony after observing a certain average number of spikes. Here, the conditions could be discerned by the synchrony of the first spike wave in most trials. (cf: Neural Computation, Volume 29, Issue 9, 'Cortical Spike Synchrony as a Measure of Input Familiarity')
weakly connected network (click on image to see the video)
strongly connected network (click on image to see the video)
Computations based on cortical spiking activity
Depending on the behavioral context, specific cortical neurons fire in synchrony. In the sensory cortex, this depends on qualities of the sensory input: sounds evoke simultaneous activity in auditory cortex cells with matching receptive fields, distant cells in somatosensory cortex synchronize when particular skin regions are stimulated, and synchrony in the primary visual cortex (V1) varies with geometrical stimulus features such as spatial continuity or similarity of orientation. This can be observed as early as 30 ms after a stimulus change, that is, within only a few spikes.
We argue for a general function of such spike synchrony: a measure of the prior probability of incoming stimuli, implemented by long-range, horizontal, intracortical connections. We show that networks of this kind—pulse-coupled excitatory spiking networks in a noisy environment—can provide a sufficient substrate for stimulus-dependent spike synchrony. This allows for a quick (few-spike) estimate of the match between inputs and the input history as encoded in the network structure. Given the ubiquity of small, strongly excitatory subnetworks in cortex, we thus propose that many experimental observations of spike synchrony can be viewed as signs of input patterns that resemble long-term experience—that is, of patterns with high prior probability.
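To make the notion of zero-time-lag synchrony concrete, here is a minimal sketch of one way to quantify it: bin the spike trains with a narrow window and average the pairwise correlations of the binned trains. This is an illustrative index, not the exact measure used in the paper, and all numbers (bin width, jitter, spike counts) are assumptions.

```python
import numpy as np

def zero_lag_synchrony(spike_trains, bin_ms=2.0, duration_ms=1000.0):
    """Average pairwise zero-lag correlation of binned spike trains.

    `spike_trains` is a list of arrays of spike times in ms. Each train
    is binned with width `bin_ms`, mean-centered, and normalized; the
    mean off-diagonal entry of the correlation matrix is returned.
    """
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    binned = np.array([np.histogram(st, bins=edges)[0] for st in spike_trains], float)
    binned -= binned.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(binned, axis=1)
    norms[norms == 0] = 1.0          # guard against empty trains
    binned /= norms[:, None]
    C = binned @ binned.T            # pairwise zero-lag correlations
    n = len(spike_trains)
    return (C.sum() - np.trace(C)) / (n * (n - 1))

rng = np.random.default_rng(1)
# synchronous trains: a shared set of spike times with sub-ms jitter
base = np.sort(rng.uniform(0, 1000, 50))
sync = [base + rng.normal(0, 0.5, 50) for _ in range(5)]
# asynchronous trains: independent random spike times at the same rate
async_ = [np.sort(rng.uniform(0, 1000, 50)) for _ in range(5)]

print(zero_lag_synchrony(sync) > zero_lag_synchrony(async_))  # prints True
```

Because the index is computed from spike counts in growing windows, a rough estimate is already available after the first spike wave and sharpens as more spikes are observed, which mirrors the fast classification reported in the figure above.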
Below you can find the activity of three types of networks illustrating this behaviour:
In more abstract terms, we have proposed that various cortical networks have access to a synchrony-encoded estimate of the prior probability of observing the current input pattern. A first estimate is available directly after the onset of the pattern (since synchrony in the first few spike waves is often already informative), after which precision continually improves. Hence, such a signal could be used early after input onset in a feedforward fashion—for example, to guide attention toward stimuli composed of plausible parts. More generally, estimates of prior probabilities are a prerequisite in Bayesian accounts of perception and learning, but it is unclear how such probabilities are represented neurally. We suggest that a spike-based encoding via the presented mechanism allows rapid generation and transmission of such signals.
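The role such a signal could play in a Bayesian account can be sketched in a few lines: a synchrony index, read out as a prior over candidate stimulus interpretations, is combined with feedforward likelihoods via Bayes' rule. All numbers here are hypothetical and serve only to show the combination step, not any quantity from the paper.

```python
import numpy as np

# Hypothetical values: two candidate interpretations of the current input.
likelihood = np.array([0.6, 0.4])   # feedforward evidence (assumed)
synchrony = np.array([0.9, 0.2])    # synchrony index per candidate (assumed)

# Read the synchrony index out as a prior: more synchrony = more familiar.
prior = synchrony / synchrony.sum()

# Bayes' rule: posterior proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()
print(posterior.round(3))
```

The point of the sketch is that the familiar interpretation dominates the posterior even though the feedforward likelihoods are close, which is exactly the kind of early bias toward plausible stimuli described above.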
See the original paper: 'Cortical Spike Synchrony as a Measure of Input Familiarity'
C. Korndörfer, E. Ullner, J. García-Ojalvo, G. Pipa, 'Cortical Spike Synchrony as a Measure of Input Familiarity', Neural Computation, Volume 29, Issue 9
A. Lazar, G. Pipa, J. Triesch, 'SORN: a self-organizing recurrent neural network', Frontiers in Computational Neuroscience 3
H. Toutounji, G. Pipa, 'Spatiotemporal computations of an excitable and plastic brain: neuronal plasticity leads to noise-robust and noise-constructive computations', PLoS Computational Biology 10 (3), e1003512
W. Aswolinskiy, G. Pipa, 'RM-SORN: a reward-modulated self-organizing recurrent neural network', Frontiers in Computational Neuroscience 9
A. D. Kovac, M. Koall, G. Pipa, H. Toutounji, 'Persistent Memory in Single Node Delay-Coupled Reservoir Computing', PLoS ONE 11 (10), e0165170
H. Toutounji, J. Schumacher, G. Pipa, 'Homeostatic plasticity for single node delay-coupled reservoir computing', Neural Computation
P. Nieters, J. Leugering, G. Pipa, 'Neuromorphic computation in multi-delay coupled models', IBM Journal of Research and Development 61 (2/3), 8:7-8:9