18-19 July 2023, Leipzig
Aims and topics

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches driven by problems arising in neuroscience.

A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss the challenges posed by neuroscience, to exchange ideas, and to present their latest work. The workshop is targeted at computational and systems neuroscientists with an interest in methods of information theory, as well as at information/communication theorists with an interest in neuroscience. For the programs of past workshops, see the Previous workshops section.
The workshop will be held as part of the wider CNS*2023 meeting in Leipzig, Germany. Please see the CNS*2023 website to register for the workshops (registration is required to attend).
The workshop will take place on the 18th and 19th of July. The exact times and sessions will be announced later.
We would like to thank the Entropy journal for sponsoring our Best Presentation Award for ECRs, which we have awarded to:
The following are invited and contributing speakers for the workshop.
Jürgen Jost - "Encoding and decoding with Poisson neurons"
After reviewing the encoding and decoding of information in Poisson neurons, I shall report on recent work with Masud Ehsani on the conditions under which a spiking model neuron responds to Poisson input with Poisson output.
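As a purely illustrative aside (not the speaker's model), the basic encoding/decoding setting can be sketched in a few lines of Python: a stimulus is encoded in a Poisson spike count via an assumed linear rate code and decoded by maximum likelihood. The gain and window length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoding: stimulus intensity s -> firing rate -> Poisson spike count in a window of length T.
def encode(s, gain=20.0, T=1.0):
    rate = gain * s                      # assumed linear rate code (illustrative only)
    return rng.poisson(rate * T)

# Decoding: maximum-likelihood estimate of s from the observed count.
def decode(count, gain=20.0, T=1.0):
    return count / (gain * T)            # ML estimator for this Poisson rate code

stimuli = rng.uniform(0.2, 1.0, size=1000)
counts = np.array([encode(s) for s in stimuli])
estimates = np.array([decode(c) for c in counts])
print("mean squared decoding error:", np.mean((stimuli - estimates) ** 2))
```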
Sarah Marzen - "How do neurons, humans, and artificial neural networks predict?"[Slides]
Sensory prediction is thought to be vital to organisms, but few studies have tested how efficiently organisms, and parts of organisms, predict their sensory input in an information-theoretic sense. In this talk, we report results on how well cultured neurons ("brain in a dish") and humans predict artificial stimuli. We find that both are efficient predictors of their artificial input. That leads to the question of why, and to answer it, we study artificial neural networks, finding that LSTMs show similarly efficient prediction but do not model human learning well. Instead, it appears that an existing model of cultured neurons and a model of humans as order-R Markov modelers explain their performance on these prediction tasks.
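As a hedged illustration of the "order-R Markov modeler" idea mentioned above (a generic sketch, not the speaker's analysis), the following Python snippet predicts the next symbol of a binary sequence from the empirical statistics of the previous R symbols; the sequence and the choice R = 3 are made up for the example.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Toy binary stimulus with first-order temporal structure (illustrative only).
seq = np.zeros(20000, dtype=int)
for t in range(1, len(seq)):
    seq[t] = seq[t - 1] if rng.random() < 0.7 else 1 - seq[t - 1]

R = 3                                   # Markov order (a free parameter)
train, test = seq[:15000], seq[15000:]

# Count next-symbol frequencies conditioned on the last R symbols.
counts = defaultdict(lambda: np.zeros(2))
for t in range(R, len(train)):
    counts[tuple(train[t - R:t])][train[t]] += 1

# Predict the most frequent continuation; fall back to the marginal for unseen contexts.
marginal = np.bincount(train, minlength=2).astype(float)
correct = 0
for t in range(R, len(test)):
    ctx = tuple(test[t - R:t])
    probs = counts[ctx] if ctx in counts else marginal
    correct += int(np.argmax(probs) == test[t])
print("order-R Markov prediction accuracy:", correct / (len(test) - R))
```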
Demian Battaglia - "Is the chronnectome a higher-order connectome?"
The investigation of dynamic Functional Connectivity of the resting state in control subjects and patients has attracted considerable interest, has given rise to the new field of "chronnectomics", and bears the promise of yielding superior biomarkers. Indeed, the variability of functional networks may be perturbed by pathological progression even before the networks are damaged enough for alterations to be visible at the level of their time averages. More recently, another attempt to go beyond the investigation of "classic" functional networks has been the rise of a multiplicity of methods for investigating higher-order interactions between brain regions and neuronal populations, which also bring novel potential biomarkers.
These two fields, dynamic connectivity and higher-order connectivity, are most of the time seen as separate. However, they are tightly linked: functional pairwise links do not fluctuate independently but in a coordinated manner, so that their dynamic fluctuations can be seen as constrained by an over-arching static skeleton of third- and fourth-order interactions ("trimers" and "tetramers"). Here, we review research from the lab showing evidence of the biomarking potential of dynamic connectivity measures, describing a baseline organization between order and disorder that is disrupted toward greater randomness in conditions such as Alzheimer's disease. We also show that the higher-order interactions shaping dynamic resting-state connectivity are not the trivial byproduct of underlying motifs of second-order interactions, but are "genuine trimers" and "tetramers", indicative of multi-regional synergies, which are disrupted by progressing pathologies earlier than pairwise redundancies.
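For readers unfamiliar with the basic quantities involved, here is a minimal, hedged sketch (surrogate noise data, not the lab's pipeline): sliding-window dynamic functional connectivity is computed as a sequence of windowed correlation matrices, and the correlation between the time courses of two links sharing a region is a crude proxy for the trimer-like coordination discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Surrogate "BOLD" time series for 4 regions (pure noise, illustrative only).
T, N, win, step = 1000, 4, 60, 10
X = rng.standard_normal((T, N))

# Sliding-window dynamic functional connectivity: one correlation matrix per window.
starts = range(0, T - win, step)
dFC = np.array([np.corrcoef(X[s:s + win].T) for s in starts])

# Time courses of two links sharing region 0; their mutual correlation is a crude
# proxy for the coordinated link fluctuations captured by "trimers".
link_01 = dFC[:, 0, 1]
link_02 = dFC[:, 0, 2]
print("correlation between fluctuating links (0,1) and (0,2):",
      np.corrcoef(link_01, link_02)[0, 1])
```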
Marilyn Gatica - "Disruptions in the synergistic and redundant brain interdependencies after focused ultrasound stimulation" [Slides]
Brain interdependencies are usually studied using Pearson's correlation, neglecting statistical causality, especially if the variables carry redundant and synergistic information about their future. Here, we quantify the redundant and synergistic interactions using Integrated Information Decomposition [1, 2] in functional MRI data in macaques before and after focused ultrasound stimulation (FUS), targeting different regions. Our preliminary results show disruptions in the synergy and redundancy after FUS while presenting heterogeneities across brain areas and macaques.
References
[1] Mediano, P.A.M., Rosas, F.E., Carhart-Harris, R.L., Seth, A.K., Barrett, A.B. Towards an extended taxonomy of information dynamics via integrated information decomposition. Preprint at https://arxiv.org/abs/2109.13186 (2021).
[2] Luppi, A.I., Mediano, P.A.M., Rosas, F.E. et al. A synergistic core for human brain evolution and cognition. Nat Neurosci 25, 771–782 (2022). https://doi.org/10.1038/s41593-022-01070-0
Pedro Mediano - "The burning question of consciousness" [Slides]
Physicists in the 18th century faced two challenging problems: inventing temperature, and inventing thermometers -- at the same time. Contemporary neuroscience faces a similar challenge in the field of consciousness science: we need both a scientifically and clinically relevant definition of consciousness, as well as a way to measure it. In this talk I will introduce novel tools, based on information theory, to describe and measure consciousness from neural dynamics. First, we will cover applications of entropy measures to brain activity, and argue that they provide a robust and generalisable measure of conscious level. Then, we will briefly describe the framework of information decomposition and highlight possible future directions to quantify various dimensions of consciousness.
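As a concrete example of an entropy measure of the kind referred to above, the following hedged Python sketch computes a Lempel-Ziv-style complexity of a binarized surrogate signal, normalized by a shuffled copy; it uses a simple LZ78-style parsing as an illustrative proxy, not necessarily the exact estimator used in the talk.

```python
import numpy as np

def lz_phrase_count(bits):
    """Number of distinct phrases in a simple LZ78-style incremental parsing
    (used here as an illustrative proxy for Lempel-Ziv complexity)."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

rng = np.random.default_rng(3)
signal = rng.standard_normal(5000)                                      # surrogate neural signal
bits = "".join('1' if x > np.median(signal) else '0' for x in signal)   # binarize at the median

raw = lz_phrase_count(bits)
shuffled = lz_phrase_count("".join(rng.permutation(list(bits))))        # normalize by a shuffled copy
print("normalized LZ complexity:", raw / shuffled)
```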
Nadine Spychala - "Making sense of multi-scale relationships in theory and application" [Slides]
Multi-scale relationships in emergent phenomena and complex systems are studied across various disciplines. These studies address the unresolved question of how macro- and micro-scales relate to each other: are they independent, reducible, or is their relationship more complex? Historically, the lack of formalism hindered empirical investigation, but recent research has introduced quantitative measures based on information theory to quantify emergence in systems whose components evolve over time.
In this talk, I'll review approaches and present a conceptual framework for studying emergence that is compatible with, and productive for, empirical inquiries. Measuring emergence is new terrain, therefore requiring extensive testing and cumulative evidence. I'll discuss initial progress towards this by presenting a series of applications of measures of emergence – alongside more established measures of complexity – to different system models that bear relevance to neuroscientific questions: multivariate autoregressive networks, Kuramoto oscillators and variational inference.
In variational inference, we find increased information integration – a measure of complexity – for moderately to strongly correlated target distributions, but surprisingly, decreased double-redundancy (a key quantity for measures based on Partial Information Decomposition) for highly correlated ones. In autoregressive networks, two measures of emergence – Dynamical Independence and Causal Emergence – show diverse relations between macro- and micro-scales, especially with larger systems and a more exhaustive variation of coupling strengths. We also find substantial similarities to simpler measures such as total correlation, raising the question of how much we gain from using measures that operationalize more sophisticated concepts such as complexity or emergence. Regarding Kuramoto oscillators, I will talk about different candidate emergent macro variables in those systems, and about the broader challenges of dealing with non-linear and non-stationary systems in the context of information-theoretic measures.
All results taken together, a complex pattern of multi-scale relationships emerges which evades simple conclusions and questions the applicability of measures to empirical data. Identifying suitable macro-scales in simulation models remains a challenge, reflecting the broader issue of engineering emergence. Testing to what extent multi-scale measures, particularly newly developed measures of emergence, yield reasonable results is a very challenging endeavour requiring iterative operationalisation and testing on both "paradigmatic" and more challenging cases. This, more than anything else, calls for sophisticated benchmarking procedures.
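For the Kuramoto case mentioned above, a natural candidate macro variable is the synchrony order parameter; the following hedged sketch (arbitrary coupling, frequencies, and integration settings, not taken from the talk) simulates the mean-field Kuramoto model and records that macro variable over time.

```python
import numpy as np

rng = np.random.default_rng(4)

# Kuramoto oscillators: N phases coupled through the mean field.
N, K, dt, steps = 50, 1.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

order_param = np.empty(steps)
for t in range(steps):
    z = np.mean(np.exp(1j * theta))      # complex order parameter
    order_param[t] = np.abs(z)           # candidate macro variable R(t)
    # mean-field form of the Kuramoto update
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

print("time-averaged synchrony R:", order_param[1000:].mean())
```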
Andreas C. Schneider - "Infomorphic Networks: A Locally Learning Approach to Neural Computation based on Partial Information Decomposition" [Slides]
Neural networks rely on coordination among individual neurons to perform complex tasks, but in the brain they must operate within the constraints of locality for both computation and learning. Our research uses an information-theoretic approach to better understand how locality affects the structure and operation of neural networks. We employ Partial Information Decomposition (PID) to quantify the unique, redundant, and synergistic information contributions to a neuron's output from multiple groups of inputs. Using this conceptualization, we derive a very general, parametric local learning rule that allows for the construction of networks of locally learning neurons, which can perform supervised, unsupervised, and associative memory learning tasks. We have recently scaled our approach, demonstrating its potential as an alternative to deep neural networks in machine learning. Our framework provides a powerful tool for investigating the information-theoretic principles underlying the operation of living neural networks and may facilitate the development of locally learning artificial neural networks that function more like the brain.
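To make the PID terminology concrete, here is a hedged toy calculation (not the learning rule from the talk): the output of a binary "neuron" computing XOR of two input groups is decomposed into redundant, unique, and synergistic contributions, using the minimum-mutual-information redundancy as one possible, simple choice of redundancy measure.

```python
import numpy as np
from itertools import product

# Toy neuron: output Y = XOR of two binary input groups X1, X2 (a maximally synergistic case).
samples = np.array([(x1, x2, x1 ^ x2)
                    for x1, x2 in product([0, 1], repeat=2)
                    for _ in range(250)])
X1, X2, Y = samples[:, 0], samples[:, 1], samples[:, 2]

def mutual_info(a, b):
    """Plug-in mutual information (bits) between two discrete variables."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

X12 = X1 * 2 + X2                              # the two input groups taken jointly
I1, I2, I12 = mutual_info(X1, Y), mutual_info(X2, Y), mutual_info(X12, Y)

redundancy = min(I1, I2)                       # MMI redundancy (one possible definition)
unique1, unique2 = I1 - redundancy, I2 - redundancy
synergy = I12 - unique1 - unique2 - redundancy
print(f"red={redundancy:.2f}  unq1={unique1:.2f}  unq2={unique2:.2f}  syn={synergy:.2f}")
```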
Maria Pope - "Multivariate information theory reveals synergistic subsystems of the cerebral cortex" [Slides]
One of the most well-established tools for modeling the brain is the functional connectivity network, which is constructed from pairs of interacting brain regions. While powerful, the network model is limited by the restriction that only pairwise dependencies are considered, so potentially higher-order structures are missed. In this talk, I will explore how multivariate information theory reveals higher-order dependencies in the human brain. I will present results from applying the recently introduced O-information to resting state brain data, showing that synergistic subsystems are widespread in the human brain, and that highly synergistic subsystems typically sit between canonical functional networks. I will show similar results using the partial entropy decomposition that confirm this finding. Additionally, I will show that maximally synergistic subsystems (found by optimizing for synergy-dominance) typically comprise ≈10 brain regions recruited from multiple canonical brain systems. Finally, I will touch on some novel results applying the O-information to movie-watching data and the localized version of the O-information to resting state data. Though ubiquitous, highly synergistic subsystems are invisible when considering pairwise functional connectivity, and our results suggest that highly synergistic systems do not fall along canonical system lines. As a whole, my talk will argue that higher-order interactions in the brain represent an under-explored space that, made accessible by the tools of multivariate information theory, may offer novel scientific insights.
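For orientation, the O-information has a simple closed form for Gaussian data, which makes a minimal sketch possible (a generic Gaussian estimator on toy data, not the speaker's analysis pipeline): positive values indicate redundancy-dominated and negative values synergy-dominated interactions.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def o_information(X):
    """Gaussian estimator of the O-information of the columns of X:
    (n - 2) * H(X) + sum_i [H(X_i) - H(X_without_i)]."""
    n = X.shape[1]
    cov = np.cov(X, rowvar=False)
    total = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        total += gaussian_entropy(cov[i, i]) - gaussian_entropy(cov[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(6)
# Redundancy-dominated toy system: all channels are noisy copies of one latent signal.
latent = rng.standard_normal((2000, 1))
X = latent + 0.5 * rng.standard_normal((2000, 5))
print("O-information (redundant toy system):", o_information(X))
```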
Joel Frohlich - "Sex differences in prenatal development of neural complexity in the human brain" [Slides]
The human brain undergoes an increase in anatomical and functional complexity during gestation, progressing from a system with limited sensory processing and network integration at 25 - 30 weeks gestation to one capable of multisensory integration and predictive processing during the perinatal period and early infancy. An intuitive hypothesis to draw from this progression is that the complexity of brain dynamics found in auditory evoked neural signals should increase with maturation during late gestation and continue to increase after birth. We tested this hypothesis using magnetoencephalography (MEG) data from human fetuses and newborns, recorded during an auditory local-global paradigm that generates prediction errors. Several different entropy measures were applied to MEG signals: 1) Lempel-Ziv complexity, 2) context tree weighting, 3) sample entropy, 4) multiscale entropy, and 5) permutation entropy with a 32 ms lag (corresponding to 4 - 10 Hz activity) and a 64 ms lag (corresponding to 2 - 5 Hz activity). Surprisingly, we found the opposite of what we had hypothesized: neural signal complexity decreased with gestational age in late fetal development according to all entropy measures, and this decline continued after birth. More surprisingly, we found that complexity decreased significantly faster in male fetuses for most entropy measures, and three entropy measures even showed a significant main effect of sex, with lower entropy for males. Our results are consistent with earlier work showing that visual evoked responses are more easily detected in male fetuses (Eswaran et al. 2021), as the greater complexity of MEG signals recorded from female fetuses may mask evoked responses.
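As an illustration of one of the listed measures, here is a hedged sketch of permutation entropy with an embedding lag, applied to a surrogate signal. The sampling rate, embedding order, and signal are assumptions made for the example; a lag of 8 samples at 250 Hz corresponds to the 32 ms lag mentioned above.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, lag=8):
    """Normalized permutation entropy of a 1-D signal with a given embedding lag."""
    pattern_counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * lag
    for t in range(n):
        window = x[t:t + order * lag:lag]                 # 'order' samples spaced by 'lag'
        pattern_counts[tuple(np.argsort(window))] += 1
    p = np.array([c for c in pattern_counts.values() if c > 0], dtype=float) / n
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

rng = np.random.default_rng(7)
fs = 250                                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)  # surrogate channel

print("permutation entropy (32 ms lag):", permutation_entropy(signal, order=3, lag=8))
```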
Benjamin Lindner - "Effects of network dynamics and network noise on neural information transmission" [Slides]
We consider random networks of integrate-and-fire neurons in different regimes of operation (asynchronous irregular dynamics, bistable firing dynamics) and discuss how much information such networks transmit about a common time-dependent stimulus. It is shown that for networks in the asynchronous state a finite synaptic coupling optimizes the information transmission (recurrence-mediated superthreshold stochastic resonance). Different effects are found for a network with bistable firing dynamics, in which information transmission about a slow signal depends very strongly on the balance between the two firing states; we discuss the origin of this dependence.
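A standard way to quantify transmission of a time-dependent stimulus is the coherence-based lower bound on the mutual information rate. The hedged sketch below applies it to a toy linear encoder (an illustrative stand-in for the network output, not the integrate-and-fire simulations from the talk); the bound is R >= -∫ log2(1 - C(f)) df, with C(f) the stimulus-response coherence.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(8)
fs, T = 1000.0, 200.0                           # sampling rate (Hz) and duration (s)
n = int(fs * T)

stimulus = rng.standard_normal(n)               # broadband time-dependent stimulus
# Toy "population response": scaled stimulus plus intrinsic noise (illustrative stand-in).
response = 0.8 * stimulus + 1.0 * rng.standard_normal(n)

f, C = coherence(stimulus, response, fs=fs, nperseg=1024)
# Lower bound on the mutual information rate (bits per second).
df = f[1] - f[0]
info_rate = -np.sum(np.log2(1.0 - C)) * df
print(f"coherence-based lower bound on the information rate: {info_rate:.1f} bits/s")
```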
Andres Canales-Johnson - "Distributed representations of prediction error signals across the cortical hierarchy are synergistic"
An important question concerning inter-areal communication in the cortex is whether these interactions are redundant or synergistic: brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires strong, long-distance feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
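For reference, the co-information of a stimulus S and two signals X and Y can be written as I(X;S) + I(Y;S) - I(X,Y;S), with positive values indicating redundancy and negative values synergy. The hedged sketch below estimates it under a Gaussian assumption for toy data in which both signals carry the same stimulus information (so the result is redundancy-dominated); it is a generic estimator, not the ECoG analysis from the talk.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance matrix cov."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def gaussian_mi(a, b):
    """Mutual information (nats) between Gaussian variables a and b (columns = dimensions)."""
    a = a.reshape(len(a), -1)
    b = b.reshape(len(b), -1)
    c = np.cov(np.hstack([a, b]), rowvar=False)
    na = a.shape[1]
    return gaussian_entropy(c[:na, :na]) + gaussian_entropy(c[na:, na:]) - gaussian_entropy(c)

def co_information(x, y, s):
    """I(X;S) + I(Y;S) - I(X,Y;S): positive = redundancy, negative = synergy."""
    return gaussian_mi(x, s) + gaussian_mi(y, s) - gaussian_mi(np.column_stack([x, y]), s)

rng = np.random.default_rng(9)
s = rng.standard_normal(5000)               # toy "prediction error" signal
x = s + 0.5 * rng.standard_normal(5000)     # signal recorded in one area
y = s + 0.5 * rng.standard_normal(5000)     # the same signal seen by a second area
print("co-information (redundant toy case, nats):", co_information(x, y, s))
```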
Patricio Orio - "The emergence of high-order interdependences and its interaction with noise: from elementary automata to neuronal oscillators" [Slides]
High-order interdependencies, such as synergy and redundancy, play important roles in a range of naturally occurring information-processing systems, including the human brain. How they can emerge from local elementary rules or pairwise interactions, and how they can withstand stochastic perturbations, are still outstanding questions important for the understanding of complex systems. Using the Wilson-Cowan model for neuronal oscillations, we found that the presence of synergistic interdependencies is related to the existence of a high-dimensional attractor. However, quasi-periodic attractors generate a spurious synergy that is preserved when time-shifting the data series. Nevertheless, this synergy becomes significant when oscillators are simulated with noise. On the other hand, chaotic attractors with non-integer dimension display statistically significant synergistic interdependencies, with or without noise. Using elementary cellular automata as a testbed, we found that dynamical noise can enhance the statistical regularities between agents and, intriguingly, even alter the prevailing character (synergistic or redundant) of their interdependencies. We also found that the character of the interdependencies is related to the high-order structure of the local rules, which also affects the system's susceptibility to noise. The rules that become synergistic in the presence of noise also show a predominance of synergy at lower orders of interaction and shift to redundancy at higher orders. Our results give important insights into the conditions for the emergence of synergistic interdependencies and the possibility of their enhancement by noise. This contributes to understanding how neuronal circuits in the brain can perform complex computations while being sustained by dynamics with stochastic contributions.
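As a small, hedged illustration of the cellular-automaton part of this setting (rule number, lattice size, and noise level are arbitrary choices; this is not the analysis from the talk), the sketch below runs an elementary cellular automaton in which each cell's update is flipped with a small probability, i.e. dynamical noise.

```python
import numpy as np

rng = np.random.default_rng(10)

def eca_step(state, rule, flip_prob=0.0):
    """One update of an elementary cellular automaton with periodic boundaries;
    each cell's output is flipped with probability flip_prob (dynamical noise)."""
    rule_table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = 4 * left + 2 * state + right            # each neighborhood as a number 0..7
    new_state = rule_table[neighborhood]
    flips = rng.random(state.size) < flip_prob
    return np.where(flips, 1 - new_state, new_state).astype(np.uint8)

state = rng.integers(0, 2, 200, dtype=np.uint8)
noisy = state.copy()
for _ in range(500):
    state = eca_step(state, rule=110, flip_prob=0.0)        # deterministic rule 110
    noisy = eca_step(noisy, rule=110, flip_prob=0.02)       # same rule with dynamical noise
print("final density without noise:", state.mean(), " with noise:", noisy.mean())
```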
Rainer Engelken - "A time-resolved theory of information encoding in recurrent neural networks"
Information encoding in neural circuits depends on how well time-varying stimuli are encoded by neural populations. Slow neuronal timescales, noise, and network chaos can compromise reliable and rapid population response to external stimuli. A dynamic balance of externally incoming currents by strong recurrent inhibition was previously proposed as a mechanism to accurately and robustly encode a time-varying stimulus in balanced networks of binary neurons, but a theory for recurrent rate networks was missing.
Here, we develop a non-stationary dynamic mean-field theory that transparently explains how a tight balance of excitatory currents by recurrent inhibition improves information encoding. We demonstrate that the mutual information rate of a time-varying input increases linearly with the tightness of balance, both in the presence of additive noise and with recurrently generated chaotic network fluctuations. We corroborated our findings in numerical experiments and demonstrated that recurrent networks with positive firing rates trained to transmit a time-varying stimulus generically use recurrent inhibition to increase the information rate. We also found that networks trained to transmit multiple independent time-varying signals spontaneously form multiple local inhibitory clusters, one for each input channel. Lastly, we confirm our theoretical findings in recurrent spiking networks.
In conclusion, our findings suggest that the combination of feedforward excitatory input and local recurrent inhibition--as observed in many biological circuits--is a generic circuit motif for encoding and transmitting time-varying information in recurrent neural circuits.
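As a loose intuition pump (a simplified toy rate model, not the dynamic mean-field theory developed in this work), the sketch below shows why strong recurrent inhibition helps a population track a fast time-varying input: the inhibitory feedback shrinks the effective time constant, so the population activity follows the stimulus more faithfully (here measured by correlation) as the inhibitory gain grows.

```python
import numpy as np

rng = np.random.default_rng(11)

dt, T, tau = 0.001, 5.0, 0.05                  # time step, duration, time constant (seconds)
steps = int(T / dt)
t = np.arange(steps) * dt
stimulus = np.sin(2 * np.pi * 4 * t)           # time-varying input (4 Hz, illustrative)

def stimulus_tracking(J):
    """Correlation between the stimulus and a population rate with inhibitory feedback J."""
    r = np.zeros(steps)
    for k in range(1, steps):
        drive = stimulus[k - 1] - J * r[k - 1] + 0.2 * rng.standard_normal()
        r[k] = r[k - 1] + (dt / tau) * (-r[k - 1] + drive)
    return np.corrcoef(r, stimulus)[0, 1]

for J in [0.0, 5.0, 50.0]:
    print(f"inhibitory gain J={J:>4}: stimulus-response correlation = {stimulus_tracking(J):.2f}")
```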
This workshop has been run at CNS for over a decade now -- links to the websites for the previous workshops in this series are below:
Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC-BY-3.0