
CNS*2021 Workshop on Methods of Information Theory in Computational Neuroscience

Information in the brain. Modified from an original credited to dow_at_uoregon.edu (distributed without restrictions)

06-07 July, 2021

Online

CNS*2021

Aims and topics

Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods, there is a need to develop novel tools and approaches driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.
The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
For the program of the past IT workshops see the Previous workshops section.

Registration and Access

The workshop will be held as a part of the wider CNS*2021 online meeting. Please see the CNS*2021 website for registration to the workshops (this is required to attend).

Time

The workshop will take place on the 6th and 7th of July; the exact times and sessions are listed in the Program section below.

Best presentation award sponsor: Entropy

Awards

We would like to thank the Entropy journal for sponsoring our Best Presentation Award for early career researchers (ECRs), which we have awarded jointly to Lucas Rudelt and Fleur Zeldenrust (see the Photos section below).

Organising committee

Speakers

The following are invited and contributing speakers for the workshop.

Call for Contributed talks

Now Closed!

Program

The program for each day is shown below.

Tuesday, July 06 (all times CEST, UTC+2)

Chair: Michael Wibral
15:00-15:45  Tatyana Sharpee, The Salk Institute for Biological Studies - "Towards lossless transformation of neural responses"
15:45-16:30  Fleur Zeldenrust, Radboud University, Nijmegen - "Estimating information transfer in vitro: results from barrel cortex"
16:30-17:00  "Coffee" break

Chair: Justin Dauwels
17:00-17:30  Shuai Shao, Max Planck Institute for Brain Research, Frankfurt - "Efficient neuronal coding of static stimuli with different noise types and different objective functions"
17:30-18:00  Madhavun Candadai, Indiana University, Bloomington - "Sources of Prediction Information in Dynamical Neural Networks"
18:00-18:30  Thomas Varley, Indiana University, Bloomington - "Neural Information Dynamic and Topological Correlates of Complex Behaviors in Macaques"

Wednesday, July 07 (all times CEST, UTC+2)

Chair: Joseph T. Lizier
09:30-10:15  Aaron Gutknecht, University of Goettingen - "Bits and Pieces: Understanding Information Decomposition from Part-Whole Relationships"
10:15-11:00  Lucas Rudelt, Max Planck Institute for Dynamics and Self-organization, Goettingen - "Embedding optimization reveals long-lasting history dependence in neural spiking activity"
11:00-11:30  "Coffee" break

Chair: Abdullah Makkeh
11:30-12:00  Daniel Braun, University of Ulm - "Information-theoretic bounded rationality models for perception-action systems"
12:00-12:30  Jake Witter, University of Bristol - "A Nearest-Neighbours Estimator for Conditional Mutual Information"
12:30-13:00  Claudius Gros, University of Frankfurt - "Information Theory vs. Stationarity - a Devil's Advocate View"
13:00-14:00  Lunch break

Chair: Abdullah Makkeh
14:00-14:45  Aline Viol, International School for Advanced Studies (SISSA), Trieste - "Psychedelics and functional brain networks: employing concepts from information theory to study altered states of consciousness"
14:45-15:30  Li Zhaoping, Max Planck Institute for Biological Cybernetics, Tuebingen - "Information deletion in the visual system"
15:30-16:00  "Coffee" break

Chair: Michael Wibral
16:00-16:30  Rodrigo Cofre Torres, University of Valparaiso - "High-Order Interdependencies in the Aging Brain"
16:30-17:15  Fernando Rosas, Imperial College London - "Towards an informational architecture of the human brain"
17:15-17:45  Sarah Marzen, University of Southern California - "Time cells as predictors: improvements to prediction algorithms based on their time constants"

Abstracts

Li Zhaoping - "Information deletion in the visual system"
I will discuss the amount of visual input information along the visual pathway, from retinal inputs, to retinal outputs, to visual perception. In humans, about one megabyte per second of visual input information (compressed from roughly 20 MB/second of raw visual input) is transmitted along the optic nerve from the retina. One megabyte is roughly the amount of information in the text of a large book. It has been measured, since the 1950s, that only about 40 bits per second of information is admitted through the attentional bottleneck, due to the limited processing power of the brain (40 bits is roughly the amount of information in two short sentences of text). This bottleneck is manifested behaviorally in inattentional blindness. An important question is where along the visual pathway, from the optic nerve to the perceptual outcome and awareness, the information is lost. I will argue that, to a substantial extent, the bottleneck starts around the output of the primary visual cortex (V1). This is partly suggested by recent evidence supporting the V1 Saliency Hypothesis (V1SH), that the attentional selection of visual inputs, by gaze shifts, is partly guided exogenously by a bottom-up saliency map created by V1. Secondly, it is apparent that much of the information available in V1 is not available in visual perception, or even in V2, the extrastriate cortex immediately downstream from V1 along the visual pathway. For example, the information about the eye of origin of visual inputs is lost by V2, whose neurons are mostly binocular and thus blind to the eye of origin. Recognizing that this bottleneck starts at V1's output suggests a new framework for understanding vision.
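As a rough orientation on the rates quoted above (this conversion to bits is added here and is not part of the abstract):

    \[
    1\ \mathrm{MB/s} \approx 8 \times 10^{6}\ \mathrm{bits/s},
    \qquad
    \frac{8 \times 10^{6}\ \mathrm{bits/s}}{40\ \mathrm{bits/s}} = 2 \times 10^{5},
    \]

so only about one part in 200,000 of the information carried by the optic nerve survives the attentional bottleneck.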
Zhaoping L. Theoretical understanding of the early visual processes by data compression and data selection. Network: Computation in Neural Systems 17(4):301-334 (2006).
Zhaoping L. Understanding Vision: Theory, Models, and Data. Oxford University Press, USA, 2014.
Zhaoping L. A new framework for understanding vision from the perspective of the primary visual cortex. Current Opinion in Neurobiology 58:1-10 (2019).

Fernando Rosas - "Towards an informational architecture of the human brain"
Finding principled methods to illuminate the intricate dynamics that characterise the orchestrated activity of the human brain is an important open challenge in contemporary neuroscience. Information-theoretic methods are particularly well-suited to this endeavour, as they offer a privileged approach towards bridging the dynamical and information-processing aspects of the brain, which in turn results in interpretable signatures that bring new insights about brain function. In this talk we review recent work related to Integrated Information Decomposition (ΦID), a multi-target extension of the Partial Information Decomposition (PID) that is particularly well-suited for time-series analysis. After reviewing the bases of PID and ΦID, we discuss how it can be used to establish a synergy-to-redundancy gradient across brain regions, which displays properties linking the functional, structural, and genetic domains, shedding new light on the neurocognitive architecture of the human brain. Building on these findings, we posit the existence of a “synergistic workspace” that unifies ideas from Global Workspace Theory and Integrated Information Theory in an empirically testable way. We show how a data-driven analysis of multiple datasets reveals that the topographical location of this workspace corresponds mainly to the default mode and frontoparietal networks, which play the roles of gateways and broadcasters, respectively. Furthermore, our results show that events of loss of consciousness impact these networks differently, suggesting a new interpretation of the informational bases of these phenomena. Put together, these results show how information decomposition establishes promising new avenues of inquiry, of broad applicability in neuroimaging analysis.

Tatyana Sharpee - "Towards lossless transformation of neural responses"
TBA

Claudius Gros - "Information Theory vs. Stationarity - a Devil's Advocate View"
The brain operates in a stationary flow equilibrium. Activities, cross-correlations, and objective functions fluctuate on the time scales characterizing information processing (which includes learning), while the respective time-averaged statistics remain stationary. As a basic condition, it is argued that the need for a stationary operating mode takes precedence over information-theoretical principles.
In particular, it is shown that Hebbian learning follows directly from the stationarity principle, without the need to optimize, e.g., information transfer, and that the Fisher information can be used as an objective function for stationarity. Furthermore, the argument is made that stationary neuronal activity favors critical brain states [1]. The enhanced information-processing capabilities of critical states are, in this view, a byproduct of a more fundamental demand, not its evolutionary driver.
[1] C. Gros, A devil's advocate view on 'self-organized' brain criticality. Journal of Physics: Complexity 2, 031001 (2021).

Fleur Zeldenrust - "Estimating information transfer in vitro: results from barrel cortex"
Understanding the relation between (sensory) stimuli and the activity of neurons (i.e. 'the neural code') lies at the heart of understanding the computational properties of the brain. However, quantifying the information between a stimulus and a spike train has proven challenging, due to the limited life span of a cell in an in vitro setup. In 2017, in this workshop, I presented a new in vitro method to measure how much information a single neuron transfers from the input it receives to its output spike train. This method has the advantage over traditional methods of being fast (~10 minutes). The decrease in recording time is obtained by generating the input with an artificial neural network that responds to a randomly appearing and disappearing 'sensory stimulus': the hidden state. The low entropy of this hidden state allows for a fast estimate of the transferred information. Using this method, we have now recorded over 850 trials in almost 300 inhibitory and excitatory neurons of barrel cortex, under different pharmacological modulations (including dopamine, acetylcholine and serotonin receptor agonists). Here, I will first present the initial conclusions from this large database of recordings. Second, I will show how the method can be extended to dynamic clamp, and the effects this has on modelled neurons.
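As background only (this is not the estimator used in the talk, merely a minimal sketch of the underlying idea that a low-entropy binary hidden state makes the mutual information cheap to estimate), a plug-in mutual information between a binary hidden state and a binarized spike response; all names here are illustrative:

    import numpy as np

    def plugin_mi_bits(hidden, response):
        """Plug-in mutual information (in bits) between two binary sequences.

        hidden   : 0/1 hidden-state value in each time bin
        response : 0/1 spike indicator in the same bins
        """
        hidden = np.asarray(hidden, dtype=int)
        response = np.asarray(response, dtype=int)
        mi = 0.0
        for h in (0, 1):
            for r in (0, 1):
                p_joint = np.mean((hidden == h) & (response == r))
                p_h = np.mean(hidden == h)
                p_r = np.mean(response == r)
                if p_joint > 0:
                    mi += p_joint * np.log2(p_joint / (p_h * p_r))
        return mi

    # Toy usage: a response that copies the hidden state but is flipped in 20% of bins
    rng = np.random.default_rng(0)
    hidden = (rng.random(50_000) < 0.5).astype(int)
    flip = rng.random(50_000) < 0.2
    response = np.where(flip, 1 - hidden, hidden)
    print(plugin_mi_bits(hidden, response))  # close to 1 - H(0.2), i.e. about 0.28 bits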

Daniel Braun - "Information-theoretic bounded rationality models for perception-action systems"
Recent advances in movement neuroscience suggest that sensorimotor behavior can be considered as a continuous decision-making process in complex environments in which uncertainty and task variability play a key role. Leading theories of motor control assume that the motor system learns probabilistic models and that motor behavior can be explained as the optimization of payoff or cost criteria under the expectation of these models, crucially ignoring the problem of limited computational resources. Here we consider bounded rationality approaches for decision-making that model limited resources with information constraints. We investigate sensorimotor learning and decision-making experiments and discuss how information-theoretic bounded rationality models provide a general framework for perception-action systems that can explain deviations from Bayes-optimal behavior.
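For orientation, one common way such information constraints are formalized (this is a paraphrase of the standard information-theoretic bounded-rationality setup, not necessarily the exact model presented in the talk): the agent trades expected utility against the information cost of deviating from a default policy,

    \[
    \max_{p(a \mid s)} \; \mathbb{E}_{p}\!\left[U(s,a)\right]
    \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}\!\bigl(p(a \mid s)\,\big\|\,p_{0}(a)\bigr),
    \]

where the inverse temperature β quantifies the available computational resources: large β approaches fully rational utility maximization, while small β keeps behavior close to the default policy p_0.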

Sarah Marzen - "Time cells as predictors: improvements to prediction algorithms based on their time constants"
TBA

Rodrigo Cofre Torres - "High-Order Interdependencies in the Aging Brain"
Brain interdependencies can be studied from either a structural/anatomical perspective ("structural connectivity") or by considering statistical interdependencies ("functional connectivity", FC). While structural connectivity is by definition pairwise (white-matter fibers project from one region to another), FC is not. However, most FC analyses focus only on pairwise statistics and neglect higher-order interdependencies. A promising tool to study high-order interdependencies is the recently proposed O-Information, which can quantify the intrinsic statistical synergy and redundancy in groups of three or more interacting variables. In this talk, I will present an analysis of functional magnetic resonance imaging (fMRI) data obtained at rest from 164 healthy subjects with ages ranging from 10 to 80 years, using the O-Information to investigate how high-order statistical interdependencies are affected by age. We showed that older participants (from 60 to 80 years old) exhibited a higher predominance of redundant dependencies compared with younger participants, an effect that seems to be pervasive as it is evident at all orders of interaction. In addition, while there is strong heterogeneity across brain regions, we found a "redundancy core" constituted by the prefrontal and motor cortices, in which redundancy was evident at all the interaction orders studied.
Our methodology can be used for a broad range of applications in neuroscience.
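For readers unfamiliar with the measure (this is background following the standard formulation of the O-Information, not part of the abstract): for n variables X = (X_1, ..., X_n),

    \[
    \Omega(X) \;=\; (n-2)\,H(X) \;+\; \sum_{j=1}^{n}\bigl[\,H(X_j) - H(X_{-j})\,\bigr],
    \]

where X_{-j} denotes all variables except X_j; positive values indicate redundancy-dominated and negative values synergy-dominated interdependencies.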

Aline Viol - "Psychedelics and functional brain networks: employing concepts from information theory to study altered states of consciousness"
Aiming to understand the functioning of the human brain in non-ordinary states of consciousness, we developed tools to assess brain connectivity using complex networks and information theory approaches. As a result, we observed that some aspects of the functional brain networks of individuals under psychedelic influences exhibit higher entropy and high information parity.

Aaron Gutknecht - "Bits and Pieces: Understanding Information Decomposition from Part-Whole Relationships"
Partial Information Decomposition (PID) seeks to decompose the information that a set of source variables contains about a target variable into basic pieces, the so-called "atoms of information". It not only promises to provide a complete description of the informational relationships between source and target variables, but may also be conceived of as a unifying framework for systematically testing and comparing theories about neuronal processing in terms of their information-theoretic predictions. This talk describes how the entire theory of partial information decomposition can be derived from considerations of elementary parthood relationships between information contributions. The underlying idea is that any theory should be grounded in concepts that are as simple as possible. Being one of the most basic relationships in nature, the part-whole relation is a prime candidate for such a foundational concept: it appears on all spatial and temporal scales, and is ubiquitous not only in science but also in ordinary life. Starting from this vantage point may therefore provide a particularly accessible exposition of partial information decomposition, and it helps to answer a range of questions that have been much discussed in the PID field. The talk does not assume any previous familiarity with the PID framework, as we will re-derive the theory from first principles.
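As background (not part of the abstract), in the simplest case of two sources S_1, S_2 and a target T, PID splits the joint mutual information into four atoms:

    \[
    I(T; S_1, S_2) \;=\; \mathrm{Red}(T; S_1, S_2)
    \;+\; \mathrm{Unq}(T; S_1 \setminus S_2)
    \;+\; \mathrm{Unq}(T; S_2 \setminus S_1)
    \;+\; \mathrm{Syn}(T; S_1, S_2),
    \]

i.e. redundant, unique, and synergistic contributions; the talk derives the general decomposition, for any number of sources, from parthood relations between such contributions.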

Lucas Rudelt - "Embedding optimization reveals long-lasting history dependence in neural spiking activity"
Information processing can leave distinct footprints on the statistics of neural spiking. For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales. To investigate these footprints, I will present an approach that quantifies history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking. On different neuron models, I will demonstrate that this approach captures a footprint of information processing that is beyond classical time-lagged measures of temporal dependence like the autocorrelation. I will then explain the practical challenges of estimating the mutual information, in particular when quantifying temporal dependencies. For experimental data, which are necessarily of limited size, a reliable estimation is only possible for a coarse temporal binning of past spiking, a so-called past embedding. To still account for the vastly different spiking statistics and potentially long dependencies of living neurons, I will introduce a novel approach to optimize the past embedding. Finally, I will show results on extracellular spike recordings, where we found that the strength and the timescale of history dependence showed striking differences between different neural systems. This work introduces a toolkit for the estimation of history dependence in recorded spike trains, which could provide valuable insights into how information processing is organized in the brain.
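The central quantity can be sketched as follows (the notation here is ours, for orientation only): with X_t the spiking in a small current time bin and X_past(T) an embedding of the preceding spiking history of duration T, the history dependence is the normalized mutual information

    \[
    R(T) \;=\; \frac{I\bigl(X_t;\, X_{\mathrm{past}}(T)\bigr)}{H(X_t)},
    \]

and the embedding of the past (the number of bins and their temporal scaling) is what the approach described above optimizes.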

Madhavun Candadai - "Sources of Prediction Information in Dynamical Neural Networks"
Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms are constantly making predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are generated from an agent’s internal model of the world, or predictions are extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we applied it to a set of computational models of idealized agent-environment systems. Analysis of these systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can be different for the same task if the environmental structure changes. These results indicate that decomposing multivariate information and measuring these quantities over time is not only powerful but also necessary to understand neural network operation.
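For concreteness, the bivariate predictive information referred to here can be written (in our own notation, as a generic lagged mutual information rather than the specific measure of the study) as

    \[
    I_{\mathrm{pred}}(\tau) \;=\; I\bigl(X_t;\, S_{t+\tau}\bigr),
    \]

where X_t is the state of an agent component (e.g. its neural activity) at time t and S_{t+τ} is the environmental stimulus a lag τ into the future; the point made above is that this quantity alone cannot reveal where the predictive information was generated.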

Thomas Varley - "Neural Information Dynamic and Topological Correlates of Complex Behaviors in Macaques"
The brain is often described as “processing” information. Practically, however, “information processing” is often vaguely defined, making an exact model connecting neural dynamics, “computation”, and behavior difficult to pin down. Much of the previous empirical work on this topic has been done in systems or models that do not display complex behavior (e.g. in-vitro cultures, or anesthetized animals), leaving the relationship with complex behavior unclear. Here, we describe work done on cortical spiking activity recorded from three macaques engaged in multiple complex behaviors, including symbolic recognition, active memory maintenance, and instructed motor action. We report a full complement of information-dynamics phenomena, including active information storage, multivariate transfer entropy, and redundant and synergistic processing, and find robust differences between them across behavioral states. By making use of the inferred multivariate transfer entropy network, we can also assess how information processing in the system as a whole changes, with a high degree of specificity (e.g. global network structure discriminates between the specific contents of memory being maintained, such as different instructed, but not executed, motor behaviors). Finally, we show that the effective-connectivity network approach and the information-dynamics approach intersect: the structure of the local neighborhood of a given neuron (such as its local clustering coefficient) informs on the particular kinds of information dynamics it participates in (such as synergistic processing). This suggests a deeper relationship between network structure and function that combines elements of topology with distributed computation.
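For reference, the two core information-dynamics measures mentioned above are conventionally defined as (standard definitions, not specific to this study):

    \[
    A_X \;=\; I\bigl(X_t;\, X^{(k)}_{t-1}\bigr),
    \qquad
    T_{Y \to X} \;=\; I\bigl(X_t;\, Y^{(l)}_{t-1} \,\big|\, X^{(k)}_{t-1}\bigr),
    \]

where X^{(k)}_{t-1} denotes the length-k past of X; A_X is the active information storage of process X, and T_{Y→X} is the transfer entropy from Y to X.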

Shuai Shao - "Efficient neuronal coding of static stimuli with different noise types and different objective functions"
From an evolutionary point of view, neurons should be configured to encode information as efficiently as possible. This is the main statement of efficient coding theory. Here we analyze the efficient neuronal coding problem with different noise types: input noise, which comes from the uncertainty of external stimuli, and output noise, which is due to the intrinsic spiking statistics of neurons. We generalize previous work limited to binary neurons and Poisson spiking noise (Gjorgjieva et al., PLOS CB 2019) to any shape of tuning curve and any form of spiking noise. We also investigate different objective functions, the Shannon mutual information and the mean squared estimation error, to tackle different noise types and to investigate how coding depends on the optimality measure. Finally, we show that the two objective functions are almost proportional to each other in some circumstances, which suggests that the choice of objective function might not be a critical element in efficient neuronal coding problems.
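Schematically, the two optimality criteria compared are (in our shorthand) maximizing the Shannon mutual information between stimulus s and response r versus minimizing the mean squared error of a stimulus estimate \hat{s}(r):

    \[
    \max \; I(s;\, r)
    \qquad \text{vs.} \qquad
    \min \; \mathbb{E}\bigl[(\hat{s}(r) - s)^{2}\bigr].
    \]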

Jake Witter - "A Nearest-Neighbours Estimator for Conditional Mutual Information"
The conditional mutual information quantifies the conditional dependence of two random variables. It has numerous applications; for example, it forms part of the definition of transfer entropy, a common measure of the causal relationship between time series that is important in neuroscience. It does, however, require a lot of data to estimate accurately, much more than is typically available from a neuroscience experiment, and it suffers from the curse of dimensionality, limiting its application in machine learning and data science. We have used a Kozachenko-Leonenko approximation to derive a nearest-neighbours estimator. This estimator depends only on the distances between data points, something that is somewhat understood for neural recordings, and not on the dimension of the data; furthermore, for this estimator the bias can be calculated analytically.
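As background only (this is a generic brute-force sketch in the Kraskov-Stögbauer-Grassberger style, not the estimator presented in the talk; the function and variable names are ours), a nearest-neighbour estimate of the conditional mutual information I(X; Y | Z):

    import numpy as np
    from scipy.special import digamma

    def _as_2d(a):
        a = np.asarray(a, dtype=float)
        return a.reshape(len(a), -1)

    def _chebyshev(a):
        # Pairwise max-norm distances, shape (n, n); O(n^2) memory, fine for a sketch
        return np.abs(a[:, None, :] - a[None, :, :]).max(axis=-1)

    def cmi_knn(x, y, z, k=4):
        """KSG-style nearest-neighbour estimate of I(X; Y | Z), in nats."""
        x, y, z = _as_2d(x), _as_2d(y), _as_2d(z)
        d_joint = _chebyshev(np.hstack([x, y, z]))
        np.fill_diagonal(d_joint, np.inf)
        # Distance from each point to its k-th nearest neighbour in the joint space
        eps = np.sort(d_joint, axis=1)[:, k - 1]

        def count_within(a):
            d = _chebyshev(a)
            np.fill_diagonal(d, np.inf)
            return (d < eps[:, None]).sum(axis=1)

        n_xz = count_within(np.hstack([x, z]))
        n_yz = count_within(np.hstack([y, z]))
        n_z = count_within(z)
        return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_xz + 1) - digamma(n_yz + 1))

    # Toy usage: x and y are conditionally independent given z, so the estimate is near 0
    rng = np.random.default_rng(0)
    z = rng.normal(size=(1000, 1))
    x = z + 0.5 * rng.normal(size=(1000, 1))
    y = z + 0.5 * rng.normal(size=(1000, 1))
    print(cmi_knn(x, y, z))

In practice one would replace the brute-force distance matrices with a KD-tree search, but the counting logic and the digamma correction are the essence of this family of estimators.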

Photos

MDPI Entropy Best ECR Presentation winner Lucas Rudelt (bottom right), with workshop organizers Abdullah Makkeh (top right), Michael Wibral (top left), and Joe Lizier (bottom left)
MDPI Entropy Best ECR Presentation winner Fleur Zeldenrust

CNS*2021 Related Tutorials

We are happy to point out a CNS*2021 tutorial that looks at the early visual cortex through an information-theoretic lens.

Title: Understanding early visual receptive fields from efficient coding principles

Organizers: Li Zhaoping, Max Planck Institute for Biological Cybernetics, University of Tuebingen, Germany, li.zhaoping@tuebingen.mpg.de

Description: Understanding principles of efficient coding can enable you to answer questions such as: why should the input contrast response function of a neuron take its particular form? Why do retinal ganglion cells have center-surround receptive fields? How correlated or decorrelated should the visual responses from different retinal ganglion cells be? Why do receptive fields of retinal ganglion cells increase their sizes in dim light? How could visual coding depend on animal species? Why are color selective V1 neurons less sensitive to visual motion signals? How can one predict the ocular dominance properties of V1 neurons from developmental conditions? How should neurons adapt to changes in visual environment? This tutorial guides you on how to answer such questions. The detailed content can be seen from the titles of the video clips at this link. For more details check the webpage of the workshop.

Please email Li Zhaoping at zhaoping.li.admin@tuebingen.mpg.de for enquiries.

Previous workshops

This workshop has been run at CNS for over a decade now -- links to the websites for the previous workshops in this series are below:

  1. CNS*2020 Workshop, July 21-22, 2020, Online!
  2. CNS*2019 Workshop, July 16-17, 2019, Barcelona, Spain.
  3. CNS*2018 Workshop, July 17-18, 2018, Seattle, USA.
  4. CNS*2017 Workshop, July 19-20, 2017, Antwerp, Belgium.
  5. CNS*2016 Workshop, July 6-7, 2016, Jeju, South Korea.
  6. CNS*2015 Workshop, July 22-23, 2015, Prague, Czech Republic.
  7. CNS*2014 Workshop, July 30-31, 2014, Québec City, Canada.
  8. CNS*2013 Workshop, July 17-18, 2013, Paris, France.
  9. CNS*2012 Workshop, July 25-26, 2012, Atlanta/Decatur, GA, USA.
  10. CNS*2011 Workshop, July 27-28, 2011, Stockholm, Sweden.
  11. CNS*2010 Workshop, July 29-30, 2010, San Antonio, TX, USA.
  12. CNS*2009 Workshop, July 22-23, 2009, Berlin, Germany.
  13. CNS*2008 Workshop, July 23-24, 2008, Portland, OR, USA.
  14. CNS*2007 Workshop, July 11-12, 2007, Toronto, Canada.
  15. CNS*2006 Workshop, June 19-20, 2006, Edinburgh, U.K.

Image modified from an original credited to dow_at_uoregon.edu, obtained here (distributed without restrictions); modified image available here under CC-BY-3.0