KIAS Lecture Series on Computational Neuroscience and AI
SueYeon Chung (Harvard University / Flatiron Institute)

Video Link : Part1 | Part2

Title Computing with Neural Manifolds: A Multi-Scale Framework for Understanding Biological and Artificial Neural Networks
Abstract Recent breakthroughs in experimental neuroscience and machine learning have opened new frontiers in understanding the computational principles governing neural circuits and artificial neural networks (ANNs). Both biological and artificial systems exhibit astonishingly orchestrated information processing across multiple scales, from the microscopic responses of individual neurons to the emergent macroscopic phenomena of cognition and task function. At the mesoscopic scale, the structures of neuron population activities manifest themselves as neural representations. Neural computation can be viewed as a series of transformations of these representations through various processing stages of the brain. The primary focus of my lab's research is to develop theories of neural representations that describe the principles of neural coding and, importantly, capture the complex structure of real data from both biological and artificial systems.

In this talk, I will present three related approaches that leverage techniques from statistical physics, machine learning, and geometry to study the multi-scale nature of neural computation. First, I will introduce new statistical mechanical theories that connect geometric structures that arise from neural responses (i.e., neural manifolds) to the efficiency of neural representations in implementing a task. Second, I will employ these theories to analyze how these representations evolve across scales, shaped by the properties of single neurons and the transformations across distinct brain regions. Finally, I will show how these insights extend efficient coding principles beyond early sensory stages, linking representational geometry to efficient task implementations. This framework not only helps interpret and compare models of brain data but also offers a principled approach to designing ANN models for higher-level vision. This perspective opens new opportunities for using neuroscience-inspired principles to guide the development of intelligent systems.
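To make the notion of neural manifold geometry concrete, here is a minimal, illustrative sketch (not the statistical mechanical capacity theory described in the talk) of summarizing a manifold formed by population responses to exemplars of one stimulus class: the centroid, an RMS radius, and a participation-ratio dimension. All variable names and the toy data are hypothetical.

```python
import numpy as np

def manifold_geometry(responses):
    """Simple geometric summary of a neural manifold.

    responses: (n_samples, n_neurons) population responses to
    exemplars of one stimulus class. Returns the centroid, an
    RMS radius, and a participation-ratio dimension.
    """
    center = responses.mean(axis=0)
    deltas = responses - center                  # fluctuations around centroid
    cov = deltas.T @ deltas / len(responses)     # manifold covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    radius = np.sqrt(eig.sum())                  # total spread (RMS radius)
    dim = eig.sum() ** 2 / (eig ** 2).sum()      # participation ratio
    return center, radius, dim

rng = np.random.default_rng(0)
# Toy manifold: exemplars varying along 2 latent directions in a 50-neuron space
latents = rng.normal(size=(200, 2))
proj = rng.normal(size=(2, 50))
X = latents @ proj + 5.0
center, radius, dim = manifold_geometry(X)
# dim comes out close to 2, matching the rank of the latent variation
```

The participation ratio is a common, model-free stand-in for manifold dimensionality; the talk's framework goes further by linking such geometric quantities to the capacity of a readout to separate the manifolds.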


Hannah Choi (School of Mathematics, Georgia Institute of Technology)

Video Link : Part1

Title Efficient coding in cortical networks with diverse cell types
Abstract The brain efficiently processes sensory information by generating an internal model of the environment, which is continuously updated through prediction errors and the suppression of expected information, a process known as predictive coding. While experimental evidence supports a cortical implementation of predictive coding, how it is shaped by connectivity across the cortical hierarchy, layers, and diverse neuronal subtypes remains unclear. The first part of the talk will introduce previous experimental and theoretical studies of predictive coding. In the second part, I will present recent work from my group that investigates mechanisms underlying predictive coding in data-driven cortical network models. By constructing a biologically grounded network model with realistic connectivity among various cortical cell types, our work uncovers how functional connectivity across cortical circuits changes as predictions and prediction errors are communicated. Furthermore, by mapping algorithmic components of predictive coding onto subtypes of excitatory and inhibitory neurons, our study generates experimentally testable predictions about cell-type-specific responses to expected and unexpected stimuli during both active and passive sensing.
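The core predictive coding loop the abstract refers to can be sketched in a few lines: error units carry the difference between the input and a top-down prediction, and the internal estimate is updated to suppress that error. This is a minimal rate-based toy, not the data-driven, cell-type-resolved cortical models of the talk; the weights and learning rate here are arbitrary assumptions.

```python
import numpy as np

# Minimal predictive coding loop (illustrative only): an internal estimate mu
# predicts the input x through generative weights W; prediction-error units
# carry eps = x - W @ mu, and mu is updated to reduce that error.

rng = np.random.default_rng(1)
W = rng.normal(size=(20, 5)) / np.sqrt(5)   # assumed-known generative weights
z_true = rng.normal(size=5)
x = W @ z_true                               # sensory input

mu = np.zeros(5)                             # internal estimate (prediction units)
lr = 0.1
for _ in range(500):
    eps = x - W @ mu                         # prediction-error units
    mu += lr * W.T @ eps                     # update suppresses expected input

final_error = np.linalg.norm(x - W @ mu)     # residual error after inference
```

Once the estimate matches the input, the error units fall silent, which is the algorithmic signature of "suppression of expected information" that the talk maps onto specific excitatory and inhibitory subtypes.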


Kisuk Lee, Alexander Bae (Zetta AI)

Video Link : Part1 | Part2

Title Building Connectome: Mapping neural structure to function at scale
Abstract Neuronal connectivity can be reconstructed from 3D electron microscopy (EM) images of brain tissue. Connectomics, a modern revival of neuroanatomy, aims to densely reconstruct neurons, comprehensively detect synapses, and extract complete wiring diagrams from brain volumes. Over the past decade, deep learning has become a central tool for reconstructing neural circuits from EM datasets. In the first part of this lecture, we will introduce our deep learning-based computational pipeline for processing EM images in connectomics. We will highlight automated reconstructions from both terascale and petascale EM datasets of the fly and mouse brains, and discuss how connectomes may shape the future of neuroscience.
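As a toy illustration of the segmentation step (deliberately far simpler than the deep-learning pipeline the lecture describes), a predicted boundary-probability map can be turned into a segmentation by thresholding and labeling connected components. The boundary map, threshold, and helper function here are all hypothetical.

```python
import numpy as np

# Illustrative baseline (not the actual pipeline): threshold a predicted
# boundary-probability map and label connected components as segments.

def label_components(mask):
    """4-connected component labeling of a 2D boolean mask via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = current
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, current

# Toy "boundary map": two cell interiors separated by a high-probability wall
boundary = np.zeros((8, 8))
boundary[:, 4] = 0.9
segments, n_segments = label_components(boundary < 0.5)  # two segments
```

Production connectomics pipelines replace each step with learned components (boundary or affinity prediction by convolutional networks, then agglomeration), but the decomposition into boundary prediction followed by region extraction is the same.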

The structure of these reconstructed circuits is essential to understanding brain function. Biological neural circuits perform reliable and precise computations despite inherent noise, supported by networks of neurons interconnected through thousands of synapses per cell. Although light microscopy has provided partial insight into these circuits, it often lacks the structural completeness required to fully explain function. Therefore, a comprehensive structural map, known as the connectome, is critical. To uncover conserved circuit architecture across animals, it is necessary to identify 1) the fundamental computational units, or cell types, and 2) the connectivity rules that govern their interactions. In the second part of the lecture, we will present findings from the entire fly brain and a portion of mouse cortical circuits, showing how structural features can predict circuit function. We will also explore how these biological structures relate to artificial neural networks.
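The two ingredients named above, cell types and connectivity rules, can be illustrated with a toy analysis (again, a sketch under assumed parameters, not the lecture's actual methods): cluster neurons by their connectivity profiles, then read off type-to-type connection probabilities as candidate wiring rules.

```python
import numpy as np

# Illustrative sketch: group neurons into putative "cell types" by clustering
# rows of a connectivity matrix, then estimate type-to-type connection
# probabilities as candidate wiring rules. All parameters are hypothetical.

rng = np.random.default_rng(2)
n = 60
labels_true = np.repeat([0, 1, 2], n // 3)       # three hypothetical types
P = np.array([[0.6, 0.1, 0.1],                    # assumed type-to-type
              [0.1, 0.6, 0.1],                    # connection probabilities
              [0.1, 0.1, 0.6]])
A = (rng.random((n, n)) < P[labels_true][:, labels_true]).astype(float)

# Simple k-means on connectivity profiles (rows of A), k = 3.
# Simplification: seed one center from each true type for stable convergence.
k = 3
centers = A[[0, n // 3, 2 * n // 3]].copy()
for _ in range(20):
    d = ((A[:, None, :] - centers[None]) ** 2).sum(-1)
    assign = d.argmin(1)
    for j in range(k):
        if (assign == j).any():
            centers[j] = A[assign == j].mean(0)

# Empirical connection probability between inferred groups (wiring rule)
rule = np.array([[A[assign == i][:, assign == j].mean() for j in range(k)]
                 for i in range(k)])
# Diagonal entries come out near 0.6, off-diagonal near 0.1
```

Real connectomic cell typing combines morphology, synapse properties, and connectivity at far larger scale, but this captures the logic of reducing a dense wiring diagram to types plus rules.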