Goedeke, Sven Ole: Transitions in the dynamics of recurrent neural networks. - Bonn, 2025. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online-Ausgabe in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-86892
@phdthesis{handle:20.500.11811/13767,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-86892},
author = {Goedeke, Sven Ole},
title = {Transitions in the dynamics of recurrent neural networks},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2025,
month = dec,
note = {The brain processes information and performs computations through the collective activity of large populations of interconnected neurons. The dynamics of these biological neural networks can exhibit qualitatively distinct behaviors or states. Transitions between these behaviors can occur at specific points when parameters or external conditions change. Moreover, transitions between coexisting states in a system can occur due to perturbations or noise. This thesis investigates how such transitions give rise to chaos, criticality, and drifting memory representations in recurrent neural networks. In four studies, we use methods from dynamical systems theory and statistical physics to analyze biologically relevant network models.
First, we study how time-varying inputs shape the transition to chaos in large rate-based neural networks with random connectivity. Treating the input as a stochastic drive, we analyze the resulting nonautonomous dynamics using dynamic mean-field theory. Our analysis yields an exact condition for the transition and the phase diagram. The input dynamically suppresses chaos: the transition shifts to significantly higher coupling strengths than predicted by local stability analysis. This leads to a distinctive regime of locally expansive yet stable, nonchaotic dynamics with optimal memory for past inputs.
Second, turning to spiking neural networks, we study how the intrinsic dynamics of individual neurons influence the collective irregular spiking dynamics in networks where recurrent inhibition balances constant excitation. We combine mean-field approaches and stability analysis to characterize both the activity statistics and the detailed dynamical stability of mixed networks with two types of inhibitory neurons. Remarkably, even a single neuron whose intrinsic dynamics accelerates towards spike generation induces a transition from stable to chaotic dynamics.
Third, we investigate how critical dynamics, neuronal avalanches, can emerge during development. With a simple model of activity-dependent network formation, we show that networks of stochastically spiking neurons can robustly self-organize into a critical state near the onset of unstable recurrent excitation. The resulting activity generates spike avalanches with power-law statistics. We map the activity dynamics to a self-exciting stochastic point process and analytically derive the distributions of avalanche sizes and durations, which exhibit power-law exponents consistent with experimental observations.
Fourth, we examine how assemblies of strongly interconnected neurons can drift due to synaptic plasticity or turnover while still stably representing a memory. We demonstrate such drifting assemblies in several spiking network models, where the same plasticity mechanisms, driven by ongoing activity in the absence of structured inputs, lead to both drift and self-organized compensation. We describe the gradual exchange of neurons in a drifting assembly as noise-activated transitions between metastable states and construct reduced models that explain a transition from drifting to static assemblies.},
url = {https://hdl.handle.net/20.500.11811/13767}
}
