Züge, Paul: Biologically plausible learning and dynamics in neural networks. - Bonn, 2025. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online edition in bonndoc: https://nbn-resolving.org/urn:nbn:de:hbz:5-86461
@phdthesis{handle:20.500.11811/13696,
urn = {https://nbn-resolving.org/urn:nbn:de:hbz:5-86461},
doi = {10.48565/bonndoc-715},
author = {Züge, Paul},
title = {Biologically plausible learning and dynamics in neural networks},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2025,
month = nov,
note = {The plasticity and dynamics of biological neural networks enable remarkable computations despite physiological constraints. Here we study plasticity and dynamics that satisfy such constraints using analytically tractable linear rate networks and numerical simulations.
Biologically plausible models of reinforcement learning must be local and able to learn from sparse, delayed rewards. Weight perturbation (WP) and node perturbation (NP) achieve this by correlating fluctuations in synaptic strength or neuronal activity, respectively, with reward changes. Because the number of weights massively exceeds the number of neurons, NP was believed to be far superior to WP and more likely to be realized neurobiologically. We develop a clear, mathematically grounded understanding of these versatile learning rules applied to linear rate networks learning a student-teacher task. Our analytical results show that WP can perform similarly to or even better than NP for many temporally extended and low-dimensional tasks, which we confirm in simulations of more complex networks and tasks. We further find qualitative differences in the weight and learning dynamics of WP and NP that might make it possible to distinguish them experimentally. These insights allow us to formulate modified learning rules that in certain situations combine the advantages of WP and NP. Together, our findings indicate that WP is competitive with or even preferable to NP for many relevant biological and machine learning tasks, suggesting it as a useful benchmark and a plausible candidate for learning in the brain.
Biologically plausible models of neural computation must reflect experimentally observed network characteristics. One such characteristic is that principal neurons in sensory cortices encode continuous variables with overlapping responses, featuring predominant excitation between neurons with strongly overlapping responses. The reasons underlying such connectivity are still unclear, and it even has known disadvantages. To address this knowledge gap, we develop a novel cooperative coding scheme that relies on like-to-like excitation to implement a desired response. Neurons cooperatively share their computations and their access to feedforward input with similarly tuned neurons that also need them. This makes it possible to exchange many feedforward and less specific recurrent connections for a few specific recurrent ones, thereby minimizing the total number of synapses. By comparing cooperatively coding and feedforward networks that achieve the same network response, we find that synaptic savings come at the cost of increased network response times. This trade-off improves in magnitude and scaling when introducing delayed, balancing inhibition or spike frequency adaptation, or when encoding higher-dimensional stimuli. Our results suggest the number of synapses as an important constraint that can explain observed connectivity patterns in a novel cooperative coding scheme, possibly enabled by balancing inhibition.},
url = {https://hdl.handle.net/20.500.11811/13696}
}





