In this project, we investigate the role of the membrane time constant in spiking neuron models and its impact on network training. Unlike conventional weights, the membrane time constant governs the temporal dynamics of neurons, determining how information is integrated and propagated over time. Naive optimization of the membrane time constant can lead to unstable training dynamics and degraded network accuracy. In this work, we propose a theoretically grounded parameterization of the membrane time constant that keeps it within a stable regime while preserving differentiability for gradient-based optimization. By accounting for the non-linear effect of the time constant on the neuron's response, the parameterization yields smoother and more stable training. Experimental results show that the proposed method improves performance. Moreover, its theoretical formulation is not constrained by the network architecture, providing a novel optimization technique for spiking neural networks.
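The abstract does not specify the exact parameterization used; a minimal illustrative sketch of the general idea is shown below, assuming a sigmoid-bounded mapping from an unconstrained trainable parameter (here called `rho`) to a time constant confined to a hypothetical stable interval `(tau_min, tau_max)`. The function names and bounds are assumptions for illustration, not the paper's method.

```python
import math

def tau_from_raw(rho, tau_min=1.0, tau_max=20.0):
    # Sigmoid squashes the unconstrained parameter rho into (0, 1);
    # a linear map then places tau inside the stable interval
    # (tau_min, tau_max). The mapping is smooth, so gradients can
    # flow through rho during training.
    s = 1.0 / (1.0 + math.exp(-rho))
    return tau_min + (tau_max - tau_min) * s

def decay_factor(tau, dt=1.0):
    # Per-step membrane decay of a leaky integrate-and-fire neuron.
    # For any tau > 0 this lies strictly inside (0, 1), so the
    # membrane potential cannot diverge as tau is optimized.
    return math.exp(-dt / tau)
```

Because the sigmoid keeps `tau` strictly inside its interval for any real-valued `rho`, gradient updates to `rho` can never push the neuron's decay factor outside the stable range, which is the property the abstract attributes to the proposed parameterization.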
Parameterized Learning of Time Constants