Differentiable Optimal Discrete Value Quantization for Spiking Neural Networks on Weight-Constrained Neuromorphic Hardware

This paper introduces Differentiable Optimal Discrete Value Quantization (DODVQ), a training method for Spiking Neural Networks (SNNs) targeting inference on neuromorphic hardware with strict synaptic weight constraints. Unlike conventional approaches that derive a discrete set of weights by applying clustering algorithms (e.g., k-means) to full-precision or QAT-trained weights, DODVQ treats the discrete values themselves as learnable parameters co-optimized with the model. We demonstrate that for models constrained to only four unique weight values (M=4), our DODVQ-SNN drastically outperforms a baseline SNN created with a conventional QAT-plus-clustering pipeline. On the FashionMNIST and EEG Eye State datasets, DODVQ recovers nearly all the performance of full-precision models, whereas the baseline method suffers a catastrophic drop in accuracy. Our work provides a robust pathway for developing high-performance SNNs that are directly compatible with the hardware constraints of energy-efficient neuromorphic processors. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
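The abstract's core idea, treating the M discrete weight values as learnable parameters rather than fixing them by post-hoc clustering, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: a simple reconstruction loss stands in for the task loss, and plain gradient descent on the level values stands in for the full differentiable training loop.

```python
import numpy as np

def quantize(weights, levels):
    """Map each weight to its nearest discrete level; return quantized
    weights and the assignment index (hypothetical helper)."""
    idx = np.abs(weights[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx], idx

rng = np.random.default_rng(0)
w = rng.normal(size=256)                    # stand-in full-precision weights
levels = np.array([-1.0, -0.3, 0.3, 1.0])   # M=4 learnable discrete values

for _ in range(100):
    q, idx = quantize(w, levels)
    # Gradient of 0.5 * ||q - w||^2 w.r.t. each level: the sum of residuals
    # over the weights currently assigned to that level.
    grad = np.zeros_like(levels)
    for m in range(len(levels)):
        grad[m] = (q[idx == m] - w[idx == m]).sum()
    levels -= 0.01 * grad / len(w)          # co-optimize the levels themselves

q, _ = quantize(w, levels)
print(len(np.unique(q)))  # at most 4 unique weight values, as required
```

In the actual method the levels would receive gradients from the task loss through the quantized forward pass (e.g., via a straight-through estimator), so they adapt to what the network needs rather than merely to the weight distribution, which is the distinction the abstract draws against k-means-style clustering.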

Authors
Kartashov Ivan 1,2, Pushkareva Maria M. 4, Karandashev Iakov M. 1,3
Publisher
Springer Verlag
Language
English
Pages
202-213
Status
Published
Volume
1241 SCI
Year
2026
Affiliations
  • 1 National Research Centre "Kurchatov Institute", Moscow, Moscow Oblast, Russian Federation
  • 2 HSE University, Moscow, Russian Federation
  • 3 RUDN University, Moscow, Moscow Oblast, Russian Federation
  • 4 Polus Trading Company LLC, Saint Petersburg, Russian Federation
Keywords
Learnable Parameters; Model Compression; Neuromorphic Hardware; Quantization; Spiking Neural Networks