This paper introduces Differentiable Optimal Discrete Value Quantization (DODVQ), a training method for Spiking Neural Networks (SNNs) targeting inference on neuromorphic hardware with strict synaptic weight constraints. Unlike conventional approaches that derive a discrete set of weights by applying clustering algorithms (e.g., k-Means) to full-precision or QAT-trained weights, DODVQ treats the discrete values themselves as learnable parameters co-optimized with the model. We demonstrate that for models constrained to only four unique weight values (M=4), our DODVQ-SNN substantially outperforms a baseline SNN created with a conventional QAT-plus-clustering pipeline. On the FashionMNIST and EEG Eye State datasets, DODVQ recovers nearly all of the performance of the full-precision models, whereas the baseline method suffers a catastrophic drop in accuracy. Our work provides a robust pathway for developing high-performance SNNs that are directly compatible with the hardware constraints of energy-efficient neuromorphic processors. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
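
The core idea of treating the M discrete values as learnable parameters can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the class name, initialization scheme, and the specific straight-through-estimator formulation are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class LearnableValueQuantizer(nn.Module):
    """Sketch of learnable discrete-value quantization: each weight is
    snapped to the nearest of M trainable values in the forward pass,
    while a straight-through estimator lets gradients reach both the
    underlying weights and the discrete values themselves."""

    def __init__(self, num_values: int = 4, init_range: float = 1.0):
        super().__init__()
        # The M discrete values are trainable parameters.
        # Uniform initialization over [-init_range, init_range] is an
        # assumption of this sketch, not the paper's scheme.
        self.values = nn.Parameter(
            torch.linspace(-init_range, init_range, num_values))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Squared distance from every weight to every candidate value.
        d = (w.unsqueeze(-1) - self.values) ** 2
        # Hard nearest-value assignment (argmin is non-differentiable).
        idx = d.argmin(dim=-1)
        w_q = self.values[idx]
        # Forward value equals w_q; gradients flow to self.values
        # through the indexing above and to w via the identity term
        # (straight-through estimator).
        return w_q + w - w.detach()
```

In a training loop, a layer's weight tensor would be passed through this module before the forward computation, so the optimizer updates the discrete values and the latent full-precision weights jointly, which is the co-optimization the abstract describes.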