Binary Quantization Analysis of Neural Networks Weights on MNIST Dataset
DOI: https://doi.org/10.5755/j02.eie.28881
Keywords: Image classification, Multilayer perceptron, Neural network, Quantization, Source coding
Abstract
This paper considers the design of a binary scalar quantizer for a Laplacian source and its application in compressed neural networks. The quantizer's performance is investigated over a wide dynamic range of data variances, and for that purpose we derive novel closed-form expressions. Moreover, we propose two selection criteria for the variance range of interest. The binary quantizers are then applied to compress neural network weights, and their performance is analysed on a simple classification task. Good agreement between theory and experiment is observed, indicating strong potential for practical implementation.
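As a rough illustration of the idea in the abstract (not the authors' exact design), the following sketch applies a symmetric one-bit quantizer to Laplacian-distributed weights: the threshold is at zero and the two reconstruction levels are set to ±E[|w|], which is the MSE-optimal choice for a zero-mean symmetric source. The variance, sample size, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # assumed standard deviation of the weights
# A Laplacian source with variance sigma^2 has scale b = sigma / sqrt(2)
w = rng.laplace(loc=0.0, scale=sigma / np.sqrt(2), size=100_000)

# Binary (one-bit) scalar quantizer: threshold at 0,
# reconstruction levels at +/- E[|w|] (MSE-optimal for a symmetric source)
level = np.mean(np.abs(w))
w_hat = np.where(w >= 0.0, level, -level)

# Signal-to-quantization-noise ratio in dB
sqnr_db = 10.0 * np.log10(np.mean(w**2) / np.mean((w - w_hat) ** 2))
print(f"SQNR = {sqnr_db:.2f} dB")
```

For a Laplacian source the resulting distortion is σ² − (E[|w|])² = σ²/2, so the SQNR should come out near 10·log₁₀(2) ≈ 3.01 dB; the paper's closed-form analysis addresses how this behaves when the actual variance deviates from the design variance.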
License
The copyright for the paper in this journal is retained by the author(s) with the first publication right granted to the journal. The authors agree to the Creative Commons Attribution 4.0 (CC BY 4.0) agreement under which the paper in the Journal is licensed.
By virtue of their appearance in this open access journal, papers are free to use, with proper attribution, in educational and other non-commercial settings, with an acknowledgement of the initial publication in the journal.
Funding data
- Science Fund of the Republic of Serbia, grant number 6527104 (AI-Com-in-AI)
- Ministry of Education, Science and Technological Development of the Republic of Serbia (Ministarstvo Prosvete, Nauke i Tehnološkog Razvoja), grant number TR32035