To build this kind of FPGA implementation, we compress the CNN by quantizing it. Rather than fine-tuning an already trained network, this approach retrains the CNN from scratch with constrained bitwidths for the weights and activations. This procedure is termed quantization-aware training (QAT).
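As a minimal sketch of the core operation in QAT, the snippet below implements "fake quantization": during training, weights and activations are rounded to a low-bitwidth grid in the forward pass so the network learns under the constrained precision it will have on the FPGA. This assumes a symmetric uniform quantizer; the function name and bitwidth are illustrative, not from the original text.

```python
import numpy as np

def fake_quantize(x, bits):
    """Simulate low-bitwidth storage: round x onto a symmetric
    uniform grid with 2**bits levels, then return the dequantized
    (float) values used in the forward pass during QAT."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit signed
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # integer codes
    return q * scale                     # back to float for training

# Toy usage: 4-bit quantization of a small weight tensor
w = np.array([-1.0, -0.51, 0.0, 0.49, 1.0])
w_q = fake_quantize(w, bits=4)
```

In a full QAT loop, gradients typically bypass the non-differentiable rounding via the straight-through estimator, so the float "shadow" weights keep updating while the forward pass always sees quantized values.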