Color constancy is an essential stage of the Image Signal Processor (ISP) pipeline that removes the color cast introduced into the captured image by scene illumination. Recently, several supervised algorithms, including methods based on Convolutional Neural Networks (CNNs), have been shown to perform well on this problem. However, collecting many raw images of various scenes under different lighting conditions and measuring the corresponding illumination values is time-consuming and costly. To reduce the dependence on large-scale labeled datasets and take advantage of standard CNN architectures, we propose an approach for building an efficient color constancy algorithm. First, we apply a structured channel pruning method to thin our baseline model: we iteratively prune 75% of the channels of a MobileNet variant that serves as our model's backbone and is pretrained on a large-scale classification dataset. That is, the backbone together with its classification head is used during the network pruning stage. The resulting compact model is then transferred to and trained on a small color constancy dataset. During training on the color constancy task, we apply the Dense-Sparse-Dense (DSD) training technique. The proposed method achieves performance comparable to state-of-the-art models while requiring fewer multiply-accumulate operations (MACs) and significantly lower computational cost.
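The two model-compression steps mentioned above can be sketched as follows; this is an illustrative NumPy sketch, not the paper's implementation, and the function names, the L1-norm channel-saliency criterion, and the magnitude-based sparsity mask are all assumptions chosen because they are common choices for structured pruning and for the sparse phase of DSD.

```python
import numpy as np

def prune_channels(weights, ratio):
    """Structured channel pruning sketch: drop a fraction of a conv layer's
    output channels, ranking channels by the L1 norm of their filters
    (an assumed saliency criterion; the paper's exact criterion may differ).

    weights: (out_channels, in_channels, kH, kW) filter tensor
    ratio:   fraction of output channels to remove (0.75 in the abstract)
    """
    n_out = weights.shape[0]
    n_keep = max(1, n_out - int(round(n_out * ratio)))
    # Per-channel saliency: sum of absolute filter weights (L1 norm)
    saliency = np.abs(weights).reshape(n_out, -1).sum(axis=1)
    keep = np.sort(np.argsort(saliency)[-n_keep:])  # surviving channel indices
    return weights[keep], keep

def dsd_sparse_mask(weights, sparsity):
    """Sparse phase of DSD training (sketch): zero out the smallest-magnitude
    weights via a binary mask; DSD then retrains densely with the mask removed.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)                   # number of weights to zero
    thresh = np.partition(flat, k)[k] if k > 0 else -np.inf
    return (np.abs(weights) >= thresh).astype(weights.dtype)
```

For example, pruning a 32-channel layer at a 0.75 ratio leaves 8 channels; applying this layer by layer over several prune-retrain iterations yields the compact backbone described above, after which the DSD mask drives the sparse training phase.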