Research Article
An Efficient Method for Object Detection with Automatic Illumination Correction
Author(s): Chinthakindi Kiran Kumar, Gaurav Sethi, Kirti Rawal
Published In : International Journal of Electrical and Electronics Research (IJEER) Volume 13, Issue 2
Publisher : FOREX Publication
Published : 25 August 2025
e-ISSN : 2347-470X
Page(s) : 393-401
Abstract
Recent advancements in artificial intelligence and computer vision have led to the automation of many conventional surveillance techniques, especially in smart city applications. However, object detection systems often struggle in poorly lit environments and may suffer from slow processing speeds. To address these limitations, this paper proposes an adaptive illumination correction technique based on an enhanced Logarithmic Image Processing (LIP) model. The method improves object visibility in low-light video frames and thereby enhances detection performance. Additionally, a customized deep convolutional neural network is developed to accurately detect objects after the illumination correction is applied. The combined framework is evaluated on standard datasets and demonstrates superior robustness to varying illumination conditions compared with existing state-of-the-art methods. The results show significant improvements in accuracy, recall, precision, and F-measure, confirming the effectiveness of the proposed approach.
Keywords: Illumination correction, LIP model, Deep Network, Object detection.
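As a rough illustration of the illumination-correction step described in the abstract, the sketch below applies one common form of the Logarithmic Image Processing (LIP) scalar multiplication, M - M(1 - f/M)^lambda, to brighten a dark frame, with lambda chosen adaptively from the frame's mean gray level. This is a minimal sketch of the generic LIP operation only, not the paper's enhanced model; the function names, the target mean of 110, and the adaptive rule for lambda are assumptions made for the demonstration.

import numpy as np

M = 256.0  # upper bound of the gray-tone range assumed by the classical LIP model

def lip_scalar_multiply(frame, lam):
    # One common form of LIP scalar multiplication: lam (x) f = M - M * (1 - f/M)**lam
    f = frame.astype(np.float64)
    out = M - M * np.power(1.0 - f / M, lam)
    return np.clip(out, 0, 255).astype(np.uint8)

def adaptive_lip_correction(frame, target_mean=110.0):
    # Illustrative adaptive rule (assumption): choose lam so that a pixel at the
    # frame's mean gray level is mapped to target_mean, then apply it everywhere.
    mean = max(float(frame.mean()), 1.0)  # guard against an all-black frame
    lam = np.log(1.0 - target_mean / M) / np.log(1.0 - mean / M)
    return lip_scalar_multiply(frame, lam)

# Usage: brighten a synthetic under-exposed grayscale frame.
dark = (np.random.rand(240, 320) * 60).astype(np.uint8)   # mean around 30
corrected = adaptive_lip_correction(dark)
print(dark.mean(), corrected.mean())   # corrected mean lands roughly near 110

In the proposed framework, a corrected frame such as this would then be passed to the detection stage; the customized deep convolutional network itself is described in the full paper and is not sketched here.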
Chinthakindi Kiran Kumar, School of Electronics & Electrical Engineering, Lovely Professional University, India; Email: ckkmtech11@gmail.com
Chinthakindi Kiran Kumar, Department of Electronics & Communication Engineering, Malla Reddy College of Engineering & Technology, Hyderabad, India;
Gaurav Sethi, School of Electronics & Electrical Engineering, Lovely Professional University, India; Email: gaurav.11106@lpu.co.in
Kirti Rawal, School of Electronics & Electrical Engineering, Lovely Professional University, India; Email: kirti.20248@lpu.co.in