Review Article
Facial Expression Analysis and Estimation Based on Facial Salient Points and Action Unit (AUs)
Author(s) : Shashank M Gowda1, H N Suresh2
Published In : International Journal of Electrical and Electronics Research (IJEER) Volume 10, Issue 1
Publisher : FOREX Publication
Published : 30 March 2022
e-ISSN : 2347-470X
Page(s) : 7-17
Abstract
Humans use facial expressions as one of the most effective, quick, and natural ways to convey their feelings and intentions to others. This research presents an analysis of human facial structure and its components using Facial Action Units (AUs) and geometric structures to identify human facial expressions. The approach considers facial components such as the nose, mouth, eyes, and eyebrows for FER. Nostril contour points, namely the left lower tip, right lower tip, and centre tip, are taken as the salient points of the nose. The salient points of the mouth are extracted from the left and right end points and from the upper- and lower-lip midpoints along the lip curve. These salient points are extracted for every facial expression of the same subject, with the neutral face taken as the reference. The geometric structure of the neutral face is mapped onto the structures of the other expression faces, and the deformation is estimated using the Euclidean distance. Classification algorithms such as LibSVM, MLP, and RF achieved a classification accuracy of 86.56% on average. The experimental findings show that this image feature extraction is computationally efficient and gives promising results.
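The deformation measure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each face is represented as an ordered list of 2-D salient-point coordinates (nose tips, mouth corners, etc.), with the neutral face as the reference, and computes the per-point Euclidean distance to an expression face. The function name and the sample coordinates are hypothetical.

```python
import numpy as np

def deformation_features(neutral_pts, expr_pts):
    """Per-point Euclidean deformation between a neutral face and an
    expression face, given matching salient-point (x, y) coordinates."""
    neutral = np.asarray(neutral_pts, dtype=float)
    expr = np.asarray(expr_pts, dtype=float)
    # Row-wise Euclidean distance: one deformation value per salient point.
    return np.linalg.norm(expr - neutral, axis=1)

# Hypothetical salient points: nose centre tip, mouth left end, mouth right end.
neutral = [(50, 60), (40, 80), (60, 80)]
smile = [(50, 60), (36, 78), (64, 78)]
print(deformation_features(neutral, smile))
```

The resulting distance vector would then serve as the feature input to a classifier such as LibSVM, MLP, or Random Forest, as the abstract describes.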
Keywords: Facial Expression, Action Unit, Salient Points, Random Forest, Pre-processing, Facial Landmark.
Shashank M Gowda, AP, DoECE, YIT, India; Email: shashank.m.gowda.91@gmail.com
H N Suresh, Prof, DoEIE, BIT, Bengaluru, Karnataka, India; Email: pdimri1@gmail.com
Shashank M Gowda and H N Suresh (2022), Facial Expression Analysis and Estimation Based on Facial Salient Points and Action Unit (AUs). IJEER 10(1), 7-17. DOI: 10.37391/IJEER.100102.