Research Article
Enhanced Recognition of Human Activity using Hybrid Deep Learning Techniques
Author(s): Abinaya S*, Rajasenbagam T, Indira K, Uttej Kumar K and Potti Sai Pavan Guru Jayanth
Published In : International Journal of Electrical and Electronics Research (IJEER) Volume 12, Issue 1
Publisher : FOREX Publication
Published : 20 January 2024
e-ISSN : 2347-470X
Page(s) : 36-40
Abstract
In the domain of deep learning, Human Activity Recognition (HAR) models stand out, surpassing conventional methods. These models excel at autonomously extracting vital data features and handling complex sensor data. However, the evolving nature of HAR demands costly and frequent retraining due to variations in subjects, sensors, and sampling rates. To address this challenge, we introduce Cross-Domain Activities Analysis (CDAA) combined with a clustering-based Gated Recurrent Unit (GRU) model. CDAA reorganizes motion clusters, merging source (origin) and target (destination) movements while quantifying the disparity between domains. We further incorporate image datasets, leveraging Convolutional Neural Networks (CNNs). We highlight the innovative aspects of the proposed hybrid GRU_CNN model, showcasing its superiority in addressing specific challenges in human activity recognition, such as subject and sensor variations. The approach consistently achieves 98.5% accuracy across image, UCI-HAR, and PAMAP2 datasets and excels at distinguishing activities with similar postures. Our research not only pushes boundaries but also reshapes the landscape of HAR, opening doors to innovative applications in healthcare, fitness tracking, and beyond.
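To make the fusion idea concrete, the following is a minimal illustrative sketch (an assumption for illustration, not the authors' published code) of a hybrid GRU_CNN classifier of the kind described above, written in PyTorch: a CNN branch encodes an image, a GRU branch encodes a window of wearable-sensor readings, and the two embeddings are concatenated for activity classification. All layer sizes, input shapes, sensor-channel counts, and the six-class output are hypothetical placeholders, not values reported in the paper.

# Minimal sketch (assumption, not the authors' code): hybrid CNN + GRU fusion for HAR.
import torch
import torch.nn as nn

class HybridGRUCNN(nn.Module):
    def __init__(self, n_sensor_channels=9, n_classes=6):
        super().__init__()
        # CNN branch: encodes a 3x64x64 image into a 64-dimensional embedding.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
        )
        # GRU branch: encodes a (time, channels) sensor window into a 64-d embedding.
        self.gru = nn.GRU(input_size=n_sensor_channels, hidden_size=64,
                          batch_first=True)
        # Fusion head: concatenated embeddings mapped to activity logits.
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, image, sensor_window):
        img_feat = self.cnn(image)                  # (batch, 64)
        _, h_n = self.gru(sensor_window)            # h_n: (1, batch, 64)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return self.head(fused)                     # (batch, n_classes)

# Usage with dummy inputs: a batch of 4 images and 128-step, 9-channel sensor windows.
model = HybridGRUCNN()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 128, 9))
print(logits.shape)  # torch.Size([4, 6])

Concatenating the two embeddings is one simple late-fusion choice; the paper's multi-modal fusion and CDAA clustering details may differ from this sketch.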
Keywords: Human Activity Recognition (HAR), Convolutional Neural Networks (CNNs), Sensor Data Analysis, Activity Classification, Multi-Modal Data Fusion.
Abinaya S*, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India; Email: s.abinaya@vit.ac.in
Rajasenbagam T, Department of Computer Science and Engineering, Government College of Technology, Coimbatore 641013, India; Email: trajasenbagam@gct.ac.in
Indira K, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai 625015, India; Email: kiit@tce.edu
Uttej Kumar K, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India; Email: kandagatlauttej.kumar2019@vitstudent.ac.in
Potti Sai Pavan Guru Jayanth, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India; Email: sai.pavangurujayanth2019@vitstudent.ac.in