Real-Time Human Action Representation Based on 2D Skeleton Joints
Date
2023-12-04
Publisher
IEEE
Abstract
We present a new approach to representing human actions in real time using 2D skeleton joints as the foundation. Our approach combines three distinct types of information: (1) motion detection to identify salient regions within the action, which allows us to compute joint contribution ratios and save processing time by excluding still joints and focusing on the main joints involved in the action; (2) a predefined map of joint trajectory shapes, which encodes the temporal information and suppresses noisy data; and (3) a direction map that captures the movement of the joints in spatial space. By integrating these elements, we devise a comprehensive representation capable of discerning even highly similar actions. To evaluate the effectiveness of the proposed representation, we conducted experiments on the UTD-MHAD dataset [25], which encompasses a diverse range of 27 actions performed by 8 subjects (4 females and 4 males), each with 4 repetitions. The evaluation results exhibit notable similarity within the same action category (intra-class) and significant dissimilarity across different action categories (inter-class). Specifically, our approach achieved a score of 94.81% on the UTD-MHAD dataset, demonstrating its efficacy and robustness.
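The abstract only sketches the three components. A minimal Python illustration of the first and third ideas, computing per-joint contribution ratios from frame-to-frame motion and quantizing joint movement directions into a histogram, might look as follows. The function names, the 0.05 activity threshold, and the 8-bin direction quantization are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def contribution_ratios(joints, eps=1e-8):
    """Per-joint share of the total motion in a clip.

    joints: array of shape (T, J, 2) -- T frames, J 2D skeleton joints.
    Returns an array of shape (J,) that sums to 1.
    """
    # Frame-to-frame displacement magnitude for every joint.
    disp = np.linalg.norm(np.diff(joints, axis=0), axis=-1)  # (T-1, J)
    motion = disp.sum(axis=0)                                # (J,)
    return motion / (motion.sum() + eps)

def active_joints(joints, thresh=0.05):
    """Indices of joints whose contribution ratio exceeds a threshold
    (still joints are excluded, mirroring the paper's idea of saving
    processing time). The 0.05 value is an assumption."""
    return np.flatnonzero(contribution_ratios(joints) >= thresh)

def direction_map(joints, joint_idx, n_bins=8):
    """Quantize one joint's frame-to-frame movement direction into
    n_bins angular sectors, yielding a normalized histogram."""
    vec = np.diff(joints[:, joint_idx, :], axis=0)           # (T-1, 2)
    angles = np.arctan2(vec[:, 1], vec[:, 0])                # [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage: a 30-frame clip with 20 joints (random-walk trajectories).
rng = np.random.default_rng(0)
clip = np.cumsum(rng.normal(scale=0.01, size=(30, 20, 2)), axis=0)
main = active_joints(clip)
descriptor = np.concatenate([direction_map(clip, j) for j in main])
print(main, descriptor.shape)
```

Concatenating the per-joint direction histograms over only the active joints yields a compact, fixed-order descriptor; the paper's actual representation additionally encodes trajectory shapes via its predefined map, which is not reproduced here.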
Citation
B. Adil, H. S. Mounine and H. Ouassila, "Real-Time Human Action Representation Based on 2D Skeleton Joints," 2023 International Conference on Networking and Advanced Systems (ICNAS), Algiers, Algeria, 2023, pp. 1-7, doi: 10.1109/ICNAS59892.2023.10330505.