| Download | - Final version: MuViH: Multi-View Hand gesture dataset and recognition pipeline for human–robot interaction in a collaborative robotic finishing platform (PDF, 4.6 MiB)
- Supplementary information: MuViH: Multi-View Hand gesture dataset and recognition pipeline for human–robot interaction in a collaborative robotic finishing platform (DOCX, 195 KiB) |
|---|
| DOI | https://doi.org/10.1016/j.rcim.2025.102957 |
|---|
| Author | Hubert, Corentin; Odic, Nathan; Noel, Marie; Gharib, Sidney; Zargarbashi, Seyedhossein H. H.; Séoud, Lama (ORCID: https://orcid.org/0000-0001-8209-5282) |
|---|
| Affiliation | National Research Council of Canada, Aerospace |
| Funder | National Research Council Canada |
|---|
| Format | Text, Article |
|---|
| Subject | vision-based human–robot interaction; gesture recognition; hand detection; occlusion; multi-view; dataset |
|---|
| Abstract | The proliferation of tedious and repetitive tasks on production lines has accelerated the deployment of automated robots. It has also created demand for more flexible robots, known as cobots, that can work in collaboration with operators to perform a variety of tasks in different contexts. This paper explores the potential of computer vision-based hand gesture recognition as a means of human–robot interaction within cobotic platforms. Our research focuses on the challenges of gesture recognition under visual occlusions and varying camera viewpoints, which are typical of part finishing tasks in a real-world industrial setting. We introduce a new dataset, MuViH (Multi-View Hand gesture), which features high variability in camera viewpoints, human operator characteristics, and occlusions, and is fully annotated for hand detection and gesture recognition. We then present a comprehensive hand gesture recognition pipeline that leverages this dataset. Our pipeline incorporates a multi-view aggregation step that significantly enhances gesture recognition accuracy, particularly in the presence of visual occlusions. Through extensive experiments and cross-validation on the MuViH dataset and another public dataset, HANDS, our approach demonstrates state-of-the-art performance in gesture recognition. These results underline the potential of integrating robust vision-based interaction techniques into cobotic systems, improving flexibility and speed on the production line. |
|---|
| Publication date | 2025-01-22 |
|---|
| Publisher | Elsevier |
|---|
| Language | English |
|---|
| Peer reviewed | Yes |
|---|
| Record identifier | 4d56eed0-549b-4403-a39d-93bbf9b560d8 |
|---|
| Record created | 2025-05-08 |
|---|
| Record modified | 2025-11-03 |
|---|
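The abstract mentions a multi-view aggregation step that fuses gesture predictions from several camera viewpoints to cope with occlusion. This record does not specify how the paper performs that fusion; the sketch below illustrates one common late-fusion baseline (averaging per-view softmax scores before taking the argmax). The function name, the number of views, and the toy probabilities are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def aggregate_views(view_probs: np.ndarray) -> int:
    """Late fusion of per-view gesture classifiers (illustrative sketch).

    view_probs: (n_views, n_classes) array of per-view softmax outputs.
    Returns the index of the gesture class with the highest mean score.
    """
    fused = view_probs.mean(axis=0)  # average class scores across views
    return int(np.argmax(fused))

# Three hypothetical camera views scoring four gesture classes.
# The third view is occluded and uncertain, but averaging across
# views still yields a confident prediction for class 1.
probs = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.3, 0.3, 0.2, 0.2],  # occluded view: nearly uniform scores
])
print(aggregate_views(probs))  # → 1
```

Averaging is only the simplest choice; weighting each view by its confidence, or learning the fusion, are natural refinements when some viewpoints are systematically more occluded than others.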