A Comparative Case Study to Control Drones using Hand Gestures
DOI: https://doi.org/10.22555/pjets.v13i2.1265

Keywords: Drone, Hand gesture recognition, Microsoft Kinect, Myo Armband, Intel RealSense

Abstract
As human-computer interfaces continue to evolve, hand gesture recognition (HGR) systems are becoming reliable and cost-effective. These natural-interaction interfaces allow users to convert their gestures into commands for a computer system. Multirotor drones are usually controlled with radio controllers, which require considerable experience and training, making it difficult for novice users to handle a multirotor properly. This paper builds upon our previously developed architecture for drone control via hand gestures [1]. While the initial work demonstrated the feasibility of using a vision sensor, this study expands its scope by exploring both sensor-based and vision-based approaches to HGR, integrating and testing three hand gesture data acquisition devices. The aim is to evaluate the performance and usability of these sensors under varying conditions, including indoor and outdoor environments. Experiments, conducted first in simulation and then with a real drone, show that such devices can be used to train novice users to maneuver a multirotor. The results reveal significant differences in accuracy and practicality, providing actionable insights for selecting sensors based on application needs. This comparative study highlights the adaptability of the existing architecture and its potential for future advancements in human-computer interaction.
License
Copyright (c) 2025 Pakistan Journal of Engineering, Technology and Science

This work is licensed under a Creative Commons Attribution 4.0 International License.