Human Gaze-Driven Spatial Tasking of an Autonomous MAV
A recent paper by members of the DCIST alliance provides new results on human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of such devices (e.g., eye tracking glasses and virtual reality tools) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with a gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor and to decouple the gaze direction from the head orientation, which allows the human to spatially task the robot (i.e., send it new 3-D navigation waypoints) in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human’s head orientation using a combination of camera and IMU data. To detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios, including inspection and first response, as well as by people with disabilities that affect their mobility.
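As an illustration of the kind of geometry involved, the sketch below shows one simple way a head-decoupled gaze direction could be turned into a 3-D navigation waypoint: rotate the gaze ray from the head frame into the world frame (using the head pose from camera and IMU tracking) and intersect it with a horizontal plane. This is only a minimal example under assumed frame conventions and a ground-plane tasking surface; the function names and parameters are hypothetical and do not come from the paper.

import numpy as np

def gaze_to_waypoint(R_world_head, p_head_world, gaze_dir_head, ground_z=0.0):
    """Illustrative sketch: project a gaze ray into the world frame and
    intersect it with a horizontal plane to obtain a 3-D waypoint.

    R_world_head  : 3x3 rotation of the head frame in the world frame
                    (e.g., from camera + IMU head tracking).
    p_head_world  : 3-vector, head position in the world frame.
    gaze_dir_head : 3-vector, gaze direction from the eye tracker,
                    expressed in the head frame (decoupled from head motion).
    ground_z      : assumed height of the plane used as the tasking surface.
    """
    # Rotate the (normalized) gaze direction into the world frame.
    d_world = R_world_head @ (gaze_dir_head / np.linalg.norm(gaze_dir_head))
    if abs(d_world[2]) < 1e-6:
        return None  # gaze ray (nearly) parallel to the plane: no waypoint
    # Ray-plane intersection: p + t * d lies on the plane z = ground_z.
    t = (ground_z - p_head_world[2]) / d_world[2]
    if t <= 0:
        return None  # intersection lies behind the viewer
    return p_head_world + t * d_world

# Example: head 1.7 m above the ground, looking slightly downward and ahead.
waypoint = gaze_to_waypoint(np.eye(3),
                            np.array([0.0, 0.0, 1.7]),
                            np.array([1.0, 0.0, -0.5]))
print(waypoint)  # approximately [3.4, 0.0, 0.0]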
Source: L. Yuan, C. Reardon, G. Warnell, and G. Loianno, “Human Gaze-Driven Spatial Tasking of an Autonomous MAV”, IEEE Robotics and Automation Letters, 2019 (presented at ICRA 2019). Featured in IEEE Spectrum, DigitalTrends, and more.
Task: CDE 4, RA1.C1, RA1.B3, RA2.A2
Points of Contact: Giuseppe Loianno (PI) and Christopher Reardon.