DCIST researchers are addressing the problem of human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of such devices (e.g., eye tracking glasses and virtual reality tools) provides the opportunity to create new, non-invasive forms of interaction between humans and robots. In a paper titled "Human Gaze-Driven Spatial Tasking of an Autonomous MAV," Liangzhe Yuan, Christopher Reardon, Garrett Warnell, and Giuseppe Loianno, from the University of Pennsylvania, the U.S. Army Research Laboratory, and New York University, show how glasses equipped with a gaze tracker, a camera, and an Inertial Measurement Unit (IMU) can be used to estimate the position of the human relative to a quadrotor, decouple the gaze direction from the head orientation, and allow the human to spatially task the robot (i.e., send it new 3D navigation waypoints) in an uninstrumented environment.
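To give a feel for how a decoupled gaze direction can be turned into a 3D navigation waypoint, the sketch below shows one simplified geometric approach: express the gaze ray in the world frame using the head pose, then intersect it with a horizontal plane. This is an illustrative assumption, not the method from the paper; the function name, frames, and ray-plane intersection are all hypothetical simplifications.

```python
import numpy as np

def gaze_to_waypoint(p_head, R_wh, g_head, ground_z=0.0):
    """Illustrative sketch (not the paper's method): place a waypoint where
    the user's gaze ray meets a horizontal plane.

    p_head   -- (3,) head position in the world frame
    R_wh     -- (3, 3) rotation from head frame to world frame
    g_head   -- (3,) unit gaze direction in the head frame; supplied by the
                eye tracker, so it is decoupled from head orientation
    ground_z -- height of the plane on which waypoints are placed
    """
    g_world = R_wh @ g_head                  # gaze direction in world frame
    if abs(g_world[2]) < 1e-6:
        raise ValueError("gaze ray is parallel to the plane")
    t = (ground_z - p_head[2]) / g_world[2]  # ray-plane intersection parameter
    if t <= 0:
        raise ValueError("gaze ray points away from the plane")
    return p_head + t * g_world              # 3D waypoint to send to the MAV
```

For example, a head at height 1.7 m looking forward and 45 degrees downward (gaze direction proportional to (1, 0, -1)) yields a ground waypoint 1.7 m ahead.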

Read the article here.

View video here.

Point of Contact: Giuseppe Loianno