Localization and Mapping using Instance-specific Mesh Models

A recent paper by members of the DCIST alliance proposes an approach for building semantic maps, containing object poses and shapes, in real time onboard an autonomous robot equipped with a monocular camera. A rich understanding of the geometry and context of a robot's surroundings is important for the specification and the safe, efficient execution of complex missions. This work develops a deformable mesh model of object shape that can be optimized online based on semantic information (object parts and segmentation) extracted from camera images. Multi-view constraints on the object shape are obtained by detecting objects and extracting category-specific keypoints and segmentation masks. The paper shows that the errors between projections of the mesh model and the observed keypoints and masks are differentiable, so that gradient-based optimization can recover accurate instance-specific object shapes. The potential of this approach to build large-scale object-level maps will be investigated in DCIST autonomous navigation and situational awareness tasks.
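As a rough illustration of this kind of optimization, the PyTorch sketch below deforms a template mesh by gradient descent on a keypoint reprojection loss. It is not the paper's implementation: all names (base_verts, deform, project, K) and the synthetic data are assumptions for illustration, and the segmentation-mask term used in the paper, which would require a differentiable renderer, is omitted.

```python
# Minimal sketch (not the authors' implementation): treat the keypoint
# reprojection error of a deformable mesh as a differentiable loss and
# optimize an instance-specific shape. Names and data are illustrative.
import torch

torch.manual_seed(0)

# Category-level template mesh vertices and the vertex indices that
# correspond to the category-specific semantic keypoints.
num_verts, num_kps = 500, 12
base_verts = torch.randn(num_verts, 3)
kp_idx = torch.randint(0, num_verts, (num_kps,))

# Instance-specific deformation of the template, optimized online.
deform = torch.zeros(num_verts, 3, requires_grad=True)

# Assumed pinhole intrinsics; the translation t places the object
# in front of the camera.
K = torch.tensor([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])
t = torch.tensor([0.0, 0.0, 5.0])

def project(points_3d):
    """Perspective projection of camera-frame points to pixel coordinates."""
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic "observed" 2D keypoints from one view, standing in for the
# output of the keypoint detector.
with torch.no_grad():
    target_kps = project(base_verts[kp_idx] + t + 0.1 * torch.randn(num_kps, 3))

opt = torch.optim.Adam([deform], lr=1e-2)
for step in range(200):
    verts = base_verts + deform + t                # deformed mesh, camera frame
    kp_err = project(verts[kp_idx]) - target_kps   # keypoint reprojection residual
    loss = (kp_err ** 2).mean() + 1e-3 * (deform ** 2).sum()  # + shape regularizer
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the multi-view setting described in the paper, residuals of this form would be accumulated across frames, so that each new detection further constrains the instance-specific shape.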

Additional Details: https://fengqiaojun.github.io/IROS19_InstanceMesh.html

Source: Q. Feng, Y. Meng, M. Shan, and N. Atanasov, “Localization and Mapping using Instance-specific Mesh Models,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019.

Task: RA1.A1: The Swarm’s Knowledge Base: Contextual Perceptual Representations

Points of Contact: Nikolay Atanasov (PI) and Qiaojun Feng.