Entries by Lily Hoot

Learning to swarm with knowledge-based neural ordinary differential equations

A recent paper by members of the DCIST alliance uses the deep learning method, knowledge-based neural ordinary differential equations (KNODE), to develop a data-driven approach for extracting single-robot controllers from observations of a swarm’s trajectory. The goal is to reproduce global swarm behavior using the extracted controller. Different from previous works on imitation […]
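
As a rough illustration of the idea (not the authors’ implementation), the sketch below combines a hand-written “knowledge” term with a small neural correction inside one ODE right-hand side and rolls the hybrid model out against an observed trajectory; the toy swarm model, network sizes, and placeholder data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "knowledge" part of the dynamics: each agent is attracted to the swarm's
# center of mass (a hand-picked prior model, not the true swarm controller).
def known_dynamics(x):
    # x: (n_agents, 2) planar positions
    center = x.mean(axis=0, keepdims=True)
    return 0.5 * (center - x)

# Small neural network that learns the residual the prior model misses.
W1 = 0.1 * rng.standard_normal((2, 16))
b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2))
b2 = np.zeros(2)

def neural_correction(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# KNODE-style hybrid right-hand side: knowledge term plus learned correction.
def hybrid_ode(x):
    return known_dynamics(x) + neural_correction(x)

# Roll the hybrid model out with explicit Euler and compare against an observed
# swarm trajectory (random data standing in for real observations here).
def rollout(x0, dt=0.05, steps=100):
    traj = [x0]
    for _ in range(steps):
        traj.append(traj[-1] + dt * hybrid_ode(traj[-1]))
    return np.stack(traj)

observed = rng.standard_normal((101, 8, 2))      # placeholder observations
predicted = rollout(observed[0])
loss = np.mean((predicted - observed) ** 2)       # objective a training loop would minimize
print(f"trajectory mismatch: {loss:.3f}")
```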

GNN-Based Coverage and Tracking in Heterogeneous Swarms

A recent paper by members of the DCIST alliance designs decentralized mechanisms for coverage control in heterogeneous multi-robot systems, especially when considering the robots’ limited sensing ranges and complex environments. This work is part of the broader DCIST effort on GNN-based control architectures that are built, from the ground up, to operate in harsh […]
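
To make the limited-sensing aspect concrete, here is a minimal sketch (not the paper’s architecture) of one decentralized graph-aggregation step in which each robot only combines features from neighbors inside its sensing radius; the feature dimensions, sensing range, and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_robots, feat_dim = 6, 4
positions = rng.uniform(0, 10, size=(n_robots, 2))     # planar robot positions
features = rng.standard_normal((n_robots, feat_dim))   # local observations (placeholder)
sensing_range = 4.0                                     # could differ per robot in a heterogeneous team

# Adjacency restricted to the limited sensing range: robot i only "sees" robots
# within sensing_range, so the layer is decentralized by construction.
dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
adj = (dists < sensing_range) & ~np.eye(n_robots, dtype=bool)

# One graph-convolution-style layer: aggregate neighbor features, then mix self
# and neighborhood information through learned weights (random here).
W_self = 0.1 * rng.standard_normal((feat_dim, feat_dim))
W_nbr = 0.1 * rng.standard_normal((feat_dim, feat_dim))

def gnn_layer(x, adj):
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    nbr_mean = (adj @ x) / deg                 # mean over in-range neighbors only
    return np.tanh(x @ W_self + nbr_mean @ W_nbr)

h = gnn_layer(features, adj)
# A per-robot readout would map h[i] to that robot's coverage/tracking command.
print(h.shape)  # (6, 4)
```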

Learning Decentralized Controllers with Graph Neural Networks

A recent paper by members of the DCIST alliance develops a perception-action-communication loop framework using Vision-based Graph Aggregation and Inference (VGAI). This multi-agent, decentralized learning-to-control framework maps raw visual observations to agent actions, aided by local communication among neighboring agents. The framework is implemented as a cascade of a convolutional and a graph neural network […]
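
A minimal sketch of such a cascade is shown below, assuming PyTorch and made-up layer sizes: a small CNN compresses each agent’s image into a feature vector, one round of graph aggregation shares features over the communication graph, and a linear head outputs the action. It mirrors the structure described above, not the authors’ code.

```python
import torch
import torch.nn as nn

class CascadeNet(nn.Module):
    """Illustrative CNN-then-GNN cascade (a stand-in, not the VGAI implementation)."""

    def __init__(self, feat_dim=32, action_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.mix = nn.Linear(2 * feat_dim, feat_dim)   # combines self + neighbor features
        self.policy = nn.Linear(feat_dim, action_dim)

    def forward(self, images, adjacency):
        # images: (n_agents, 3, H, W); adjacency: (n_agents, n_agents) 0/1 comm graph
        feats = self.cnn(images)
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        nbr = adjacency @ feats / deg                  # one local communication round
        return self.policy(torch.relu(self.mix(torch.cat([feats, nbr], dim=1))))

net = CascadeNet()
images = torch.rand(4, 3, 64, 64)                      # one camera image per agent
adjacency = (torch.rand(4, 4) > 0.5).float().fill_diagonal_(0)
print(net(images, adjacency).shape)                    # torch.Size([4, 2])
```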

Asynchronous and Parallel Distributed Pose Graph Optimization

A recent paper by members of the DCIST alliance has received a 2020 honorable mention from IEEE Robotics and Automation Letters. The paper presents Asynchronous Stochastic Parallel Pose Graph Optimization (ASAPP), the first asynchronous algorithm for distributed pose graph optimization (PGO) in multi-robot simultaneous localization and mapping. By enabling robots to optimize their local trajectory estimates […]
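
The block-update idea behind asynchronous optimization can be caricatured on a translation-only toy problem, sketched below: each robot repeatedly re-optimizes only its own poses, in a random interleaved order that mimics asynchrony, using whatever estimates of the other robots are currently available. The actual ASAPP algorithm optimizes full poses on SE(d) with convergence guarantees; the graph, noise levels, and update rule here are simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy translation-only "pose graph": 3 robots with 4 poses each, in the plane.
n_robots, poses_per_robot = 3, 4
n_poses = n_robots * poses_per_robot
true_poses = rng.uniform(0, 10, size=(n_poses, 2))
estimate = true_poses + rng.normal(0, 1.0, true_poses.shape)   # noisy initial guess
estimate[0] = true_poses[0]                                     # anchor the first pose

# Noiseless relative measurements: an odometry chain plus random loop closures
# (the latter stand in for inter-robot measurements).
edges = [(i, i + 1) for i in range(n_poses - 1)]
edges += [(i, j) for i in range(n_poses) for j in range(i + 2, n_poses)
          if rng.random() < 0.2]
measurements = {(i, j): true_poses[j] - true_poses[i] for (i, j) in edges}

# Asynchronous-style optimization: whichever robot "wakes up" updates only its
# own block of poses, without waiting for the other robots.
for _ in range(400):
    robot = rng.integers(n_robots)
    for k in range(poses_per_robot):
        idx = robot * poses_per_robot + k
        if idx == 0:
            continue                                            # keep the anchor fixed
        targets = [estimate[i] + z for (i, j), z in measurements.items() if j == idx]
        targets += [estimate[j] - z for (i, j), z in measurements.items() if i == idx]
        if targets:
            estimate[idx] = np.mean(targets, axis=0)            # exact block-wise minimizer

print(f"mean position error: {np.linalg.norm(estimate - true_poses, axis=1).mean():.3f}")
```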

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams

A recent paper by members of the DCIST alliance considers the problem of planning trajectories for a team of sensor-equipped robots to reduce uncertainty about a dynamical process. Optimizing the trade-off between information gain and energy cost (e.g., control effort, energy expenditure, distance travelled) is desirable but leads to a non-monotone objective function in the […]
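
Schematically, objectives of this kind take the form below (notation illustrative, not taken from the paper): the information term alone is monotone in the selected trajectories, but subtracting the energy cost means that adding a trajectory can decrease the objective.

```latex
J(\mathcal{T}) \;=\; \underbrace{I\!\left(y_{\mathcal{T}};\, x\right)}_{\text{information gain}}
\;-\; \lambda\, \underbrace{C(\mathcal{T})}_{\text{energy cost}},
\qquad \mathcal{T} \subseteq \{\tau_1, \dots, \tau_N\}
```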

Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping

A recent paper by members of the DCIST alliance develops an open-source C++ library for real-time metric-semantic visual-inertial Simultaneous Localization and Mapping (SLAM). The library goes beyond existing visual and visual-inertial SLAM libraries (e.g., ORB-SLAM, VINS-Mono, OKVIS, ROVIO) by enabling mesh reconstruction and semantic labeling in 3D. Kimera is designed with modularity in mind […]
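
As a purely conceptual outline of that modularity (Kimera itself is a C++ library, and the Python stubs below are stand-ins for its modular structure rather than its actual API), the pieces roughly play these roles:

```python
# Conceptual stubs only: class names and signatures are invented for illustration,
# not taken from Kimera's C++ interface.

class VisualInertialOdometry:      # visual-inertial front end producing pose/landmark estimates
    def track(self, image, imu):
        return {"pose": None, "landmarks": None}

class RobustPoseGraph:             # back end that re-optimizes the trajectory robustly
    def add_and_optimize(self, pose):
        return pose

class Mesher:                      # 3D mesh reconstruction from poses and landmarks
    def update(self, pose, landmarks):
        return "mesh"

class SemanticLabeler:             # attaches semantic labels to the reconstructed mesh
    def label(self, mesh, segmentation):
        return "semantic mesh"

def process_frame(image, imu, segmentation, vio, pgo, mesher, labeler):
    est = vio.track(image, imu)
    pose = pgo.add_and_optimize(est["pose"])
    mesh = mesher.update(pose, est["landmarks"])
    return labeler.label(mesh, segmentation)
```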

Asymptotically Optimal Planning for Non-myopic Multi-Robot Information Gathering

A recent paper by members of the DCIST alliance develops a novel, highly scalable sampling-based planning algorithm for multi-robot active information acquisition tasks in complex environments. Active information gathering scenarios include target localization and tracking, active Simultaneous Localization and Mapping (SLAM), surveillance, environmental monitoring, and others. The goal is to compute control policies for mobile robot […]
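
As a toy illustration of sampling-based planning for information gathering (not the algorithm from the paper), the sketch below grows a random tree over a single robot’s positions, stores at each node the remaining uncertainty about a few static targets under a made-up sensing model, and then reads off the path to the most informative leaf.

```python
import numpy as np

rng = np.random.default_rng(3)

targets = rng.uniform(0, 10, size=(3, 2))   # static targets to localize
step = 0.8                                  # tree extension step size (arbitrary)

def sense(position, uncertainty):
    # Crude sensing model: nearby targets get localized better.
    d = np.linalg.norm(targets - position, axis=1)
    return uncertainty * d / (d + 1.0)

# Each tree node stores the robot position, the per-target uncertainty that
# remains after traveling there, and a pointer to its parent node.
nodes = [{"pos": np.zeros(2), "unc": np.ones(3), "parent": None}]
for _ in range(300):
    sample = rng.uniform(0, 10, size=2)                          # random exploration sample
    nearest = min(range(len(nodes)),
                  key=lambda k: np.linalg.norm(nodes[k]["pos"] - sample))
    direction = sample - nodes[nearest]["pos"]
    new_pos = nodes[nearest]["pos"] + step * direction / (np.linalg.norm(direction) + 1e-9)
    nodes.append({"pos": new_pos,
                  "unc": sense(new_pos, nodes[nearest]["unc"]),
                  "parent": nearest})

# Pick the node with the least remaining uncertainty and backtrack its path.
best = min(range(len(nodes)), key=lambda k: nodes[k]["unc"].sum())
path, k = [], best
while k is not None:
    path.append(nodes[k]["pos"])
    k = nodes[k]["parent"]
print(f"path length: {len(path)}, residual uncertainty: {nodes[best]['unc'].sum():.3f}")
```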

Active Exploration in Signed Distance Fields

When performing tasks in unknown environments, it is useful for a team of robots to have a good map of the area to assist in efficient, collision-free planning and navigation. A recent paper by members of the DCIST alliance tackles the problem of autonomous mapping of unknown environments using information-theoretic metrics and signed distance […]
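
Below is a minimal sketch of one exploration step in this spirit, with an invented map and a crude count-of-unknown-cells proxy standing in for the information-theoretic metric: a signed distance field is computed from the known part of an occupancy grid, and candidate viewpoints are scored by how many unknown cells their sensor footprint would reveal.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy map: 0 = known free, 1 = known occupied, -1 = unknown.
grid = np.zeros((40, 40), dtype=int)
grid[15:25, 18:22] = 1                      # a known obstacle
grid[:, 25:] = -1                           # unexplored half of the map

# Signed distance field over the known map: positive in free space,
# negative inside obstacles.
known_occ = grid == 1
sdf = distance_transform_edt(~known_occ) - distance_transform_edt(known_occ)

def expected_new_cells(cell, sensor_radius=6):
    # Information proxy: number of unknown cells inside the sensor footprint.
    y, x = np.ogrid[:grid.shape[0], :grid.shape[1]]
    footprint = (y - cell[0]) ** 2 + (x - cell[1]) ** 2 <= sensor_radius ** 2
    return np.count_nonzero(footprint & (grid == -1))

# Candidate viewpoints: known free cells kept away from obstacles via the SDF.
candidates = np.argwhere((grid == 0) & (sdf > 2))
best = max(map(tuple, candidates), key=expected_new_cells)
print("next viewpoint:", best, "expected new cells:", expected_new_cells(best))
```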

Learning Multi-Agent Policies from Observations

A recent paper from the DCIST team introduces a framework for learning to perform multi-robot missions by observing an expert system executing the same mission. The expert system is a team of robots equipped with a library of controllers, each designed to solve a specific task. The expert system’s policy selects the controller necessary to […]
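
One way to caricature this setup is as supervised selection over a controller library: given an observation, predict which controller the expert was running. The sketch below does exactly that on synthetic data; the observation dimension, number of controllers, labels, and network are assumptions made for illustration, not the paper’s framework.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_controllers, obs_dim = 4, 8
selector = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                         nn.Linear(32, n_controllers))

# Fake "expert demonstrations": observations labeled with the index of the
# controller the expert selected in that state.
obs = torch.randn(256, obs_dim)
labels = (obs[:, 0] > 0).long() + 2 * (obs[:, 1] > 0).long()   # 4 synthetic classes

optimizer = torch.optim.Adam(selector.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(selector(obs), labels)
    loss.backward()
    optimizer.step()

# At run time, each robot would execute the controller the selector picks.
chosen = selector(obs[:5]).argmax(dim=1)
print("predicted controller indices:", chosen.tolist())
```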

Sim-to-(Multi)-Real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors

A recent paper by members of the DCIST alliance demonstrates the use of reinforcement learning techniques to train policies in simulation that transfer remarkably well to multiple different physical quadrotors. Quadrotor stabilizing controllers often require careful, model-specific tuning for safe operation. The policies developed are low-level, i.e., they map the rotorcrafts’ state directly to the […]
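
A sketch of what such a low-level policy and a randomized simulator can look like is given below, assuming PyTorch; the state layout, network size, and parameter ranges are invented for illustration, and the training loop (the paper uses reinforcement learning) is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small MLP maps the vehicle state directly to four normalized rotor commands.
state_dim = 18        # e.g., position, velocity, rotation matrix, angular velocity
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                       nn.Linear(64, 64), nn.Tanh(),
                       nn.Linear(64, 4), nn.Sigmoid())

def randomized_quadrotor():
    # Domain randomization: each simulated episode uses a slightly different vehicle,
    # so the learned policy cannot overfit to one airframe.
    return {"mass": 0.5 + 0.5 * torch.rand(1).item(),               # kg
            "thrust_to_weight": 1.8 + 0.8 * torch.rand(1).item()}

params = randomized_quadrotor()
state = torch.randn(1, state_dim)
rotor_cmds = policy(state)                                          # in [0, 1]
max_per_rotor = params["mass"] * 9.81 * params["thrust_to_weight"] / 4
total_thrust = (rotor_cmds * max_per_rotor).sum()
print("rotor commands:", rotor_cmds.detach().numpy().round(3),
      "total thrust [N]:", round(total_thrust.item(), 2))
```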