Learning Connectivity-Maximizing Network Configurations

A recent paper by members of the DCIST alliance develops a data-driven method for providing mobile wireless infrastructure on demand to multi-robot teams that must communicate in order to collaboratively achieve a common objective. While a considerable amount of research has been devoted to this problem, existing solutions do not scale well enough for online use with more than a handful of agents. To address this limitation, the researchers propose a supervised learning approach in which a convolutional neural network (CNN) learns to place communication agents by imitating an expert that uses an optimization-based strategy. After detailing their training procedure and CNN architecture choices, they demonstrate the performance of the CNN on canonical network topologies, randomly generated test cases, and larger teams not seen during training. They also show how the system can be applied to dynamic robot teams in a Unity-based simulation. Their approach produces connected networks orders of magnitude faster than the optimization-based scheme while achieving comparable performance.
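
To make the supervised setup concrete, the sketch below illustrates one plausible way such an imitation pipeline could be wired up: task-agent positions are rasterized into a 2-D image, and a small convolutional encoder-decoder is trained to regress an image encoding the expert's network-agent placements. The architecture, image resolution, loss, and stand-in random data here are assumptions for illustration only and are not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): a small convolutional
# encoder-decoder maps an image of task-agent positions to an image of
# network-agent placements provided by an expert optimization-based strategy.
import torch
import torch.nn as nn


class PlacementCNN(nn.Module):
    """Toy encoder-decoder that predicts a placement image from a task image."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_step(model, optimizer, task_img, expert_img):
    """One supervised update: regress the expert's placement image."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(task_img), expert_img)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = PlacementCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Stand-in batch: in practice the inputs would rasterize task-agent
    # positions and the targets would rasterize the expert's (optimization-based)
    # network-agent configurations.
    task_img = torch.rand(8, 1, 64, 64)
    expert_img = torch.rand(8, 1, 64, 64)
    for step in range(5):
        print(f"step {step}: loss = {train_step(model, opt, task_img, expert_img):.4f}")
```

In a full system of this kind, the predicted image would then be post-processed to extract candidate network-agent locations, with the expert targets generated offline by the optimization-based scheme the CNN is trained to imitate.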

Capability: T3C2A – Learning Configurations in Mobile Infrastructure on Demand

Points of Contact: Alejandro Ribeiro (PI) and Daniel Mox

Video: https://youtu.be/YLgxFJdN9hg

Paper: https://arxiv.org/abs/2112.07663

Citation: Daniel Mox, Vijay Kumar, and Alejandro Ribeiro. “Learning Connectivity-Maximizing Network Configurations.” arXiv preprint arXiv:2112.07663 (2021).