Intermittently Connected Mobile Robot Networks with Information Propagation Guarantees

DCIST researchers pioneered strategies for teams of mobile robots to form intermittently connected communication networks by leveraging their mobility. Robots assigned to monitor and patrol large urban environments can use their movements to carry information to other robots that are not within their communication ranges. Our work shows that intermittent connectivity between pairs of robots can be achieved by synchronizing the robots’ motions so that they rendezvous, or move into each other’s communication ranges, periodically. While the network is not fully connected at any instant in time, it is guaranteed to be connected over a predetermined period of time, ensuring the successful propagation of information across the entire network. In addition, these strategies can be extended to guarantee resiliency of the mobile robot network in the presence of non-cooperative robots. Resilience for a large intermittently connected mobile robot network is achieved by concatenating modular dynamic formations, where each module is guaranteed to be resilient by design.
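The guarantee above can be illustrated with a minimal sketch (not the authors' implementation): model the time-varying network as a periodic schedule of edge sets, and check that the union of edges over one period forms a connected graph, which is what ensures information eventually reaches every robot.

```python
from itertools import chain

def union_connected(edge_schedule, nodes):
    """Check that the union of edges over one period connects all nodes.

    edge_schedule: list of edge sets, one per time step of the period.
    A connected union graph guarantees information propagates across the
    whole team, even though no single snapshot is connected.
    """
    adj = {v: set() for v in nodes}
    for u, w in chain.from_iterable(edge_schedule):
        adj[u].add(w)
        adj[w].add(u)
    # Depth-first search over the union graph.
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(nodes)

# Three robots patrolling separate buildings: no single snapshot is
# connected, but pairwise rendezvous at different steps connect the union.
schedule = [{(0, 1)}, {(1, 2)}, set()]
print(union_connected(schedule, {0, 1, 2}))  # True
```

The same check fails if any robot never rendezvouses with the rest of the team within the period, which is exactly the situation the synchronized motion plans are designed to rule out.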

Left: The creation of a time-varying, periodically connected network in an urban environment by a team of mobile robots tasked to patrol different buildings. Right: Two examples of time-varying networks formed using lattice modules. Red nodes represent malicious agents. Plots on the right show consensus formation in the network over time.

Capability: T3C2C Mobility to Extend Communication Networks 

Points of Contact: M. Ani Hsieh (PI), Xi Yu, Daigo Shishika, David Saldana

Paper: https://doi.org/10.1109/LRA.2020.2967704, https://doi.org/10.1109/TRO.2021.3088765

Citation: X. Yu and M. A. Hsieh, “Synthesis of a time-varying communication network by robot teams with information propagation guarantees,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1413–1420, 2020.

X. Yu, D. Shishika, D. Saldana, and M. A. Hsieh, “Modular Robot Formation and Routing for Resilient Consensus,” in Proc. of the 2020 IEEE American Control Conference (ACC 2020), Jul. 2020 (Virtual).

X. Yu, D. Saldana, D. Shishika, and M. A. Hsieh, “Resilient Consensus in Robot Swarms with Periodic Motion and Intermittent Communication,” IEEE Transactions on Robotics (T-RO), 2021.

Learning Connectivity-Maximizing Network Configurations

A recent paper by members of the DCIST alliance develops a data-driven method for providing mobile wireless infrastructure on demand to multi-robot teams requiring communication in order to collaboratively achieve a common objective. While a considerable amount of research has been devoted to this problem, existing solutions do not scale in a manner suitable for online applications for more than a handful of agents. To address this problem, the researchers propose a supervised learning approach with a convolutional neural network (CNN) that learns to place communication agents from an expert that uses an optimization-based strategy. After detailing training and CNN architecture choices, they demonstrate the performance of their CNN on canonical network topologies, randomly generated test cases, and larger teams not seen during training. They also show how the system can be applied to dynamic robot teams through a Unity-based simulation. Their approach provides connected networks orders of magnitude faster than the optimization-based scheme while achieving comparable performance.
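A minimal sketch of the quantity such placement methods target (an assumption here, not the paper's code): under a disk communication model, a network is connected exactly when the second-smallest eigenvalue of its graph Laplacian, the algebraic (Fiedler) connectivity, is positive. Placing a relay can turn a disconnected team into a connected one.

```python
import numpy as np

def algebraic_connectivity(points, comm_radius):
    """Second-smallest Laplacian eigenvalue (Fiedler value) of the
    disk-model communication graph; positive iff the graph is connected."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = ((d <= comm_radius) & ~np.eye(n, dtype=bool)).astype(float)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian L = D - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Two task agents too far apart; a relay midway restores connectivity.
task = np.array([[0.0, 0.0], [2.0, 0.0]])
with_relay = np.vstack([task, [[1.0, 0.0]]])
print(algebraic_connectivity(task, 1.5) > 1e-9)        # False
print(algebraic_connectivity(with_relay, 1.5) > 1e-9)  # True
```

The optimization-based expert searches over relay positions to maximize this value; the CNN learns to imitate the expert's placements, trading a small amount of optimality for orders-of-magnitude faster inference.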

Capability: T3C2A – Learning Configurations in Mobile Infrastructure on Demand

Points of Contact: Alejandro Ribeiro (PI) and Daniel Mox



Citation: Daniel Mox, Vijay Kumar, and Alejandro Ribeiro. “Learning Connectivity-Maximizing Network Configurations.” arXiv preprint arXiv:2112.07663 (2021).

Dynamic Defender-Attacker Resource Allocation Game

A recent paper by members of the DCIST alliance proposes a new resource allocation game that studies a dynamic, adversarial resource allocation problem in environments modeled as graphs. By combining ideas from Colonel Blotto games with a population dynamics model, the proposed formulation incorporates: (i) dynamic reallocation in time-varying situations, and (ii) the presence of adversarial agents. A blue team of defender robots is deployed in the environment to protect the nodes from a red team of attacker robots. The engagement is formulated as a discrete-time dynamic game, where the robots can move at most one hop in each time step. The game terminates with the attacker’s win if any location has more attacker robots than defender robots at any time. The goal is to identify dynamic resource allocation strategies, as well as the conditions that determine the winner: graph structure, available resources, and initial conditions. The authors analyze the problem using reachable sets and show how the outdegree of the underlying graph directly influences the difficulty of the defending task. Furthermore, they provide algorithms that identify sufficient conditions for the attacker’s victory. The proposed model has the potential to be extended to various scenarios to study dynamic and adversarial engagement between robots with traversability constraints.
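The two rules that define the game state above can be sketched in a few lines (an illustrative reading of the formulation, not the authors' code): the attacker's win condition checks every node for a local numerical advantage, and the one-hop constraint limits each robot's reachable set per step.

```python
def attacker_wins(defenders, attackers):
    """Attacker wins if any node holds strictly more attackers
    than defenders; counts are dicts mapping node -> robot count."""
    return any(attackers.get(v, 0) > defenders.get(v, 0)
               for v in set(defenders) | set(attackers))

def legal_moves(graph, node):
    """Robots move at most one hop per step: stay put or cross one edge."""
    return {node} | set(graph[node])

# Star graph: node 0 connected to nodes 1, 2, 3.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(attacker_wins({0: 2, 1: 1}, {1: 1}))  # False: 1 vs 1 at node 1
print(attacker_wins({0: 2, 1: 1}, {1: 2}))  # True: 2 > 1 at node 1
print(legal_moves(graph, 1))                # {0, 1}
```

Note how the star's hub enjoys a large one-step reachable set while the leaves do not; this asymmetry is the outdegree effect the reachable-set analysis quantifies.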

Capability: T2C1B: Distributed Control for Dynamic Resource Allocation in Adversarial Environments

Points of Contact: Daigo Shishika (PI), Scott Guan, Michael Dorothy 


Citation: Daigo Shishika, Yue Guan, Michael Dorothy, and Vijay Kumar, “Dynamic Defender-Attacker Blotto Game,” (under review), arXiv preprint, arXiv:2112.09890 (2021).

Cooperative Systems Design in Adversarial Environments

The Colonel Blotto game describes a scenario where two opposing Colonels strategically allocate their limited resources across multiple battlefields. The game is compelling for a multitude of reasons and has numerous applications in military strategy. Optimal strategies in the Colonel Blotto game are highly complex – the game does not admit pure strategy equilibria in settings of interest. Mixed equilibria have been characterized for special setups in only a handful of landmark papers. These contributions, however, are almost exclusively in a complete information setting. In this thrust, we investigate incomplete and asymmetric information scenarios where the Colonels may have incomplete information regarding the battlefield valuations and the opposing Colonel’s budget. Focusing on the framework of General Lotto games, a well-known variant of Colonel Blotto, we provide a complete analytical characterization of the Bayesian Nash equilibria for all instances of this game. This characterization identifies the “value of information” in such domains, i.e., the performance improvement attainable by having better information. Lastly, we explore the importance of information dissemination as a strategic component of decision-making in adversarial environments. That is, is it ever strategically advantageous to disclose information about one’s intentions in competitive scenarios? Surprisingly, the answer is yes, and we provide a characterization of when this is the case.
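A toy illustration of why budget asymmetry matters (a simple Monte Carlo sketch under assumed uniform allocation marginals, not the paper's equilibrium analysis): if each Colonel draws per-battlefield allocations from a uniform distribution scaled to their budget, the share of battlefields the stronger Colonel wins can be estimated by sampling.

```python
import random

def expected_share_won(sample_a, sample_b, trials=20000, seed=0):
    """Monte Carlo estimate of the fraction of battlefields player A
    wins, given per-battlefield allocation samplers for each player."""
    rng = random.Random(seed)
    wins = sum(sample_a(rng) > sample_b(rng) for _ in range(trials))
    return wins / trials

# Colonel A has twice B's per-battlefield budget in expectation;
# uniform marginals are used here purely as an illustrative assumption.
share = expected_share_won(lambda r: r.uniform(0, 2.0),
                           lambda r: r.uniform(0, 1.0))
print(round(share, 2))  # close to the analytical value of 0.75
```

With equal budgets the estimate sits near 0.5; doubling one budget shifts it to about 0.75, a simple numeric instance of the performance gap that the "value of information" results quantify more precisely.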

Capability: T2C1H 

Points of Contact: Jason R. Marden (PI), Keith Paarporn, Rahul Chandan 


Citation: K. Paarporn, R. Chandan, M. Alizadeh, and J. R. Marden, “A General Lotto game with asymmetric budget uncertainty,” 2021 (under review).

K. Paarporn et al., “Asymmetric battlefield uncertainty in General Lotto games,” 2021 (under review).

Learning to swarm with knowledge-based neural ordinary differential equations

A recent paper by members of the DCIST alliance uses a deep learning method, knowledge-based neural ordinary differential equations (KNODE), to develop a data-driven approach for extracting single-robot controllers from observations of a swarm’s trajectory. The goal is to reproduce global swarm behavior using the extracted controller. Unlike previous works on imitation learning, this method does not require action data for training. The proposed method combines existing knowledge about the single-robot dynamics with learned components, and incorporates information decentralization, time delay, and obstacle avoidance into a general model for controlling each individual robot in a swarm. The decentralized information structure and homogeneity assumption further allow for scalable training, i.e., the training time grows linearly with the swarm size. The method was applied to two different flocking swarms, in 2D and 3D respectively, and successfully reproduced global swarm behavior using the learnt controllers. In addition to the learning method, the paper also proposes a novel application of proper orthogonal decomposition (POD) for evaluating the performance of a learnt controller. Furthermore, extensive analysis of hyperparameters is conducted to provide more insight into the properties and characteristics of the proposed method.
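The hybrid structure at the core of KNODE can be sketched as follows (an illustrative toy, with the neural network replaced by a hand-written residual function): the per-robot dynamics are the sum of a known physics term and a learned interaction term, integrated forward in time.

```python
import numpy as np

def known_dynamics(x):
    """Known part of the single-robot model (assumed here: damping)."""
    return -0.5 * x

def hybrid_step(x, residual, dt=0.1):
    """KNODE-style hybrid step: known physics plus a learned residual.
    `residual` stands in for the trained neural network."""
    return x + dt * (known_dynamics(x) + residual(x))

# Toy stand-in for the learned term: attraction toward the swarm
# centroid, the kind of interaction KNODE recovers from trajectories.
def residual(x):
    return 0.2 * (x.mean(axis=0, keepdims=True) - x)

x = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.0]])
for _ in range(50):
    x = hybrid_step(x, residual)
spread = np.linalg.norm(x - x.mean(axis=0), axis=1).max()
print(spread)  # robots contract toward the centroid
```

In the actual method the residual is a neural network trained so that simulated rollouts of this hybrid ODE match the observed swarm trajectories, with no access to the robots' control actions.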

Capability: T3C4C – Adaptive Swarm Behaviors for Uncertainty Mitigation (Hsieh)

Points of Contact: M. Ani Hsieh (PI) and Tom Z. Jiahao



Citation: T. Z. Jiahao, L. Pan, and M. A. Hsieh, “Learning to Swarm with Knowledge-Based Neural Ordinary Differential Equations,” arXiv preprint, December 2021.

GNN-based Coverage and Tracking in Heterogeneous Swarms

A recent paper by members of the DCIST alliance designs decentralized mechanisms for coverage control in heterogeneous multi-robot systems, especially when considering the robots’ limited sensing ranges and complex environments. This is part of the broader DCIST effort to design GNN-based control architectures that are built from the ground up to operate in harsh operational conditions, leveraging multi-hop communication to overcome local informational limitations. The work has three salient features: (1) it presents a model-informed learning solution that leverages relevant (model-based) aspects of the coverage task and propagates them through the network via communication among neighbors in the graph; (2) ablation studies explicitly demonstrate that the resulting policies automatically leverage inter-robot communication for improved performance; (3) the GNN-based coverage controller outperforms Lloyd’s algorithm under a wide range of training and testing conditions, demonstrating scalability and transferability.
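For reference, the Lloyd's algorithm baseline mentioned above can be sketched on a discretized environment (a standard textbook version, not the paper's implementation): each robot repeatedly claims the region of points nearest to it and moves to that region's importance-weighted centroid.

```python
import numpy as np

def lloyd_step(robots, grid, weights):
    """One Lloyd iteration: assign each environment point to its nearest
    robot, then move each robot to the weighted centroid of its cell."""
    d = np.linalg.norm(grid[:, None, :] - robots[None, :, :], axis=-1)
    owner = d.argmin(axis=1)            # Voronoi assignment
    new = robots.copy()
    for i in range(len(robots)):
        mask = owner == i
        if mask.any():
            w = weights[mask]
            new[i] = (grid[mask] * w[:, None]).sum(axis=0) / w.sum()
    return new

# Uniform importance density over a unit square, discretized on a grid.
xs = np.linspace(0, 1, 21)
grid = np.array([[x, y] for x in xs for y in xs])
weights = np.ones(len(grid))
robots = np.array([[0.1, 0.1], [0.12, 0.9]])
for _ in range(20):
    robots = lloyd_step(robots, grid, weights)
print(robots.round(2))
```

Lloyd's rule uses only each robot's own Voronoi cell; the GNN controller's advantage comes from additionally propagating information about the density and neighboring robots through multi-hop communication, which the purely local update above cannot exploit.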


Capability: T1C5 – Joint Resource Allocation in Perception-Action-Communication Loops

Points of Contact: Vijay Kumar (PI) and Walker Gosrich


Citation: Walker Gosrich, Siddharth Mayya, Rebecca Li, James Paulos, Mark Yim, Alejandro Ribeiro, and Vijay Kumar, “Coverage Control in Multi-Robot Systems via Graph Neural Networks,” arXiv preprint arXiv:2109.15278 (2021).

Learning Decentralized Controllers with Graph Neural Networks

A recent paper by members of the DCIST alliance develops a perception-action-communication loop framework using Vision-based Graph Aggregation and Inference (VGAI). This multi-agent decentralized learning-to-control framework maps raw visual observations to agent actions, aided by local communication among neighboring agents. The framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning, as well as swarm-level communication, local information aggregation and agent action inference, respectively. By jointly training the CNN and GNN, image features and communication messages are learned in conjunction to better address the specific task. The researchers use imitation learning to train the VGAI controller in an offline phase, relying on a centralized expert controller. This results in a learned VGAI controller that can be deployed in a distributed manner for online execution. Additionally, the controller exhibits good scaling properties, with training in smaller teams and application in larger teams. Through a multiagent flocking application, the researchers demonstrate that VGAI yields performance comparable to or better than other decentralized controllers, using only the visual input modality (even with visibility degradation) and without accessing precise location or motion state information.
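The GNN half of the cascade described above rests on a simple primitive that can be sketched directly (an illustrative version with placeholder CNN features, not the VGAI code): each communication round multiplies the agents' feature matrix by a graph shift operator, and the results of successive rounds are stacked for the action-inference layers.

```python
import numpy as np

def graph_aggregate(A, X, K):
    """K-hop graph aggregation: stack agent features mixed by powers of
    the row-normalized adjacency, the core GNN operation in the cascade."""
    deg = A.sum(axis=1, keepdims=True)
    S = A / np.maximum(deg, 1)          # row-normalized graph shift
    feats, Z = [X], X
    for _ in range(K):
        Z = S @ Z                        # one local communication exchange
        feats.append(Z)
    return np.concatenate(feats, axis=1)

# Placeholder for per-agent CNN outputs (one 4-dim feature per agent).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], float)  # ring of 5 agents
Z = graph_aggregate(A, X, K=2)
print(Z.shape)  # (5, 12)
```

Because each hop uses only neighbor-to-neighbor exchanges, the same learned weights apply regardless of team size, which is what gives the controller its scaling property from small training teams to larger deployments.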

Capability: T1C5: Joint Resource Allocation in Perception-Action-Communication Loops

Points of Contact: Zhangyang “Atlas” Wang and Ting-Kuei Hu


(also appears in ARL press release as no. 3)


Citation: T. K. Hu, F. Gama, T. Chen, W. Zheng, Z. Wang, A. Ribeiro, and B. M. Sadler, “Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks,” IEEE Transactions on Signal and Information Processing over Networks, 2021.