Resilience in Multi-robot Coordination

Submitted
Matthew Cavorsi, Beatrice Capelli, Lorenzo Sabattini, and Stephanie Gil. Submitted. “Multi-Robot Adversarial Resilience using Control Barrier Functions.”

In this paper we present a control barrier function (CBF)-based resilience controller that provides a multi-robot network with resilience to adversaries. Previous approaches provide resilience by virtue of specific linear combinations of multiple control constraints. These combinations can be difficult to find and are sensitive to the addition of new constraints. Unlike previous approaches, the proposed CBF provides network resilience and is easily amenable to multiple other control constraints, such as collision and obstacle avoidance. The inclusion of such constraints is essential in order to implement a resilience controller on realistic robot platforms. We demonstrate the viability of the CBF-based resilience controller on real robotic systems through case studies on a multi-robot flocking problem in cluttered environments with the presence of adversarial robots.
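As a rough illustration of the constraint-stacking idea in this abstract (not the paper's actual controller), the sketch below filters a nominal control input through a list of linear CBF conditions via a quadratic program; the barrier values, gradients, and gains are hypothetical placeholders.

```python
# Minimal sketch: a CBF quadratic-program filter. Each constraint is a tuple
# (grad_h, h, alpha) encoding the condition grad_h @ u >= -alpha * h, which
# keeps its barrier h nonnegative. All numbers below are illustrative.
import numpy as np
import cvxpy as cp

def cbf_qp(u_nom, cbf_constraints):
    """Minimally modify the nominal input u_nom so all CBF conditions hold."""
    u = cp.Variable(len(u_nom))
    cons = [g @ u >= -alpha * h for (g, h, alpha) in cbf_constraints]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), cons).solve()
    return u.value

u_nominal = np.array([1.0, 0.5])          # nominal flocking input (hypothetical)
stack = [
    (np.array([1.0, 0.0]), 0.8, 1.0),     # hypothetical resilience barrier
    (np.array([0.0, 1.0]), 0.3, 2.0),     # hypothetical collision barrier
]
print(cbf_qp(u_nominal, stack))
```

Adding another safety constraint, such as obstacle avoidance, is just another entry in the list, which is the amenability the abstract refers to.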

2023
Matthew Cavorsi, Orhan Eren Akgün, Michal Yemini, Andrea Goldsmith, and Stephanie Gil. 2023. “Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots.” In 2023 IEEE International Conference on Robotics and Automation (ICRA). London, UK.
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and trust observations to estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis simultaneously. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions on a mock-up road network similar in spirit to Google Maps, subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT respectively.
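For intuition only, here is a toy two-stage decision rule in the spirit of the abstract (not the actual 2SA or A-GLRT): the fusion center thresholds averaged trust observations to estimate which robots are legitimate, then fuses one-shot reports from the trusted subset. The threshold tau and the trust model are assumptions.

```python
# Toy sketch of trust-based fusion; thresholds and trust model are assumed.
import numpy as np

def fused_decision(reports, trust_obs, tau=0.5):
    """reports: (n,) one-shot binary measurements; trust_obs: (n, T) stochastic
    trust values in [0, 1]; tau is a hypothetical legitimacy threshold."""
    trusted = trust_obs.mean(axis=1) > tau        # stage 1: estimate legitimacy
    if not trusted.any():
        return None                               # no robot deemed trustworthy
    return int(reports[trusted].mean() > 0.5)     # stage 2: fuse trusted reports

rng = np.random.default_rng(0)
reports = np.array([1, 1, 0, 0, 0])               # 2 legitimate, 3 malicious
trust = np.vstack([rng.uniform(0.6, 1.0, (2, 20)),   # legitimate: high trust
                   rng.uniform(0.0, 0.4, (3, 20))])  # malicious: low trust
print(fused_decision(reports, trust))             # 1, despite a malicious majority
```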
2022
Matthew Cavorsi, Ninad Jadhav, David Saldaña, and Stephanie Gil. 2022. “Adaptive malicious robot detection in dynamic topologies.” In IEEE Conference on Decision and Control (CDC). Cancun, Mexico.
We consider a class of problems where robots gather observations of each other to assess the legitimacy of their peers. Previous works propose accurate detection of malicious robots when robots are able to extract observations of each other for a long enough time. However, they often consider static networks where the set of neighbors a robot observes remains the same. Mobile robots experience a dynamic set of neighbors as they move, making the acquisition of adequate observations more difficult. We design a stochastic policy that enables the robots to periodically gather observations of every other robot, while simultaneously satisfying a desired robot distribution over an environment modeled by sites. We show that with this policy, any pre-existing or new malicious robot in the network will be detected in a finite amount of time, which we minimize and also characterize. We derive bounds on the time needed to obtain the desired number of observations for a given topological map and validate these bounds in simulation. We also show and verify in a hardware experiment that the team is able to successfully detect malicious robots, and thus estimate the true distribution of cooperative robots per site, in order to converge to the desired robot distribution over sites.
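One standard way to realize a stochastic policy whose long-run site occupancy matches a desired distribution is a Metropolis-Hastings random walk over the topological map; the sketch below is illustrative and not necessarily the policy designed in the paper.

```python
# Sketch: Metropolis-Hastings transitions over sites whose stationary
# distribution equals a desired distribution pi (illustrative assumption).
import numpy as np

def mh_step(site, neighbors, pi, rng):
    """Propose a uniformly random neighboring site; accept with the Metropolis
    ratio so that long-run occupancy converges to pi."""
    proposal = rng.choice(neighbors[site])
    ratio = (pi[proposal] / pi[site]) * (len(neighbors[site]) / len(neighbors[proposal]))
    return proposal if rng.random() < min(1.0, ratio) else site

neighbors = {0: [1], 1: [0, 2], 2: [1]}           # three sites on a line graph
pi = np.array([0.5, 0.3, 0.2])                    # desired robot distribution
rng = np.random.default_rng(1)
site, visits = 0, np.zeros(3)
for _ in range(20000):
    site = mh_step(site, neighbors, pi, rng)
    visits[site] += 1
print(visits / visits.sum())                      # approximately pi
```

Because every robot keeps moving between sites under such a policy, each robot's neighbor set keeps changing, which is what lets the team periodically observe every peer.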
Matthew Cavorsi, Beatrice Capelli, Lorenzo Sabattini, and Stephanie Gil. 2022. “Multi-robot adversarial resilience using control barrier functions.” In Robotics: Science and Systems (RSS) Conference.

In this paper we present a control barrier function (CBF)-based resilience controller that provides a multi-robot network with resilience to adversaries. Previous approaches provide resilience by virtue of specific linear combinations of multiple control constraints. These combinations can be difficult to find and are sensitive to the addition of new constraints. Unlike previous approaches, the proposed CBF provides network resilience and is easily amenable to multiple other control constraints, such as collision and obstacle avoidance. The inclusion of such constraints is essential in order to implement a resilience controller on realistic robot platforms. We demonstrate the viability of the CBF-based resilience controller on real robotic systems through case studies on a multi-robot flocking problem in cluttered environments with the presence of adversarial robots.

Matthew Cavorsi and Stephanie Gil. 2022. “Providing local resilience to vulnerable areas in robotic networks.” In IEEE International Conference on Robotics and Automation (ICRA), pp. 4929-4935. Philadelphia, PA.
We study how information flows through a multi-robot network in order to better understand how to provide resilience to malicious information. While the notion of global resilience is well studied, one way existing methods provide global resilience is by bringing robots closer together to improve the connectivity of the network. However, large changes in network structure can impede the team from performing other functions, such as coverage, where the robots need to spread apart. Our goal is to mitigate the trade-off between resilience and network structure preservation by applying resilience locally, in areas of the network where it is needed most. We introduce a metric, Influence, to identify vulnerable regions in the network requiring resilience. We design a control law targeting local resilience to the vulnerable areas by improving the connectivity of robots within these areas, so that each robot has at least 2F + 1 vertex-disjoint communication paths between itself and the high-influence robot in the vulnerable area. We demonstrate the performance of our local resilience controller in simulation and in hardware by applying it to a coverage problem and comparing our results with an existing global resilience strategy. For the specific hardware experiments, we show that our control provides local resilience to vulnerable areas in the network while only requiring 9.90% and 15.14% deviations from the desired team formation compared to the global strategy.
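The 2F + 1 vertex-disjoint-path condition can be checked directly with off-the-shelf graph tools: by Menger's theorem, the number of vertex-disjoint paths between two non-adjacent vertices equals their local node connectivity. The sketch below (using networkx; the graphs and the helper name are illustrative, not the paper's controller) verifies the condition a local resilience controller would aim to enforce.

```python
# Sketch: check whether every robot has >= 2F + 1 vertex-disjoint paths to a
# designated high-influence "hub" robot. Graphs and names are illustrative.
import networkx as nx

def locally_resilient(G, hub, F):
    """Menger's theorem: local node connectivity between two non-adjacent
    vertices equals the number of vertex-disjoint paths between them."""
    return all(nx.node_connectivity(G, v, hub) >= 2 * F + 1
               for v in G.nodes if v != hub and not G.has_edge(v, hub))

ring = nx.cycle_graph(8)                       # sparse ring: 2 disjoint paths
print(locally_resilient(ring, hub=0, F=1))     # False: 2 < 2F + 1 = 3
dense = nx.circulant_graph(8, [1, 2])          # each robot also links 2 hops out
print(locally_resilient(dense, hub=0, F=1))    # True: 4 disjoint paths suffice
```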
Michal Yemini, Angelia Nedić, Andrea J. Goldsmith, and Stephanie Gil. 2022. “Resilience to Malicious Activity in Distributed Optimization for Cyberphysical Systems.” In IEEE Conference on Decision and Control.
Enhancing resilience in distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent’s dynamics are influenced both by the values it receives from potentially malicious neighboring agents and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate the analytical convergence guarantees, even when the malicious agents constitute the majority of agents in the network.
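As a toy illustration of trust-exploiting distributed optimization (assumed dynamics and trust values, not the paper's algorithm), the sketch below gates the consensus mixing weights by accumulated trust scores so that legitimate agents stop mixing with distrusted neighbors, then takes local gradient steps on each agent's own target function.

```python
# Sketch: trust-gated distributed gradient descent. Trust scores, thresholds,
# and the attack model below are hypothetical.
import numpy as np

def trust_gated_step(x, grads, trust, eta=0.1, tau=0.5):
    """x: (n,) agent states; grads: local gradient functions; trust: (n, n)
    accumulated trust scores in [0, 1]. Mix only with trusted neighbors."""
    W = (trust > tau).astype(float)               # gate out distrusted links
    np.fill_diagonal(W, 1.0)
    W /= W.sum(axis=1, keepdims=True)             # row-stochastic mixing weights
    return W @ x - eta * np.array([g(xi) for g, xi in zip(grads, x)])

# Two legitimate agents (local minimizers 1 and 3, so the optimum of the sum
# is 2) and one malicious agent that keeps broadcasting a bad value.
grads = [lambda z: z - 1.0, lambda z: z - 3.0, lambda z: 0.0]
trust = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.1],
                  [0.9, 0.9, 1.0]])
x = np.array([0.0, 5.0, 100.0])
for _ in range(300):
    x = trust_gated_step(x, grads, trust)
    x[2] = 100.0                                  # the attacker never cooperates
print(x[:2])                                      # both legitimate agents near 2
```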
2021
Frederik Mallmann-Trenn, Matthew Cavorsi, and Stephanie Gil. 2021. “Crowd Vetting: Rejecting Adversaries via Collaboration--with Application to Multi-Robot Flocking.” IEEE Transactions on Robotics.
We characterize the advantage of using a robot's neighborhood to find and eliminate adversarial robots in the presence of a Sybil attack. We show that by leveraging the opinions of its neighbors on the trustworthiness of transmitted data, robots can detect adversaries with high probability. We characterize the number of communication rounds required to achieve this result as a function of the communication quality and the proportion of legitimate to malicious robots. This result enables increased resiliency of many multi-robot algorithms. Because our results are finite time and not asymptotic, they are particularly well-suited for problems with a time-critical nature. We develop two algorithms, FindSpoofedRobots, which determines trusted neighbors with high probability, and FindResilientAdjacencyMatrix, which enables distributed computation of graph properties in an adversarial setting. We apply our methods to a flocking problem where a team of robots must track a moving target in the presence of adversarial robots. We show that by using our algorithms, the team of robots is able to maintain tracking of the moving target.
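A cartoon of the crowd-vetting idea (hypothetical thresholds, trust model, and aggregation rule, not FindSpoofedRobots itself): each robot averages its stochastic trust observations of a peer, and the team accepts the peer only when a majority of those individual verdicts agree.

```python
# Toy sketch of neighborhood-based vetting; all parameters are assumed.
import numpy as np

def crowd_vet(opinions):
    """opinions[i, j, t] in [0, 1]: robot i's t-th trust observation of robot j.
    Returns the team's majority verdict on each robot's legitimacy."""
    verdicts = opinions.mean(axis=2) > 0.5        # each robot's own view of peers
    return verdicts.mean(axis=0) > 0.5            # accept j if most robots agree

rng = np.random.default_rng(2)
n, T = 7, 30
legit = np.array([1, 1, 1, 1, 1, 0, 0], dtype=bool)  # robots 5, 6 are spoofed
opinions = np.where(legit[None, :, None],
                    rng.uniform(0.5, 1.0, (n, n, T)),    # observing legit peers
                    rng.uniform(0.0, 0.5, (n, n, T)))    # observing spoofed peers
print(crowd_vet(opinions))                        # spoofed robots come out False
```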