Resilience in Multi-robot Coordination

Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks

Robots leveraging neighboring opinions

In this work we extend the results of the Crowd Vetting paper (see the section later on this page) to dynamic scenarios. Robots perform random walks through an environment and gather trust information about one another as they interact. The challenge is that robots want to use this trust information to determine which of their teammates are legitimate and which are malicious, but each robot must interact with every other robot a sufficient number of times before it can do so. We speed up this trust estimation process with the Dynamic Crowd Vetting algorithm, which allows robots to incorporate second-hand information from trusted neighbors. This second-hand information accelerates the process because it lets a robot form opinions about teammates it has not directly interacted with often enough to judge on its own. We show that by leveraging this second-hand information, the time required for robots to correctly estimate the trustworthiness of all others remains constant as the network grows, whereas the time required without second-hand information scales logarithmically with the number of robots.
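To make the idea concrete, here is a minimal sketch of the second-hand aggregation step, assuming a simple running-mean trust estimate and a majority vote over neighbor opinions; the function names and thresholds are illustrative and the actual Dynamic Crowd Vetting algorithm differs in its details:

```python
TRUST_THRESHOLD = 0.5    # mean trust above this => legitimate (assumed value)
MIN_DIRECT_SAMPLES = 20  # interactions needed before forming a direct opinion (assumed)

def direct_opinion(observations):
    """Classify a peer from direct trust observations, or None if too few."""
    if len(observations) < MIN_DIRECT_SAMPLES:
        return None
    return sum(observations) / len(observations) > TRUST_THRESHOLD

def crowd_vetted_opinion(my_observations, trusted_neighbor_opinions):
    """Use a direct opinion when available; otherwise fall back on a
    majority vote over second-hand opinions from trusted neighbors."""
    own = direct_opinion(my_observations)
    if own is not None:
        return own
    votes = [op for op in trusted_neighbor_opinions if op is not None]
    if not votes:
        return None  # still undecided; keep interacting
    return sum(votes) > len(votes) / 2
```

The fallback is what keeps the estimation time flat as the team scales: a robot that has only met a peer a handful of times can still borrow the verdicts of neighbors who have met that peer often.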

 

Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots

 

Google Maps example

Robots sense the occurrence of an event of interest and relay their hypothesis as a binary measurement (0 means the event has not occurred; 1 means it has) to a centralized server that we call a Fusion Center (FC). The FC also extracts a trust value from each transmitting robot corresponding to the likelihood that the robot is legitimate. We develop two algorithms that use this information to improve the probability that the FC arrives at the correct hypothesis. The first, the Two-Stage Approach (2SA), uses the trust values in a first stage to decide which robots to trust; in the second stage, the measurements from trusted robots are used to make the final decision. The second, the Adversarial Generalized Likelihood Ratio Test (A-GLRT), uses the trust values and event measurements jointly to estimate both the trustworthiness of each robot and the more likely hypothesis. We apply these methods to a crowdsensing application, similar in spirit to Google Maps, under a spoofing attack, and demonstrate that both methods accurately predict the traffic conditions of road segments despite a large number of spoofed robots trying to force the FC to make the wrong decision.
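As a rough illustration of the 2SA logic, the sketch below thresholds trust values and then fuses the trusted robots' binary measurements by majority vote; the threshold value and fallback behavior are assumptions, not the paper's worst-case-optimal estimator:

```python
import numpy as np

def two_stage_decision(trust_values, measurements, trust_threshold=0.5):
    """Illustrative two-stage decision.

    Stage 1: keep robots whose trust value clears a threshold.
    Stage 2: majority vote over the trusted robots' binary measurements.
    """
    trust_values = np.asarray(trust_values, dtype=float)
    measurements = np.asarray(measurements, dtype=int)
    trusted = trust_values > trust_threshold        # stage 1: legitimacy estimate
    if not trusted.any():
        return 0                                    # fallback when no robot is trusted
    return int(measurements[trusted].mean() > 0.5)  # stage 2: fused hypothesis

# Example: two spoofed robots report 1 but carry low trust values
print(two_stage_decision([0.9, 0.8, 0.7, 0.2, 0.1], [0, 0, 0, 1, 1]))  # -> 0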

You may find a video summarizing the work here:

Exploiting_trust_video

 

 

Adaptive Malicious Robot Detection in Dynamic Topologies

 

Sites and Trust Observations

We consider a persistent surveillance task in which a team of robots must monitor an area of interest in the presence of malicious robots. The environment is discretized into sites, and the goal is for a desired proportion of the robot team to occupy each site at any given time. The robots estimate the proportion of legitimate robots at adjacent sites to develop a control strategy that drives the team toward the correct distribution. However, this estimation requires accurately detecting malicious robots, which should not be counted toward the distribution. We assume robots can assess the legitimacy of others by observing their behavior and gathering trust observations: stochastic values corresponding to the likelihood that a robot is legitimate or malicious. We develop control strategies that balance gathering enough observations of neighboring robots to estimate their legitimacy against satisfying the desired site proportions using those estimates. We achieve this objective even when the network is dynamic, meaning robots communicate with different robots at each time-step, robots can enter or leave the network at any time, and legitimate robots can become malicious at any time.
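A minimal sketch of the trust-estimation side is shown below, using a sliding window so the estimate can adapt when a legitimate robot turns malicious; the class name, thresholds, and window length are illustrative assumptions, not the paper's policy:

```python
from collections import defaultdict, deque

class TrustEstimator:
    """Sliding-window trust classifier (illustrative; parameters assumed)."""

    def __init__(self, threshold=0.5, min_samples=10, window=50):
        self.threshold = threshold
        self.min_samples = min_samples
        # bounded history lets the estimate adapt if a robot becomes malicious
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, robot_id, trust_value):
        """Record one stochastic trust observation in [0, 1]."""
        self.history[robot_id].append(trust_value)

    def is_legitimate(self, robot_id):
        obs = self.history[robot_id]
        if len(obs) < self.min_samples:
            return None  # not enough evidence yet
        return sum(obs) / len(obs) > self.threshold

    def legitimate_fraction(self, robot_ids):
        """Estimated proportion of legitimate robots among those seen at a site."""
        labels = [self.is_legitimate(r) for r in robot_ids]
        decided = [label for label in labels if label is not None]
        return sum(decided) / len(decided) if decided else None
```

The `min_samples` gate captures the trade-off in the text: a robot must linger near its neighbors long enough to classify them, yet still move often enough to satisfy the desired site proportions.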

You may find a video summarizing the work here:

CDC22_video

 

 

Multi-Robot Adversarial Resilience using Control Barrier Functions

 

experiment snapshot

In this work we design a controller that guarantees a multi-robot team maintains a communication network resilient to adversaries. The controller preserves this resilience as the team navigates an environment while also satisfying other objectives, such as avoiding collisions with each other and with obstacles. We model communication between robots as a function of distance, so that resilience, which requires a communication network with many connections between robots, can be expressed as a physical constraint. We analyze how this constraint can conflict with other physical safety constraints in scenarios where robots split around large obstacles or navigate narrow corridors. We also develop an alternative controller that treats resilience as a soft constraint, meaning the robots may sacrifice resilience momentarily if doing so helps them achieve another goal, such as continuing to make progress through the environment.
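The sketch below shows the basic mechanics of filtering a nominal control input through a single pairwise connectivity CBF for single-integrator dynamics; the closed-form projection stands in for the full quadratic program, and the function name and gains are assumptions rather than the paper's controller:

```python
import numpy as np

def cbf_filter(u_nom, p_i, p_j, comm_range, alpha=1.0):
    """Minimally modify a nominal velocity so robot i stays within
    communication range of robot j (single-integrator dynamics).

    h(p) = R^2 - ||p_i - p_j||^2 >= 0 encodes the connectivity constraint.
    The QP  min ||u - u_nom||^2  s.t.  dh/dt >= -alpha * h  has the
    closed-form projection below when there is a single linear constraint.
    """
    h = comm_range**2 - np.dot(p_i - p_j, p_i - p_j)
    a = -2.0 * (p_i - p_j)  # dh/dt = a . u, treating p_j as fixed
    b = -alpha * h          # CBF condition: a . u >= b
    if a @ u_nom >= b:
        return u_nom        # nominal control already satisfies the constraint
    return u_nom + ((b - a @ u_nom) / (a @ a)) * a

# Example: nominal control pulls robot i away from j near the range limit
u = cbf_filter(np.array([1.0, 0.0]), np.array([0.9, 0.0]),
               np.array([0.0, 0.0]), comm_range=1.0)
print(u)  # the velocity component moving away from j is attenuated
```

Treating resilience as a soft constraint, as in the alternative controller, would amount to adding a slack variable to this inequality and penalizing the slack in the objective, so the constraint can be violated briefly at a cost.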

You may find a video summarizing the work here:

rss22_video.mp4

This work was nominated for the Best Paper Award at the RSS 2022 Conference! Check out the award nominee presentation at RSS here:

https://youtu.be/A6rRCVtB2sM?t=2587

 

 

Providing Local Resilience to Vulnerable Areas in Robotic Networks

 

sample simulation

In this work we analyze communication networks to find vulnerabilities: areas where a robot could spread malicious information to a large portion of the team very quickly. We say a robot with this capability has high influence in the network. We develop a control law that provides resilience around vulnerable areas using only the robots local to those areas. In doing so we provide local resilience while reducing the reconfiguration required of the rest of the team, better preserving its ability to perform other tasks. We develop an algorithm that selects high-influence robots as candidates for local resilience, forming what we call the Vulnerable Set, and we prove that our control law provides the desired local resilience.
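As an illustration of the selection step, the sketch below uses a hop-count reachability score as a stand-in for the paper's Influence metric and flags high-influence robots that lack 2F+1 vertex-disjoint paths from some teammate; the scoring proxy, parameters, and selection criterion are assumptions:

```python
import networkx as nx

def influence_proxy(G, v, hops=2):
    """Hypothetical proxy for the Influence metric: the number of robots
    node v can reach within a few communication hops."""
    return len(nx.single_source_shortest_path_length(G, v, cutoff=hops)) - 1

def vulnerable_set(G, F, hops=2, top_k=3):
    """Rank robots by the influence proxy, then flag the top candidates that
    lack 2F+1 vertex-disjoint paths from some non-adjacent teammate."""
    candidates = sorted(G.nodes, key=lambda v: influence_proxy(G, v, hops),
                        reverse=True)[:top_k]
    vulnerable = set()
    for v in candidates:
        for u in G.nodes:
            if u != v and not G.has_edge(u, v):
                # local node connectivity between non-adjacent nodes equals the
                # maximum number of vertex-disjoint u-v paths (Menger's theorem)
                if nx.node_connectivity(G, u, v) < 2 * F + 1:
                    vulnerable.add(v)
                    break
    return vulnerable
```

Once the Vulnerable Set is identified, only the robots near those nodes need to reconfigure, which is what keeps the rest of the formation free to continue its task.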

You may find a video summarizing the work here:

icra22_video.mp4

 

 

Crowd Vetting: Rejecting Adversaries via Collaboration with Application to Multi-Robot Flocking

 

Resilient Multi-robot Coordination

 

An important aspect of multi-robot coordination is the ability to validate the information a team receives. Malicious robots can try to trick a team into failing by feeding it incorrect and purposely harmful information. One common attack is the spoofing attack, where a robot transmits information to the others under multiple false identities. In the REACT Lab, we use Wi-Fi communication as a sensor to validate information sent by robots and counter spoofing attacks.

This work deals with determining which robots in the network are malicious and which can be trusted, so that malicious information can be filtered out by ignoring it. We do this by sensing over the wireless channels to determine which robots may be involved in a spoofing attack. Ideally, every observation a robot makes about another should improve its confidence in whether or not to trust that robot. But how confident should a robot be in its perception of trustworthiness before it proceeds? And how long could such a process take? We address these questions in our recent work, where we develop an algorithm called FindSpoofedRobots that uses neighboring opinions to speed up the validation process.
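One standard way to formalize "how confident is confident enough" is a Hoeffding-style stopping rule, sketched below; this is an illustrative confidence test under assumed parameters, not the FindSpoofedRobots algorithm itself:

```python
import math

def confident_decision(observations, delta=0.05, threshold=0.5):
    """Decide trust only once a Hoeffding confidence bound separates the
    empirical mean trust from the decision threshold.

    observations: trust values in [0, 1] gathered about one robot
    delta: acceptable probability of making a wrong decision
    """
    n = len(observations)
    if n == 0:
        return None
    mean = sum(observations) / n
    # with probability >= 1 - delta, the true mean lies within eps of the estimate
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    if mean - eps > threshold:
        return True   # confidently legitimate
    if mean + eps < threshold:
        return False  # confidently spoofed/malicious
    return None       # keep observing, or consult trusted neighbors' opinions
```

The bound also answers the "how long" question: the number of observations needed before `eps` shrinks below the gap between the true mean and the threshold grows only logarithmically in 1/delta, and consulting neighbors' opinions shortens it further.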

You may also find a video summarizing the work here:

View Rejecting Adversaries via Collaboration in Multi-robot Flocking

 

 

Switching Topologies for Rejecting Spoofed Nodes in Multi-agent Consensus Based on Observations over the Wi-Fi Communication Channels (ICRA 2019)

switched-topologies-for-resilient-consensus-using-wi-fi-signals

  

Resilience in Multi-robot Coordination Publications

Michal Yemini, Angelia Nedić, Andrea J. Goldsmith, and Stephanie Gil. 2022. "Resilience to Malicious Activity in Distributed Optimization for Cyberphysical Systems." In IEEE Conference on Decision and Control (CDC).
Enhancing resilience in distributed networks in the face of malicious agents is an important problem for which many key theoretical results and applications require further development and characterization. This work focuses on the problem of distributed optimization in multi-agent cyberphysical systems, where a legitimate agent’s dynamic is influenced both by the values it receives from potentially malicious neighboring agents, and by its own self-serving target function. We develop a new algorithmic and analytical framework to achieve resilience for the class of problems where stochastic values of trust between agents exist and can be exploited. In this case we show that convergence to the true global optimal point can be recovered, both in mean and almost surely, even in the presence of malicious agents. Furthermore, we provide expected convergence rate guarantees in the form of upper bounds on the expected squared distance to the optimal value. Finally, we present numerical results that validate the analytical convergence guarantees we present in this paper even when the malicious agents compose the majority of agents in the network.
Matthew Cavorsi, Orhan Eren Akgün, Michal Yemini, Andrea Goldsmith, and Stephanie Gil. 2023. "Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots." In 2023 IEEE International Conference on Robotics and Automation (ICRA). London, UK.
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and trust observations to estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis simultaneously. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions on a mock-up road network similar in spirit to Google Maps, subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT respectively.
Matthew Cavorsi, Ninad Jadhav, David Saldaña, and Stephanie Gil. 2022. "Adaptive Malicious Robot Detection in Dynamic Topologies." In IEEE Conference on Decision and Control (CDC). Cancun, Mexico.
We consider a class of problems where robots gather observations of each other to assess the legitimacy of their peers. Previous works propose accurate detection of malicious robots when robots are able to extract observations of each other for a long enough time. However, they often consider static networks where the set of neighbors a robot observes remains the same. Mobile robots experience a dynamic set of neighbors as they move, making the acquisition of adequate observations more difficult. We design a stochastic policy that enables the robots to periodically gather observations of every other robot, while simultaneously satisfying a desired robot distribution over an environment modeled by sites. We show that with this policy, any pre-existing or new malicious robot in the network will be detected in a finite amount of time, which we minimize and also characterize. We derive bounds on the time needed to obtain the desired number of observations for a given topological map and validate these bounds in simulation. We also show and verify in a hardware experiment that the team is able to successfully detect malicious robots, and thus estimate the true distribution of cooperative robots per site, in order to converge to the desired robot distribution over sites.
Matthew Cavorsi and Stephanie Gil. 2022. "Providing Local Resilience to Vulnerable Areas in Robotic Networks." In IEEE International Conference on Robotics and Automation (ICRA), pp. 4929-4935. Philadelphia, PA.
We study how information flows through a multi-robot network in order to better understand how to provide resilience to malicious information. While the notion of global resilience is well studied, one way existing methods provide global resilience is by bringing robots closer together to improve the connectivity of the network. However, large changes in network structure can impede the team from performing other functions such as coverage, where the robots need to spread apart. Our goal is to mitigate the trade-off between resilience and network structure preservation by applying resilience locally in areas of the network where it is needed most. We introduce a metric, Influence, to identify vulnerable regions in the network requiring resilience. We design a control law targeting local resilience to the vulnerable areas by improving the connectivity of robots within these areas so that each robot has at least 2F+1 vertex-disjoint communication paths between itself and the high influence robot in the vulnerable area. We demonstrate the performance of our local resilience controller in simulation and in hardware by applying it to a coverage problem and comparing our results with an existing global resilience strategy. For the specific hardware experiments, we show that our control provides local resilience to vulnerable areas in the network while only requiring 9.90% and 15.14% deviations from the desired team formation compared to the global strategy.
Matthew Cavorsi, Beatrice Capelli, Lorenzo Sabattini, and Stephanie Gil. 2022. "Multi-Robot Adversarial Resilience using Control Barrier Functions." In Robotics: Science and Systems (RSS).
In this paper we present a control barrier function-based (CBF) resilience controller that provides resilience in a multi-robot network to adversaries. Previous approaches provide resilience by virtue of specific linear combinations of multiple control constraints. These combinations can be difficult to find and are sensitive to the addition of new constraints. Unlike previous approaches, the proposed CBF provides network resilience and is easily amenable to multiple other control constraints, such as collision and obstacle avoidance. The inclusion of such constraints is essential in order to implement a resilience controller on realistic robot platforms. We demonstrate the viability of the CBF-based resilience controller on real robotic systems through case studies on a multi-robot flocking problem in cluttered environments with the presence of adversarial robots.