Dynamic Crowd Vetting: Collaborative Detection of Malicious Robots in Dynamic Communication Networks
In this work we extend the results of the Crowd Vetting paper (see the section later on this page) to dynamic scenarios. More specifically, robots perform random walks through an environment and gather information about one another's trustworthiness as they interact. The challenge is that robots want to use this information, called trust information, to determine which of their teammates are legitimate and which are malicious, but must interact with every other robot a sufficient number of times in order to do so. We speed up this trust-estimation process by designing the Dynamic Crowd Vetting algorithm, which allows robots to use second-hand information from trusted neighbors. This second-hand information speeds up the process because it lets a robot form opinions about teammates with which it has not directly interacted enough to decide based on its own information alone. We show that by leveraging this additional second-hand information, the time required for robots to correctly estimate the trustworthiness of all others remains constant as the number of robots in the network grows, whereas the approach without second-hand information scales logarithmically with the number of robots.
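The idea above can be illustrated with a minimal sketch: a robot falls back on a majority vote over opinions relayed by already-trusted neighbors whenever its own direct observations are too few. All function names, thresholds, and sample counts here are illustrative assumptions, not the algorithm from the paper.

```python
# Hypothetical sketch of second-hand trust fusion in the spirit of
# Dynamic Crowd Vetting; thresholds and names are assumptions.

TRUST_THRESHOLD = 0.5   # assumed cutoff: mean observation above -> legitimate
MIN_DIRECT_OBS = 10     # assumed number of direct interactions needed

def direct_opinion(observations):
    """Classify a robot from direct trust observations (1 = looked
    legitimate, 0 = looked malicious), or None if too few samples."""
    if len(observations) < MIN_DIRECT_OBS:
        return None
    return sum(observations) / len(observations) > TRUST_THRESHOLD

def crowd_opinion(neighbor_votes):
    """Fallback: majority vote among opinions relayed by already-trusted
    neighbors (True = that neighbor believes the target is legitimate)."""
    if not neighbor_votes:
        return None
    return sum(neighbor_votes) > len(neighbor_votes) / 2

def vet(observations, neighbor_votes):
    """Use direct evidence when sufficient, otherwise second-hand votes."""
    opinion = direct_opinion(observations)
    return opinion if opinion is not None else crowd_opinion(neighbor_votes)
```

The speed-up comes from the fallback path: a robot with only one or two direct interactions can still reach a verdict immediately from its trusted neighbors' votes.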
Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots
Robots sense the occurrence of an event of interest and relay their hypotheses as binary measurements (0 means the event has not occurred; 1 means it has) to a centralized server that we call a Fusion Center (FC). The FC also extracts a trust value from each transmitting robot corresponding to the likelihood that the robot is legitimate/trustworthy. We develop two algorithms that use this information to improve the probability that the FC arrives at the correct hypothesis. The first, called the Two-Stage Approach (2SA), uses the trust values in the first stage to decide which robots to trust and which not to trust; in the second stage, the measurements from trusted robots are used to make a final decision. The second, called the Adversarial Generalized Likelihood Ratio Test (A-GLRT), uses the trust values and event measurements jointly to decide both the trustworthiness of each robot and the more likely true hypothesis. We apply these methods to a crowdsensing application, such as Google Maps, under a spoofing attack, and demonstrate that both methods accurately predict the traffic conditions of road segments despite a large number of spoofed robots trying to force the FC to make the wrong decision.
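The two stages of the 2SA can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the cutoff value and function name are assumptions.

```python
# Illustrative sketch of the Two-Stage Approach (2SA): threshold the
# trust values, then majority-vote over the surviving measurements.

TRUST_CUTOFF = 0.5  # assumed: robots with trust value above this are kept

def two_stage_decision(trust_values, measurements):
    """Stage 1: keep robots whose trust value exceeds the cutoff.
    Stage 2: majority vote over the trusted robots' binary measurements
    (1 = event occurred, 0 = it did not)."""
    trusted = [m for t, m in zip(trust_values, measurements) if t > TRUST_CUTOFF]
    if not trusted:
        return None  # no robot trusted; the FC cannot decide
    return 1 if sum(trusted) > len(trusted) / 2 else 0
```

The A-GLRT differs in that it never discards low-trust measurements outright; it weighs trust and measurements together inside a likelihood ratio test.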
You may find a video summarizing the work here:
Adaptive Malicious Robot Detection in Dynamic Topologies
We consider a persistent surveillance task where a team of robots must monitor an area of interest in the presence of malicious robots. This is done by discretizing a given environment into sites and seeking a desired proportion of the overall robot team to occupy each site at any given time. The robots estimate the proportion of legitimate robots at adjacent sites to develop a control strategy that drives the team toward the correct distribution. However, this estimation requires the accurate detection of malicious robots, who should not be counted as contributing to the distribution. We assume robots can estimate the legitimacy of others by observing their behavior and gathering trust observations, which are stochastic values corresponding to a robot's likelihood of being legitimate or malicious. We develop control strategies that trade off gathering many observations of neighboring robots, in order to estimate their legitimacy, against satisfying the desired site proportions using those estimates. We achieve this objective even when the network of robots is dynamic, meaning robots communicate with different teammates at each time step, new robots can enter or leave the network at any time, and legitimate robots can turn malicious at any time.
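The estimation step described above can be sketched as follows: each trust observation is treated as a noisy sample in [0, 1], a robot is labelled legitimate if the running mean of its observations exceeds one half, and the site's legitimate fraction is the share of robots so labelled. The names and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch of estimating the proportion of legitimate robots at a
# site from stochastic trust observations; threshold is an assumption.

def classify(trust_obs):
    """trust_obs: list of stochastic trust observations in [0, 1] of one
    robot. Label it legitimate if the sample mean exceeds 0.5."""
    return sum(trust_obs) / len(trust_obs) > 0.5

def legitimate_fraction(site_observations):
    """Estimate the proportion of legitimate robots at a site, given a
    dict mapping robot id -> that robot's trust observations."""
    labels = [classify(obs) for obs in site_observations.values()]
    return sum(labels) / len(labels)
```

The trade-off in the paper arises because each additional observation sharpens `classify`, but time spent observing is time not spent redistributing toward the desired site proportions.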
You may find a video summarizing the work here:
Multi-Robot Adversarial Resilience using Control Barrier Functions
In this work we design a controller that provides a multi-robot team with a communication network guaranteed to be resilient to adversaries. The controller maintains this resilience as the team navigates an environment while satisfying other objectives and avoiding collisions with each other and with obstacles it may encounter. We model communication between robots as a function of distance, so that resilience, which requires a communication network with many connections between robots, can be expressed as a physical constraint. We analyze how this constraint can be at odds with other physical safety constraints in scenarios where robots split around large obstacles or navigate narrow corridors. We also develop an alternative controller that treats resilience as a soft constraint, meaning the robots may sacrifice resilience momentarily if doing so helps them achieve another goal, such as continuing to navigate through the environment.
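The control-barrier-function mechanism behind this can be shown with a one-dimensional toy. For a single integrator x' = u with barrier h(x) = x - x_min (safe when h >= 0), the CBF condition h' >= -alpha * h reduces to u >= -alpha * h, so the safety filter is a simple clamp on the nominal control. The multi-robot resilience constraint in the paper is a higher-dimensional analogue of this, typically solved as a quadratic program; the function below is only an illustrative toy.

```python
# Toy 1-D control-barrier-function safety filter. For x' = u with
# barrier h(x) = x - x_min, enforcing h' >= -alpha * h means u must
# satisfy u >= -alpha * h; the filter minimally modifies u_nom.

def cbf_filter(u_nom, x, x_min=0.0, alpha=1.0):
    """Return the control closest to u_nom that keeps x >= x_min."""
    h = x - x_min          # barrier value; safe set is h >= 0
    return max(u_nom, -alpha * h)
```

Treating resilience as a soft constraint corresponds to letting the controller occasionally violate the clamp when another objective (such as progress through a corridor) outweighs it.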
You may find a video summarizing the work here:
This work was nominated for the Best Paper Award at the RSS 2022 Conference! Check out the award nominee presentation at RSS here:
https://youtu.be/A6rRCVtB2sM?t=2587
Providing Local Resilience to Vulnerable Areas in Robotic Networks
In this work we analyze communication networks to find vulnerabilities. These vulnerabilities are areas where robots may be able to disperse malicious information to a large portion of the robot team very quickly. We say a robot with this capability has high influence in the network. We develop a control law that provides resilience around vulnerable areas in the network using only the robots local to those areas. In doing so we provide local resilience while reducing the reconfiguration the rest of the team must undergo, better supporting its ability to perform other tasks. We develop an algorithm that chooses high-influence robots as candidates for local resilience, called the Vulnerable Set, and then prove that our control law provides the desired local resilience.
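One simple way to picture "high influence" is by how many teammates a robot can reach within a few communication hops. The sketch below ranks robots this way to pick candidates; it is a hypothetical illustration, and the actual Vulnerable Set construction in the paper may differ.

```python
from collections import deque

# Hypothetical sketch: approximate a robot's influence as the number of
# teammates reachable within k hops of the communication graph, then
# take the top-ranked robots as candidates for local resilience.

def reach_within(graph, source, k):
    """Count nodes reachable from `source` in at most k hops (BFS).
    `graph` maps each node to a list of its neighbors."""
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return len(seen) - 1  # exclude the source itself

def candidate_set(graph, k, top):
    """Return the `top` highest-influence robots as candidates."""
    ranked = sorted(graph, key=lambda v: reach_within(graph, v, k), reverse=True)
    return ranked[:top]
```

In a star-shaped network, for instance, the hub reaches everyone in one hop while each leaf reaches only the hub, so the hub is the natural candidate.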
You may find a video summarizing the work here:
Crowd Vetting: Rejecting Adversaries via Collaboration with Application to Multi-Robot Flocking
An important aspect of multi-robot coordination is the ability to validate received information. Malicious robots can try to trick a team into failing by feeding it incorrect and purposely harmful information. One common attack is spoofing, where a malicious robot transmits information to the others under multiple false identities. In the REACT Lab, we use Wi-Fi communication as a sensor to validate information sent by robots and counter spoofing attacks.
This work deals with determining which robots in the network are malicious and which we can trust, so that we can filter out malicious information by ignoring it. This is done by using sensing over the wireless channels to determine which robots may or may not be involved in a spoofing attack. Ideally, every observation a robot makes about another should improve its confidence about whether or not to trust that robot, but the question becomes: how confident should a robot be in its perception of trustworthiness before it proceeds? And how long might such a process take? These questions are addressed in our recent work, where we develop an algorithm called FindSpoofedRobots that utilizes neighboring opinions to speed up the validation process.
You may also find a video summarizing the work here:
Switching Topologies for Rejecting Spoofed Nodes in Multi-agent Consensus Based on Observations over the Wi-Fi Communication Channels (ICRA 2019)