Motivation

Security at major locations of economic or political importance is a key concern around the world, particularly given the threat of terrorism. Stackelberg games have received significant attention in recent years given their deployment for real-world security problems, such as critical infrastructure protection and robot patrolling strategies.

Many of these real-world systems, including ARMOR, IRIS and GUARDS, have adopted the standard game-theoretic assumption that the adversaries are perfectly rational and strictly maximize their expected utilities. However, this assumption may not hold when facing human adversaries with bounded rationality and may reduce the effectiveness of these systems as a result.

Our goal is to relax the perfect rationality assumption of human adversaries in real-world Stackelberg security games. In particular, we aim to develop new models of the adversary that incorporate their bounded rationality. At the same time, we want to build new algorithms for efficiently computing the optimal defender strategy for real-world security problems.

Approaches

There are several open questions we need to address in moving beyond perfect rationality assumptions. First, a large variety of alternative models have been studied in Behavioral Game Theory and Cognitive Psychology that capture some of the deviations of human decisions from perfect rationality. However, there is an important empirical question of which model best represents the salient features of human behavior in applied security contexts. Second, integrating any of the proposed models into a decision-support system (even for the purpose of empirical evaluation) requires developing new computational methods, since the existing algorithms for security games are based on perfectly rational attackers.

Third, many of these models imply mathematically complex representations of the adversary’s decision-making procedure (e.g., nonlinear and non-convex function forms), which in general leads to an NP-hard problem of computing the defender’s optimal strategy. Therefore, developing efficient algorithms to solve such a computationally complex problem is critical for real-world security problems due to their massive scale.

[Photo: Rong Yang presenting at IJCAI, July 2011 (Part 2)]

We work towards answering these open questions by first developing mathematical models of adversary decision making based on fundamental theories from the literature of related fields, such as Cognitive Science and Behavioral Economics. At the same time, we develop efficient algorithms to compute optimal strategies. We also conduct experiments with human subjects to evaluate the performance of both the adversary behavioral models and the corresponding defender strategies.

We have developed models of adversary behavior both for single-shot game domains, such as counter-terrorism, and for repeated Stackelberg security game (SSG) settings, such as wildlife poaching, which involve repeated interactions between defenders and adversaries. Assuming that attackers act independently in these domains, i.e., that they do not collude with each other to attack targets, we have developed several adversary behavior models, including QR, SUQR, Bayesian SUQR, Robust SUQR and SHARP. We have conducted extensive human subjects experiments on simulated games developed for these domains (see this page) to compare the performance of these behavioral models.
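To make this model family concrete, here is a minimal sketch of how a quantal response (QR) style model and its SUQR variant assign attack probabilities over targets. The feature weights and the rationality parameter lambda below are illustrative placeholders, not fitted parameters from our experiments.

```python
import math

def qr_probs(expected_utils, lam=1.0):
    """Quantal response: attack probability is proportional to
    exp(lam * expected utility). lam = 0 gives uniform random choice;
    as lam grows, the choice approaches a perfectly rational best response."""
    weights = [math.exp(lam * u) for u in expected_utils]
    total = sum(weights)
    return [w / total for w in weights]

def suqr_probs(coverage, rewards, penalties, w=(-9.0, 0.5, 0.2)):
    """SUQR: replace expected utility with a linear 'subjective utility'
    over target features (defender coverage probability, attacker reward,
    attacker penalty). The weight vector w is an illustrative placeholder."""
    w1, w2, w3 = w
    su = [w1 * x + w2 * r + w3 * p
          for x, r, p in zip(coverage, rewards, penalties)]
    weights = [math.exp(s) for s in su]
    total = sum(weights)
    return [wt / total for wt in weights]

# Three targets: probabilities sum to 1, and higher-utility targets
# are attacked more often (but not exclusively).
probs = qr_probs([2.0, 1.0, 0.0], lam=1.5)
```

Both models are "softmax" choice rules: the attacker favors, but does not deterministically pick, the most attractive target, which is what distinguishes them from the best-response assumption.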

From the data collected on human behavior in these games, we observed that: (a) human perception of probabilities is S-shaped in nature (see Fig. 1 on this page), contrary to the inverse-S shape commonly reported in the behavioral game theory literature, such as in Kahneman and Tversky's Nobel Prize-winning work on prospect theory; and (b) behavioral models such as SHARP, which capture the adaptive nature of human adversaries, perform better than other models (see Fig. 2 on this page). See this paper for more details about our experiments and findings.
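The S-shaped versus inverse-S-shaped distinction can be illustrated with a standard two-parameter probability weighting function (the Gonzalez-Wu form). A curvature parameter gamma > 1 produces an S-shape (small probabilities underweighted, large probabilities overweighted), while gamma < 1 produces the inverse-S shape of classic prospect theory. The parameter values here are illustrative, not our fitted estimates.

```python
def weight(p, gamma=2.0, delta=1.0):
    """Two-parameter probability weighting function:
    w(p) = delta * p^gamma / (delta * p^gamma + (1 - p)^gamma).
    gamma > 1 -> S-shaped curve; gamma < 1 -> inverse-S-shaped curve."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

# With gamma = 2 (S-shape): a 10% chance "feels" smaller than 10%,
# and a 90% chance "feels" larger than 90%.
low, high = weight(0.1), weight(0.9)
```

Fitting gamma to choice data is one way to test which shape better describes subjects' behavior in a given domain.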

However, attackers may also collude with each other while conducting attacks. Since such collusion can be more detrimental to the defender, she has an incentive to break it up by playing off the self-interest of individual adversaries. As our study shows, breaking up collusion is difficult given the bounded rationality of human adversaries; we therefore investigate defender algorithms assuming both rational and boundedly rational adversaries. Under the rational model, the defender's best strategy is an imbalanced resource allocation that places one adversary in an advantaged position, making it irrational for him to collude with the other, disadvantaged adversary; collusion then breaks down.

That approach assumes adversaries are utility maximizers; however, our human subject experiments showed that human adversaries do not necessarily maximize their utility. Rather, targets with higher expected utility are simply more likely to be chosen. By adopting this stochastic model of adversary choice, we were able to improve the performance of our security resource allocation algorithm against collusive adversaries. See this paper for more details about our models, experiments and findings.

Real-world Deployment

Models and algorithms based on the human behavior models developed in our group have been tested and deployed in various places. The Protection Assistant for Wildlife Security (PAWS) is an application initially developed with the goal of improving wildlife patrols. PAWS-based strategies have been tested in Queen Elizabeth National Park in Uganda in collaboration with Andrew Lemieux.

MaxImin Defense Against SUQR (MIDAS) is a scalable and robust algorithm for generating game-theoretic patrols against multiple boundedly rational adversaries. MIDAS has recently been tested in the Gulf of Mexico by the US Coast Guard (USCG). We have also successfully developed PROTECT, a software system that helps the USCG schedule patrols to protect ports. At the heart of PROTECT is the PASAQ algorithm, which assumes a quantal response model of adversary decision making.
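To illustrate the kind of optimization PASAQ solves (without its piecewise-linear approximation machinery), the sketch below brute-force searches over coverage allocations for a defender with one resource and two targets, scoring each allocation by the defender's expected utility against a quantal-responding attacker. All payoff numbers and the lambda value are illustrative, not from any deployed system.

```python
import math

def defender_eu(coverage, lam=1.0,
                att_reward=(5.0, 3.0), att_penalty=(-1.0, -1.0),
                def_reward=(2.0, 2.0), def_penalty=(-5.0, -3.0)):
    """Defender expected utility against a quantal-response attacker.
    Attacker's EU at target i: x_i * P_i + (1 - x_i) * R_i, where x_i is
    the coverage probability. Attack probabilities are proportional to
    exp(lam * attacker EU) -- a nonlinear function of the coverage, which
    is what makes the exact optimization hard at real-world scale."""
    att_eu = [x * p + (1 - x) * r
              for x, r, p in zip(coverage, att_reward, att_penalty)]
    q = [math.exp(lam * u) for u in att_eu]
    total = sum(q)
    q = [v / total for v in q]
    def_eu = [x * dr + (1 - x) * dp
              for x, dr, dp in zip(coverage, def_reward, def_penalty)]
    return sum(qi * ui for qi, ui in zip(q, def_eu))

# One defender resource split across two targets: grid-search x in [0, 1].
best_x = max((i / 100 for i in range(101)),
             key=lambda x: defender_eu((x, 1 - x)))
```

With these payoffs, target 0 is both more attractive to the attacker and more costly for the defender to lose, so the search shifts most of the coverage there; PASAQ reaches the same kind of solution at scale by approximating the nonlinear objective with piecewise-linear segments.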
