Motivation

Many of the most devastating recent cyber attacks have been carried out by an especially dangerous category of attackers known as Advanced Persistent Threats (APTs). APTs are particularly dangerous because they use deception tactics that are personalized, pervasive, and persistent. To leap ahead of attackers in the evolving landscape of cybersecurity, we must make a radical departure from ‘passive defense’ strategies toward ‘active defense’ strategies that match the capabilities of sophisticated APT attackers. We propose to develop active defense capabilities based on a new theory and science of deception for cyber defense that will match or exceed the capabilities of APT attackers. To achieve this, we will introduce new deception tactics that are themselves personalized, pervasive, and persistent, bringing together interdisciplinary research in cognitive modeling, computational game theory, and computer systems to develop transformative advances in the science of security. In doing so, we will create a cyber environment in which it is impossible for the attacker to determine what is real and what is deceptive: a new approach to cybersecurity we call ‘Cyber Inception’.

Cyber Deception through Active Leverage of Adversaries' Cognitive Processes

Personalized deception involves gathering information about a specific adversary, including their goals, knowledge, capabilities, limitations, biases, and decision-making processes. This information can then be used to develop targeted deceptions that are likely to be more effective than one-size-fits-all deceptions designed for an unknown opponent.

Pervasive deception relates to the scope and breadth of the deception strategies deployed. For example, a network with a single honeypot is very different from a network composed primarily of honeypots, with only a few real hosts lost in a sea of deceptive ones. Another aspect of pervasive deception that we will explore uses many different types of deceptions in multiple layers, as in layered security.

Persistent deception is about maintaining effective deception over long time horizons. APT attackers are persistent, in many cases maintaining a presence in systems and networks over years. Attackers are also likely to learn and adapt over time to the types of deceptions being deployed, gaining a sort of “immunity”, so our deceptive tactics must also evolve to keep pace and maintain effectiveness.

Our approach uses an innovative combination of adaptive cognitive modeling, cyber deceptive game theory, and deception and monitoring systems that operationalize our strategies:

Deception and Monitoring Systems: Ultimately, deceptive strategies developed through higher-level reasoning about the attacker must be realized in a system, in such a way that the deceptions are convincing and their effects on the attacker can be effectively monitored. The deceptiveness of an object can be affected by both intrinsic factors (e.g., its content) and extrinsic factors (e.g., where it is located), so a convincing campaign of deception must consider both.

Cyber Deceptive Game Theory: Game theory provides a mathematical framework for modeling the interactions between defenders and attackers in cybersecurity, which is an important foundation for developing a science of security. Developing game-theoretic models and algorithms for cybersecurity will enable richer modeling of adversarial interactions, a deeper understanding of deception and information manipulation tactics, and more effective response strategies.
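To make the game-theoretic framing concrete, the sketch below models a single interaction in a honeypot-deployment game. All payoff values, probabilities, and function names are illustrative assumptions, not part of the proposal: the attacker holds a belief about whether a host is real and attacks only when the expected payoff is positive, which shows why pervasive honeypot deployment can deter attacks.

```python
# Minimal sketch of an attacker's decision in a honeypot game.
# Payoffs (gain, loss) and beliefs (p_real) are illustrative assumptions.

def attacker_expected_value(p_real, gain=10.0, loss=5.0):
    """Expected payoff of attacking a host believed real with probability p_real.

    gain: payoff for compromising a real host.
    loss: cost of detection when the host turns out to be a honeypot.
    """
    return p_real * gain - (1.0 - p_real) * loss

def attacker_best_response(p_real, gain=10.0, loss=5.0):
    """A rational attacker attacks only when the expected value is positive."""
    if attacker_expected_value(p_real, gain, loss) > 0:
        return "attack"
    return "withdraw"

# With sparse honeypots, attacking pays; in a honeypot-dense network it does not.
print(attacker_best_response(0.9))  # most hosts real
print(attacker_best_response(0.2))  # most hosts are honeypots
```

The defender's side of such a game is to choose the honeypot density (and the signals hosts emit) so that the attacker's belief `p_real` is driven low enough to make withdrawal the best response.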

Cognitive Modeling: Cognitive models provide a computational representation of human cognitive processes, their detailed mechanisms and limitations, and the knowledge upon which they operate. Taking advantage of human boundedly rational decision behavior -where humans make decisions according to the constraints on the environment of their own cognitive limitations- we can build a personalized model of adversary behavior.
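As a minimal illustration of boundedly rational choice, the sketch below uses a softmax (Luce) choice rule, a standard component of cognitive architectures such as ACT-R; the option values, temperature parameter, and function name are illustrative assumptions rather than the proposal's actual model. A noisy, temperature-controlled choice rule captures how an adversary's decisions deviate predictably from strict payoff maximization, which is exactly the kind of regularity a personalized adversary model can exploit.

```python
import math

# Minimal sketch of a boundedly rational choice rule (softmax / Luce rule).
# Values and temperature are illustrative assumptions.

def softmax_choice_probs(values, temperature=1.0):
    """Probability of choosing each option given its experienced value.

    Lower temperature -> choices closer to strict maximization;
    higher temperature -> noisier, more exploratory behavior.
    """
    # Subtract the max value for numerical stability before exponentiating.
    m = max(values)
    exps = [math.exp((v - m) / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# An adversary weighing two attack paths whose experienced payoffs differ:
# the better path is preferred, but the worse one is still chosen sometimes.
probs = softmax_choice_probs([2.0, 1.0], temperature=1.0)
```

Fitting the temperature (and the remembered values) to an individual attacker's observed actions is one route to the personalized adversary models described above.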
