[review#27] Survey of Cyber Moving Targets, Second Edition (2018)
This report is the result of studies performed at Lincoln Laboratory, a federally funded research and development center operated by Massachusetts Institute of Technology.
In this article, we first provide a thorough analysis of the threats in the cloud–edge–terminal network. Then, we conduct a comprehensive survey of the concept, design principles, and main classifications of MTD. Finally, we introduce the development potential of AI-powered MTD at each network layer.
In this paper, we propose the concept of a deception attack surface to illustrate deception-based moving target defense. Moreover, we propose a quantitative method to measure deception, built on two core concepts: the exposed falseness degree and the hidden truth degree.
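A minimal sketch of how these two metrics might be computed, assuming simple set-ratio definitions; the function names mirror the abstract's terms, but the formulas are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch of the two deception metrics named in the abstract.
# The ratio-based formulas are assumptions for illustration only.

def exposed_falseness_degree(exposed_fake: set, exposed_all: set) -> float:
    """Fraction of the attacker-visible attack surface that is fake."""
    return len(exposed_fake) / len(exposed_all) if exposed_all else 0.0

def hidden_truth_degree(hidden_real: set, real_all: set) -> float:
    """Fraction of the real attack surface kept hidden from the attacker."""
    return len(hidden_real) / len(real_all) if real_all else 0.0

# Example: 3 of 5 exposed services are decoys; 4 of 6 real services are hidden.
exposed = {"ssh", "http", "ftp-decoy", "smb-decoy", "rdp-decoy"}
fakes = {"ftp-decoy", "smb-decoy", "rdp-decoy"}
real = {"ssh", "http", "db", "nfs", "ldap", "smtp"}
hidden = real - exposed

print(exposed_falseness_degree(fakes, exposed))  # 0.6
print(hidden_truth_degree(hidden, real))         # ~0.667
```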
This paper presents a proactive network reconnaissance defense mechanism based on the temporal randomization of network IP addresses, MAC addresses and port numbers.
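A toy sketch of such temporal randomization, assuming a controller that periodically draws fresh bindings from address and port pools. The apply_binding hook is hypothetical; in a real deployment it would install SDN flow rules or NAT mappings.

```python
# Minimal sketch of temporal network-address randomization: every epoch,
# each protected host receives a fresh random IP, MAC, and port. The
# enforcement mechanism is abstracted away.

import random
import time

IP_POOL = [f"10.0.0.{i}" for i in range(2, 255)]
PORT_POOL = range(20000, 60000)

def random_mac() -> str:
    # Locally administered unicast MAC (first octet 0x02).
    octets = [0x02] + [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

def rebind(host: str) -> dict:
    binding = {
        "host": host,
        "ip": random.choice(IP_POOL),
        "mac": random_mac(),
        "port": random.choice(PORT_POOL),
    }
    # apply_binding(binding)  # hypothetical hook: push to SDN/NAT (not shown)
    return binding

def randomization_loop(hosts, epoch_seconds=30, epochs=3):
    for _ in range(epochs):
        for h in hosts:
            print(rebind(h))
        time.sleep(epoch_seconds)

randomization_loop(["web01", "db01"], epoch_seconds=1)
```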
The chapters in this book present a range of MTD challenges and promising solution paths based on game-theoretic approaches, network-based cyber maneuver, and software transformations.
By designing a cost-effective shuffling algorithm based on a trilateral game, we obtain the best MTD strategy and strike a balance between defense effectiveness and shuffling cost in a given shuffling scenario.
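The paper's trilateral-game algorithm is not reproduced here; the toy sketch below only illustrates the underlying cost-effectiveness trade-off, with a defender choosing the shuffling period that best balances expected loss against shuffling cost. All parameters and the payoff model are assumptions for illustration.

```python
# Toy defender picks the shuffling period that maximizes
# (negative expected loss - shuffle cost), a common trade-off in
# shuffling-based MTD. Short periods shuffle often (high cost);
# long periods give the attacker more time to probe (high loss).

def attacker_success_prob(period: float, probe_rate: float = 0.05) -> float:
    """Longer shuffle periods allow more probes before the next shuffle."""
    return min(1.0, probe_rate * period)

def defender_payoff(period: float, asset_value=100.0,
                    cost_per_shuffle=5.0, horizon=60.0) -> float:
    shuffles = horizon / period
    expected_loss = asset_value * attacker_success_prob(period)
    return -expected_loss - cost_per_shuffle * shuffles

candidates = [1, 2, 5, 10, 20, 30]
best = max(candidates, key=defender_payoff)
print(best, defender_payoff(best))  # 10 -80.0: interior optimum
```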
This study presents the basic concepts of MTD and game theory and then reviews the literature on game-theoretic MTD decision-making methods along the dimensions of space, time, space–time, and bounded rationality.
Our work is aimed at better understanding the behavior of agents in settings where their privacy concerns are explicitly given. We consider a toy setting where agent A, in an attempt to discover the secret type of agent B, offers B a gift that one type of B agent likes and the other type dislikes.
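A small sketch of the inference this toy setting enables, under assumed acceptance probabilities (not the paper's model): observing whether B accepts the gift lets A update a Bayesian posterior over B's type.

```python
# Bayesian update in the gift game: type-1 agents like the gift and
# accept it often; type-2 agents dislike it and rarely accept.
# The acceptance probabilities below are illustrative assumptions.

def posterior_type1(prior_type1: float, p_accept_type1: float,
                    p_accept_type2: float) -> float:
    """Pr[B is type 1 | B accepted the gift], by Bayes' rule."""
    num = prior_type1 * p_accept_type1
    den = num + (1 - prior_type1) * p_accept_type2
    return num / den

# Uniform prior; type 1 accepts w.p. 0.9, type 2 w.p. 0.2.
print(posterior_type1(0.5, 0.9, 0.2))  # ~0.818: acceptance leaks B's type
```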
In this work, we propose a new, general way of modeling privacy in players’ utility functions. Specifically, we only assume that if an outcome o has the property that any report of player i would have led to o with approximately the same probability, then o has a small privacy cost to player i.
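One plausible (assumed) formalization of this idea: score an outcome by the worst-case log-ratio of its probability across player i's possible reports, so an outcome that is near-equally likely under every report carries near-zero privacy cost.

```python
# Assumed privacy-cost measure, not the paper's exact definition:
# if Pr[o | report] is nearly constant over all of player i's reports,
# o reveals little about i's report, so the cost is near zero.

import math

def privacy_cost(outcome_probs_by_report: dict) -> float:
    """Max log-ratio of Pr[o | report] over report pairs; 0 = perfect hiding."""
    probs = list(outcome_probs_by_report.values())
    return math.log(max(probs) / min(probs))

# Outcome o is (almost) equally likely whatever i reports -> small cost.
print(privacy_cost({"report_a": 0.50, "report_b": 0.52}))  # ~0.039
print(privacy_cost({"report_a": 0.10, "report_b": 0.90}))  # ~2.197
```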
In the area of privacy-preserving data mining, a differentially private mechanism intuitively encourages people to share their data because they are at little risk of revealing their own information.
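For concreteness, the standard Laplace mechanism (the canonical differentially private release, not specific to this paper) illustrates the point: calibrating noise to query sensitivity divided by epsilon bounds how much any one record can shift the output distribution, which is the "little risk" the passage refers to.

```python
# Laplace mechanism: release query_result + Laplace(sensitivity/epsilon).

import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale): exponential draw with a random sign flip.
    noise = random.expovariate(1 / scale)
    if random.random() < 0.5:
        noise = -noise
    return true_value + noise

# Example: release a count query (sensitivity 1) with epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```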