Three Decades of Deception Techniques in Active Cyber Defense -- Retrospect and Outlook
Deception techniques have been widely seen as a game changer in cyber
defense. In this paper, we review representative techniques in honeypots,
honeytokens, and moving target defense, spanning from the late 1980s to the
year 2021. Techniques from these three domains complement each other and
may be leveraged to build a holistic deception-based defense. However, to the
best of our knowledge, there has not been a work that provides a systematic
retrospect of these three domains all together and investigates their
integrated usage for orchestrated deceptions. Our paper aims to fill this gap.
By utilizing a tailored cyber kill chain model which can reflect the current
threat landscape and a four-layer deception stack, a two-dimensional taxonomy
is developed, based on which the deception techniques are classified. The
taxonomy answers two questions: which phases of a cyber attack campaign a
technique can disrupt, and which layers of the deception stack it belongs to.
Cyber defenders may use the taxonomy as a reference to design an organized and
comprehensive deception plan, or to prioritize deception efforts for a
budget-conscious solution. We also discuss two important points for achieving active
and resilient cyber defense, namely deception in depth and deception lifecycle,
where several notable proposals are illustrated. Finally, some outlooks on
future research directions are presented, including dynamic integration of
different deception techniques, quantified deception effects and deception
operation cost, hardware-supported deception techniques, as well as techniques
developed based on a better understanding of the human element.
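The two-dimensional classification described above can be pictured as a small lookup structure. The sketch below is illustrative only: the phase, layer, and technique names are invented assumptions, not the paper's actual taxonomy, and the code simply shows how a defender might query such a classification when planning coverage.

```python
# Hypothetical two-dimensional deception taxonomy: each technique is tagged
# with the kill-chain phases it can disrupt and the deception-stack layers
# it belongs to. All names below are illustrative assumptions.
taxonomy = {
    "honeypot":          {"layers": {"network", "system"},
                          "phases": {"reconnaissance", "lateral_movement"}},
    "honeytoken":        {"layers": {"data"},
                          "phases": {"lateral_movement", "exfiltration"}},
    "address_shuffling": {"layers": {"network"},
                          "phases": {"reconnaissance", "delivery"}},
}

def techniques_for(phase=None, layer=None):
    """Return technique names matching an attack phase and/or stack layer."""
    return sorted(
        name for name, cls in taxonomy.items()
        if (phase is None or phase in cls["phases"])
        and (layer is None or layer in cls["layers"])
    )

print(techniques_for(phase="reconnaissance"))  # → ['address_shuffling', 'honeypot']
print(techniques_for(layer="data"))            # → ['honeytoken']
```

A defender could walk every (phase, layer) cell of such a table to find gaps in coverage, or rank cells by threat relevance when budgeting deception efforts.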
A Praise for Defensive Programming: Leveraging Uncertainty for Effective Malware Mitigation
A promising avenue for improving the effectiveness of behavioral-based
malware detectors would be to combine fast traditional machine learning
detectors with high-accuracy, but time-consuming deep learning models. The main
idea would be to place software receiving borderline classifications by
traditional machine learning methods in an environment where uncertainty is
added, while software is analyzed by more time-consuming deep learning models.
The goal of uncertainty would be to rate-limit actions of potential malware
during the time consuming deep analysis. In this paper, we present a detailed
description of the analysis and implementation of CHAMELEON, a framework for
realizing this uncertain environment for Linux. CHAMELEON offers two
environments for software: (i) standard - for any software identified as benign
by conventional machine learning methods and (ii) uncertain - for software
receiving borderline classifications when analyzed by these conventional
machine learning methods. The uncertain environment adds obstacles to software
execution through random perturbations applied probabilistically on selected
system calls. We evaluated CHAMELEON with 113 applications and 100 malware
samples for Linux. Our results showed that at a 10% threshold, intrusive and
non-intrusive strategies caused approximately 65% of malware samples to fail
to accomplish their tasks, while approximately 30% of the analyzed benign
software experienced various levels of disruption. With a dynamic,
per-system-call threshold, CHAMELEON caused 92% of the malware to fail and only 10% of
the benign software to be disrupted. We also found that I/O-bound software was
three times more affected by uncertainty than CPU-bound software. Further, we
analyzed the logs of software that crashed under non-intrusive strategies and
found that some crashes were due to bugs in the software itself.
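The probabilistic-perturbation idea can be sketched in a few lines. This is a user-space mock-up, not CHAMELEON itself (which intercepts real Linux system calls); the call names, the two strategies, and the threshold semantics are assumptions for illustration.

```python
import random

# Hypothetical sketch of an "uncertain environment": selected system calls
# are perturbed probabilistically while deep analysis runs in the background.
PERTURBABLE = {"read", "write", "connect", "send"}  # "selected system calls"

def delay(call):          # non-intrusive strategy: slow the call down
    return ("delay", call)

def return_error(call):   # intrusive strategy: make the call fail
    return ("error", call)

def maybe_perturb(call, threshold, rng=random):
    """With probability `threshold`, apply a random perturbation strategy."""
    if call in PERTURBABLE and rng.random() < threshold:
        strategy = rng.choice([delay, return_error])
        return strategy(call)
    return ("pass", call)
```

Rate-limiting emerges because a borderline process sees a fraction of its I/O-heavy calls delayed or failed, which matches the finding that I/O-bound software is affected more than CPU-bound software.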
Deception in network defences using unpredictability
In this article, we propose a novel method that aims to improve upon existing moving-target defences by making them unpredictably reactive through probabilistic decision-making. We postulate that unpredictability can improve network defences in two key capacities: (1) by re-configuring the network in direct response to detected threats, tailored to the current threat and security posture, and (2) by deceiving adversaries using pseudo-random decision-making (selecting from a set of acceptable responses), potentially leading to adversary delay and failure. Decisions are performed automatically, based on reported events (e.g., Intrusion Detection System (IDS) alerts), security posture, mission processes, and the states of assets. Using this codified form of situational awareness, our system can respond differently to threats each time attacker activity is observed, acting as a barrier to further attacker activities. We demonstrate feasibility with both anomaly- and misuse-based detection alerts, on a historical dataset (playback) and in a real-time network simulation where asset-to-mission mappings are known. Our findings suggest that unpredictability shows promise as a new approach to deception in laboratory settings. Further research will be necessary to explore unpredictability in production environments.
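The core mechanism, selecting pseudo-randomly among the responses deemed acceptable for the current posture, can be sketched as follows. The alert types, postures, and response names here are invented for illustration, not taken from the article.

```python
import random

# Sketch of unpredictably reactive defence: identical alerts need not produce
# identical reactions, because the response is drawn pseudo-randomly from the
# set of acceptable responses for the current security posture.
ACCEPTABLE_RESPONSES = {
    ("port_scan", "normal"):   ["rotate_addresses", "deploy_honeypot", "rate_limit"],
    ("port_scan", "lockdown"): ["block_source", "rotate_addresses"],
    ("brute_force", "normal"): ["delay_auth", "deploy_honeypot"],
}

def respond(alert, posture, rng=random):
    """Pick an unpredictable but acceptable response for (alert, posture)."""
    options = ACCEPTABLE_RESPONSES.get((alert, posture))
    if not options:
        return "escalate_to_operator"   # no codified response for this state
    return rng.choice(options)
```

Restricting the draw to a pre-approved set is what keeps the defence safe for the mission while still denying the adversary a predictable reaction to model.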
Manipulating the Online Marketplace of Ideas
Social media, the modern marketplace of ideas, is vulnerable to manipulation.
Deceptive inauthentic actors impersonate humans to amplify misinformation and
influence public opinions. Little is known about the large-scale consequences
of such operations, due to the ethical challenges posed by online experiments
that manipulate human behavior. Here we introduce a model of information
spreading where agents prefer quality information but have limited attention.
We evaluate the impact of manipulation strategies aimed at degrading the
overall quality of the information ecosystem. The model reproduces empirical
patterns about amplification of low-quality information. We find that
infiltrating a critical fraction of the network is more damaging than
generating attention-grabbing content or targeting influentials. We discuss
countermeasures suggested by these insights to increase the resilience of
social media users to manipulation, and legal issues arising from regulations
aimed at protecting human speech from suppression by inauthentic actors.
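A toy version of this kind of model conveys the mechanism: agents prefer high-quality items but have bounded attention, while infiltrating bots inject low-quality content. The update rule, parameters, and scale below are simplified assumptions, not the authors' actual model.

```python
import random

def simulate(n_agents=50, attention=5, steps=500, bot_fraction=0.0, seed=0):
    """Toy information-spreading model: returns mean quality of items in feeds."""
    rng = random.Random(seed)
    feeds = [[] for _ in range(n_agents)]
    n_bots = int(n_agents * bot_fraction)   # agents 0..n_bots-1 are bots
    for _ in range(steps):
        agent = rng.randrange(n_agents)
        if agent < n_bots:
            shared = 0.0                    # bots inject low-quality items
        else:
            # humans prefer quality: reshare the best item they can see
            shared = max(feeds[agent] + [rng.random()])
        target = rng.randrange(n_agents)
        feeds[target].append(shared)
        del feeds[target][:-attention]      # limited attention: bounded feed
    seen = [q for feed in feeds for q in feed]
    return sum(seen) / len(seen) if seen else 0.0
```

Even in this crude sketch, raising `bot_fraction` drags the mean quality of circulating items down, mirroring the paper's finding that infiltrating a critical fraction of the network is the most damaging strategy.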
Veteran managers and adaptation in team leadership
Adaptation is of paramount importance in an ever-changing world. Work teams need to be
able to overcome the hurdles of changing environments and stressful situations if they
want to succeed. Arguably nowhere is this truer than in war, and as such, it is in the
best interests of military organisations to train their leaders to be adaptable and
resilient in the face of unpredictable life-and-death situations. This study follows the
IMOI model of Marks, Mathieu and Zaccaro (2001) and aims to compare work teams led by
military Veterans with those led by managers without military experience, to assess
whether the former are better at maintaining a high level of team work engagement,
developing better problem-solving competencies, and adapting to stressful situations
and, as a result, are more effective. The data were collected through an online survey
questionnaire with a sample of 49 teams (49 leaders and 169 subordinates), six of which
were led by Veterans, mostly in a consulting context. None of the proposed hypotheses
were supported, and no statistical significance was found in the mediation, moderation,
and moderated-mediation models used to test the relationships between the variables.
Quantifying the invisible audience in social networks
When you share content in an online social network, who is listening? Users have scarce information about who actually sees their content, making their audience seem invisible and difficult to estimate. However, understanding this invisible audience can impact both science and design, since perceived audiences influence content production and self-presentation online. In this paper, we combine survey and large-scale log data to examine how well users' perceptions of their audience match their actual audience on Facebook. We find that social media users consistently underestimate the audience size for their posts, guessing that their audience is just 27% of its true size. Qualitative coding of survey responses reveals folk theories that attempt to reverse-engineer audience size using feedback and friend count, though none of these approaches are particularly accurate. We analyze audience logs for 222,000 Facebook users' posts over the course of one month and find that publicly visible signals (friend count, likes, and comments) vary widely and do not strongly indicate the audience of a single post. Despite the variation, users typically reach 61% of their friends each month. Together, our results begin to reveal the invisible undercurrents of audience attention and behavior in online social networks. Authored by Michael S. Bernstein, Eytan Bakshy, Moira Burke and Brian Karrer.
Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges
As cloud computing environments become complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and Unobserved or Unobservable Risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-studied phenomenon in non-extinct prey animals, applying prey survivability to cloud computing directly is challenging due to contradicting end goals. How to manage evolving survivability goals and requirements under contradicting environmental conditions adds to the challenges. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives of cloud security challenges. In addition, it proposes TRIZ (Teoriya Resheniya Izobretatelskikh Zadach) to derive prey-inspired solutions through resolving contradictions. First, it develops a 3-step process to facilitate interdomain transfer of concepts from nature to cloud. Moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed to user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improve the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improve their overall survivability. Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, allowing the Pi-CCSF decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
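The notion of escalating survivability actions can be pictured as a tiered mapping from a VM's health to a response. The tier names, thresholds, and the [0, 1] vitality scale below are assumptions made for illustration, not the thesis's actual Pi-CCSF design.

```python
# Hypothetical escalation ladder: as a VM's vitality score degrades, the
# survivability action escalates from passive monitoring to termination.
ESCALATION_TIERS = [
    (0.8, "monitor"),            # healthy: passive observation only
    (0.5, "isolate_network"),    # degraded: contain the VM
    (0.2, "live_migrate"),       # compromised: move the workload away
    (0.0, "snapshot_and_kill"),  # critical: preserve evidence, terminate
]

def survivability_action(vitality):
    """Map a vitality score in [0, 1] to an escalating survivability action."""
    for threshold, action in ESCALATION_TIERS:
        if vitality >= threshold:
            return action
    return ESCALATION_TIERS[-1][1]
```

A target-based decision layer would sit in front of such a ladder, weighting the thresholds by survivability preferences and attitudes rather than applying them as fixed cut-offs.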