120,056 research outputs found

    Architectural Style: Distortions for Deploying and Managing Deception Technologies in Software Systems

    Deception technologies are software tools that simulate or dissimulate information as a security measure in software systems. Such tools can help prevent, detect, and correct security threats in the systems they are integrated with. Despite the continued existence and use of these technologies for roughly two decades, the process for integrating them into software systems remains undocumented, largely because deception technologies vary greatly from one another in a number of ways. To begin the process of documentation, I propose an architectural style that describes one possible way deception technologies may be integrated into software systems. To develop this architectural style, I performed a literature review on deception technologies and on the art of deception as a discipline. I break down how deception technologies work according to the art of deception, through the simulation and dissimulation of software components. I then examine existing deception technologies and categorize them according to their simulations and dissimulations. The proposed architectural style documents how software systems deploy and manage deceptions. Finally, I propose a number of future research opportunities surrounding this subject.

    Man vs. machine: Investigating the effects of adversarial system use on end-user behavior in automated deception detection interviews

    Deception is an inevitable component of human interaction. Researchers and practitioners are developing information systems to aid in the detection of deceptive communication. Information systems are typically adopted by end users to aid in completing a goal or objective (e.g., increasing the efficiency of a business process). However, end-user interactions with deception detection systems (adversarial systems) are unique because the goals of the system and the user are orthogonal. Prior work investigating systems-based deception detection has focused on the identification of reliable deception indicators. This research extends extant work by looking at how users of deception detection systems alter their behavior in response to the presence of guilty knowledge, relevant stimuli, and system knowledge. An analysis of data collected during two laboratory experiments reveals that guilty knowledge, relevant stimuli, and system knowledge all lead to increased use of countermeasures. The implications and limitations of this research are discussed and avenues for future research are outlined.

    Individual Determinants of Media Choice for Deception

    Recent research has found that deceivers are extremely difficult to detect in computer-mediated work settings. However, it is unclear which individuals are likely to use computer systems for deception in these settings. This study examined how 172 upper-level business students’ political skill, social skill, and tendency to use impression management related to their choice of deception medium in a business scenario. We found that most individuals preferred e-mail and face-to-face media to the phone for deception. However, individuals with high social skill, individuals with high political skill, and individuals with a tendency to use impression management predominantly chose the phone and face-to-face methods for deception. These findings imply that organizations need to be aware of deception in e-mail communications; however, they also need to be aware of deception in phone and face-to-face settings, since this deception will likely come from individuals who are skilled deceivers.

    Deception in Therapy: Setting as a Motivation

    The current study investigated setting as a motivation for deception. The therapy setting was compared to a casual social situation to see whether differences existed, on the speculation that therapy relationships involve more closeness and therefore less endorsement of, and motivation for, deception. The orientation of benefit (self vs. other) was also explored, as was lie acceptability. Participants were recruited from Angelo State University using Sona-Systems technology in return for course credit. Participants were asked to watch stimulus videos and complete the Deception Motivation Questionnaires in response. In addition, participants completed the Revised Lie Acceptability Scale and a Demographics Questionnaire. The questionnaires were used to assess participants’ use of, acceptance of, and motivations for using deception. Results indicated that setting was not a motivation for deception. Overall, results indicated significant effects for the types of lie and the orientation of benefit of deception. Further implications of motivational factors for using deception are discussed.

    Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses

    As the convergence between our physical and digital worlds continues at a rapid pace, securing our digital information is vital to our prosperity. Most typical computer systems are unwittingly helpful to attackers through their predictable responses. Deception plays a prominent role in everyday security, and digital security is no different. The use of deception has been a cornerstone technique in many successful computer breaches; phishing, social engineering, and drive-by downloads are some prime examples. The work in this dissertation is structured to enhance the security of computer systems by means of deception and deceit.

    To Deceive or not Deceive: Unveiling The Adoption Determinants Of Defensive Cyber Deception in Norwegian Organizations

    Due to the prevailing threat landscape in Norway, it is imperative for organizations to safeguard their infrastructures against cyber threats. One technology that is advantageous against these threats is defensive cyber deception, an approach in cyber security that aims to be proactive: to interact with attackers, trick and deceive them, and use this to the defender's advantage. This type of technology can help organizations defend against sophisticated threat actors that are able to evade more traditional defensive mechanisms, such as Intrusion Detection Systems (IDS) or Intrusion Prevention Systems (IPS). In order to aid the adoption of defensive cyber deception in Norway, we asked the question: "What affects the adoption of defensive cyber deception in organizations in Norway?". To answer this question, we utilized the Technology, Organization, and Environment (TOE) Framework to identify what factors affect an organization's adoption of defensive cyber deception. Through our use of the framework, we identified eighteen different factors which affect an organization's adoption of defensive cyber deception. These factors are the product of the empirical analysis of data from eight semi-structured interviews with individuals from six different organizations in Norway. The main theoretical implications of our research are the introduction of a TOE model for defensive cyber deception, focusing specifically on organizations in Norway, and the contribution of a maturity estimate model for defensive cyber deception. For the practical implications of our research, we have identified seven different benefits that defensive cyber deception provides. We are also contributing to raising awareness of defensive cyber deception in Norwegian research, and we hope that our TOE model can aid organizations that are considering adopting the technology. We hope that these implications and contributions can act as a spark both for the adoption of defensive cyber deception in organizations and for the start of a new wave of cyber security research within Norway.
    Keywords: Cyber Security, Defensive Cyber Deception, TOE Framework, Adoption

    The evolution of deception.

    Funders: MIT Media Lab; King's College London; Ethics and Governance of AI Fund.
    Deception plays a critical role in the dissemination of information and has important consequences for the functioning of cultural, market-based, and democratic institutions. Deception has been widely studied within the fields of philosophy, psychology, economics, and political science. Yet we still lack an understanding of how deception emerges in a society under competitive (evolutionary) pressures. This paper begins to fill this gap by bridging evolutionary models of social goods, namely public goods games (PGGs), with ideas from interpersonal deception theory (Buller and Burgoon 1996 Commun. Theory 6, 203-242. (doi:10.1111/j.1468-2885.1996.tb00127.x)) and truth-default theory (Levine 2014 J. Lang. Soc. Psychol. 33, 378-392. (doi:10.1177/0261927X14535916); Levine 2019 Duped: truth-default theory and the social science of lying and deception. University of Alabama Press). This provides a well-founded analysis of the growth of deception in societies and of the effectiveness of several approaches to reducing it. Assuming that knowledge is a public good, we use extensive simulation studies to explore (i) how deception impacts the sharing and dissemination of knowledge in societies over time, (ii) how different types of knowledge-sharing societies are affected by deception, and (iii) what type of policing and regulation is needed to reduce the negative effects of deception in knowledge sharing. Our results indicate that cooperation in knowledge sharing can be re-established by introducing institutions that investigate and regulate both defection and deception using a decentralized, case-by-case strategy. This provides evidence for adopting methods that reduce the use of deception in the world around us in order to avoid a Tragedy of the Digital Commons (Greco and Floridi 2004 Ethics Inf. Technol. 6, 73-81. (doi:10.1007/s10676-004-2895-2)).
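    The public goods game dynamic this abstract describes can be sketched in a few lines. The following is a minimal illustration only, assuming a toy imitation dynamic with made-up parameters (endowment, multiplier, strategy labels); it is not the paper's actual model, but it shows why unpoliced deceivers tend to spread: they collect the shared payoff without paying the contribution cost.

    ```python
    import random

    def play_round(strategies, endowment=1.0, multiplier=3.0):
        """One public goods round. A 'cooperator' pays the endowment into the
        pool; a 'deceiver' appears to contribute but adds nothing of value.
        Everyone receives an equal share of the multiplied pool."""
        n = len(strategies)
        pool = sum(endowment for s in strategies if s == "cooperator")
        share = multiplier * pool / n
        # Cooperators bear the contribution cost; deceivers only fake it.
        return [share - (endowment if s == "cooperator" else 0.0) for s in strategies]

    def evolve(population, rounds=200, seed=0):
        """Toy imitation dynamics: each round, one random agent copies the
        strategy of another random agent if that agent earned more."""
        rng = random.Random(seed)
        pop = list(population)
        for _ in range(rounds):
            payoffs = play_round(pop)
            i, j = rng.randrange(len(pop)), rng.randrange(len(pop))
            if payoffs[j] > payoffs[i]:
                pop[i] = pop[j]
        return pop

    pop = ["cooperator"] * 8 + ["deceiver"] * 2
    final = evolve(pop)
    print(final.count("deceiver"), "deceivers after evolution")
    ```

    Because a deceiver's payoff always exceeds a cooperator's by the contribution cost, imitation only ever converts cooperators into deceivers; this is the free-riding spiral that the paper's investigating and regulating institutions are designed to break.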