
    The Threat of Offensive AI to Organizations

    AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today, and what will their impact be in the future? In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary's methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.

    Continuous Operator Authentication for Teleoperated Systems Using Hidden Markov Models [post-print]

    In this article, we present a novel approach for continuous operator authentication in teleoperated robotic processes based on Hidden Markov Models (HMM). While HMMs were originally developed and widely used in speech recognition, they have shown great performance in human motion and activity modeling. We make an analogy between human language and teleoperated robotic processes (i.e., words are analogous to a teleoperator's gestures, sentences are analogous to the entire teleoperated task or process) and implement HMMs to model the teleoperated task. To test the continuous authentication performance of the proposed method, we conducted two sets of analyses. We built a virtual reality (VR) experimental environment using a commodity VR headset (HTC Vive) and a haptic-feedback-enabled controller (Sensable PHANToM Omni) to simulate a real teleoperated task. An experimental study with 10 subjects was then conducted. We also performed simulated continuous operator authentication by using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). The performance of the model was evaluated based on the continuous (real-time) operator authentication accuracy as well as resistance to a simulated impersonation attack. The results suggest that the proposed method is able to achieve 70% (VR experiment) and 81% (JIGSAWS dataset) continuous classification accuracy with a sample window as short as 1 second. It is also capable of detecting an impersonation attack in real time.
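    The core idea above, scoring an operator's gesture stream against a per-operator HMM and accepting or rejecting each window by its likelihood, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the two-state model, the discretized gesture symbols, and the decision threshold are all made-up values, and a real system would learn the parameters from recorded operator data (e.g. via Baum-Welch).

    ```python
    # Illustrative sketch (hypothetical parameters): score a discretized gesture
    # sequence against a per-operator HMM with the forward algorithm, then
    # authenticate a window by thresholding its log-likelihood.
    import math

    def forward_log_likelihood(obs, start_p, trans_p, emit_p):
        """Log P(obs | HMM) via the forward algorithm with per-step scaling."""
        n_states = len(start_p)
        # Initialization: alpha_0(i) = pi_i * b_i(o_0)
        alpha = [start_p[i] * emit_p[i][obs[0]] for i in range(n_states)]
        scale = sum(alpha)
        log_lik = math.log(scale)
        alpha = [a / scale for a in alpha]  # normalize to avoid underflow
        for o in obs[1:]:
            # Recursion: alpha_t(j) = b_j(o_t) * sum_i alpha_{t-1}(i) * a_ij
            alpha = [
                emit_p[j][o] * sum(alpha[i] * trans_p[i][j] for i in range(n_states))
                for j in range(n_states)
            ]
            scale = sum(alpha)
            log_lik += math.log(scale)
            alpha = [a / scale for a in alpha]
        return log_lik

    # Toy 2-state model for one operator; observations are gesture codes {0, 1, 2}.
    start_p = [0.6, 0.4]
    trans_p = [[0.7, 0.3], [0.4, 0.6]]
    emit_p = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]

    THRESHOLD = -15.0  # hypothetical per-operator decision threshold
    window = [0, 1, 0, 0, 2, 1, 0, 1, 0, 0]  # one short sample window of gestures
    score = forward_log_likelihood(window, start_p, trans_p, emit_p)
    print(score > THRESHOLD)  # accept the window as the genuine operator?
    ```

    Continuous authentication then amounts to sliding this window over the live gesture stream and re-scoring each step; an impersonator whose motion statistics differ from the enrolled operator's model produces systematically lower log-likelihoods.
    
    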

    Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

    In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two different paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.

    Towards Security and Privacy in Networked Medical Devices and Electronic Healthcare Systems

    E-health is a growing field which utilizes wireless sensor networks to enable access to effective and efficient healthcare services and provide patient monitoring to enable early detection and treatment of health conditions. Due to the proliferation of e-health systems, security and privacy have become critical issues in preventing data falsification, unauthorized access to the system, or eavesdropping on sensitive health data. Furthermore, due to the intrinsic limitations of many wireless medical devices, including low power and limited computational resources, security and device performance can be difficult to balance. Therefore, many current networked medical devices operate without basic security services such as authentication, authorization, and encryption. In this work, we survey recent work on e-health security, including biometric approaches, proximity-based approaches, key management techniques, audit mechanisms, anomaly detection, external device methods, and lightweight encryption and key management protocols. We also survey the state of the art in e-health privacy, including techniques such as obfuscation, secret sharing, distributed data mining, authentication, access control, blockchain, anonymization, and cryptography. We then propose a comprehensive system model for e-health applications with consideration of the battery capacity and computational ability of medical devices. A case study is presented to show that the proposed system model can support heterogeneous medical devices with varying power and resource constraints. The case study demonstrates that it is possible to significantly reduce the overhead for security on power-constrained devices based on the proposed system model.

    Cognitive Checkpoint: Emerging Technologies for Biometric-Enabled Watchlist Screening

    This paper revisits the problem of individual risk assessment in the layered security model. It contributes to the concept of balancing security and privacy via a cognitive-centric machine called an "e-interviewer". The cognitive checkpoint is a cyber-physical security frontier in mass-transit hubs that provides automated screening using all types of identity (attributed, biometric, and biographical) from both physical and virtual worlds. We investigate how the development of the next generation of watchlists for rapid screening impacts the sensitive balancing mechanism between security and privacy. We identify directions of such an impact, as well as trends in watchlist technologies, and propose ways to mitigate the potential risks.