7 research outputs found

    Towards an Integrated In-Vehicle Isolation and Resilience Framework for Connected Autonomous Vehicles

    Get PDF
    Connected Autonomous Vehicles (CAV) have attracted significant attention, particularly due to the successful deployment of ultra-reliable low-latency communications over Fifth Generation (5G) wireless networks. Due to the safety-critical nature of CAV, reliability is one of the most widely investigated areas of research, and security of in-vehicle communications is mandatory to achieve this goal. Unfortunately, existing research has so far focused on in-vehicle isolation or resilience independently. This short paper presents the elements of an integrated in-vehicle isolation and resilience framework to attain a higher degree of reliability for CAV systems. The proposed framework architecture leverages the benefits of Trusted Execution Environments to mitigate several classes of threats. The framework implementation is also mapped to the AUTOSAR open automotive standard.

    TFHE-rs: A library for safe and secure remote computing using fully homomorphic encryption and trusted execution environments

    Get PDF
    Fully Homomorphic Encryption (FHE) and Trusted Execution Environments (TEEs) are complementary approaches that can both secure computations running remotely on a public cloud. Existing FHE schemes are, however, malleable by design and lack integrity protection, making them susceptible to integrity breaches in which an adversary could modify the data and corrupt the output. This paper describes how both the confidentiality and the integrity of remote computations can be assured by combining FHE with hardware-based secure enclave technologies. We provide a software library for performing FHE within the Intel SGX TEE, written in the memory-safe programming language Rust to strengthen the internal safety of the software and reduce its attack surface. We evaluate a sample application written with our library. We demonstrate that these concepts can feasibly be combined to provide stronger security guarantees with minimal development effort.
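    As a minimal illustration of the client/server FHE workflow such a library supports, the sketch below assumes the high-level API of Zama's open-source tfhe crate; the paper's own library and its Intel SGX enclave hosting are not reproduced here. Keys are generated and inputs encrypted on the client, the homomorphic computation runs on the untrusted server, and only the client can decrypt the result.

    // Sketch only: client-side key generation/encryption and a server-side
    // homomorphic addition, using Zama's `tfhe` crate (Rust). In the setting
    // the paper describes, the server-side part would run inside an SGX enclave.
    use tfhe::prelude::*;
    use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint8};

    fn main() {
        // Client: build a default configuration and generate the key pair.
        let config = ConfigBuilder::default().build();
        let (client_key, server_key) = generate_keys(config);

        // Client: encrypt the inputs before sending them to the server.
        let a = FheUint8::encrypt(27u8, &client_key);
        let b = FheUint8::encrypt(3u8, &client_key);

        // Server (untrusted host / enclave): compute directly on ciphertexts.
        set_server_key(server_key);
        let encrypted_sum = &a + &b;

        // Client: only the holder of the client key can decrypt the result.
        let clear_sum: u8 = encrypted_sum.decrypt(&client_key);
        assert_eq!(clear_sum, 30);
    }

    Hosting the server-side portion in a TEE is what adds the integrity guarantee the paper targets: the enclave can attest that the ciphertexts were processed by unmodified code, which malleable FHE alone cannot ensure.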

    CrowdGuard: Federated Backdoor Detection in Federated Learning

    Full text link
    Federated Learning (FL) is a promising approach enabling multiple clients to train Deep Neural Networks (DNNs) collaboratively without sharing their local training data. However, FL is susceptible to backdoor (or targeted poisoning) attacks. These attacks are initiated by malicious clients who seek to compromise the learning process by introducing specific behaviors into the learned model that can be triggered by carefully crafted inputs. Existing FL safeguards have various limitations: they are restricted to specific data distributions, reduce the global model accuracy by excluding benign models or adding noise, are vulnerable to adaptive defense-aware adversaries, or require the server to access local models, which allows data inference attacks. This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard uses a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both the client and server sides.
    Comment: To appear in the Network and Distributed System Security (NDSS) Symposium 2024. Phillip Rieger and Torsten Krauß contributed equally to this contribution. 19 pages, 8 figures, 5 tables, 4 algorithms, 5 equations.
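    The sketch below is only a simplified stand-in for the feedback-aggregation step: it filters candidate model updates by a plain majority vote over per-client verdicts, whereas CrowdGuard itself analyzes hidden-layer activations and applies iterative pruning with a server-side stacked clustering scheme inside TEEs. All names and thresholds are illustrative assumptions.

    use std::collections::HashMap;

    /// One client's verdict on one candidate model update; `flagged_poisoned`
    /// stands in for the hidden-layer analysis a real client would run.
    struct Feedback {
        model_id: usize,
        flagged_poisoned: bool,
    }

    /// Keep only the models that fewer than half of the reporting clients
    /// flagged. CrowdGuard replaces this plain vote with stacked clustering
    /// to resist rogue client feedback.
    fn filter_models(feedback: &[Feedback], model_ids: &[usize]) -> Vec<usize> {
        let mut counts: HashMap<usize, (usize, usize)> = HashMap::new(); // id -> (flagged, total)
        for fb in feedback {
            let entry = counts.entry(fb.model_id).or_insert((0, 0));
            if fb.flagged_poisoned {
                entry.0 += 1;
            }
            entry.1 += 1;
        }
        model_ids
            .iter()
            .copied()
            .filter(|id| match counts.get(id) {
                Some(&(flagged, total)) => 2 * flagged < total,
                None => true, // no feedback received: keep in this toy version
            })
            .collect()
    }

    fn main() {
        let feedback = vec![
            Feedback { model_id: 0, flagged_poisoned: false },
            Feedback { model_id: 0, flagged_poisoned: false },
            Feedback { model_id: 1, flagged_poisoned: true },
            Feedback { model_id: 1, flagged_poisoned: true },
        ];
        // Model 1 is excluded from aggregation, model 0 is kept.
        assert_eq!(filter_models(&feedback, &[0, 1]), vec![0]);
    }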

    FIVADMI: A Framework for In-Vehicle Anomaly Detection by Monitoring and Isolation

    Get PDF
    Self-driving vehicles have attracted significant attention in the automotive industry, which is heavily investing to reach the level of reliability needed from these safety-critical systems. Security of in-vehicle communications is mandatory to achieve this goal. Most of the existing research on detecting anomalies in in-vehicle communication does not take into account the low processing power of the in-vehicle network and ECUs (Electronic Control Units). These approaches also do not consider system-level isolation challenges, such as side-channel vulnerabilities, that may arise from the adoption of new technologies in the automotive domain. This paper introduces and discusses the design of a framework to detect anomalies in in-vehicle communications, including side-channel attacks. The proposed framework supports real-time monitoring of data exchanges among the components of the in-vehicle communication network and ensures the isolation of those components by deploying them in Trusted Execution Environments (TEEs). The framework is designed based on AUTOSAR, the open standard for automotive software architecture. The paper also discusses the implementation and evaluation of the proposed framework.
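    To make the kind of lightweight check such a framework could run concrete, the sketch below (not the paper's detector) flags frames on a periodic in-vehicle bus whose inter-arrival time deviates too far from the identifier's expected transmit period. The identifiers, periods, and tolerance are hypothetical; in the proposed framework a monitor like this would itself be deployed inside a TEE.

    use std::collections::HashMap;

    /// Allow +/-50% deviation from the expected period before flagging a frame.
    const TOLERANCE: f64 = 0.5;

    /// Frequency-based anomaly monitor for periodic in-vehicle (CAN) traffic.
    struct PeriodMonitor {
        expected_period_ms: HashMap<u32, f64>, // CAN ID -> nominal period
        last_seen_ms: HashMap<u32, f64>,       // CAN ID -> last arrival time
    }

    impl PeriodMonitor {
        fn new(expected: &[(u32, f64)]) -> Self {
            Self {
                expected_period_ms: expected.iter().copied().collect(),
                last_seen_ms: HashMap::new(),
            }
        }

        /// Returns true if this frame's timing is anomalous for its CAN ID.
        fn observe(&mut self, can_id: u32, timestamp_ms: f64) -> bool {
            let anomalous = match (
                self.expected_period_ms.get(&can_id),
                self.last_seen_ms.get(&can_id),
            ) {
                (Some(&period), Some(&last)) => {
                    (timestamp_ms - last - period).abs() > TOLERANCE * period
                }
                _ => false, // unknown ID or first observation: not judged here
            };
            self.last_seen_ms.insert(can_id, timestamp_ms);
            anomalous
        }
    }

    fn main() {
        // Hypothetical brake-status frame 0x0A0, expected every 10 ms.
        let mut monitor = PeriodMonitor::new(&[(0x0A0, 10.0)]);
        assert!(!monitor.observe(0x0A0, 0.0));
        assert!(!monitor.observe(0x0A0, 10.2)); // on schedule
        assert!(monitor.observe(0x0A0, 12.0));  // injected frame: flagged
    }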