    e-SAFE: Secure, Efficient and Forensics-Enabled Access to Implantable Medical Devices

    To facilitate monitoring and management, modern Implantable Medical Devices (IMDs) are often equipped with wireless capabilities, which raise the risk of malicious access. Although schemes have been proposed to secure IMD access, several issues remain open. First, pre-sharing a long-term key between a patient's IMD and a doctor's programmer is fragile: once the programmer is compromised, all of that doctor's patients are exposed. Establishing a temporary key via proximity avoids pre-shared keys, but because the approach lacks real authentication, it can be exploited by nearby adversaries or through man-in-the-middle attacks. Second, although prolonging the lifetime of IMDs is one of the most important design goals, few schemes attempt to reduce communication and computation overhead at the same time. Finally, how to safely record the commands issued by doctors for forensic purposes, which can be the last measure available to protect patients' rights, is commonly omitted in the existing literature. Motivated by these open problems, we propose e-SAFE, a scheme that significantly improves security and safety, reduces communication overhead, and enables IMD-access forensics. We present a novel lightweight encryption algorithm based on compressive sensing that encrypts and compresses IMD data simultaneously, reducing data-transmission overhead by over 50% while ensuring high data confidentiality and usability. We further provide a suite of protocols for device pairing, dual-factor authentication, and accountability-enabled access. Security analysis and performance evaluation show the validity and efficiency of the proposed scheme.
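
    The abstract gives no implementation details, but the core idea of compressive-sensing-based encryption can be sketched roughly as follows: a sensing matrix derived from a shared secret performs compression and encryption in one matrix multiply. Everything below (function names, the Gaussian matrix, the 50% ratio) is a hypothetical illustration, not e-SAFE's actual algorithm.

        # Hypothetical sketch of joint compression + encryption via
        # compressive sensing; NOT the paper's actual construction.
        import numpy as np

        def keyed_sensing_matrix(key: int, m: int, n: int) -> np.ndarray:
            """Derive an m x n Gaussian sensing matrix from a shared secret seed."""
            rng = np.random.default_rng(key)   # the key doubles as the RNG seed
            return rng.standard_normal((m, n)) / np.sqrt(m)

        def encrypt_compress(x: np.ndarray, key: int, ratio: float = 0.5) -> np.ndarray:
            """Map a length-n signal to m = ratio*n keyed measurements y = Phi @ x."""
            m = int(x.size * ratio)            # e.g., ~50% fewer bytes on the air
            phi = keyed_sensing_matrix(key, m, x.size)
            return phi @ x                     # compressed, key-dependent ciphertext

        # A receiver holding the same key regenerates Phi and recovers x with a
        # standard sparse-recovery solver, assuming x is sparse in a known basis.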

    Lifelong Generative Modeling

    Lifelong learning is the problem of learning multiple consecutive tasks sequentially, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner. It is essential to the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to unsupervised generative modeling, in which we continuously incorporate newly observed distributions into a learned model. We do so through a student-teacher Variational Autoencoder architecture that lets us learn and preserve all the distributions seen so far without retaining past data or past models. Through a novel cross-model regularizer, inspired by a Bayesian update rule, the student model leverages the information learned by the teacher, which acts as a probabilistic knowledge store. The regularizer reduces the catastrophic interference that appears when learning over sequences of distributions. We validate our model's performance on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A and demonstrate that our model mitigates the catastrophic interference faced by neural networks in sequential learning scenarios.
    Comment: 32 pages
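
    The abstract does not spell out the cross-model regularizer, so the sketch below shows one plausible form of such a student-teacher term, assuming diagonal-Gaussian VAE posteriors; the model interface (encode, elbo) is invented here purely for illustration.

        # Hypothetical student-teacher cross-model regularizer (the paper's exact
        # form may differ): penalize divergence between the student's and the
        # frozen teacher's approximate posteriors q(z|x) on teacher-generated data.
        import torch

        def gaussian_kl(mu_s, logvar_s, mu_t, logvar_t):
            """KL( N(mu_s, var_s) || N(mu_t, var_t) ) for diagonal Gaussians."""
            var_s, var_t = logvar_s.exp(), logvar_t.exp()
            kl = 0.5 * (logvar_t - logvar_s
                        + (var_s + (mu_s - mu_t) ** 2) / var_t - 1.0)
            return kl.sum(dim=-1).mean()

        def lifelong_loss(student, teacher, x_new, x_teacher, lam=1.0):
            """Standard ELBO on new data plus a cross-model consistency term."""
            elbo = student.elbo(x_new)          # assumed model method
            with torch.no_grad():               # teacher is a frozen knowledge store
                mu_t, logvar_t = teacher.encode(x_teacher)
            mu_s, logvar_s = student.encode(x_teacher)
            return -elbo + lam * gaussian_kl(mu_s, logvar_s, mu_t, logvar_t)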

    A secure archive for Voice-over-IP conversations

    An efficient archive securing the integrity of VoIP-based two-party conversations is presented. The solution is based on chains of hashes and continuously chained electronic signatures. Security is concentrated in a single, efficient component, allowing for a detailed analysis.
    Comment: 9 pages, 2 figures. © ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of VSW06, June 2006, Berlin, Germany.
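
    A chain of hashes of the kind the abstract describes can be sketched in a few lines: each archived record is hashed together with the previous chain head, so tampering with any record invalidates every later link. The periodic "chained electronic signature" is stubbed with an HMAC below; a real deployment would use an actual signature scheme, and all names here are hypothetical.

        # Minimal hash-chain archive sketch; not the paper's exact construction.
        import hashlib, hmac

        class HashChainArchive:
            def __init__(self, signing_key: bytes):
                self.head = b"\x00" * 32        # genesis value
                self.key = signing_key
                self.records = []               # (record, chained hash) pairs

            def append(self, record: bytes) -> bytes:
                # Chain the new record to everything archived before it.
                self.head = hashlib.sha256(self.head + record).digest()
                self.records.append((record, self.head))
                return self.head

            def sign_head(self) -> bytes:
                # Stand-in for a chained electronic signature over the chain head.
                return hmac.new(self.key, self.head, hashlib.sha256).digest()

        archive = HashChainArchive(b"archive-key")
        for chunk in (b"rtp-frame-1", b"rtp-frame-2"):
            archive.append(chunk)
        tag = archive.sign_head()               # persisted periodically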

    Cold Start Streaming Learning for Deep Networks

    The ability to dynamically adapt neural networks to newly available data without performance deterioration would revolutionize deep learning applications. Streaming learning (i.e., learning from one data example at a time) has the potential to enable such real-time adaptation, but current approaches i) freeze a majority of network parameters during streaming and ii) depend on offline base-initialization procedures over large subsets of data, which damages performance and limits applicability. To mitigate these shortcomings, we propose Cold Start Streaming Learning (CSSL), a simple, end-to-end approach for streaming learning with deep networks that uses a combination of replay and data augmentation to avoid catastrophic forgetting. Because CSSL updates all model parameters during streaming, the algorithm can begin streaming from a random initialization, making base initialization optional. Going further, the algorithm's simplicity allows theoretical convergence guarantees to be derived using analysis of the Neural Tangent Random Feature (NTRF). In experiments on the CIFAR100, ImageNet, and Core50 datasets, we find that CSSL outperforms existing streaming-learning baselines. Additionally, we propose a novel multi-task streaming learning setting and show that CSSL performs favorably in this domain. Put simply, CSSL performs well and demonstrates that the complicated, multi-step training pipelines adopted by most streaming methodologies can be replaced with a simple, end-to-end learning approach without sacrificing performance.
    Comment: 52 pages, 7 figures, pre-print
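
    As a rough sketch of the replay-plus-augmentation recipe described above (a simplification, with all names hypothetical rather than taken from the paper), each incoming example is pushed into a reservoir-sampled buffer, and every gradient step trains all parameters on an augmented replay mini-batch.

        # Hypothetical CSSL-style streaming step: replay + augmentation,
        # updating ALL parameters, so no frozen backbone or base init is needed.
        import random
        import torch

        class ReplayBuffer:
            def __init__(self, capacity: int):
                self.capacity, self.seen, self.data = capacity, 0, []

            def add(self, example):
                self.seen += 1
                if len(self.data) < self.capacity:
                    self.data.append(example)
                else:                           # reservoir sampling keeps a
                    j = random.randrange(self.seen)  # uniform sample of the stream
                    if j < self.capacity:
                        self.data[j] = example

            def sample(self, k: int):
                return random.sample(self.data, min(k, len(self.data)))

        def streaming_step(model, opt, loss_fn, augment, buffer, x, y, batch=32):
            buffer.add((x, y))
            xs, ys = zip(*buffer.sample(batch))
            xb = torch.stack([augment(xi) for xi in xs])  # replay + augmentation
            yb = torch.stack(ys)
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()   # gradients flow to every parameter
            opt.step()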