364 research outputs found

    Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices

    Federated Learning (FL) is a technique for training models on data distributed across devices. Differential Privacy (DP) provides a formal privacy guarantee for sensitive data. Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy using FL and DP. However, the DP noise added to the model increases as the model size grows, which often prevents convergence. We propose Partial Embedding Updates (PEU), a novel technique to decrease noise by decreasing payload size. Furthermore, we adopt Low Rank Adaptation (LoRA) and Noise Contrastive Estimation (NCE) to reduce the memory demands of large models on compute-constrained devices. This combination of techniques makes it possible to train large-vocabulary language models while preserving accuracy and privacy.
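    The central trade-off here is that the DP noise added per round scales with the dimensionality of the transmitted update, so shrinking the payload (updating fewer embedding rows, training only low-rank adapters) directly reduces the noise the model must absorb. Below is a minimal, illustrative Python sketch of a LoRA-style low-rank adapter over a frozen weight matrix, showing how small the trainable payload becomes; the class name LoRALinear, the rank, and the dimensions are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only (not the paper's code): a LoRA-style adapter where
# the pretrained weight W is frozen and only the low-rank factors A and B are
# trained and transmitted, so the per-round payload (and hence the DP noise
# needed to protect it) is much smaller than the full layer.
import numpy as np

class LoRALinear:
    """Frozen dense layer W plus a trainable low-rank update B @ A."""
    def __init__(self, d_in, d_out, rank=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.02   # trainable factor, rank x d_in
        self.B = np.zeros((d_out, rank))                    # trainable factor, d_out x rank

    def __call__(self, x):
        # Effective weight is W + B @ A; only A and B ever leave the device.
        return x @ (self.W + self.B @ self.A).T

    def payload_size(self):
        return self.A.size + self.B.size    # parameters sent per round

    def frozen_size(self):
        return self.W.size

# Hypothetical large-vocabulary output layer: 512-dim hidden state, 20k-word vocabulary.
layer = LoRALinear(d_in=512, d_out=20000, rank=8)
x = np.random.default_rng(1).standard_normal((2, 512))  # a batch of 2 hidden states
logits = layer(x)                                        # shape (2, 20000)
print(layer.payload_size(), "trainable parameters vs", layer.frozen_size(), "frozen")
```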

    A brief tour of formally secure compilation

    Modern programming languages provide helpful high-level abstractions and mechanisms (e.g. types, modules, automatic memory management) that enforce good programming practices and are crucial when writing correct and secure code. However, the security guarantees provided by such abstractions are not preserved when a compiler translates a source program into object code. Formally secure compilation is an emerging research field concerned with the design and implementation of compilers that preserve source-level security properties at the object level. This paper presents a short guided tour of the relevant literature on secure compilation. Our goal is to help newcomers grasp the basic concepts of this field and, for this reason, we rephrase and present the most relevant results in the literature in a common setting.

    Dynamic IFC Theorems for Free!

    We show that noninterference and transparency, the key soundness theorems for dynamic IFC libraries, can be obtained "for free", as direct consequences of the more general parametricity theorem of type abstraction. This allows us to give very short soundness proofs for dynamic IFC libraries such as faceted values and LIO. Our proofs stay short even when fully mechanized for Agda implementations of the libraries in terms of type abstraction. Comment: CSF 2021 final version.
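    To make the objects of these theorems concrete, the following is a minimal Python sketch of the faceted-values idea behind such dynamic IFC libraries; the paper's mechanized development is in Agda and derives the theorems from parametricity, so the class name, the two-point secret/public distinction, and the methods below are illustrative assumptions, not the library's actual API.

```python
# Illustrative sketch only: a faceted value pairs a private facet (what a high,
# trusted observer sees) with a public facet (what everyone else sees).
class Faceted:
    def __init__(self, private, public):
        self.private = private
        self.public = public

    def observe(self, observer_is_high):
        # A low observer only ever sees the public facet, so secret inputs
        # cannot influence low outputs -- noninterference, informally.
        return self.private if observer_is_high else self.public

    def map(self, f):
        # Computation is applied to both facets, so public behaviour matches
        # that of an ordinary, unlabelled computation -- transparency, informally.
        return Faceted(f(self.private), f(self.public))

secret_age = Faceted(private=42, public=0)
doubled = secret_age.map(lambda x: x * 2)
print(doubled.observe(observer_is_high=False))  # 0: the low observer learns nothing
print(doubled.observe(observer_is_high=True))   # 84
```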

    A Composable Security Treatment of the Lightning Network
