13 research outputs found

    FLSys: Toward an Open Ecosystem for Federated Learning Mobile Apps

    Full text link
    This paper presents the design, implementation, and evaluation of FLSys, a mobile-cloud federated learning (FL) system that supports deep learning models for mobile apps. FLSys is a key component toward creating an open ecosystem of FL models and apps that use these models. FLSys is designed to work with mobile sensing data collected on smartphones, balance model performance with resource consumption on the phones, tolerate phone communication failures, and achieve scalability in the cloud. In FLSys, different DL models with different FL aggregation methods in the cloud can be trained and accessed concurrently by different apps. Furthermore, FLSys provides a common API for third-party app developers to train FL models. FLSys is implemented in Android and the AWS cloud. We co-designed FLSys with an in-the-wild human activity recognition (HAR) FL model. HAR sensing data was collected in two areas from the phones of 100+ college students during a five-month period. We implemented HAR-Wild, a CNN model tailored to mobile devices, with a data augmentation mechanism to mitigate the problem of non-Independent and Identically Distributed (non-IID) data that affects FL model training in the wild. A sentiment analysis (SA) model is used to demonstrate how FLSys effectively supports concurrent models; it uses a dataset with 46,000+ tweets from 436 users. We conducted extensive experiments on Android phones and emulators showing that FLSys achieves good model utility and practical system performance. Comment: The first two authors contributed equally to this work.
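The cloud-side aggregation step this abstract refers to can be sketched as follows. This is a minimal, hypothetical illustration of sample-weighted federated averaging (FedAvg), not FLSys's actual API or aggregation code; the function and variable names are invented for illustration.

```python
# Hypothetical sketch of a cloud-side FL aggregation step (FedAvg-style).
# Each phone reports its locally trained weights and its sample count;
# the server combines them, weighting each update by its share of the data.

def fed_avg(updates):
    """updates: list of (weights, n_samples); weights are equal-length lists."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    merged = [0.0] * dim
    for weights, n in updates:
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)
    return merged

# Three phones report updates trained on differently sized local datasets:
global_model = fed_avg([([1.0, 0.0], 50), ([0.0, 1.0], 50), ([0.5, 0.5], 100)])
print(global_model)  # [0.5, 0.5]
```

The weighting by sample count is what makes the aggregation sensitive to non-IID client data, which is the problem the paper's augmentation mechanism targets.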

    Protecting Functional Programs From Low-Level Attackers

    No full text
    Software systems are growing ever larger. Early software systems were singular units developed by small teams of programmers writing in the same programming language. Modern software systems, on the other hand, consist of numerous interoperating components written by different teams and in different programming languages. While this more modular and diversified approach to software development has enabled us to build ever larger and more complex software systems, it has also made it harder to ensure their reliability and security. In this thesis we study and remedy the security flaws that arise when attempting to resolve the difference in abstractions between components written in high-level functional programming languages and components written in imperative low-level programming languages. High-level functional programming languages treat computation as the evaluation of mathematical functions. Low-level imperative programming languages, on the contrary, provide programmers with features that enable them to interact directly with the underlying hardware. While these features help programmers write more efficient software, they also make it easy to write malware through techniques such as buffer overflows and return-oriented programming. Concretely, we develop new run-time approaches for protecting components written in functional programming languages from malicious components written in low-level programming languages by making use of an emerging memory isolation mechanism. This memory isolation mechanism is called the Protected Module Architecture (PMA). Informally, PMA isolates the code and data that reside within a certain area of memory by restricting access to that area based on the location of the program counter.
We develop these run-time protection techniques, which make use of PMA, for three important areas where components written in functional programming languages are threatened by malicious low-level components: foreign function interfaces, abstract machines, and compilation. In every one of these three areas, we formally prove that our run-time protection techniques are indeed secure. In addition, we provide implementations of our ideas through a fully functional compiler and a well-performing abstract machine.
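The program-counter-based access control that PMA provides can be simulated in a few lines. This is an illustrative toy model, not the thesis's formalization: the protected region's data is visible only to code running inside the region, and outside code may enter only through designated entry points.

```python
# Toy simulation of program-counter-based memory access control (PMA-style).
# All names and address ranges here are invented for illustration.

PROTECTED = range(100, 200)   # addresses of the protected module
ENTRY_POINTS = {100}          # the only addresses where outside code may enter

def may_read(pc, addr):
    # Data in the protected region is readable only from inside the region.
    return addr not in PROTECTED or pc in PROTECTED

def may_jump(pc, target):
    # Outside code may jump into the module only at an entry point.
    if pc in PROTECTED or target not in PROTECTED:
        return True
    return target in ENTRY_POINTS

assert may_read(pc=150, addr=160)        # the module reads its own data
assert not may_read(pc=10, addr=160)     # an attacker cannot read module data
assert may_jump(pc=10, target=100)       # entering via the entry point is fine
assert not may_jump(pc=10, target=150)   # jumping past the checks is blocked
```

The point of the mechanism is that the access decision depends on *where* the executing code sits, which is what lets a protected high-level component keep its secrets from arbitrary low-level code in the same address space.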

    Modelling an Assembly Attacker by Reflection

    No full text
    Many high-level functional programming languages are compiled to, or interoperate with, low-level languages such as C and assembly. Research into the security of these compilation and interoperation mechanisms often makes use of high-level attacker models to simplify formalisations. In practice, however, the validity of such high-level attacker models is frequently called into question. In this technical report we formally prove that a light-weight ML-like language with references, equipped with a reflection operator, can serve as an accurate model for malicious assembly language programs when reasoning about the security threats posed to the abstractions of high-level functional programs that reside within a protected memory space. The proof proceeds by relating a bisimulation over the inputs and observations of the assembly language attacker to a bisimulation over the inputs and observations of the high-level attacker. This technical report is an extended version of a paper titled "A High-Level Model for an Assembly Language Attacker by means of Reflection" that is to appear at SETTA 2015.
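The proof idea of relating two attackers through bisimulations can be illustrated on a toy scale: two labelled transition systems are bisimilar when matched states enable the same labels and their successors are again matched. The systems in the report are far richer; the code below only shows the shape of the argument, with invented names.

```python
# Toy bisimilarity check for deterministic labelled transition systems.
# delta maps (state, label) -> successor state.

def bisimilar(s1, s2, delta1, delta2, seen=None):
    seen = seen if seen is not None else set()
    if (s1, s2) in seen:          # already assumed related; coinductive step
        return True
    seen.add((s1, s2))
    labels1 = {l for (s, l) in delta1 if s == s1}
    labels2 = {l for (s, l) in delta2 if s == s2}
    if labels1 != labels2:        # one side can make a move the other cannot
        return False
    return all(bisimilar(delta1[(s1, l)], delta2[(s2, l)], delta1, delta2, seen)
               for l in labels1)

# Two differently named two-state "call/return" loops are bisimilar:
a = {("p", "call"): "q", ("q", "ret"): "p"}
b = {("x", "call"): "y", ("y", "ret"): "x"}
print(bisimilar("p", "x", a, b))  # True
```

Relating a bisimulation on assembly-level observations to one on high-level observations, as the report does, then transfers security reasoning from the low-level attacker to the much simpler high-level one.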

    Formalizing a Secure Foreign Function Interface - Extended Version

    No full text
    Many high-level functional programming languages provide programmers with the ability to interoperate with untyped and low-level languages such as C and assembly. Research into such interoperation has generally focused on a closed-world scenario, one where both the high-level and low-level code are defined and analyzed statically. In practice, however, components are sometimes linked in at run-time through malicious means. In this paper we formalize an operational semantics that securely combines MiniML, a light-weight ML, with a model of a low-level attacker, without relying on any static checks. We prove that this operational semantics is secure by establishing that it preserves and reflects the equivalences of MiniML. To that end, a notion of bisimulation for the interaction between the attacker and MiniML is developed. This technical report is an extended version of the paper of the same name that is to appear at SEFM 2015.
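The core idea of securing the boundary with run-time checks instead of static analysis can be sketched generically: every value crossing the foreign function interface is validated against the type the high-level side expects. This is an illustrative sketch, not MiniML's semantics; all names below are hypothetical.

```python
# Sketch: wrap an untrusted low-level function with dynamic boundary checks,
# so ill-typed values can never flow into the high-level component.

def checked_import(low_level_fn, arg_type, ret_type):
    def wrapper(x):
        if not isinstance(x, arg_type):
            raise TypeError("ill-typed argument crossed the boundary")
        result = low_level_fn(x)
        if not isinstance(result, ret_type):
            raise TypeError("attacker returned an ill-typed value")
        return result
    return wrapper

# A well-behaved and a malicious low-level component:
double = checked_import(lambda n: n * 2, int, int)
evil = checked_import(lambda n: "not an int", int, int)

print(double(21))   # 42
try:
    evil(21)
except TypeError as e:
    print(e)        # attacker returned an ill-typed value
```

Because the check happens at every crossing, no assumptions about the low-level side need to hold statically, which mirrors the paper's "no static checks" setting.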

    Implementing a Secure Abstract Machine – Extended Version

    No full text
    Abstract machines are both theoretical models used to study language properties and practical models of language implementations. As with all language implementations, abstract machines are subject to security violations by the context in which they reside. This paper presents the implementation of an abstract machine for ML that preserves the abstractions of the language it implements, even in possibly malicious low-level contexts. To guarantee this security result, we make use of a low-level memory isolation mechanism and derive the formalisation of the machine through a methodology whose every step is accompanied by formal properties ensuring that the step has been carried out properly. We provide an implementation of the abstract machine and analyse its performance for relevant scenarios. This technical report serves as the companion report to a SEC@SAC 2016 paper of the same name.
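To make the term concrete: an abstract machine is a transition system over machine configurations, stepped one instruction at a time. The paper's machine implements ML and is hardened with memory isolation; the deliberately tiny sketch below only shows the machine style itself, with an invented instruction set.

```python
# A minimal stack-based abstract machine: each tuple is one instruction,
# and each loop iteration is one machine transition over the stack.

def run(code):
    stack = []
    for op, *args in code:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# (2 + 3) * 7, evaluated one transition at a time:
print(run([("push", 2), ("push", 3), ("add",), ("push", 7), ("mul",)]))  # 35
```

A real ML machine adds environments, closures, and control, and it is exactly such internal state that the paper's memory isolation keeps out of reach of a malicious context.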

    A Secure Compiler for ML Modules - Extended Version

    No full text
    Many functional programming languages compile to low-level languages such as C or assembly. Most security properties of those compilers, however, apply only when the compiler compiles whole programs. This paper presents a compilation scheme that securely compiles a standalone module of ModuleML, a light-weight version of ML with modules, into untyped assembly. The compilation scheme is secure in that it reflects the abstractions of a ModuleML module for every possible piece of assembly code that it interacts with. This is achieved by isolating the compiled module through a low-level memory isolation mechanism and by dynamically type checking the low-level interactions. We evaluate an implementation of the compiler on relevant test scenarios.

    Operational semantics for secure interoperation

    No full text
    Modern software systems are commonly programmed in multiple languages. Research into the security and correctness of such multi-language programs has generally relied on static methods that check both the individual components as well as the interoperation between them. In practice, however, components are sometimes linked in at run-time through malicious means. In this paper we introduce a technique to specify operational semantics that securely combine an abstraction-rich language with a model of an arbitrary attacker, without relying on any static checks. The resulting operational semantics instead lifts a proven memory isolation mechanism into the resulting multi-language system. We establish the security benefits of our technique by proving that the obtained multi-language system preserves and reflects the equivalences of the abstraction-rich language. To that end, a notion of bisimilarity for this new type of multi-language system is developed.

    A Secure Compiler for ML Modules - Extended Version

    No full text
    Many functional programming languages compile to low-level languages such as C or assembly. Most security properties of those compilers, however, apply only when the compiler compiles whole programs. This paper presents a compilation scheme that securely compiles a standalone module of ModuleML, a light-weight version of ML with modules, into untyped assembly. The compilation scheme is secure in that it reflects the abstractions of a ModuleML module for every possible piece of assembly code that it interacts with. This is achieved by isolating the compiled module through a low-level memory isolation mechanism and by dynamically type checking the low-level interactions. We evaluate an implementation of the compiler on relevant test scenarios. This technical report is an updated version of TR 2015-017 and serves as the companion report to an APLAS 2015 paper of the same title.