1,019 research outputs found

    Privacy Enhanced Access Control by Means of Policy Blinding

    Traditional techniques of enforcing an access control policy rely on an honest reference monitor to enforce the policy. However, for applications where the resources are sensitive, the access control policy might also be sensitive. As a result, an honest-but-curious reference monitor would glean some interesting information from the requests that it processes. For example, if a requestor in the role of psychiatrist is granted access to a document, the patient associated with that document probably has a psychiatric problem. The patient would consider this sensitive information, and she might prefer the honest-but-curious reference monitor to remain oblivious of her mental health problem. We present a high-level framework for querying and enforcing a role-based access control policy that identifies where sensitive information might be disclosed. We then propose a construction which enforces a role-based access control policy cryptographically, in such a way that the reference monitor learns as little as possible about the policy (the reference monitor only learns something from repeated queries). We prove the security of our scheme, showing that it works in theory but has a practical drawback. However, this drawback is common to all cryptographically enforced access policy schemes. We identify several approaches to mitigate the drawback and conclude by arguing that there is an underlying fundamental problem that cannot be solved. We also show why attribute-based encryption techniques do not solve the problem of enforcing policy by an honest-but-curious reference monitor.
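    To make the abstract's setting concrete, here is a toy sketch (not the paper's actual construction) of a reference monitor that matches only blinded policy terms. All names and the keyed-PRF blinding are illustrative assumptions; note that, as the abstract observes, deterministic blinding still leaks equality across repeated queries.

```python
import hmac
import hashlib

def blind(key: bytes, value: str) -> bytes:
    """Deterministically blind a policy term with a keyed PRF."""
    return hmac.new(key, value.encode(), hashlib.sha256).digest()

# Policy owner's secret key; the reference monitor never sees it.
key = b"policy-owner-secret"

# Policy: blinded role -> set of blinded resources it may access.
blinded_policy = {
    blind(key, "psychiatrist"): {blind(key, "patient-42-notes")},
}

def monitor_decides(blinded_role: bytes, blinded_resource: bytes) -> bool:
    """The monitor matches opaque tokens; it learns only equality patterns."""
    return blinded_resource in blinded_policy.get(blinded_role, set())

# A requestor blinds its role and target resource before querying.
assert monitor_decides(blind(key, "psychiatrist"), blind(key, "patient-42-notes"))
assert not monitor_decides(blind(key, "nurse"), blind(key, "patient-42-notes"))
```

    The monitor grants or denies access without ever seeing the strings "psychiatrist" or "patient-42-notes", but repeated queries with the same tokens are linkable, mirroring the leakage the abstract describes.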

    EL PASSO: Efficient and Lightweight Privacy-preserving Single Sign On

    Anonymous credentials are a solid foundation for privacy-preserving Single Sign-On (SSO). They enable unlinkable authentication across domains and allow users to prove their identity without revealing more than necessary. Unfortunately, anonymous credential schemes remain difficult to use and complex to deploy. They require installation and use of complex software at the user side, suffer from poor performance, and do not support security features that are now common, such as two-factor authentication, secret recovery, or support for multiple devices. In contrast, OpenID Connect (OIDC), the de facto standard for SSO, is widely deployed and used despite its lack of concern for users' privacy. We present EL PASSO, a privacy-preserving SSO system based on anonymous credentials that does not trade security for usability and can be incrementally deployed at scale alongside OpenID Connect with no significant changes to end-user operations. EL PASSO client-side operations leverage a WebAssembly module that can be downloaded on the fly and cached by users' browsers, requiring no prior software installation or specific hardware. We develop automated procedures for managing cryptographic material, supporting multiple devices, secret recovery, and privacy-preserving two-factor authentication using only the built-in features of common Web browsers. Our implementation using PS Signatures achieves 39x to 180x lower computational cost than previous anonymous credential schemes, achieves similar or lower sign-on latency than OpenID Connect, and is amenable to use on mobile devices.
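    The key property the abstract highlights is unlinkability across domains. The following sketch illustrates only that flavor with a keyed per-domain pseudonym; EL PASSO's actual mechanism is PS-signature-based anonymous credentials, and the secret and domain names here are hypothetical.

```python
import hmac
import hashlib

def domain_pseudonym(user_secret: bytes, domain: str) -> str:
    """Derive a stable, per-domain identifier; two domains cannot link them."""
    return hmac.new(user_secret, domain.encode(), hashlib.sha256).hexdigest()

secret = b"user-master-secret"  # hypothetical client-side secret
id_a = domain_pseudonym(secret, "shop.example")
id_b = domain_pseudonym(secret, "forum.example")

assert id_a != id_b                                      # unlinkable across domains
assert id_a == domain_pseudonym(secret, "shop.example")  # stable within a domain
```

    Each relying party sees a consistent identifier for returning visits, yet colluding domains cannot correlate the two identifiers back to one user without the client-side secret.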

    Hardware-Assisted Secure Computation

    The theory community has worked on Secure Multiparty Computation (SMC) for more than two decades, and has produced many protocols for many settings. One common thread in these works is that the protocols cannot use a Trusted Third Party (TTP), even though this is conceptually the simplest and most general solution. Thus, current protocols involve only the direct players---we call such protocols self-reliant. They often use blinded boolean circuits, which have several sources of overhead, some due to the circuit representation and some due to the blinding. However, secure coprocessors like the IBM 4758 have actual security properties similar to ideal TTPs. They also have little RAM and a slow CPU. We call such devices Tiny TTPs. The availability of real tiny TTPs opens the door for a different approach to SMC problems. One major challenge with this approach is how to execute large programs on large inputs using the small protected memory of a tiny TTP, while preserving the trust properties that an ideal TTP provides. In this thesis we have investigated the use of real TTPs to help with the solution of SMC problems. We start with the use of such TTPs to solve the Private Information Retrieval (PIR) problem, which is one important instance of SMC. Our implementation utilizes a 4758. The rest of the thesis is targeted at general SMC. Our SMC system, Faerieplay, moves some functionality into a tiny TTP, and thus avoids the blinded circuit overhead. Faerieplay consists of a compiler from high-level code to an arithmetic circuit with special gates for efficient indirect array access, and a virtual machine to execute this circuit on a tiny TTP while maintaining the typical SMC trust properties. We report on Faerieplay's security properties, the specification of its components, and our implementation and experiments. These include comparisons with the Fairplay circuit-based two-party system, and an implementation of Dijkstra's shortest-path graph algorithm.
    We also provide an implementation of an oblivious RAM which supports similar tiny TTP-based SMC functionality but using a standard RAM program. Performance comparisons show Faerieplay's circuit approach to be considerably faster, at the expense of a more constrained programming environment when targeting a circuit.
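    The "special gates for efficient indirect array access" address a basic fact about circuit-based SMC: a circuit cannot perform data-dependent indexing directly, so the naive baseline reads an array obliviously by touching every slot. A minimal sketch of that baseline (not Faerieplay's optimized gates):

```python
def oblivious_read(arr, secret_index):
    """Read arr[secret_index] while touching every slot, so the memory
    access pattern is independent of the secret index (the naive O(n)
    approach that circuit representations force)."""
    result = 0
    for i, v in enumerate(arr):
        # Branch-free select: multiply by a 0/1 flag instead of indexing.
        result += v * (i == secret_index)
    return result

assert oblivious_read([10, 20, 30, 40], 2) == 30
```

    Every read costs a full scan, which is why avoiding this overhead, whether via special gates or an oblivious RAM, is central to making tiny-TTP SMC practical.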

    Toward an Effective Automated Tracing Process

    Traceability is defined as the ability to establish, record, and maintain dependency relations among various software artifacts in a software system, in both forwards and backwards directions, throughout the multiple phases of the project's life cycle. The availability of traceability information has been proven vital to several software engineering activities such as program comprehension, impact analysis, feature location, software reuse, and verification and validation (V&V). The research on automated software traceability has noticeably advanced in the past few years. Various methodologies and tools have been proposed in the literature to provide automatic support for establishing and maintaining traceability information in software systems. This movement is motivated by the increasing attention traceability has been receiving as a critical element of any rigorous software development process. However, despite these major advances, traceability implementation and use are still not pervasive in industry. In particular, traceability tools are still far from achieving performance levels that are adequate for practical applications. Such low levels of accuracy require software engineers working with traceability tools to spend a considerable amount of their time verifying the generated traceability information, a process that is often described as tedious, exhaustive, and error-prone. Motivated by these observations, and building upon a growing body of work in this area, in this dissertation we explore several research directions related to enhancing the performance of automated tracing tools and techniques. In particular, our work addresses several issues related to the various aspects of the IR-based automated tracing process, including trace link retrieval, performance enhancement, and the role of the human in the process.
    Our main objective is to achieve performance levels, in terms of accuracy, efficiency, and usability, that are adequate for practical applications, and ultimately to accomplish a successful technology transfer from research to industry.
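    IR-based trace link retrieval typically ranks candidate artifacts by textual similarity to a query artifact. As a hedged, stdlib-only sketch of the standard TF-IDF/cosine baseline (the artifacts below are hypothetical, and real tracing tools add preprocessing such as stemming and identifier splitting):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (term -> weight) for a small corpus."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    return [
        {t: c * math.log((1 + n) / (1 + df[t]))
         for t, c in Counter(d.split()).items()}
        for d in docs
    ]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / ((norm(u) * norm(v)) or 1.0)

# Hypothetical artifacts: one requirement and two code-module descriptions.
docs = [
    "user login password authentication",   # requirement
    "verify password hash during login",    # candidate module A
    "render report charts for dashboard",   # candidate module B
]
vecs = tfidf_vectors(docs)
# Candidate A shares vocabulary with the requirement and ranks higher.
assert cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2])
```

    The ranked candidate list is exactly what engineers must then verify by hand, which is why the dissertation's focus on accuracy and the human's role in the loop matters.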

    The multilayered nature of reference selection

    Why authors choose some references in preference to others is a question that is still not wholly answered, despite its being of interest to scientists. The relevance of references is twofold: They are a mechanism for tracing the evolution of science, and because they enhance the image of the cited authors, citations are a widely known and used indicator of scientific endeavor. Following an extensive review of the literature, we selected all papers that seek to answer the central question and demonstrate that the existing theories are not sufficient: Neither citation nor indicator theory provides a complete and convincing answer. Some perspectives in this arena remain, which are isolated from the core literature. The purpose of this article is to offer a fresh perspective on a 30-year-old problem by extending the context of the discussion. We suggest reviving the discussion about citation theories with a new perspective, that of the readers, by layers or phases, in the final choice of references, allowing for a new classification in which any paper, to date, could be included.

    Titanium: A Metadata-Hiding File-Sharing System with Malicious Security

    End-to-end encrypted file-sharing systems enable users to share files without revealing the file contents to the storage servers. However, the servers still learn metadata, including user identities and access patterns. Prior work tried to remove such leakage but relied on strong assumptions. Metal (NDSS '20) is not secure against malicious servers. MCORAM (ASIACRYPT '20) provides confidentiality against malicious servers, but not integrity. Titanium is a metadata-hiding file-sharing system that offers confidentiality and integrity against malicious users and servers. Compared with MCORAM, which offers confidentiality against malicious servers, Titanium also offers integrity. Experiments show that Titanium is at least 5x-200x faster than MCORAM.
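    The leakage the abstract targets is easy to demonstrate: even when a server stores only opaque blobs, its request log still records who touched which file and when. A toy model of that gap (not Titanium's protocol; names and the encryption stand-in are illustrative):

```python
import hashlib

# A toy server that stores only opaque blobs, yet still logs metadata.
store, access_log = {}, []

def put(user, file_id, plaintext: bytes):
    blob = hashlib.sha256(plaintext).digest()  # stand-in for real encryption
    store[file_id] = blob
    access_log.append((user, "put", file_id))

def get(user, file_id):
    access_log.append((user, "get", file_id))
    return store[file_id]

put("alice", "f1", b"secret report")
get("bob", "f1")

# File contents are hidden, but identities and access patterns are not:
assert access_log == [("alice", "put", "f1"), ("bob", "get", "f1")]
```

    A metadata-hiding design like Titanium aims to deny the server exactly this view: which user acted, which file was involved, and how accesses correlate over time.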