8 research outputs found

    Block public access: Trust safety verification of access control policies

    © 2020 Owner/Author. Data stored in cloud services is highly sensitive, and so access to it is controlled via policies written in domain-specific languages (DSLs). The expressiveness of these DSLs gives users the flexibility to cover a wide variety of use cases; however, unintended misconfigurations can lead to security issues. We introduce Block Public Access, a tool that formally verifies policies to ensure that they only allow access to trusted principals, i.e., that they prohibit access to the general public. To this end, we formalize the notion of Trust Safety, which characterizes whether or not a policy allows unconstrained (public) access. Next, we present a method to compile a policy down to a logical formula whose unsatisfiability (1) can be checked by SMT and (2) ensures Trust Safety. The constructs of the policy DSLs render unsatisfiability checking PSPACE-complete, which precludes verifying the millions of requests per second seen at cloud scale. Hence, we present an approach that leverages the structure of the policy DSL to compute a much smaller residual policy that corresponds only to untrusted accesses. This allows Block Public Access, in the common case, to syntactically verify Trust Safety without querying the SMT solver. We have implemented Block Public Access and present an evaluation showing how the above optimization yields a low-latency policy verifier that the S3 team at AWS has integrated into their authorization system, where it is currently in production, analyzing millions of policies every day to ensure that client buckets do not grant unintended public access.
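The syntactic fast path described above can be illustrated with a minimal Python sketch. This is a hypothetical, radically simplified policy model (a list of statement dicts), not the AWS implementation or the real IAM semantics: a statement allowing the wildcard principal with no conditions is flagged as public immediately, specific principals are trusted, and conditioned wildcards are deferred to the solver.

```python
# Hypothetical sketch of a syntactic Trust Safety fast path over a toy
# policy model (list of statement dicts) -- not the AWS implementation.

def trust_safety_fast_path(policy):
    """Return 'safe', 'public', or 'needs-smt' for a toy policy."""
    needs_smt = False
    for stmt in policy:
        if stmt.get("Effect") != "Allow":
            continue  # Deny statements cannot grant public access
        if stmt.get("Principal") == "*":
            if not stmt.get("Condition"):
                return "public"  # unconstrained access: not trust-safe
            # Wildcard constrained by conditions: residual case for SMT.
            needs_smt = True
    return "needs-smt" if needs_smt else "safe"


# Example usage with toy statements:
safe_policy = [{"Effect": "Allow",
                "Principal": "arn:aws:iam::123456789012:root"}]
public_policy = [{"Effect": "Allow", "Principal": "*"}]
conditional = [{"Effect": "Allow", "Principal": "*",
                "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}}]

print(trust_safety_fast_path(safe_policy))    # -> safe
print(trust_safety_fast_path(public_policy))  # -> public
print(trust_safety_fast_path(conditional))    # -> needs-smt
```

The "needs-smt" branch corresponds to the residual policy of the paper: only the statements a syntactic scan cannot decide are handed to the solver.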

    Code-level model checking in the software development workflow at Amazon Web Services

    This article describes a style of applying symbolic model checking developed over the course of four years at Amazon Web Services (AWS). Lessons learned are drawn from proving properties of numerous C-based systems, for example, custom hypervisors, encryption code, boot loaders, and an IoT operating system. Using our methodology, we find that we can prove the correctness of industrial low-level C-based systems with reasonable effort and predictability. Furthermore, AWS developers are increasingly writing their own formal specifications. As part of this effort, we have developed a CI system that allows integration of the proofs into standard development workflows and extended the proof tools to provide better feedback to users. All proofs discussed in this article are publicly available on GitHub.

    Semantic-based Automated Reasoning for AWS Access Policies using SMT

    Cloud computing provides on-demand access to IT resources via the Internet. Permissions for these resources are defined by expressive access control policies. This paper presents a formalization of the Amazon Web Services (AWS) policy language and a corresponding analysis tool, called ZELKOVA, for verifying policy properties. ZELKOVA encodes the semantics of policies into SMT, compares behaviors, and verifies properties. It provides users with a sound mechanism to detect misconfigurations of their policies. ZELKOVA solves a PSPACE-complete problem and is invoked many millions of times daily.
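ZELKOVA's core query, comparing the behaviors of two policies, can be pictured without an SMT solver on a tiny finite request space. The sketch below is an illustrative stand-in with invented, simplified semantics (a statement matches a request if its principal and action fields match or are "*"); the real tool encodes the full policy language into SMT rather than enumerating requests.

```python
# Illustrative stand-in for ZELKOVA-style policy comparison: exhaustively
# check a tiny finite request space instead of encoding into SMT.
# Toy semantics invented for illustration, not the real AWS semantics.

from itertools import product

PRINCIPALS = ["alice", "bob"]
ACTIONS = ["s3:GetObject", "s3:PutObject"]

def allows(policy, principal, action):
    """A request is allowed if some statement matches it ('*' matches any)."""
    return any(
        s["Principal"] in ("*", principal) and s["Action"] in ("*", action)
        for s in policy
    )

def no_more_permissive(p1, p2):
    """Check that every request p1 allows is also allowed by p2."""
    return all(
        allows(p2, pr, a)
        for pr, a in product(PRINCIPALS, ACTIONS)
        if allows(p1, pr, a)
    )

read_only = [{"Principal": "alice", "Action": "s3:GetObject"}]
read_write = [{"Principal": "alice", "Action": "*"}]

print(no_more_permissive(read_only, read_write))  # True
print(no_more_permissive(read_write, read_only))  # False
```

An SMT encoding replaces the enumeration with a satisfiability query over symbolic requests, which is what makes the check feasible over unbounded principal and resource strings.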

    Least-Privilege Identity-Based Policies for Lambda Functions in Amazon Web Services (AWS)

    We address least-privilege in a particular context of public cloud computing: identity-based policies for callback functions, called Lambda functions, in serverless applications of the Amazon Web Services (AWS) cloud provider. We argue that this is an important context in which to consider the fundamental security design principle of least-privilege, which states that every thread of execution should possess only those privileges it needs. We observe that poor documentation from AWS makes the task of devising least-privilege policies difficult for developers of such applications. We then describe our experimental approach to discovering least-privilege for a method call, and our observations, some of which are alarming, from running it against 171 methods across five different AWS services. We also discuss our assessment of two repositories, and two full-fledged serverless applications, all of which are publicly available, for least-privilege, and find that the vast majority of policies are over-privileged. We conclude with a few recommendations for developers of Lambda functions in AWS. Our work suggests that much work is needed, both from developers and providers, in securing cloud applications from the standpoint of least-privilege.
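An experimental least-privilege discovery loop of the kind described above can be sketched as follows. Everything here is hypothetical scaffolding: `try_call` stands in for actually invoking an AWS method under a trial policy (it is not a real AWS API), and the candidate action list and required set are invented for the example.

```python
# Hypothetical sketch: discover a minimal action set for a method call by
# growing a permission set until the (mocked) call succeeds, then
# shrinking it.  `try_call` is a stand-in, not a real AWS invocation.

def discover_least_privilege(candidate_actions, try_call):
    granted = set()
    for action in candidate_actions:
        if try_call(granted):
            break  # call already succeeds; stop adding privileges
        granted.add(action)
    # Shrink: drop any action whose removal still leaves the call working.
    for action in sorted(granted):
        if try_call(granted - {action}):
            granted.remove(action)
    return granted


# Toy stand-in: the "method" needs exactly these two permissions.
REQUIRED = {"s3:GetObject", "s3:ListBucket"}
mock_call = lambda granted: REQUIRED <= granted

candidates = ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "iam:PassRole"]
print(sorted(discover_least_privilege(candidates, mock_call)))
# -> ['s3:GetObject', 's3:ListBucket']
```

Against a real service each `try_call` is an actual invocation under a trial policy, which is what makes the discovery expensive and the resulting measurements valuable.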

    Evaluating verification awareness as a method for assessing adaptation risk

    Self-integration requires a system to be self-aware and self-protecting of its functionality and communication processes to mitigate interference in accomplishing its goals. Incorporating self-protection into a framework for reasoning about compliance with critical requirements is a major challenge when the system’s operational environment may have uncertainties resulting in runtime changes. The reasoning should cover a range of impacts and tradeoffs so that the system can immediately address an issue, even if only partially or imperfectly. Assuming that critical requirements can be formally specified and embedded as part of system self-awareness, runtime verification often involves extensive on-board resources and state explosion, with minimal explanation of results. Model checking partially mitigates these issues by abstracting the system operations and architecture. However, validating the consistency of a model given a runtime change is generally performed external to the system and translated back to the operational environment, which can be inefficient. This paper focuses on codifying and embedding verification awareness into a system. Verification awareness is a type of self-awareness related to reasoning about compliance with critical properties at runtime when a system adaptation is needed. The premise is that an adaptation that interferes with a design-time proof process for requirement compliance increases the risk that the original proof process cannot be reused. The greater the risk of limiting proof process reuse, the higher the probability that the requirement would be violated by the adaptation. The application of Rice’s 1953 theorem to this domain indicates that determining whether a given adaptation inherently inhibits proof reuse is undecidable, motivating the heuristic, comparative approach based on proof meta-data that is part of our approach.
To demonstrate our deployment of verification awareness, we predefine four adaptations that are all available to three distinct wearable simulations (stress, insulin delivery, and hearables). We capture meta-data from applying automated theorem proving to wearable requirements and assess the risk, among the four adaptations, of limiting proof process reuse for each of their requirements. The results show that the adaptations affect proof process reuse differently on each wearable. We evaluate our reasoning framework by embedding checkpoints on requirement compliance within the wearable code and logging the execution trace of each adaptation. The logs confirm that the adaptation selected by each wearable with the lowest risk of inhibiting proof process reuse for its requirements also causes the fewest requirement failures in execution.
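The comparative, meta-data-driven selection that the abstract motivates can be sketched in a few lines. Since deciding whether an adaptation inhibits proof reuse is undecidable, the heuristic scores each adaptation by how much of the design-time proof it touches. All the lemma names and meta-data below are invented for illustration; the paper's actual proof meta-data comes from automated theorem proving over wearable requirements.

```python
# Hypothetical sketch of a proof-reuse risk heuristic: score each
# candidate adaptation by the fraction of design-time proof lemmas it
# perturbs, and select the lowest-risk one.  Meta-data is invented.

def proof_reuse_risk(touched_lemmas, proof_lemmas):
    """Fraction of the design-time proof's lemmas an adaptation invalidates."""
    return len(touched_lemmas & proof_lemmas) / len(proof_lemmas)

def select_adaptation(adaptations, proof_lemmas):
    """Return the adaptation name with the lowest proof-reuse risk."""
    return min(adaptations,
               key=lambda a: proof_reuse_risk(adaptations[a], proof_lemmas))

# Toy meta-data: lemmas used in the design-time compliance proof, and the
# lemmas each candidate adaptation would perturb.
proof = {"dose_bound", "sensor_sane", "alarm_latency"}
adaptations = {
    "reduce_sampling": {"sensor_sane"},
    "skip_alarm":      {"alarm_latency", "dose_bound"},
    "lower_precision": {"sensor_sane", "alarm_latency"},
}

print(select_adaptation(adaptations, proof))  # -> reduce_sampling
```

The same comparison runs per wearable, which is why the abstract reports that each simulation can rank the four adaptations differently.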