
    Practical Formal Methods for Real World Cryptography

    Cryptographic algorithms, protocols, and applications are difficult to implement correctly, and errors and vulnerabilities in their code can remain undiscovered for long periods before they are exploited. Even highly regarded cryptographic libraries suffer from bugs like buffer overruns, incorrect numerical computations, and timing side-channels, which can lead to the exposure of sensitive data and long-term secrets. We describe a tool chain and framework based on the F* programming language to formally specify, verify, and compile high-performance cryptographic software that is secure by design. This tool chain has been used to build a verified cryptographic library called HACL*, and provably secure implementations of sophisticated secure communication protocols like TLS and Signal. We describe these case studies and conclude with ongoing work on using our framework to build verified implementations of privacy-preserving machine learning systems.
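
    The timing side-channels mentioned above are a useful concrete example of what "secure by design" rules out. The Python sketch below is purely illustrative (the paper's toolchain is F*, compiled to C, not Python): it contrasts a naive byte comparison, whose early exit leaks how much of a secret matches, with a comparison whose running time does not depend on the data.

        import hmac

        def naive_equal(a: bytes, b: bytes) -> bool:
            # Early exit leaks, through timing, how many leading bytes match.
            if len(a) != len(b):
                return False
            for x, y in zip(a, b):
                if x != y:
                    return False
            return True

        def constant_time_equal(a: bytes, b: bytes) -> bool:
            # hmac.compare_digest runs in time independent of where the inputs differ.
            return hmac.compare_digest(a, b)

        # Example: checking a received MAC tag against the expected one.
        expected = bytes.fromhex("aabbccdd")
        received = bytes.fromhex("aabbcc00")
        print(naive_equal(expected, received), constant_time_equal(expected, received))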

    A verification framework for secure machine learning

    We propose a programming and verification framework to help developers build distributed software applications using composite homomorphic encryption (and secure multi-party computation) protocols, and implement secure machine learning and classification over private data. With our framework, a developer can prove that the application code is functionally correct, that it correctly composes the various cryptographic schemes it uses, and that it does not accidentally leak any secrets (via side-channels, for example). Our end-to-end solution results in verified and efficient implementations of state-of-the-art secure, privacy-preserving learning and classification techniques.
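
    As a toy illustration of the kind of cryptographic building block such an application composes (this is not the paper's framework, which produces verified implementations), the sketch below shows additive secret sharing, a basic secure multi-party computation technique: parties learn the sum of their inputs without any single party seeing another party's value. The field modulus and party count are arbitrary choices for the example.

        import secrets

        P = 2**61 - 1  # prime modulus for a toy finite field (arbitrary choice)

        def share(value: int, n_parties: int):
            # Split value into n additive shares that sum to value mod P.
            shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        def secure_sum(private_inputs):
            n = len(private_inputs)
            # Each input owner distributes one share to every party (column i goes to party i).
            all_shares = [share(v, n) for v in private_inputs]
            # Each party adds the shares it holds; only these partial sums are revealed.
            partial_sums = [sum(row[i] for row in all_shares) % P for i in range(n)]
            return sum(partial_sums) % P

        print(secure_sum([12, 30, 7]))  # 49, with no party seeing another's input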

    Understanding policy intent and misconfigurations from implementations: consistency and convergence

    We study the problem of inferring policy intent to identify misconfigurations in access control implementations. This is in contrast to traditional role-mining techniques, which focus on creating better abstractions for access control management. We show how raw metadata can be summarized effectively by grouping together users with similar permissions over shared resources. Using these summary statements, we apply statistical techniques to detect outliers, which we classify as security and accessibility misconfigurations. Specifically, we show how our techniques for mining policy intent are robust and have strong consistency and convergence guarantees.
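
    A minimal sketch of the summarization and outlier-detection idea described above, assuming a toy user-to-permission mapping (all names and permissions are hypothetical, and the real system uses richer statistical tests than this majority comparison):

        from collections import defaultdict

        # Hypothetical raw metadata: user -> set of (resource, permission) pairs.
        perms = {
            "alice": {("wiki", "read"), ("wiki", "write"), ("repo", "read")},
            "bob":   {("wiki", "read"), ("wiki", "write"), ("repo", "read")},
            "carol": {("wiki", "read"), ("wiki", "write"), ("repo", "read")},
            "dave":  {("wiki", "read"), ("wiki", "write"), ("repo", "read"), ("payroll", "read")},
        }

        def summarize(perms):
            # Group users that hold exactly the same permission set.
            groups = defaultdict(list)
            for user, pset in perms.items():
                groups[frozenset(pset)].append(user)
            return groups

        def flag_outliers(perms):
            groups = summarize(perms)
            majority = max(groups, key=lambda k: len(groups[k]))  # the dominant summary
            for user, pset in perms.items():
                extra, missing = pset - majority, majority - pset
                if extra:
                    print(f"possible security misconfiguration: {user} also holds {extra}")
                if missing:
                    print(f"possible accessibility misconfiguration: {user} lacks {missing}")

        flag_outliers(perms)  # flags dave's extra ('payroll', 'read') permission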

    Formal Models and Verified Protocols for Group Messaging: Attacks and Proofs for IETF MLS

    Group conversations are supported by most modern messaging applications, but the security guarantees they offer are significantly weaker than those for two-party protocols like Signal. The problem is that mechanisms that are efficient for two parties do not scale well to large dynamic groups where members may be regularly added and removed. Further, group messaging introduces subtle new security requirements that require new solutions. The IETF Messaging Layer Security (MLS) working group is standardizing a new asynchronous group messaging protocol that aims to achieve strong guarantees like forward secrecy and post-compromise security for large dynamic groups. In this paper, we define a formal framework for group messaging in the F* language and use it to compare the security and performance of several candidate MLS protocols up to draft 7. We present a succinct, executable, formal specification and symbolic security proof for TreeKEMB, the group key establishment protocol in MLS draft 7. Our analysis finds new attacks and we propose verified fixes, which are now being incorporated into MLS. Ours is the first mechanically checked proof for MLS, and our analysis technique is of independent interest, since it accounts for groups of unbounded size, stateful recursive data structures, and fine-grained compromise.
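
    To give a rough intuition for TreeKEM-style group key establishment (this is an illustrative toy, not the MLS protocol or the paper's F* specification, and it omits the public-key encryption of path secrets to sibling subtrees), the sketch below derives a group key as the root of a hash tree over member leaf secrets; refreshing one leaf changes only the values on that member's path to the root, which is what makes key updates scale to large groups.

        import hashlib

        def h(*parts: bytes) -> bytes:
            return hashlib.sha256(b"|".join(parts)).digest()

        def build_tree(leaf_secrets):
            # Bottom-up: each parent secret is derived from its two children.
            levels = [list(leaf_secrets)]
            while len(levels[-1]) > 1:
                prev = levels[-1]
                levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
            return levels  # levels[-1][0] is the group (root) secret

        leaves = [h(name.encode()) for name in ["alice", "bob", "carol", "dave"]]
        root_before = build_tree(leaves)[-1][0]

        # Alice refreshes her leaf secret (e.g., after a suspected compromise).
        # For simplicity we rebuild the whole tree; only the values on her root
        # path actually change, since the other subtrees are untouched.
        leaves[0] = h(b"alice-fresh-secret")
        root_after = build_tree(leaves)[-1][0]
        print(root_before != root_after)  # True: the group key has moved forward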

    Identification of irregularities and allocation suggestion of relative file system permissions

    It is well established that file system permissions in large, multi-user environments can be audited to identify vulnerabilities with respect to what is regarded as standard practice. For example, a user may have an elevated level of access to a system directory that is unnecessary and introduces a vulnerability. Similarly, the allocation of new file system permissions can be assigned following the same standard practices. Less well established, by contrast, is the identification of potential vulnerabilities, as well as the implementation of new permissions, with respect to a system's current access control implementation. Such tasks are heavily reliant on expert interpretation. For example, the assigned relationships between users and groups, directories and their parents, and the allocation of permissions on file system resources all need to be carefully considered. This paper presents the novel use of statistical analysis to establish independence and homogeneity in allocated file system permissions. This independence can be interpreted as potential anomalies in a system's implementation of access control. The paper then presents the use of instance-based learning to suggest the allocation of new permissions conforming to a system's current implementation structure. Both of the presented techniques are then included in a tool for interacting with Microsoft's New Technology File System (NTFS) permissions, and evaluated experimentally on six different NTFS directory structures from different organisations. The effectiveness of the developed techniques is established by analysing true positive and true negative values. The presented results demonstrate the potential of the proposed techniques for overcoming complexities with real-world file system administration.
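
    As an illustration of the instance-based learning step only (the statistical independence tests are not shown), the following sketch suggests new permissions for a user from that user's most similar peers, using Jaccard similarity over permission sets; the data, the choice of metric, and k are hypothetical.

        def jaccard(a: set, b: set) -> float:
            return len(a & b) / len(a | b) if a | b else 0.0

        def suggest_permissions(target, perms, k=2):
            # Rank the other users by how similar their permission sets are to the target's.
            others = [(jaccard(perms[target], pset), user)
                      for user, pset in perms.items() if user != target]
            neighbours = [user for _, user in sorted(others, reverse=True)[:k]]
            # Suggest permissions that every close neighbour holds but the target lacks.
            shared = set.intersection(*(perms[u] for u in neighbours))
            return shared - perms[target]

        perms = {
            "eve":   {("projects", "read"), ("projects", "write")},
            "frank": {("projects", "read"), ("projects", "write"), ("builds", "read")},
            "grace": {("projects", "read"), ("projects", "write"), ("builds", "read")},
            "heidi": {("mail", "read")},
        }
        print(suggest_permissions("eve", perms))  # {('builds', 'read')}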

    Zdvue: Prioritization of JavaScript attacks to discover new vulnerabilities

    Malware writers are constantly looking for new vulnerabilities to exploit in popular software applications. A previously unknown vulnerability whose exploit evades state-of-the-art anti-virus and intrusion-detection systems is called a zero-day vulnerability. JavaScript is a popular vehicle for testing and delivering attacks through drive-by downloads on web clients. Failed attack attempts leave traces of suspicious activity on victim machines. We present ZDVUE, a tool for automatic prioritization of suspicious JavaScript traces, which can lead to early detection of potential zero-day vulnerabilities. Our algorithm uses a combination of correlation analysis and mixture modeling for fast and robust prioritization of suspicious JavaScript samples. On data collected between June and November 2009, ZDVUE identified a new zero-day vulnerability and its variant in its top results, as well as revealing many new anti-virus signatures. ZDVUE is used in our organization on a routine basis to automatically filter, analyze, and prioritize thousands of downloaded JavaScript files, providing information used to update anti-virus signatures and to find new zero-day vulnerabilities.
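
    A minimal sketch of the prioritization idea, assuming a hypothetical token-count feature set and a simple distance-to-baseline score (the real system's features, correlation analysis, and mixture modeling are not reproduced here):

        import math
        import re

        SUSPICIOUS = ["eval", "unescape", "fromCharCode", "document.write", "setTimeout"]

        def features(js_source: str):
            # Token counts standing in for the real feature extraction.
            return [len(re.findall(re.escape(tok), js_source)) for tok in SUSPICIOUS]

        def score(feat, baseline_means):
            # Higher score = further from the profile of ordinary scripts.
            return math.sqrt(sum((f - m) ** 2 for f, m in zip(feat, baseline_means)))

        samples = {
            "a.js": 'document.write("hello")',
            "b.js": 'eval(unescape("%75%6e"));eval(String.fromCharCode(118, 97, 114));',
        }
        baseline = [0.1, 0.0, 0.0, 0.5, 0.2]  # hypothetical means over benign scripts
        ranked = sorted(samples, key=lambda n: score(features(samples[n]), baseline),
                        reverse=True)
        print(ranked)  # ['b.js', 'a.js']: the obfuscated sample is examined first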

    Modeling Insecurity: Enabling Recovery-Oriented Security with Dynamic Policies

    Policy engineering for access-control security has traditionally focused on specification and verification of safety properties ("nothing bad happens"). In most real systems, however, resources and access mechanisms are regularly compromised, either maliciously by attackers or inadvertently due to vulnerabilities caused by poor systems engineering. I argue that the all-or-nothing nature of assurance provided by safety engineering cannot describe or reason about systems that are secure and survivable: systems that can be engineered to proactively or reactively change their security policies and policy enforcement mechanisms, and thereby continue to provide assurance for critical resources, in spite of compromises and failures. In this thesis, I present a framework that extends traditional state-transition models of access control security to describe timing guarantees and stochastic behavior, and show how we can introduce notions of information compromise, subsequent recovery (whenever possible), and flexible response in a modular fashion. Our framework is also capable of describing insider attacks. I show how we need to focus on liveness properties ("something good eventually happens") to explicitly capture the temporal and dynamic nature of enforceable guarantees required for survivability. I develop a new class of properties, expressed as branching-time temporal logic formulas, that focus on secure availability as a measure of survivability. For finite-state models, the validation of these formulas is decidable in polynomial time using automated model-checking techniques. To showcase the expressive power of our framework, I apply it to study network Denial of Service (DoS) attacks and model resilience to such attacks as a survivability property. I show how we can systematically analyze the relative impact of different anti-DoS strategies by changing policies and mechanisms during an attack. Using our automated verification methodology, we formally prove for the first time whether strategies such as selective filtering, strong authentication, and client puzzles reduce the vulnerability of an example network to DoS attacks.
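
    A toy version of the kind of property at stake, assuming a hypothetical four-state compromise-and-recovery model: the check below asks whether a secure state remains reachable from every reachable state (roughly AG EF secure in branching-time temporal logic). A real analysis would use a model checker over a far richer model with timing and stochastic behavior.

        from collections import deque

        # Hypothetical state-transition model with compromise and recovery transitions.
        transitions = {
            "secure":      ["compromised"],   # an attacker may break in
            "compromised": ["detected"],      # monitoring notices the compromise
            "detected":    ["recovering"],
            "recovering":  ["secure"],        # a policy change restores assurance
        }
        GOOD = {"secure"}

        def reachable(start, graph):
            seen, queue = {start}, deque([start])
            while queue:
                state = queue.popleft()
                for nxt in graph.get(state, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        def recovery_always_possible(initial, graph, good):
            # AG EF good: from every reachable state, some good state is still reachable.
            return all(reachable(s, graph) & good for s in reachable(initial, graph))

        print(recovery_always_possible("secure", transitions, GOOD))  # True for this model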

    Unlinkability through Access Control: Respecting User-Privacy in Distributed Systems

    We propose a policy-based framework using RBAC (Role Based Access Control) to address the unlinkability problem in the context of correlating audit records generated from access to distributed services. We explore this problem in an environment where the enforcement of access control policies is decentralized, and where ensuring policy consistency as the protection state of the system evolves becomes important. We introduce the notion of an audit flow associated with a user's access transactions, which represents the flow of information through audit logs within an administrative domain. Users of our system can present a set of audit flows to a decision engine that uses global access rules to detect potential linkability conflicts. Users can use this information to specify discretionary unlinkability requirements, depending on whether these accesses can expose sensitive attributes. We present an algorithm that can generate policy constraints based on these discretionary requirements. We also show how these policy constraints can be attached to individual audit log records to enforce unlinkability in a distributed manner. We prove that our proposed algorithm generates constraints that are secure and precise under strong tranquility assumptions with respect to the system's protection state. When we relax these assumptions, we show how versioning can cope with an evolving protection state, trading off precision to maintain the security of deployed policies.
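
    A minimal sketch of what a linkability conflict looks like, assuming hypothetical audit flows and requirements (the paper's decision engine additionally consults global access rules): two transactions the user wants kept unlinkable conflict when some administrative domain's audit log records both.

        # Hypothetical audit flows: each transaction and the domains whose logs record it.
        audit_flows = {
            "book_appointment":   {"clinic", "billing"},
            "fill_prescription":  {"pharmacy", "billing"},
            "join_support_group": {"community"},
        }

        # The user asks that these pairs of transactions remain unlinkable.
        unlinkability_requirements = [("book_appointment", "fill_prescription")]

        def linkability_conflicts(flows, requirements):
            conflicts = []
            for a, b in requirements:
                shared = flows[a] & flows[b]
                if shared:
                    # One domain holds audit records of both transactions and
                    # could therefore correlate them to the same user.
                    conflicts.append((a, b, shared))
            return conflicts

        print(linkability_conflicts(audit_flows, unlinkability_requirements))
        # [('book_appointment', 'fill_prescription', {'billing'})]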

    Distributed Enforcement of Unlinkability Policies: Looking Beyond the Chinese Wall

    This paper presents an access control model that preserves the unlinkability of audit-logs in a distributed environment. The model restricts entities from accessing and correlating two or more audit-records belonging to different service invocations created by the same user. While the traditional Chinese Wall (CW) model is sufficient to enforce this type of unlinkability, in distributed environments CW is inefficient because the simple security condition semantics requires knowledge of a user's access history. Our model allows specifications that are simple and efficient to enforce in a decentralized manner without the need for an access history. The proposed enforcement architecture allows users to negotiate unlinkability policies with the system. The system attaches automatically generated policy constraints to the audit-records. When these constraints are enforced appropriately, they implement unlinkability policies that are provably secure and precise for a fixed protection state. The model extends to a versioning scheme that adapts to evolving protection state, trading off precision to maintain the security of deployed policies.
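
    The contrast drawn above can be sketched with two toy checks (the record fields and the form of the attached constraint are hypothetical): a Chinese-Wall-style decision must consult the accessing entity's full history, while a constraint-based decision is made from the record alone.

        # Each audit record carries the constraint the system attached at creation time;
        # here the constraint is simply the single audit scope allowed to open the record.
        records = [
            {"id": "r1", "user": "u42", "invocation": "inv-a", "scope": "scope-a"},
            {"id": "r2", "user": "u42", "invocation": "inv-b", "scope": "scope-b"},
        ]

        def chinese_wall_allows(accessor_history, record):
            # History-based check: deny if the accessor already holds a record of the
            # same user from a different service invocation.
            return all(prev["user"] != record["user"] or
                       prev["invocation"] == record["invocation"]
                       for prev in accessor_history)

        def constraint_allows(accessor_scope, record):
            # Decentralized check: the decision is local to the record's own constraint.
            return record["scope"] == accessor_scope

        history = [records[0]]
        print(chinese_wall_allows(history, records[1]))  # False, but needed the history
        print(constraint_allows("scope-a", records[1]))  # False, decided from the record alone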

    Routing with Confidence: A Model for Trustworthy Communication

    We present a model for trustworthy communication with respect to security and privacy in heterogeneous networks. In general, existing privacy protocols assume independently operated nodes spread over the Internet. Most of the analysis of these protocols has assumed a fraction of colluding nodes picked at random. While these approaches provide promising guarantees of anonymity for such attack models, we argue that trust relationships dominate threats to privacy at smaller scales, and such independence assumptions should not be made. For example, within an organization, all nodes along a chosen path may be physically collocated, making a collusion attack more likely. Users can have varying notions of threat to their privacy: users may not trust nodes located in a particular domain, or may consider nodes with low physical security to be a particularly strong threat to their privacy. We present a model for trustworthy communication that addresses users' privacy needs in such environments. Our model also applies to peer-to-peer anonymizing networks such as Tor, for finding more trustworthy routes; for example, users may consider nodes operating in a particular country to be untrustworthy. We recognize that users in the network will have different perceived threats and must be allowed to "route around" untrustworthy nodes based on these threats. Our research makes the following contributions. We present a formalization of trustworthy routing and examine its properties in an effort to understand the boundaries of attribute-based trustworthy routing schemes. We propose a model that exposes trust relationships in the network to concerned users. Our policy language allows users to specify qualitative path policies based on their own perceived threats to security and privacy. We define a general quantitative measure of trust that is used to find routes that are most trustworthy based on this measure. We identify feasible and infeasible interpretations of trust by showing how trustworthy routes can be computed efficiently for certain semantic models of trust, and by contributing several NP-hardness results for infeasible models of trust.
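
    One feasible interpretation of a quantitative trust measure, sketched under assumed per-node trust scores and a made-up topology: treat the trustworthiness of a route as the minimum trust of any node on it and find the route maximizing that bottleneck (a widest-path variant of Dijkstra), after removing nodes the user's qualitative policy rules out. This is only an illustration of the feasible case; other interpretations of trust are shown in the work to be NP-hard.

        import heapq

        def most_trustworthy_path(graph, trust, src, dst, banned=frozenset()):
            # Maximize the minimum node trust along the path (widest-path Dijkstra).
            best = {src: trust[src]}
            heap = [(-trust[src], src, [src])]
            while heap:
                neg_bottleneck, node, path = heapq.heappop(heap)
                if node == dst:
                    return -neg_bottleneck, path
                for nxt in graph.get(node, []):
                    if nxt in banned:
                        continue  # the user's path policy rules this node out entirely
                    bottleneck = min(-neg_bottleneck, trust[nxt])
                    if bottleneck > best.get(nxt, 0):
                        best[nxt] = bottleneck
                        heapq.heappush(heap, (-bottleneck, nxt, path + [nxt]))
            return 0.0, None

        # Hypothetical topology and trust scores (e.g., derived from the user's threat model).
        graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
        trust = {"A": 1.0, "B": 0.4, "C": 0.9, "D": 0.8}
        print(most_trustworthy_path(graph, trust, "A", "D", banned={"B"}))
        # (0.8, ['A', 'C', 'D'])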