
    Gaming security by obscurity

    Shannon sought security against the attacker with unlimited computational powers: *if an information source conveys some information, then Shannon's attacker will surely extract that information*. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography. Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should rely only upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there is an algorithm that can break the system, then the attacker will surely find that algorithm*. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods, and adapting to them?
    Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos corrected
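The closing question, whether a system can gain security by learning and adapting, invites a toy illustration. The following sketch is not from the paper; it is an invented repeated attack-defense game in which a defender who tracks observed attack frequencies outperforms a non-adaptive baseline. All names and weights are assumptions.

```python
import random
from collections import Counter

# Toy model, invented for illustration (not the paper's construction): a
# repeated game in which the defender adapts to the observed distribution
# of the attacker's methods.

ATTACKS = ["inject", "phish", "bruteforce"]

def attacker():
    # A biased attacker; the defender does not know these weights in advance.
    return random.choices(ATTACKS, weights=[0.6, 0.3, 0.1])[0]

def adaptive_defender(history):
    # Defend against the method observed most often so far -- a crude form of
    # "actively learning the attacker's methods, and adapting to them".
    if not history:
        return random.choice(ATTACKS)
    return Counter(history).most_common(1)[0][0]

history, blocked = [], 0
for _ in range(1000):
    attack = attacker()
    if adaptive_defender(history) == attack:
        blocked += 1
    history.append(attack)

# The adaptive defender converges on the attacker's favorite method and blocks
# roughly 60% of attacks, versus about 33% for a uniformly random defender.
print(f"blocked {blocked}/1000")
```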

    Formal Derivation of Concurrent Garbage Collectors

    Concurrent garbage collectors are notoriously difficult to implement correctly. Previous approaches to producing correct collectors have mainly been based on posit-and-prove verification or on the application of domain-specific templates and transformations. We show how to derive the upper reaches of a family of concurrent garbage collectors by refinement from a formal specification, emphasizing the application of domain-independent design theories and transformations. A key contribution is an extension of the classical lattice-theoretic fixpoint theorems to account for the dynamics of concurrent mutation and collection.
    Comment: 38 pages, 21 figures. The short version of this paper appeared in the Proceedings of MPC 2010
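For orientation, the sequential baseline of the fixpoint view is easy to state: the mark phase of a collector computes the least fixpoint of "roots plus everything reachable from what is already marked" on the lattice of node sets ordered by inclusion. The sketch below is a minimal illustration of that baseline, not the paper's concurrent derivation; the heap encoding is an assumption.

```python
# Illustrative sketch (not the paper's derivation): marking as the least
# fixpoint of F(M) = roots | successors(M), computed by Kleene iteration on
# the inclusion-ordered lattice of node sets. The paper's contribution is
# extending such fixpoint reasoning to concurrent mutation and collection.

def mark(roots, edges):
    marked = set(roots)
    while True:
        # One application of F, restricted to newly discovered nodes
        new = {y for x in marked for y in edges.get(x, ())} - marked
        if not new:          # F(marked) == marked: least fixpoint reached
            return marked
        marked |= new

heap = {"a": ["b"], "b": ["c"], "d": ["e"]}
print(mark({"a"}, heap))     # {'a', 'b', 'c'}; 'd' and 'e' are garbage
```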

    From Gödel's Incompleteness Theorem to the completeness of bot beliefs (Extended abstract)

    Hilbert and Ackermann asked for a method to consistently extend incomplete theories to complete theories. Gödel essentially proved that any theory capable of encoding its own statements and their proofs contains statements that are true but not provable. Hilbert did not accept that Gödel's construction answered his question, and in his late writings and lectures, Gödel agreed that it did not, since theories can be completed incrementally, by adding axioms to prove ever more true statements, as science normally does, with completeness as the vanishing point. This pragmatic view of validity is familiar not only to scientists, who conjecture and test hypotheses, but also to real estate agents and other dealers, who conjure claims, albeit invalid, as necessary to close a deal, confident that they will be able to conjure other claims, albeit invalid, sufficient to make the first claims valid. We study the underlying logical process and describe the trajectories leading to testable but unfalsifiable theories to which bots and other automated learners are likely to converge.
    Comment: 19 pages, 13 figures; version updates: changed one word in the title, expanded Introduction, improved presentation, tidied up some diagrams

    Bicompletions of distance matrices

    In the practice of information extraction, the input data are usually arranged into pattern matrices, and analyzed by the methods of linear algebra and statistics, such as principal component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The usual reason is that the matrix composition of linear algebra presents information as flowing in waves, whereas it sometimes flows in particles, which seek the shortest paths. This wave-particle duality in computation and information processing was originally observed by Abramsky. In this paper we pursue a particle view of information, formalized in *distance spaces*, which generalize metric spaces, but are slightly less general than Lawvere's *generalized metric spaces*. In this framework, the task of extracting the 'principal components' from a given matrix of data boils down to a *bicompletion*, in the sense of enriched category theory. We describe the bicompletion construction for distance matrices. The practical goal that motivates this research is to develop a method to estimate the hardness of attack constructions in security.
    Comment: 20 pages, 5 figures; appeared in Springer LNCS vol 7860 in 2013; v2 fixes an error in Sec. 2.3, noticed by Toshiki Kataoka
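The wave/particle contrast has a concrete computational reading: ordinary matrix composition sums contributions over all paths, whereas composition in the (min, +) semiring keeps only the shortest connecting path. The following sketch is an invented example of the latter, not the paper's bicompletion construction; the matrix values are assumptions.

```python
import math

# Invented example (not from the paper): composing distance matrices in the
# (min, +) semiring. Linear-algebraic composition sums over all paths (the
# "wave" view); min-plus composition keeps only the shortest connecting path
# (the "particle" view pursued in the paper).

def min_plus(A, B):
    n, p = len(A), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(p)] for i in range(n)]

INF = math.inf
# d[i][j]: direct distance from point i to point j (INF = no direct link)
d = [[0,   1,   INF],
     [INF, 0,   2],
     [5,   INF, 0]]

# Iterating the composition to a fixpoint closes d under the triangle
# inequality, yielding all shortest paths -- loosely, a first step toward the
# kind of completion the paper studies for distance matrices.
d2 = min_plus(d, d)
print(d2[0][2])  # 3: the path 0 -> 1 -> 2 replaces the missing direct link
```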

    Correctness by Construction for Pairwise Sequence Alignment Algorithm in Bio-Sequence

    Pairwise sequence alignment is a classical problem in bioinformatics: it aims at finding the similarity between two sequences, which is important for discovering functional, structural, and evolutionary information in biological sequences. Many algorithms have been developed for the sequence alignment problem, but the existing pairwise sequence alignment algorithms lack a formal development process, which leads to their low trustworthiness. In addition, the application of formal methods in the field of bioinformatics algorithm development is rarely seen. In this paper, we use the formal method PAR to construct a pairwise sequence alignment algorithm: we analyze the essence of the pairwise sequence alignment problem, construct the Apla algorithm program by stepwise refinement, and further verify its correctness. Finally, a highly reliable and executable pairwise sequence alignment program is generated from the Apla program via the PAR platform. The formal construction process ensures the reliability of the algorithm and also demonstrates the algorithm design idea clearly, which makes the originally difficult algorithm design process easier. The successful practice of this method on the pairwise sequence alignment problem in biological sequence analysis can provide a reference for the construction of highly reliable algorithms in complex bioinformatics, from both methodological and practical aspects.
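The abstract does not reproduce the Apla/PAR development, so as a point of reference here is a standard Needleman-Wunsch scoring sketch for global pairwise alignment. The scoring parameters are arbitrary assumptions, and this sketch carries none of the correctness guarantees the paper derives.

```python
# Standard Needleman-Wunsch global alignment scoring, a common textbook
# algorithm for pairwise alignment. This is NOT the paper's PAR-derived Apla
# program; match/mismatch/gap values below are arbitrary assumptions.

def needleman_wunsch_score(s, t, match=1, mismatch=-1, gap=-2):
    n, m = len(s), len(t)
    # score[i][j] = best score for aligning s[:i] against t[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap                # s[:i] against the empty prefix
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,   # (mis)match
                              score[i - 1][j] + gap,       # gap in t
                              score[i][j - 1] + gap)       # gap in s
    return score[n][m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```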

    Book reports


    Security Policy Alignment: A Formal Approach

    Security policy alignment concerns the matching of security policies specified at different levels in socio-technical systems and delegated to different agents, technical and human. For example, the policy that sales data should not leave an organization is refined into policies on door locks, firewalls, and employee behavior, and this refinement should be correct with respect to the original policy. Although alignment of security policies in socio-technical systems has been discussed in the literature, especially in relation to business goals, there has been no formal treatment of this topic so far in terms of consistency and completeness of policies. Wherever formal approaches are used in policy alignment, they are applied to well-defined technical access control scenarios instead. In this paper, we therefore aim at formalizing security policy alignment for complex socio-technical systems; our formalization is based on predicates over sequences of actions. We discuss how this formalization provides the foundations for existing and future methods for finding security weaknesses induced by misalignment of policies in socio-technical systems.
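As a minimal sketch of this setup, a policy can be rendered as a Boolean predicate over action sequences, and one direction of alignment checked by testing that every trace admitted by the refined policy is admitted by the original. The actions and policies below are invented for illustration, not taken from the paper, and a real alignment check would need a symbolic argument rather than trace enumeration.

```python
from itertools import product

# Invented example: policies as predicates over sequences of actions, with a
# brute-force check that the refinement implies the original policy on all
# short traces.

ACTIONS = ["badge_in", "open_door", "copy_sales_data", "email_outside"]

def original_policy(trace):
    # "Sales data should not leave the organization."
    return not ("copy_sales_data" in trace and "email_outside" in trace)

def refined_policy(trace):
    # Refinement via an outbound mail filter: no emailing outside at all.
    return "email_outside" not in trace

ok = all(original_policy(t)
         for n in range(4)
         for t in product(ACTIONS, repeat=n)
         if refined_policy(t))
print("refinement respects the original policy on short traces:", ok)  # True
```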

    A Methodology for Automated Verification of Rosetta Specification Transformations

    The Rosetta system-level design language is a specification language created to support design and analysis of heterogeneous models at varying levels of abstraction. These abstraction levels are represented in Rosetta as domains, each specifying a particular semantic vocabulary and modeling style. This dissertation proposes a framework, semantics, and methodology for automated verification of safety preservation over specification transformations between domains. Utilizing the ideas of lattice theory, abstract interpretation, and category theory, we define the semantics of a Rosetta domain, as well as the safety of specification transformations between domains, using Galois connections and functors. With the help of Isabelle, a higher-order logic theorem prover, we verify the existence of Galois connections between Rosetta domains as well as the safety of transforming specifications between these domains. This work overviews the semantic infrastructure required to construct the Rosetta domain lattice and provides a methodology for verification of transformations within the lattice.
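The dissertation's Isabelle development is not shown here; as a generic illustration of the central notion, the following sketch checks the defining adjunction of a Galois connection for the textbook abstraction of integer sets by intervals. All names are invented, and this is not the Rosetta domain lattice itself.

```python
# Textbook Galois connection between sets of integers (concrete domain) and
# intervals (abstract domain): alpha abstracts, gamma concretizes, and the
# adjunction alpha(S) <= I holds exactly when S is a subset of gamma(I).

def alpha(s):
    """Abstraction: the tightest interval containing s (None = bottom)."""
    return (min(s), max(s)) if s else None

def gamma(interval):
    """Concretization: every integer lying inside the interval."""
    if interval is None:
        return set()
    lo, hi = interval
    return set(range(lo, hi + 1))

def interval_leq(i, j):
    """Order on intervals: i is contained in j (bottom below everything)."""
    if i is None:
        return True
    if j is None:
        return False
    return j[0] <= i[0] and i[1] <= j[1]

# Spot-check the adjunction: alpha(s) <= i  iff  s is a subset of gamma(i)
for s in [set(), {3}, {2, 5}, {0, 11}]:
    for i in [None, (0, 10), (2, 5)]:
        assert interval_leq(alpha(s), i) == s.issubset(gamma(i))
print("Galois adjunction holds on the spot-checks")
```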