
    Gaming security by obscurity

    Shannon sought security against the attacker with unlimited computational powers: *if an information source conveys some information, then Shannon's attacker will surely extract that information*. Diffie and Hellman refined Shannon's attacker model by taking into account the fact that real attackers are computationally limited. This idea became one of the greatest new paradigms in computer science, and led to modern cryptography. Shannon also sought security against the attacker with unlimited logical and observational powers, expressed through the maxim that "the enemy knows the system". This view is still endorsed in cryptography. The popular formulation, going back to Kerckhoffs, is that "there is no security by obscurity", meaning that the algorithms cannot be kept obscured from the attacker, and that security should rely only upon the secret keys. In fact, modern cryptography goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there is an algorithm that can break the system, then the attacker will surely find that algorithm*. The attacker is not viewed as an omnipotent computer any more, but he is still construed as an omnipotent programmer. So the Diffie-Hellman step from unlimited to limited computational powers has not been extended into a step from unlimited to limited logical or programming powers. Is the assumption that all feasible algorithms will eventually be discovered and implemented really different from the assumption that everything that is computable will eventually be computed? The present paper explores some ways to refine the current models of the attacker, and of the defender, by taking into account their limited logical and programming powers. If the adaptive attacker actively queries the system to seek out its vulnerabilities, can the system gain some security by actively learning the attacker's methods and adapting to them?
    Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos corrected.
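    To make the closing question a little more concrete, here is a toy Python sketch (not from the paper, and not its model) of a defender that reacts to an attacker's probing by re-randomizing what the attacker has already mapped out; the entry-point model and the adaptation policy are invented for illustration only.

```python
# Toy illustration only: a defender that adapts to an attacker's probes.
# After every burst of probes it re-randomizes which of its N entry points
# is the live one, so the attacker's accumulated knowledge goes stale.

import random

N = 16                       # candidate entry points the attacker must search
secret = random.randrange(N)
probes_seen = 0

def probe(guess: int) -> bool:
    """Attacker queries one entry point; the defender observes and adapts."""
    global secret, probes_seen
    probes_seen += 1
    if probes_seen % 4 == 0:           # after a burst of probes, move the secret
        secret = random.randrange(N)
    return guess == secret

# A naive sweeping attacker is no longer guaranteed to win in N probes,
# because the target it is mapping out keeps shifting beneath it.
hits = sum(probe(g) for g in range(N))
print("hits after one sweep:", hits)
```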

    Monoidal computer III: A coalgebraic view of computability and complexity

    Monoidal computer is a categorical model of intensional computation, where many different programs correspond to the same input-output behavior. The upshot of yet another model of computation is that a categorical formalism should provide a much-needed high-level language for the theory of computation, flexible enough to allow abstracting away the low-level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. In the present paper, we provide a coalgebraic characterization of monoidal computer. It turns out that the availability of interpreters and specializers, which make a monoidal category into a monoidal computer, is equivalent to the existence of a *universal state space*, which carries a weakly final state machine for any pair of input and output types. Being able to program state machines in monoidal computers allows us to represent Turing machines, to capture their execution, count their steps, and track, e.g., the memory cells that they use. The coalgebraic view of monoidal computer thus provides a convenient diagrammatic language for studying computability and complexity.
    Comment: 34 pages, 24 figures; in this version: added the Appendix.
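    As a rough illustration of the coalgebraic view of state machines mentioned above (a sketch invented for this summary, not the paper's categorical construction), the following Python snippet treats a machine as a step map from a state and an input to an output and a next state, and unfolds it along an input stream; the names `run` and `counter_step` are hypothetical.

```python
# Illustrative sketch: a Mealy-style state machine viewed coalgebraically,
# i.e. as a map  step : S x I -> O x S, unfolded along a finite input stream.

from typing import Callable, Iterable, List, Tuple, TypeVar

S = TypeVar("S")  # state space
I = TypeVar("I")  # input type
O = TypeVar("O")  # output type

def run(step: Callable[[S, I], Tuple[O, S]],
        state: S,
        inputs: Iterable[I]) -> Tuple[List[O], S]:
    """Unfold the coalgebra `step`, collecting outputs and the final state."""
    outputs: List[O] = []
    for i in inputs:
        o, state = step(state, i)
        outputs.append(o)
    return outputs, state

# Example: a machine that echoes its input and counts the steps it takes,
# loosely mirroring the use of machines to count execution steps.
def counter_step(count: int, x: str) -> Tuple[str, int]:
    return x, count + 1

outs, steps = run(counter_step, 0, ["a", "b", "c"])
print(outs, steps)  # ['a', 'b', 'c'] 3
```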

    Bicompletions of distance matrices

    In the practice of information extraction, the input data are usually arranged into pattern matrices, and analyzed by the methods of linear algebra and statistics, such as principal component analysis. In some applications, the tacit assumptions of these methods lead to wrong results. The usual reason is that the matrix composition of linear algebra presents information as flowing in waves, whereas it sometimes flows in particles, which seek the shortest paths. This wave-particle duality in computation and information processing was originally observed by Abramsky. In this paper we pursue a particle view of information, formalized in distance spaces, which generalize metric spaces, but are slightly less general than Lawvere’s generalized metric spaces. In this framework, the task of extracting the ‘principal components’ from a given matrix of data boils down to a bicompletion, in the sense of enriched category theory. We describe the bicompletion construction for distance matrices. The practical goal that motivates this research is to develop a method to estimate the hardness of attack constructions in security.
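    The "wave versus particle" contrast can be made concrete with a small Python sketch (a hypothetical illustration, not the paper's bicompletion construction): ordinary matrix composition sums contributions over all routes, whereas the min-plus composition of distance matrices picks the shortest route; the toy matrix and the function name below are invented.

```python
# Hypothetical illustration: ordinary composition A @ B adds up contributions
# over all intermediate points ("waves"), while the min-plus composition below
# keeps only the cheapest route ("particles"), the natural way distances compose.

import numpy as np

def min_plus(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """(A * B)[i, k] = min_j (A[i, j] + B[j, k]): the shortest two-leg route."""
    C = np.full((A.shape[0], B.shape[1]), np.inf)
    for j in range(A.shape[1]):
        C = np.minimum(C, A[:, [j]] + B[[j], :])
    return C

# A toy distance matrix; np.inf marks "no direct route".
D = np.array([[0.0,    1.0,    5.0],
              [np.inf, 0.0,    1.0],
              [np.inf, np.inf, 0.0]])

# Iterating the min-plus square converges to all-pairs shortest distances,
# a tiny analogue of completing the given distance data.
print(min_plus(D, D))   # the (0, 2) entry drops from 5.0 to 2.0 via the midpoint
```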

    Security-by-experiment: lessons from responsible deployment in cyberspace

    Conceiving new technologies as social experiments is a means to discuss responsible deployment of technologies that may have unknown and potentially harmful side-effects. Thus far, the uncertain outcomes addressed in the paradigm of new technologies as social experiments have been mostly safety-related, meaning that potential harm is caused by the design plus accidental events in the environment. In some domains, such as cyberspace, adversarial agents (attackers) may be at least as important when it comes to undesirable effects of deployed technologies. In such cases, conditions for responsible experimentation may need to be implemented differently, as attackers behave strategically rather than probabilistically. In this contribution, we outline how adversarial aspects are already taken into account in technology deployment in the field of cyber security, and what the paradigm of new technologies as social experiments can learn from this. In particular, we show the importance of adversarial roles in social experiments with new technologies.

    AppCon: Mitigating evasion attacks to ML cyber detectors

    Adversarial attacks represent a critical issue that prevents the reliable integration of machine learning methods into cyber defense systems. Past work has shown that even proficient detectors are highly affected by small perturbations to malicious samples, and that existing countermeasures are immature. We address this problem by presenting AppCon, an original approach to harden intrusion detectors against adversarial evasion attacks. Our proposal leverages the integration of ensemble learning into realistic network environments, by combining layers of detectors devoted to monitoring the behavior of the applications employed by the organization. Our proposal is validated through extensive experiments performed in heterogeneous network settings simulating botnet detection scenarios, considering detectors based on distinct machine- and deep-learning algorithms. The results demonstrate the effectiveness of AppCon in mitigating the dangerous threat of adversarial attacks in over 75% of the considered evasion attempts, while not being affected by the limitations of existing countermeasures, such as performance degradation in non-adversarial settings. For these reasons, our proposal represents a valuable contribution to the development of more secure cyber defense platforms.
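    A minimal sketch of the general idea, assuming per-application detectors behind a simple dispatcher (this is an invented illustration, not AppCon's actual architecture or code): each flow is scored by a detector dedicated to the application that produced it, with a generic fallback for everything else.

```python
# Illustrative sketch: route each network flow to a detector trained on the
# traffic of its own application, falling back to a generic detector otherwise.
# The Flow layout and the `is_malicious` interface are hypothetical.

from typing import Any, Callable, Dict, Mapping

Flow = Mapping[str, Any]           # e.g. {"app": "browser", "bytes": 4096, ...}
Detector = Callable[[Flow], bool]  # returns True if the flow looks malicious

class AppEnsemble:
    def __init__(self, per_app: Dict[str, Detector], fallback: Detector):
        self.per_app = per_app
        self.fallback = fallback

    def is_malicious(self, flow: Flow) -> bool:
        # A detector specialized to the application's own behavior makes
        # perturbations crafted against a generic model less likely to transfer.
        detector = self.per_app.get(flow.get("app", ""), self.fallback)
        return detector(flow)

# Toy detectors: threshold rules standing in for trained classifiers.
ensemble = AppEnsemble(
    per_app={"browser": lambda f: f["bytes"] > 10_000,
             "mail":    lambda f: f["connections"] > 50},
    fallback=lambda f: f.get("bytes", 0) > 100_000,
)
print(ensemble.is_malicious({"app": "browser", "bytes": 20_000}))  # True
```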

    Detecting piracy in standalone and network licensing systems

    Software has traditionally been sold as executable files which require an additional license to run. Licenses come in many forms, but all of them aim at the single goal of preventing unauthorized use, also known as software piracy. Piracy can lead to big losses for software companies, so it is important to study what can be done about it. Instead of trying to tackle the whole enterprise, I focus on the first step of preventing piracy, that is, detecting it. This topic is covered in the context of two traditional software licensing schemes: standalone licensing and network licensing. There are also modern cloud-based licensing schemes, such as Software as a Service (SaaS), that do not need a license in the same sense, but the traditional models do not seem to be going away any time soon. Standalone licensing means that a unique license is required for each installation of the application; think of product keys on good old installation CDs. In network licensing, the application requires a license only while it is actually running. This is achieved with the help of a dedicated license server which distributes licenses to client applications within the local network. I map out all notable attack vectors against both licensing systems and suggest methods for detecting these attacks. Network verification turns out to be an indispensable method for detecting piracy, and the Transport Layer Security (TLS) protocol proves to be the best line of defense against network-based attacks. Even the often-forgotten feature of client certificate authentication turns out to be useful. Detection of local attacks, such as tampering and virtual machine duplication, is also covered. Overall, this work is intended as a theoretical basis for implementing piracy detection systems in standalone and network licensing environments.
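    As a minimal sketch of the network-licensing defense described above, assuming Python's standard `ssl` module with invented file names, port, and one-line protocol (this is not the thesis's implementation), a license server can require TLS with client certificate authentication before granting a license:

```python
# Illustrative sketch: a license server that refuses to hand out a license
# unless the client authenticates with a valid certificate over TLS.
# CA_FILE, CERT_FILE, KEY_FILE, the port, and the response format are assumptions.

import socket
import ssl

CA_FILE, CERT_FILE, KEY_FILE = "ca.pem", "server.pem", "server.key"  # hypothetical paths

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
context.load_verify_locations(cafile=CA_FILE)
context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()   # TLS handshake verifies the client cert
        peer = conn.getpeercert()          # certificate of the authenticated client
        print("license request from", addr, peer.get("subject"))
        conn.sendall(b"LICENSE-GRANTED\n") # toy response; a real server would also
        conn.close()                       # check seat counts, expiry, revocation, etc.
```

    The client-certificate requirement is what makes this more than transport encryption: a pirated copy that merely replays traffic or points at a fake server cannot present a certificate signed by the vendor's CA, so both endpoints of the license exchange are authenticated.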