
    Steganography: a class of secure and robust algorithms

    This research work presents a new class of non-blind information hiding algorithms that are stego-secure and robust. They are based on iterations over finite domains that possess Devaney's topological chaos property. Thanks to a complete formalization of the approach, we prove the security of a large class of steganographic algorithms against watermark-only attacks. Finally, a complete study of robustness is given in the DWT and DCT frequency domains. Comment: Published in The Computer Journal special issue on steganography.
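
    The abstract's core primitive can be sketched concretely. Below is a minimal Python illustration of chaotic iterations over a finite domain: a Boolean state in which, at each step, a strategy selects one cell to update (here with the negation function). The strategy and update function are illustrative placeholders, not the paper's exact scheme.

# Minimal sketch (not the paper's exact scheme) of chaotic iterations over
# a finite domain: a state x in {0,1}^N where, at each step, a strategy
# selects one cell to update; all other cells are left unchanged.

def chaotic_iterations(x, strategy, f=lambda bit: 1 - bit):
    """Yield successive states of the iterated system (x, strategy)."""
    x = list(x)
    for cell in strategy:
        x[cell] = f(x[cell])        # update only the selected cell
        yield tuple(x)

# Example: 4-cell state, strategy standing in for a key-dependent sequence.
state = (0, 1, 1, 0)
strategy = [2, 0, 3, 2, 1]          # hypothetical secret-derived strategy
for step, s in enumerate(chaotic_iterations(state, strategy), 1):
    print(step, s)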

    Chasing diagrams in cryptography

    Cryptography is a theory of secret functions. Category theory is a general theory of functions. Cryptography has reached a stage where its structures often take several pages to define and its formulas sometimes run from page to page. Category theory has some complicated definitions as well, but one of its specialties is taming the flood of structure. Cryptography seems to be in need of high-level methods, whereas category theory always needs concrete applications. So why is there no categorical cryptography? One reason may be that the foundations of modern cryptography are built from probabilistic polynomial-time Turing machines, and category theory does not have a good handle on such things. On the other hand, such foundational problems might be the very reason why cryptographic constructions often resemble low-level machine programming. I present some preliminary explorations towards categorical cryptography. It turns out that some of the main security concepts are easily characterized through the categorical technique of *diagram chasing*, which was first used in Lambek's seminal 'Lecture Notes on Rings and Modules'. Comment: 17 pages, 4 figures; to appear in: 'Categories in Logic, Language and Physics. Festschrift on the occasion of Jim Lambek's 90th birthday', Claudia Casadio, Bob Coecke, Michael Moortgat, and Philip Scott (editors); this version: fixed typos found by a kind reader.

    Compositional synthesis of discrete event systems via synthesis equivalence

    A two-pass algorithm for compositional synthesis of modular supervisors for large-scale systems of composed finite-state automata is proposed. The first pass provides an efficient method to determine whether a supervisory control problem has a solution, without explicitly constructing the synchronous composition of all components. If a solution exists, the second pass yields an over-approximation of the least restrictive solution which, if nonblocking, is a modular representation of the least restrictive supervisor. Using a new type of equivalence of nondeterministic processes, called synthesis equivalence, a wide range of abstractions can be employed to mitigate state-space explosion throughout the algorithm.
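
    As a concrete reference point for what the first pass avoids building monolithically, here is a minimal Python sketch of the synchronous composition of two deterministic finite-state automata: shared events synchronise, private events interleave. The representation is illustrative, not the paper's.

def sync_compose(a, b):
    """Each automaton is (events, transitions, initial), with transitions a
    dict (state, event) -> state. Returns the reachable synchronous product."""
    (ev_a, tr_a, q0a), (ev_b, tr_b, q0b) = a, b
    shared = ev_a & ev_b
    init = (q0a, q0b)
    trans, seen, todo = {}, {init}, [init]
    while todo:
        qa, qb = q = todo.pop()
        for e in ev_a | ev_b:
            if e in shared:
                if (qa, e) not in tr_a or (qb, e) not in tr_b:
                    continue                      # both must enable a shared event
                nxt = (tr_a[qa, e], tr_b[qb, e])
            elif e in ev_a:
                if (qa, e) not in tr_a:
                    continue
                nxt = (tr_a[qa, e], qb)           # private event of a
            else:
                if (qb, e) not in tr_b:
                    continue
                nxt = (qa, tr_b[qb, e])           # private event of b
            trans[q, e] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen, trans, init

# Toy usage: two two-state automata synchronising on event "s".
A = ({"s", "a"}, {(0, "a"): 1, (1, "s"): 0}, 0)
B = ({"s", "b"}, {(0, "b"): 1, (1, "s"): 0}, 0)
states, trans, init = sync_compose(A, B)
print(len(states), "product states reachable")   # -> 4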

    Runtime protection via dataflow flattening

    Software running on an open architecture, such as the PC, is vulnerable to inspection and modification. Since software may process valuable or sensitive information, many defenses against data analysis and modification have been proposed. This paper complements existing work and focuses on hiding data location throughout program execution. To achieve this, we combine three techniques: (i) periodic reordering of the heap, (ii) migrating local variables from the stack to the heap and (iii) pointer scrambling. By essentially flattening the dataflow graph of the program, the techniques serve to complicate static dataflow analysis and dynamic data tracking. Our methodology can be viewed as a data-oriented analogue of control-flow flattening techniques. Dataflow flattening is useful in practical scenarios like DRM, information-flow protection, and exploit resistance. Our prototype implementation compiles C programs into a binary for which every access to the heap is redirected through a memory management unit. Stack-based variables may be migrated to the heap, while pointer accesses and arithmetic may be scrambled and redirected. We evaluate our approach experimentally on the SPEC CPU2006 benchmark suite.
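
    To make the pointer-scrambling and heap-reordering ideas concrete, here is a minimal Python sketch of a software "MMU" through which every access is redirected. The XOR mask and translation table are illustrative stand-ins for the paper's binary-level machinery.

import random

SECRET_MASK = 0xC3A5                     # illustrative per-run scrambling key

class MMU:
    """All heap reads/writes go through this object; the program itself
    only ever holds scrambled pointers."""
    def __init__(self, size):
        self.heap = [0] * size
        self.table = list(range(size))   # logical address -> physical slot

    def _slot(self, sptr):
        return self.table[sptr ^ SECRET_MASK]   # descramble, then translate

    def load(self, sptr):
        return self.heap[self._slot(sptr)]

    def store(self, sptr, value):
        self.heap[self._slot(sptr)] = value

    def reorder(self):
        """Periodically permute the physical heap, fixing up the table so
        scrambled pointers already held by the program stay valid."""
        perm = list(range(len(self.heap)))
        random.shuffle(perm)             # old slot i moves to slot perm[i]
        new_heap = [0] * len(self.heap)
        for old, new in enumerate(perm):
            new_heap[new] = self.heap[old]
        self.heap = new_heap
        self.table = [perm[slot] for slot in self.table]

mmu = MMU(8)
p = 3 ^ SECRET_MASK                      # scrambled pointer to address 3
mmu.store(p, 42)
mmu.reorder()
assert mmu.load(p) == 42                 # still valid after reordering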

    A Pseudo Random Numbers Generator Based on Chaotic Iterations. Application to Watermarking

    In this paper, a new chaotic pseudo-random number generator (PRNG) is proposed. It combines the well-known ISAAC and XORshift generators with chaotic iterations. This PRNG possesses important properties of topological chaos and can successfully pass the NIST and TestU01 batteries of tests. This makes our generator suitable for information security applications like cryptography. As an illustrative example, an application in the field of watermarking is presented. Comment: 11 pages, 7 figures. In WISM 2010, Int. Conf. on Web Information Systems and Mining, volume 6318 of LNCS, Sanya, China, pages 202--211, October 2010.
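
    A minimal sketch of the flavour of construction described (not the paper's exact generator, which combines ISAAC with XORshift): a Marsaglia 32-bit xorshift stream drives chaotic iterations by selecting which bits of an internal state to negate at each step.

def xorshift32(seed):
    """Marsaglia's 32-bit xorshift generator."""
    x = seed & 0xFFFFFFFF
    while True:
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        yield x

def ci_prng(state_seed, xs_seed, n_bits=32):
    """Chaotic-iterations PRNG sketch: at each step, negate the state bits
    selected by the inner generator, then emit the state."""
    mask = (1 << n_bits) - 1
    state = state_seed & mask
    xs = xorshift32(xs_seed)
    while True:
        state ^= next(xs) & mask   # negating selected bits == XOR with mask
        yield state

gen = ci_prng(state_seed=0xDEADBEEF, xs_seed=123456789)
print([hex(next(gen)) for _ in range(4)])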

    NEEXP is Contained in MIP*

    We study multiprover interactive proof systems. The power of classical multiprover interactive proof systems, in which the provers do not share entanglement, was characterized in a famous work by Babai, Fortnow, and Lund (Computational Complexity 1991), whose main result was the equality MIP = NEXP. The power of quantum multiprover interactive proof systems, in which the provers are allowed to share entanglement, has proven to be much more difficult to characterize. The best known lower bound on MIP* is NEXP ⊆ MIP*, due to Ito and Vidick (FOCS 2012). As for upper bounds, MIP* could be as large as RE, the class of recursively enumerable languages. The main result of this work is the inclusion NEEXP = NTIME[2^(2^poly(n))] ⊆ MIP*. This is an exponential improvement over the prior lower bound and shows that proof systems with entangled provers are at least exponentially more powerful than classical provers. In our protocol the verifier delegates a classical, exponentially large MIP protocol for NEEXP to two entangled provers: the provers obtain their exponentially large questions by measuring their shared state, and use a classical PCP to certify the correctness of their exponentially-long answers. For the soundness of our protocol, it is crucial that each player should not only sample its own question correctly but also avoid performing measurements that would reveal the other player's sampled question. We ensure this by commanding the players to perform a complementary measurement, relying on the Heisenberg uncertainty principle to prevent the forbidden measurements from being performed.
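
    For orientation, the classes in play line up as follows (the strictness of the first inclusion follows from the nondeterministic time hierarchy theorem; the last inclusion is the trivial RE upper bound mentioned above):

        NEXP ⊊ NEEXP = NTIME[2^(2^poly(n))] ⊆ MIP* ⊆ RE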

    Refinement for Probabilistic Systems with Nondeterminism

    Before we combine actions and probabilities, two very obvious questions should be asked. Firstly, what does "the probability of an action" mean? Secondly, how does probability interact with nondeterminism? Neither question has a single universally agreed-upon answer, but by considering these questions at the outset we build a novel and hopefully intuitive probabilistic event-based formalism. In previous work we have characterised refinement via the notion of testing: basically, if one system passes all the tests that another system passes (and maybe more), we say the first system is a refinement of the second. This is, in our view, an important way of characterising refinement, as it answers the question "what sort of refinement should I be using?" We use testing in this paper as the basis for our refinement. We develop tests for probabilistic systems by analogy with the tests developed for non-probabilistic systems, and we make sure that our probabilistic tests, when performed on non-probabilistic automata, give us refinement relations which agree with those for non-probabilistic automata. We formalise this property as a vertical refinement. Comment: In Proceedings Refine 2011, arXiv:1106.348
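
    The testing characterisation of refinement used here is easy to state operationally. The following Python sketch checks it directly; systems as sets of traces and tests as predicates are illustrative placeholders, not the paper's formalism.

def refines(impl, spec, tests):
    """Impl refines Spec iff Impl passes every test that Spec passes
    (and possibly more), per the testing view above."""
    return all(test(impl) for test in tests if test(spec))

# Toy usage: systems as sets of traces, tests as trace predicates.
spec = {("a",), ("a", "b")}
impl = {("a",), ("a", "b"), ("a", "b", "c")}
tests = [lambda s: ("a",) in s,
         lambda s: ("a", "b") in s,
         lambda s: ("a", "b", "c") in s]
print(refines(impl, spec, tests))   # True: impl passes all tests spec passes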