    Decision problems for Clark-congruential languages

    A common question when studying a class of context-free grammars (CFGs) is whether equivalence is decidable within this class. We answer this question positively for the class of Clark-congruential grammars, which are of interest to grammatical inference. We also consider the problem of checking whether a given CFG is Clark-congruential, and show that this is decidable provided the CFG is deterministic.
    Comment: Version 2 incorporates revisions prompted by the comments of anonymous referees at ICGI and LearnAut.
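
    Clark-congruentiality is defined in terms of the syntactic congruence of a language: strings u and v are congruent when l·u·r is in L exactly if l·v·r is, for every context (l, r); roughly, a grammar is Clark-congruential when each nonterminal only derives strings from a single congruence class. As a small illustration only (not the paper's decision procedure), the sketch below approximates the congruence test for the Dyck language of balanced brackets by enumerating contexts up to a fixed bound; the function names and the bound are assumptions for the demo.

```python
from itertools import product

def in_dyck(w: str) -> bool:
    """Membership test for the Dyck language of balanced '(' and ')'."""
    depth = 0
    for c in w:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def contexts(alphabet: str, max_len: int):
    """All contexts (l, r) with |l|, |r| <= max_len."""
    words = [''.join(p) for n in range(max_len + 1)
             for p in product(alphabet, repeat=n)]
    return ((l, r) for l in words for r in words)

def approx_congruent(u: str, v: str, max_len: int = 6) -> bool:
    """Approximate syntactic congruence: l+u+r and l+v+r must agree on
    membership for every context up to max_len. The true congruence
    quantifies over all contexts; this bound is illustrative only."""
    return all(in_dyck(l + u + r) == in_dyck(l + v + r)
               for l, r in contexts('()', max_len))

print(approx_congruent('()', '()()'))  # True: both behave like the empty string
print(approx_congruent('(', '()'))     # False: the context ('', ')') separates them
```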

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses incurred in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (a goal that had been the focus of significant research effort up to that point). We see here, and in a number of recent works, an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

    Some contribution in operations research

    Not available.

    Dynamic block encryption with self-authenticating key exchange

    One of the greatest challenges facing cryptographers is the mechanism used for key exchange. When secret data is transmitted, the chances are that an attacker will try to intercept and decrypt the message. Having done so, they might simply exploit the information obtained, or attempt to tamper with the message and thereby mislead the recipient. Both cases are equally serious and may cause great harm. In cryptography, there are two commonly used methods of exchanging secret keys between parties. In the first method, symmetric cryptography, the key is sent in advance over some secure channel which only the intended recipient can read. The second method is public key exchange, where each party has a private and a public key: the public key is shared and the private key is kept locally. In both cases, keys are exchanged between two parties.

    In this thesis, we propose a method whereby the risk of exchanging keys is minimised. The key is embedded in the encrypted text using a process that we call 'chirp coding', and recovered by the recipient using a process based on correlation. The chirp coding parameters are exchanged between users by employing a USB flash memory retained by each user. If the keys are compromised they are still not usable, because an attacker can only gain access to part of the key. Alternatively, the software can be configured to operate in a one-time parameter mode, in which the parameters are agreed upon in advance; there is then no parameter exchange during file transmission, except, of course, for the key embedded in the ciphertext.

    The thesis also introduces a method of encryption which utilises dynamic blocks, where the block size is different for each block. Prime numbers are used to drive two random number generators: a Linear Congruential Generator (LCG), which takes in the seed and initialises the system, and a Blum Blum Shub (BBS) generator, which is used to generate random streams to encrypt messages, images or video clips, for example. In each case, the key created is text dependent and therefore changes with each message sent.

    The scheme presented in this research is composed of five basic modules. The first module is the key generation module, where the key to be generated is message dependent. The second module, the encryption module, performs data encryption. The third module, the key exchange module, embeds the key into the encrypted text. Once this is done, the message is transmitted and the recipient uses the key extraction module to retrieve the key; finally, the decryption module is executed to decrypt the message and authenticate it. In addition, the message may be compressed before encryption and decompressed by the recipient after decryption using standard compression tools.
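
    'Chirp coding' is the author's own embedding scheme and its details are not reproduced here, but the recover-by-correlation idea can be illustrated generically: a known chirp is buried in a noise-like carrier and located with a matched filter. The sketch below is a hypothetical stand-in, not the thesis's algorithm; the sample rate, sweep, amplitude and offset are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 8000, 0.05                      # sample rate (Hz), chirp length (s)
t = np.arange(int(fs * dur)) / fs

# Linear chirp sweeping 200 -> 1200 Hz; phase = 2*pi*(f0*t + 0.5*k*t^2).
f0, f1 = 200.0, 1200.0
k = (f1 - f0) / dur
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Embed the chirp at a secret offset inside a noise-like carrier signal.
carrier = rng.normal(0.0, 1.0, 4000)
offset = 1234
carrier[offset:offset + chirp.size] += 0.5 * chirp

# Matched-filter recovery: correlate the carrier with the known chirp
# and take the lag with the largest response.
corr = np.correlate(carrier, chirp, mode="valid")
print(int(np.argmax(corr)))  # ~1234
```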
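
    The two generators named above are classical constructions. As a minimal sketch, assuming throwaway parameters (the constants below are illustrative, not the values used in the thesis), an LCG can seed a Blum Blum Shub generator whose low-order bits form the keystream:

```python
def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Linear Congruential Generator: x_{n+1} = (a*x_n + c) mod m.
    The constants here are glibc's, chosen only for illustration."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def blum_blum_shub(seed: int, p: int = 11087, q: int = 11003):
    """Blum Blum Shub: x_{n+1} = x_n^2 mod n with n = p*q, where p and q
    are primes congruent to 3 mod 4 and the seed is coprime to n. Each
    step emits the low bit of x. These small primes are for demonstration;
    real use needs large, secret ones."""
    n = p * q
    x = seed % n
    while True:
        x = (x * x) % n
        yield x & 1

# Use the first LCG output to seed BBS, then draw one keystream byte.
init = next(lcg(42))
bits = blum_blum_shub(init)
byte = sum(next(bits) << i for i in range(8))
print(byte)
```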

    Computing and estimating information leakage with a quantitative point-to-point information flow model

    Information leakage occurs when a system exposes its secret information to an unauthorised entity. Information flow analysis is concerned with tracking flows of information through systems to determine whether they process information securely or leak information. We present a novel information flow model that permits an arbitrary amount of secret and publicly-observable information to occur at any point and in any order in a system. This is an improvement over previous models, which generally assume that systems process a single piece of secret information present before execution and produce a single piece of publicly-observable information upon termination. Our model precisely quantifies the information leakage from secret to publicly-observable values at user-defined points - hence, a "point-to-point" model - using the information-theoretic measures of mutual information and min-entropy leakage; it is ideal for analysing systems of low to moderate complexity. We also present a relaxed version of our information flow model that estimates, rather than computes, the measures of mutual information and min-entropy leakage via sampling of a system. We use statistical techniques to bound the accuracy of the estimates this model provides. We demonstrate how our relaxed model is more suitable for analysing complex systems by implementing it in a quantitative information flow analysis tool for Java programs.
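
    Both leakage measures named in the abstract are standard information-theoretic quantities, computable directly from a joint distribution over secret and observable values. The sketch below is a generic textbook computation on a made-up channel (a 2-bit secret leaking its parity), not the thesis's point-to-point model:

```python
import math

# Joint distribution p(secret, observable) for a toy channel that leaks
# the parity of a 2-bit secret: secrets 0..3 uniform, observable = low bit.
joint = {(s, s & 1): 0.25 for s in range(4)}

p_s: dict[int, float] = {}
p_o: dict[int, float] = {}
for (s, o), p in joint.items():
    p_s[s] = p_s.get(s, 0.0) + p
    p_o[o] = p_o.get(o, 0.0) + p

# Mutual information I(S;O) = sum p(s,o) * log2( p(s,o) / (p(s) p(o)) ).
mi = sum(p * math.log2(p / (p_s[s] * p_o[o]))
         for (s, o), p in joint.items() if p > 0)

# Min-entropy leakage = log2( V(S|O) / V(S) ), where V(S) = max_s p(s)
# and V(S|O) = sum_o max_s p(s,o) are the prior/posterior vulnerabilities.
v_prior = max(p_s.values())
v_post = sum(max(p for (s, o), p in joint.items() if o == obs)
             for obs in p_o)
mel = math.log2(v_post / v_prior)

print(f"mutual information  = {mi:.2f} bits")   # 1.00
print(f"min-entropy leakage = {mel:.2f} bits")  # 1.00
```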