
    A Proof of Entropy Minimization for Outputs in Deletion Channels via Hidden Word Statistics

    From the output produced by a memoryless deletion channel from a uniformly random input of known length n, one obtains a posterior distribution on the channel input. The difference between the Shannon entropy of this distribution and that of the uniform prior measures the amount of information about the channel input which is conveyed by the output of length m, and it is natural to ask for which outputs this is extremized. This question was posed in a previous work, where it was conjectured on the basis of experimental data that the entropy of the posterior is minimized and maximized by the constant strings 000…, 111… and the alternating strings 0101…, 1010…, respectively. In the present work we confirm the minimization conjecture in the asymptotic limit using results from hidden word statistics. We show how the analytic-combinatorial methods of Flajolet, Szpankowski and Vallée for dealing with the hidden pattern matching problem can be applied to resolve the case of fixed output length and n → ∞, by obtaining estimates for the entropy in terms of the moments of the posterior distribution and establishing its minimization via a measure of autocorrelation.
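
    The quantity being extremized here can be made concrete with a small brute-force computation. The sketch below is mine, not the paper's method, and the function names are illustrative; it uses the fact that, for fixed input and output lengths, the posterior weight of a candidate input is proportional to the number of ways the output embeds in it as a subsequence, so the posterior entropy can be computed exactly for small n:

        import math
        from itertools import product

        def count_embeddings(x, y):
            # Standard DP: dp[j] = number of ways to embed y[:j] in the prefix of x read so far.
            dp = [1] + [0] * len(y)
            for c in x:
                for j in range(len(y), 0, -1):
                    if y[j - 1] == c:
                        dp[j] += dp[j - 1]
            return dp[-1]

        def posterior_entropy(y, n):
            # H(X | Y = y) for a uniform length-n input: P(x | y) is proportional to
            # the number of embeddings of y in x (the factors d^(n-m)(1-d)^m cancel).
            weights = [count_embeddings(''.join(x), y) for x in product('01', repeat=n)]
            total = sum(weights)
            return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

        # Small-n check of the conjecture: the constant output should give the
        # smallest posterior entropy and the alternating output the largest.
        print(posterior_entropy('0000', 8), posterior_entropy('0101', 8))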

    From Clustering Supersequences to Entropy Minimizing Subsequences for Single and Double Deletions

    A binary string transmitted via a memoryless i.i.d. deletion channel is received as a subsequence of the original input. From this, one obtains a posterior distribution on the channel input, corresponding to a set of candidate supersequences weighted by the number of times the received subsequence can be embedded in them. In a previous work it was conjectured on the basis of experimental data that the entropy of the posterior is minimized and maximized by the constant and the alternating strings, respectively. In this work, in addition to revisiting the entropy minimization conjecture, we address several related combinatorial problems. We present an algorithm for counting the number of subsequence embeddings using a run-length encoding of strings. We then describe methods for clustering the space of supersequences such that the cardinality of the resulting sets depends only on the length of the received subsequence and its Hamming weight, but not on its exact form. Next, we consider supersequences that contain a single embedding of a fixed subsequence, referred to as singletons, and provide a closed-form expression for enumerating them using the same run-length encoding. We prove an analogous result for the minimization and maximization of the number of singletons by the alternating and the uniform strings, respectively. Finally, we prove the original minimal entropy conjecture for the special cases of single and double deletions using similar clustering techniques and the same run-length encoding, which allow us to characterize the distribution of the number of subsequence embeddings in the space of compatible supersequences and to demonstrate the effect of an entropy-decreasing operation.
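
    To make the run-length and singleton terminology concrete, here is a small illustrative sketch (my own naming, reusing count_embeddings from the first sketch above): it encodes a binary string as runs and counts singleton supersequences by brute force. Comparing count_singletons across constant and alternating y for small n is one way to check the stated extremal behaviour:

        from itertools import product

        def run_lengths(s):
            # Run-length encoding: '0011101' -> [('0', 2), ('1', 3), ('0', 1), ('1', 1)].
            runs = []
            for c in s:
                if runs and runs[-1][0] == c:
                    runs[-1] = (c, runs[-1][1] + 1)
                else:
                    runs.append((c, 1))
            return runs

        def count_singletons(y, n):
            # Number of length-n supersequences in which y embeds exactly once,
            # using count_embeddings as defined in the earlier sketch.
            return sum(1 for x in product('01', repeat=n)
                       if count_embeddings(''.join(x), y) == 1)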

    Discovery of Unconventional Patterns for Sequence Analysis: Theory and Algorithms

    The biology community is collecting a large amount of raw data, such as the genome sequences of organisms, microarray data, and interaction data such as gene-protein and protein-protein interactions. This amount is rapidly increasing, and the process of understanding the data is lagging behind the process of acquiring it. An inevitable first step towards making sense of the data is to study their regularities, focusing on the non-random structures appearing surprisingly often in the input sequences: patterns. In this thesis we discuss three incarnations of the pattern discovery task, exploring three types of patterns that can model different regularities of the input dataset. Mask patterns are designed to model short repeated biological sequences showing a high conservation of their content at some specific positions, while permutation patterns are designed to detect repeated patterns whose parts maintain their physical adjacency but not their ordering across the pattern occurrences. Transposons, by contrast, model mobile sequences in the input dataset, which can be discovered by comparing different copies of the same input string and detecting large insertions and deletions in their alignment.
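
    As a toy illustration of the first pattern class (not the thesis's formal definition): a mask pattern can be read as a string with don't-care positions, and its occurrences found by a sliding-window scan:

        def mask_occurrences(text, pattern, wildcard='.'):
            # Positions where `pattern` matches `text`, with `wildcard` matching any symbol.
            k = len(pattern)
            return [i for i in range(len(text) - k + 1)
                    if all(p == wildcard or p == c
                           for p, c in zip(pattern, text[i:i + k]))]

        # Two occurrences of a toy mask pattern, conserved except at positions 2-3:
        print(mask_occurrences('ACGTTGACCTTG', 'AC..TG'))  # [0, 6]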

    From Information Theory Puzzles in Deletion Channels to Deniability in Quantum Cryptography

    Research questions, originally rooted in quantum key exchange (QKE), have branched off into independent lines of inquiry ranging from information theory to fundamental physics. In a similar vein, the first part of this thesis is dedicated to information theory problems in deletion channels that arose in the context of QKE. From the output produced by a memoryless deletion channel with a uniformly random input of known length n, one obtains a posterior distribution on the channel input. The difference between the Shannon entropy of this distribution and that of the uniform prior measures the amount of information about the channel input which is conveyed by the output of length m. We first conjecture on the basis of experimental data that the entropy of the posterior is minimized by the constant strings 000..., 111... and maximized by the alternating strings 0101..., 1010.... Among other things, we derive analytic expressions for minimal entropy and propose alternative approaches for tackling the entropy extremization problem. We address a series of closely related combinatorial problems involving binary (sub/super)-sequences and prove the original minimal entropy conjecture for the special cases of single and double deletions using clustering techniques and a run-length encoding of strings. The entropy analysis culminates in a fundamental characterization of the extremal entropic cases in terms of the distribution of embeddings. We confirm the minimization conjecture in the asymptotic limit using results from hidden word statistics by showing how the analytic-combinatorial methods of Flajolet, Szpankowski and Vallée, relying on generating functions, can be applied to resolve the case of fixed output length and n → ∞.

    In the second part, we revisit the notion of deniability in QKE, a topic that remains largely unexplored. In a work by Donald Beaver it is argued that QKE protocols are not necessarily deniable due to an eavesdropping attack that limits key equivocation. We provide more insight into the nature of this attack and discuss how it extends to other prepare-and-measure QKE schemes, such as QKE obtained from uncloneable encryption. We adopt the framework for quantum authenticated key exchange developed by Mosca et al. and extend it to introduce the notion of coercer-deniable QKE, formalized in terms of the indistinguishability of real and fake coercer views. We also elaborate on the differences between our model and the standard simulation-based definition of deniable key exchange in the classical setting. We establish a connection between the concepts of covert communication and deniability by applying results from a work by Arrazola and Scarani on covert quantum communication and covert QKE to propose a simple construction for coercer-deniable QKE, and we prove the deniability of this scheme via a reduction to the security of covert QKE. We relate deniability to fundamental concepts in quantum information theory and suggest a generic approach based on entanglement distillation for achieving information-theoretic deniability, followed by an analysis of other closely related results, such as the relation between the impossibility of unconditionally secure quantum bit commitment and deniability. Finally, we present an efficient coercion-resistant and quantum-secure voting scheme based on fully homomorphic encryption (FHE) and recent advances in various FHE primitives such as hashing, zero-knowledge proofs of correct decryption, verifiable shuffles and threshold FHE.
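
    The channel model underlying the first part is simple to simulate. A minimal sketch (parameter names are mine): each input bit is deleted independently with the same probability, and the survivors are received in order:

        import random

        def deletion_channel(x, d, rng=random):
            # Memoryless i.i.d. deletion channel: each symbol of x is deleted
            # independently with probability d; the survivors keep their order.
            return ''.join(c for c in x if rng.random() >= d)

        n, d = 16, 0.3
        x = ''.join(random.choice('01') for _ in range(n))  # uniformly random input
        y = deletion_channel(x, d)                          # received subsequence of x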

    Bioinformatics

    This book is divided into different research areas relevant to Bioinformatics, such as biological networks, next-generation sequencing, high-performance computing, molecular modeling, structural bioinformatics and intelligent data analysis. Each book section introduces the basic concepts and then explains their application to problems of great relevance, so both novice and expert readers can benefit from the information and research works presented here.

    Eighth Biennial Report : April 2005 – March 2007


    Seventh Biennial Report : June 2003 – March 2005


    A Bayesian Rule Generation Framework for 'Omic' Biomedical Data Analysis

    High-dimensional biomedical 'omic' datasets are accumulating rapidly from studies aimed at early detection and better management of human disease. These datasets pose tremendous challenges for analysis due to their large number of variables, which represent measurements of biochemical molecules, such as proteins and mRNA, from bodily fluids or tissues extracted from a rather small cohort of samples. Machine learning methods have been applied to modeling these datasets, including rule learning methods, which have been successful in generating models that are easily interpretable by scientists. Rule learning methods have typically relied on a frequentist measure of certainty within IF-THEN (propositional) rules. In this dissertation, a Bayesian Rule Generation Framework (BRGF) is developed and tested that can produce rules with probabilities, thereby enabling a mathematically rigorous representation of uncertainty in rule models. The BRGF includes a novel Bayesian discretization method combined with one or more search strategies for building constrained Bayesian networks from data and converting them into probabilistic rules. Both global and local structures are built using different Bayesian network generation algorithms, and the rule models generated from the networks are tested on public and private 'omic' datasets. We show that using a specific type of structure (Bayesian decision graphs) in tandem with a specific type of search method (parallel greedy) allows us to achieve statistically significantly higher overall performance than current state-of-the-art rule learning methods. Using the BRGF not only boosts performance on 'omic' biomedical data to a statistically significant degree, but also provides the ability to incorporate prior information in a mathematically rigorous fashion for modeling purposes.
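
    The contrast between a frequentist and a Bayesian measure of rule certainty can be illustrated with a standard Beta-Binomial computation (a generic sketch, not the BRGF itself; the uniform prior and all names are my assumptions): instead of reporting the raw accuracy k/n of an IF-THEN rule, one attaches a posterior distribution to it:

        def bayesian_rule_certainty(k, n, alpha=1.0, beta=1.0):
            # Posterior over the accuracy of a rule whose IF-part matched n samples,
            # k of which had the predicted class, under a Beta(alpha, beta) prior.
            a, b = alpha + k, beta + n - k
            mean = a / (a + b)                            # posterior mean
            var = a * b / ((a + b) ** 2 * (a + b + 1))    # posterior variance
            return mean, var

        # A rule right on 18 of 20 matched samples: frequentist certainty is 0.90,
        # while the posterior mean is ~0.86 with a variance that shrinks as
        # evidence accumulates, quantifying how much data backs the rule.
        print(bayesian_rule_certainty(18, 20))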