2,613 research outputs found

    Fifteen years of quantum LDPC coding and improved decoding strategies

    No full text
    The near-capacity performance of classical low-density parity-check (LDPC) codes and their efficient iterative decoding makes quantum LDPC (QLDPC) codes a promising candidate for quantum error correction. In this paper, we present a comprehensive survey of QLDPC codes from the perspective of code design as well as in terms of their decoding algorithms. We also conceive a modified non-binary decoding algorithm for homogeneous Calderbank-Shor-Steane-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-four cycles. Our modified decoder outperforms the state-of-the-art decoders in terms of word error rate performance, despite imposing a reduced decoding complexity. Finally, we intricately amalgamate our modified decoder with the classic uniformly reweighted belief propagation for the sake of achieving an improved performance.
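    To make the decoding side concrete, below is a minimal Python sketch of syndrome-based iterative decoding for one (X-error) half of a CSS code. It uses simple bit-flipping as a deliberately simplified stand-in for the paper's non-binary belief propagation, and the parity-check matrix is a toy example, not a QLDPC code from the survey.

        # Sketch: syndrome-based bit-flipping decoding, a simplified
        # stand-in for the non-binary BP decoder discussed in the paper.
        import numpy as np

        def bit_flip_decode(H, syndrome, max_iters=50):
            """Repeatedly flip the bit touching the most unsatisfied checks."""
            error = np.zeros(H.shape[1], dtype=np.uint8)
            s = syndrome.copy()                     # residual syndrome
            for _ in range(max_iters):
                if not s.any():                     # all checks satisfied
                    break
                votes = H[s == 1].sum(axis=0)       # per-bit unsatisfied-check count
                error[int(np.argmax(votes))] ^= 1   # flip the most suspicious bit
                s = (H @ error) % 2 ^ syndrome      # recompute residual syndrome
            return error

        H = np.array([[1, 1, 0, 1, 0, 0],           # toy parity-check matrix
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
        true_error = np.array([0, 1, 0, 0, 0, 0], dtype=np.uint8)
        print(bit_flip_decode(H, (H @ true_error) % 2))   # -> [0 1 0 0 0 0]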

    Effect of the Side Effect Machines in Edit Metric Decoding

    Get PDF
    The development of general edit metric decoders is a challenging problem, especially with the inclusion of additional biological restrictions that can occur in DNA error correcting codes. Side effect machines (SEMs), an extension of finite state machines, can provide efficient decoding algorithms for such edit metric codes. However, finding a good machine poses its own set of challenges and is itself an open problem with no general solution. Previous studies utilizing evolutionary computation techniques, such as genetic algorithms and evolutionary programming, to search for good SEMs have found success in terms of decoding accuracy. However, they all worked with extremely constricted problem spaces, i.e., a single code or codes of the same length. Therefore, a general approach that works well across codes of different lengths is yet to be formalized. In this research, several codes of varying lengths are used to study the effectiveness of evolutionary programming (EP) as a general approach for finding efficient edit metric decoders. Two classification methods, direct and fuzzy, are compared, while some of the EP settings are also varied to observe how the decoding accuracy is affected. The final SEMs are verified against an additional dataset to test their general effectiveness. Regardless of the code length, the best results are found using the fuzzy classification methods. For codes of length 10, a maximum accuracy of up to 99.4% is achieved for distance 1, whereas distances 2 and 3 achieve up to 97.1% and 85.9%, respectively. Unsurprisingly, the accuracy suffers for longer codes: the maximum accuracies achieved by codes of length 14 were 92.4%, 85.7%, and 69.2% for distances 1, 2, and 3, respectively. Additionally, the machines are examined for potential bloat by comparing the number of visited states against the total number of states. The study found some machines with at least one unvisited state; this bloat is seen more in larger machines than in smaller ones. Furthermore, the results are analyzed for potential trends and relationships among the parameters. The most consistently observed trend is that, when allowed, longer codes generally show a propensity for larger machines.
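    As an illustration of the core mechanism, the following Python sketch runs a side effect machine over a DNA word and records how often each state is visited; that visit-count vector is the kind of feature on which classification (direct or fuzzy) can operate. The 3-state transition table is an illustrative assumption, not one of the evolved machines from the study.

        # Sketch: a side effect machine whose "side effect" is a vector
        # of per-state visit counts, usable as classification features.
        DNA = "ACGT"

        def sem_features(word, transitions, n_states, start=0):
            """Run the SEM over `word`; return per-state visit counts."""
            counts = [0] * n_states
            state = start
            for base in word:
                state = transitions[state][DNA.index(base)]
                counts[state] += 1
            return counts

        # transitions[state][symbol index] -> next state (toy machine)
        transitions = [[0, 1, 2, 0],
                       [1, 2, 0, 1],
                       [2, 0, 1, 2]]
        print(sem_features("ACGTAC", transitions, n_states=3))  # -> [4, 2, 0]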

    Cyber Security

    Get PDF
    This open access book constitutes the refereed proceedings of the 17th International Annual Conference on Cyber Security, CNCERT 2021, held in Beijing, China, in July 2021. The 14 papers presented were carefully reviewed and selected from 51 submissions. The papers are organized according to the following topical sections: data security; privacy protection; anomaly detection; traffic analysis; social network security; vulnerability detection; text classification.

    Template Based Recognition of On-Line Handwriting

    Get PDF
    Software for recognition of handwriting has been available for several decades now, and research on the subject has produced several different strategies for achieving competitive recognition accuracies, especially in the case of isolated single characters. The problem of recognizing samples of handwriting with arbitrary connections between constituent characters ("unconstrained handwriting") adds considerable complexity in the form of the segmentation problem. In other words, a recognition system that is not constrained to the isolated single character case needs to be able to recognize where in the sample one letter ends and another begins. In the research community, and probably also in commercial systems, the most common technique for recognizing unconstrained handwriting combines Neural Networks for partial character matching with Hidden Markov Modeling for assembling partial results into string hypotheses. Neural Networks are often favored by the research community since the recognition functions are more or less automatically inferred from a training set of handwritten samples. From a commercial perspective, a downside to this property is the lack of control, since there is no explicit information on the types of samples that can be correctly recognized by the system. In a template based system, each style of writing a particular character is explicitly modeled, which provides some intuition regarding the types of errors (confusions) that the system is prone to make. Most template based recognition methods today only work for the isolated single character recognition problem, and extensions to unconstrained recognition are usually not straightforward. This thesis presents a step-by-step recipe for producing a template based recognition system which extends naturally to unconstrained handwriting recognition through simple graph techniques. A system based on this construction has been implemented and tested on the difficult case of unconstrained online Arabic handwriting recognition, with good results.
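    As a flavour of how template matching works for pen trajectories, the sketch below assigns an input stroke to the nearest of two templates under dynamic time warping (DTW), a common elastic-matching choice for on-line handwriting. The thesis's actual distance function and its graph-based extension to unconstrained input are not reproduced here, and the templates are illustrative assumptions.

        # Sketch: nearest-template classification of a pen stroke via DTW.
        import math

        def dtw(a, b):
            """Dynamic-time-warping distance between two point sequences."""
            INF = float("inf")
            n, m = len(a), len(b)
            D = [[INF] * (m + 1) for _ in range(n + 1)]
            D[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = math.dist(a[i - 1], b[j - 1])
                    D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
            return D[n][m]

        templates = {
            "l": [(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)],   # vertical stroke
            "-": [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)],   # horizontal stroke
        }
        stroke = [(0.1, 0.0), (0.1, 0.4), (0.0, 0.9)]
        print(min(templates, key=lambda c: dtw(stroke, templates[c])))  # -> l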

    Information retrieval and text mining technologies for chemistry

    Get PDF
    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, more particularly the CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping the extracted chemical names into chemical structures and their subsequent annotation, together with text mining applications for linking chemistry with biological information, are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.

    A.V. and M.K. acknowledge funding from the European Community’s Horizon 2020 Program (project reference: 654021 - OpenMinted). M.K. additionally acknowledges the Encomienda MINETAD-CNIO as part of the Plan for the Advancement of Language Technology. O.R. and J.O. thank the Foundation for Applied Medical Research (FIMA), University of Navarra (Pamplona, Spain). This work was partially funded by Consellería de Cultura, Educación e Ordenación Universitaria (Xunta de Galicia), and FEDER (European Union), and the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of UID/BIO/04469/2013 unit and COMPETE 2020 (POCI-01-0145-FEDER-006684). We thank Iñigo García-Yoldi for useful feedback and discussions during the preparation of the manuscript.

    Near-capacity fixed-rate and rateless channel code constructions

    No full text
    Fixed-rate and rateless channel code constructions are designed for satisfying conflicting design tradeoffs, leading to codes that benefit from practical implementations whilst offering a good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of both pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of having potentially high memory requirements, while ensuring that each user’s bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable of not only varying their code-rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, but without any explicit channel knowledge at the transmitter. Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit the knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
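    To ground the rateless idea, here is a minimal Python sketch of the classic fountain-coding principle that such codes build on: each encoded symbol is the XOR of a randomly chosen set of source symbols, with the set size ("degree") drawn from a degree distribution. The toy distribution is an illustrative assumption; the reconfigurable codes described above, in effect, reshape such a distribution to match near-instantaneous channel conditions.

        # Sketch: one encoded symbol of an LT-style rateless code.
        import random

        def lt_encode_symbol(source, degree_dist, rng=random):
            """Return (indices XORed together, resulting encoded value)."""
            degrees, weights = zip(*degree_dist.items())
            d = rng.choices(degrees, weights)[0]    # sample a degree
            idx = rng.sample(range(len(source)), d) # pick d source symbols
            value = 0
            for i in idx:
                value ^= source[i]                  # XOR them together
            return idx, value

        source = [0b1010, 0b0111, 0b1100, 0b0001]   # four source symbols
        degree_dist = {1: 0.2, 2: 0.5, 3: 0.3}      # toy degree distribution
        for _ in range(3):                          # emit as many as needed
            print(lt_encode_symbol(source, degree_dist))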

    An Investigation of Reading Development Through Sensitivity to Sublexical Units

    Get PDF
    The present dissertation provides a novel perspective on the study of reading, focusing on sensitivity to sublexical units across reading development. Work towards this thesis has been conducted at SISSA and Macquarie University. The first study is an eye tracking experiment on natural reading, with 140 developing readers and 33 adult participants, who silently read multiline passages from story books in Italian. A developmental database of eye tracking during natural reading was created, filling a gap in the literature. We replicated well-documented developmental trends of reading behavior (e.g., reading rate and skipping rate increasing with age) and effects of word length and frequency on eye tracking measures. The second study, in collaboration with Dr Jon Carr, is a methodological paper presenting algorithms for accuracy enhancement of eye tracking recordings in multiline reading. Using the above-mentioned dataset and computational simulations, we assessed the performance of several algorithms (including two novel methods that we proposed) on the correction of vertical drift, the progressive displacement of fixation registrations on the vertical axis over time. We provided guidance for eye tracking researchers in the application of these methods, and one of the novel algorithms (based on Dynamic Time Warping) proved particularly promising in realigning fixations, especially in child recordings. This manuscript has recently been accepted for publication in Behavior Research Methods. In the third study, I examined sensitivity to statistical regularities in letter co-occurrence throughout reading development, by analysing the effects of n-gram frequency metrics on eye-tracking measures. To this end, the EyeReadIt eye-tracking corpus (presented in the first study) was used. Our results suggest that n-gram frequency effects (in particular those related to maximum/average frequency metrics) are present even in developing readers, suggesting that sensitivity to sublexical orthographic regularities in reading is present as soon as the developing reading system can pick it up; in the case of this study, as early as third grade. The results bear relevant implications for extant theories of learning to read, which largely overlook the contribution of statistical learning to reading acquisition. The fourth study is a magnetoencephalography experiment conducted at Macquarie University, in collaboration with Dr Lisi Beyersmann, Prof Paul Sowman, and Prof Anne Castles, on 28 adults and 17 children (5th and 6th grade). We investigated selective neural responses to morphemes at different stages of reading development, using Fast Periodic Visual Stimulation (FPVS) combined with an oddball design. Participants were presented with rapid sequences (6 Hz) of pseudoword combinations of stem/nonstem and suffix/nonsuffix components. Interleaved in this stream, oddball stimuli appeared periodically every 5 items (1.2 Hz) and were specifically designed to examine stem or suffix detection (e.g., stem+suffix oddballs, such as softity, were embedded in a sequence of nonstem+suffix base items, such as terpity). We predicted that neural responses at the oddball stimulation frequency (1.2 Hz) would reflect the detection of morphemes in the oddball stimuli. Sensor-level analysis revealed a selective response in a left occipito-temporal region of interest when the oddball stimuli were fully decomposable pseudowords. This response emerged for adults and children alike, showing that automatic morpheme identification occurs at relatively early stages of reading development, in line with major accounts of morphological decomposition. Critically, these findings also suggest that morpheme identification is modulated by the context in which the morphemes appear.
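    The stimulation schedule is easy to express in code: at a 6 Hz base rate, making every 5th stimulus an oddball yields the 1.2 Hz oddball frequency (6 / 5 = 1.2). In the Python sketch below, the pseudoword lists are placeholders patterned on the examples in the abstract, not the actual stimulus set.

        # Sketch: an FPVS oddball sequence (base 6 Hz, oddball 1.2 Hz).
        import random

        def fpvs_sequence(base_items, oddball_items, n_items, period=5):
            """Every `period`-th stimulus is an oddball; the rest are base items."""
            return [random.choice(oddball_items if i % period == 0 else base_items)
                    for i in range(1, n_items + 1)]

        base = ["terpity", "norbity", "salfity"]    # nonstem + suffix
        oddballs = ["softity", "darkity"]           # stem + suffix
        print(fpvs_sequence(base, oddballs, n_items=12))  # oddballs at positions 5, 10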

    Novel computational techniques for mapping and classifying Next-Generation Sequencing data

    Get PDF
    Since their emergence around 2006, Next-Generation Sequencing technologies have been revolutionizing biological and medical research. Quickly obtaining an extensive amount of short or long reads of DNA sequence from almost any biological sample enables detecting genomic variants, revealing the composition of species in a metagenome, deciphering cancer biology, decoding the evolution of living or extinct species, or understanding human migration patterns and human history in general. The pace at which the throughput of sequencing technologies is increasing surpasses the growth of storage and computer capacities, which creates new computational challenges in NGS data processing. In this thesis, we present novel computational techniques for read mapping and taxonomic classification. With more than a hundred published mappers, read mapping might be considered fully solved. However, the vast majority of mappers follow the same paradigm, and little attention has been paid to non-standard mapping approaches. Here, we propound so-called dynamic mapping, which we show to significantly improve the resulting alignments compared to traditional mapping approaches. Dynamic mapping is based on exploiting the information from previously computed alignments, helping to improve the mapping of subsequent reads. We provide the first comprehensive overview of this method and demonstrate its qualities using Dynamic Mapping Simulator, a pipeline that compares various dynamic mapping scenarios to static mapping and iterative referencing. An important component of a dynamic mapper is an online consensus caller, i.e., a program collecting alignment statistics and guiding updates of the reference in an online fashion. We provide Ococo, the first online consensus caller, which maintains statistics for individual genomic positions using compact bit counters. Beyond its application to dynamic mapping, Ococo can be employed as an online SNP caller in various analysis pipelines, enabling SNP calling from a stream without saving the alignments on disk. Metagenomic classification of NGS reads is another major topic studied in the thesis. Given a database with thousands of reference genomes placed on a taxonomic tree, the task is to rapidly assign a huge number of NGS reads to tree nodes, and possibly estimate the relative abundance of the species involved. In this thesis, we propose improved computational techniques for this task. In a series of experiments, we show that spaced seeds consistently improve classification accuracy. We provide Seed-Kraken, a spaced seed extension of Kraken, the most popular classifier at present. Furthermore, we suggest ProPhyle, a new indexing strategy based on a BWT-index, obtaining a much smaller and more informative index compared to Kraken. We provide a modified version of BWA that improves the BWT-index for quick k-mer look-up.
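    To illustrate the spaced-seed idea behind Seed-Kraken, the Python sketch below extracts, at every window of a read, only the characters at the "care" (1) positions of a binary seed pattern, so that mismatches at the 0 positions leave the extracted seed unchanged. The pattern shown is an illustrative assumption, not the seed evaluated in the thesis.

        # Sketch: spaced-seed extraction from a read.
        def spaced_seeds(read, pattern):
            """Yield the spaced seed extracted at every window of the read."""
            span = len(pattern)
            care = [i for i, c in enumerate(pattern) if c == "1"]
            for start in range(len(read) - span + 1):
                window = read[start:start + span]
                yield "".join(window[i] for i in care)

        for seed in spaced_seeds("ACGTACGTAC", "110101"):   # weight-4 seed
            print(seed)                                      # first seed: ACTC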
