    Bridging the Gap between Probabilistic and Deterministic Models: A Simulation Study on a Variational Bayes Predictive Coding Recurrent Neural Network Model

    The current paper proposes a novel variational Bayes predictive coding RNN model, which can learn to generate fluctuating temporal patterns from exemplars. The model learns to maximize the lower bound of the weighted sum of the regularization and reconstruction error terms. We examined how this weighting can affect the development of different types of information processing while learning fluctuating temporal patterns. Simulation results show that strong weighting of the reconstruction term causes the development of deterministic chaos that imitates the randomness observed in target sequences, while strong weighting of the regularization term causes the development of stochastic dynamics imitating the probabilistic processes observed in the targets. Moreover, the results indicate that the most generalized learning emerges between these two extremes. The paper concludes with implications in terms of the underlying neuronal mechanisms for autism spectrum disorder and for free action.

    Comment: This paper has been accepted at the 24th International Conference on Neural Information Processing (ICONIP 2017). The previous submission to arXiv is replaced by this version because there was an error in Equation
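    As a rough sketch of the kind of weighted variational lower bound the abstract refers to (the weight W and the distributions q_phi, p_theta below are generic variational-Bayes notation assumed here, not necessarily the paper's):

        \mathcal{L}(\theta,\phi) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] \;-\; W\, D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)

    In this form, strong weighting of the reconstruction (first) term corresponds to the regime in which the abstract reports deterministic chaos developing, while strong weighting of the regularization (KL) term corresponds to the stochastic-dynamics regime.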

    Inception and propagation of positive streamers in high-purity nitrogen: effects of the voltage rise-rate

    Controlling streamer morphology is important for numerous applications. Up to now, the effect of the voltage rise rate had only been studied across a wide range. Here we show that even slight variations in the voltage rise can have significant effects. We have studied positive streamer discharges in a 16 cm point-plane gap in high-purity nitrogen 6.0, created by 25 kV pulses with a duration of 130 ns. The voltage rise is varied in rise rate from 1.9 kV/ns to 2.7 kV/ns and in first peak voltage from 22 to 28 kV. A structural link is found between a faster-rising voltage and smaller discharges with a larger inception cloud. We explain this relation by the greater stability of the inception cloud under a faster voltage rise, which delays its destabilisation. Time-resolved measurements show that the inception cloud propagates more slowly than an earlier-destabilised, more filamentary discharge, which explains why the discharge created by a faster-rising voltage pulse ends up shorter. Furthermore, the effect of background ionisation remaining within a pulse sequence has been studied, showing that channel thickness and branching rate are locally affected, depending on the volume covered by the previous discharge.

    Comment: 16 pages, 9 figures

    A Stochastic Approach to Shortcut Bridging in Programmable Matter

    In a self-organizing particle system, an abstraction of programmable matter, simple computational elements called particles, with limited memory and communication, self-organize to solve system-wide problems of movement, coordination, and configuration. In this paper, we consider a stochastic, distributed, local, asynchronous algorithm for "shortcut bridging", in which particles self-assemble bridges over gaps while simultaneously balancing the competing goals of minimizing bridge length and bridge cost. Army ants of the genus Eciton have been observed exhibiting similar behavior in their foraging trails, dynamically adjusting their bridges to satisfy an efficiency trade-off using local interactions. Using techniques from Markov chain analysis, we rigorously analyze our algorithm, show that it achieves a near-optimal balance between the competing factors of path length and bridge cost, and prove that it exhibits a dependence on the angle of the gap being "shortcut" similar to that of the ant bridges. We also present simulation results that qualitatively compare our algorithm with the army ant bridging behavior. Our work gives a plausible explanation of how convergence to globally optimal configurations can be achieved via local interactions by simple organisms (e.g., ants) with limited computational power and access to random bits. The proposed algorithm also demonstrates the robustness of the stochastic approach to algorithms for programmable matter, as it is a surprisingly simple extension of our previous stochastic algorithm for compression.

    Comment: Published in Proc. of DNA23: DNA Computing and Molecular Programming - 23rd International Conference, 2017. An updated journal version will appear in the DNA23 Special Issue of Natural Computing
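    A minimal Python sketch of a Metropolis-style acceptance rule that trades off the two competing factors named in the abstract (the energy weights lambda_len and lambda_cost and all function names are illustrative assumptions, not the paper's definitions):

        import math
        import random

        def acceptance_probability(old_length, old_cost, new_length, new_cost,
                                   lambda_len=4.0, lambda_cost=2.0):
            """Probability of accepting a proposed local move (illustrative weights)."""
            old_energy = lambda_len * old_length + lambda_cost * old_cost
            new_energy = lambda_len * new_length + lambda_cost * new_cost
            # Improvements are always accepted; worse configurations are accepted
            # with exponentially decaying probability, keeping the chain reversible.
            return min(1.0, math.exp(old_energy - new_energy))

        def metropolis_step(config, propose_move, measure):
            """One local, asynchronous step: propose a particle move, then accept or reject."""
            candidate = propose_move(config)
            p = acceptance_probability(*measure(config), *measure(candidate))
            return candidate if random.random() < p else config

    Repeatedly applying such steps yields a Markov chain whose stationary distribution favours configurations with short paths and low bridge cost, which is the kind of balance the paper's analysis makes rigorous.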

    Cerulean: A hybrid assembly using high throughput short and long reads

    Genome assembly using high-throughput data with short reads arguably remains an unresolvable task in repetitive genomes, since when the length of a repeat exceeds the read length, it becomes difficult to unambiguously connect the flanking regions. The emergence of third-generation sequencing (Pacific Biosciences) with long reads provides an opportunity to resolve complicated repeats that could not be resolved with short-read data alone. However, these long reads have a high error rate, and it is an uphill task to assemble the genome without using additional high-quality short reads. Recently, Koren et al. (2012) proposed an approach that uses high-quality short-read data to correct these long reads and thus make assembly from long reads possible. However, due to the large size of both datasets (short and long reads), error correction of the long reads requires excessively high computational resources, even for small bacterial genomes. In this work, instead of error-correcting the long reads, we first assemble the short reads and later map the long reads onto the assembly graph to resolve repeats. Contribution: We present a hybrid assembly approach that is both computationally efficient and produces high-quality assemblies. Our algorithm first operates on a simplified version of the assembly graph consisting only of long contigs and gradually improves the assembly by adding smaller contigs in each iteration. In contrast to the state-of-the-art long-read error-correction technique, which requires high computational resources and long running times on a supercomputer even for bacterial genome datasets, our software can produce a comparable assembly on a standard desktop in a short running time.

    Comment: Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI 2013)
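    The iterative idea in the abstract can be sketched roughly as follows in Python (the class and function names, thresholds, and data layout are assumptions for illustration, not Cerulean's actual code):

        from dataclasses import dataclass
        from typing import List, Set

        @dataclass
        class Contig:
            name: str
            length: int

        @dataclass
        class LongReadAlignment:
            contig_chain: List[str]  # contigs the long read spans, in order along the read

        def hybrid_scaffold(contigs: List[Contig],
                            alignments: List[LongReadAlignment],
                            length_thresholds=(10_000, 5_000, 1_000)) -> List[List[str]]:
            """Start from long contigs only, then admit progressively shorter ones."""
            links: List[List[str]] = []
            active: Set[str] = set()
            for min_len in length_thresholds:
                # Grow the simplified assembly graph by adding smaller contigs.
                active |= {c.name for c in contigs if c.length >= min_len}
                for aln in alignments:
                    spanned = [name for name in aln.contig_chain if name in active]
                    if len(spanned) >= 2:
                        # The long read connects these contigs across a repeat.
                        links.append(spanned)
            return links  # candidate contig adjacencies for scaffolding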

    Bridging the gap between design and implementation of components libraries

    Object-oriented design is usually driven by three main reusability principles: step-by-step design, design for reuse, and design with reuse. However, these principles are only partially applied in the subsequent object-oriented implementation, often due to efficiency constraints, leading to a gap between design and implementation. In this paper we provide a solution for bridging this gap in a concrete framework: the design and implementation of container-like component libraries, such as the STL, the Booch Components, etc. Our approach is based on a new design pattern together with its corresponding implementation. The proposal applies the same principles that drive the design process: step-by-step implementation (adding just what is needed at every step), implementation with reuse (component implementations are reused as the library implementation progresses and component hierarchies grow), and implementation for reuse (intermediate component implementations can be reused at many different points of the hierarchy). We use our approach in two different ways: to build a brand-new container-like component library, and to reengineer an existing one, the Booch Components in Ada95.

    Postprint (published version)
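    As a generic illustration of the three implementation principles listed in the abstract (this is not the paper's design pattern or its Ada95 code, just a small Python analogue): an intermediate component is implemented once and then reused, step by step, at several points of a growing container hierarchy.

        from collections import deque

        class LinkedStorage:
            """Intermediate component: minimal sequential storage, implemented once."""
            def __init__(self):
                self._items = deque()
            def __len__(self):
                return len(self._items)

        class Stack(LinkedStorage):
            """Step-by-step implementation: adds only what a LIFO container needs."""
            def push(self, item):
                self._items.append(item)
            def pop(self):
                return self._items.pop()

        class Queue(LinkedStorage):
            """Implementation with and for reuse: the same intermediate storage serves a FIFO container."""
            def enqueue(self, item):
                self._items.append(item)
            def dequeue(self):
                return self._items.popleft()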

    Structure and Function of a Mycobacterial NHEJ DNA Repair Polymerase

    Non-homologous end-joining (NHEJ)-mediated repair of DNA double-strand breaks in prokaryotes requires Ku and a specific multidomain DNA ligase (LigD). We present crystal structures of the primase/polymerisation domain (PolDom) of Mycobacterium tuberculosis LigD, alone and complexed with nucleotides. The PolDom structure combines the general fold of the archaeo-eukaryotic primase (AEP) superfamily with additional loops and domains that together form a deep cleft on the surface, likely used for DNA binding. Enzymatic analysis indicates that the PolDom of LigD, even in the absence of accessory domains and Ku proteins, has the potential to recognise DNA end-joining intermediates. Strikingly, one of the main signals for the specific and efficient binding of PolDom to DNA is the presence of a 5'-phosphate group, located at the single-/double-stranded junction of both gapped and 3'-protruding DNA molecules. Although structurally unrelated, Pol lambda and Pol mu, the two eukaryotic DNA polymerases involved in NHEJ, are endowed with a similar capacity to bind a 5'-phosphate group. Other properties that are beneficial for NHEJ, such as the ability to generate template distortions and realignments of the primer, displayed by Pol lambda and Pol mu, are shared by the PolDom of bacterial LigD. In addition, PolDom can perform non-mutagenic translesion synthesis on termini containing modified bases. Significantly, ribonucleotide insertion appears to be a recurrent theme associated with NHEJ, maximised in this case by the deployment of a dedicated primase, although its in vivo relevance is unknown.