
    Scalable Verification of Markov Decision Processes

    Markov decision processes (MDPs) are useful for modelling concurrent process optimisation problems, but verifying them with numerical methods is often intractable. Existing approximate approaches do not scale well and are limited to memoryless schedulers. Here we present the basis of scalable verification for MDPs, using an O(1) memory representation of history-dependent schedulers. We thus facilitate scalable learning techniques and the use of massively parallel verification. Comment: V4: FMDS version, 12 pages, 4 figures
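
    The abstract does not detail the representation, but one way to keep a history-dependent scheduler in O(1) memory is to derive each decision from a hash of a fixed integer seed together with the execution history, so the seed alone identifies the scheduler. The sketch below assumes that construction; the function names and the use of SHA-256 are illustrative, not taken from the paper.

```python
import hashlib

def schedule(seed: int, history: tuple, enabled_actions: list):
    """Resolve nondeterminism deterministically from the full history,
    while storing only one integer (the seed) per scheduler."""
    digest = hashlib.sha256(repr((seed, history)).encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(enabled_actions)
    return enabled_actions[index]

# The same (seed, history) pair always yields the same action, so a
# single integer identifies an entire history-dependent scheduler.
print(schedule(42, ("s0", "a", "s1"), ["left", "right", "stay"]))
```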

    GPU-accelerated discontinuous Galerkin methods on hybrid meshes

    We present a time-explicit discontinuous Galerkin (DG) solver for the time-domain acoustic wave equation on hybrid meshes containing vertex-mapped hexahedral, wedge, pyramidal, and tetrahedral elements. Discretely energy-stable formulations are presented for both Gauss-Legendre and Gauss-Legendre-Lobatto (spectral element) nodal bases for the hexahedron. Stable timestep restrictions for hybrid meshes are derived by bounding the spectral radius of the DG operator using order-dependent constants in trace and Markov inequalities. Computational efficiency is achieved through a combination of element-specific kernels (including new quadrature-free operators for the pyramid), multi-rate timestepping, and acceleration using Graphics Processing Units. Comment: Submitted to CMAM
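
    As a rough illustration of how an order-dependent spectral-radius bound turns into a timestep restriction, the sketch below scales a per-element surface-to-volume ratio by a constant growing like (N + 1)^2, the growth suggested by trace and Markov inequalities; the constants, CFL factor, and element data are placeholders rather than the paper's derived values.

```python
def stable_timestep(elements, wavespeed, order, cfl=0.5):
    """Bound the DG operator's spectral radius element by element with an
    order-dependent constant C(N) ~ (N + 1)**2 times a surface-to-volume
    ratio, then take the most restrictive element.  All constants here
    are illustrative, not the paper's derived values."""
    c_n = (order + 1) ** 2
    rates = [wavespeed * c_n * surface / (2.0 * volume)
             for volume, surface in elements]
    return cfl / max(rates)

# elements given as (volume, surface area) pairs for a small hybrid mesh
print(stable_timestep([(0.01, 0.3), (0.02, 0.5)], wavespeed=1.0, order=4))
```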

    Smart Sampling for Lightweight Verification of Markov Decision Processes

    Markov decision processes (MDPs) are useful for modelling optimisation problems in concurrent systems. Verifying MDPs with efficient Monte Carlo techniques requires that their nondeterminism be resolved by a scheduler. Recent work has introduced the elements of lightweight techniques to sample directly from scheduler space, but finding optimal schedulers by simple sampling may be inefficient. Here we describe "smart" sampling algorithms that can make substantial improvements in performance. Comment: IEEE conference style, 11 pages, 5 algorithms, 11 figures, 1 table
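
    The paper's algorithms are not reproduced here, but the underlying "smart sampling" idea of repeatedly discarding the weaker half of a set of candidate schedulers while doubling the simulation budget of the survivors can be sketched as follows; the estimator interface and the parameters are assumptions for illustration.

```python
import random

def smart_sampling(estimate, candidate_schedulers, sims_per_round=8):
    """Simulate every candidate scheduler a few times, discard the weaker
    half, then double the per-scheduler budget for the survivors until a
    single scheduler remains.  `estimate(scheduler, n)` is an assumed
    Monte Carlo estimator of the property's probability under that
    scheduler (e.g. from n simulations of the MDP)."""
    candidates = list(candidate_schedulers)
    budget = sims_per_round
    while len(candidates) > 1:
        ranked = sorted(candidates, key=lambda s: estimate(s, budget),
                        reverse=True)
        candidates = ranked[: max(1, len(ranked) // 2)]  # keep the best half
        budget *= 2                                      # refine survivors
    return candidates[0]

# Toy usage: schedulers are integer seeds, the estimator is a noisy coin
# whose true success probability depends on the seed.
est = lambda seed, n: sum(random.random() < (seed % 10) / 10
                          for _ in range(n)) / n
print(smart_sampling(est, range(100)))
```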

    Microevolution of Helicobacter pylori during prolonged infection of single hosts and within families

    Our understanding of basic evolutionary processes in bacteria is still very limited. For example, multiple recent dating estimates are based on a universal inter-species molecular clock rate, but that rate was calibrated using estimates of geological dates that are no longer accepted. We therefore estimated the short-term rates of mutation and recombination in Helicobacter pylori by sequencing an average of 39,300 bp in 78 gene fragments from 97 isolates. These isolates included 34 pairs of sequential samples, which were sampled at intervals of 0.25 to 10.2 years. They also included single isolates from 29 individuals (average age: 45 years) from 10 families. The accumulation of sequence diversity increased with time of separation in a clock-like manner in the sequential isolates. We used Approximate Bayesian Computation to estimate the rates of mutation and recombination, the mean length of recombination tracts, and the average diversity in those tracts. The estimates indicate that the short-term mutation rate is 1.4×10⁻⁶ (serial isolates) to 4.5×10⁻⁶ (family isolates) per nucleotide per year and that three times as many substitutions are introduced by recombination as by mutation. The long-term mutation rate over millennia is 5- to 17-fold lower, partly because purifying selection removes non-synonymous mutations. Comparisons with the recent literature show that short-term mutation rates vary dramatically in different bacterial species and can span several orders of magnitude.
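
    A minimal sketch of the Approximate Bayesian Computation step, assuming a simple rejection sampler and a placeholder forward simulator; the study's actual summary statistics and sequence-evolution model are not reproduced here.

```python
import random

def abc_rejection(simulate, observed, prior_sampler, n_draws=20000, tol=2e-6):
    """Rejection-sampling ABC: draw a parameter from the prior, run the
    forward simulator, and accept the draw if the simulated summary lies
    within `tol` of the observed one.  The accepted draws approximate the
    posterior.  Simulator and summary statistic here are placeholders."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(simulate(theta) - observed) < tol:
            accepted.append(theta)
    return accepted

# Toy usage: infer a per-site, per-year mutation rate from the per-site
# divergence accumulated over ~5 years between sequential isolates.
prior = lambda: random.uniform(1e-7, 1e-5)       # uniform prior on the rate
forward = lambda mu: mu * 5.0                    # expected divergence, 5 years
posterior = abc_rejection(forward, observed=1.4e-6 * 5.0, prior_sampler=prior)
print(len(posterior), sum(posterior) / max(len(posterior), 1))
```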

    A genomic approach to examine the complex evolution of laurasiatherian mammals

    Recent phylogenomic studies have failed to conclusively resolve certain branches of the placental mammalian tree, despite the evolutionary analysis of genomic data from 32 species. Previous analyses of single genes and retroposon insertion data yielded support for different phylogenetic scenarios for the most basal divergences. The results indicated that some mammalian divergences are best interpreted not as a single bifurcating tree but as an evolutionary network. In these studies the relationships among some orders of the super-clade Laurasiatheria were poorly supported, albeit not studied in detail. Therefore, 4775 protein-coding genes (6,196,263 nucleotides) were collected and aligned in order to analyze the evolution of this clade. Additionally, over 200,000 introns were screened in silico, resulting in 32 phylogenetically informative long interspersed nuclear element (LINE) insertion events. The present study shows that the genome evolution of Laurasiatheria may best be understood as an evolutionary network. Thus, contrary to the common expectation of resolving major evolutionary events as a bifurcating tree, genome analyses unveil complex speciation processes even in deep mammalian divergences. We exemplify this on a subset of 1159 suitable genes that have individual histories, most likely due to incomplete lineage sorting or introgression, processes that can make the genealogy of mammalian genomes complex. These unexpected results have major implications for our understanding of evolution in general, because the evolution of even some higher-level taxa, such as mammalian orders, cannot always be interpreted as a simple bifurcating pattern.

    A Parallel Two-Pass MDL Context Tree Algorithm for Universal Source Coding

    We present a novel lossless universal source coding algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the other blocks can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) source underlying the entire input, and then encode each of the B blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its computational complexity is O(N/B). Its redundancy is approximately B log(N/B) bits above Rissanen's lower bound on universal coding performance, with respect to any tree source whose maximal depth is at most log(N/B).
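
    The two-pass structure can be illustrated as follows: a sequential first pass fits one model to the whole input, and a parallel second pass encodes the B blocks against that shared model. A plain symbol-frequency model and ideal code lengths stand in for the paper's MDL context-tree estimator and encoder in this sketch.

```python
import math
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def estimate_source(data):
    """Pass 1 (sequential): fit one model to the entire input.  A plain
    symbol-frequency table stands in for the MDL context-tree source."""
    counts = Counter(data)
    total = len(data)
    return {symbol: count / total for symbol, count in counts.items()}

def block_code_length(args):
    """Pass 2 (one block): ideal code length in bits under the shared
    model, standing in for the actual per-block encoder."""
    block, model = args
    return sum(-math.log2(model[symbol]) for symbol in block)

def two_pass_encode(data, num_blocks):
    model = estimate_source(data)                            # pass 1
    size = math.ceil(len(data) / num_blocks)
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:                      # pass 2, parallel
        return sum(pool.map(block_code_length,
                            [(b, model) for b in blocks]))

if __name__ == "__main__":
    print(two_pass_encode("abracadabra" * 1000, num_blocks=8), "bits")
```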

    Sequential Monte Carlo Methods for Protein Folding

    We describe a class of growth algorithms for finding low-energy states of heteropolymers. These polymers form toy models for proteins, and the hope is that similar methods will ultimately be useful for finding native states of real proteins from heuristic or a priori determined force fields. Like standard Markov chain Monte Carlo methods, these algorithms generate Gibbs-Boltzmann distributions, but they are not based on the strategy of obtaining this distribution as the stationary state of a suitably constructed Markov chain. Rather, they are based on growing the polymer by successively adding individual particles, guiding the growth towards configurations with lower energies, and using "population control" to eliminate bad configurations and increase the number of good ones. This is done not via a breadth-first implementation as in genetic algorithms, but depth-first via recursive backtracking. As various benchmark tests show, the resulting algorithms are extremely efficient for lattice models and are still competitive with other methods for simple off-lattice models. Comment: 10 pages; published in NIC Symposium 2004, eds. D. Wolf et al. (NIC, Juelich, 2004)
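
    A minimal sketch of pruned-enriched, depth-first chain growth on a 2D lattice, in the spirit of the growth algorithms described above; the weight normalisation and prune/enrich thresholds are illustrative, and the energy-guided bias used for real heteropolymer models is omitted.

```python
import random

# Depth-first, pruned-enriched growth of a 2D lattice self-avoiding chain.
# Real heteropolymer models would also bias each step by a Boltzmann factor
# of the energy change; that guidance term is omitted in this toy version.
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
completed = []

def grow(chain, weight, target_len, w_lo=0.3, w_hi=3.0):
    if len(chain) == target_len:
        completed.append((weight, chain))
        return
    free = [(chain[-1][0] + dx, chain[-1][1] + dy) for dx, dy in STEPS]
    free = [p for p in free if p not in chain]
    if not free:
        return                              # dead end: abandon this branch
    weight *= len(free) / 2.7               # Rosenbluth weight, roughly
                                            # normalised by the SAW
                                            # connective constant (~2.64)
    if weight < w_lo:                       # prune: kill half the copies,
        if random.random() < 0.5:           # double the weight of the rest
            return
        weight *= 2.0
    if weight > w_hi:                       # enrich: clone and recurse
        grow(chain + [random.choice(free)], weight / 2.0, target_len)
        grow(chain + [random.choice(free)], weight / 2.0, target_len)
    else:
        grow(chain + [random.choice(free)], weight, target_len)

grow([(0, 0)], 1.0, target_len=20)
print(len(completed), "chains grown")
```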

    Convergence Rates of Gaussian ODE Filters

    A recently introduced class of probabilistic (uncertainty-aware) solvers for ordinary differential equations (ODEs) applies Gaussian (Kalman) filtering to initial value problems. These methods model the true solution x and its first q derivatives a priori as a Gauss-Markov process X, which is then iteratively conditioned on information about ẋ. This article establishes worst-case local convergence rates of order q+1 for a wide range of versions of this Gaussian ODE filter, as well as global convergence rates of order q in the case of q=1 and an integrated Brownian motion prior, and analyses how inaccurate information on ẋ coming from approximate evaluations of f affects these rates. Moreover, we show that, in the globally convergent case, the posterior credible intervals are well calibrated in the sense that they globally contract at the same rate as the truncation error. We illustrate these theoretical results by numerical experiments which might indicate their generalizability to q ∈ {2, 3, ...}. Comment: 26 pages, 5 figures
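
    For concreteness, the sketch below implements one plausible instance of such a filter for q = 1: an integrated-Brownian-motion prior with an EK0-style zeroth-order linearisation of f. The step size, diffusion parameter, and initialisation are illustrative choices, not the article's.

```python
import numpy as np

def ode_filter(f, x0, t_span, h, sigma=1.0):
    """Gaussian ODE filter with q = 1: predict with the integrated
    Brownian motion transition, then condition on the 'observation'
    that the modelled derivative equals f(x) at the predicted mean."""
    A = np.array([[1.0, h], [0.0, 1.0]])                     # IBM transition
    Q = sigma**2 * np.array([[h**3 / 3, h**2 / 2],
                             [h**2 / 2, h]])                 # process noise
    H = np.array([[0.0, 1.0]])                               # observe x_dot
    m, P = np.array([x0, f(t_span[0], x0)]), np.zeros((2, 2))
    t, out = t_span[0], [(t_span[0], x0)]
    while t < t_span[1] - 1e-12:
        m, P = A @ m, A @ P @ A.T + Q                        # predict
        z = f(t + h, m[0]) - (H @ m)[0]                      # residual
        S = (H @ P @ H.T)[0, 0]
        K = (P @ H.T / S).ravel() if S > 0 else np.zeros(2)
        m, P = m + K * z, P - np.outer(K, H @ P)             # update
        t += h
        out.append((t, m[0]))
    return out

# Toy usage: x' = -x, x(0) = 1; the filter mean should track exp(-t).
sol = ode_filter(lambda t, x: -x, 1.0, (0.0, 1.0), h=0.1)
print(sol[-1])  # roughly (1.0, 0.37)
```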

    Write Channel Model for Bit-Patterned Media Recording

    We propose a new write channel model for bit-patterned media recording that reflects the data dependence of write synchronization errors. It is shown that this model accommodates both substitution-like errors and insertion-deletion errors whose statistics are determined by an underlying channel state process. We study information-theoretic properties of the write channel model, including the capacity, the symmetric information rate, the Markov-1 rate, and the zero-error capacity. Comment: 11 pages, 12 figures, journal
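
    A toy simulation of the kind of channel described, assuming a hidden two-state Markov "synchronisation" process that makes errors data-dependent; the state mechanics and probabilities are invented for illustration and do not come from the paper.

```python
import random

def write_channel(data_bits, p_slip=0.05, p_resync=0.2, p_sub=0.01):
    """Hidden two-state Markov process ('sync' / 'slipped') drives the
    error mechanism.  In sync, bits suffer occasional substitutions;
    while slipped, the previous data bit is rewritten, a data-dependent
    error that can mimic an insertion or deletion."""
    state, written, prev = "sync", [], 0
    for b in data_bits:
        if state == "sync":
            written.append(b ^ (random.random() < p_sub))    # substitution
            if random.random() < p_slip:
                state = "slipped"
        else:
            written.append(prev)          # write lands on the wrong island
            if random.random() < p_resync:
                state = "sync"
        prev = b
    return written

msg = [random.randint(0, 1) for _ in range(20)]
print(msg)
print(write_channel(msg))
```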