
    Kauffman's adjacent possible in word order evolution

    Word order evolution has been hypothesized to be constrained by a word order permutation ring: transitions involving orders that are closer in the permutation ring are more likely. The hypothesis can be seen as a particular case of Kauffman's adjacent possible in word order evolution. Here we consider the problem of the association of the six possible orders of S, V and O to yield a couple of primary alternating orders as a window to word order evolution. We evaluate the suitability of various competing hypotheses to predict one member of the couple from the other with the help of information-theoretic model selection. Our ensemble of models includes a six-way model that is based on the word order permutation ring (Kauffman's adjacent possible) and another model based on the dual two-way distinctions of standard typology, which reduces word order to basic order preferences (e.g., a preference for SV over VS and another for SO over OS). Our analysis indicates that the permutation ring yields the best model when favoring parsimony strongly, providing support for Kauffman's general view and a six-way typology. Comment: Minor corrections (small errors concerning the parameters of model 1, language, style, ...) except for the mathematical arguments at the end of section "Further details about Model 2" of the supplementary material
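
    The sketch below illustrates one plausible reading of the permutation ring: the six orders of S, V and O linked whenever they differ by a single swap of adjacent constituents, which yields a 6-cycle. The linking rule and the ring-distance measure are assumptions for illustration, not necessarily the paper's exact definitions.

```python
# Minimal sketch of a word order permutation ring over the six orders of S, V, O.
# Assumption: two orders are ring-adjacent if they differ by one swap of
# neighbouring constituents; this produces a single 6-cycle.
from itertools import permutations

ORDERS = ["".join(p) for p in permutations("SVO")]  # SVO, SOV, VSO, VOS, OSV, OVS

def adjacent_swap(a: str, b: str) -> bool:
    """True if order b is obtained from a by swapping two neighbouring symbols."""
    return any(a[:i] + a[i + 1] + a[i] + a[i + 2:] == b for i in range(2))

# Build the ring as an adjacency list.
ring = {o: [p for p in ORDERS if p != o and adjacent_swap(o, p)] for o in ORDERS}

def ring_distance(a: str, b: str) -> int:
    """Number of steps between two orders along the ring (breadth-first search)."""
    frontier, seen, d = {a}, {a}, 0
    while b not in frontier:
        frontier = {q for o in frontier for q in ring[o] if q not in seen}
        seen |= frontier
        d += 1
    return d

if __name__ == "__main__":
    assert all(len(v) == 2 for v in ring.values())  # every order has exactly two neighbours
    print(ring_distance("SOV", "SVO"))  # 1: adjacent in the ring
    print(ring_distance("SOV", "VOS"))  # 3: maximally distant on a 6-cycle
```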

    Accelerating Permutation Testing in Voxel-wise Analysis through Subspace Tracking: A new plugin for SnPM

    Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected p-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, T, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that T is low-rank plus a low-variance residual. This makes T a good candidate for low-rank matrix completion, where only a very small number of entries of T (~0.35% of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four datasets of varying size against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium-sized datasets (50 ≤ n ≤ 200), with speedups of 1.5x-38x (vs. SnPM13) and 20x-1000x (vs. NaivePT). For larger datasets (n ≥ 200), RapidPT outperforms NaivePT (6x-200x) on all datasets, and provides large speedups over SnPM13 when more than 10000 permutations are needed (2x-15x). The implementation is a standalone toolbox and also integrated within SnPM13, able to leverage multi-core architectures when available. Comment: 36 pages, 16 figures
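
    The toy example below illustrates the core idea only, not the RapidPT toolbox: if the permutation testing matrix T is approximately low rank, a small random sample of its entries plus a simple hard-impute completion recovers T well enough to estimate the max null distribution. The sizes, rank, and sampling rate are arbitrary choices for the demo.

```python
# Toy illustration of estimating a max null distribution via low-rank completion.
# Not the RapidPT implementation; a synthetic low-rank-plus-residual matrix
# stands in for the real permutation testing matrix T (permutations x voxels).
import numpy as np

rng = np.random.default_rng(0)
n_perm, n_vox, rank = 400, 2000, 5

# Synthetic "permutation testing matrix": low rank plus low-variance residual.
T_true = rng.normal(size=(n_perm, rank)) @ rng.normal(size=(rank, n_vox))
T_true += 0.05 * rng.normal(size=(n_perm, n_vox))

# Observe only a fraction of entries (RapidPT reports ~0.35%; we use more here
# because this toy matrix is far smaller than a real study).
mask = rng.random((n_perm, n_vox)) < 0.10
T_obs = np.where(mask, T_true, 0.0)

def complete(T_obs, mask, rank, n_iter=50):
    """Hard-impute completion: alternate a truncated SVD with re-imposing observed entries."""
    X = T_obs.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X[mask] = T_obs[mask]                      # keep the entries we actually computed
    return X

T_hat = complete(T_obs, mask, rank)

# Max null distribution: the maximum statistic over voxels for each permutation.
max_null_true = T_true.max(axis=1)
max_null_hat = T_hat.max(axis=1)
print("95th percentile (true vs. recovered):",
      np.percentile(max_null_true, 95), np.percentile(max_null_hat, 95))
```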

    Best and worst case permutations for random online domination of the path

    We study a randomized algorithm for graph domination, by which, according to a uniformly chosen permutation, vertices are revealed and added to the dominating set if not already dominated. We determine the expected size of the dominating set produced by the algorithm for the path graph P_n and use this to derive the expected size for some related families of graphs. We then provide a much-refined analysis of the worst and best cases of this algorithm on P_n and enumerate the permutations for which the algorithm has the worst-possible performance and best-possible performance. The case of dominating the path graph has connections to previous work of Bouwer and Star, and of Gessel on greedily coloring the path. Comment: 13 pages, 1 figure
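
    A small simulation of the algorithm on P_n is sketched below; it is an illustration of the randomized procedure, not the paper's exact analysis or enumeration.

```python
# Simulate random online domination of the path P_n: vertices are revealed in a
# uniformly random order, and a vertex is added to the dominating set only if it
# is not yet dominated (neither it nor a neighbour is already in the set).
import random

def online_domination_path(n: int, rng: random.Random) -> int:
    """Return the size of the dominating set the algorithm builds on P_n."""
    order = list(range(n))
    rng.shuffle(order)                      # uniformly random reveal order
    in_set = [False] * n
    dominated = [False] * n
    for v in order:
        if not dominated[v]:
            in_set[v] = True
            for u in (v - 1, v, v + 1):     # v dominates itself and its neighbours
                if 0 <= u < n:
                    dominated[u] = True
    return sum(in_set)

if __name__ == "__main__":
    rng = random.Random(1)
    n, trials = 1000, 2000
    avg = sum(online_domination_path(n, rng) for _ in range(trials)) / trials
    print(f"average dominating-set size on P_{n}: {avg:.1f}")
    # The best case matches the domination number of the path (about n/3),
    # while the worst case is about n/2 (e.g., revealing every other vertex first).
```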

    Double symbolic joint entropy in nonlinear dynamic complexity analysis

    Symbolization, the basis of symbolic dynamic analysis, can be divided into global static and local dynamic approaches, which we combine via joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and of base-scale entropy, and two local dynamic ones, namely the symbolizations of permutation entropy and of differential entropy, yield four double symbolic joint entropies that detect complexity accurately in chaotic models, namely logistic and Hénon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, the heartbeats of healthy young subjects show higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by the double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results show that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis. Comment: 7 pages, 4 figures
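
    The sketch below only illustrates the combination step: a generic global static symbolization (a quantile-based amplitude partition, standing in for the base-scale or Wessel-style transformations) is paired with a local dynamic one (the ordinal patterns of permutation entropy), and the Shannon entropy of the joint symbol stream is computed. The specific symbolizations used in the paper differ.

```python
# Rough sketch of a "double symbolic joint entropy": pair a global static symbol
# with a local dynamic symbol and take the Shannon entropy of the pairs.
from collections import Counter
from itertools import permutations
import math

def static_symbols(x, n_levels=4):
    """Global static symbolization (assumed here): quantile-based amplitude levels."""
    ranks = sorted(range(len(x)), key=lambda i: x[i])
    sym = [0] * len(x)
    for r, i in enumerate(ranks):
        sym[i] = min(n_levels - 1, r * n_levels // len(x))
    return sym

def ordinal_symbols(x, m=3):
    """Local dynamic symbolization: ordinal (permutation) patterns of order m."""
    patterns = {p: k for k, p in enumerate(permutations(range(m)))}
    return [patterns[tuple(sorted(range(m), key=lambda j: x[i + j]))]
            for i in range(len(x) - m + 1)]

def joint_entropy(a, b):
    """Shannon entropy (bits) of the paired symbol stream."""
    counts = Counter(zip(a, b))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    # Logistic map in the chaotic regime as a test series.
    x, series = 0.4, []
    for _ in range(5000):
        x = 4.0 * x * (1.0 - x)
        series.append(x)
    s_static = static_symbols(series)
    s_dynamic = ordinal_symbols(series, m=3)
    print(joint_entropy(s_static[:len(s_dynamic)], s_dynamic))
```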

    Sorting by Block Moves

    The research in this thesis is focused on the problem of Block Sorting, which has applications in Computational Biology and in Optical Character Recognition (OCR). A block in a permutation is a maximal sequence of consecutive elements that are also consecutive in the identity permutation. BLOCK SORTING is the process of transforming an arbitrary permutation to the identity permutation through a sequence of block moves. Given an arbitrary permutation π and an integer m, the Block Sorting Problem, that is, deciding whether the transformation can be accomplished in at most m block moves, has been shown to be NP-hard. After being known to be 3-approximable for over a decade, block sorting has been researched extensively, and there are now several 2-approximation algorithms for its solution. This work introduces new structures on a permutation, called runs and ordered pairs, which are used to develop two new approximation algorithms, and it includes an analysis showing that both new algorithms are 2-approximations, matching the current best ratio.
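
    The sketch below shows the basic objects only: identifying the blocks of a permutation and a naive sorter that repeatedly moves the block starting with the next needed value to the end of the sorted prefix. The naive routine terminates in at most n moves but carries no approximation guarantee; it is not one of the thesis's 2-approximation algorithms.

```python
# Blocks and block moves on a permutation of 1..n (illustrative, not optimal).

def blocks(perm):
    """Split a permutation of 1..n into its blocks (maximal runs consecutive in value)."""
    out, start = [], 0
    for i in range(1, len(perm) + 1):
        if i == len(perm) or perm[i] != perm[i - 1] + 1:
            out.append(perm[start:i])
            start = i
    return out

def naive_block_sort(perm):
    """Sort by block moves: extend the sorted prefix 1..k by one block per move."""
    perm, moves, k = list(perm), 0, 0
    while k < len(perm):
        if perm[k] == k + 1:                # prefix already extends for free
            k += 1
            continue
        j = perm.index(k + 1)               # start of the block beginning with k+1
        end = j + 1
        while end < len(perm) and perm[end] == perm[end - 1] + 1:
            end += 1
        block = perm[j:end]
        del perm[j:end]
        perm[k:k] = block                   # one block move
        moves += 1
        k += len(block)
    return perm, moves

if __name__ == "__main__":
    print(blocks([3, 4, 1, 2, 5]))            # [[3, 4], [1, 2], [5]]
    print(naive_block_sort([3, 4, 1, 2, 5]))  # ([1, 2, 3, 4, 5], 1)
```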

    An Optimal Lower Bound for Buffer Management in Multi-Queue Switches

    In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, the excess is lost. A packet buffering algorithm has to choose which buffers to transmit packets from in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e-1) ≈ 1.582 on the competitive ratio for throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.
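
    The small simulation below illustrates the model only, not Random Schedule, Random Permutation, or the lower-bound construction: per-step arrivals beyond the buffer capacity are lost, one packet is transmitted per step, and the buffer to serve is chosen here by a generic longest-queue-first policy.

```python
# Simulate the multi-queue packet buffering model with a simple
# longest-queue-first transmission policy (a generic baseline, not one of the
# algorithms discussed in the paper).
import random

def simulate(arrivals, capacity):
    """arrivals: list of per-step lists giving the number of packets arriving at each buffer."""
    m = len(arrivals[0])
    queues = [0] * m
    transmitted = lost = 0
    for step in arrivals:
        for i, a in enumerate(step):                    # admit arrivals, count overflow as lost
            admitted = min(a, capacity - queues[i])
            queues[i] += admitted
            lost += a - admitted
        candidates = [i for i in range(m) if queues[i] > 0]
        if candidates:                                  # transmit one packet per time step
            i = max(candidates, key=lambda q: queues[q])  # longest-queue-first
            queues[i] -= 1
            transmitted += 1
    return transmitted, lost

if __name__ == "__main__":
    rng = random.Random(0)
    # Bursty arrivals to 4 buffers of capacity 3 over 1000 steps.
    arrivals = [[rng.choice([0, 0, 0, 2]) for _ in range(4)] for _ in range(1000)]
    print(simulate(arrivals, capacity=3))
```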