
    On Searching a Table Consistent with Division Poset

    Suppose $P_n=\{1,2,\ldots,n\}$ is a partially ordered set with the partial order defined by divisibility, that is, for any two distinct elements $i,j\in P_n$ such that $i$ divides $j$, we have $i<_{P_n}j$. A table $A_n=\{a_i \mid i=1,2,\ldots,n\}$ of distinct real numbers is said to be \emph{consistent} with $P_n$ provided that for any two distinct elements $i,j\in\{1,2,\ldots,n\}$ with $i$ dividing $j$, $a_i<a_j$. Given a real number $x$, we want to determine whether $x\in A_n$ by comparing $x$ with as few entries of $A_n$ as possible. In this paper we investigate the complexity $\tau(n)$, measured in the number of comparisons, of the above search problem. We present a $\frac{55n}{72}+O(\ln^2 n)$ search algorithm for $A_n$ and prove a lower bound of $(3/4+17/2160)\,n+O(1)$ on $\tau(n)$ using an adversary argument.
    Comment: 16 pages, no figures; same results, presentation improved, references added
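
    To make the setting concrete, here is a minimal Python sketch of a pruning search over a divisibility-consistent table. It only illustrates how a single comparison can eliminate all multiples or all divisors of the probed index; it is not the paper's $\frac{55n}{72}+O(\ln^2 n)$ algorithm, and the function name is ours.

```python
def search_divisibility_table(a, x):
    """Decide whether x occurs in the table a = [a_1, ..., a_n], where
    a_i < a_j whenever i properly divides j (divisibility consistency).

    Illustrative pruning only; this does NOT attain the paper's
    55n/72 + O(ln^2 n) comparison bound."""
    n = len(a)
    alive = set(range(1, n + 1))       # indices i with a_i still candidate
    while alive:
        i = min(alive)                 # probe the smallest live index
        if x == a[i - 1]:
            return True
        alive.discard(i)
        if x < a[i - 1]:
            # x < a_i implies x < a_j for every multiple j of i,
            # so no multiple of i can hold x.
            alive.difference_update(range(2 * i, n + 1, i))
        else:
            # x > a_i implies x > a_d for every divisor d of i.
            alive.difference_update({d for d in alive if i % d == 0})
    return False
```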

    Modeling the evolution space of breakage fusion bridge cycles with a stochastic folding process

    Breakage-fusion-bridge cycles in cancer arise when a broken segment of DNA is duplicated and an end from each copy is joined together. This structure then 'unfolds' into a new piece of palindromic DNA. This is one mechanism responsible for the localised amplicons observed in cancer genome data. The process has parallels with paper-folding sequences, which arise when a piece of paper is folded several times and then unfolded. Here we adapt such methods to study breakage-fusion-bridge structures in detail. We first consider discrete representations of this space with 2-d trees to demonstrate that there are 2^(n(n-1)/2) qualitatively distinct evolutions involving n breakage-fusion-bridge cycles. Second, we consider the stochastic nature of the fold positions to determine evolution likelihoods, and also describe how amplicons become localised. Finally, we illustrate these methods by inferring the evolution of breakage-fusion-bridge cycles with data from primary tissue cancer samples.
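
    As a toy illustration of the folding analogy, the sketch below performs one breakage-fusion-bridge cycle on a string representation of a DNA segment: the segment breaks at a (possibly random) position, and the retained prefix is fused to its own reversed copy, producing a palindrome. The names are ours, reverse complementation is ignored, and this deliberately omits the authors' 2-d tree representation and likelihood calculations.

```python
import random

def bfb_cycle(segment, fold=None):
    """One breakage-fusion-bridge cycle, modelled as a 'paper fold'.

    The chromosome breaks after position `fold`; the retained prefix is
    duplicated and the two copies are fused end to end, yielding a
    palindrome (toy model; reverse complementation ignored)."""
    if fold is None:
        fold = random.randrange(1, len(segment) + 1)
    kept = segment[:fold]
    return kept + kept[::-1]

# n stochastic cycles: every intermediate is palindromic, and repeated
# cycles pile up copies of early positions, mimicking localised amplicons.
seq = "ABCDEFGH"
for _ in range(3):
    seq = bfb_cycle(seq)
    print(seq)
```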

    Fast linear-space computations of longest common subsequences

    Space-saving techniques in computations of a longest common subsequence (LCS) of two strings are crucial in many applications, notably in molecular sequence comparison. For about ten years, however, the only known linear-space LCS algorithm required time quadratic in the length of the input, for all inputs. This paper reviews linear-space LCS computations in connection with two classical paradigms originally designed to take less than quadratic time in favorable circumstances. The objective is to achieve the space reduction without altering the asymptotic time complexity of the original algorithm. The first of the resulting constructions takes time O(n(m−l)) and is thus suitable for cases where the LCS is expected to be close to the shortest input string. The second takes time O(ml log(min[s, m, 2n/l])) and suits cases where one of the inputs is much shorter than the other. Here m and n (m⩽n) are the lengths of the two input strings, l is the length of a longest common subsequence, and s is the size of the alphabet. Along the way, a very simple O(m(m−l))-time algorithm is also derived for the case of strings of equal length.
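
    For reference, the classical linear-space baseline mentioned above (quadratic time for all inputs) keeps only two rows of the dynamic-programming table; the paper's contribution is to retain such O(min(m, n)) space within the better time bounds. A minimal sketch of that baseline:

```python
def lcs_length(a, b):
    """LCS length of strings a and b in O(min(len(a), len(b))) space.

    Classical two-row dynamic programme: Theta(nm) time for all inputs,
    i.e. the baseline that the paper's constructions improve on."""
    if len(a) < len(b):
        a, b = b, a                 # keep the rows sized by the shorter string
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if ca == cb
                        else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

assert lcs_length("ABCBDAB", "BDCABA") == 4   # LCS "BCBA"
```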

    Efficient Sampling and Structure Learning of Bayesian Networks

    Bayesian networks are probabilistic graphical models widely employed to understand dependencies in high-dimensional data, and even to facilitate causal discovery. Learning the underlying network structure, which is encoded as a directed acyclic graph (DAG), is highly challenging, mainly due to the vast number of possible networks. Efforts have focussed on two fronts: constraint-based methods that perform conditional independence tests to exclude edges, and score-and-search approaches that explore the DAG space with greedy or MCMC schemes. Here we synthesise these two fields in a novel hybrid method which reduces the complexity of MCMC approaches to that of a constraint-based method. Individual steps in the MCMC scheme only require simple table lookups, so that very long chains can be obtained efficiently. Furthermore, the scheme includes an iterative procedure to correct for errors from the conditional independence tests. The algorithm offers markedly superior performance to alternatives, particularly because DAGs can also be sampled from the posterior distribution, enabling full Bayesian model averaging for much larger Bayesian networks.
    Comment: Revised version. 40 pages including 16 pages of supplement, 5 figures and 15 supplemental figures; the R package BiDAG is available at https://CRAN.R-project.org/package=BiDAG
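
    The hybrid idea can be caricatured in a few lines: run structure MCMC, but only propose edges permitted by a constraint-based skeleton, so each step is cheap. The sketch below is our assumption-laden toy (the names and the scoring interface are invented; this is neither the paper's algorithm nor the BiDAG API), accepting or rejecting single-edge moves with a Metropolis rule.

```python
import math
import random

def creates_cycle(edges, u, v):
    """True if adding the directed edge u -> v would close a cycle."""
    stack, seen = [v], set()
    while stack:
        x = stack.pop()
        if x == u:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(w for (p, w) in edges if p == x)
    return False

def skeleton_mcmc(skeleton, log_score, steps=10_000, seed=0):
    """Toy Metropolis sampler over DAGs whose edges respect a skeleton.

    skeleton : iterable of undirected pairs (u, v) that survived the
               conditional-independence tests (assumed precomputed).
    log_score: function mapping a set of directed edges to a log score.
    Returns the visited DAGs, e.g. for crude posterior averaging."""
    rng = random.Random(seed)
    pairs = [tuple(p) for p in skeleton]
    dag, cur, samples = set(), log_score(set()), []
    for _ in range(steps):
        u, v = pairs[rng.randrange(len(pairs))]
        if rng.random() < 0.5:
            u, v = v, u                    # choose an orientation
        prop = set(dag)
        if (u, v) in prop:
            prop.remove((u, v))            # delete move
        elif creates_cycle(prop, u, v):
            continue                       # reject: would break acyclicity
        else:
            prop.add((u, v))               # add move
        new = log_score(prop)
        if rng.random() < math.exp(min(0.0, new - cur)):
            dag, cur = prop, new           # Metropolis accept
        samples.append(frozenset(dag))
    return samples
```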

    Efficient computation of rank probabilities in posets

    As the title indicates, the central theme of this work is the computation of rank probabilities of posets. Since the probability space consists of the set of all linear extensions of a given poset equipped with the uniform probability measure, we first develop algorithms to explore this probability space efficiently. In particular, we consider the problem of counting the number of linear extensions and of generating extensions uniformly at random. Algorithms based on the lattice-of-ideals representation of a poset are developed. Since a weak order extension of a poset can be regarded as an order on the equivalence classes of a partition of the given poset that does not contradict the underlying order, and thus as a generalization of the concept of a linear extension, algorithms are also developed to count and generate weak order extensions uniformly at random. However, in order to reduce the inherent complexity of the problem, the cardinalities of the equivalence classes are fixed a priori. Due to the exponential nature of these algorithms, this approach is still not always feasible, forcing one to resort to approximate algorithms in such cases. It is well known that Markov chain Monte Carlo methods can be used to generate linear extensions uniformly at random, but no such approaches had been used to generate weak order extensions. Therefore, an algorithm that can be used to sample weak order extensions uniformly at random is introduced. A monotone assignment of labels to objects from a poset corresponds to the choice of a weak order extension of the poset. Since the random monotone assignment of such labels is a step in the generation process of random monotone data sets, the ability to generate random weak order extensions is clearly of great importance. The contributions of this part therefore prove useful in, e.g., the field of supervised classification, where there is a need for synthetic random monotone data sets.
    The second part focuses on the ranking of the elements of a partially ordered set. Algorithms for the computation of the (mutual) rank probabilities that avoid enumerating all linear extensions are suggested and applied to a real-world data set containing pollution data for several regions of Baden-Württemberg (Germany). With the emergence of several initiatives aimed at protecting the environment, such as the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) project of the European Union, the need for objective methods to rank chemicals, regions, etc. on the basis of several criteria continues to increase. Additionally, an interesting relation between the mutual rank probabilities and the average rank probabilities is proven.
    The third and last part studies the transitivity properties of the mutual rank probabilities and the closely related linear extension majority cycles, or LEM cycles for short. The type of transitivity is translated into the cycle-transitivity framework, which has been tailor-made for characterizing the transitivity of reciprocal relations, and is proven to be situated between strong stochastic transitivity and a new type of transitivity called delta*-transitivity. It is shown that the latter type is situated between strong stochastic transitivity and a kind of product transitivity. Furthermore, theoretical upper bounds for the minimum cutting level to avoid LEM cycles are found. Cutting levels for posets on up to 13 elements are obtained experimentally, and a theoretical lower bound for the cutting level to avoid LEM cycles of length 4 is computed.
    The research presented in this work has been published in international peer-reviewed journals and presented at international conferences. A Java implementation of several of the algorithms presented in this work, as well as binary files containing all posets on up to 13 elements with LEM cycles, can be downloaded from the website http://www.kermit.ugent.be
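
    As a concrete anchor for the first part, the following Python sketch counts linear extensions by recursing over the lattice of ideals (down-sets) with memoisation on the ideal. It matches the lattice-of-ideals idea in spirit only; the thesis's actual algorithms and data structures differ, and all names here are ours.

```python
from functools import lru_cache

def count_linear_extensions(n, predecessors):
    """Count linear extensions of a poset on elements 0..n-1.

    predecessors[i] is the (frozen)set of elements required before i.
    The recursion walks the lattice of ideals (down-sets); memoisation
    keys on the ideal, so the cost is bounded by the number of ideals,
    which is still exponential for general posets."""
    @lru_cache(maxsize=None)
    def count(ideal):
        if len(ideal) == n:
            return 1
        return sum(count(ideal | frozenset({i}))
                   for i in range(n)
                   if i not in ideal and predecessors[i] <= ideal)
    return count(frozenset())

# Divisibility poset on {1,...,6}, relabelled to 0..5: the predecessors
# of element k are its proper divisors.
preds = [frozenset(d - 1 for d in range(1, k + 1) if k % d == 0 and d != k)
         for k in range(1, 7)]
print(count_linear_extensions(6, preds))
```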

    A gravitational theory of quantum mechanics

    For the first time, an explanation of quantum mechanics is given in terms of a classical theory (general relativity). Specifically, it is shown that certain structures in classical general relativity can give rise to the non-classical logic normally associated with quantum mechanics. An artificial classical model of quantum logic is constructed to show how the Hilbert-space structure of quantum mechanics is a natural way to describe a measurement-dependent stochastic process. A 4-geon model of an elementary particle is proposed which is asymptotically flat, particle-like and has a non-trivial causal structure. The usual Cauchy data are no longer sufficient to determine a unique evolution; the measurement apparatus itself can impose further non-redundant boundary conditions. When measurements of an object provide additional non-redundant boundary conditions, the associated propositions fail to satisfy the distributive law of classical logic. Using the 4-geon model, an orthomodular lattice of propositions, characteristic of quantum mechanics, is formally constructed within the framework of classical general relativity. The model described provides a classical gravitational basis for quantum mechanics, obviating the need for quantum gravity. The equations of quantum mechanics are unmodified, but quantum behaviour is not universal; classical particles and waves could exist, and there is no graviton.
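
    For readers unfamiliar with the lattice-theoretic claim, the two laws at stake are stated below; this is standard quantum-logic background, not a derivation from the 4-geon model.

```latex
% Distributive law, which the measurement propositions fail to satisfy:
\[
  a \wedge (b \vee c) \;=\; (a \wedge b) \vee (a \wedge c)
\]
% Orthomodular law, the weaker property the constructed lattice obeys,
% where $a^{\perp}$ denotes the orthocomplement of $a$:
\[
  a \le b \;\Longrightarrow\; b \;=\; a \vee \left(b \wedge a^{\perp}\right)
\]
```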

    Models, Composability, and Validity

    Composability is the capability to select and assemble simulation components in various combinations into simulation systems that satisfy specific user requirements. Its defining characteristic is the ability to combine and recombine components into different simulation systems for different purposes. The ability to compose simulation systems from repositories of reusable components has been a highly sought-after goal among modeling and simulation developers. The expected benefits of robust, general composability include reduced simulation development cost and time, increased validity and reliability of simulation results, and increased involvement of simulation users in the process. Consequently, composability is an active research area, with both software-engineering and theoretical approaches being developed. Composability exists in two forms: syntactic and semantic (also known as engineering and modeling composability). Syntactic composability is the implementation of components so that they can be connected; semantic composability asks whether the models so connected can be meaningfully composed, as illustrated in the sketch below.
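
    The syntactic/semantic distinction is easy to see in code. In the hypothetical Python sketch below, both connections type-check (syntactic composability), but only one is meaningful, because meaningfulness depends on what the exchanged numbers denote (semantic composability); all component and function names are illustrative, not from the source.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Component:
    """A simulation component exposing one numeric output port."""
    name: str
    output: Callable[[], float]

def connect(src: Component, sink: Callable[[float], float]) -> float:
    # Syntactic composability: the port types line up, so this always runs.
    return sink(src.output())

radar = Component("radar", lambda: 1500.0)   # emits a range in metres
to_feet = lambda m: m * 3.28084              # expects metres: meaningful
clock_drift = lambda s: s / 86_400.0         # expects seconds: type-checks,
                                             # but the composition is invalid

print(connect(radar, to_feet))               # semantically composable
print(connect(radar, clock_drift))           # syntactically composable only
```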