    Ramified rectilinear polygons: coordinatization by dendrons

    Simple rectilinear polygons (i.e. rectilinear polygons without holes or cutpoints) can be regarded as finite rectangular cell complexes coordinatized by two finite dendrons. The intrinsic $l_1$-metric is thus inherited from the product of the two finite dendrons via an isometric embedding. The rectangular cell complexes that share this embedding property are called ramified rectilinear polygons. The links of vertices in these cell complexes may be arbitrary bipartite graphs, in contrast to simple rectilinear polygons, where the links of points are either 4-cycles or paths of length at most 3. Ramified rectilinear polygons are particular instances of rectangular complexes obtained from cube-free median graphs, or equivalently, simply connected rectangular complexes with triangle-free links. The underlying graphs of finite ramified rectilinear polygons can be recognized among graphs in linear time by a Lexicographic Breadth-First Search. Whereas the symmetry of a simple rectilinear polygon is very restricted (its automorphism group is a subgroup of the dihedral group $D_4$), ramified rectilinear polygons are universal: every finite group is the automorphism group of some ramified rectilinear polygon. Comment: 27 pages, 6 figures
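
    As a rough illustration of the recognition tool mentioned in the abstract, the following Python sketch implements a generic Lexicographic Breadth-First Search (LBFS) ordering by partition refinement. It is a naive illustration only (not linear-time) and is not the paper's recognition algorithm for ramified rectilinear polygons.

        # Generic LBFS ordering sketch via partition refinement.
        # Not the paper's linear-time recognition procedure; illustrative only.
        def lbfs_order(adj):
            """Return an LBFS vertex ordering of a graph given as {v: set of neighbours}."""
            slices = [list(adj)]              # ordered partition of the unvisited vertices
            order = []
            while slices:
                v = slices[0].pop(0)          # take a vertex from the first slice
                if not slices[0]:
                    slices.pop(0)
                order.append(v)
                refined = []
                for s in slices:              # split each slice into neighbours / non-neighbours of v
                    inside = [u for u in s if u in adj[v]]
                    outside = [u for u in s if u not in adj[v]]
                    refined += [part for part in (inside, outside) if part]
                slices = refined
            return order

        # Example: a 4-cycle a-b-c-d-a.
        cycle = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
        print(lbfs_order(cycle))              # ['a', 'b', 'd', 'c']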

    On partitioning multivariate self-affine time series

    Given a multivariate time series, possibly of high dimension, with unknown and time-varying joint distribution, it is of interest to be able to completely partition the time series into disjoint, contiguous subseries, each of which has different distributional or pattern attributes from the preceding and succeeding subseries. An additional feature of many time series is that they display self-affinity, so that subseries at one time scale are similar to subseries at another after application of an affine transformation. Such qualities are observed in time series from many disciplines, including biology, medicine, economics, finance, and computer science. This paper defines the relevant multiobjective combinatorial optimization problem, under limited assumptions, as a biobjective one, and presents a specialized evolutionary algorithm that finds optimal self-affine time series partitionings with a minimum of choice parameters. The algorithm finds partitionings not only for all possible numbers of partitions given data constraints, but also for self-affinities between these partitionings and some fine-grained partitioning. The resulting set of Pareto-efficient solution sets provides a rich representation of the self-affine properties of a multivariate time series at different locations and time scales.
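
    The biobjective formulation above yields a set of Pareto-efficient partitionings. As a minimal, hypothetical sketch (the paper's objectives and evolutionary algorithm are not reproduced here), the Python function below filters candidate partitionings down to the non-dominated ones under two objectives that are both minimised.

        # Keep only the Pareto-efficient (non-dominated) candidates under two
        # minimised objectives. Candidates and objective values are hypothetical.
        def pareto_front(candidates):
            """candidates: list of (partitioning, (obj1, obj2)) pairs."""
            front = []
            for part, (a1, a2) in candidates:
                dominated = any(
                    b1 <= a1 and b2 <= a2 and (b1 < a1 or b2 < a2)
                    for _, (b1, b2) in candidates
                )
                if not dominated:
                    front.append((part, (a1, a2)))
            return front

        # Hypothetical candidates: (partition boundaries, (fit error, model complexity)).
        candidates = [
            ([10, 50], (0.8, 2.0)),
            ([10, 30, 50], (0.5, 3.0)),
            ([25], (0.9, 1.0)),
            ([10, 50, 70], (0.7, 4.0)),   # dominated by the second candidate
        ]
        print(pareto_front(candidates))   # the first three candidates survive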

    Trees, Tight-Spans and Point Configurations

    Tight-spans of metrics were first introduced by Isbell in 1964 and were rediscovered and studied by others, most notably by Dress, who gave them this name. Subsequently, it was found that tight-spans could be defined for more general maps, such as directed metrics and distances, and more recently for diversities. In this paper, we show that all of these tight-spans, as well as some related constructions, can be defined in terms of point configurations. This provides a useful way to study these objects in a unified and systematic manner. We also show that by using point configurations we can recover results concerning one-dimensional tight-spans for all of the maps we consider, as well as extend these and other results to more general maps such as symmetric and unsymmetric maps. Comment: 21 pages, 2 figures
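
    To make the classical object concrete, here is a small Python check (an illustrative sketch, not the paper's point-configuration framework) that tests whether a vector f lies in the tight-span of a finite metric d, using the standard characterisation: f is feasible (f(x) + f(y) >= d(x, y) for all pairs) and pointwise minimal, i.e. f(x) = max_y (d(x, y) - f(y)).

        # Membership test for the tight-span of a finite metric; illustrative sketch only.
        def in_tight_span(f, d, tol=1e-9):
            n = len(f)
            # Feasibility: f(x) + f(y) >= d(x, y) for all pairs.
            if any(f[x] + f[y] < d[x][y] - tol for x in range(n) for y in range(n)):
                return False
            # Minimality: every coordinate is tight against some point.
            return all(
                abs(f[x] - max(d[x][y] - f[y] for y in range(n))) <= tol
                for x in range(n)
            )

        # Example: two points at distance 3; the tight-span is the segment {(t, 3 - t)}.
        d = [[0, 3], [3, 0]]
        print(in_tight_span([1.0, 2.0], d))   # True
        print(in_tight_span([2.0, 2.0], d))   # False (feasible but not minimal)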

    Computing the blocks of a quasi-median graph

    Quasi-median graphs are a tool commonly used by evolutionary biologists to visualise the evolution of molecular sequences. As with any graph, a quasi-median graph can contain cut vertices, that is, vertices whose removal disconnects the graph. These vertices induce a decomposition of the graph into blocks, that is, maximal subgraphs which do not contain any cut vertices. Here we show that the special structure of quasi-median graphs can be used to compute their blocks without having to compute the whole graph. In particular, we present an algorithm that, for a collection of $n$ aligned sequences of length $m$, can compute the blocks of the associated quasi-median graph, together with the information required to correctly connect these blocks, in run time $\mathcal{O}(n^2m^2)$, independent of the size of the sequence alphabet. Our primary motivation for presenting this algorithm is the fact that the quasi-median graph associated with a sequence alignment must contain all most parsimonious trees for the alignment, and therefore precomputing the blocks of the graph has the potential to help speed up any method for computing such trees. Comment: 17 pages, 2 figures
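
    For contrast with the specialised algorithm described above: once a graph has already been built explicitly, its block decomposition can be obtained with the standard Hopcroft-Tarjan DFS over edges. The Python sketch below does exactly that and is not the paper's method, whose point is to obtain the blocks directly from the alignment without constructing the whole quasi-median graph.

        # Standard block (biconnected-component) decomposition of an explicit graph.
        # NOT the paper's algorithm: it requires the whole graph up front.
        def blocks(adj):
            """adj: {v: set of neighbours}. Return a list of blocks as vertex sets."""
            index, low, counter = {}, {}, [0]
            edge_stack, out = [], []

            def dfs(v, parent):
                index[v] = low[v] = counter[0]
                counter[0] += 1
                for w in adj[v]:
                    if w == parent:
                        continue
                    if w not in index:
                        edge_stack.append((v, w))
                        dfs(w, v)
                        low[v] = min(low[v], low[w])
                        if low[w] >= index[v]:        # v separates w's subtree: close a block
                            block = set()
                            while True:
                                e = edge_stack.pop()
                                block.update(e)
                                if e == (v, w):
                                    break
                            out.append(block)
                    elif index[w] < index[v]:         # back edge
                        edge_stack.append((v, w))
                        low[v] = min(low[v], index[w])

            for v in adj:
                if v not in index:
                    dfs(v, None)
            return out

        # Example: two triangles glued at the cut vertex "c".
        g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d", "e"},
             "d": {"c", "e"}, "e": {"c", "d"}}
        print(blocks(g))   # two blocks: {c, d, e} and {a, b, c}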

    Learning Probabilistic Logic Programs in Continuous Domains

    The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. Perhaps the most successful paradigm in the field is probabilistic logic programming: the enabling of stochastic primitives in logic programming, which is now increasingly seen to provide a declarative background to complex machine learning applications. While many systems offer inference capabilities, the more significant challenge is that of learning meaningful and interpretable symbolic representations from data. In that regard, inductive logic programming and related techniques have paved much of the way for the last few decades. Unfortunately, a major limitation of this exciting landscape is that much of the work is limited to finite-domain discrete probability distributions. Recently, a handful of systems have been extended to represent and perform inference with continuous distributions. The problem, of course, is that classical solutions for inference are either restricted to well-known parametric families (e.g., Gaussians) or resort to sampling strategies that provide correct answers only in the limit. When it comes to learning, moreover, inducing representations remains entirely open, other than "data-fitting" solutions that force-fit points to the aforementioned parametric families. In this paper, we take the first steps towards inducing probabilistic logic programs for continuous and mixed discrete-continuous data, without being pigeon-holed to a fixed set of distribution families. Our key insight is to leverage techniques from piecewise polynomial function approximation theory, yielding a principled way to learn and compositionally construct density functions. We test the framework and discuss the learned representations. Comment: Accepted at the 2018 KR Workshop on Hybrid Reasoning and Learning
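
    The piecewise polynomial idea can be illustrated in one dimension: fit a low-degree polynomial to a histogram-based density estimate on each piece of the domain. The Python/NumPy sketch below is an assumption-laden illustration (piece boundaries, degree, and binning are arbitrary choices, and the result is not re-normalised); it is not the paper's learning framework.

        # Minimal 1-D piecewise-polynomial density sketch; illustrative only.
        import numpy as np

        def piecewise_poly_density(samples, n_pieces=4, degree=2, bins_per_piece=20):
            lo, hi = samples.min(), samples.max()
            edges = np.linspace(lo, hi, n_pieces + 1)
            pieces = []
            for a, b in zip(edges[:-1], edges[1:]):
                counts, bin_edges = np.histogram(samples, bins=bins_per_piece, range=(a, b))
                widths = np.diff(bin_edges)
                heights = counts / (len(samples) * widths)     # crude density estimate
                centres = 0.5 * (bin_edges[:-1] + bin_edges[1:])
                coeffs = np.polyfit(centres, heights, degree)  # least-squares polynomial fit
                pieces.append((a, b, coeffs))

            def density(x):
                for a, b, coeffs in pieces:
                    if a <= x <= b:
                        return max(float(np.polyval(coeffs, x)), 0.0)  # clip negative values
                return 0.0
            return density

        # Example: mixed data, a Gaussian bump over a uniform background.
        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(0.0, 1.0, 5000), rng.uniform(-4.0, 4.0, 2000)])
        f = piecewise_poly_density(data)
        print(f(0.0), f(3.0))   # density is higher near the Gaussian mode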