
    A low complexity algorithm for non-monotonically evolving fronts

    A new algorithm is proposed to describe the propagation of fronts advected in the normal direction with prescribed speed function F. The assumptions on F are that it does not depend on the front itself, but may depend on space and time; moreover, it may vanish and change sign. To solve this problem the Level-Set Method [Osher, Sethian; 1988] is widely used, and the Generalized Fast Marching Method [Carlini et al.; 2008] has recently been introduced. The novelty of our method is that its overall computational complexity is predicted to be comparable to that of the Fast Marching Method [Sethian; 1996], [Vladimirsky; 2006] in most instances. This latter algorithm is O(N^n log N^n) if the computational domain comprises N^n points. Our strategy is to use it in regions where the speed is bounded away from zero, and to switch to a different formalism where F is approximately 0. To this end, a collection of so-called sideways partial differential equations is introduced. Their solutions locally describe the evolving front and depend on both space and time. The well-posedness of those equations, as well as their geometric properties, is addressed. We then propose a convergent and stable discretization of those PDEs. These alternative representations are used to augment the standard Fast Marching Method. The resulting algorithm is presented together with a thorough discussion of its features. The accuracy of the scheme is tested when F depends on both space and time; each example yields an O(1/N) global truncation error. We conclude with a discussion of the advantages and limitations of our method.
    Comment: 30 pages, 12 figures, 1 table.
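
    For readers who want to see why the heap-based sweep gives the O(N^n log N^n) figure quoted above, here is a minimal first-order Fast Marching Method for the eikonal problem |grad T| * F = 1 on a 2-D grid, written in Python. It is a generic sketch of the standard method the paper builds on, assuming a strictly positive speed F; it says nothing about the sideways PDEs or the augmented algorithm, and the function name, grid layout, and boundary handling are our own choices.

        import heapq
        import numpy as np

        def fast_marching(speed, sources, h=1.0):
            """First-order FMM for |grad T| * F = 1 on a regular 2-D grid.

            speed   : 2-D array of strictly positive speeds F (the regime in
                      which the standard method applies).
            sources : list of (i, j) index pairs where T = 0.
            h       : grid spacing.
            """
            T = np.full(speed.shape, np.inf)
            accepted = np.zeros(speed.shape, dtype=bool)
            heap = []
            for ij in sources:
                T[ij] = 0.0
                heapq.heappush(heap, (0.0, ij))

            def update(i, j):
                # Smallest neighbour value in each coordinate direction.
                tx = min(T[i - 1, j] if i > 0 else np.inf,
                         T[i + 1, j] if i < T.shape[0] - 1 else np.inf)
                ty = min(T[i, j - 1] if j > 0 else np.inf,
                         T[i, j + 1] if j < T.shape[1] - 1 else np.inf)
                a, b = sorted((tx, ty))
                f = speed[i, j]
                # One-sided update when only one direction contributes,
                # otherwise the root of the discretised eikonal equation.
                if b - a >= h / f:
                    return a + h / f
                return 0.5 * (a + b + np.sqrt(2 * (h / f) ** 2 - (a - b) ** 2))

            # Each grid point is accepted exactly once; every heap operation
            # costs O(log N^n), giving the overall O(N^n log N^n) complexity.
            while heap:
                _, (i, j) = heapq.heappop(heap)
                if accepted[i, j]:
                    continue
                accepted[i, j] = True
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < T.shape[0] and 0 <= nj < T.shape[1]
                            and not accepted[ni, nj]):
                        t_new = update(ni, nj)
                        if t_new < T[ni, nj]:
                            T[ni, nj] = t_new
                            heapq.heappush(heap, (t_new, (ni, nj)))
            return T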

    Algorithmic statistics: forty years later

    Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. After this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory a notion of complexity of a finite object (= the amount of information in this object) is introduced; it assigns to every object some number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object some curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information (Kolmogorov complexity) theory.
    Comment: Missing proofs added.
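
    Since Kolmogorov complexity itself is uncomputable, experiments in this area typically replace C(x) by the length of a compressed encoding, which upper-bounds it up to an additive constant. The snippet below is only an illustrative aside under that standard substitution (the compressor choice and the toy strings are ours), not a construction from the survey.

        import bz2
        import os

        def complexity_upper_bound(x: bytes) -> int:
            # Compressed length as a crude, computable upper bound on the
            # Kolmogorov complexity C(x), up to the constant-size decompressor.
            return len(bz2.compress(x))

        regular = b"ab" * 5000           # highly regular: little information
        random_like = os.urandom(10000)  # incompressible with high probability
        print(complexity_upper_bound(regular), complexity_upper_bound(random_like))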

    Contextual Information Retrieval based on Algorithmic Information Theory and Statistical Outlier Detection

    The main contribution of this paper is to design an Information Retrieval (IR) technique based on Algorithmic Information Theory (using the Normalized Compression Distance, NCD), statistical techniques (outliers), and a novel organization of the database structure. The paper shows how these can be integrated to retrieve information from generic databases using long (text-based) queries. Two important problems are analyzed in the paper. On the one hand, how to detect "false positives": cases in which the distance between documents is very low even though there is no actual similarity. On the other hand, we propose a way to structure a document database whose similarity-distance estimation depends on the length of the selected text. Finally, the experimental evaluations carried out to study these problems are presented.
    Comment: Submitted to 2008 IEEE Information Theory Workshop (6 pages, 6 figures).
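
    The NCD mentioned above has a standard closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C the length produced by a real-world compressor standing in for Kolmogorov complexity. A minimal sketch follows; the choice of zlib and the toy query/document pair are assumptions for illustration, and this is not the paper's retrieval pipeline or database organization.

        import zlib

        def c(data: bytes) -> int:
            # Compressed length, the practical stand-in for Kolmogorov complexity.
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)):
            # values near 0 suggest shared information, values near 1 suggest none.
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        query = b"algorithmic information theory applied to text retrieval"
        doc   = b"information retrieval based on algorithmic information theory"
        print(ncd(query, doc))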

    Supervised Classification: Quite a Brief Overview

    The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects. Classifiers are the tools that implement the actual functional mapping from these measurements---also called features or inputs---to the so-called class label---or output. The fields of pattern recognition and machine learning study ways of constructing such classifiers. The main idea behind supervised methods is that of learning from examples: given a number of example input-output relations, to what extent can the general mapping be learned that takes any new and unseen feature vector to its correct class? This chapter provides a basic introduction to the underlying ideas of how to come to a supervised classification problem. In addition, it provides an overview of some specific classification techniques, delves into the issues of object representation and classifier evaluation, and (very) briefly covers some variations on the basic supervised classification task that may also be of interest to the practitioner.
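
    The learning-from-examples idea is easy to make concrete. The sketch below, which uses scikit-learn and the iris data set purely as illustrative choices (the chapter itself is not tied to any library), fits a classifier on example input-output pairs and then scores it on unseen feature vectors.

        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        # Measurements (features) and class labels for a set of example objects.
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0)

        clf = KNeighborsClassifier(n_neighbors=5)  # one possible classifier
        clf.fit(X_train, y_train)                  # learn the mapping from examples
        print("accuracy on unseen objects:", clf.score(X_test, y_test))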

    Segmentation of the evolving left ventricle by learning the dynamics

    We propose a method for recursive segmentation of the left ventricle (LV) across a temporal sequence of magnetic resonance (MR) images. The approach involves a technique for learning the LV boundary dynamics together with a particle-based inference algorithm on a loopy graphical model capturing the temporal periodicity of the heart. The dynamic system state is a low-dimensional representation of the boundary, and boundary estimation involves incorporating curve evolution into state estimation. By formulating the problem as one of state estimation, the segmentation at each particular time is based not only on the data observed at that instant, but also on predictions based on past and future boundary estimates. We assess and demonstrate the effectiveness of the proposed framework on a large data set of breath-hold cardiac MR image sequences.
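
    The recursive-estimation idea can be illustrated with a generic bootstrap particle filter on a scalar toy state. The sketch below is meant only to show the predict/update/resample cycle behind particle-based inference; it is not the paper's boundary representation, its loopy graphical model, or its use of future estimates, and all names, noise levels, and dynamics are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_step(particles, weights, observation,
                                 transition_std=0.1, obs_std=0.2):
            # Predict: propagate each particle through the assumed dynamics.
            particles = particles + rng.normal(0.0, transition_std, size=particles.shape)
            # Update: reweight by the Gaussian likelihood of the new observation.
            weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
            weights /= weights.sum()
            # Resample to avoid weight degeneracy.
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        # Toy usage: track a slowly drifting scalar state from noisy observations.
        # (In the segmentation setting the state would instead be a
        # low-dimensional representation of the LV boundary.)
        truth = 0.0
        particles = rng.normal(0.0, 1.0, 500)
        weights = np.full(500, 1.0 / 500)
        for _ in range(20):
            truth += 0.05
            observation = truth + rng.normal(0.0, 0.2)
            particles, weights = particle_filter_step(particles, weights, observation)
        print("estimate:", particles.mean(), "truth:", truth)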