
    Efficient Distance Transformation for Path-based Metrics

    In many applications, separable algorithms have demonstrated their efficiency for high-performance volumetric processing of shapes, such as distance transformation or medial axis extraction. In the literature, several authors have discussed conditions on the metric for it to be amenable to a separable approach. In this article, we present generic separable algorithms to efficiently compute Voronoi maps and distance transformations for a large class of metrics. Focusing on path-based norms (chamfer masks, neighborhood sequences, ...), we propose efficient algorithms to compute such volumetric transformations in dimension $n$. We describe a new $O(n\cdot N^n\cdot\log N\cdot(n+\log f))$ algorithm for shapes in an $N^n$ domain for chamfer norms with a rational ball of $f$ facets (compared to $O(f^{\lfloor n/2\rfloor}\cdot N^n)$ with previous approaches). Last, we further investigate an even more elaborate algorithm with the same worst-case complexity, but reaching a complexity of $O(n\cdot N^n\cdot\log f\cdot(n+\log f))$ experimentally, under the assumption of a regular distribution of the mask vectors.
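    As an aside, the separable principle behind such algorithms is easiest to see in the squared Euclidean case, where the n-dimensional transform reduces to independent one-dimensional passes along each axis. The sketch below is a minimal Python illustration of that principle for the L2 metric only, via the classic lower-envelope method; it is not the article's algorithm, which extends the separable scheme to path-based norms such as chamfer masks.

```python
import numpy as np

BIG = 1e18  # large finite stand-in for +infinity (keeps the arithmetic safe)

def dt_1d(f):
    """One-dimensional squared-Euclidean distance transform of a cost
    vector f, using the lower envelope of the parabolas q -> (q-i)^2 + f[i]
    (Felzenszwalb-Huttenlocher). Runs in O(N) per line."""
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)      # apex positions of envelope parabolas
    z = np.full(n + 1, np.inf)      # boundaries between consecutive parabolas
    z[0] = -np.inf
    k = 0
    for q in range(1, n):
        # intersection of the parabola rooted at q with the rightmost one kept
        s = ((f[q] + q * q) - (f[v[k]] + v[k] ** 2)) / (2 * (q - v[k]))
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] ** 2)) / (2 * (q - v[k]))
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = np.inf
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

def dt_2d(sites):
    """Separable 2D squared distance transform of a boolean site mask:
    one 1D pass along rows, then one along columns."""
    f = np.where(sites, 0.0, BIG)
    f = np.apply_along_axis(dt_1d, 1, f)   # pass over rows
    return np.apply_along_axis(dt_1d, 0, f)  # pass over columns
```

    Each axis pass depends only on the output of the previous one, which is what makes the scheme both fast and easy to parallelize; the article's contribution is to obtain this kind of per-axis decomposition for metrics whose unit balls are polytopes with $f$ facets.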

    Representation of Imprecise Digital Objects

    In this paper, we investigate a new framework to handle noisy digital objects. We consider digital closed simple 4-connected curves that are the result of an imperfect digital conversion (scan, picture, etc.), and we call imprecise digital contours such curves for which an imprecision value is known at each point. This imprecision value stands for the radius of a ball around each point, such that the result of a perfect digitization lies in the union of all the balls. In the first part, we show how to define an imprecise digital object from such an imprecise digital contour. To do so, we define three classes of pixels: inside, outside and uncertain pixels. In the second part of the paper, we build on this definition for a volumetric analysis (as opposed to contour analysis) of imprecise digital objects. From so-called toleranced balls, a filtration of objects, called λ-objects, is defined. We show how to define a set of sites to encode this filtration of objects.
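    To make the three-way pixel classification concrete, here is a small hypothetical Python sketch under one plausible reading of the definitions: a pixel is uncertain when it falls inside at least one imprecision ball, and every remaining pixel is labeled inside or outside by a point-in-polygon test against the contour. The function classify_pixels and this exact rule are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from matplotlib.path import Path

def classify_pixels(contour, radii, shape):
    """Illustrative (assumed) classification: 'uncertain' pixels are those
    covered by some ball B(p_i, r_i) centered on a contour point; the rest
    are 'inside' or 'outside' according to the polygonal contour."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

    # Uncertain: within at least one imprecision ball.
    uncertain = np.zeros(len(pix), dtype=bool)
    for (px, py), r in zip(contour, radii):
        d2 = (pix[:, 0] - px) ** 2 + (pix[:, 1] - py) ** 2
        uncertain |= d2 <= r * r

    # Inside/outside test against the closed contour polygon.
    inside = Path(contour).contains_points(pix)

    labels = np.full(len(pix), 'outside', dtype=object)
    labels[inside] = 'inside'
    labels[uncertain] = 'uncertain'   # uncertainty overrides either side
    return labels.reshape(h, w)
```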

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    Dagstuhl Reports, Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Doctor of Philosophy in Computing

    In the last two decades, an increasingly large amount of data has become available. Massive collections of videos, astronomical observations, social networking posts, network routing information, mobile location history and so forth are examples of real-world data requiring processing for applications ranging from classification to prediction. Computational resources grow at a far more constrained rate, hence the need for efficient algorithms that scale well. Over the past twenty years, high-quality theoretical algorithms have been developed for two central problems: nearest neighbor search and dimensionality reduction over Euclidean distances in worst-case distributions. These two tasks are interesting in their own right. Nearest neighbor corresponds to a database query lookup, while dimensionality reduction is a form of compression on massive data. Moreover, these are also subroutines in algorithms ranging from clustering to classification. However, many highly relevant settings and distance measures have not received attention comparable to that given to worst-case point sets in Euclidean space. The Bregman divergences include the information-theoretic distances, such as entropy, of most relevance in many machine learning applications, and yet prior to this dissertation they lacked efficient dimensionality reductions, nearest neighbor algorithms, or even lower bounds on what could be possible. Furthermore, even in the Euclidean setting, theoretical algorithms do not leverage the fact that almost all real-world datasets have significant low-dimensional substructure.

    In this dissertation, we explore different models and techniques for similarity search and dimensionality reduction. What upper bounds can be obtained for nearest neighbors under Bregman divergences? What upper bounds can be achieved for dimensionality reduction for information-theoretic measures? Are these problems indeed intrinsically of harder computational complexity than in the Euclidean setting? Can we improve the state-of-the-art nearest neighbor algorithms for real-world datasets in Euclidean space? These are the questions we investigate in this dissertation, and on which we shed some new insight.

    In the first part of the dissertation, we focus on Bregman divergences. We exhibit nearest neighbor algorithms contingent on a distributional constraint on the datasets. We next show lower bounds suggesting that this is in some sense inherent to the problem complexity. After this, we explore dimensionality reduction techniques for the Jensen-Shannon and Hellinger distances, two popular information-theoretic measures. In the second part, we show that even in the more well-studied Euclidean case, worst-case nearest neighbor algorithms can be improved upon sharply for real-world datasets with spectral structure.
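    For readers unfamiliar with the Bregman setting, the sketch below (an illustrative aside, not taken from the dissertation) shows the general definition D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, instantiates it with negative entropy to recover the KL divergence on probability vectors, and runs the brute-force nearest neighbor scan that serves as the baseline such algorithms aim to beat.

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.
    Asymmetric in general, so a nearest neighbor query must fix an argument
    order; this asymmetry is one reason the Bregman setting is harder than
    the Euclidean one."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Negative entropy as the generating function yields the (generalized) KL
# divergence; on probability vectors the linear terms cancel and we get KL.
neg_entropy = lambda x: np.sum(x * np.log(x))
grad_neg_entropy = lambda x: np.log(x) + 1.0

def nn_bregman(query, points, phi, grad_phi):
    """Brute-force linear scan for the point minimizing D_phi(query, p)."""
    divs = [bregman(phi, grad_phi, query, p) for p in points]
    return int(np.argmin(divs))

# Example: which stored distribution is KL-closest to the query?
pts = [np.array([0.7, 0.2, 0.1]), np.array([0.2, 0.5, 0.3])]
q = np.array([0.6, 0.3, 0.1])
print(nn_bregman(q, pts, neg_entropy, grad_neg_entropy))  # -> 0
```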