
    04131 Abstracts Collection -- Geometric Properties from Incomplete Data

    From 21.03.04 to 26.03.04, the Dagstuhl Seminar 04131 "Geometric Properties from Incomplete Data" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Gauss Digitization of Simple Polygons

    Digitization is the process of discretizing a continuous object X ⊂ R² to obtain a digital object D(X) ⊂ Z². This document addresses the Gauss digitization of continuous objects; in particular, we are interested in computing the digitized object of simple polygons. The Gauss digitization of X, denoted by D(X), is defined as the set of integer points lying inside X; more specifically, D(X) = X ∩ Z². This digitization problem is related to the point-in-polygon (PIP) problem in computational geometry. Indeed, computing the digitized object D(X) of a given polygonal object X is equivalent to finding all integer points lying inside or on the boundary of X. In this document, we present an implementation of the Gauss digitization of polygons using a ray-casting-based approach.
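    The ray-casting approach described above can be sketched as follows. This is a minimal illustration, not the document's implementation; note in particular that integer points lying exactly on the polygon boundary would need exact arithmetic, which this sketch does not attempt.

```python
# A minimal sketch of Gauss digitization via ray casting (not the
# document's implementation): an integer point is kept when a horizontal
# ray from it crosses the polygon boundary an odd number of times.
from math import ceil, floor

def point_in_polygon(px, py, poly):
    """Even-odd ray-casting test; `poly` is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of the horizontal ray going right from (px, py).
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def gauss_digitization(poly):
    """Integer points of the polygon's bounding box that pass the PIP test."""
    xs = [v[0] for v in poly]
    ys = [v[1] for v in poly]
    return {(x, y)
            for x in range(ceil(min(xs)), floor(max(xs)) + 1)
            for y in range(ceil(min(ys)), floor(max(ys)) + 1)
            if point_in_polygon(x, y, poly)}
```

    For instance, `gauss_digitization([(0, 0), (2, 0), (2, 2), (0, 2)])` contains the interior point (1, 1); boundary points follow the even-odd test's half-open convention rather than the inclusive definition D(X) = X ∩ Z².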

    CODE: Coherence based decision boundaries for feature correspondence

    A key challenge in feature correspondence is the difficulty of differentiating true and false matches at the local descriptor level. This forces the adoption of strict similarity thresholds that discard many true matches. However, when analyzed at a global level, false matches are usually randomly scattered while true matches tend to be coherent (clustered around a few dominant motions), creating a coherence-based separability constraint. This paper proposes a non-linear regression technique that can discover such a coherence-based separability constraint from highly noisy matches and embed it into a correspondence likelihood model. Once computed, the model can filter the entire set of nearest neighbor matches (which typically contains over 90 percent false matches) for true matches. We integrate our technique into a full feature correspondence system which reliably generates large numbers of good-quality correspondences over wide baselines where previous techniques provide few or no matches.
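    The coherence idea above can be illustrated with a much simpler mechanism than the paper's regression model: vote the displacement vectors of all matches into a coarse grid, take the dominant motion, and keep the matches close to it. The function below is a hypothetical sketch of that idea, not the CODE method itself, and the `cell` and `tol` parameters are illustrative assumptions.

```python
# A simplified illustration of coherence-based filtering (not the paper's
# CODE model): true matches cluster around a dominant motion, so keep the
# matches whose displacement lies near the strongest motion cluster.
import numpy as np

def coherence_filter(src, dst, cell=10.0, tol=15.0):
    """src, dst: (N, 2) arrays of matched keypoint coordinates.
    Bin displacement vectors into a coarse grid of size `cell`, find the
    dominant bin, and keep matches within `tol` of that motion."""
    d = dst - src                              # per-match displacement
    keys = np.round(d / cell).astype(int)      # coarse motion bins
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    dominant = uniq[np.argmax(counts)] * cell  # centre of the biggest bin
    keep = np.linalg.norm(d - dominant, axis=1) < tol
    return keep
```

    On a match set where 90 percent of displacements are random and 10 percent share one translation, the shared translation wins the vote and the coherent matches survive the filter, while most scattered false matches do not.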

    Robust and Optimal Methods for Geometric Sensor Data Alignment

    Geometric sensor data alignment - the problem of finding the rigid transformation that correctly aligns two sets of sensor data without prior knowledge of how the data correspond - is a fundamental task in computer vision and robotics. Inconveniently, outliers and non-convexity are inherent to the problem and present significant challenges for alignment algorithms. Outliers are highly prevalent in sets of sensor data, particularly when the sets overlap incompletely. Despite this, many alignment objective functions are not robust to outliers, leading to erroneous alignments. In addition, alignment problems are highly non-convex, a property arising from both the objective function and the transformation. While finding a local optimum may not be difficult, finding the global optimum is a hard optimisation problem. These key challenges have not been fully and jointly resolved in the existing literature, and so there is a need for robust and optimal solutions to alignment problems. Hence the objective of this thesis is to develop tractable algorithms for geometric sensor data alignment that are robust to outliers and not susceptible to spurious local optima. This thesis makes several significant contributions to the geometric alignment literature, founded on new insights into robust alignment and the geometry of transformations. Firstly, a novel discriminative sensor data representation is proposed that has better viewpoint invariance than generative models and is time and memory efficient without sacrificing model fidelity. Secondly, a novel local optimisation algorithm is developed for nD-nD geometric alignment under a robust distance measure. It manifests a wider region of convergence and a greater robustness to outliers and sampling artefacts than other local optimisation algorithms. Thirdly, the first optimal solution for 3D-3D geometric alignment with an inherently robust objective function is proposed. It outperforms other geometric alignment algorithms on challenging datasets due to its guaranteed optimality and outlier robustness, and has an efficient parallel implementation. Fourthly, the first optimal solution for 2D-3D geometric alignment with an inherently robust objective function is proposed. It outperforms existing approaches on challenging datasets, reliably finding the global optimum, and has an efficient parallel implementation. Finally, another optimal solution is developed for 2D-3D geometric alignment, using a robust surface alignment measure. Ultimately, robust and optimal methods, such as those in this thesis, are necessary to reliably find accurate solutions to geometric sensor data alignment problems.
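    The robust local optimisation theme above can be illustrated with a generic textbook baseline rather than the thesis's algorithms: a trimmed variant of ICP, where each iteration matches points to nearest neighbours, discards the worst residuals as presumed outliers, and re-solves the rigid transform in closed form (Kabsch/Procrustes). The `keep` fraction is an illustrative assumption.

```python
# A generic sketch of robust local rigid alignment (a trimmed-ICP
# baseline, not the thesis's method): trim the worst nearest-neighbour
# residuals each iteration for outlier robustness.
import numpy as np

def kabsch(P, Q):
    """Closed-form rotation R and translation t with R @ P[i] + t ~ Q[i]."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # reflection-safe orthogonal Procrustes
    return R, cq - R @ cp

def trimmed_icp(src, dst, iters=30, keep=0.8):
    """Align `src` to `dst` (both (N, 3) arrays); keep only the best
    `keep` fraction of nearest-neighbour pairs in each iteration."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Nearest neighbour in dst for every transformed source point.
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(1)
        res = d2[np.arange(len(src)), nn]
        sel = np.argsort(res)[: int(keep * len(src))]  # trim outliers
        R, t = kabsch(src[sel], dst[nn[sel]])
    return R, t
```

    Like any local optimiser, this converges only from a good initialisation; the thesis's point is precisely that such methods can be trapped in spurious local optima, motivating its globally optimal alternatives.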

    A Unified Surface Geometric Framework for Feature-Aware Denoising, Hole Filling and Context-Aware Completion

    Technologies for 3D data acquisition and 3D printing have developed enormously in the past few years and, consequently, the demand for 3D virtual twins of the original scanned objects has increased. In this context, feature-aware denoising, hole filling and context-aware completion are three essential (but far from trivial) tasks. In this work, they are integrated within a geometric framework and realized through a unified variational model aiming at recovering triangulated surfaces from scanned, damaged and possibly incomplete noisy observations. The underlying non-convex optimization problem incorporates two regularisation terms: a discrete approximation of the Willmore energy, forcing local sphericity and suited for the recovery of rounded features, and an approximation of the l0 pseudo-norm penalty, favouring sparsity in the normal variation. The proposed numerical method solving the model is parameterization-free, avoids expensive implicit volume-based computations, and is based on the efficient use of the Alternating Direction Method of Multipliers. Experiments show how the proposed framework can provide a robust and elegant solution suited for accurate restorations even in the presence of severe random noise and large damaged areas.
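    The combination of an l0 sparsity penalty with ADMM can be illustrated on a much simpler problem than the paper's surface model: denoising a 1-D piecewise-constant signal with a quadratic data term plus an l0 penalty on its differences, split as z = Dx. This is a hypothetical toy analogue, not the paper's formulation; the parameters `lam` and `rho` are illustrative assumptions.

```python
# A 1-D toy analogue of l0-regularised ADMM denoising (not the paper's
# surface model): minimise 0.5*||x - y||^2 + lam*||Dx||_0 by splitting
# z = Dx; the l0 proximal step is a hard threshold.
import numpy as np

def admm_l0_denoise(y, lam=0.5, rho=1.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator
    x, z, u = y.copy(), D @ y, np.zeros(n - 1)
    A = np.eye(n) + rho * D.T @ D        # normal equations for the x-step
    for _ in range(iters):
        # x-step: solve (I + rho*D^T D) x = y + rho*D^T (z - u)
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        v = D @ x + u
        # z-step: hard threshold, the proximal operator of lam*||z||_0
        z = np.where(v ** 2 > 2 * lam / rho, v, 0.0)
        u += D @ x - z                   # scaled dual update
    return x
```

    The hard threshold keeps only large jumps (genuine edges) in the difference vector and zeroes out small noise-induced differences, which is the 1-D analogue of favouring sparsity in the normal variation while preserving sharp features.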

    Analysis of Optical Flow Algorithms for Denoising

    When a video sequence is recorded in low-light conditions, the images often become noisy. Standard methods for noise reduction have difficulties with motion, but the interesting parts of a video are often the ones that are moving, for instance a burglar captured in a surveillance video. One approach for denoising video sequences is to use temporal filtering controlled by optical flow, which describes how pixels move between two image frames. Today, there exist few studies comparing how different optical flow algorithms perform on noisy video sequences. Four different algorithms have been analyzed in the thesis, together with a comparison of how well they can be used to improve the result of a temporal noise filter. The conclusion of the comparison is that optical flow is useful for noise reduction. Algorithms based on patch matching and edge consistency perform better than algorithms based on color consistency. A recommendation for future work is to combine the best parts of each algorithm to develop a new optical flow algorithm specialized for noisy image sequences, and to develop and implement a sophisticated optical-flow-based noise filter in camera hardware.
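    The flow-controlled temporal filtering described above can be sketched in a few lines. This is a minimal illustration under stated assumptions (nearest-neighbour warping, a fixed blend weight `alpha`), not the thesis's filter:

```python
# A minimal sketch of flow-guided temporal denoising (not the thesis's
# filter): warp the previous frame along the optical flow, then blend it
# with the current frame so moving pixels are averaged along their motion.
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` (H, W) by `flow` (H, W, 2), nearest neighbour."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    return frame[sy, sx]

def temporal_denoise(prev, curr, flow, alpha=0.5):
    """Blend the motion-compensated previous frame into the current one."""
    return alpha * warp(prev, flow) + (1 - alpha) * curr
```

    Averaging two frames with independent noise reduces the noise standard deviation by a factor of roughly the square root of two; the accuracy of the flow determines whether that gain also holds on moving regions, which is exactly what the thesis's comparison measures.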