
    Properties of Gauss digitized sets and digital surface integration

    This paper presents new topological and geometrical properties of Gauss digitizations of Euclidean shapes, most of them holding in arbitrary dimension d. We focus on r-regular shapes sampled by Gauss digitization at gridstep h. The digitized boundary is shown to be close to the Euclidean boundary in the Hausdorff sense, the minimum distance (√d/2)·h being achieved by the projection map ξ induced by the Euclidean distance. Although it is known that Gauss digitized boundaries may fail to be manifold when d ≥ 3, we show that non-manifoldness can only occur in places where the normal vector is almost aligned with some digitization axis, and that the limit angle decreases with h. We then take a closer look at the projection of the digitized boundary onto the continuous boundary by ξ. We show that the size of its non-injective part tends to zero with h. This leads us to study the classical digital surface integration scheme, which allocates to each surface element a measure proportional to the cosine of the angle between an estimated normal vector and the trivial surface element normal vector. We show that digital integration is convergent whenever the normal estimator is multigrid convergent, and we give an explicit convergence speed. Since convergent estimators are now available in the literature, digital integration provides a convergent measure for digitized objects.
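As a concrete illustration of the integration scheme described in this abstract, here is a small Python sketch (our illustration, not the paper's implementation): it Gauss-digitizes a disk of radius R at gridstep h, then sums over the boundary surfels a measure proportional to the cosine of the angle between an estimated normal and the surfel normal. The exact disk normal stands in for a multigrid-convergent estimator; all function names are illustrative.

```python
import math

def gauss_digitize_disk(R, h):
    # Gauss digitization: lattice points of h·Z^2 lying inside the disk
    n = int(R / h) + 2
    return {(i, j) for i in range(-n, n + 1) for j in range(-n, n + 1)
            if (i * h) ** 2 + (j * h) ** 2 <= R ** 2}

def digital_perimeter(R, h):
    inside = gauss_digitize_disk(R, h)
    total = 0.0
    for (i, j) in inside:
        # a surfel separates an inside pixel from an outside 4-neighbour
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) not in inside:
                # exact outward normal of the disk at the surfel midpoint,
                # standing in for a multigrid-convergent normal estimator
                x, y = (i + di / 2) * h, (j + dj / 2) * h
                r = math.hypot(x, y)
                nx, ny = x / r, y / r
                # measure = |cos(angle(estimated normal, surfel normal))| · h
                total += abs(nx * di + ny * dj) * h
    return total
```

With R = 1 and h = 0.01 the sum comes out close to the true perimeter 2π, and shrinking h tightens the gap, in line with the convergence result stated above.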

    Correspondence between Topological and Discrete Connectivities in Hausdorff Discretization

    We consider Hausdorff discretization from a metric space E to a discrete subspace D, which associates to a closed subset F of E any subset S of D minimizing the Hausdorff distance between F and S; this minimum distance, called the Hausdorff radius of F and written r_H(F), is bounded by the resolution of D. We call a closed set F separated if it can be partitioned into two non-empty closed subsets F_1 and F_2 whose mutual distances have a strictly positive lower bound. Assuming some minimal topological properties of E and D (satisfied in R^n and Z^n), we show that given a non-separated closed subset F of E, for any r > r_H(F), every Hausdorff discretization of F is connected for the graph with edges linking pairs of points of D at distance at most 2r. When F is connected, this holds for r = r_H(F), and its greatest Hausdorff discretization belongs to the partial connection generated by the traces on D of the balls of radius r_H(F). However, when the closed set F is separated, the Hausdorff discretizations are disconnected whenever the resolution of D is small enough. In the particular case where E = R^n and D = Z^n with norm-based distances, we generalize our previous results for n = 2. For a norm invariant under changes of signs of coordinates, the greatest Hausdorff discretization of a connected closed set is axially connected. For the so-called coordinate-homogeneous norms, which include the L_p norms, we give an adjacency graph for which all Hausdorff discretizations of a connected closed set are connected.
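The connectivity statement can be checked numerically in the special case E = R^2, D = Z^2 with the Euclidean norm. The sketch below is our illustration, not the paper's code: it approximates F (a circle) by a dense sampling, estimates r_H(F), builds the greatest Hausdorff discretization as the lattice points within r_H(F) of F, and runs a breadth-first search over the graph linking points at distance at most 2r.

```python
import math
from collections import deque

def greatest_hausdorff_discretization(samples):
    # r_H(F): the largest distance from a point of F to the lattice Z^2,
    # approximated over a dense sampling of F
    r_h = max(math.dist(s, (round(s[0]), round(s[1]))) for s in samples)
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    cand = [(i, j)
            for i in range(math.floor(min(xs)) - 1, math.ceil(max(xs)) + 2)
            for j in range(math.floor(min(ys)) - 1, math.ceil(max(ys)) + 2)]
    # greatest Hausdorff discretization: all lattice points within r_H of F
    disc = {p for p in cand
            if min(math.dist(p, s) for s in samples) <= r_h + 1e-9}
    return disc, r_h

def connected_in_2r_graph(points, r):
    # graph with edges linking lattice points at distance at most 2r
    pts = list(points)
    seen = {pts[0]}
    queue = deque([pts[0]])
    while queue:
        p = queue.popleft()
        for q in pts:
            if q not in seen and math.dist(p, q) <= 2 * r + 1e-9:
                seen.add(q)
                queue.append(q)
    return len(seen) == len(pts)
```

For a connected F such as a circle, the theorem predicts that this discretization is connected already at r = r_H(F); a separated F (e.g. two distant circles) should come out disconnected once the sampling is fine enough.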

    Courbure discrète : théorie et applications (Discrete curvature: theory and applications)

    The present volume contains the proceedings of the 2013 Meeting on discrete curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding a common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony of this work.

    Digitizations Preserving Topological and Differential Geometric Properties

    In this paper we present conditions which guarantee that every digitization process preserves important topological and differential geometric properties. These conditions also allow us to determine the correct digitization resolution for a given class of real objects. Knowing that these properties are invariant under digitization, we can then use them in feature-based recognition. Moreover, these conditions imply that only a few digital patterns can occur as neighborhoods of boundary points in the digitization. This is very useful for noise detection, since if the neighborhood of a boundary point does not match one of these patterns, it must be due to noise. Our definition of a digitization approximates many real digitization processes. The digitization process is modeled as a mapping from continuous sets representing real objects to discrete sets represented as digital images. We show that an object A and the digitization of A are homotopy equivalent. This, for example, implies that…
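The claim that only a few digital patterns occur around boundary points can be observed experimentally. The sketch below (our illustration, not the authors' code) digitizes a large disk on the integer lattice and collects the distinct 3×3 neighbourhoods of its boundary pixels; far fewer patterns occur than the 2^9 = 512 a priori possibilities, which is what makes pattern-based noise detection feasible.

```python
def digitize_disk(cx, cy, R):
    # integer lattice points inside a disk of radius R centred at (cx, cy)
    n = int(R) + 2
    return {(i, j) for i in range(-n, n + 1) for j in range(-n, n + 1)
            if (i - cx) ** 2 + (j - cy) ** 2 <= R ** 2}

def boundary_patterns(shape):
    # boundary points: inside pixels with at least one outside 4-neighbour
    boundary = [p for p in shape
                if any((p[0] + d, p[1] + e) not in shape
                       for d, e in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    # the 3x3 neighbourhood of each boundary point, as a 9-bit pattern
    patterns = {tuple((i + a, j + b) in shape
                      for a in (-1, 0, 1) for b in (-1, 0, 1))
                for (i, j) in boundary}
    return boundary, patterns
```

On a disk of radius 100 the number of distinct neighbourhood patterns is a small fraction of the number of boundary pixels; a neighbourhood outside this small catalogue would be flagged as noise.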

    Efficient image duplicate detection based on image analysis

    This thesis is about the detection of duplicated images. More precisely, the developed system is able to discriminate possibly modified copies of original images from other unrelated images. The proposed method is referred to as content-based since it relies only on content analysis techniques rather than using image tagging as done in watermarking. The proposed content-based duplicate detection system classifies a test image by associating it with a label that corresponds to one of the original known images. The classification is performed in four steps. In the first step, the test image is described by using global statistics about its content. In the second step, the most likely original images are efficiently selected using a spatial indexing technique called R-Tree. The third step consists in using binary detectors to estimate the probability that the test image is a duplicate of the original images selected in the second step. Indeed, each original image known to the system is associated with an adapted binary detector, based on a support vector classifier, that estimates the probability that a test image is one of its duplicates. Finally, the fourth and last step consists in choosing the most probable original by picking the one with the highest estimated probability. Comparative experiments have shown that the proposed content-based image duplicate detector greatly outperforms detectors using the same image description but based on simpler distance functions rather than a classification algorithm. Additional experiments are carried out so as to compare the proposed system with existing state-of-the-art methods. Accordingly, it also outperforms the perceptual distance function method, which uses similar statistics to describe the image. While the proposed method is slightly outperformed by the key points method, it is five to ten times less complex in terms of computational requirements.
Finally, note that the nature of this thesis is essentially exploratory, since it is one of the first attempts to apply machine learning techniques to the relatively recent field of content-based image duplicate detection.
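The four-step pipeline can be sketched in miniature. The code below is a toy stand-in, not the thesis system: plain mean/standard-deviation statistics replace the richer global description, a sorted distance list replaces the R-Tree spatial index, and a logistic score replaces the per-original support vector detector.

```python
import math

def features(img):
    # step 1: describe the image by global statistics (here: mean and
    # standard deviation of pixel values; the thesis uses richer statistics)
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return (mean, math.sqrt(var))

def classify(test_img, originals, k=2):
    f = features(test_img)
    dist = {name: math.dist(f, features(img)) for name, img in originals.items()}
    # step 2: keep the k most likely originals (stand-in for the R-Tree index)
    candidates = sorted(dist, key=dist.get)[:k]
    # step 3: one binary detector per original turns a distance into a
    # probability (logistic stand-in for the support vector classifier)
    probs = {name: 1.0 / (1.0 + math.exp(dist[name])) for name in candidates}
    # step 4: pick the original with the highest estimated probability
    return max(probs, key=probs.get)
```

A slightly perturbed copy of a known original should land closest to that original's features and therefore receive its label.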

