
    Recognition of feature curves on 3D shapes using an algebraic approach to Hough transforms

    Feature curves are widely adopted to highlight shape features, such as sharp lines, or to divide surfaces into meaningful segments, like convex or concave regions. Extracting these curves is not sufficient to convey prominent and meaningful information about a shape: we first have to separate the curves belonging to features from those caused by noise, and then select the lines that describe non-trivial portions of a surface. The automatic detection of such features is crucial for the identification and/or annotation of relevant parts of a given shape. The Hough transform (HT) is a feature extraction technique widely used in image analysis, computer vision and digital image processing, but for 3D shapes the extraction of salient feature curves is still an open problem. Thanks to concepts from algebraic geometry, the HT technique has recently been extended to include a vast class of algebraic curves, thus proving to be a competitive tool for yielding an explicit representation of the equations of the diverse feature lines. In this paper, for the first time we apply this novel extension of the HT technique to the realm of 3D shapes in order to identify and localize semantic features like patterns, decorations or anatomical details on 3D objects (both complete and fragments), even when the features are partially damaged or incomplete. The method recognizes various features, possibly compound, and selects the most suitable feature profiles among families of algebraic curves.
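The classical HT that this work generalizes can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration of the standard line-detection case (each point votes for all (theta, rho) pairs of lines through it), not the algebraic-curve extension described in the abstract:

```python
import numpy as np

def hough_lines(points, img_size, n_theta=180, n_rho=200):
    """Classical Hough transform for straight lines.

    Each point (x, y) votes for every (theta, rho) satisfying
    rho = x*cos(theta) + y*sin(theta); peaks in the accumulator
    correspond to lines supported by many points.
    """
    diag = np.hypot(*img_size)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.digitize(r, rhos) - 1   # bin each rho value
        acc[idx, np.arange(n_theta)] += 1
    return acc, thetas, rhos

# Points on the line y = x should produce a dominant accumulator peak
# at theta = 3*pi/4, rho = 0.
pts = [(i, i) for i in range(50)]
acc, thetas, rhos = hough_lines(pts, (100, 100))
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
```

The peak's coordinates give an explicit equation of the detected line, which is the property the algebraic-geometry extension exploits for richer curve families.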

    Direct Observation of Cosmic Strings via their Strong Gravitational Lensing Effect: II. Results from the HST/ACS Image Archive

    We have searched 4.5 square degrees of archival HST/ACS images for cosmic strings, identifying close pairs of similar, faint galaxies and selecting groups whose alignment is consistent with gravitational lensing by a long, straight string. We find no evidence for cosmic strings in five large-area HST treasury surveys (covering a total of 2.22 square degrees), or in any of 346 multi-filter guest observer images (1.18 square degrees). Assuming that simulations accurately predict the number of cosmic strings in the universe, this non-detection allows us to place upper limits on the dimensionless cosmic string tension of G mu/c^2 < 2.3 x 10^-6, and cosmic string density of Omega_s < 2.1 x 10^-5, at the 95% confidence level (marginalising over the other parameter in each case). We find four dubious cosmic string candidates in 318 single-filter guest observer images (1.08 square degrees), which we are unable to conclusively eliminate with existing data. The confirmation of any one of these candidates as a cosmic string would imply G mu/c^2 ~ 10^-6 and Omega_s ~ 10^-5. However, we estimate that there is at least a 92% chance that these string candidates are random alignments of galaxies. If we assume that these candidates are indeed false detections, our final limits on G mu/c^2 and Omega_s fall to 6.5 x 10^-7 and 7.3 x 10^-6. Due to the extensive sky coverage of the HST/ACS image archive, the above limits are universal. They are quite sensitive to the number of fields searched, and could be further reduced by more than a factor of two using forthcoming HST data. Comment: 21 pages, 18 figures
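For readers unfamiliar with how a non-detection turns into a 95% confidence upper limit, the underlying Poisson argument can be illustrated generically. This is a hypothetical sketch of the standard one-sided Poisson limit (zero observed events bound the expected count at roughly 3), not the paper's full marginalized analysis:

```python
import math

def poisson_upper_limit(n_obs: int, cl: float = 0.95) -> float:
    """Smallest mean mu such that P(N <= n_obs | mu) <= 1 - cl.

    For n_obs = 0 this reduces to mu = -ln(1 - cl) ~= 3.0:
    if the expected count were any higher, seeing nothing would
    be less than 5% likely. Solved by bisection on the Poisson CDF.
    """
    def cdf(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(200):                 # bisection to high precision
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1 - cl:
            lo = mid                     # mu too small: non-detection still likely
        else:
            hi = mid
    return hi
```

An upper limit on the expected number of detectable strings then maps onto limits on the tension and density through the predicted event rate.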

    Coronal loop detection from solar images and extraction of salient contour groups from cluttered images

    This dissertation addresses two different problems: 1) coronal loop detection from solar images; and 2) salient contour group extraction from cluttered images. In the first part, we propose two different solutions to the coronal loop detection problem. The first solution is a block-based coronal loop mining method that detects coronal loops from solar images by dividing the solar image into fixed-size blocks, labeling the blocks as "Loop" or "Non-Loop", extracting features from the labeled blocks, and finally training classifiers to generate learning models that can classify new image blocks. The block-based approach achieves 64% accuracy in 10-fold cross-validation experiments. To improve the accuracy and scalability, we propose a contour-based coronal loop detection method that extracts contours from cluttered regions, labels the contours as "Loop" or "Non-Loop", and extracts geometric features from the labeled contours. The contour-based approach achieves 85% accuracy in 10-fold cross-validation experiments, an increase of roughly 20 percentage points over the block-based approach. In the second part, we propose a method to extract semi-elliptical open curves from cluttered regions. Our method consists of the following steps: obtaining individual smooth contours along with their saliency measures; then, starting from the most salient contour, searching for possible grouping options for each contour; and continuing the grouping until an optimum solution is reached. Our work involved the design and development of a complete system for coronal loop mining in solar images, which required the formulation of new Gestalt perceptual rules and a systematic methodology to select and combine them in a fully automated, judicious manner using machine learning techniques, eliminating the need to manually set the various weight and threshold values that define an effective cost function.
    After finding salient contour groups, we close the gaps within the contours in each group and perform B-spline fitting to obtain smooth curves. Our methods were successfully applied to cluttered solar images from TRACE and STEREO/SECCHI to discern coronal loops. Aerial road images were also used to demonstrate the applicability of our grouping techniques to other contour types in other real applications.
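The gap-closing and spline-smoothing step can be illustrated with a toy example. This is a hypothetical sketch using a uniform cubic B-spline evaluated over gappy contour samples as control points, not the dissertation's actual fitting code:

```python
import numpy as np

def cubic_bspline(ctrl, samples_per_seg=10):
    """Evaluate a uniform cubic B-spline over a control polygon.

    Using noisy or gappy contour samples as control points yields a
    smooth curve that bridges small gaps, a simplified stand-in for
    the B-spline fitting described above.
    """
    P = np.asarray(ctrl, dtype=float)
    out = []
    for i in range(len(P) - 3):          # one cubic segment per 4 controls
        for t in np.linspace(0, 1, samples_per_seg, endpoint=False):
            # Uniform cubic B-spline basis (nonnegative, sums to 1).
            b = np.array([(1 - t)**3,
                          3*t**3 - 6*t**2 + 4,
                          -3*t**3 + 3*t**2 + 3*t + 1,
                          t**3]) / 6.0
            out.append(b @ P[i:i + 4])
    return np.array(out)

# Contour samples on the unit circle with a missing arc (the "gap");
# the spline smoothly bridges it.
ang = np.concatenate([np.linspace(0, 4.0, 30), np.linspace(5.0, 2*np.pi, 12)])
ctrl = np.c_[np.cos(ang), np.sin(ang)]
curve = cubic_bspline(ctrl)
```

Because the basis weights are a convex combination, the fitted curve stays inside the hull of the contour samples while interpolating smoothly across the gap.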

    Hough Transform Implementation For Event-Based Systems: Concepts and Challenges

    The Hough transform (HT) is one of the most well-known techniques in computer vision and has been the basis of many practical image processing algorithms. HT, however, is designed to work with frame-based systems such as conventional digital cameras. Recently, event-based systems such as Dynamic Vision Sensor (DVS) cameras have become popular among researchers. Event-based cameras have a significantly higher temporal resolution (1 μs), but each pixel can only detect change, not color. As such, conventional image processing algorithms cannot be readily applied to event-based output streams, and it is therefore necessary to adapt them for event-based cameras. This paper provides a systematic explanation, starting from extending the conventional HT to a 3D HT, adapting it to event-based systems, and implementing the 3D HT using Spiking Neural Networks (SNNs). Using SNNs enables the proposed solution to be easily realized in hardware on an FPGA, without requiring a CPU or additional memory. In addition, we discuss techniques for an optimal SNN-based implementation that uses an efficient number of neurons for the required accuracy and resolution along each dimension, without increasing the overall computational complexity. We hope that this will help to reduce the gap between event-based and frame-based systems.
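One way to adapt the HT to an event stream, in the spirit described above, is to let each event cast its votes immediately into an accumulator whose old votes decay over time, a leaky-integration analogue of a spiking neuron's membrane potential. The following is a hypothetical NumPy sketch of that idea for 2D lines, not the paper's SNN or FPGA design:

```python
import numpy as np

N_THETA, N_RHO, RHO_MAX, DECAY = 180, 400, 200.0, 0.999
thetas = np.linspace(0, np.pi, N_THETA, endpoint=False)
cos_t, sin_t = np.cos(thetas), np.sin(thetas)

def on_event(acc, x, y):
    """Process one DVS-style event: decay old evidence in place,
    then vote along the sinusoid rho = x*cos(theta) + y*sin(theta)."""
    acc *= DECAY                                  # leaky decay of old votes
    rho = x * cos_t + y * sin_t
    idx = ((rho + RHO_MAX) / (2 * RHO_MAX) * N_RHO).astype(int)
    acc[idx, np.arange(N_THETA)] += 1.0

acc = np.zeros((N_RHO, N_THETA))
for i in range(60):                               # event stream along y = x
    on_event(acc, i, i)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
# Peak at theta = 3*pi/4, rho = 0: the line y = x.
```

Because each event is processed independently and the accumulator forgets stale evidence, no frame buffer is needed, which is what makes a memoryless hardware mapping plausible.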

    Semi-Supervised Pattern Recognition and Machine Learning for Eye-Tracking

    The first step in monitoring an observer’s eye gaze is identifying and locating the image of their pupils in video recordings of their eyes. Current systems work under a range of conditions, but fail in bright sunlight and under rapidly varying illumination. A computer vision system was developed to assist with the recognition of the pupil in every frame of a video, in spite of the presence of strong first-surface reflections off the cornea. A modified Hough circle detector was developed that incorporates the knowledge that the pupil is darker than the surrounding iris of the eye, and is able to detect imperfect circles, partial circles, and ellipses. As part of processing, the image is modified to compensate for the distortion of the pupil caused by the out-of-plane rotation of the eye. A sophisticated noise-cleaning technique was developed to mitigate first-surface reflections, enhance edge contrast, and reduce image flare. Semi-supervised human input and validation is used to train the algorithm. The final results are comparable to those achieved by a human analyst, but require only a tenth of the human interaction.
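The darkness prior of such a modified detector can be illustrated with a toy circle Hough transform: edge pixels vote for candidate centers one radius away, but a vote only counts if the candidate center is dark. This is a hypothetical minimal sketch on synthetic data, not the dissertation's actual detector:

```python
import numpy as np

def hough_circle_centers(img, radius, dark_thresh=80, n_ang=64):
    """Simplified circle Hough voting with a darkness prior.

    Each strong-gradient pixel votes for centers one radius away;
    a vote counts only if that center pixel is dark, encoding the
    prior that the pupil is darker than the surrounding iris.
    """
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = np.argwhere(mag > mag.max() * 0.5)    # crude edge pixels
    ang = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
    acc = np.zeros((h, w))
    for y, x in edges:
        cy = (y + radius * np.sin(ang)).round().astype(int)
        cx = (x + radius * np.cos(ang)).round().astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        cy, cx = cy[ok], cx[ok]
        dark = img[cy, cx] < dark_thresh          # darkness prior
        acc[cy[dark], cx[dark]] += 1
    return acc

# Synthetic "eye": bright background with a dark disk "pupil"
# centered at (row 40, col 60) with radius 12.
img = np.full((80, 120), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:80, 0:120]
img[(yy - 40)**2 + (xx - 60)**2 <= 12**2] = 30
acc = hough_circle_centers(img, 12)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```

The darkness test suppresses spurious center candidates cast into bright regions (e.g. by corneal glints), which is the essence of the prior described above.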

    Accurate automatic localization of surfaces of revolution for self-calibration and metric reconstruction

    In this paper, we address the problem of automatic metric reconstruction of a Surface of Revolution (SOR) from a single uncalibrated view. The apparent contour and the visible portions of the imaged SOR cross sections are extracted and classified. The harmonic homology that models the image projection of the SOR is also estimated. The special care devoted to accuracy and robustness with respect to outliers makes the approach suitable for automatic camera calibration and metric reconstruction from single uncalibrated views of a SOR. Robustness and accuracy are obtained by embedding a graph-based grouping strategy (Euclidean Minimum Spanning Tree) into an Iterative Closest Point framework for projective curve alignment at multiple scales. Classification of SOR curves is achieved through a 2-dof voting scheme based on a novel parametrization of a pencil of conics. The main contribution of this work is to extend the domain of automatic single-view reconstruction from piecewise planar scenes to scenes including curved surfaces, thus making it possible to automatically create realistic image models of man-made objects. Experimental results with real images taken from the internet are reported, and the effectiveness and limitations of the approach are discussed.
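The Euclidean Minimum Spanning Tree grouping stage can be illustrated in isolation. This is a hypothetical Prim's-algorithm sketch of EMST-based grouping (cutting the longest tree edge separates well-separated curve samples), not the paper's full ICP-embedded pipeline:

```python
import numpy as np

def emst_edges(pts):
    """Prim's algorithm for the Euclidean Minimum Spanning Tree.

    Cutting EMST edges much longer than their neighbours is one
    simple way to group nearby curve samples, in the spirit of the
    graph-based grouping stage described above.
    """
    pts = np.asarray(pts, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    in_tree = np.zeros(n, bool)
    in_tree[0] = True
    best = d[0].copy()                 # distance from tree to each vertex
    parent = np.zeros(n, int)
    edges = []
    for _ in range(n - 1):
        best[in_tree] = np.inf         # never re-select tree vertices
        j = int(best.argmin())
        edges.append((int(parent[j]), j, float(best[j])))
        in_tree[j] = True
        closer = d[j] < best           # j may be a nearer attachment point
        best[closer] = d[j][closer]
        parent[closer] = j
    return edges

# Two well-separated point clusters: exactly one long "bridge" edge
# appears in the EMST, so cutting it recovers the two groups.
rng = np.random.default_rng(0)
a = rng.normal(0, 0.3, (20, 2))
b = rng.normal(10, 0.3, (20, 2))
edges = emst_edges(np.vstack([a, b]))
lengths = sorted(w for _, _, w in edges)
```

In the paper the grouping operates on curve points in the image, where the same long-edge criterion separates distinct contour fragments before alignment.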