
    Automated Quantitative Description of Spiral Galaxy Arm-Segment Structure

    We describe a system for the automatic quantification of structure in spiral galaxies. This enables translation of sky survey images into data needed to help address fundamental astrophysical questions such as the origin of spiral structure---a phenomenon that has eluded theoretical description despite 150 years of study (Sellwood 2010). The difficulty of automated measurement is underscored by the fact that, to date, only manual efforts (such as the citizen science project Galaxy Zoo) have been able to extract information about large samples of spiral galaxies. An automated approach will be needed to eliminate measurement subjectivity and handle the otherwise-overwhelming image quantities (up to billions of images) from near-future surveys. Our approach automatically describes spiral galaxy structure as a set of arcs, precisely describing spiral arm segment arrangement while retaining the flexibility needed to accommodate the observed wide variety of spiral galaxy structure. The largest existing quantitative measurements were manually guided and encompassed fewer than 100 galaxies, while we have already applied our method to more than 29,000 galaxies. Our output matches previous information, both quantitatively over small existing samples, and qualitatively against human classifications from Galaxy Zoo. Comment: 9 pages; 4 figures; 2 tables; accepted to CVPR (Computer Vision and Pattern Recognition), Providence, Rhode Island, June 16-21, 2012.
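    Spiral arm segments of the kind described above are often modeled as logarithmic-spiral arcs. As a minimal sketch, assuming the common parameterization r = r0 · exp(θ · tan φ) with pitch angle φ (the paper's actual arc model and fitting procedure may differ), one arc can be fit by linear least squares on log r:

    ```python
    import numpy as np

    def fit_log_spiral_arc(theta, r):
        """Fit r = r0 * exp(theta * tan(phi)) by least squares on log r.

        Hypothetical helper for illustration; returns (r0, phi) with phi
        the pitch angle in radians.
        """
        A = np.vstack([theta, np.ones_like(theta)]).T
        slope, intercept = np.linalg.lstsq(A, np.log(r), rcond=None)[0]
        return np.exp(intercept), np.arctan(slope)

    # Synthetic arc sample with r0 = 1.5 and a 15-degree pitch angle
    theta = np.linspace(0.0, 2.0, 50)
    r = 1.5 * np.exp(theta * np.tan(np.radians(15.0)))
    r0, phi = fit_log_spiral_arc(theta, r)
    ```

    With noiseless synthetic points the recovered parameters match the generating values to machine precision; real arm segments would require robust fitting over noisy pixel clusters.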

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Optic nerve head segmentation

    Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 μm/pixel). Optic disk localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter uses a global elliptical model and a local deformable model with variable edge-strength dependent stiffness. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all bar one usable image; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-Fair performance in 83% of cases, and performs well even on blurred images.
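    The localization step rests on template matching. As a simplified stand-in for the paper's specialized version (the template, correlation measure, and search strategy here are invented for illustration), a normalized cross-correlation search over a toy image looks like this:

    ```python
    import numpy as np

    def locate_disk(image, template):
        """Slide a zero-mean, unit-variance template over the image and
        return the (row, col) of the best-matching window's top-left corner.

        Brute-force normalized cross-correlation; illustrative only.
        """
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        best, best_pos = -np.inf, (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                w = image[i:i + th, j:j + tw]
                wn = (w - w.mean()) / (w.std() + 1e-9)
                score = float((wn * t).sum())
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos

    # Toy frame with a bright 3x3 "disk" whose matching window starts at (10, 14)
    img = np.zeros((32, 32))
    img[12:15, 16:19] = 1.0
    tmpl = np.zeros((7, 7))
    tmpl[2:5, 2:5] = 1.0
    pos = locate_disk(img, tmpl)
    ```

    At retinal-image scale one would use an FFT-based correlation rather than this O(N²·M²) loop, but the matching criterion is the same.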

    A parallel windowing approach to the Hough transform for line segment detection

    Across the wide range of image processing and computer vision problems, line segment detection has long been among the most critical topics. Detection of primitives such as linear features and straight edges has diverse applications in many image understanding and perception tasks. The research presented in this dissertation is a contribution to the detection of straight-line segments by identifying the location of their endpoints within a two-dimensional digital image. The proposed method is based on a unique domain-crossing approach that takes both image and parameter domain information into consideration. First, the straight-line parameters, i.e. location and orientation, are identified using an advanced Fourier-based Hough transform. As well as producing more accurate and robust detection of straight lines, this method has been shown to have better efficiency in terms of computational time in comparison with the standard Hough transform. Second, for each straight line a window-of-interest is designed in the image domain and the disturbance caused by other neighbouring segments is removed to capture the Hough transform butterfly of the target segment. In this way, a separate butterfly is constructed for each straight line. The boundaries of the butterfly wings are further smoothed and approximated by a curve fitting approach. Finally, segment endpoints are identified using butterfly boundary points and the Hough transform peak. Experimental results on synthetic and real images have shown that the proposed method enjoys a superior performance compared with existing similar representative works.

    IRIS Hand: Smart Robotic Prosthesis

    This project involved the design and development of an operational first prototype for the IRIS platform – an anthropomorphic robotic hand capable of autonomously determining the shape of an object and selecting the most appropriate method for grabbing said object. Autonomy of the device is achieved through the use of a unique control system which takes input from sensors embedded in the hand to determine the shape of an object, the position of each finger, grip strength, and the quality of grip. The intended use for this technology is in the medical field as a prosthesis. The advantage of our system as a prosthesis is that its autonomous functions allow the user to access a wide variety of functionality more quickly and easily than similar, commercially available products.
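    The shape-to-grip mapping at the heart of such a control system can be sketched as a simple classifier over coarse sensor features. Everything here — the feature names, thresholds, and grip labels — is a hypothetical illustration, not the IRIS control logic:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        curvature: float   # estimated surface curvature under the fingertips
        width_mm: float    # estimated object width from finger joint positions

    def select_grip(reading: SensorReading) -> str:
        """Pick a grip pattern from coarse shape features (illustrative rules)."""
        if reading.width_mm < 15:
            return "pinch"          # small objects: two-finger pinch
        if reading.curvature > 0.05:
            return "spherical"      # strongly curved surface: wrap-around grip
        return "cylindrical"        # default power grip for elongated objects

    grip = select_grip(SensorReading(curvature=0.1, width_mm=40))
    ```

    A real prosthesis would close the loop continuously, re-evaluating grip quality from the embedded sensors rather than classifying once.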

    Low to medium level image processing for a mobile robot

    The use of visual perception in autonomous mobile systems has been approached with caution by mobile robot developers because of the high computational cost and large memory requirements of most image processing operations. When vision is used, the image processing is typically implemented on multiprocessors or on complex and expensive systems, thereby requiring the robot to be wired or radio-controlled from the computer system base.