    Cleaning sky survey databases using Hough Transform and Renewal String approaches

    Large astronomical databases obtained from sky surveys such as the SuperCOSMOS Sky Survey (SSS) invariably suffer from spurious records coming from artefactual effects of the telescope, satellites and junk objects in orbit around Earth, and physical defects on the photographic plate or CCD. Though relatively small in number, these spurious records present a significant problem in many situations, where they can become a large proportion of the records potentially of interest to a given astronomer. Accurate and robust techniques are needed for locating and flagging such spurious objects, and we are undertaking a programme investigating the use of machine learning techniques in this context. In this paper we focus on the four most common causes of unwanted records in the SSS: satellite or aeroplane tracks; scratches, fibres and other linear phenomena introduced to the plate; circular halos around bright stars due to internal reflections within the telescope; and diffraction spikes near bright stars. Appropriate techniques are developed for the detection of each of these. The methods are applied to the SSS data to develop a dataset of spurious object detections, along with confidence measures, which can allow these unwanted data to be removed from consideration. These methods are general and can be adapted to other astronomical survey data. Comment: Accepted for MNRAS. 17 pages, latex2e, uses mn2e.bst, mn2e.cls, md706.bbl, shortbold.sty (all included). All figures included here as low resolution jpegs. A version of this paper including the figures can be downloaded from http://www.anc.ed.ac.uk/~amos/publications.html and more details on this project can be found at http://www.anc.ed.ac.uk/~amos/sattrackres.htm
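
As a rough illustration of the kind of line detection involved, the sketch below flags candidate linear artefacts (such as satellite tracks) in a binarised image using a standard Hough transform. The function name, the use of scikit-image, and the vote threshold are illustrative choices, not the authors' pipeline.

```python
from skimage.transform import hough_line, hough_line_peaks

def flag_linear_artefacts(object_mask, min_votes=200):
    """Return (angle, distance) parameters of strong straight lines.

    object_mask : 2-D boolean array marking detected object pixels.
    min_votes   : accumulator threshold below which candidate lines are
                  ignored (the value here is purely illustrative).
    """
    # Vote in (angle, distance) parameter space.
    accumulator, angles, distances = hough_line(object_mask)
    # Keep only peaks with enough supporting pixels.
    _, peak_angles, peak_dists = hough_line_peaks(
        accumulator, angles, distances, threshold=min_votes)
    return list(zip(peak_angles, peak_dists))
```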

    Image feature analysis using the Multiresolution Fourier Transform

    The problem of identifying boundary contours or line structures is widely recognised as an important component in many applications of image analysis and computer vision. Typical solutions to the problem employ some form of edge detection followed by line following or, more commonly in recent years, Hough transforms. Because of the processing requirements of such methods, and to try to improve the robustness of the algorithms, a number of authors have explored the use of multiresolution approaches to the problem. Non-parametric, iterative approaches such as relaxation labelling and "Snakes" have also been used. This thesis presents a boundary detection algorithm based on a multiresolution image representation, the Multiresolution Fourier Transform (MFT), which represents an image over a range of spatial/spatial-frequency resolutions. A quadtree-based image model is described in which each leaf is a region which can be modelled using one of a set of feature classes. Consideration is given to using linear and circular arc features for this modelling, and frequency domain models are developed for them. A general model-based decision process is presented and shown to be applicable to detecting local image features, selecting the most appropriate scale for modelling each region of the image, and linking the local features into the region boundary structures of the image. The use of a consistent inference process for all of the subtasks used in the boundary detection represents a significant improvement over the ad hoc assemblies of estimation and detection that have been common in previous work. Although the process is applied using a restricted set of local features, the framework presented allows for expansion of the number of boundary feature models and the possible inclusion of models of region properties. Results are presented demonstrating the effective application of these procedures to a number of synthetic and natural images.
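
The following toy sketch illustrates the general idea of a quadtree image model in which each leaf is a region simple enough to be described by a single model. The variance-based split criterion and the thresholds are placeholders, not the feature-class tests used in the thesis.

```python
import numpy as np

def quadtree(region, depth=0, max_depth=4, var_thresh=25.0):
    """Recursively split a 2-D image region until it is 'simple enough'.

    A leaf is returned when the intensity variance falls below var_thresh
    (standing in for 'fits one of the feature classes') or max_depth is hit.
    """
    h, w = region.shape
    if depth >= max_depth or min(h, w) < 2 or region.var() < var_thresh:
        return {"leaf": True, "mean": float(region.mean())}
    h2, w2 = h // 2, w // 2
    return {
        "leaf": False,
        "children": [
            quadtree(region[:h2, :w2], depth + 1, max_depth, var_thresh),
            quadtree(region[:h2, w2:], depth + 1, max_depth, var_thresh),
            quadtree(region[h2:, :w2], depth + 1, max_depth, var_thresh),
            quadtree(region[h2:, w2:], depth + 1, max_depth, var_thresh),
        ],
    }
```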

    Vanishing Point Detection with Direct and Transposed Fast Hough Transform inside the neural network

    In this paper, we suggest a new neural network architecture for vanishing point detection in images. The key element is the use of the direct and transposed Fast Hough Transforms separated by convolutional layer blocks with standard activation functions. This allows the network output to be expressed in the coordinates of the input image, so the coordinates of the vanishing point can be calculated by simply selecting the maximum. Moreover, it is proved that calculation of the transposed Fast Hough Transform can be performed using the direct one. The use of integral operators enables the neural network to rely on global rectilinear features in the image, and so it is well suited to detecting vanishing points. To demonstrate the effectiveness of the proposed architecture, we use a set of images from a DVR and show its superiority over existing methods. Note also that the proposed neural network architecture essentially repeats the process of direct and back projection used, for example, in computed tomography. Comment: 9 pages, 9 figures, submitted to "Computer Optics"; extra experiment added, new theorem proof added, references added; typos corrected
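
A minimal NumPy sketch of the underlying mechanism, not the paper's network: a dense line-integral ("Hough-like") matrix H plays the role of the direct transform, its transpose plays the role of the transposed transform, and for a synthetic pencil of lines the argmax of the backprojection lands at (or very near) their common intersection. The dense-matrix construction, grid sizes and the squaring nonlinearity are illustrative stand-ins for the Fast Hough Transform and the convolutional blocks.

```python
import numpy as np

def build_line_integral_matrix(size, n_angles=60, n_offsets=64):
    """Dense matrix H whose rows sum image pixels along straight lines."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    offsets = np.linspace(-1.5 * size, 1.5 * size, n_offsets)
    tol = 0.5 * (offsets[1] - offsets[0])
    H = np.zeros((n_angles * n_offsets, size * size))
    for i, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        d = xs * np.cos(theta) + ys * np.sin(theta)   # normal-form line coordinate
        for j, rho in enumerate(offsets):
            H[i * n_offsets + j] = (np.abs(d - rho) <= tol).ravel()
    return H

# Synthetic pencil of lines meeting at a known point (row, col).
size = 32
vp_true = (20, 12)
img = np.zeros((size, size))
for slope in (-1.0, -0.3, 0.4, 1.2):
    for c in range(size):
        r = int(round(vp_true[0] + slope * (c - vp_true[1])))
        if 0 <= r < size:
            img[r, c] = 1.0

H = build_line_integral_matrix(size)
hough = H @ img.ravel()                    # direct transform: each line becomes a peak
hough = hough ** 2                         # crude peak enhancement (stand-in for conv blocks)
back = (H.T @ hough).reshape(size, size)   # transposed transform, back in image coordinates
vp_est = np.unravel_index(np.argmax(back), back.shape)   # should land at or near vp_true
```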

    A Comparative study of Arabic handwritten characters invariant feature

    This paper addresses invariant features of Arabic handwritten characters. It presents the results of a comparative study of feature extraction techniques for handwritten characters based on the Hough transform, Fourier transform, Wavelet transform and Gabor filter. The results show that the Hough transform and Gabor filter are insensitive to both rotation and translation; the Fourier transform is sensitive to rotation but insensitive to translation; and, in contrast to the Hough transform and Gabor filter, the Wavelet transform is sensitive to both rotation and translation.
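
The translation/rotation behaviour of the Fourier descriptor can be seen in a few lines of NumPy: the magnitude spectrum is unchanged by a (circular) translation of the glyph but changes under rotation. The toy glyph and the 90-degree rotation below are illustrative, not the paper's data or pipeline.

```python
import numpy as np

def fft_magnitude_feature(glyph):
    """Translation-invariant descriptor: flattened magnitude spectrum."""
    return np.abs(np.fft.fft2(glyph)).ravel()

glyph = np.zeros((32, 32))
glyph[10:22, 14:18] = 1.0                     # crude vertical stroke

shifted = np.roll(glyph, shift=(3, 5), axis=(0, 1))   # circular translation
rotated = glyph.T                                     # 90-degree rotation

f0 = fft_magnitude_feature(glyph)
print(np.allclose(f0, fft_magnitude_feature(shifted)))   # True: shift-invariant
print(np.allclose(f0, fft_magnitude_feature(rotated)))   # False: rotation-sensitive
```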

    Ship Wake Detection in SAR Images via Sparse Regularization

    In order to analyse synthetic aperture radar (SAR) images of the sea surface, ship wake detection is essential for extracting information on the wake-generating vessels. One possibility is to assume a linear model for wakes, in which case detection approaches are based on transforms such as the Radon and Hough transforms, which express bright (dark) lines as peak (trough) points in the transform domain. In this paper, ship wake detection is posed as an inverse problem, with the associated cost function including a sparsity-enforcing penalty, namely the generalized minimax-concave (GMC) function. Although the GMC penalty is itself non-convex, it is designed so that the overall cost function remains convex. The proposed solution is based on a Bayesian formulation, whereby the point estimates are recovered using maximum a posteriori (MAP) estimation. To quantify the performance of the proposed method, various types of SAR images are used, corresponding to TerraSAR-X, COSMO-SkyMed, Sentinel-1, and ALOS2. The performance of various priors in solving the proposed inverse problem is first studied by investigating the GMC along with the L1, Lp, nuclear and total variation (TV) norms. We show that the GMC achieves the best results, and we subsequently study the merits of the corresponding method in comparison to two state-of-the-art approaches for ship wake detection. The results show that our proposed technique offers the best performance, achieving an 80% success rate. Comment: 18 pages
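
The inverse-problem formulation can be sketched generically as recovering a sparse transform-domain representation x from data y = Ax + noise. In the placeholder code below, plain L1 soft-thresholding inside a proximal-gradient (ISTA) loop stands in for the GMC penalty and the MAP solver actually used in the paper; A is a generic linear operator.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (stand-in for the GMC proximal step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, step=None, n_iter=500):
    """Minimise 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # gradient of the quadratic data term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```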

    Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy

    In this paper we present a simple and robust method for self-correction of camera distortion using single images of scenes which contain straight lines. Since the most common distortion can be modelled as radial distortion, we illustrate the method using the Harris radial distortion model, but the method is applicable to any distortion model. The method is based on transforming the edgels of the distorted image to a 1-D angular Hough space, and optimizing the distortion correction parameters which minimize the entropy of the corresponding normalized histogram. Properly corrected imagery will have fewer curved lines, and therefore less spread in Hough space. Since the method does not rely on any image structure beyond the existence of edgels sharing some common orientations, and does not use edge fitting, it is applicable to a wide variety of image types. For instance, it can be applied equally well to images of texture with weak but dominant orientations, or images with strong vanishing points. Finally, the method is performed on both synthetic and real data, revealing that it is particularly robust to noise. Comment: 9 pages, 5 figures; corrected errors in equation 1
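
A rough sketch of the entropy criterion follows. The one-parameter radial correction, the segment-based orientation update and the optimiser bounds are illustrative placeholders rather than the exact Harris model and search procedure used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def corrected_angles(k, xy, theta, eps=1.0):
    """Push a short segment at each edgel through a one-parameter radial
    correction (placeholder model) and return the corrected orientations.

    xy    : (N, 2) edgel positions, centred on the principal point.
    theta : (N,) edge orientations in radians.
    k     : radial distortion coefficient.
    """
    d = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    p0, p1 = xy - 0.5 * eps * d, xy + 0.5 * eps * d
    def undistort(p):
        r2 = np.sum(p ** 2, axis=1, keepdims=True)
        return p / (1.0 + k * r2)                 # placeholder radial model
    v = undistort(p1) - undistort(p0)
    return np.mod(np.arctan2(v[:, 1], v[:, 0]), np.pi)

def angular_entropy(angles, n_bins=180):
    """Entropy of the normalised 1-D angular Hough histogram."""
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Given edgel positions `xy` and orientations `theta` from an edge detector,
# the correction parameter giving the most concentrated histogram is, e.g.:
# k_best = minimize_scalar(
#     lambda k: angular_entropy(corrected_angles(k, xy, theta)),
#     bounds=(-1e-5, 1e-5), method="bounded").x
```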

    On The Continuous Steering of the Scale of Tight Wavelet Frames

    In analogy with steerable wavelets, we present a general construction of adaptable tight wavelet frames, with an emphasis on scaling operations. In particular, the derived wavelets can be "dilated" by a procedure comparable to the operation of steering steerable wavelets. The fundamental aspects of the construction are the same: an admissible collection of Fourier multipliers is used to extend a tight wavelet frame, and the "scale" of the wavelets is adapted by scaling the multipliers. As an application, the proposed wavelets can be used to improve frequency localization. Importantly, the localized frequency bands specified by this construction can be scaled efficiently using matrix multiplication.
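
As a toy 1-D analogue (not the paper's tight-frame construction): a band-pass analysis filter defined by a smooth Fourier multiplier m(omega) is "dilated" simply by evaluating the same multiplier at scale * omega. The bump profile and parameters below are placeholders.

```python
import numpy as np

def bump_multiplier(omega, centre=1.0, width=0.5):
    """Smooth band-pass profile centred on |omega| = centre (illustrative)."""
    return np.exp(-((np.abs(omega) - centre) ** 2) / (2.0 * width ** 2))

def analyse_at_scale(signal, scale, dt=1.0):
    """Filter `signal` with the multiplier evaluated at scale * omega,
    i.e. 'dilate' the analysis band by rescaling the multiplier."""
    omega = 2.0 * np.pi * np.fft.fftfreq(signal.size, d=dt)
    return np.fft.ifft(np.fft.fft(signal) * bump_multiplier(scale * omega)).real
```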