    Automatic extraction of planetary image features

    A method for the extraction of lunar and/or planetary image features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient allows regions that appear as closed contours in the gradient to be segmented.
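
    A minimal sketch of the watershed-on-gradient step described above, assuming scikit-image. The Gaussian-plus-Sobel stage stands in for the gradient computed inside the Canny operator, and the marker-seeding heuristic (local gradient minima), the function name, and the parameter values are illustrative assumptions, not the patent's exact procedure.

        import numpy as np
        from skimage.filters import gaussian, sobel
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def extract_small_rocks(image, sigma=1.0, footprint=5):
            """Segment closed-contour regions (candidate small rocks) in a
            grayscale planetary image given as a 2D float array."""
            # Gaussian smoothing followed by the Sobel magnitude approximates
            # the gradient stage of the Canny operator.
            smoothed = gaussian(image, sigma=sigma)
            gradient = sobel(smoothed)

            # Local minima of the gradient seed the catchment basins, so
            # regions bounded by closed contours in the gradient end up as
            # distinct labels after flooding.
            minima = peak_local_max(-gradient,
                                    footprint=np.ones((footprint, footprint)))
            markers = np.zeros(gradient.shape, dtype=int)
            markers[tuple(minima.T)] = np.arange(1, len(minima) + 1)

            return watershed(gradient, markers)  # one integer label per region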

    Measuring the core rotation of red giant stars

    Red giant stars present mixed modes, which behave as pressure modes in the convective envelope and as gravity modes in the radiative interior. This mixed character makes it possible to probe the physical conditions in their cores. With the advent of long-duration time series from space-borne missions such as CoRoT and Kepler, it has become possible to study red giant core rotation. As more than 15 000 red giant light curves have been recorded, it is crucial to develop a robust and efficient method to measure this rotation. Measurements of the mean core rotation for thousands of stars would open the way to a deeper understanding of the physical mechanisms that are able to transport angular momentum from the core to the envelope in red giants. In this work, we detail the principle of the method we developed to obtain automatic measurements of the red giant mean core rotation. This method is based on the stretching of the oscillation spectra and on the use of the so-called Hough transform. We finally validate this method for stars on the red giant branch, where overlapping rotational splittings and mixed-mode spacings produce complicated frequency spectra.
    Comment: 8 pages, 3 figures, 1 table
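
    As a rough illustration of a Hough-transform vote, the sketch below estimates a mean rotational splitting from a list of already-detected mixed-mode peak frequencies. It is a one-dimensional analogue under our own simplifying assumptions (symmetric dipole triplets, peaks taken from the stretched spectrum); the function name and the synthetic values are illustrative, not from the paper.

        import numpy as np

        def mean_splitting_hough(peak_freqs, dnu_min=0.1, dnu_max=1.0, n_bins=500):
            """Each peak pair votes its frequency difference into an
            accumulator over candidate splittings; adjacent components of a
            dipole multiplet are separated by the splitting itself, so the
            accumulator peaks at the mean rotational splitting."""
            bins = np.linspace(dnu_min, dnu_max, n_bins)
            accumulator = np.zeros(n_bins)
            for i, f1 in enumerate(peak_freqs):
                for f2 in peak_freqs[i + 1:]:
                    candidate = abs(f2 - f1)
                    if dnu_min <= candidate <= dnu_max:
                        idx = min(np.searchsorted(bins, candidate), n_bins - 1)
                        accumulator[idx] += 1
            return bins[np.argmax(accumulator)]

        # Synthetic check: triplets at 100, 110, 120 muHz, each split by 0.35 muHz.
        centers = np.array([100.0, 110.0, 120.0])
        peaks = np.sort(np.concatenate([centers - 0.35, centers, centers + 0.35]))
        print(mean_splitting_hough(peaks))  # ~0.35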

    The Hough Transform and the Impact of Chronic Leukemia on the Compact Bone Tissue from CT-Images Analysis

    Computational analysis of X-ray Computed Tomography (CT) images allows the assessment of alterations of bone structure in adult patients with Advanced Chronic Lymphocytic Leukemia (ACLL), and may even offer a powerful tool to assess the development of the disease (prognostic potential). The crucial requirement for this kind of analysis is a pattern recognition method able to accurately segment the intra-bone space in clinical CT images of the human skeleton. Our purpose is to show how this task can be accomplished by a procedure based on the Hough transform technique for special families of algebraic curves. The dataset used for this study is composed of sixteen subjects: eight control subjects, one ACLL survivor, and seven ACLL victims. We apply the Hough transform approach to the set of CT images of appendicular bones, detecting the compact and trabecular bone contours as ellipses, and we use the computed semi-axis values to infer information on bone alterations in the population affected by ACLL. The effectiveness of the method is demonstrated by comparison against ground truth. We show that features depending on the semi-axis values detect a statistically significant difference between the class formed by the control subjects plus the ACLL survivor and the class of ACLL victims.
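
    A hedged sketch of the ellipse-detection step, assuming scikit-image's Hough transform for ellipses applied to an edge map of a single CT slice. The Canny parameter, accumulator settings, and size bounds below are illustrative assumptions; the paper's procedure is tuned to clinical CT of appendicular bones.

        import numpy as np
        from skimage.feature import canny
        from skimage.transform import hough_ellipse

        def bone_ellipses(ct_slice, n_best=2):
            """Return the strongest elliptical contours (e.g. outer compact
            and inner trabecular bone boundaries) of a 2D CT slice."""
            edges = canny(ct_slice, sigma=2.0)          # bone contours as edges
            candidates = hough_ellipse(edges, accuracy=20, threshold=100,
                                       min_size=20, max_size=120)
            candidates.sort(order='accumulator')        # strongest votes last
            # The semi-axis values (a, b) are the features used to compare
            # subject classes.
            return [(e['yc'], e['xc'], e['a'], e['b'], e['orientation'])
                    for e in candidates[-n_best:]]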

    Automatic Reconstruction of Fault Networks from Seismicity Catalogs: 3D Optimal Anisotropic Dynamic Clustering

    We propose a new pattern recognition method that is able to reconstruct the 3D structure of the active part of a fault network from the spatial locations of earthquakes. The method is a generalization of the so-called dynamic clustering method, which partitions a set of data points into clusters using a global minimization criterion over the spatial inertia of those clusters. The new method improves on it by taking into account the full spatial inertia tensor of each cluster, in order to partition the dataset into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e. the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to a real catalog consisting of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in Southern California, the reconstructed plane segments agree fully with faults already known from geological maps, or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in multi-scale studies of the inner structure of fault zones.
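
    A compact sketch of the anisotropic dynamic-clustering loop, assuming hypocenters given as an (N, 3) NumPy array. Each cluster is summarized by the plane through its centroid whose normal is the smallest-eigenvalue eigenvector of the cluster's inertia (covariance) tensor, and events are reassigned to the nearest plane. This is our own illustrative reimplementation, not the authors' code, and it omits the step that picks the number of segments from the target resolution.

        import numpy as np

        def fit_plane(cluster):
            """Best-fitting plane of a point cloud: centroid plus the
            eigenvector of the covariance tensor with smallest eigenvalue."""
            centroid = cluster.mean(axis=0)
            _, vecs = np.linalg.eigh(np.cov((cluster - centroid).T))
            return centroid, vecs[:, 0]

        def anisotropic_clustering(points, k, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            labels = rng.integers(0, k, len(points))    # random initial partition
            for _ in range(n_iter):
                planes = []
                for j in range(k):
                    members = points[labels == j]
                    if len(members) < 3:                # degenerate cluster: reseed
                        members = points[rng.integers(0, len(points), 3)]
                    planes.append(fit_plane(members))
                # Reassign each event to the plane it is closest to.
                dists = np.stack([np.abs((points - c) @ n) for c, n in planes])
                labels = dists.argmin(axis=0)
            return labels, planes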

    A. Eye Detection Using Variants of Hough Transform B. Off-Line Signature Verification

    PART (A): EYE DETECTION USING VARIANTS OF HOUGH TRANSFORM: Broadly, eye detection is the process of locating the human eye in a face image. Previous approaches use complex techniques such as neural networks, Radial Basis Function networks, and Multi-Layer Perceptrons. In the developed project, the human eye is modeled as a circle (the iris, the dark circular region of the eye) enclosed inside an ellipse (the eyelashes). Because of the sharp intensity variation of the iris with respect to the inner region of the eyelashes, the probability of false acceptance is very low; since the input is a face image, that probability is reduced further. The Hough transform is used for circle (iris) and ellipse (eyelash) detection; it was the obvious choice because of its robustness to gaps in boundaries and to noise in the image. Image smoothing is applied to reduce noise and to prepare the image for subsequent processing such as edge detection (Prewitt method). Compared to the aforementioned models, the proposed model is simple and efficient. It could be improved further by including features such as the orientation angle of the eyelashes (assumed constant in the proposed model) and by making the parameters adaptive.

    PART (B): OFF-LINE SIGNATURE VERIFICATION: The handwritten signature is widely used for the authentication and identification of individuals, and has long been a target of forgery. A novel off-line signature verification algorithm has been developed and tested successfully. Since a handwritten signature can be highly variable, containing many curves and features, techniques such as character recognition cannot be applied to signature verification. The proposed algorithm uses a soft-computing technique, clustering, to extract feature points from the image of the signature. These feature points (cluster centers) are updated with the clustering update equations for a required number of iterations, after which they serve as the extracted feature points of the signature image. To account for the natural variation in a person's signature, 6 to 8 signature images of the same person are taken and the feature points are trained on them. The trained feature points are compared with those of a test signature image and, based on a specific threshold, the signature is declared genuine or forged. This approach works well when there is high variation in the original signature, but for signatures with low variation it produces incorrect results.
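
    A hedged sketch of the circle (iris) stage of Part (A), using OpenCV's Hough circle transform on a smoothed grayscale face image. The parameter values and the function name are illustrative assumptions, OpenCV's implementation applies its own internal Canny edge stage rather than the Prewitt operator mentioned above, and the ellipse (eyelash) fit is omitted.

        import cv2
        import numpy as np

        def detect_iris(face_image_path):
            """Return candidate iris circles as (x_center, y_center, radius)."""
            gray = cv2.imread(face_image_path, cv2.IMREAD_GRAYSCALE)
            gray = cv2.medianBlur(gray, 5)              # smoothing, as in the text
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                                       minDist=40, param1=100, param2=30,
                                       minRadius=5, maxRadius=40)
            if circles is None:
                return []
            return np.round(circles[0]).astype(int).tolist()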