
    On a shape adaptive image ray transform

    A conventional approach to image analysis is to perform low-level feature extraction (such as edge detection) separately and follow it with high-level feature extraction to determine structure (e.g. by collecting edge points using the Hough transform). The original Image Ray Transform (IRT) demonstrated the capability to extract structures at a low level. Here we extend the IRT to add shape specificity, making it select specific shapes rather than just edges; the new capability is achieved by the addition of a single parameter that controls which shape is selected by the extended IRT. The extended approach can then perform low- and high-level feature extraction simultaneously. We show how the IRT process can be extended to focus on chosen shapes such as lines and circles. We confirm the new capability by applying conventional methods for exact shape location. We analyze performance with images from the Caltech-256 dataset and show that the new approach can indeed select chosen shapes. Further research could capitalize on the new extraction ability to extend descriptive capability.
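
    As a point of comparison, the conventional two-step pipeline the abstract contrasts with the IRT (low-level edge detection followed by a Hough transform over the collected edge points) might look roughly like the sketch below, using OpenCV; the file name and every parameter value are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

# Conventional pipeline: low-level edge evidence (Canny, applied internally by
# HOUGH_GRADIENT) followed by a high-level circular Hough transform that votes
# for circle centres and radii in an accumulator.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
blurred = cv2.medianBlur(img, 5)                     # suppress speckle noise

circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
    param1=200,   # upper Canny threshold used internally for edge detection
    param2=60,    # accumulator threshold: lower values admit weaker circles
    minRadius=10, maxRadius=120,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, 255, 2)           # mark each detected circle
```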

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    An Iris Authentication System Based on Artificial Neural Networks

    An iris authentication system verifies the authenticity of a person based on their iris features. The iris features are extracted through a wavelet transform of the iris isolated from modified iris images. A level-5 wavelet decomposition is performed on the images, and the resulting low-frequency wavelet coefficients represent the inputs to the artificial neural network. The artificial neural network reads these features as inputs and classifies each set of inputs according to its target identity. This authentication system currently classifies up to 10 people. The irises used for classification represent ideal situations with minimum eyelash and eyelid interference.
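
    A minimal sketch of such a pipeline, assuming PyWavelets for the level-5 decomposition and a small scikit-learn MLP standing in for the paper's unspecified network; the Haar wavelet, the hidden-layer size, and the synthetic placeholder images are assumptions rather than details from the abstract.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def iris_features(iris_img, wavelet="haar", level=5):
    """Level-5 2-D wavelet decomposition; keep only the low-frequency
    approximation sub-band as the feature vector (wavelet choice assumed)."""
    coeffs = pywt.wavedec2(iris_img, wavelet, level=level)
    return coeffs[0].ravel()  # coeffs[0] holds the approximation coefficients

# Synthetic stand-ins for segmented iris images: 10 identities (as in the
# abstract), five noisy samples each.
rng = np.random.default_rng(0)
prototypes = rng.random((10, 64, 64))
X, y = [], []
for identity, proto in enumerate(prototypes):
    for _ in range(5):
        X.append(iris_features(proto + 0.05 * rng.standard_normal(proto.shape)))
        y.append(identity)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(np.stack(X), np.array(y))

probe = iris_features(prototypes[3] + 0.05 * rng.standard_normal((64, 64)))
print(clf.predict(probe[None, :]))  # expected output: [3]
```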

    Site Characterization Using Integrated Imaging Analysis Methods on Satellite Data of the Islamabad, Pakistan, Region

    We develop an integrated digital imaging analysis approach to produce a first-approximation site characterization map for Islamabad, Pakistan, based on remote-sensing data. We apply both pixel-based and object-oriented digital imaging analysis methods to characterize detailed (1:50,000) geomorphology and geology from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite imagery. We use stereo-correlated relative digital elevation models (rDEMs) derived from ASTER data, as well as spectra in the visible near-infrared (VNIR) to thermal infrared (TIR) domains. The resulting geomorphic units in the study area are classified as mountain (including the Margala Hills and the Khairi Murat Ridge), piedmont, and basin terrain units. The local geologic units are classified as limestone in the Margala Hills and the Khairi Murat Ridge and sandstone rock types for the piedmonts and basins. Shear-wave velocities for these units are assigned in ranges based on established correlations in California: Vs30 values greater than 500 m/sec for mountain units, 200–600 m/sec for piedmont units, and less than 300 m/sec for basin units. While the resulting map provides the basis for incorporating site response in an assessment of seismic hazard for Islamabad, it also demonstrates the potential use of remote-sensing data for site characterization in regions where only limited conventional mapping has been done.
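
    The quoted Vs30 assignments amount to a small lookup from terrain unit to velocity range, sketched below; the unit labels are placeholders for whatever classes the classified map actually uses.

```python
# Vs30 ranges (m/sec) quoted in the abstract, keyed by terrain unit;
# None marks an unbounded end of the range.
VS30_RANGES = {
    "mountain": (500, None),  # > 500 m/sec (Margala Hills, Khairi Murat Ridge)
    "piedmont": (200, 600),   # 200-600 m/sec
    "basin":    (None, 300),  # < 300 m/sec
}

def vs30_range(terrain_unit: str) -> tuple:
    """Return the (min, max) Vs30 bounds assigned to a classified terrain unit."""
    return VS30_RANGES[terrain_unit]
```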

    Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method

    Computer-assisted diagnosis systems could reduce workloads and give objective diagnoses to ophthalmologists. At the first level of automated screening systems, feature extraction is the fundamental step. One of these retinal features is the fovea. The fovea is a small fossa on the fundus, which is represented by a deep-red or red-brown color in color retinal images. By observing retinal images, it appears that the main vessels diverge from the optic nerve head and follow a specific course that can be geometrically modeled as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabolic curve. Therefore, based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model. With respect to this core vascular structure, we can thus detect the fovea in fundus images. For the vessel segmentation, our algorithm addresses the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. In order to extract blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to be associated with peaks in Radon space. The largest vessels, found using a high threshold on the Radon transform, determine the main course or overall configuration of the blood vessels, which, when fitted to a parabola, leads to the subsequent localization of the fovea. In effect, with an accurate fit, the fovea normally lies along the line joining the vertex and the focus. The darkest region along this line is indicative of the fovea. To evaluate our method, we used 220 fundus images: 200 from a rural database (MUMS-DB) and 20 from a public one (DRIVE). Among the 20 images of the public database (DRIVE), we detected the fovea in 85% of them; for the 200 images of the MUMS-DB database, we detected the fovea correctly in 83% of them.
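
    The local Radon and parabolic-fitting steps might be sketched as below, assuming scikit-image's radon; strongest_vessel_in_window and fovea_axis_from_parabola are hypothetical helpers, and the overlapping-window tiling, vessel validation, and darkest-region search along the vertex-focus line from the abstract are omitted.

```python
import numpy as np
from skimage.transform import radon

def strongest_vessel_in_window(window, angles=np.arange(0.0, 180.0)):
    """Local Radon transform of one sub-window: a high-contrast vessel segment
    shows up as a peak in Radon space; return its (angle, offset index)."""
    sinogram = radon(window, theta=angles, circle=False)
    offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return angles[angle_idx], offset_idx

def fovea_axis_from_parabola(xs, ys):
    """Fit y = a*x^2 + b*x + c to vessel centre points and return the vertex
    and focus; the fovea is then sought along the line joining these points."""
    a, b, c = np.polyfit(xs, ys, deg=2)
    x_v = -b / (2.0 * a)
    y_v = c - b**2 / (4.0 * a)               # vertex of the parabola
    focus = (x_v, y_v + 1.0 / (4.0 * a))     # focus lies 1/(4a) from the vertex
    return (x_v, y_v), focus

# Toy check: a bright diagonal "vessel" in one 64x64 window.
window = np.zeros((64, 64))
window[np.arange(64), np.arange(64)] = 1.0
print(strongest_vessel_in_window(window))    # peak angle matches the diagonal
```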

    Iris Segmentation Analysis using Integro-Differential Operator and Hough Transform in Biometric System

    Iris segmentation is the foremost part of an iris recognition system. There are four steps in iris recognition: segmentation, normalization, encoding and matching. Here, iris segmentation has been implemented using Hough Transform and Integro-Differential Operator techniques. The performance of an iris recognition system depends on the segmentation and normalization techniques. Iris recognition systems capture an image of an individual's eye. The captured image is then segmented and normalized for the encoding process. The matching technique, Hamming distance, is used to check whether the iris codes in the database match the newly enrolled iris code at the verification stage. These processes produce values for the average pupil circle, average iris circle, error rate and edge points. The values provide acceptable measures of accuracy, namely the False Accept Rate (FAR) and False Reject Rate (FRR). The Hough Transform algorithm provides better performance at the expense of higher computational complexity. It is used to evolve a contour that can fit a non-circular iris boundary; however, edge information is required to control the evolution and stopping of the contour. The performance of the Hough Transform on the CASIA database was 80.88%, due to the lack of edge information. The Genuine Accept Rate (GAR) using the Hough Transform is 98.9%, compared with 98.6% using the Integro-Differential Operator.
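
    The Hamming-distance matching step described above can be sketched as follows; the 2048-bit code length, the random toy codes, and the 0.33 decision threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (unoccluded by eyelids/eyelashes) in both masks."""
    valid = mask_a & mask_b
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

# Toy comparison with random codes; a real system would use the codes produced
# by the encoding step after segmentation and normalization.
rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, 2048).astype(bool)   # 2048 bits: illustrative size
probe = enrolled.copy()
probe[:100] ^= True                                # flip 100 bits (~0.05 distance)
mask = np.ones(2048, dtype=bool)

hd = fractional_hamming_distance(enrolled, probe, mask, mask)
print(hd, hd < 0.33)  # distance below the illustrative threshold -> accept
```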