
    Multi-Scale Edge Detection Algorithms and Their Information-Theoretic Analysis in the Context of Visual Communication

    The unrealistic assumption that noise can be modeled as independent, additive, and uniform can lead to problems when edge detection methods are applied to low signal-to-noise ratio (SNR) images. The main reason is that the filter scale and the gradient threshold are difficult to determine at a regional or local scale when the noise is estimated only globally. In this dissertation, we therefore attempt to solve these problems by using more than one filter to detect edges and by discarding global thresholding in edge discrimination. The proposed multi-scale edge detection algorithms use the multi-scale description to detect and localize edges. Furthermore, instead of a single default global threshold, a local dynamic threshold is introduced to discriminate between edges and non-edges. The proposed algorithms also perform connectivity analysis on the edge maps to ensure that small, disconnected edges are removed. Experiments in which the methods are applied to a sequence of images of the same scene at different SNRs show them to be robust to noise. Additionally, a new noise reduction algorithm based on the multi-scale edge analysis is proposed. In general, an edge, being high-frequency information in an image, is filtered or suppressed by image smoothing. With the help of the multi-scale edge detection algorithms, the overall edge structure of the original image can be preserved while only the isolated edge responses that represent noise are filtered out. Experimental results show that this method is robust to high levels of noise and correctly preserves edges. We also propose a new method for evaluating the performance of edge detection algorithms, based on an information-theoretic analysis of edge detection in the context of an end-to-end visual communication channel.
    We use the Shannon mutual information between the scene and the output of the edge detection algorithm to evaluate performance. An edge detection algorithm is considered to perform well only if the information rate from the scene to the edge map approaches the maximum possible. This information-theoretic analysis therefore provides a new way to compare different edge detection operators for a given end-to-end image processing system.
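    The local dynamic threshold idea can be sketched as follows. This is a minimal NumPy illustration, not the dissertation's algorithm: it assumes a simple mean-plus-k-standard-deviations rule over a sliding window, and the window size `win` and factor `k` are illustrative choices.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D float image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def local_dynamic_threshold(mag, win=7, k=1.0):
    """Mark a pixel as an edge when its gradient magnitude exceeds the
    mean + k*std of its win-by-win neighbourhood, instead of comparing
    against one global threshold."""
    pad = win // 2
    padded = np.pad(mag, pad, mode='reflect')
    # Stack every shifted view of the window (clear rather than fast).
    windows = np.stack([padded[i:i + mag.shape[0], j:j + mag.shape[1]]
                        for i in range(win) for j in range(win)])
    return mag > windows.mean(axis=0) + k * windows.std(axis=0)
```

    Because the threshold adapts to local statistics, a weak edge in a quiet region can still be detected while the same magnitude in a busy region is suppressed.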

    Visual focus of attention estimation using eye center localization

    Estimating a person's visual focus of attention (VFOA) plays a crucial role in practical systems such as human-robot interaction. Extracting the VFOA cue is challenging because gaze directionality is difficult to recognize. In this paper, we propose an improved integrodifferential approach that represents gaze by efficiently and accurately localizing the eye center in lower-resolution images. The proposed method exploits both the drastic intensity change between the iris and the sclera and the gray level of the eye center. An optimized number of kernels is convolved with the original eye-region image, and the eye center is located by searching for the maximum ratio derivative of the neighboring curve magnitudes in the convolution image. Experimental results confirm that the algorithm outperforms state-of-the-art methods in terms of computational cost, accuracy, and robustness to illumination changes.
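    The core idea, a sharp dark-to-light jump in circular contour integrals at the iris boundary, can be sketched with a simplified integrodifferential operator. This is not the paper's kernel-based formulation; the brute-force search, radius range, and sampling density are illustrative assumptions.

```python
import numpy as np

def circle_mean(img, cy, cx, r, n=64):
    """Mean intensity sampled on a circle of radius r centred at (cy, cx)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def locate_eye_center(img, radii=range(3, 10)):
    """Return the (row, col) where the radial derivative of the circular
    contour integral is largest -- the dark iris against the brighter
    sclera produces the sharpest jump exactly at the eye centre."""
    best, centre = -np.inf, (0, 0)
    h, w = img.shape
    m = max(radii)
    for cy in range(m + 1, h - m - 1):
        for cx in range(m + 1, w - m - 1):
            means = [circle_mean(img, cy, cx, r) for r in radii]
            score = np.max(np.diff(means))  # biggest dark-to-light step
            if score > best:
                best, centre = score, (cy, cx)
    return centre
```

    A production method would replace the exhaustive search with convolution, which is where the paper's optimized kernels come in.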

    Recognition of License Plates and Optical Nerve Pattern Detection Using Hough Transform

    The Hough transform is a global feature-detection technique used in image processing, computer vision, and image analysis. Its main purpose is to detect prominent lines of the object under consideration, which it does through a voting process. The first part of this work uses the Hough transform as a feature vector, tested on the Indian license plate system with fonts of the UK standard and UK standard 3D, which has ten slots for characters and numbers, so ten sub-images are obtained. These sub-images are fed to the Hough transform, and the Hough peaks are extracted; the first two peaks are taken into account for recognition. Edge detection, along with image rotation, is applied prior to the Hough transform in order to obtain the edges of the grayscale image. The image rotation angle is then varied, and the best results are retained. The second part of this work uses the Hough transform and Hough peaks to examine the optical nerve patterns of the eye, using the freely available RIM-ONE database. The optical nerve pattern is unique to every human being and remains almost unchanged throughout life. The purpose is therefore to detect changes in the pattern and report abnormalities, making automatic systems capable enough to replace experts in the field. The Hough transform and Hough peaks are used for this detection, and the fact that these nerve patterns are unique in every sense is confirmed.
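    The voting process and peak extraction described above can be sketched in a few lines of NumPy. This is the textbook (rho, theta) line Hough transform, not the exact implementation used in the work; the accumulator resolution is an illustrative choice.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Accumulate votes in (rho, theta) space: every edge pixel votes for
    all lines rho = x*cos(theta) + y*sin(theta) passing through it."""
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))          # rho range is [-diag, diag]
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_map)
    for y, x in zip(ys, xs):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

def hough_peaks(acc, k=2):
    """Return the k strongest (rho_index, theta_index) accumulator cells --
    the 'Hough peaks' used as features."""
    flat = np.argsort(acc, axis=None)[::-1][:k]
    return [np.unravel_index(i, acc.shape) for i in flat]
```

    The two strongest peaks form a compact descriptor of each character sub-image, which is the role they play in the recognition stage above.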

    An application of ARX stochastic models to iris recognition

    We present a new approach for iris recognition based on stochastic autoregressive models with exogenous input (ARX). Iris recognition is a method of identifying persons based on analysis of the eye's iris. A typical iris recognition system is composed of four phases: image acquisition and preprocessing, iris localization and extraction, iris feature characterization, and comparison and matching. The main contribution of this work is in the characterization of iris features using ARX models: every iris in the database is represented by an ARX model learned from data. In the comparison and matching step, data taken from an iris sample are substituted into every ARX model and residuals are generated. An accept or reject decision is made based on the residuals and on an experimentally calculated threshold. We conducted experiments with two different databases. Under certain conditions, we found a successful identification rate on the order of 99.7% for one database and 100% for the other.
    Applications in Artificial Intelligence - Applications. Red de Universidades con Carreras en Informática (RedUNCI).
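    The model-then-residual matching scheme can be sketched with a plain autoregressive model (the exogenous input is omitted here for brevity, so this is AR rather than ARX, and the feature sequences and model order are illustrative):

```python
import numpy as np

def fit_ar(signal, order=4):
    """Least-squares AR(order) fit: s[t] ~ sum_k a_k * s[t-k]."""
    y = signal[order:]
    X = np.column_stack([signal[order - k: len(signal) - k]
                         for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def residual_energy(signal, coeffs):
    """Mean squared one-step prediction error of `signal` under a model --
    small when the signal was generated by that model."""
    order = len(coeffs)
    y = signal[order:]
    X = np.column_stack([signal[order - k: len(signal) - k]
                         for k in range(1, order + 1)])
    return np.mean((y - X @ coeffs) ** 2)
```

    Matching then amounts to substituting the probe sequence into every enrolled model and accepting the identity whose residual energy falls below the experimentally chosen threshold.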

    Edge Contours

    The accuracy with which a computer vision system can identify objects in an image depends heavily on the accuracy of the low-level processes that identify which points lie on the edges of an object. To remove noise and fine texture, an image is usually smoothed before edge detection is performed. This smoothing causes edges to be displaced from their actual locations in the image. Knowledge of the changes that occur at different degrees of smoothing (scales), and of the physical conditions that cause them, is essential to proper interpretation of the results. In this work, the amount of delocalization and the magnitude of the response to the normalized gradient of Gaussian operator are analyzed as functions of σ, the standard deviation of the Gaussian. From this analysis it was determined that edge points can be characterized by slope, contrast, and proximity to other edges. The analysis is also used to define how large the neighborhood of an edge point must be to guarantee that it contains the delocalized edge point at another scale, when σ is known. Given this theoretical background, an algorithm was developed to obtain sequential lists of edge points. It uses multiple scales to combine the superior localization and weak-edge detection of smaller scales with the noise suppression of larger scales. The edge contours obtained with this method are significantly better than those achieved with a single scale. A second algorithm was developed to represent sets of edge contour points as active contours, so that interaction with a higher-level process is possible; such a process could, for example, determine where corners or discontinuities should appear.
    The algorithm developed here allows hard constraints and represents a significant improvement in speed over previous algorithms that allow hard constraints, being linear rather than cubic in complexity.
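    The delocalization phenomenon quantified above is easy to reproduce in one dimension: for two edges close together, the rising-edge maximum of the gradient-of-Gaussian response is pushed away from the neighbouring falling edge as σ grows. The bar width and σ values below are illustrative, not taken from the analysis.

```python
import numpy as np

def gaussian_derivative(signal, sigma):
    """Response to the sigma-normalized first derivative of a Gaussian;
    its local maxima mark rising edges of the smoothed signal."""
    radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = -x * np.exp(-x**2 / (2.0 * sigma**2)) / (sigma**2 * np.sqrt(2 * np.pi))
    return np.convolve(signal, kernel * sigma, mode='same')

signal = np.zeros(100)
signal[40:43] = 1.0  # a narrow bar: two opposite edges only 3 pixels apart

fine = gaussian_derivative(signal, sigma=1.0)
coarse = gaussian_derivative(signal, sigma=3.0)
# At sigma=1 the rising-edge maximum sits at the edge; at sigma=3 the
# nearby falling edge drags it one pixel outward -- delocalization.
```

    This is exactly why the multi-scale algorithm searches a σ-dependent neighborhood when tracking an edge point from a coarse scale back to a fine one.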

    Real-Time Edge Detection using Sundance Video and Image Processing System

    Edge detection from images is one of the most important concerns in digital image and video processing. Advances in technology have greatly benefited edge detection and opened new avenues for research, one such field being real-time video and image processing. This work consists of the implementation of various image processing algorithms, such as edge detection using the Sobel, Prewitt, Canny, and Laplacian operators, and a different technique is reported to increase edge detection performance. Real-time algorithmic computations can have a high level of time complexity, and hence the use of the Sundance video and image processing system is proposed here for the implementation of such algorithms. The module is based on the Sundance SMT339 processor, a dedicated high-speed image processing module for use in a wide range of image analysis systems, combining a DSP and an FPGA: the image processing engine is based on the Texas Instruments TMS320DM642 video digital signal processor, and a powerful Virtex-4 FPGA (XC4VFX60-10) is used on board as the FPGA processing unit for image data. It is observed that techniques which follow the staged process of detecting noise and then filtering the noisy pixels achieve better performance than others. In this thesis, such schemes for the Sobel, Prewitt, Canny, and Laplacian detectors are proposed.
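    For reference, the Sobel operator mentioned above reduces to two small convolutions and a magnitude threshold. This NumPy sketch is a software illustration only, not the DSP/FPGA implementation; the threshold value is an illustrative assumption.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """2-D sliding-window correlation with reflect padding. For the
    antisymmetric Sobel kernels only the sign differs from convolution,
    and the sign is discarded by the magnitude anyway."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_edges(img, thresh=1.0):
    """Binary edge map from the Sobel gradient magnitude."""
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy) >= thresh
```

    The Prewitt detector is identical except the centre row/column weight is 1 instead of 2, which is why the two are implemented on the same hardware pipeline.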

    SAR Image Edge Detection: Review and Benchmark Experiments

    Edges are distinct geometric features crucial to higher-level object detection and recognition in remote-sensing processing, which is key to surveillance and gathering up-to-date geospatial intelligence. Synthetic aperture radar (SAR) is a powerful form of remote sensing. However, edge detectors designed for optical images tend to perform poorly on SAR images due to the strong speckle noise, which causes false positives (type I errors). Therefore, many researchers have proposed edge detectors tailored specifically to the characteristics of SAR images. Although these edge detectors may achieve effective results in their own evaluations, the comparisons tend to include a very limited number of (simulated) SAR images. As a result, the generalized performance of the proposed methods is not truly reflected, as real-world patterns are much more complex and diverse. From this emerges another problem: a quantitative benchmark is missing in the field, so it is not currently possible to fairly evaluate edge detection methods for SAR images. In this paper, we aim to close these gaps by providing an extensive experimental evaluation of edge detection on SAR images. To that end, we propose the first benchmark on SAR image edge detection, established by evaluating various freely available methods, including methods considered to be the state of the art.
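    A concrete illustration of why SAR calls for tailored detectors: speckle is multiplicative, so classical SAR detectors use a ratio of local averages rather than a difference, since the noise factor cancels in the ratio. The sketch below is a generic ratio-of-averages detector for vertical edges, offered only as an illustration of the principle, not as one of the benchmarked methods; the half-window size and threshold are illustrative.

```python
import numpy as np

def roa_vertical(img, half=3, thresh=0.6):
    """Ratio-of-averages response for vertical edges: compare the mean of
    the left and right half-windows at every column. With multiplicative
    speckle, both means scale with the same noise level, so the ratio
    stays near 1 inside homogeneous regions."""
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for x in range(half, w - half):
        left = img[:, x - half:x].mean(axis=1)
        right = img[:, x + 1:x + 1 + half].mean(axis=1)
        ratio = np.minimum(left / right, right / left)  # symmetric in the two sides
        edges[:, x] = ratio < thresh
    return edges
```

    A difference-based gradient on the same image would scale with the local mean intensity, firing on bright homogeneous regions, which is precisely the false-positive problem described above.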

    Image mosaicing of panoramic images

    Image mosaicing is the combining or stitching of several images of a scene or object, taken from different angles, into a single image with a greater angle of view. It is a developing field that has seen considerable advancement in recent years, and many algorithms have been developed. Our work is based on the feature-based approach to image mosaicing. The steps in image mosaicing consist of feature point detection, feature point descriptor extraction, and feature point matching. The RANSAC algorithm is applied to eliminate mismatches and to obtain the transformation matrix between the images. The input image is then transformed with the right mapping model for image stitching. This paper therefore proposes an algorithm for mosaicing two images efficiently using the Harris corner feature detection method and RANSAC feature matching, followed by image transformation, warping, and blending.
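    The RANSAC step can be sketched for the simplest mapping model, a pure translation, where a single point match determines the hypothesis (real mosaicing pipelines estimate a homography from four matches, so this is a deliberately reduced illustration; the iteration count and tolerance are assumptions):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, rng=None):
    """Estimate a 2-D translation from point matches contaminated by
    mismatches: repeatedly pick one match, hypothesize the translation it
    implies, count inliers, and keep the hypothesis with the most support."""
    rng = np.random.default_rng(rng)
    best_t, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                          # hypothesis from one match
        err = np.linalg.norm(src + t - dst, axis=1)
        count = int((err < tol).sum())
        if count > best_count:
            best_count, best_t = count, t
    # Refit on all inliers of the best hypothesis (least squares = mean).
    err = np.linalg.norm(src + best_t - dst, axis=1)
    return (dst - src)[err < tol].mean(axis=0)
```

    Mismatched feature pairs vote for scattered translations and never accumulate support, which is how RANSAC "eliminates the variety of mismatches" before the warp is computed.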

    A statistical sampling strategy for iris recognition

    We present a new approach for iris recognition based on a random sampling strategy. Iris recognition is a method of identifying individuals based on analysis of the eye's iris. This technique has received a great deal of attention lately, mainly due to the iris's unique characteristics: a highly randomized appearance and the impossibility of altering its features. A typical iris recognition system is composed of four phases: image acquisition and preprocessing, iris localization and extraction, iris feature characterization, and comparison and matching. Our work uses standard integrodifferential operators to locate the iris. We then process the iris image with histogram equalization to compensate for illumination variations. The characterization of iris features is performed using accumulated histograms built from randomly selected sub-images of the iris. A comparison is then made between the accumulated histograms of pairs of iris samples, and a decision is taken based on their differences and on an experimentally calculated threshold. We ran experiments with a database of 210 iris images from 70 individuals and found a successful identification rate on the order of 97%.
    Applications in Artificial Intelligence - Applications. Red de Universidades con Carreras en Informática (RedUNCI).
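    The accumulated-histogram comparison can be sketched as follows. The bin count, distance measure (sum of absolute differences), and acceptance threshold are illustrative assumptions, not the paper's experimentally calibrated values.

```python
import numpy as np

def accumulated_histogram(patch, bins=16):
    """Cumulative (accumulated) grey-level histogram of a sub-image,
    normalized so patches of different sizes are comparable."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return np.cumsum(hist) / patch.size

def histogram_distance(a, b):
    """Sum of absolute differences between two accumulated histograms."""
    return np.abs(a - b).sum()

def match(sample_patches, enrolled_patches, threshold=0.5):
    """Accept when the mean accumulated-histogram distance over the
    randomly selected sub-image pairs falls below the threshold."""
    d = np.mean([histogram_distance(accumulated_histogram(s),
                                    accumulated_histogram(e))
                 for s, e in zip(sample_patches, enrolled_patches)])
    return d < threshold, d
```

    Accumulating the histogram makes the descriptor monotone and less sensitive to individual bin fluctuations than the raw histogram, which suits the small randomly sampled sub-images.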