693 research outputs found

    Automatic Color Segmentation of Images with Application to Detection of Variegated Coloring in Skin Tumors

    A description is given of a computer vision system, developed to serve as the front-end of a medical expert system, that automates visual feature identification for skin tumor evaluation. The general approach is to create different software modules that detect the presence or absence of critical features. Image analysis with artificial intelligence (AI) techniques, such as heuristics incorporated into image processing algorithms, is the primary approach. On a broad scale, this research addressed the problem of segmenting a digital image based on color information. The algorithm developed to segment the image strictly on the basis of color information was shown to be a useful aid in identifying the tumor border, ulcers, and other features of interest. As a specific application example, the method was applied to 200 digitized skin tumor images to identify the feature called variegated coloring. Extensive background information is provided, and the development of the algorithm is described.
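The abstract does not reproduce the thesis's heuristic algorithm, but segmentation "strictly on the basis of color information" can be sketched as clustering pixels in RGB space; the function and toy data below are illustrative, not from the paper.

```python
import numpy as np

def kmeans_color_segment(pixels, k=2, iters=20):
    """Cluster an (N, 3) array of RGB pixels into k color classes.
    Initialization uses the first k pixels, so the toy example below
    is deterministic."""
    centers = pixels[:k].astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign every pixel to its nearest color center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean color of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Toy data: two red pixels and two blue pixels (rows are RGB).
pixels = np.array([[250.0, 10.0, 10.0],
                   [10.0, 10.0, 240.0],
                   [245.0, 5.0, 12.0],
                   [8.0, 12.0, 250.0]])
labels, centers = kmeans_color_segment(pixels, k=2)
```

Clustering purely on color, as here, ignores spatial position; the thesis's modules then interpret the resulting color classes as border, ulcer, and other features.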

    Human-centered display design : balancing technology & perception


    Robust Specularity Removal from Hand-held Videos

    Specular reflections arise when one records a photo or video through a transparent glass medium or from opaque surfaces such as plastics, ceramics, polyester, and human skin; the observed image is well described as the superposition of a transmitted layer and a reflection layer. These specular reflections often confound algorithms developed for image analysis, computer vision, and pattern recognition. To obtain a pure diffuse reflection component, the specularity (highlights) must be removed. To handle this problem, a novel and robust algorithm is formulated. The contributions of this work are three-fold. First, the smoothness of the video, its temporal coherence, and its illumination changes are preserved by reducing the flickering and jagged edges caused by hand-held video acquisition and homography transformation, respectively. Second, the algorithm improves upon state-of-the-art algorithms by automatically selecting the region of interest (ROI) for all frames, and reduces computational time and complexity by operating on the luminance (Y) channel and by exploiting the Augmented Lagrange Multiplier (ALM) method with Alternating Direction Minimization (ADM) to facilitate the derivation of solution algorithms. Third, a quantitative metric is devised that objectively measures the amount of specularity in each frame of a hand-held video. The proposed specularity-removal algorithm is compared against existing state-of-the-art algorithms using this newly developed metric. Experimental results validate that the developed algorithm has superior performance in terms of computation time, quality, and accuracy.
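The ALM/ADM optimization itself is not spelled out in the abstract, but a much simpler classical baseline for the same diffuse-plus-specular decomposition is the "specular-free image": under roughly white illumination the highlight lifts R, G, and B about equally, so the per-pixel channel minimum approximates it. The sketch below illustrates that baseline, not the paper's method.

```python
import numpy as np

def specular_free(img):
    """Pseudo specular-free image: subtract, per pixel, the minimum of
    the three color channels. Under a white light source the specular
    (highlight) component is roughly equal in R, G and B, so this
    removes it while keeping the chromatic (diffuse) information."""
    spec = img.min(axis=-1, keepdims=True)   # per-pixel highlight estimate
    return img - spec, spec[..., 0]

# A highlight pixel (all channels lifted) next to a purely diffuse red pixel.
img = np.array([[[200.0, 180.0, 170.0],    # strong specular component
                 [120.0, 20.0, 10.0]]])    # diffuse red, little highlight
diffuse, spec = specular_free(img)
```

A per-frame specularity score in the spirit of the paper's metric could then be as simple as the mean of `spec`, though the actual metric in the thesis is not given here.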

    Image enhancement for underwater mining applications

    The exploration of water bodies, from the sea to flooded land spaces, has seen a continuous increase with new technologies such as robotics. Underwater imagery is one of the main sensing resources used, but it suffers from additional problems caused by the environment. Multiple methods and techniques provide ways to correct color, recover poor-quality images, and enhance features. In this thesis we present an image cleaning and enhancement technique that first performs color correction on images using the Dark Channel Prior (DCP) and then converts the corrected images into the Long, Medium and Short (LMS) color space, the space in which the human eye perceives color. This work was developed at LSA (Laboratório de Sistema Autónomos), a robotics and autonomous systems laboratory. Our objective is to improve the quality of images for, and taken by, robots, with particular emphasis on underwater flooded mines. The thesis describes the architecture and the developed solution, presents a comparative analysis of our proposed solution against state-of-the-art methods, and presents and discusses results from missions performed by the robot in operational mine scenarios, allowing the solution to be characterized and validated.
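As a rough illustration of the Dark Channel Prior step mentioned above (the thesis's full pipeline, including the LMS conversion, is not reproduced), the dark channel of an image is the minimum intensity over all color channels within a local patch; in scatter-free regions it is close to zero, and large values indicate haze or backscatter.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an (H, W, 3) image: per-pixel minimum over the
    color channels, followed by a local minimum filter of size `patch`."""
    h, w, _ = img.shape
    chan_min = img.min(axis=2)                  # min over R, G, B
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode='edge')
    dark = np.empty_like(chan_min)
    for i in range(h):                          # naive local min filter
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

# Toy scene: a uniform gray image with one pixel containing a dark channel.
img = np.ones((3, 3, 3)) * 100.0
img[1, 1] = [5.0, 200.0, 200.0]
dark = dark_channel(img)
```

In DCP-based correction the dark channel is used to estimate the transmission map, which in turn guides the color restoration; that estimation step is omitted here.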

    Real-time object detection using monocular vision for low-cost automotive sensing systems

    This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking, depth estimation using monocular vision and, finally, object detection by fusing visual saliency and depth information. Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with a very low signal-to-noise ratio. The methodology is based on image gradients, which are redefined to take noise into account as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature, with its strength proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real time while performing image stabilisation at minimal computational cost. This means that, despite camera vibration, the algorithm can accurately predict the real-world coordinates of each image pixel in real time by comparing each motion vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth.
This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with sub-pixel accuracy. It is shown that the local frequency at which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth-map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. The approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain.
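The DIVoG construction is multi-level and its exact pyramid is not detailed in the abstract; a single-level sketch of the division-of-Gaussians idea — dividing a finely smoothed image by a coarsely smoothed one so that pixels differing from their surround stand out — might look as follows (kernel sizes and the epsilon are illustrative assumptions):

```python
import numpy as np

def blur(img, kernel):
    """Separable blur: convolve each row, then each column, with a 1-D kernel."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, tmp)

def saliency_divog_sketch(img):
    """Single-level division-of-Gaussians saliency: the ratio of a finely
    blurred image to a coarsely blurred one, re-centred so that pixels
    matching their surround score near zero."""
    fine = blur(img, np.array([0.25, 0.5, 0.25]))   # small smoothing kernel
    coarse = blur(img, np.ones(7) / 7.0)            # large surround kernel
    eps = 1e-6                                      # avoids division by zero
    return np.abs((fine + eps) / (coarse + eps) - 1.0)

gray = np.zeros((15, 15))
gray[7, 7] = 1.0                  # a single bright outlier in a flat scene
sal = saliency_divog_sketch(gray)
```

The outlier pixel receives a much higher score than the flat background; the published method repeats this comparison across multiple pyramid levels and combines the results.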

    Semantic color constancy

    Color constancy aims to perceive the actual color of an object, disregarding the effect of the light source. Recent works showed that utilizing the semantic information in an image enhances the performance of computational color constancy methods. Considering the recent success of segmentation methods and the increased number of labeled images, we propose a color constancy method that combines individual illuminant estimations of detected objects, computed using the classes of the objects and their associated colors. We then introduce a weighting system that values the applicability of each object class to the color constancy problem. Lastly, we introduce another metric expressing how well a detected object fits the learned model of its class. Finally, we evaluate the proposed method on a popular color constancy dataset, confirming that each weight addition enhances the performance of the global illuminant estimation. Experimental results are promising, outperforming conventional methods while competing with the state-of-the-art methods. -- M.S. - Master of Science
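The thesis's per-class weights and estimators are learned; as a simplified sketch of the combination step only, per-object illuminant estimates (here plain gray-world rather than the class-specific estimator) can be merged with per-class weights. All weight values and the toy regions below are made up for illustration.

```python
import numpy as np

def gray_world(region):
    """Gray-world illuminant estimate for one object region:
    the mean RGB vector, normalized to unit length."""
    est = region.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def combine_estimates(regions, weights):
    """Weighted average of per-object illuminant estimates, followed by
    renormalization, giving a single global illuminant direction."""
    ests = np.array([gray_world(r) for r in regions])
    w = np.asarray(weights, float)
    global_est = (w[:, None] * ests).sum(axis=0) / w.sum()
    return global_est / np.linalg.norm(global_est)

# Two toy object regions: one reddish, one bluish. The second class is
# given zero weight, as if deemed unreliable for color constancy.
r1 = np.ones((4, 4, 3)) * np.array([2.0, 1.0, 1.0])
r2 = np.ones((4, 4, 3)) * np.array([1.0, 1.0, 2.0])
est = combine_estimates([r1, r2], [1.0, 0.0])
```

With the bluish region weighted out, the global estimate follows the reddish object, which is exactly the behaviour the class weighting is meant to produce.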

    A Study on Image Enhancement Techniques using YCbCr Color Space Methods

    We propose an image enhancement scheme using the YCbCr color space, which better exposes the features of the processed input image. The acquired images are classified into three types: word-document images, MRI images, and scenery images. First, the acquired inputs are converted to gray scale and plotted with the normalized histogram. Then, using the color space methods, the images are converted into the YCbCr representation and their components are separated into individual modules (Y, Cb and Cr components). The processed image is thus split into its luminance and chrominance features: in a gray-scale image, Y is the luminance, also known as the single component; in a color image, Cb and Cr are the blue and red chromaticity components. Further, Hue, Saturation and Intensity components are derived from the same samples. The proposed technique then shows better performance than the other methods in the enhancement of images corrupted by Gaussian noise. Experimental results show that the proposed method achieves good enhancement in visual quality.
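The luminance/chrominance split described above is conventionally the full-range BT.601 (JFIF) transform; whether the paper uses exactly this variant is not stated, but under it a gray pixel maps to Cb = Cr = 128 with Y equal to its intensity.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 (JFIF) RGB -> YCbCr conversion for an
    (..., 3) array of RGB values in [0, 255]."""
    m = np.array([[ 0.299,     0.587,     0.114   ],   # Y  (luminance)
                  [-0.168736, -0.331264,  0.5     ],   # Cb (blue chroma)
                  [ 0.5,      -0.418688, -0.081312]])  # Cr (red chroma)
    ycc = rgb @ m.T
    ycc[..., 1:] += 128.0        # chroma channels are offset to mid-range
    return ycc
```

Enhancement schemes of this kind typically operate on the Y channel alone (e.g. histogram equalization) and then convert back, so that chromaticity is left untouched.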