
    Intensity and Compactness Enabled Saliency Estimation for Leakage Detection in Diabetic and Malarial Retinopathy

    Leakage in retinal angiography is currently a key feature for confirming the activity of lesions in the management of a wide range of retinal diseases, such as diabetic maculopathy and paediatric malarial retinopathy. This paper proposes a new saliency-based method for the detection of leakage in fluorescein angiography. A superpixel approach is first employed to divide the image into meaningful patches (superpixels) at different levels. Two saliency cues, intensity and compactness, are then proposed to estimate the saliency of each individual superpixel at each level. The saliency maps at different levels for the same cue are fused with an averaging operator, and the two maps for the different cues are fused by pixel-wise multiplication. Leaking regions are finally detected by thresholding the saliency map, followed by graph-cut segmentation. The proposed method has been validated on the only two publicly available datasets: one for malarial retinopathy and the other for diabetic retinopathy. The experimental results show that it outperforms one of the latest competitors, performs as well as a human expert for leakage detection, and outperforms several state-of-the-art saliency detection methods.
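    The fusion pipeline described above is easy to sketch. The snippet below is a minimal illustration, not the authors' code: per-level maps for each cue are averaged, the two cue maps are combined by pixel-wise multiplication, and a plain threshold stands in for the final graph-cut refinement; all function and parameter names are assumptions.

```python
import numpy as np

def fuse_saliency(intensity_maps, compactness_maps, threshold=0.5):
    """Illustrative fusion of per-level saliency maps (hypothetical names).

    intensity_maps / compactness_maps: lists of 2-D arrays, one per
    superpixel level, already normalised to [0, 1].
    """
    # Average the maps produced at different levels for the same cue.
    intensity = np.mean(np.stack(intensity_maps), axis=0)
    compactness = np.mean(np.stack(compactness_maps), axis=0)

    # Combine the two cues by pixel-wise multiplication, so a pixel is
    # salient only where both cues agree.
    fused = intensity * compactness

    # A simple threshold stands in for the graph-cut refinement step.
    return fused, fused > threshold
```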

    Content-driven superpixels and their applications

    This thesis develops a new superpixel algorithm, content-driven superpixels (CDS), that achieves excellent visual reconstruction of the original image. It is highly stable across multiple random initialisations because it produces superpixels that correspond directly to local image complexity, growing superpixels and dividing them where the image varies. Existing analyses were not sufficient to account for these properties, so new measures of oversegmentation are introduced that give new insight into the optimum superpixel representation. As a consequence, CDS is shown to have properties that have eluded previous approaches, such as initialisation invariance and stability. The completely unsupervised nature of CDS makes it highly suitable for tasks such as processing a database of images of unknown complexity. These new superpixel properties enable new applications of superpixel pre-processing: image segmentation, image compression, scene classification, and focus detection. In addition, a new method of objectively analysing regions of focus has been developed using light-field photography.
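    As a rough illustration of the grow-and-divide idea, the toy sketch below splits any region whose intensity variance exceeds a threshold, so the number of regions follows local image complexity. It is a simplification under assumed names and a crude splitting rule, not the CDS algorithm from the thesis.

```python
import numpy as np

def split_on_variation(image, labels, var_threshold=0.01, max_iter=10):
    """Toy content-driven splitting: regions with high intensity variance
    are divided along their longer spatial extent (illustrative only)."""
    for _ in range(max_iter):
        changed = False
        for region_id in np.unique(labels):
            mask = labels == region_id
            ys, xs = np.nonzero(mask)
            if image[mask].var() > var_threshold and ys.size > 4:
                # Split the region in two along its longer spatial extent.
                half = ys < np.median(ys) if np.ptp(ys) >= np.ptp(xs) else xs < np.median(xs)
                if not half.any() or half.all():
                    continue  # degenerate region, cannot split further
                labels[ys[half], xs[half]] = labels.max() + 1
                changed = True
        if not changed:
            break
    return labels

labels = np.zeros((32, 32), dtype=int)         # start from one seed region
img = np.zeros((32, 32)); img[:, 16:] = 1.0    # two flat halves
print(len(np.unique(split_on_variation(img, labels))))  # more than one region
```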

    Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone, and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions in response to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and convolutional networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios. (Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and Background Signatures I.)
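    Early fusion of the RGB frames with the 25-band VIS-NIR cube can be pictured as simply stacking the spectral channels at the network input. The PyTorch sketch below is a minimal fully convolutional per-pixel classifier in that spirit; layer sizes and names are assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class MultispectralSegNet(nn.Module):
    """Minimal per-pixel classifier with early fusion of RGB + VIS-NIR bands
    (illustrative channel counts and layer sizes)."""

    def __init__(self, n_bands=3 + 25, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution produces a per-pixel class score map.
        self.classifier = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, rgb, vis_nir):
        x = torch.cat([rgb, vis_nir], dim=1)  # stack spectral channels
        return self.classifier(self.features(x))

# Example: batch of 2 images, 64x64 pixels.
logits = MultispectralSegNet()(torch.rand(2, 3, 64, 64), torch.rand(2, 25, 64, 64))
print(logits.shape)  # torch.Size([2, 8, 64, 64])
```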

    Single image defocus estimation by modified gaussian function

    © 2019 John Wiley & Sons, Ltd. This article presents an algorithm to estimate defocus blur from a single image. Most existing methods estimate defocus blur at edge locations, which involves a reblurring step. For this purpose, existing methods use the traditional Gaussian function in the reblurring phase, but the traditional Gaussian kernel is sensitive to edges and can cause loss of edge information; hence there is a greater chance of missing spatially varying blur at edge locations. We offer repeated averaging filters as an alternative to the traditional Gaussian function, which is more effective and estimates the spatially varying defocus blur at edge locations. Using repeated averaging filters, a sparse blur map is computed. The sparse map is then propagated by integrating superpixel segmentation and transductive inference to estimate the full defocus blur map. Our method of repeated averaging filters needs less computation time for defocus blur map estimation and gives better visual estimates of the final recovered defocus map. Moreover, it surpasses many previous state-of-the-art systems in quantitative analysis.
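    The key substitution, repeated averaging (box) filters in place of a single Gaussian reblurring kernel, rests on the fact that iterated box filtering approaches a Gaussian blur. A minimal sketch, assuming SciPy and illustrative parameter names:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def repeated_average(image, size=3, repeats=3):
    """Repeated box (averaging) filtering; by the central limit theorem the
    result approaches a Gaussian blur, so it can serve as a reblurring step."""
    out = image.astype(float)
    for _ in range(repeats):
        out = uniform_filter(out, size=size)
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
approx = repeated_average(img, size=3, repeats=3)
# Equivalent Gaussian sigma for n repeats of a box of width w: sigma^2 ~ n*(w^2 - 1)/12
sigma = np.sqrt(3 * (3**2 - 1) / 12.0)
print(np.abs(approx - gaussian_filter(img, sigma)).mean())  # small discrepancy
```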

    Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera

    We propose and demonstrate experimentally a new method based on spatial entanglement for the absolute calibration of an analog detector. The idea consists in measuring the sub-shot-noise intensity correlation between two branches of parametric down-conversion, containing many pairwise-correlated spatial modes. We calibrate a scientific CCD camera, and a preliminary evaluation of the statistical uncertainty indicates the metrological interest of the method.
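    For intuition, the calibration can be reduced to one ratio: for balanced twin beams the noise reduction factor Var(N1 - N2) / (⟨N1⟩ + ⟨N2⟩) equals 1 - η, so the detection efficiency η follows without any reference standard. The toy simulation below only checks this relation numerically; it ignores background, electronic noise, and mode-matching corrections that the real measurement must handle, and all names are illustrative.

```python
import numpy as np

def estimate_efficiency(n1, n2):
    """Efficiency from the noise reduction factor of twin-beam counts
    (sketch; n1, n2 are per-frame photon-number estimates for the two
    correlated regions)."""
    sigma = np.var(n1 - n2) / (np.mean(n1) + np.mean(n2))
    return 1.0 - sigma

# Toy example with simulated pairwise-correlated counts at eta = 0.6.
rng = np.random.default_rng(1)
pairs = rng.poisson(1000, size=100_000)   # photon pairs per frame
n1 = rng.binomial(pairs, 0.6)             # detected in arm 1
n2 = rng.binomial(pairs, 0.6)             # detected in arm 2
print(estimate_efficiency(n1, n2))        # approximately 0.6
```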

    Automated Retinal Lesion Detection via Image Saliency Analysis

    Background and objective: The detection of abnormalities such as lesions or leakage from retinal images is an important health informatics task for the automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concept of saliency. Methods: Retinal images are first segmented into superpixels, and two new saliency feature representations, uniqueness and compactness, are derived to represent the superpixels. Pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disc, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at pixel level from different modalities of retinal images, without the need to tune parameters. Results: To evaluate its effectiveness, we have applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudates, hemorrhages, microaneurysms, and leakage. The evaluation was undertaken at pixel level, lesion level, or image level according to the availability of ground truth in these datasets. Conclusions: The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.
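    The low-rank step can be illustrated with a much simpler stand-in than the solver the paper relies on: approximate the superpixel feature matrix by a truncated SVD and treat each superpixel's residual energy as its saliency. The sketch below is that simplification, with assumed names, not the authors' method.

```python
import numpy as np

def low_rank_saliency(features, rank=2):
    """Simplified low-rank analysis for saliency: the low-rank part models the
    (redundant) background, the residual flags salient superpixels.
    `features` has one row per superpixel."""
    F = features - features.mean(axis=0)                # centre the features
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank part
    residual = F - background                           # sparse / salient part
    saliency = np.linalg.norm(residual, axis=1)
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```

    In practice a robust PCA style low-rank plus sparse decomposition would replace the truncated SVD, but the reading of the residual as per-superpixel saliency is the same.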