
    A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

    Purpose: To develop a deep learning approach to de-noise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to de-noise the single-frame B-scans. The performance of the de-noising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index metric (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
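
    As a rough illustration of the training setup described above (not the authors' custom network), the sketch below pairs "clean" B-scans with synthetically noised copies and trains a small residual CNN with an MSE loss. The architecture, noise level, and tensor shapes are assumptions made for the example.

```python
# Minimal sketch, assuming a small residual CNN and synthetic Gaussian noise;
# this is not the paper's custom network. Shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the noise and subtract it from the input (residual learning).
        return x - self.net(x)

model = DenoiseCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a batch of multi-frame ("clean") B-scans, normalized to [0, 1].
clean = torch.rand(8, 1, 256, 384)
noisy = clean + 0.1 * torch.randn_like(clean)   # clean B-scans + Gaussian noise

for step in range(10):                          # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```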

    Novel image enhancement technique using shunting inhibitory cellular neural networks

    This paper describes a method for improving image quality in a color CMOS image sensor. The technique simultaneously compresses the dynamic range, reorganizes the signal to improve visibility, suppresses noise, identifies local features, achieves color constancy, and improves lightness rendition. An efficient hardware architecture and a rigorous analysis of the different modules are presented to achieve a high-quality CMOS digital camera.
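
    The paper targets a hardware architecture, but the core shunting-inhibition idea can be sketched in software as a divisive normalization of each pixel by its local surround. The kernel size and constant below are arbitrary assumptions, not the paper's design.

```python
# Illustrative software approximation of shunting-inhibition-style enhancement:
# each pixel is divisively normalized by its local surround, which compresses
# dynamic range and boosts local contrast. Parameters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def shunting_enhance(image, surround_size=15, a=0.1):
    """Compress dynamic range by dividing each pixel by its local mean."""
    img = image.astype(np.float64)
    img /= img.max() + 1e-12                      # normalize to [0, 1]
    surround = uniform_filter(img, size=surround_size)
    out = img / (a + surround)                    # shunting (divisive) inhibition
    return out / (out.max() + 1e-12)              # rescale for display

# Example: enhance a synthetic low-contrast image.
test = np.outer(np.linspace(0.2, 0.4, 128), np.ones(128))
enhanced = shunting_enhance(test)
```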

    A Non-Reference Evaluation of Underwater Image Enhancement Methods Using a New Underwater Image Dataset

    The rise of vision-based environmental, marine, and oceanic exploration research highlights the need for underwater image enhancement techniques that help mitigate water effects on images such as blurriness, low color contrast, and poor quality. This paper presents an evaluation of common underwater image enhancement techniques using a new underwater image dataset. The collected dataset comprises 100 images of aquatic plants taken at shallow depths of up to three meters at three different locations in Lake Superior, USA, via a Remotely Operated Vehicle (ROV) equipped with a high-definition RGB camera. In particular, we use our dataset to benchmark nine state-of-the-art image enhancement models at three different depths using a set of common non-reference image quality evaluation metrics. We then provide a comparative analysis of the performance of the selected models at different depths and highlight the most prevalent ones. The obtained results show that the selected image enhancement models are capable of producing considerably better-quality images, with some models performing better than others at certain depths.
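
    The abstract does not list the exact metrics used, so the sketch below shows one widely used non-reference underwater quality measure (an approximate UCIQE, with the weights commonly quoted in the literature) and the general shape of a benchmarking loop. The "enhancement methods" shown are placeholders, not the nine models evaluated in the paper.

```python
# Hedged sketch of a no-reference benchmark loop. The UCIQE approximation below
# combines chroma spread, luminance contrast, and mean saturation in CIELab;
# weights are the values usually quoted in the literature, and the enhancement
# functions are placeholder assumptions.
import numpy as np
from skimage import color

def uciqe(rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    """Approximate UCIQE: weighted sum of chroma std, luminance contrast, mean saturation."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()
    con_l = np.percentile(L, 99) - np.percentile(L, 1)   # luminance contrast
    saturation = chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-12)
    return c1 * sigma_c + c2 * con_l + c3 * saturation.mean()

# Placeholder "enhancement methods" to show how a ranking loop would look.
methods = {"identity": lambda im: im,
           "gamma_0.7": lambda im: np.clip(im, 0, 1) ** 0.7}
image = np.random.rand(240, 320, 3)                      # stand-in for a dataset image
scores = {name: uciqe(fn(image)) for name, fn in methods.items()}
```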

    Image enhancement for underwater mining applications

    The exploration of water bodies, from the sea to land-filled water spaces, has seen a continuous increase with new technologies such as robotics. Underwater imagery is one of the main sensor resources used, but it suffers from added problems caused by the environment. Multiple methods and techniques have been developed to correct color, recover poor-quality images, and enhance features. In this thesis work, we present an Image Cleaning and Enhancement Technique that performs color correction on images using the Dark Channel Prior (DCP) and then converts the corrected images into the Long, Medium and Short (LMS) color space, as this is the space in which the human eye perceives colour. This work is being developed at LSA (Laboratório de Sistemas Autónomos), a robotics and autonomous systems laboratory. Our objective is to improve the quality of images for, and taken by, robots, with particular emphasis on underwater flooded mines. This thesis describes the architecture and the developed solution. A comparative analysis of our proposed solution against state-of-the-art methods is presented. Results from missions performed by the robot in operational mine scenarios are presented and discussed, allowing for characterization and validation of the solution.
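
    As a minimal sketch of the two building blocks named above, the code below computes a dark-channel map and converts an RGB frame to LMS via CIE XYZ. The patch size and the Hunt-Pointer-Estevez matrix are assumptions; the thesis' full pipeline is not reproduced here.

```python
# Sketch only: a dark-channel map (the core quantity behind DCP-based correction)
# and an RGB -> LMS conversion. Patch size and the Hunt-Pointer-Estevez matrix
# used for XYZ -> LMS are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter
from skimage import color

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(rgb.min(axis=2), size=patch)

# Hunt-Pointer-Estevez matrix (one common choice for XYZ -> LMS).
XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                       [-0.2263, 1.1653,  0.0457],
                       [ 0.0,    0.0,     0.9182]])

def rgb_to_lms(rgb):
    xyz = color.rgb2xyz(rgb)                 # sRGB -> CIE XYZ
    return xyz @ XYZ_TO_LMS.T                # XYZ -> LMS (von Kries / HPE)

image = np.random.rand(120, 160, 3)          # stand-in for an underwater frame
dc = dark_channel(image)
lms = rgb_to_lms(image)
```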

    Enhancement of Single and Composite Images Based on Contourlet Transform Approach

    Image enhancement is an imperative step in almost every image processing algorithm. Numerous image enhancement algorithms have been developed for grayscale images, even though grayscale images are largely absent from many recent applications. This thesis proposes new image enhancement techniques for 8-bit single and composite digital color images. Recently, it has become evident that wavelet transforms are not necessarily best suited for images. Therefore, the enhancement approaches are based on a new 'true' two-dimensional transform called the contourlet transform. The proposed enhancement techniques discussed in this thesis are developed based on an understanding of the working mechanisms of the new multiresolution property of the contourlet transform. This research also investigates the effects of using different color space representations for color image enhancement applications. Based on this investigation, an optimal color space is selected for both the single-image and composite-image enhancement approaches. The objective evaluation shows that the new enhancement method is superior not only to the commonly used transform-based methods (e.g. the wavelet transform) but also to various spatial models (e.g. histogram equalization). The results are encouraging, and the enhancement algorithms have proved to be more robust and reliable.
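
    Contourlet transforms have no standard Python implementation, so the sketch below illustrates only the general multiresolution-enhancement principle: apply a nonlinear gain to significant detail-subband coefficients before reconstruction. A wavelet decomposition (PyWavelets) stands in for the contourlet transform, and the gain and threshold are arbitrary assumptions.

```python
# Sketch of the multiresolution-enhancement idea only: boost significant detail
# coefficients before reconstruction. A wavelet decomposition is used as a
# stand-in for the contourlet transform; parameters are assumptions.
import numpy as np
import pywt

def enhance_subbands(image, wavelet="db2", level=3, gain=1.8, threshold=0.05):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    enhanced = [coeffs[0]]                               # keep approximation band
    for detail in coeffs[1:]:
        boosted = tuple(np.where(np.abs(d) > threshold, gain * d, d)
                        for d in detail)                 # amplify significant details
        enhanced.append(boosted)
    return pywt.waverec2(enhanced, wavelet)

image = np.random.rand(128, 128)                         # stand-in normalized image
sharper = enhance_subbands(image)
```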

    Autonomous Grasping Using Novel Distance Estimator

    This paper introduces a novel distance estimator using monocular vision for autonomous underwater grasping. The presented method is also applicable to topside grasping operations. The estimator is developed for robot manipulators with a monocular camera placed near the gripper. The fact that the camera is attached near the gripper makes it possible to design a method for capturing images from different positions, as the relative position change can be measured. The presented system can estimate relative distance to an object of unknown size with good precision. The manipulator applied in the presented work is the SeaArm-2, a fully electric underwater small modular manipulator. The manipulator is unique in its integrated monocular camera in the end-effector module, and its design facilitates the use of different end-effector tools. The camera is used for supervision, object detection, and tracking. The distance estimator was validated in a laboratory setting through autonomous grasping experiments. The manipulator was able to search for and find, estimate the relative distance of, grasp, and retrieve the relevant object in 12 out of 12 trials.
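
    The paper's estimator is not reproduced here, but the underlying size-ratio principle can be sketched: under a pinhole model the apparent size of an object scales inversely with distance, so two observations separated by a known camera displacement determine the distance to an object of unknown size.

```python
# Hedged sketch of the size-ratio principle behind such an estimator (not the
# paper's exact algorithm): apparent size s is proportional to 1/distance, so
# s1 * d = s2 * (d - dz)  =>  d = s2 * dz / (s2 - s1).
def distance_from_size_change(size_before_px, size_after_px, displacement_m):
    """Distance (m) to the object at the first observation.

    size_before_px / size_after_px: apparent object size (e.g. bounding-box width
    in pixels) before and after moving `displacement_m` metres toward the object.
    """
    if size_after_px <= size_before_px:
        raise ValueError("object should appear larger after moving toward it")
    return size_after_px * displacement_m / (size_after_px - size_before_px)

# Example: a tracked box grows from 80 px to 100 px after a 0.10 m approach,
# giving an estimated initial distance of 0.50 m.
d0 = distance_from_size_change(80, 100, 0.10)
```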

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task.