6,996 research outputs found

    A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement

    Low-light images are not conducive to human observation or computer vision algorithms due to their low visibility. Although many image enhancement techniques have been proposed to address this problem, existing methods inevitably introduce contrast under- and over-enhancement. Inspired by the human visual system, we design a multi-exposure fusion framework for low-light image enhancement. Based on this framework, we propose a dual-exposure fusion algorithm that provides accurate contrast and lightness enhancement. Specifically, we first design the weight matrix for image fusion using illumination estimation techniques. Then we introduce our camera response model to synthesize multi-exposure images. Next, we find the best exposure ratio so that the synthetic image is well-exposed in the regions where the original image is under-exposed. Finally, the enhanced result is obtained by fusing the input image and the synthetic image according to the weight matrix. Experiments show that our method obtains results with less contrast and lightness distortion than several state-of-the-art methods. Comment: Project website: https://baidut.github.io/BIMEF
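    The pipeline above reduces to four steps: estimate illumination, derive a weight matrix, synthesize a second exposure via a camera response model, and fuse. Below is a minimal sketch of that flow, assuming a smoothed max-channel illumination estimate and beta-gamma response parameters; the authors' exact estimator, weight design and ratio search differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination(img, sigma=5):
    # Rough illumination map: max over RGB channels, lightly smoothed.
    return gaussian_filter(img.max(axis=2), sigma)

def camera_response(img, k, a=-0.3293, b=1.1258):
    # Beta-gamma camera response model: simulates re-exposing img by ratio k.
    g = k ** a
    return np.clip(img ** g * np.exp(b * (1.0 - g)), 0.0, 1.0)

def dual_exposure_fusion(img, mu=0.5):
    # img: float RGB image in [0, 1].
    t = np.clip(illumination(img), 1e-3, 1.0)
    W = (t ** mu)[..., None]                   # weight matrix: trust well-lit pixels
    dark = t <= np.quantile(t, 0.3)            # roughly the under-exposed region
    # Search for the exposure ratio that best exposes the dark region.
    ratios = np.linspace(1.0, 8.0, 15)
    k = min(ratios, key=lambda r: abs(camera_response(img, r)[dark].mean() - 0.5))
    synth = camera_response(img, k)
    return W * img + (1.0 - W) * synth         # fuse input and synthetic exposure
```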

    Noise in Structured-Light Stereo Depth Cameras: Modeling and its Applications

    Depth maps obtained from commercially available structured-light stereo depth cameras, such as the Kinect, are easy to use but are affected by significant amounts of noise. This paper is devoted to a study of the intrinsic noise characteristics of such depth maps; in particular, the standard deviation of the noise in the estimated depth varies quadratically with the distance of the object from the depth camera. We validate this theoretical model against empirical observations and demonstrate the utility of the noise model in three popular applications: depth map denoising, volumetric scan merging for 3D modeling, and identification of 3D planes in depth maps.
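    The quadratic law follows from triangulation: depth is inversely proportional to disparity, so a constant disparity noise maps to a depth noise that grows as the square of the distance. Below is a minimal sketch of how such a model can drive one of the listed applications, precision-weighted merging of repeated depth frames; the coefficient value and the fusion rule are illustrative assumptions, not the paper's.

```python
import numpy as np

def depth_noise_std(z, k=1.4e-3):
    # Quadratic noise model: std of the depth error grows as k * z**2
    # (z in metres; k is an illustrative, camera-dependent constant).
    return k * z ** 2

def fuse_depth(depth_frames):
    # Precision-weighted fusion of repeated depth measurements of a static
    # scene: each sample is weighted by 1 / sigma(z)**2, so nearby (reliable)
    # readings dominate and distant (noisy) readings are down-weighted.
    depths = np.asarray(depth_frames, dtype=float)   # (n_frames, H, W)
    w = 1.0 / (depth_noise_std(depths) ** 2 + 1e-12)
    return (w * depths).sum(axis=0) / w.sum(axis=0)
```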

    Single Image Dehazing through Improved Atmospheric Light Estimation

    Image contrast enhancement for outdoor vision is important for smart car auxiliary transport systems. Video frames captured in poor weather conditions are often characterized by poor visibility. Most image dehazing algorithms rely on hard-threshold assumptions or on user input to estimate atmospheric light. However, the brightest pixels are sometimes objects such as car lights or streetlights, especially in smart car auxiliary transport scenes, and simply applying a hard threshold may produce a wrong estimate. In this paper, we propose an optimized single-image dehazing method that estimates atmospheric light efficiently and removes haze through a semi-globally adaptive filter. The enhanced images exhibit little noise and good exposure in dark regions, and the textures and edges of the processed images are also enhanced significantly. Comment: Multimedia Tools and Applications (2015)
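    To see why a hard intensity threshold fails and what a more robust estimate looks like, here is a sketch of the classic dark-channel alternative: candidate pixels are the haziest ones by dark-channel value, not simply the brightest. This is the standard He et al. style estimator used as a stand-in, not this paper's semi-globally adaptive filter.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum filter over a patch.
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, patch=15, frac=0.001):
    # Average the colours of the top `frac` haziest pixels (by dark channel).
    # Headlights and streetlights are small, so the patch-wise minimum around
    # them stays low and they drop out of the candidate set.
    dc = dark_channel(img, patch)
    n = max(1, int(frac * dc.size))
    idx = np.argpartition(dc.ravel(), -n)[-n:]   # indices of haziest pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)  # estimated airlight (RGB)
```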

    Enabling Pedestrian Safety using Computer Vision Techniques: A Case Study of the 2018 Uber Inc. Self-driving Car Crash

    Human lives are important. The decision to allow self-driving vehicles to operate on our roads carries great weight and has been a hot topic of debate among policy-makers, technologists and public-safety institutions. The recent Uber Inc. self-driving car crash, which resulted in the death of a pedestrian, has strengthened the argument that autonomous vehicle technology is still not ready for deployment on public roads. In this work, we analyze the Uber car crash and shed light on the question, "Could the Uber car crash have been avoided?". We apply state-of-the-art computer vision models to this highly practical scenario. More generally, our experimental results are an evaluation of various image enhancement and object recognition techniques for enabling pedestrian safety in low-lighting conditions, using the Uber crash as a case study. Comment: 10 pages, 8 figures, 3 tables
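    The evaluation couples two stages: enhance the low-light frame, then run a pedestrian detector on the result. A toy version of that pipeline follows, using gamma correction and OpenCV's stock HOG people detector rather than the state-of-the-art models evaluated in the paper; the input file name is hypothetical.

```python
import cv2
import numpy as np

def gamma_enhance(frame, gamma=2.2):
    # Simple low-light enhancement: gamma correction via a lookup table.
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return cv2.LUT(frame, lut)

# Classic HOG + linear-SVM pedestrian detector bundled with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("night_frame.png")            # hypothetical dashcam frame
boxes, _ = hog.detectMultiScale(gamma_enhance(frame), winStride=(8, 8))
for (x, y, w, h) in boxes:                       # draw detections
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```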

    Natural Color Image Enhancement based on Modified Multiscale Retinex Algorithm and Performance Evaluation using Wavelet Energy

    This paper presents a new color image enhancement technique based on a modified Multiscale Retinex (MSR) algorithm, and the visual quality of the enhanced images is evaluated using a new metric, namely wavelet energy. The color image enhancement is achieved by downsampling the value component of the HSV color space converted image into three scales (normal, medium and fine) following a contrast stretching operation. These downsampled value components are enhanced using the MSR algorithm. The value component is then reconstructed by averaging each pixel of the lower-scale image with that of the upper-scale image after upsampling the lower-scale image. This process replaces dark pixels with the average of the lower-scale and upper-scale pixels while retaining the bright pixels. The quality of the images reconstructed by the proposed method is found to be good and far better than that of other methods. The performance of the proposed scheme is evaluated using a new wavelet-domain assessment criterion, referred to as wavelet energy, which computes the energy of both the original and enhanced images in the wavelet domain. Both the number of edge details and the wavelet energy are lower in a poor-quality image than in a naturally enhanced image. The experimental results presented confirm that the proposed wavelet-energy-based color image quality assessment technique efficiently characterizes both the local and global details of the enhanced image. Comment: 10 pages, 3 figures, Recent Advances in Intelligent Informatics, Advances in Intelligent Systems and Computing, Volume 235, 2014, pp 83-9
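    The proposed metric is simple to state: decompose the image with a 2-D wavelet transform and sum the energy of its coefficients, which rises with edge detail. Below is a minimal sketch using PyWavelets; summing only the detail subbands is an assumption here, as the paper's exact definition may include other terms.

```python
import numpy as np
import pywt

def wavelet_energy(gray, wavelet="db4", level=3):
    # Sum of squared detail coefficients over all decomposition levels.
    # Sharper, better-enhanced images concentrate more energy in the detail
    # subbands (edges), so a higher score indicates richer detail.
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=level)
    return sum(float((band ** 2).sum())
               for detail in coeffs[1:]      # skip the approximation band
               for band in detail)           # (cH, cV, cD) at each level

# Quality check: the enhanced image should score higher than the original.
# ratio = wavelet_energy(enhanced) / wavelet_energy(original)
```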

    A Brief Survey of Recent Edge-Preserving Smoothing Algorithms on Digital Images

    Edge-preserving filters preserve edges and their information while smoothing an image. In other words, they smooth an image while reducing edge-blurring artifacts such as halos and phantom edges; they are nonlinear in nature. Examples include the bilateral filter, the anisotropic diffusion filter, the guided filter and the trilateral filter. This family of filters is very useful for reducing noise in an image, making it much in demand in computer vision and computational photography applications such as denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer and relighting. This paper provides a concrete introduction to edge-preserving filters, from the heat diffusion equation of earlier eras to recent approaches, an overview of their numerous applications, as well as mathematical analysis and various efficient and optimized implementations and their interrelationships, with a focus on preserving boundaries, spikes and canyons in the presence of noise. Furthermore, it provides a realistic notion of efficient implementation, with research scope for hardware realization for further acceleration. Comment: Manuscript
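    The bilateral filter is the canonical member of this family and exhibits the shared mechanism: a spatial kernel alone would blur across edges, but multiplying it by a range kernel suppresses contributions from pixels whose intensity differs too much. A direct (unoptimized) sketch for a grayscale image in [0, 1]:

```python
import numpy as np

def bilateral_filter(gray, radius=4, sigma_s=3.0, sigma_r=0.1):
    # Each output pixel is a weighted mean of its neighbourhood; the weight
    # combines a spatial Gaussian (closeness) with a range Gaussian
    # (intensity similarity). Neighbours across an edge differ in intensity,
    # receive near-zero range weight, and therefore do not blur the edge.
    H, W = gray.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(gray, radius, mode="reflect")
    out = np.zeros_like(gray, dtype=float)
    norm = np.zeros_like(gray, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = spatial[dy + radius, dx + radius] \
                * np.exp(-(nb - gray) ** 2 / (2 * sigma_r ** 2))
            out += w * nb
            norm += w
    return out / norm
```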

    A Unified Framework for Multi-Sensor HDR Video Reconstruction

    One of the most successful approaches to modern high-quality HDR video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have treated debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting local polynomial approximations to the observed sensor data. The method is easy to implement and allows reconstruction at an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4-Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
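    At the core of any HDR assembly from raw data is noise-aware weighting: each sensor's sample estimates the same radiance with a variance given by the camera noise model, and samples are combined by inverse variance. Below is a per-pixel sketch of that fusion under an assumed Poisson-plus-Gaussian model; the paper instead fits local polynomial approximations, which also handle debayering and alignment, so this shows only the weighting idea.

```python
import numpy as np

def merge_hdr(raws, exposures, gain=1.0, read_noise=3.0, sat=4000.0):
    # Inverse-variance merge of linear raw frames (e.g. 12-bit digital numbers).
    # A raw value y captured at exposure time t estimates radiance x = y / t;
    # under a Poisson + Gaussian model, Var[x] = (gain * y + read_noise**2) / t**2.
    # Short exposures therefore get little weight in dark areas, and samples
    # at or above the saturation level `sat` are excluded entirely.
    est, weight = 0.0, 0.0
    for y, t in zip(raws, exposures):
        var = (gain * y + read_noise ** 2) / t ** 2
        w = np.where(y < sat, 1.0 / var, 0.0)
        est = est + w * (y / t)
        weight = weight + w
    return est / np.maximum(weight, 1e-12)
```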

    Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion

    We propose a novel method for adjusting luminance for multi-exposure image fusion. For the adjustment, two novel scene segmentation approaches based on the luminance distribution are also proposed. Multi-exposure image fusion produces images that are expected to be more informative and perceptually appealing than any of the inputs, by directly fusing photos taken with different exposures. However, existing fusion methods often produce unclear fused images when the input images do not have a sufficient number of different exposure levels. In this paper, we point out that adjusting the luminance of the input images makes it possible to improve the quality of the final fused images. This insight is the basis of the proposed method, which enables us to produce high-quality images even when undesirable inputs are given. Visual comparisons show that the proposed method can produce images that clearly represent a whole scene. In addition, multi-exposure image fusion with the proposed method outperforms state-of-the-art fusion methods in terms of MEF-SSIM, discrete entropy, tone mapped image quality index, and statistical naturalness. Comment: will be published in IEEE Transactions on Image Processing
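    The key move is to give the fusion stage inputs at genuinely different exposure levels: segment the scene by its luminance distribution, then rescale each input so that one region becomes well exposed. Below is a crude sketch of that idea, assuming quantile-based segmentation and a middle-grey target of 0.18; the paper's two segmentation approaches and its adjustment rule are more elaborate.

```python
import numpy as np

def segment_by_luminance(lum, n_regions):
    # Label each pixel from dark to bright using luminance quantiles.
    edges = np.quantile(lum, np.linspace(0.0, 1.0, n_regions + 1))[1:-1]
    return np.digitize(lum, edges)               # labels in 0 .. n_regions-1

def adjust_exposures(images, target=0.18):
    # Assign one luminance region to each input and rescale that input so the
    # region's mean hits middle grey, spreading the inputs over distinct
    # exposure levels before any standard multi-exposure fusion.
    images = np.asarray(images, dtype=float)     # (N, H, W, 3) in [0, 1]
    lum = images.mean(axis=(0, 3))               # scene luminance map (H, W)
    labels = segment_by_luminance(lum, len(images))
    adjusted = []
    for i, img in enumerate(images):
        mask = labels == i
        m = img[mask].mean() if mask.any() else img.mean()
        adjusted.append(np.clip(img * (target / max(m, 1e-6)), 0.0, 1.0))
    return adjusted
```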

    Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions

    Underwater image enhancement is an important low-level vision task with many applications, and numerous algorithms have been proposed in recent years. These algorithms, developed upon various assumptions, demonstrate success in various respects using different data sets and different metrics. In this work, we set up an undersea image capturing system and construct a large-scale Real-world Underwater Image Enhancement (RUIE) data set divided into three subsets. The three subsets target three challenging aspects of enhancement: image visibility quality, color casts, and higher-level detection/classification, respectively. We conduct extensive and systematic experiments on RUIE to evaluate the effectiveness and limitations of various algorithms for enhancing visibility and correcting color casts on images with hierarchical categories of degradation. Moreover, underwater image enhancement in practice usually serves as a preprocessing step for mid-level and high-level vision tasks. We thus exploit the object detection performance on enhanced images as a brand-new task-specific evaluation criterion. The findings from these evaluations not only confirm what is commonly believed, but also suggest promising solutions and new directions for visibility enhancement, color correction, and object detection on real-world underwater images. Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors

    Towards Real-Time Advancement of Underwater Visual Quality with GAN

    Low visual quality has prevented underwater robotic vision from being used in a wide range of applications. Although several algorithms have been developed, real-time and adaptive methods remain deficient for real-world tasks. In this paper, we address this difficulty using generative adversarial networks (GANs) and propose a GAN-based restoration scheme (GAN-RS). In particular, we develop a multi-branch discriminator, comprising an adversarial branch and a critic branch, for the purpose of simultaneously preserving image content and removing underwater noise. In addition to adversarial learning, a novel dark channel prior loss promotes the generator to produce realistic images. More specifically, an underwater index is investigated to describe underwater properties, and a loss function based on the underwater index is designed to train the critic branch for underwater noise suppression. Through extensive comparisons of visual quality and feature restoration, we confirm the superiority of the proposed approach. Consequently, GAN-RS can adaptively improve underwater visual quality in real time and achieve an overall superior restoration performance. Finally, a real-world experiment is conducted on the seabed for grasping marine products, and the results are quite promising. The source code is publicly available at https://github.com/SeanChenxy/GAN_RS
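    The dark channel prior loss mentioned above is straightforward to realize: haze-free natural images have a dark channel close to zero, so the generator is penalized when the dark channel of its output is large. A minimal PyTorch sketch follows; the weighting and exact formulation in GAN-RS may differ.

```python
import torch
import torch.nn.functional as F

def dark_channel(img, patch=15):
    # img: (N, 3, H, W) in [0, 1]. Dark channel = local minimum over a patch
    # of the per-pixel minimum across colour channels.
    per_pixel_min = img.min(dim=1, keepdim=True).values
    # A minimum filter, implemented as max-pooling of the negated image.
    return -F.max_pool2d(-per_pixel_min, patch, stride=1, padding=patch // 2)

def dark_channel_prior_loss(restored):
    # Penalize the mean dark-channel intensity of the generator output.
    return dark_channel(restored).mean()

# Hypothetical use inside a GAN training step:
# g_loss = adv_loss + lambda_dcp * dark_channel_prior_loss(generator(noisy))
```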