393 research outputs found

    Removing Atmospheric Noise Using Channel Selective Processing For Visual Correction

    Get PDF
    In this paper, we propose an effective image fog removal technique combined with a color stabilization step, forming a two-level image restoration process evaluated in the HSI (Hue, Saturation, Intensity) color space. The approach extracts suppressed pixels from an RGB image affected by smoke, steam, or fog, which acts as a form of white Gaussian noise. We observe that most images captured in foggy conditions contain some pixels with low luminance values in every color channel of the RGB image. Using this observation, we can directly estimate the effective density of the fog and recover the most affected regions of the image. The effective luminance, a form of intensity, also provides an estimate of light scattering; combining the Laplacian of this luminance with the suppressed pixel values yields a basic map of light spread, which is then used to restore intensity. The transmission between the calculated fog values in the image gives an estimate of the local transition between intensity and color values. This factor supports the color restoration of the affected image and guides the proper recovery of the image once dense fog particles are removed. After fog removal, we restore the color balance of the image using an auto color-contrast stabilization technique, completing the two-level restoration method. Visibility depends strongly on adequate, but not excessive, color saturation, which accounts for the image quality improvement. To evaluate the effectiveness in depth, we also introduce an HSI mapping of the images, which shows the true restoration of intensity and saturation in the foggy image. Results on various images demonstrate the power of the proposed algorithm. To measure its efficiency, a visual index is also estimated, which further evaluates the robustness of the proposed algorithm with respect to the Human Visual System (HVS) for the de-fogged images
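
    A minimal sketch of the two-level idea described above, assuming a dark/suppressed-pixel fog estimate in the spirit of the abstract followed by a simple auto contrast stretch as the color stabilization stage; all function names and parameters are illustrative, not the authors' implementation.

    # Level 1: recover scene radiance from a suppressed-pixel fog estimate;
    # Level 2: re-balance color with a per-channel percentile stretch.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_pixels(rgb, patch=15):
        """Per-pixel minimum over color channels and a local patch (the 'suppressed' pixels)."""
        return minimum_filter(rgb.min(axis=2), size=patch)

    def estimate_airlight(rgb, dark, top_fraction=0.001):
        """Average the image colors at the haziest suppressed-pixel locations."""
        flat = dark.ravel()
        n = max(1, int(top_fraction * flat.size))
        idx = np.argpartition(flat, -n)[-n:]
        return rgb.reshape(-1, 3)[idx].mean(axis=0)

    def defog(rgb, patch=15, omega=0.95, t_min=0.1):
        rgb = rgb.astype(np.float64) / 255.0
        dark = dark_pixels(rgb, patch)
        A = estimate_airlight(rgb, dark)
        # Transmission estimated from the suppressed pixels of the airlight-normalized image
        t = 1.0 - omega * dark_pixels(rgb / A, patch)
        t = np.clip(t, t_min, 1.0)[..., None]
        J = (rgb - A) / t + A            # invert the atmospheric scattering model
        return np.clip(J, 0.0, 1.0)

    def auto_contrast(img, low=1, high=99):
        """Level 2 stand-in: per-channel percentile stretch for color stabilization."""
        out = np.empty_like(img)
        for c in range(3):
            lo, hi = np.percentile(img[..., c], [low, high])
            out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        return out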

    Digital image enhancement by brightness and contrast manipulation using Verilog hardware description language

    Get PDF
    A foggy environment may cause digitally captured images to appear blurry, dim, or low in contrast. This affects computer vision systems that rely on image information. For applications that need image information in real time, such as a plate number recognition system, a simple yet effective image enhancement algorithm implemented in hardware is required. Hardware implementations for improving images that suffer from low exposure and haze are usually based on complex algorithms. Hence, the aim of this paper is to propose a less complex enhancement algorithm, suitable for hardware implementation, that is able to improve the quality of such images. The proposed method simply combines brightness and contrast manipulation to enhance the image. To assess its performance, a total of 100 vehicle registration number images were collected, enhanced, and evaluated. The results were compared quantitatively and qualitatively against two other enhancement methods: quantitative evaluation uses the peak signal-to-noise ratio and mean-square error metrics, while a survey evaluates the output images qualitatively. Based on the quantitative evaluation results, our proposed method outperforms the other two enhancement methods
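
    The core arithmetic of the proposed enhancement can be sketched in software as below; the paper targets a Verilog hardware pipeline, so the gain and offset values here, along with the PSNR/MSE helpers used for quantitative evaluation, are illustrative assumptions rather than the published design.

    import numpy as np

    def enhance(img, brightness=30, contrast=1.3):
        """Pixel-wise contrast gain followed by a brightness offset, clipped to 8 bits."""
        out = contrast * img.astype(np.float64) + brightness
        return np.clip(out, 0, 255).astype(np.uint8)

    def mse(ref, test):
        return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

    def psnr(ref, test, peak=255.0):
        m = mse(ref, test)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)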

    DEEP LEARNING FOR IMAGE RESTORATION AND ROBOTIC VISION

    Get PDF
    Traditional model-based approach requires the formulation of mathematical model, and the model often has limited performance. The quality of an image may degrade due to a variety of reasons: It could be the context of scene is affected by weather conditions such as haze, rain, and snow; It\u27s also possible that there is some noise generated during image processing/transmission (e.g., artifacts generated during compression.). The goal of image restoration is to restore the image back to desirable quality both subjectively and objectively. Agricultural robotics is gaining interest these days since most agricultural works are lengthy and repetitive. Computer vision is crucial to robots especially the autonomous ones. However, it is challenging to have a precise mathematical model to describe the aforementioned problems. Compared with traditional approach, learning-based approach has an edge since it does not require any model to describe the problem. Moreover, learning-based approach now has the best-in-class performance on most of the vision problems such as image dehazing, super-resolution, and image recognition. In this dissertation, we address the problem of image restoration and robotic vision with deep learning. These two problems are highly related with each other from a unique network architecture perspective: It is essential to select appropriate networks when dealing with different problems. Specifically, we solve the problems of single image dehazing, High Efficiency Video Coding (HEVC) loop filtering and super-resolution, and computer vision for an autonomous robot. Our technical contributions are threefold: First, we propose to reformulate haze as a signal-dependent noise which allows us to uncover it by learning a structural residual. Based on our novel reformulation, we solve dehazing with recursive deep residual network and generative adversarial network which emphasizes on objective and perceptual quality, respectively. Second, we replace traditional filters in HEVC with a Convolutional Neural Network (CNN) filter. We show that our CNN filter could achieve 7% BD-rate saving when compared with traditional filters such as bilateral and deblocking filter. We also propose to incorporate a multi-scale CNN super-resolution module into HEVC. Such post-processing module could improve visual quality under extremely low bandwidth. Third, a transfer learning technique is implemented to support vision and autonomous decision making of a precision pollination robot. Good experimental results are reported with real-world data
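
    A minimal PyTorch sketch of the recursive residual idea described above, in which haze is treated as signal-dependent noise and the network predicts a structural residual that is subtracted from the hazy input; the class name, layer widths, and recursion count are illustrative assumptions, not the dissertation's actual architecture.

    import torch
    import torch.nn as nn

    class ResidualDehazer(nn.Module):
        def __init__(self, channels=64, recursions=5):
            super().__init__()
            self.head = nn.Conv2d(3, channels, 3, padding=1)
            # One shared residual block applied recursively (weight sharing keeps it compact)
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.tail = nn.Conv2d(channels, 3, 3, padding=1)
            self.recursions = recursions

        def forward(self, hazy):
            x = torch.relu(self.head(hazy))
            for _ in range(self.recursions):
                x = x + self.block(x)          # recursive residual refinement
            residual = self.tail(x)            # estimated haze (signal-dependent "noise")
            return hazy - residual             # clean estimate = hazy input minus residual

    # A typical training objective would be an L1 or L2 loss between the output and the clean image.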

    The Cord (September 26, 2012)

    Get PDF

    Multispectral pansharpening with radiative transfer-based detail-injection modeling for preserving changes in vegetation cover

    Get PDF
    Whenever vegetated areas are monitored over time, phenological changes in land cover should be decoupled from changes in acquisition conditions, such as atmospheric components, Sun and satellite heights, and the imaging instrument. This especially holds when the multispectral (MS) bands are sharpened for spatial resolution enhancement by means of a panchromatic (Pan) image of higher resolution, a process referred to as pansharpening. In this paper, we provide evidence that pansharpening of visible/near-infrared (VNIR) bands benefits from a correction of the path radiance term introduced by the atmosphere during the fusion process. This holds whenever the fusion mechanism emulates the radiative transfer model ruling the acquisition of the Earth's surface from space, that is, for methods exploiting a multiplicative, or contrast-based, injection model of spatial details extracted from the Pan image into the interpolated MS bands. The path radiance should be estimated and subtracted from each band before the multiplication by Pan is performed. Both empirical and model-based estimation techniques of MS path radiances are compared within the framework of optimized algorithms. Simulations carried out on two GeoEye-1 observations of the same agricultural landscape on different dates highlight that de-hazing the MS bands before fusion is beneficial to an accurate detection of seasonal changes in the scene, as measured by the normalized difference vegetation index (NDVI)
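
    The multiplicative, contrast-based detail injection with path radiance correction described above can be sketched as follows; the helper names, the box-filter low-pass used to degrade Pan, and the choice to restore the path radiance offset after injection are assumptions for illustration, not the paper's exact algorithm.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def pansharpen_contrast(ms_interp, pan, path_radiance, ratio=4):
        """ms_interp: (H, W, B) MS bands interpolated to the Pan grid;
        pan: (H, W) panchromatic band; path_radiance: length-B per-band haze estimates."""
        pan_low = uniform_filter(pan, size=ratio)        # Pan degraded to the MS scale
        detail = pan / np.maximum(pan_low, 1e-6)          # multiplicative detail ratio
        # Subtract the path radiance before injection, then restore the offset
        sharp = (ms_interp - path_radiance) * detail[..., None] + path_radiance
        return sharp

    def ndvi(nir, red):
        """Normalized difference vegetation index used to measure seasonal change."""
        return (nir - red) / np.maximum(nir + red, 1e-6)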