
    Design of Novel Algorithm and Architecture for Gaussian Based Color Image Enhancement System for Real Time Applications

    This paper presents the development of a new algorithm for a Gaussian-based color image enhancement system. The algorithm has been designed into an architecture suitable for FPGA/ASIC implementation. Color image enhancement is achieved by first convolving the original image with a Gaussian kernel, since the Gaussian distribution is a point spread function that smooths the image. Logarithm-domain processing and gain/offset corrections are then applied to enhance the pixels and translate them into the display range of 0 to 255. The proposed algorithm not only provides better dynamic range compression and color rendition but also achieves color constancy in an image. The design exploits a high degree of pipelining and parallel processing to achieve real-time performance. It has been realized in RTL-compliant Verilog and fits into a single FPGA with a gate-count utilization of 321,804. The proposed method is implemented on a Xilinx Virtex-II Pro XC2VP40-7FF1148 FPGA device and is capable of processing high-resolution color motion pictures of sizes up to 1600x1200 pixels at a real-time video rate of 116 frames per second. This shows that the proposed design works not only for still images but also for high-resolution video sequences.
    Comment: 15 pages, 15 figures
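    The processing chain the abstract describes (Gaussian convolution, log-domain processing, gain/offset mapping into 0–255) can be sketched in software. This is a minimal illustrative sketch, not the paper's hardware design; the function name, `sigma` value, and the min/max gain/offset rule are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=15.0):
    """Gaussian-based enhancement sketch: log-domain ratio of the image
    to its Gaussian-smoothed surround, then a gain/offset correction
    that maps the response into the display range 0..255."""
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    surround = gaussian_filter(img, sigma=sigma)  # Gaussian convolution
    r = np.log(img) - np.log(surround)            # log-domain processing
    lo, hi = r.min(), r.max()
    out = (r - lo) / (hi - lo + 1e-12) * 255.0    # gain/offset correction
    return out.astype(np.uint8)
```

In the paper this pipeline is pipelined and parallelized in Verilog; the sketch only shows the per-pixel arithmetic.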

    Image enhancement using fuzzy intensity measure and adaptive clipping histogram equalization

    Image enhancement aims at processing an input image so that the visual content of the output image is more pleasing or more useful for certain applications. Although histogram equalization is widely used in image enhancement due to its simplicity and effectiveness, it changes the mean brightness of the enhanced image and introduces a high level of noise and distortion. To address these problems, this paper proposes image enhancement using fuzzy intensity measure and adaptive clipping histogram equalization (FIMHE). FIMHE uses a fuzzy intensity measure to first segment the histogram of the original image, and then clips the histogram adaptively in order to prevent excessive image enhancement. Experiments on the Berkeley database and the CVF-UGR-Image database show that FIMHE outperforms state-of-the-art histogram-equalization-based methods.
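    The clipping idea the abstract relies on can be illustrated with plain histogram equalization plus a clip-and-redistribute step. This is a simplified stand-in: FIMHE's fuzzy segmentation and its adaptive choice of clip limit are omitted, and the fixed `clip_ratio` is an assumption.

```python
import numpy as np

def clipped_hist_eq(image, clip_ratio=2.0):
    """Histogram equalization with a clipped histogram. Counts above the
    clip limit are redistributed uniformly over all bins, which limits
    contrast over-stretching and the resulting noise amplification."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    clip = clip_ratio * hist.mean()                  # clip limit
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess / 256.0   # redistribute excess
    cdf = np.cumsum(hist)
    lut = np.round(cdf / cdf[-1] * 255.0).astype(np.uint8)
    return lut[image]                                # apply mapping
```

With `clip_ratio` large this reduces to ordinary histogram equalization; smaller values keep the output brightness closer to the input, which is the motivation the abstract gives for clipping.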

    Method and apparatus for predicting the direction of movement in machine vision

    A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales, analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
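    The paired even- and odd-symmetric filters the abstract mentions form a quadrature pair, and comparing their responses over time yields a direction signal. The sketch below is an illustrative phase-based stand-in, not the patent's network: the function names, the 1-D setting, and the phase-difference pooling are all assumptions.

```python
import numpy as np

def gabor_pair(size=21, wavelength=8.0, sigma=4.0):
    """Paired even-symmetric (cosine) and odd-symmetric (sine) 1-D Gabor
    filters: a Gaussian envelope times a quadrature pair of carriers."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * x / wavelength)
    odd = envelope * np.sin(2 * np.pi * x / wavelength)
    return even, odd

def motion_direction(frame_t0, frame_t1, even, odd, pos=None):
    """Illustrative direction estimate: the phase of the quadrature
    response at a fixed position advances as the pattern shifts, so the
    sign of the temporal phase difference indicates direction."""
    if pos is None:
        pos = len(frame_t0) // 2
    def phase(frame):
        e = np.convolve(frame, even, mode="same")[pos]
        o = np.convolve(frame, odd, mode="same")[pos]
        return np.arctan2(o, e)
    dphi = phase(frame_t1) - phase(frame_t0)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return np.sign(dphi)
```

Opposite shifts of the same pattern produce phase differences of opposite sign, which is the minimal behavior a direction-of-movement detector needs.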

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it mainly concerns clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset, comprising 3808 real foggy images with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU), and 2) CMAda significantly improves the performance of state-of-the-art models for SFSU by leveraging unlabeled real foggy data. The datasets and code are publicly available.
    Comment: final version, ECCV 2018
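    The fog-synthesis step can be illustrated with the standard optical model for homogeneous fog, where scene radiance is attenuated by transmittance and mixed with atmospheric light. This is a generic sketch, not the paper's simulation (which additionally uses semantic input, omitted here); the function name and the `beta`/`airlight` defaults are assumptions.

```python
import numpy as np

def add_fog(image, depth, beta=0.05, airlight=255.0):
    """Homogeneous optical fog model:
        I(x) = J(x) * t(x) + L * (1 - t(x)),  t(x) = exp(-beta * d(x)).
    `beta` is the attenuation coefficient (larger = denser fog), `depth`
    is a per-pixel scene depth map, `airlight` the atmospheric light L."""
    t = np.exp(-beta * depth.astype(np.float64))     # transmittance map
    t = t[..., None]                                 # broadcast over RGB
    fogged = image.astype(np.float64) * t + airlight * (1.0 - t)
    return np.clip(fogged, 0, 255).astype(np.uint8)
```

A curriculum in the spirit of CMAda would generate training data at increasing `beta` values, adapting the model from light synthetic fog toward dense real fog.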