613 research outputs found

    Local Contrast Enhancement Utilizing Bidirectional Switching Equalization of Separated and Clipped Subhistograms

    Digital image contrast enhancement methods based on the histogram equalization technique remain useful in consumer electronic products because of their simple implementation. However, almost all of the suggested enhancement methods use global processing, which does not emphasize local content. Therefore, this paper proposes a new local image contrast enhancement method, based on the histogram equalization technique, which not only enhances the contrast but also increases the sharpness of the image. In addition, the method preserves the mean brightness of the image. To limit noise amplification, the proposed method employs local mean separation and clipped histogram bins. Based on nine test color images and a benchmark against three other histogram-equalization-based methods, the proposed technique shows the best overall performance.
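    As a rough illustration of the mean-separation and bin-clipping idea (a minimal sketch, not the authors' exact algorithm; the clip ratio and per-block application below are assumptions), the core step might look like this in Python/NumPy:

```python
# Minimal sketch: bi-histogram equalization with clipped bins.
# Assumptions: 8-bit grayscale input; clip_ratio is a free parameter;
# applying this per local block approximates the "local" variant.
import numpy as np

def clipped_bihistogram_equalize(block, clip_ratio=2.0):
    mean = int(block.mean())
    hist, _ = np.histogram(block, bins=256, range=(0, 256))

    # Clip each bin to limit noise amplification in flat regions.
    clip_limit = clip_ratio * hist.sum() / 256.0
    hist = np.minimum(hist, clip_limit)

    # Equalize the two sub-histograms separately so pixels below the mean
    # stay below it and vice versa, which keeps the mean brightness close
    # to that of the input.
    mapping = np.zeros(256)
    lo, hi = hist[:mean + 1], hist[mean + 1:]
    if lo.sum() > 0:
        mapping[:mean + 1] = np.cumsum(lo) / lo.sum() * mean
    if hi.sum() > 0:
        mapping[mean + 1:] = (mean + 1) + np.cumsum(hi) / hi.sum() * (254 - mean)
    return mapping[block].astype(np.uint8)

# Example: enhance a single 64x64 block.
block = (np.random.rand(64, 64) * 255).astype(np.uint8)
enhanced = clipped_bihistogram_equalize(block)
```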

    Adaptive filtering techniques for acquisition noise and coding artifacts of digital pictures

    The quality of digital pictures is often degraded by various processes (e.g., acquisition or capturing, compression, filtering, transmission). In digital image/video processing systems, random noise appearing in images is mainly generated during the capturing process, while artifacts (or distortions) are generated by compression or filtering. This dissertation examines digital image/video quality degradations and proposes post-processing techniques for coding artifact and acquisition noise reduction. Three major issues associated with image/video degradation are addressed in this work. The first issue is the temporal fluctuation artifact in digitally compressed videos. In the state-of-the-art video coding standard, H.264/AVC, temporal fluctuations are noticeable between intra picture frames or between an intra picture frame and neighbouring inter picture frames. To resolve this problem, a novel robust statistical temporal filtering technique is proposed. It utilises a re-descending robust statistical model with an outlier rejection feature to reduce the temporal fluctuations while preserving picture details and motion sharpness. PSNR and sum of squared differences (SSD) results show the improvement of the proposed filters over other benchmark filters. Even for videos containing high motion, the proposed temporal filter performs well in fluctuation reduction and motion clarity preservation compared with other baseline temporal filters. The second issue concerns both the spatial and temporal artifacts (e.g., blocking, ringing, and temporal fluctuation artifacts) appearing in compressed video. To address this issue, a novel joint spatial and temporal filtering framework is constructed for artifact reduction. Both the spatial and the temporal filters employ a re-descending robust statistical model (RRSM) in the filtering processes. The robust statistical spatial filter (RSSF) reduces spatial blocking and ringing artifacts, whilst the robust statistical temporal filter (RSTF) suppresses the temporal fluctuations. Performance evaluations demonstrate that the proposed joint spatio-temporal filter is superior to the H.264 loop filter in terms of spatial and temporal artifact reduction and motion clarity preservation. The third issue is random noise, commonly modeled as mixed Gaussian and impulse noise (MGIN), which appears in the image/video acquisition process. An effective way to estimate MGIN is through a robust estimator, the normalized median absolute deviation (MADN). The MADN estimator is used to separate the MGIN model into its impulse and additive Gaussian noise portions. Based on this estimation, the proposed filtering process is composed of a modified median filter for impulse noise reduction and a DCT-based denoising filter for additive Gaussian noise reduction. However, this DCT-based denoising filter produces temporal fluctuations for videos. To solve this problem, a temporal filter is added to the filtering process, yielding another joint spatio-temporal filtering scheme that achieves the best visual quality of denoised videos. Extensive experiments show that the proposed joint spatio-temporal filtering scheme outperforms other benchmark filters in noise and distortion suppression.
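    As a hedged illustration of the MADN-based separation of mixed Gaussian and impulse noise described above (a simplified sketch, not the dissertation's exact pipeline; the 3x3 window and the impulse threshold k are assumed values):

```python
# Sketch: robust noise estimation with the normalized median absolute
# deviation (MADN) and a simple impulse/Gaussian separation step.
# The window size and threshold k are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter

def madn_sigma(residual):
    """Robust estimate of the Gaussian noise standard deviation."""
    return 1.4826 * np.median(np.abs(residual - np.median(residual)))

def separate_mixed_noise(img, k=3.0):
    med = median_filter(img.astype(np.float64), size=3)
    residual = img - med
    sigma = madn_sigma(residual)

    # Pixels whose residual greatly exceeds the robust sigma are treated as
    # impulses and replaced by the median; the remaining (roughly Gaussian)
    # noise with estimated sigma can then drive a transform-domain denoiser.
    impulses = np.abs(residual) > k * sigma
    cleaned = np.where(impulses, med, img.astype(np.float64))
    return cleaned, sigma
```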

    Video Quality Metrics

    Active and passive approaches for image authentication

    Ph.D. (Doctor of Philosophy)

    Signal processing for improved MPEG-based communication systems

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Monitoring the Depth of Anaesthesia

    One of the current challenges in medicine is monitoring the patients’ depth of general anaesthesia (DGA). Accurate assessment of the depth of anaesthesia contributes to tailoring drug administration to the individual patient, thus preventing awareness or excessive anaesthetic depth and improving patient outcomes. In the past decade, there has been a significant increase in the number of studies on the development, comparison and validation of commercial devices that estimate the DGA by analyzing the electrical activity of the brain (i.e., evoked potentials or brain waves). In this paper we review the sensors and mathematical methods most frequently used for monitoring the DGA, examine their validation in clinical practice, and discuss the central question of whether these approaches can, compared to other conventional methods, reduce the risk of patient awareness during surgical procedures.

    Optimization of video capturing and tone mapping in video camera systems

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In surveillance, high output image quality with very robust and stable operation under difficult imaging conditions is essential, combined with automatic, intelligent camera behavior without user intervention. The key problem discussed in this thesis is to ensure this high quality under all conditions, which specifically addresses the discrepancy between the dynamic range of input scenes and displays. For example, typical challenges are High Dynamic Range (HDR) scenes with strong light-dark differences and low-dynamic-range scenes with overall poor visibility of details. The detailed problem statement is as follows: (1) performing correct and stable image acquisition for video cameras in variable dynamic range environments, and (2) finding the best image processing algorithms to maximize the visualization of all image details without introducing image distortions. Additionally, the solutions should satisfy the complexity and cost requirements of typical video surveillance cameras. For image acquisition, we develop optimal image exposure algorithms that use a controlled lens, sensor integration time and camera gain to maximize SNR. For faster and more stable control of the camera exposure system, we remove nonlinear tone-mapping steps from the level control loop and derive a parallel control strategy that prevents control delays and compensates for the non-linearity and unknown transfer characteristics of the used lenses. For HDR imaging we adopt exposure bracketing, which merges short- and long-exposed images. To solve the involved non-linear sensor distortions, we apply a non-linear correction function to the distorted sensor signal, implementing a second-order polynomial with coefficients adaptively estimated from the signal itself. The result is a good, dynamically controlled match between the long- and short-exposed images. The robustness of this technique is improved for fluorescent light conditions, preventing serious distortions caused by luminance flickering and color errors. To prevent image degradation we propose both fluorescent light detection and fluorescence locking, based on measurements of the sensor signal intensity and color errors in the short-exposed image. The use of various filtering steps increases the detector robustness and reliability for scenes with motion and the appearance of other light sources. In the alternative principle of fluorescence locking, we ensure that the light integrated during the short exposure time has the correct intensity and color by synchronizing the exposure measurement to the mains frequency.
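    To make the second-order bracketing correction concrete, here is a small, hedged sketch of fitting a polynomial that maps the short-exposed signal onto the exposure-ratio-scaled long-exposed signal over their overlapping range; the valid-range thresholds, exposure ratio handling and synthetic data are illustrative assumptions, not the thesis implementation:

```python
# Sketch: adaptive second-order correction for exposure bracketing.
# Fit y ~ a*x^2 + b*x + c from co-located pixels of the long- and
# short-exposed frames, then correct the short exposure before merging.
# The lo/hi thresholds and exposure_ratio are assumed parameters.
import numpy as np

def fit_bracketing_correction(long_img, short_img, exposure_ratio,
                              lo=0.05, hi=0.85):
    # Use only pixels that are well exposed (unclipped) in both frames.
    target = long_img / exposure_ratio           # scale long exposure down
    mask = (long_img > lo) & (long_img < hi) & (short_img > lo) & (short_img < hi)
    coeffs = np.polyfit(short_img[mask], target[mask], deg=2)
    return coeffs                                 # [a, b, c]

def correct_short_exposure(short_img, coeffs):
    return np.polyval(coeffs, short_img)

# Example with normalized [0, 1] sensor data (synthetic here).
rng = np.random.default_rng(0)
long_img = rng.uniform(0.0, 1.0, (480, 640))
short_img = 0.9 * (long_img / 8.0) ** 1.05 + 0.002 * rng.normal(size=long_img.shape)
coeffs = fit_bracketing_correction(long_img, short_img, exposure_ratio=8.0)
short_corrected = correct_short_exposure(short_img, coeffs)
```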
The second area of research is to maximize the visualization of all image details. This is achieved by both global and local tone mapping functions. The largest problem of Global Tone Mapping Functions (GTMF) is that they often significantly deteriorate the image contrast. We have developed a new GTMF and illustrate, both analytically and perceptually, that it exhibits only a limited amount of compression compared to conventional solutions. Our algorithm splits the GTMF into two tasks: (1) compressing HDR images (the DRC transfer function) and (2) enhancing the global image contrast (the CHRE transfer function). The DRC subsystem adapts the HDR video signal to the remainder of the system, which can handle only a fraction of the original dynamic range. Our main contribution is a novel DRC function shape which adapts to the image, so that details in the dark image parts are enhanced while details in the bright areas are only moderately compressed. The DRC function shape is also matched with the sensor noise characteristics in order to limit noise amplification. Furthermore, we show that the image quality of DRC compression can be significantly improved if a local contrast preservation step is included. The second part of the GTMF is the CHRE subsystem, which fine-tunes and redistributes the luminance (and color) signal in the image to optimize the global contrast of the scene. The contribution of the proposed CHRE processing is that, unlike standard histogram equalization, it can preserve details in statistically unpopulated but visually relevant luminance regions. An important cornerstone of the GTMF is that both the DRC and CHRE algorithms are performed in a perceptually uniform space and optimized for the salient regions obtained by the improved salient-region detector, to maximize the relevant information transfer to the HVS. The proposed GTMF solution offers good processing quality, but it cannot sufficiently preserve local contrast for extreme HDR signals and gives only limited improvement for low-contrast scenes. The local contrast improvement is based on the Locally Adaptive Contrast Enhancement (LACE) algorithm. We contribute by using multi-band frequency decomposition to set up the complete enhancement system. Four key problems occur with real-time LACE processing: (1) "halo" artifacts, (2) clipping of the enhancement signal, (3) noise degradation and (4) the overall system complexity. "Halo" artifacts are eliminated by a new contrast gain specification using local energy and contrast measurements. This solution has low complexity and offers excellent performance in terms of higher contrast and visually appealing results. Algorithms preventing clipping of the output signal and reducing noise amplification give a further enhancement. We add a supplementary discussion on executing LACE in the logarithmic domain, where we derive a new contrast gain function that solves the LACE problems efficiently. For the best results, we find that LACE processing should be performed in the logarithmic domain for standard and HDR images, and in the linear domain for low-contrast images. Finally, the complexity of the contrast gain calculation is reduced by a new local energy metric, which can be calculated efficiently in a 2D-separable fashion. Besides the complexity benefit, the proposed energy metric gives better performance than the conventional metrics.
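    As a hedged sketch of the 2D-separable local energy idea behind the contrast gain (an illustrative approximation; the box window, gain law and constants below are assumptions, not the thesis design):

```python
# Sketch: 2D-separable local energy and a simple contrast gain for a
# LACE-style band enhancement. Window size, gain formula and the noise
# floor constant are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band, window=15):
    # Mean of squared band values over a box window; the box average is
    # separable (a horizontal pass followed by a vertical pass), which is
    # what keeps the metric cheap enough for real-time processing.
    return uniform_filter(band * band, size=window)

def contrast_gain(band, target=0.05, noise_floor=1e-4, max_gain=4.0):
    # Gain decreases in high-energy (edge) regions to avoid "halo"
    # artifacts and is capped to limit noise amplification in flat areas.
    energy = local_energy(band)
    gain = target / np.sqrt(energy + noise_floor)
    return np.clip(gain, 1.0, max_gain)

def enhance_band(base, band):
    # Amplify one detail band of a multi-band decomposition and add it
    # back to the low-pass base (both assumed normalized float images).
    return base + contrast_gain(band) * band
```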
The conclusions of our work are summarized as follows. For acquisition, we need to combine an optimal exposure algorithm, giving both improved dynamic performance and maximum image contrast/SNR, with robust exposure bracketing that can handle difficult conditions such as fluorescent lighting. For optimizing the visibility of details in the scene, we have split the GTMF into two parts, DRC and CHRE, so that a controlled optimization can be performed offering less contrast compression and detail loss than in the conventional case. Local contrast is enhanced with the known LACE algorithm, but the performance is significantly improved by individually addressing "halo" artifacts, signal clipping and noise degradation. We provide artifact reduction by a new contrast gain function based on local energy, contrast measurements and noise estimation. Besides the above contributions, we have provided feasible performance metrics and listed ample practical evidence of the real-time implementation of our algorithms in FPGAs and ASICs, used in commercially available surveillance cameras that have obtained awards for their image quality.