105 research outputs found

    Signal processing for improved MPEG-based communication systems

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using An Adaptive Filtering Algorithm

    The aim of this research was to develop an algorithm that produces a considerable improvement in the quality of JPEG images by removing blocking and ringing artifacts, irrespective of the level of compression present in the image. We review several related published works and then present a computationally efficient algorithm for reducing the blocky and Gibbs oscillation artifacts commonly present in JPEG-compressed images. The algorithm alpha-blends a smoothed version of the image with the original image; the blending is controlled by a limit factor that accounts for the amount of compression present and for local edge information derived from the application of a Prewitt filter. In addition, the value of the blending coefficient (α) is derived from the local Mean Structural Similarity Index Measure (MSSIM), which is likewise adjusted by a factor that reflects the amount of compression present. We also present our results alongside those reported in a variety of other papers whose authors used other post-compression filtering methods
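A minimal sketch of such an alpha-blend deblocking filter is shown below, assuming a grayscale image in [0, 1] and the SciPy/scikit-image stack. The helper name `deblock_alpha_blend`, the Gaussian pre-smoother and the mapping from JPEG quality to the limit factor are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch of an MSSIM/edge-controlled alpha-blend deblocking filter.
# Parameter names and constants are illustrative, not the published ones.
import numpy as np
from scipy.ndimage import gaussian_filter, prewitt
from skimage.metrics import structural_similarity

def deblock_alpha_blend(jpeg_img, quality_factor):
    """jpeg_img: float grayscale in [0, 1]; quality_factor: JPEG quality (1-100)."""
    # Smoothed candidate image whose flat regions will replace blocky ones.
    smooth = gaussian_filter(jpeg_img, sigma=1.0)

    # Local edge strength from a Prewitt filter (horizontal + vertical responses).
    edges = np.hypot(prewitt(jpeg_img, axis=0), prewitt(jpeg_img, axis=1))
    edges /= edges.max() + 1e-8

    # Local structural similarity between the original and the smoothed image.
    _, ssim_map = structural_similarity(jpeg_img, smooth, data_range=1.0, full=True)

    # Compression-dependent limit: heavier compression permits stronger smoothing.
    # (Illustrative mapping; the paper derives its own limit factor.)
    limit = np.clip(1.0 - quality_factor / 100.0, 0.1, 0.9)

    # Blend more where similarity is low (blocky regions) and edges are weak.
    alpha = limit * (1.0 - ssim_map) * (1.0 - edges)
    alpha = np.clip(alpha, 0.0, limit)

    return (1.0 - alpha) * jpeg_img + alpha * smooth
```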

    Removal Of Blocking Artifacts From JPEG-Compressed Images Using Neural Network

    The goal of this research was to develop a neural network that produces a considerable improvement in the quality of JPEG-compressed images, irrespective of the compression level present in the images. To obtain a computationally efficient algorithm for reducing blocky and Gibbs oscillation artifacts in JPEG-compressed images, we integrated artificial intelligence into the artifact-removal process. In this approach, an alpha-blend filter [7] was used to post-process JPEG-compressed images to reduce noise and artifacts without losing image detail. The alpha blending was controlled by a limit factor that considers the amount of compression present and local information derived from applying a Prewitt filter to the input JPEG image. The output of the modified alpha-blend filter was further improved by a trained neural network and compared with various other published works [7][9][11][14][20][23][30][32][33][35][37] in which the authors used post-compression filtering methods
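As an illustration of how a trained network might refine the alpha-blend output, the PyTorch sketch below uses a small residual CNN. The architecture (`ArtifactRefiner`), the two-channel input and the MSE training objective are assumptions for exposition and not necessarily the network described in the paper.

```python
# Hedged sketch: a small convolutional network that refines the alpha-blended
# output. Architecture and training setup are illustrative assumptions.
import torch
import torch.nn as nn

class ArtifactRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),  # input: JPEG frame + alpha-blended frame
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted residual correction
        )

    def forward(self, jpeg, blended):
        # Tensors of shape [N, 1, H, W] with values in [0, 1].
        x = torch.cat([jpeg, blended], dim=1)
        return torch.clamp(blended + self.net(x), 0.0, 1.0)

# Typical training step against uncompressed reference images:
model = ArtifactRefiner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(jpeg, blended, reference):
    optimizer.zero_grad()
    loss = loss_fn(model(jpeg, blended), reference)
    loss.backward()
    optimizer.step()
    return loss.item()
```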

    Video copy-move forgery detection scheme based on displacement paths

    Sophisticated digital video editing tools have made it easier to tamper with real videos and create perceptually indistinguishable fake ones. Even worse, post-processing effects, including object insertion and deletion intended to mimic or hide a specific event in the video frames, are also prevalent. Many attempts have been made to detect video copy-move forgery to date; however, accuracy rates are still inadequate, considerable room for improvement remains, and the effectiveness of existing methods is confined to detecting frame tampering rather than localizing the tampered regions. Thus, a new detection scheme was developed to detect forgery and improve accuracy. The scheme involves seven main steps. First, it converts the red, green and blue (RGB) video into greyscale frames and treats them as images. Second, it partitions each frame into non-overlapping blocks of 8x8 pixels. Third, for each two successive frames (S2F), it tracks every block’s duplicate using the proposed two-tier detection technique, involving Diamond search and the Slantlet transform, to locate the duplicated blocks. Fourth, for each pair of duplicated blocks in the S2F, it calculates a displacement using the optical flow concept. Fifth, based on the displacement values and an empirically calculated threshold, the scheme detects the existence of any deleted objects in the frames. Once completed, it then extracts the moving object using the same threshold-based approach. Sixth, frame-by-frame displacement tracking is performed to trace the object movement and find the displacement path of the moving object. The process is repeated for another group of frames to find the displacement path of the second moving object, until all the frames are exhausted. Finally, the displacement paths are compared with each other using the Dynamic Time Warping (DTW) matching algorithm to detect the cloned object; if any pair of displacement paths is perfectly matched, a clone is found. To validate the process, a series of experiments based on datasets from the Surrey University Library for Forensic Analysis (SULFA) and the Video Tampering Dataset (VTD) was performed to gauge the performance of the proposed scheme. The experimental results of the detection scheme were very encouraging, with an accuracy rate of 96.86%, which markedly outperformed the state-of-the-art methods by as much as 3.14%
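The final matching step can be illustrated with a plain Dynamic Time Warping comparison of two displacement paths, as in the hedged sketch below. The function names (`dtw_distance`, `is_cloned`) and the normalised-cost threshold are illustrative, not the scheme's exact decision rule.

```python
# Hedged sketch of comparing two displacement paths with Dynamic Time Warping.
# The threshold on the normalised alignment cost is illustrative only.
import numpy as np

def dtw_distance(path_a, path_b):
    """path_a, path_b: sequences of per-frame (dx, dy) displacements."""
    path_a = np.asarray(path_a, dtype=float)
    path_b = np.asarray(path_b, dtype=float)
    n, m = len(path_a), len(path_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(path_a[i - 1] - path_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)   # length-normalised alignment cost

def is_cloned(path_a, path_b, threshold=0.5):
    # A near-zero alignment cost means the two objects follow (almost) the
    # same trajectory, which flags a copy-move clone.
    return dtw_distance(path_a, path_b) < threshold
```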

    Optimization of video capturing and tone mapping in video camera systems

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. The image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In surveillance, high output image quality with very robust and stable operation under difficult imaging conditions is essential, combined with automatic, intelligent camera behavior without user intervention. The key problem discussed in this thesis is to ensure this high quality under all conditions, which specifically addresses the discrepancy between the dynamic range of input scenes and that of displays. Typical challenges are High Dynamic Range (HDR) scenes with strong light-dark differences and low-dynamic-range scenes with overall poor visibility of details. The detailed problem statement is as follows: (1) performing correct and stable image acquisition for video cameras in variable dynamic range environments, and (2) finding the best image processing algorithms to maximize the visualization of all image details without introducing image distortions. Additionally, the solutions should satisfy the complexity and cost requirements of typical video surveillance cameras. For image acquisition, we develop optimal image exposure algorithms that use a controlled lens, the sensor integration time and the camera gain to maximize the signal-to-noise ratio (SNR). For faster and more stable control of the camera exposure system, we remove nonlinear tone-mapping steps from the level control loop and derive a parallel control strategy that prevents control delays and compensates for the non-linearity and unknown transfer characteristics of the lenses used. For HDR imaging we adopt exposure bracketing, which merges short- and long-exposed images. To solve the involved non-linear sensor distortions, we apply a non-linear correction function to the distorted sensor signal, implementing a second-order polynomial with coefficients adaptively estimated from the signal itself. The result is a good, dynamically controlled match between the long- and short-exposed images. The robustness of this technique is improved for fluorescent light conditions, preventing serious distortions caused by luminance flickering and color errors. To prevent image degradation, we propose both fluorescent light detection and fluorescence locking, based on measurements of the sensor signal intensity and color errors in the short-exposed image. The use of various filtering steps increases the detector robustness and reliability for scenes with motion and the appearance of other light sources. In the alternative fluorescence-locking approach, we ensure that the light integrated during the short exposure time has the correct intensity and color by synchronizing the exposure measurement to the mains frequency.
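The exposure-bracketing correction described above can be pictured with the sketch below, which fits a second-order polynomial mapping the scaled short-exposed image onto the long-exposed one before merging. The least-squares estimator, the saturation test and the simple merge rule are assumptions for illustration, not the thesis's exact method.

```python
# Hedged sketch: estimate a second-order correction polynomial so that the
# short-exposed image, scaled by the exposure ratio, matches the long-exposed
# image before the two are merged into an HDR frame.
import numpy as np

def estimate_correction(short_img, long_img, exposure_ratio):
    """Fit long ~ a*x**2 + b*x + c with x = exposure_ratio * short, using
    only pixels where the long exposure is not clipped."""
    x = (exposure_ratio * short_img).ravel()
    y = long_img.ravel()
    valid = y < 0.95 * y.max()          # exclude saturated long-exposure pixels
    a, b, c = np.polyfit(x[valid], y[valid], deg=2)
    return a, b, c

def merge_exposures(short_img, long_img, exposure_ratio, sat_level=0.95):
    a, b, c = estimate_correction(short_img, long_img, exposure_ratio)
    corrected_short = np.polyval([a, b, c], exposure_ratio * short_img)
    # Use the long exposure where it is well exposed and the corrected short
    # exposure where the long exposure saturates.
    saturated = long_img >= sat_level * long_img.max()
    return np.where(saturated, corrected_short, long_img)
```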
The second area of research is to maximize the visualization of all image details. This is achieved by both global and local tone mapping functions. The largest problem of Global Tone Mapping Functions (GTMF) is that they often significantly deteriorate the image contrast. We have developed a new GTMF and illustrate, both analytically and perceptually, that it exhibits only a limited amount of compression compared to conventional solutions. Our algorithm splits the GTMF into two tasks: (1) compressing HDR images (the DRC transfer function) and (2) enhancing the global image contrast (the CHRE transfer function). The DRC subsystem adapts the HDR video signal to the remainder of the system, which can handle only a fraction of the original dynamic range. Our main contribution is a novel DRC function shape that adapts to the image, so that details in the dark image parts are enhanced while details in the bright areas are only moderately compressed. The DRC function shape is also well matched with the sensor noise characteristics in order to limit noise amplification. Furthermore, we show that the image quality of DRC compression can be significantly improved if a local contrast preservation step is included. The second part of the GTMF is the CHRE subsystem, which fine-tunes and redistributes the luminance (and color) signal in the image to optimize the global contrast of the scene. The contribution of the proposed CHRE processing is that, unlike standard histogram equalization, it can preserve details in statistically unpopulated but visually relevant luminance regions. An important cornerstone of the GTMF is that both the DRC and CHRE algorithms are performed in a perceptually uniform space and optimized for the salient regions obtained by an improved salient-region detector, to maximize the transfer of relevant information to the human visual system (HVS). The proposed GTMF solution offers good processing quality, but it cannot sufficiently preserve local contrast for extreme HDR signals and gives only limited improvement for low-contrast scenes.
The local contrast improvement is based on the Locally Adaptive Contrast Enhancement (LACE) algorithm. We contribute a multi-band frequency decomposition to set up the complete enhancement system. Four key problems occur in real-time LACE processing: (1) "halo" artifacts, (2) clipping of the enhancement signal, (3) noise degradation, and (4) the overall system complexity. "Halo" artifacts are eliminated by a new contrast gain specification using local energy and contrast measurements. This solution has low complexity and offers excellent performance in terms of higher contrast and visually appealing results. Algorithms that prevent clipping of the output signal and reduce noise amplification provide further enhancement. We add a supplementary discussion on executing LACE in the logarithmic domain, where we derive a new contrast gain function that solves the LACE problems efficiently. For the best results, we find that LACE processing should be performed in the logarithmic domain for standard and HDR images, and in the linear domain for low-contrast images. Finally, the complexity of the contrast gain calculation is reduced by a new local energy metric, which can be calculated efficiently in a 2D-separable fashion. Besides the complexity benefit, the proposed energy metric gives better performance than conventional metrics.
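The halo-suppression idea behind the contrast gain can be sketched for a single frequency band as follows; the gain shape, the box-filtered local energy and all constants are illustrative assumptions rather than the contrast gain function derived in the thesis.

```python
# Hedged sketch of one band of locally adaptive contrast enhancement
# (LACE-style): the band-pass detail signal is amplified with a gain that is
# limited by the local signal energy, the basic mechanism for suppressing
# "halo" artifacts around strong edges.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def lace_band_enhance(img, sigma=4.0, max_gain=3.0, noise_floor=1e-3):
    """img: float grayscale in [0, 1]."""
    low_pass = gaussian_filter(img, sigma=sigma)
    detail = img - low_pass                       # band-pass detail signal

    # Local energy of the detail signal (2D-separable box average of squares).
    energy = np.sqrt(uniform_filter(detail ** 2, size=int(4 * sigma) + 1))

    # Large local energy (strong edges) -> gain rolls off toward 1 to avoid
    # halos; very small energy (mostly noise) -> gain is also held back.
    gain = max_gain / (1.0 + energy / 0.05)
    gain = np.where(energy < noise_floor, 1.0, gain)
    gain = np.clip(gain, 1.0, max_gain)

    return np.clip(low_pass + gain * detail, 0.0, 1.0)
```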
The conclusions of our work are summarized as follows. For acquisition, we need to combine an optimal exposure algorithm, giving both improved dynamic performance and maximum image contrast/SNR, with robust exposure bracketing that can handle difficult conditions such as fluorescent lighting. For optimizing the visibility of details in the scene, we split the GTMF into two parts, DRC and CHRE, so that a controlled optimization can be performed, offering less contrast compression and detail loss than in the conventional case. Local contrast is enhanced with the known LACE algorithm, but its performance is significantly improved by individually addressing "halo" artifacts, signal clipping and noise degradation; artifact reduction is provided by a new contrast gain function based on local energy, contrast measurements and noise estimation. Beyond these contributions, we present feasible performance metrics and ample practical evidence of the real-time implementation of our algorithms in FPGAs and ASICs used in commercially available surveillance cameras, which have won awards for their image quality