19 research outputs found

    Detection of dirt impairments from archived film sequences: survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction is computationally complex and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion estimation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performance is compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among them and with promising directions for future research.
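
    The survey's central technique, motion-compensated interframe detection, can be illustrated with a minimal sketch: exhaustive block matching aligns a neighbouring frame to the current one, and the residual after compensation highlights temporally impulsive events such as dirt. This is only an illustrative toy (block size, search range, and the SAD criterion are assumptions, not taken from any surveyed algorithm), and it also shows why the approach is costly and fragile when matching fails.

```python
import numpy as np

def mc_residual(ref, cur, block=16, search=8):
    """Motion-compensated absolute residual between two frames.

    For each block of `cur`, the best-matching block of `ref` within a
    +/- `search` pixel window is found by exhaustive search (sum of
    absolute differences). Large residuals that appear in only one
    frame are candidate dirt; when matching fails (occlusion, erratic
    motion), the residual becomes unreliable.
    """
    ref, cur = ref.astype(np.float64), cur.astype(np.float64)
    h, w = cur.shape
    resid = np.zeros_like(cur)

    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = cur[i:i + block, j:j + block]
            best_sad, best_err = np.inf, None
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii <= h - block and 0 <= jj <= w - block:
                        err = np.abs(blk - ref[ii:ii + block, jj:jj + block])
                        sad = err.sum()
                        if sad < best_sad:
                            best_sad, best_err = sad, err
            resid[i:i + block, j:j + block] = best_err
    return resid
```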

    Segmentation-assisted detection of dirt impairments in archived film sequences

    A novel segmentation-assisted method for film dirt detection is proposed. We exploit the fact that film dirt manifests in the spatial domain as a cluster of connected pixels whose intensity differs substantially from that of its neighborhood, and we employ a segmentation-based approach to identify this type of structure. A key feature of our approach is the computation of a measure of confidence attached to each detected dirt region, which can be used for performance fine-tuning. Another important feature of our algorithm is that it avoids the computational complexity associated with motion estimation. Our experimental framework benefits from the availability of manually derived as well as objective ground truth data obtained using infrared scanning. Our results demonstrate that the proposed method compares favorably with standard spatial, temporal, and multistage median filtering approaches and provides efficient and robust detection for a wide variety of test material.
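
    A rough sketch of the segmentation idea described above (clusters of connected pixels that deviate from their local neighbourhood, each assigned a confidence score) is given below; the median-filter neighbourhood, the thresholds, and the confidence definition are illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np
from scipy.ndimage import label, median_filter

def candidate_dirt_regions(frame, nbhd=11, tau=30.0, min_conf=0.3):
    """Segment clusters of pixels that deviate from their neighbourhood.

    Each connected cluster receives a confidence score (here the mean
    deviation, normalised for 8-bit input) that can be thresholded to
    trade detection rate against false alarms. No motion estimation
    is involved.
    """
    frame = frame.astype(np.float64)
    background = median_filter(frame, size=nbhd)   # local neighbourhood estimate
    deviation = np.abs(frame - background)

    labels, n = label(deviation > tau)             # connected candidate clusters
    regions = []
    for k in range(1, n + 1):
        region = labels == k
        conf = float(deviation[region].mean() / 255.0)
        if conf >= min_conf:
            regions.append((region, conf))
    return regions
```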

    Knowledge based fundamental and harmonic frequency detection in polyphonic music analysis

    In this paper, we present an efficient approach to detecting and tracking the fundamental frequency (F0) in 'wav' audio. In music, the F0 and its harmonics are related by integer multiples, so frequency-domain analysis can be used to track the F0. The model includes a harmonic-frequency probability analysis method and pre- and post-processing steps suited to multiple instruments. The proposed system can therefore transcribe polyphonic music efficiently while taking into account the probabilities of the F0 and its harmonic frequencies. The experimental results demonstrate that the proposed system can successfully transcribe polyphonic music and achieves a high level of performance.
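
    The integer relation between the F0 and its harmonics can be made concrete with a small harmonic-summation sketch: each F0 candidate is scored by the spectral magnitude at its multiples, and the best-scoring candidate is kept. The window, candidate range, and number of harmonics are assumed values, and the probability modelling and pre/post-processing of the paper are not reproduced here.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=1000.0, n_harmonics=5):
    """Score each F0 candidate by summing spectral magnitude at k*F0.

    Exploits the integer relation between the fundamental and its
    harmonics: the true F0 tends to maximise the summed energy at
    k * F0 for k = 1..n_harmonics.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    best_f0, best_score = 0.0, -np.inf
    for f0 in freqs[(freqs >= fmin) & (freqs <= fmax)]:
        harmonics = [k * f0 for k in range(1, n_harmonics + 1) if k * f0 <= freqs[-1]]
        bins = [int(np.argmin(np.abs(freqs - h))) for h in harmonics]
        score = spectrum[bins].sum()
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0
```

    For example, estimate_f0(signal[:4096], 44100) returns the dominant pitch of one 4096-sample frame; applying it frame by frame yields a simple F0 track.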

    Compressive sensing based secret signals recovery for effective image steganalysis in secure communications

    Conventional image steganalysis mainly focuses on presence detection rather than recovery of the original secret messages embedded in the host image. To address this issue, we propose an image steganalysis method operating in the compressive sensing (CS) domain, where a block CS measurement matrix senses the transform coefficients of the stego-image to reflect the statistical differences between cover and stego-images. With multi-hypothesis prediction in the CS domain, the hidden signals are reconstructed efficiently. Extensive experiments have been carried out on five diverse image databases and benchmarked against four typical steganographic algorithms. The comprehensive results demonstrate the efficacy of the proposed approach as a universal scheme for effective detection of steganography in secure communications, while greatly reducing the number of features required for secret-signal reconstruction.
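
    The block CS measurement step can be sketched as follows: each block's transform coefficients are sensed by a random Gaussian matrix, y = Phi x, producing one low-dimensional measurement vector per block. The block size, subsampling rate, DCT transform, and Gaussian Phi are assumptions for illustration; the multi-hypothesis reconstruction stage is not shown.

```python
import numpy as np
from scipy.fft import dctn

def block_cs_measurements(image, block=8, rate=0.5, seed=0):
    """Sense each block's DCT coefficients with a random Gaussian matrix.

    y = Phi @ x, where x is the vectorised block-DCT and Phi has
    rate * block**2 rows. The measurements can feed a steganalysis
    feature extractor or a multi-hypothesis reconstruction stage.
    """
    rng = np.random.default_rng(seed)
    m = int(rate * block * block)
    phi = rng.standard_normal((m, block * block)) / np.sqrt(m)

    h = image.shape[0] // block * block
    w = image.shape[1] // block * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            x = dctn(image[i:i + block, j:j + block].astype(np.float64), norm="ortho")
            feats.append(phi @ x.ravel())
    return np.stack(feats)   # (n_blocks, m): one measurement vector per block
```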

    A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos

    Although research on saliency and visual attention detection has been active in recent years, most existing work focuses on still images rather than video-based saliency. In this paper, a deep-learning-based hybrid spatiotemporal saliency feature extraction framework is proposed for saliency detection in video footage. The deep learning model is used to extract high-level features from raw video data, which are then integrated with other high-level features. The deep network proves far more effective at extracting hidden features than conventional handcrafted methods. The effectiveness of using hybrid high-level features for saliency detection in video is demonstrated in this work. Rather than using a single static image, the proposed deep learning model takes several consecutive frames as input, so both spatial and temporal characteristics are considered when computing saliency maps. The efficacy of the proposed hybrid feature framework is evaluated on five databases of complex scenes with human gaze data. Experimental results show that the proposed model outperforms five other state-of-the-art video saliency detection approaches. In addition, the proposed framework proves useful for other video-content-based applications such as video highlight detection; as a result, a large movie clip dataset together with labeled video highlights has been generated.
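
    The idea of feeding several consecutive frames into a learned spatiotemporal feature extractor can be sketched with a toy 3D-convolutional network; the layer sizes, frame count, and PyTorch framing are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TinySpatioTemporalNet(nn.Module):
    """Toy 3D-convolutional saliency model over consecutive frames.

    Input:  (batch, 1, n_frames, H, W) grayscale clips.
    Output: (batch, 1, H, W) saliency map for the clip.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, clip):
        feat = self.features(clip)    # (B, 16, T, H, W)
        feat = feat.mean(dim=2)       # pool over the temporal axis
        return torch.sigmoid(self.head(feat))

# A random 5-frame clip of 64x64 frames yields a (1, 1, 64, 64) saliency map.
saliency = TinySpatioTemporalNet()(torch.rand(1, 1, 5, 64, 64))
```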

    Efficient detection of temporally impulsive dirt impairments in archived films

    We propose a novel approach for the detection of temporally impulsive dirt impairments in archived film sequences. Our method does not require motion compensation; it uses raw differences between the current frame and each of the previous and next frames to extract a confidence signal, which is used to localize and label dirt regions. A key feature of our method is the removal of false alarms by local region-growing. Unlike other work that uses manually added dirt impairments, we test our method on real film sequences with objective ground truth obtained by infrared scanning. Using confidence information extracted from the color channels, dirt areas with low contrast in the corresponding grayscale image can be detected by our method where motion-based methods fail. Comparisons with established algorithms demonstrate that our method offers more efficient, robust, and accurate dirt detection with fewer false alarms for a wide range of test material.
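
    A rough sketch of the overall flow (bidirectional raw frame differences, a per-pixel confidence signal, and pruning of weak candidate regions as false alarms) follows; the thresholds and the dilation-plus-size pruning used here are simplified stand-ins for the paper's confidence extraction and local region-growing.

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

def detect_dirt(prev_f, cur_f, next_f, tau=20.0, min_area=4):
    """Bidirectional raw differences -> confidence -> dirt mask.

    A pixel is a dirt candidate when it deviates from both the previous
    and the next frame in the same direction; its confidence is the
    smaller of the two absolute differences. Candidate regions are
    grown by one pixel and tiny regions are discarded as false alarms.
    """
    prev_f, cur_f, next_f = (x.astype(np.float64) for x in (prev_f, cur_f, next_f))
    d_prev, d_next = cur_f - prev_f, cur_f - next_f

    confidence = np.minimum(np.abs(d_prev), np.abs(d_next))
    confidence[np.sign(d_prev) != np.sign(d_next)] = 0.0

    mask = binary_dilation(confidence > tau)       # grow candidate regions locally
    labels, n = label(mask)
    for k in range(1, n + 1):
        if (labels == k).sum() < min_area:         # prune tiny regions
            mask[labels == k] = False
    return mask, confidence
```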

    Nonlinear acoustics of water-saturated marine sediments
