
    Modelling of content-aware indicators for effective determination of shot boundaries in compressed MPEG videos

    In this paper, a content-aware approach is proposed to design multiple test conditions for shot cut detection, which are organized into a multiple-phase decision tree for abrupt cut detection and a finite state machine for dissolve detection. In comparison with existing approaches, our algorithm is characterized by two categories of content-difference indicators and the way they are tested. While the first category indicates the content changes that are directly used for shot cut detection, the second indicates the contexts under which those changes occur. As a result, frame-difference indicators are tested with context awareness, making the detection of shot cuts adaptive to both content and context changes. The evaluation results announced by TRECVID 2007 indicate that our algorithm achieves performance comparable to that of machine learning approaches, yet with a simpler feature set and straightforward design strategies. This validates the effectiveness of modelling content-aware indicators for decision making, which also provides a good alternative to conventional approaches on this topic.
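    The abstract does not spell out the paper's indicators, but the underlying idea of testing a frame difference against the context in which it occurs can be sketched as follows; the function name, thresholds, and window size are illustrative assumptions, not the authors' design:

```python
def detect_cuts(diffs, ratio=3.0, window=5, min_diff=0.1):
    """Flag frame i as an abrupt cut when its difference score stands out
    against its local context (the neighbouring difference scores)."""
    cuts = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        context = [diffs[j] for j in range(lo, hi) if j != i]
        if not context:
            continue
        baseline = sum(context) / len(context)
        # A cut must be large both in absolute terms and relative
        # to the surrounding (context) activity.
        if d >= min_diff and d > ratio * baseline:
            cuts.append(i)
    return cuts
```

Such a context test naturally suppresses false cuts in high-motion passages, where every inter-frame difference is large and the local baseline rises accordingly.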

    Shot boundary detection in MPEG videos using local and global indicators

    Shot boundary detection (SBD) plays an important role in many video applications. In this letter, we describe a novel SBD method operating directly in the compressed domain. First, several local indicators are extracted from MPEG macroblocks, and AdaBoost is employed for feature selection and fusion. The selected features are then used to classify candidate cuts into five sub-spaces via pre-filtering and rule-based decision making. Following that, global indicators of frame similarity between the boundary frames of cut candidates are examined using phase correlation of dc images. Gradual transitions such as fades, dissolves, and combined shot cuts are also identified. Experimental results on the test data from TRECVID'07 have demonstrated the effectiveness and robustness of our proposed methodology.
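    The letter applies phase correlation to 2-D dc images; as a self-contained illustration of why phase correlation measures frame similarity, the 1-D toy version below uses a naive DFT (all names are ours, and a real implementation would use an FFT over 2-D dc images):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def phase_correlation_peak(a, b):
    """Peak of the normalised cross-power spectrum of two signals.
    A high, sharp peak means b is (approximately) a shifted copy of a;
    a flat response suggests unrelated content, i.e. a likely boundary.
    For a cyclic shift by s, the peak appears at index (-s) mod N."""
    A, B = dft(a), dft(b)
    R = [(x * y.conjugate()) / (abs(x * y.conjugate()) or 1.0)
         for x, y in zip(A, B)]
    r = [abs(v) for v in idft(R)]
    peak = max(r)
    return peak, r.index(peak)
```

Because only phase (not magnitude) is kept, the measure is insensitive to uniform brightness changes, which is what makes it attractive for comparing boundary frames.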

    Activity-driven content adaptation for effective video summarisation

    In this paper, we present a novel method for content adaptation and video summarisation implemented fully in the compressed domain. Firstly, summarisation of generic videos is modelled as the process of extracting human objects under various activities/events. Accordingly, frames are classified via fuzzy decision making into five categories, including shot changes (cuts and gradual transitions), motion activities (camera motion and object motion) and others, using two inter-frame measurements. Secondly, human objects are detected using Haar-like features. With the detected human objects and the attained frame categories, an activity level is determined for each frame to adapt to the video content. Consecutive frames belonging to the same category are grouped to form one activity entry as content of interest (COI), which converts the original video into a series of activities. An overall adjustable quota is used to control the size of the generated summary for efficient streaming. Given this quota, the frames selected for the summary are determined by evenly sampling the accumulated activity levels for content adaptation. Quantitative evaluations have demonstrated the effectiveness and efficiency of our proposed approach, which provides a more flexible and general solution for this topic, as domain-specific tasks such as accurate recognition of objects can be avoided.
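    The quota-driven sampling step can be sketched as follows; this is our minimal reading of "evenly sampling the accumulated activity levels", with illustrative names, assuming per-frame activity levels are non-negative with a positive total:

```python
def select_frames(activity, quota):
    """Pick `quota` frame indices by sampling the cumulative activity
    curve at evenly spaced targets, so that high-activity stretches
    contribute proportionally more frames to the summary.
    Assumes non-negative activity levels with a positive total."""
    cum, total = [], 0.0
    for a in activity:
        total += a
        cum.append(total)
    targets = [(i + 0.5) * total / quota for i in range(quota)]
    picks, j = [], 0
    for t in targets:
        while cum[j] < t:
            j += 1
        picks.append(j)
    return picks
```

With uniform activity this degenerates to plain even sampling; with activity concentrated in one event, nearly all of the quota is spent inside that event.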

    Evaluating and combining digital video shot boundary detection algorithms

    The development of standards for video encoding, coupled with the increased power of computing, means that content-based manipulation of digital video information is now feasible. Shots are a basic structural building block of digital video, and the boundaries between shots need to be determined automatically to allow for content-based manipulation. A shot can be thought of as a continuous sequence of images from one camera at a time. In this paper we examine a variety of automatic techniques for shot boundary detection that we have implemented and evaluated on a baseline of 720,000 frames (8 hours) of broadcast television. This extends our previous work on evaluating a single technique based on comparing colour histograms. A description of each of our three currently working methods is given, along with how they are evaluated. We find that although the different methods have about the same order of magnitude of effectiveness, different shot boundaries are detected by the different methods. We then look at combining the three shot boundary detection methods to produce one output result, and at the benefits in accuracy and performance that this brought to our system. The methods were changed from three unconnected methods, each using a static threshold value, to one connected method using three dynamic threshold values. Finally, we look at future directions for this work.
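    The move from static to dynamic thresholds, and the combination of several detectors into one output, can be sketched as follows; the statistics-based threshold and the majority vote are our illustrative choices, not necessarily the paper's exact rules:

```python
import statistics

def dynamic_threshold_cuts(hist_diffs, window=10, k=3.0):
    """Flag frame i as a cut when its colour-histogram difference exceeds
    a dynamic threshold: mean + k * std over the preceding window."""
    cuts = []
    for i in range(window, len(hist_diffs)):
        prev = hist_diffs[i - window:i]
        threshold = statistics.mean(prev) + k * statistics.pstdev(prev)
        if hist_diffs[i] > threshold + 1e-9:
            cuts.append(i)
    return cuts

def combine(votes_a, votes_b, votes_c):
    """Majority vote across three detectors' per-frame decisions."""
    return [sum(v) >= 2 for v in zip(votes_a, votes_b, votes_c)]
```

A dynamic threshold adapts to local content: a difference that would be suspicious in a static scene is ignored in a busy one, which is one reason the connected method outperforms three independent static thresholds.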

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen an unprecedented expansion in recent years. The consumer can now benefit from hardware and software that were considered state-of-the-art several years ago. The advantages offered by digital technologies are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment the subsequent copies had an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears, making it possible to produce as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering the required robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust yet invisible mark. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To achieve this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of an attack and revert it. Once the attack is reverted, the main watermark is recovered. The final result is a high-capacity, blind, DWT-based video watermarking system, robust to a wide range of attacks. (BBC Research & Development)
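    The spread-spectrum casting and correlation-based blind detection can be illustrated on a 1-D coefficient vector; this toy version omits the wavelet transform, error correction, and visual models, and all names and the strength parameter are our assumptions:

```python
import random

def pn_sequence(length, seed):
    """Deterministic +/-1 pseudo-noise carrier shared by embedder and detector."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(length)]

def embed(coeffs, bit, seed, alpha=0.5):
    """Spread one bit over all coefficients by adding a scaled carrier."""
    pn = pn_sequence(len(coeffs), seed)
    s = 1.0 if bit else -1.0
    return [c + alpha * s * p for c, p in zip(coeffs, pn)]

def detect(coeffs, seed):
    """Blind detection: correlate with the same carrier; the host signal
    averages out and the sign of the correlation recovers the bit."""
    pn = pn_sequence(len(coeffs), seed)
    corr = sum(c * p for c, p in zip(coeffs, pn)) / len(coeffs)
    return corr > 0
```

Detection is blind because only the seed, not the original coefficients, is needed; the host's correlation with the carrier shrinks as the spreading length grows, which is the essence of the spread-spectrum robustness argument.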

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis. Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining high precision and recall in the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
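    The thesis bases its key-frame extraction on compressed-domain prediction information; as a generic illustration of the selection step only, a common heuristic is to pick the frame closest to the shot's mean feature vector (this heuristic is our assumption, not necessarily the thesis's algorithm):

```python
def key_frame(features):
    """Return the index of the frame whose feature vector is closest
    (in squared Euclidean distance) to the shot's mean feature."""
    n, dim = len(features), len(features[0])
    mean = [sum(f[d] for f in features) / n for d in range(dim)]
    def sq_dist(f):
        return sum((a - b) ** 2 for a, b in zip(f, mean))
    return min(range(n), key=lambda i: sq_dist(features[i]))
```

The chosen frame is, by construction, the most representative of the shot under the given feature space, which is the property a summary key-frame needs.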

    No-reference analysis of decoded MPEG images for PSNR estimation and post-processing

    We propose no-reference analysis and processing of DCT (Discrete Cosine Transform) coded images, based on the estimation of selected MPEG parameters from the decoded video. The goal is to assess MPEG video quality and perform post-processing without access to either the original sequence or the code stream. Solutions are presented for MPEG-2 video. A method to estimate the quantization parameters of DCT coded images and MPEG I-frames at the macroblock level is presented. The results of this analysis are used for deblocking and deringing artifact reduction and for no-reference PSNR estimation without code-stream access. An adaptive deringing method using texture classification is presented. On the test set, the quantization parameters in MPEG-2 I-frames are estimated with an overall accuracy of 99.9%, and the PSNR is estimated with an overall average error of 0.3 dB. The deringing and deblocking algorithms yield improvements of 0.3 dB on the MPEG-2 decoded test sequences.
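    The quantiser-step estimation rests on a simple observation: dequantised DCT coefficients sit on multiples of the step. A toy estimator under that assumption (names and the scoring rule are ours, far simpler than the paper's method) is:

```python
def estimate_qstep(coeffs, max_q=32):
    """Estimate a single quantiser step from decoded DCT coefficients
    by scoring how close the non-zero coefficients lie to multiples
    of each candidate step. Ties are broken towards the larger step,
    since every multiple of q is also a multiple of q's divisors."""
    nz = [abs(c) for c in coeffs if abs(c) > 1e-6]
    if not nz:
        return 1
    best, best_err = 1, float("inf")
    for q in range(2, max_q + 1):
        err = sum(min(c % q, q - c % q) for c in nz) / len(nz)
        if err / q <= best_err:  # normalise; <= prefers the larger step
            best_err, best = err / q, q
    return best
```

Once the step is known, the expected quantisation error gives a PSNR estimate and tells the deblocking/deringing filters how strong the artifacts can be, all without the code stream.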