
    Super Imposed Method for Text Extraction in a Sports Video

    Get PDF
    Video is a rich source of information, comprising sequences of images together with audio and text. Text present in video provides useful cues for automatic annotation, structuring, mining, indexing and retrieval. Mechanically added (superimposed) text in video sequences conveys supplemental but important information about their content, and a large number of techniques have been proposed to exploit it. This paper presents a novel method for detecting video text regions containing player information and scores in sports videos, together with an improved algorithm for the automatic extraction of superimposed text. First, key frames are identified using a colour-histogram technique to minimise the number of video frames to process. The key frames are then converted to grayscale for efficient text detection. Since superimposed text in sports video is generally displayed in the bottom part of the image, the corresponding region of the gray image is cropped, and the Canny edge detection algorithm is applied to detect text edges. ESPN cricket video data was used in our experiments to extract the superimposed text regions; the text region image was then converted to ASCII text with an OCR tool and the result verified.
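    The pipeline this abstract describes can be sketched with NumPy; the histogram-distance threshold, the bottom-band crop ratio and the plain gradient edge map below are illustrative stand-ins for the paper's actual colour-histogram, cropping and Canny stages, and the OCR step is omitted:

```python
import numpy as np

def colour_histogram(frame, bins=8):
    """Per-channel colour histogram, normalised to sum to 1."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def key_frames(frames, threshold=0.4):
    """Keep frames whose histogram differs enough from the last key frame.
    The 0.4 L1-distance threshold is an assumed value."""
    keys = [0]
    ref = colour_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = colour_histogram(frame)
        if np.abs(h - ref).sum() > threshold:
            keys.append(i)
            ref = h
    return keys

def caption_region_edges(frame, band=0.25):
    """Grayscale the frame, crop the bottom band where superimposed text
    usually sits, and return a simple gradient-magnitude edge map."""
    gray = frame.mean(axis=2)
    bottom = gray[int(gray.shape[0] * (1 - band)):, :]
    gy, gx = np.gradient(bottom)
    return np.hypot(gx, gy)
```

    In a real implementation, cv2.Canny would replace the gradient map and an OCR tool would convert the detected region to ASCII text.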

    Shot boundary detection in MPEG videos using local and global indicators

    Get PDF
    Shot boundary detection (SBD) plays an important role in many video applications. In this letter, we describe a novel SBD method that operates directly in the compressed domain. First, several local indicators are extracted from MPEG macroblocks, and AdaBoost is employed for feature selection and fusion. The selected features are then used to classify candidate cuts into five sub-spaces via pre-filtering and rule-based decision making. Following that, global indicators of frame similarity between boundary frames of cut candidates are examined using phase correlation of dc images. Gradual transitions such as fades, dissolves, and combined shot cuts are also identified. Experimental results on the test data from TRECVID'07 demonstrate the effectiveness and robustness of the proposed methodology.
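    The global-indicator stage, phase correlation of dc images, can be illustrated with a small NumPy sketch; the 0.5 decision threshold below is an assumed value, not one taken from the letter:

```python
import numpy as np

def phase_correlation_peak(a, b):
    """Peak of the normalised cross-power spectrum of two dc images.
    A sharp peak near 1 means the frames match; a flat, low response
    suggests the frames belong to different shots."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    return np.fft.ifft2(cross).real.max()

def confirm_cut(prev_dc, next_dc, threshold=0.5):
    """Confirm a candidate cut when the correlation peak is low."""
    return phase_correlation_peak(prev_dc, next_dc) < threshold
```

    Identical dc images yield a peak of 1.0; unrelated frames yield a near-flat response well below the threshold.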

    Query engine of novelty in video streams

    Get PDF
    Prior research on novelty detection has primarily focused on algorithms that detect novelty for a given application domain. Effective storage, indexing and retrieval of novel events (beyond detection) have largely been ignored as a problem in their own right. In light of recent counter-terrorism efforts and link discovery initiatives, effective data management of novel events assumes apparent importance. Automatically detecting novel events in video data streams is an extremely challenging task. One aim of this thesis is to provide evidence that the notion of novelty in video, as perceived by a human, is extremely subjective and therefore algorithmically ill-defined. While it comes as no surprise that current machine-based parametric learning systems are far from perfectly mimicking human novelty perception, such systems have recently been very successful at exhaustively capturing novelty in video once the novelty function has been well defined by a human expert. How effective, then, are machine-based novelty detection systems compared with human novelty detection? We outline an experimental evaluation of human versus machine-based novelty systems in terms of qualitative performance, and quantify this evaluation using a variety of metrics based on the location of novel events, the number of novel events found in the video, and so on. We begin by describing a machine-based system for detecting novel events in video data streams. We then discuss the design of an indexing strategy, or Manga (manga is the Japanese term for a comic-book representation), to determine the most representative novel frames of a video sequence, and we evaluate the performance of the machine-based novelty detection system against human novelty detection.
    The distance metrics we suggest for novelty comparison may eventually aid a variety of end users in driving the indexing, retrieval and analysis of large video databases. The techniques we describe are based on low-level features extracted from video, such as color, intensity and focus of attention; the video processing component does not include semantic processing such as object detection. We conjecture that such advances, though beyond the scope of this work, would undoubtedly benefit machine-based novelty detection systems, and we validate this experimentally. We believe that a novelty detection system working in conjunction with a human expert will lead to a more user-centred data mining approach for such domains. JPEG 2000 is a new image compression method that outperforms other image formats such as JPEG, GIF and PNG. The main reason this format merits investigation is that it allows metadata to be embedded within the image itself; this metadata can be essentially anything, such as text, audio, video or images. Currently, image annotations are stored and collected alongside the images. Although this practice is common, it carries considerable risk: imagine medical images annotated by doctors to describe a tumour in the brain, and then some of the annotations are lost. Without these annotations the images themselves would be useless. Embedding the annotations within the image guarantees that the description and the image will never be separated, and the embedded metadata has no influence on the image itself. In this thesis we first develop a metric to index novelty, comparing it with traditional indexing techniques and with human perception.
    In the second phase of the thesis, we investigate the emerging JPEG 2000 technology and show that novelty stored in this format outperforms traditional image structures. One contribution of this thesis is a set of metrics to measure the performance and quality of query results for JPEG 2000 versus traditional image formats; since JPEG 2000 is a new technology, no existing metrics measure this type of performance against traditional images.
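    As a toy illustration of low-level-feature novelty detection of the kind the thesis evaluates, the sketch below flags frames whose colour and intensity statistics drift from a running mean; the feature set and threshold are illustrative assumptions, not the thesis's actual learned model:

```python
import numpy as np

def frame_features(frame):
    """Low-level features: overall mean intensity plus per-channel colour means."""
    return np.array([frame.mean()] + [frame[..., c].mean() for c in range(3)])

def novel_frames(frames, threshold=100.0):
    """Flag frames whose features deviate from the running mean of all
    frames seen so far -- a crude stand-in for a learned novelty function."""
    seen = []
    novel = []
    for i, frame in enumerate(frames):
        f = frame_features(frame)
        if seen:
            ref = np.mean(seen, axis=0)
            if np.linalg.norm(f - ref) > threshold:
                novel.append(i)
        seen.append(f)
    return novel
```

    A human expert would instead judge novelty subjectively; the point of such a sketch is only that the machine side reduces to a well-defined distance once the novelty function is fixed.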

    An Innovative Method for Measuring Instrument Data Acquisition using Image Processing Techniques

    Full text link
    Measuring instruments are essential for obtaining accurate data, but data acquisition from them can be challenging. We propose a novel method for measuring-instrument data acquisition that uses a camera to capture the instrument display and image processing techniques to extract the measured values. We demonstrate the effectiveness and accuracy of this method by applying it to capture the magnetic field of a permanent magnet using a gauss meter and a webcam. Our image processing pipeline uses Python libraries for video processing, including the OpenCV library for contour detection and thresholding; the processed data are then saved to a text file for further analysis. Our results show that the proposed method is effective and accurate, and it offers a practical solution for cases where a direct cable connection is impossible or difficult to establish. This method has potential applications in scientific research, engineering, and manufacturing.
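    A minimal sketch of the extraction stage, assuming a bright display on a dark background; pure NumPy is used here in place of the OpenCV calls (cv2.threshold, cv2.findContours) the paper relies on, and log_reading mirrors the text-file output step:

```python
import numpy as np

def binarise(gray, level=128):
    """Binary threshold -- the role cv2.threshold plays in the pipeline."""
    return (gray > level).astype(np.uint8)

def display_bounding_box(mask):
    """Bounding box (x0, y0, x1, y1) of the lit display region -- a simple
    stand-in for contour detection on the thresholded image."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def log_reading(path, value):
    """Append one extracted reading to a text file for later analysis."""
    with open(path, "a") as fh:
        fh.write(f"{value}\n")
```

    The cropped bounding-box region would then be passed to digit recognition to recover the numeric reading.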

    Implementation of Adaptive Unsharp Masking as a pre-filtering method for watermark detection and extraction

    Get PDF
    Digital watermarking has been one of the focal points of research interest in providing multimedia security over the last decade. Watermark data belonging to the user are embedded in an original work such as text, audio, image or video, so that ownership of the product can be proved. Various robust watermarking algorithms have been developed to extract or detect the watermark in the presence of attacks. Although watermarking algorithms in the transform domain differ from one another in their combinations of transform techniques, it is difficult to settle on an algorithm for a specific application. Therefore, instead of developing yet another watermarking algorithm with a different combination of transform techniques, we propose a novel and effective watermark extraction and detection method based on pre-filtering, namely Adaptive Unsharp Masking (AUM). Although Unsharp Masking (UM)-based pre-filtering has been used for watermark extraction and detection in the literature, by making the details of the watermarked image more manifest, its effectiveness may decrease under some attacks. In this study, AUM is proposed as a pre-filter that overcomes the disadvantages of UM. Experimental results show that AUM performs up to 11% better in objective quality metrics than when no pre-filtering is used. Moreover, AUM as a pre-filter in transform-domain image watermarking is as effective as it is in image enhancement, and it can be applied in an algorithm-independent way.
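    The idea behind UM and its adaptive variant can be sketched as follows; the box blur, the gain values and the edge threshold are illustrative choices, not the parameters of the AUM algorithm itself:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter (borders handled by edge padding)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def adaptive_unsharp(img, low_gain=1.0, high_gain=3.0, edge_thresh=10.0):
    """Boost the high-frequency residual, with a larger gain where the
    residual is strong -- the adaptive idea behind AUM as a pre-filter
    that makes embedded watermark details more manifest."""
    blurred = box_blur(img)
    detail = img.astype(float) - blurred          # high-frequency residual
    gain = np.where(np.abs(detail) > edge_thresh, high_gain, low_gain)
    return np.clip(img + gain * detail, 0, 255)
```

    Plain UM uses a single fixed gain; the adaptive variant varies the gain with local activity, which is what helps it survive attacks where UM degrades.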

    Extracting generic text information from images

    Full text link
    University of Technology, Sydney. Faculty of Engineering and Information Technology.
    As text appears in vast amounts everywhere, including natural scenes, web pages and videos, it has become very important information for many applications. Extracting text information from images and video frames is the first step in applying it to a specific application, and this task is performed by a text information extraction (TIE) system. TIE consists of text detection, text binarisation and text recognition; for different applications or projects, one or more of these three components may be embedded. Although many efforts have been made to extract text from images and videos, the problem is far from solved owing to the difficulties of different scenarios. This thesis focuses on text detection and text binarisation. For text detection in born-digital images, a new scheme for coarse text detection and a texture-based feature for fine text detection are proposed. In the coarse detection step, a novel scheme based on the Maximum Gradient Difference (MGD) response of text lines is proposed: MGD values are classified into multiple clusters by a clustering algorithm to create multiple layer images, and text line candidates are then detected in the different layer images. An SVM classifier trained on a novel texture-based feature is used to filter out non-text regions; the superiority of the proposed feature is demonstrated by comparing its text/non-text classification capability with that of other features. Another algorithm is designed for detecting text in natural scene images. Maximally Stable Extremal Regions (MSERs), taken as character candidates, are classified into character and non-character MSERs based on geometry-based, stroke-based, HOG-based and colour-based features, and two types of misclassified character MSERs are retrieved by two different schemes.
    A false alarm elimination step increases the text detection precision, and a bootstrap strategy enhances the suppression of false positives; both a promising recall rate and a promising precision rate are achieved. For text binarisation, the combination of a selected colour channel image and a graph-based technique is explored first. The colour channel image whose histogram has the largest distance, estimated by a mean-shift procedure, between its two main peaks is selected before the graph model is constructed; Normalised Cut is then employed on the graph to obtain the binarisation result. To circumvent the drawbacks of this grayscale-based method, a colour-based text binarisation method is proposed, applying a modified Connected Component (CC)-based validation measurement and a new objective segmentation evaluation criterion as sequential processing. The experimental results show the effectiveness of our text binarisation algorithms.
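    The Maximum Gradient Difference (MGD) response used in the coarse detection step can be sketched in NumPy; the window width is an assumed parameter:

```python
import numpy as np

def mgd_response(gray, win=5):
    """Maximum Gradient Difference along each row: within a sliding window,
    the difference between the largest and smallest horizontal gradient.
    Text strokes produce alternating positive/negative gradients, so text
    lines yield a large MGD while smooth background yields a small one."""
    grad = np.diff(gray.astype(float), axis=1)   # horizontal gradient
    h, w = grad.shape
    out = np.zeros_like(grad)
    half = win // 2
    for x in range(w):
        lo, hi = max(0, x - half), min(w, x + half + 1)
        window = grad[:, lo:hi]
        out[:, x] = window.max(axis=1) - window.min(axis=1)
    return out
```

    In the thesis's scheme, these response values would then be clustered into layer images in which text line candidates are sought.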