
    DC-image for real time compressed video matching

    This chapter presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without full decompression. The relevant arguments and supporting evidence are also discussed. Several local feature detectors are examined to select the best one for matching on the DC-image. Two experiments are carried out to support the above. The first compares the DC-image against the full I-frame in terms of matching performance and computational complexity. The second compares local features against global features for compressed video matching on the DC-image. The results confirm that the DC-image, despite its highly reduced size, is promising, as it produces higher matching precision than the full I-frame. SIFT, as a local feature, also outperforms most of the standard global features. Its computational complexity is relatively higher, but it remains within the real-time margin, leaving room for further optimizations.
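
    As a rough illustration of the DC-image idea, the sketch below (Python with NumPy) approximates a DC-image by averaging the 8x8 blocks of a decoded luminance frame; an actual compressed-domain implementation would instead read the DC coefficient of each block directly from the MPEG I-frame bitstream, avoiding full decompression.

```python
import numpy as np

def dc_image(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Approximate the DC-image of a luminance frame.

    Each output pixel corresponds to the DC (average) term of one 8x8 DCT
    block, so the result is 1/64 the area of the input frame. A true
    compressed-domain version would take these values straight from the
    bitstream instead of averaging decoded pixels.
    """
    h, w = frame.shape[:2]
    h, w = h - h % block, w - w % block                  # crop to whole blocks
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# Example: a 720x576 I-frame yields a 90x72 DC-image.
frame = np.random.randint(0, 256, (576, 720), dtype=np.uint8)
print(dc_image(frame).shape)   # (72, 90)
```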

    Video matching using DC-image and local features

    This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence for developing video similarity techniques that work directly on compressed videos, and in particular on small-size images, are discussed. Two experiments are carried out to support the above. The first compares the DC-image against the full I-frame in terms of matching performance and the corresponding computational complexity. The second compares local features against global features for video matching in the compressed domain with small-size images. The results confirm that the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Using SIFT as a local feature also outperforms most of the standard global features in precision. Its computational complexity is relatively higher but still within the real-time margin, and various optimisations could further reduce it.
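
    The matching step could then be sketched as below, assuming OpenCV with SIFT available: keypoints are detected on the small DC-images and good matches are counted with Lowe's ratio test. This illustrates the general technique, not the paper's exact implementation.

```python
import cv2

def match_score(dc_img1, dc_img2, ratio: float = 0.75) -> int:
    """Count SIFT matches between two DC-images (uint8 grey-scale arrays).

    The images are tiny (e.g. 90x72), so detection and matching are cheap;
    the number of ratio-test matches serves as a simple similarity score.
    """
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(dc_img1, None)
    _, des2 = sift.detectAndCompute(dc_img2, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```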

    Semantic Analysis of High-definition MPEG-2 Soccer Video Using Bayesian Network

    In recent years, with the spread of broadband Internet, video distribution has become common, and terrestrial digital broadcasting and satellite services such as BS and CS digital broadcasting have rapidly increased the number of programmes available to viewers. The capacity of hard disks in PCs and recorders has also grown, making it possible to store large numbers of programmes; consequently, the need for technology that can quickly find the scenes a viewer wants within this huge amount of video data is greater than ever. This study proposes a method for detecting replay scenes and highlight scenes near the goal in soccer video. For scene detection, a Bayesian network is used to describe the causal relationship between highlight scenes and features extracted from high-definition soccer video compressed with an MPEG-2 encoder. The Bayesian network makes it possible to infer the occurrence of highlight scenes probabilistically from the extracted features. Bayesian-network approaches to detecting soccer highlight scenes have been proposed before, but they rely on features obtained by applying pixel-level image processing to every frame, so the computational cost grows with the picture size and dedicated hardware is needed for real-time processing. The method proposed here computes its features from the coding parameters contained in the MPEG-2 compressed data, so it requires far less computation than conventional methods and can process even high-resolution (HDTV) video in real time on an ordinary PC. In addition, previous methods define all scene-related events in their network models on a frame-by-frame basis; for example, events related to goal scenes, such as the appearance of the goal gate, crowd cheering, and the occurrence of a replay, are all counted in frames. However, no clear procedure for determining the start and end frames of each event is given, and in some cases this must be done by hand, so errors can be introduced in the event time spans used to train the Bayesian network, and detecting the start and end times of scenes in test video is likewise difficult. In the proposed method, cut points where the picture changes due to camera switching are first detected from characteristic changes in the coding parameters extracted directly from the MPEG-2 data, and the interval between two adjacent cut points is defined as a shot. Shots are then classified into several event classes by examining their features, and scenes are detected by treating a scene as the occurrence of a characteristic event. In this way, the start and end times of each event are given unambiguously by shot cut points and can be obtained automatically from the MPEG-2 compressed data. In detection experiments on real video data, the proposed method achieved a recall of 86.17% and a precision of 90.76% for event scenes near the goal, and a recall of 81.00% and a precision of 92.57% for replay scenes. A conventional method, evaluated on different video data, reported a recall of 71.1% and a precision of 89.8% for event scenes near the goal; the proposed method surpasses it in both recall and precision, with a particularly marked improvement in recall. These results confirm the effectiveness of the proposed method. The University of Electro-Communications, 201
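
    As an illustration of the inference step only, the toy sketch below computes the posterior probability of a highlight from three shot-level events in a naive-Bayes-style network; the structure and all probability values are invented for illustration and are not taken from the thesis.

```python
# Toy network: one hidden "highlight" node with three observed shot-level
# events. All probabilities are illustrative placeholders.
P_HIGHLIGHT = 0.1
P_EVENT_GIVEN = {                       # P(event observed | highlight state)
    "replay":      {True: 0.8, False: 0.1},
    "goal_area":   {True: 0.9, False: 0.3},
    "crowd_noise": {True: 0.7, False: 0.2},
}

def p_highlight(evidence: dict) -> float:
    """Posterior P(highlight | observed events) by Bayes' rule."""
    def joint(h: bool) -> float:
        p = P_HIGHLIGHT if h else 1.0 - P_HIGHLIGHT
        for event, observed in evidence.items():
            p_true = P_EVENT_GIVEN[event][h]
            p *= p_true if observed else 1.0 - p_true
        return p
    return joint(True) / (joint(True) + joint(False))

print(p_highlight({"replay": True, "goal_area": True, "crowd_noise": False}))
```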

    Video indexing and summarization using motion activity

    In this dissertation, video-indexing techniques using low-level motion activity characteristics and their application to video summarization are presented. The MPEG-7 motion activity feature is defined as the subjective level of activity or motion in a video segment. First, a novel psychophysical and analytical framework for automatic measurement of motion activity in compliance with its subjective perception is developed. A psychophysically sound subjective ground truth for motion activity and a test set of video clips are constructed for this purpose. A number of low-level, compressed-domain, motion-vector-based descriptors, both known and novel, are then described. It is shown that these descriptors successfully estimate the subjective level of motion activity of video clips. Furthermore, the individual strengths and limitations of the proposed descriptors are determined using a novel pairwise comparison framework. It is verified that the intensity of motion activity descriptor of the MPEG-7 standard is one of the best performers, while a novel descriptor proposed in this dissertation performs comparably or better. A new descriptor for the spatial distribution of motion activity in a scene is proposed. This descriptor is supplementary to the intensity of motion activity descriptor, and is shown to have query retrieval performance comparable to the current spatial distribution of motion activity descriptor of the MPEG-7 standard. The insights obtained from the motion activity investigation are applied to video summarization. A novel approach to summarizing and skimming through video using motion activity is presented. The approach is based on allocating playback time to video segments in proportion to the motion activity of the segments. Low-activity segments are played faster than high-activity segments in such a way that a constant level of activity is maintained throughout the video. Since motion activity is a low-complexity descriptor, the proposed summarization techniques are extremely fast. The summarization techniques are successfully used on surveillance video. The proposed techniques can also be used as a preprocessing stage for more complex summarization and content analysis techniques, thus providing significant cost gains.
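
    The two core ideas, a motion-activity measure computed from motion-vector magnitudes and activity-proportional allocation of playback time, could be sketched as follows; the use of the standard deviation of magnitudes and the variable names are assumptions made for illustration, not the dissertation's exact definitions.

```python
import numpy as np

def motion_activity(motion_vectors: np.ndarray) -> float:
    """Activity of one segment as the standard deviation of motion-vector
    magnitudes (the quantity the MPEG-7 intensity descriptor quantises)."""
    mags = np.hypot(motion_vectors[..., 0], motion_vectors[..., 1])
    return float(mags.std())

def playback_speedups(activities, durations, summary_time):
    """Give each segment playback time proportional to its activity, then
    return the speed-up factor (original duration / allotted time), so that
    low-activity segments are played faster than high-activity ones."""
    a = np.maximum(np.asarray(activities, dtype=float), 1e-6)
    allotted = summary_time * a / a.sum()
    return np.asarray(durations, dtype=float) / allotted
```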

    Efficient Recognition of authentic dynamic facial expressions on the FEEDTUM database

    In order to allow for fast recognition of a user's affective state, we discuss innovative holistic and self-organizing approaches for efficient facial expression analysis. The feature set is formed by global descriptors and MPEG-based DCT coefficients. For the subsequent classification we compare modelling by pseudo-multidimensional Hidden Markov Models and Support Vector Machines; in the latter case, super-vectors are constructed using Sequential Floating Search Methods. Extensive test runs as a proof of concept are carried out on our publicly available FEEDTUM database, which consists of elicited spontaneous emotions of 18 subjects covering the MPEG-4 emotion set plus added neutrality. Maximum recognition performance reaches the benchmark rate obtained in a human perception test with 20 test persons, demonstrating the effectiveness of the introduced concepts.
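
    A rough sketch of a DCT-feature front end with an SVM back end is shown below (assuming SciPy and scikit-learn); the retained block size and the kernel are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def dct_features(face: np.ndarray, keep: int = 8) -> np.ndarray:
    """Flatten the low-frequency keep x keep corner of the 2D DCT of a
    grey-scale face image into a compact holistic feature vector."""
    coeffs = dct(dct(face.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:keep, :keep].ravel()

# Hypothetical usage with training faces and labels from the MPEG-4 emotion set:
# clf = SVC(kernel="rbf").fit([dct_features(f) for f in train_faces], train_labels)
# prediction = clf.predict([dct_features(test_face)])
```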

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis. Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current content-based video indexing and retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, in order to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on the prediction information extracted directly from the compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining high precision and recall in the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
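
    A simplified sketch of shot-boundary-driven key-frame extraction is given below; it substitutes pixel-domain histogram differences for the thesis's compressed-domain prediction information, so it shows the structure of the idea rather than the actual algorithm.

```python
import numpy as np

def keyframes(frames, threshold=0.4, bins=32):
    """Return one key-frame index per detected shot.

    A shot boundary is declared where the normalised histogram difference
    between consecutive frames exceeds `threshold`; the middle frame of
    each resulting shot is taken as its key-frame.
    """
    hists = [np.histogram(f, bins=bins, range=(0, 256))[0] / f.size for f in frames]
    cuts = [0] + [i for i in range(1, len(frames))
                  if np.abs(hists[i] - hists[i - 1]).sum() > threshold] + [len(frames)]
    return [(start + end) // 2 for start, end in zip(cuts[:-1], cuts[1:]) if end > start]
```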

    Scene extraction in motion pictures

    This paper addresses the challenge of bridging the semantic gap between the rich meaning users desire when they query to locate and browse media and the shallowness of the media descriptions that can be computed in today's content management systems. To facilitate high-level, semantics-based content annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines drawn from film production to determine when a scene change occurs. We then investigate different rules and conventions of Film Grammar that guide and shape an algorithmic solution for determining a scene. Two different techniques using intershot analysis are proposed as solutions in this paper. In addition, we present different refinement mechanisms, such as film-punctuation detection founded on Film Grammar, to further improve the results. These refinement techniques demonstrate significant improvements in overall performance. Furthermore, we analyze errors in the context of film-production techniques, which offers useful insights into the limitations of our method.
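
    One common intershot-analysis heuristic is sketched below: a shot is linked into the current scene when it is sufficiently similar to one of the few preceding shots, and a new scene is started otherwise. This is a generic illustration of shot-to-scene grouping, not the paper's Film-Grammar-based algorithm.

```python
import numpy as np

def group_shots_into_scenes(shot_features, window=3, sim_threshold=0.8):
    """Greedily group shots into scenes.

    `shot_features` is a list of L2-normalised descriptor vectors, one per
    shot; a shot joins the current scene if its similarity (dot product) to
    any of the previous `window` shots reaches `sim_threshold`.
    """
    if not shot_features:
        return []
    scenes, current = [], [0]
    for i in range(1, len(shot_features)):
        recent = shot_features[max(0, i - window):i]
        linked = any(float(np.dot(shot_features[i], f)) >= sim_threshold for f in recent)
        if linked:
            current.append(i)
        else:
            scenes.append(current)
            current = [i]
    scenes.append(current)
    return scenes
```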