
    Semantic Analysis of High-definition MPEG-2 Soccer Video Using Bayesian Network

    In recent years, with the spread of broadband Internet, video streaming has become widespread, and terrestrial digital broadcasting and satellite services such as BS and CS digital broadcasting have sharply increased the number of programs available to viewers. The hard-disk capacity of PCs and recorders has also grown, making it possible to store large numbers of programs, but this in turn has made the ability to quickly find the scenes a viewer wants within huge volumes of video data more important than ever. This study proposes a method for detecting replay scenes and highlight scenes near the goal in soccer video. For scene detection, a Bayesian network is used to describe the causal relationships between highlight scenes and features extracted from high-definition soccer video compressed with an MPEG-2 encoder. The Bayesian network makes it possible to probabilistically infer the occurrence of a highlight scene from the extracted features. Methods for detecting soccer highlight scenes with Bayesian networks have been proposed before, but they rely on features computed by applying various pixel-level image processing operations to every frame. Their computational cost therefore grows with picture size, and real-time processing requires dedicated hardware. The method proposed in this study computes features from the coding parameters already contained in the MPEG-2 compressed data, so it requires far less computation than conventional methods and can process even high-resolution video such as HDTV in real time on an ordinary PC. Furthermore, while conventional methods define Bayesian networks for various scene types, all of the scene-related events in their network models are defined frame by frame; for example, the events related to goal scenes, such as the appearance of the goal gate, crowd cheering, and the occurrence of a replay, are all counted in frames. However, no clear procedure for determining the start and end frames of each event has been given, and in some cases this must be done by hand, so errors can be introduced in the event time intervals used to train the Bayesian network, and detecting the start and end times of scenes in test video is likewise difficult. In the proposed method, cut points, where the picture changes due to camera switching, are first detected from characteristic changes in coding parameters extracted directly from the MPEG-2 compressed data, and the interval between two adjacent cut points is defined as a shot. Each shot is then classified into one of several event classes based on its features, and a scene is detected as the occurrence of a characteristic event. In this way, the start and end times of every event are given unambiguously by shot cut points and can be obtained automatically from the MPEG-2 compressed data. In detection experiments on real video data, the proposed method achieved a recall of 86.17% and a precision of 90.76% for event scenes occurring near the goal, and a recall of 81.00% and a precision of 92.57% for replay scenes. A conventional method, evaluated on different video data, reported a recall of 71.1% and a precision of 89.8% for event scenes near the goal; the proposed method thus outperforms it in both recall and precision, with a particularly large improvement in recall. These results confirm the effectiveness of the proposed method. The University of Electro-Communications, 201
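    The core inference step described above can be illustrated with a minimal sketch. The sketch below treats the shot-level events as conditionally independent given the scene label (a naive-Bayes simplification of the thesis's Bayesian network); the event names and all probability values are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: infer P(highlight | shot-level events) with Bayes' rule.
# Event names and probability tables are illustrative assumptions only.

P_HIGHLIGHT = 0.05  # assumed prior probability that a shot belongs to a highlight

# P(event observed | highlight), P(event observed | not highlight)
LIKELIHOOD = {
    "goal_gate_shot": (0.90, 0.10),
    "replay_shot":    (0.80, 0.05),
    "crowd_shot":     (0.70, 0.20),
}

def posterior_highlight(evidence: dict) -> float:
    """Posterior P(highlight | observed events), assuming conditional independence."""
    p_h, p_n = P_HIGHLIGHT, 1.0 - P_HIGHLIGHT
    for name, observed in evidence.items():
        l_h, l_n = LIKELIHOOD[name]
        p_h *= l_h if observed else 1.0 - l_h
        p_n *= l_n if observed else 1.0 - l_n
    return p_h / (p_h + p_n)

# Example: a shot sequence showing the goal gate followed by a replay.
print(posterior_highlight({"goal_gate_shot": True, "replay_shot": True, "crowd_shot": False}))
```

    The recall and precision figures reported above follow the usual definitions over detected scenes: recall = TP / (TP + FN) and precision = TP / (TP + FP).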

    A Convolutional Neural Network Approach for Half-Pel Interpolation in Video Coding

    Motion compensation is a fundamental technology in video coding that removes the temporal redundancy between video frames. To further improve coding efficiency, sub-pel motion compensation has been utilized, which requires interpolation of fractional samples. Video coding standards usually adopt fixed interpolation filters derived from signal processing theory. However, as video signals are not stationary, the fixed interpolation filters may turn out to be less efficient. Inspired by the great success of the convolutional neural network (CNN) in computer vision, we propose to design a CNN-based interpolation filter (CNNIF) for video coding. Different from previous studies, one difficulty in training the CNNIF is the lack of ground truth, since the fractional samples are not actually available. Our solution to this problem is to derive the "ground truth" of fractional samples by smoothing high-resolution images, which the conducted experiments verify to be effective. Compared to the fixed half-pel interpolation filter for luma in High Efficiency Video Coding (HEVC), our proposed CNNIF achieves up to 3.2% and on average 0.9% BD-rate reduction under the low-delay P configuration. Comment: International Symposium on Circuits and Systems (ISCAS) 201
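    The key trick here, deriving pseudo ground truth for the half-pel positions by smoothing high-resolution images, can be sketched as follows. The Gaussian filter and its sigma are assumptions for illustration; the abstract does not specify the smoothing kernel used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_half_pel_pairs(hr_image: np.ndarray, sigma: float = 0.8):
    """Derive (input, pseudo-ground-truth) pairs for half-pel interpolation training.

    A high-resolution image is smoothed; even-position samples then act as the
    integer-pel input, and odd-position samples act as "ground truth" for the
    three half-pel positions. sigma is an illustrative assumption.
    """
    h, w = hr_image.shape
    hr = hr_image[: h // 2 * 2, : w // 2 * 2].astype(np.float64)  # crop to even dims
    smoothed = gaussian_filter(hr, sigma=sigma)
    integer_pel = smoothed[0::2, 0::2]  # CNN input: integer positions
    half_h = smoothed[0::2, 1::2]       # horizontal half-pel targets
    half_v = smoothed[1::2, 0::2]       # vertical half-pel targets
    half_d = smoothed[1::2, 1::2]       # diagonal half-pel targets
    return integer_pel, (half_h, half_v, half_d)
```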

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience for end-users are key objectives for video service providers, not only to increase customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be met to obtain satisfactory quality. Video service providers should therefore continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. Using only information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
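    As a rough sketch of the classification stage, the detector can be viewed as a binary classifier over features parsed from the encoded bitstream. The feature set and classifier below are hypothetical stand-ins for illustration; the article's actual features and model are not given in this abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-impairment features parsed from the encoded bitstream:
# [lost_macroblock_ratio, mean_motion_vector_magnitude, in_reference_frame,
#  mean_residual_energy]. Labels: 1 = visible to the end-user, 0 = invisible.
X_train = np.array([
    [0.40, 12.0, 1, 950.0],  # heavy loss in a moving reference frame
    [0.02,  0.5, 0,  40.0],  # tiny loss in a static non-reference frame
    [0.25,  8.0, 1, 600.0],
    [0.05,  1.0, 0,  80.0],
])
y_train = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(np.array([[0.30, 10.0, 1, 700.0]])))  # -> [1], i.e. visible
```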

    An approach to summarize video data in compressed domain

    Thesis (Master)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2007. Includes bibliographical references (leaves: 54-56). Text in English; abstract in Turkish and English. x, 59 leaves. The requirements to represent digital video and images efficiently and feasibly have driven great efforts in research, development and standardization over the past 20 years. These efforts have targeted a vast area of applications such as video on demand, digital TV/HDTV broadcasting, multimedia video databases, and surveillance. Moreover, these applications demand ever more efficient collections of algorithms to enable lower bit rates with acceptable quality, depending on application requirements. Today, most video content, whether stored or transmitted, is in compressed form. The increase in the amount of video data being shared has attracted the interest of researchers to the interrelated problems of video summarization, indexing and abstraction. In this study, scene cut detection in the emerging ISO/ITU H.264/AVC coded bit stream is realized by extracting spatio-temporal prediction information directly in the compressed domain. The syntax and semantics, parsing and decoding processes of the ISO/ITU H.264/AVC bit stream are analyzed to detect scene information. Various video test data are constructed using the Joint Video Team's test model JM encoder, and implementations are made on the JM decoder. The output of the study is the scene information needed to address video summarization, skimming and indexing applications that use new-generation ISO/ITU H.264/AVC video.
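    A common compressed-domain cue for this kind of cut detection is the fraction of intra-coded macroblocks in predicted frames: temporal prediction fails across a scene change, so the encoder falls back to spatial coding, which is visible directly in the bitstream syntax. The sketch below uses that cue with an illustrative threshold; it is one plausible reading of the spatio-temporal prediction information the thesis extracts, not necessarily the thesis's exact decision rule.

```python
# Minimal sketch of compressed-domain scene cut detection. Assumes a decoder
# (e.g. a modified JM decoder, as in the thesis) reports per-frame macroblock
# statistics; the 0.7 threshold is an illustrative assumption.

def detect_scene_cuts(frames, intra_ratio_threshold=0.7):
    """Flag frames where a predicted (P/B) frame is dominated by intra macroblocks.

    frames: list of dicts such as {"type": "P", "intra_mbs": 380, "total_mbs": 396}.
    """
    cuts = []
    for i, f in enumerate(frames):
        if f["type"] in ("P", "B") and f["total_mbs"] > 0:
            if f["intra_mbs"] / f["total_mbs"] >= intra_ratio_threshold:
                cuts.append(i)  # temporal prediction broke down: likely a cut
    return cuts

# Example: the third frame switches almost entirely to intra coding.
frames = [
    {"type": "I", "intra_mbs": 396, "total_mbs": 396},
    {"type": "P", "intra_mbs": 12,  "total_mbs": 396},
    {"type": "P", "intra_mbs": 390, "total_mbs": 396},
]
print(detect_scene_cuts(frames))  # -> [2]
```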

    Semi-automatic video object segmentation for multimedia applications

    A semi-automatic video object segmentation tool is presented for segmenting both still pictures and image sequences. The approach combines automatic segmentation algorithms with manual user interaction. The still-image segmentation component comprises a conventional spatial segmentation algorithm (Recursive Shortest Spanning Tree (RSST)), a hierarchical segmentation representation method (Binary Partition Tree (BPT)), and user interaction. An initial segmentation partition of homogeneous regions is created using RSST. The BPT technique is then used to merge these regions and hierarchically represent the segmentation in a binary tree. Semantic objects are then built manually by selectively clicking on image regions. A video object-tracking component enables image sequence segmentation; this subsystem is based on motion estimation, spatial segmentation, object projection, region classification, and user interaction. The motion between the previous frame and the current frame is estimated, and the previous object is then projected onto the current partition. A region classification technique determines which regions in the current partition belong to the projected object. User interaction allows object re-initialisation when the segmentation results become inaccurate. The combination of all these components enables offline video sequence segmentation. The results presented on standard test sequences illustrate the potential use of this system for object-based coding and representation of multimedia.
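    The BPT construction at the heart of the still-image component can be sketched as a greedy region-merging loop over the initial RSST partition. The grayscale mean-difference similarity used below is an assumption for illustration; the actual system's merging criterion may differ.

```python
def build_bpt(regions, adjacency):
    """Greedy Binary Partition Tree construction (illustrative sketch).

    regions:   {region_id: (mean_gray, n_pixels)}, e.g. from an RSST partition.
    adjacency: set of frozenset({i, j}) pairs of neighbouring region ids.
    Returns the merge sequence [(parent_id, child_a, child_b), ...] -- the tree.
    """
    merges = []
    next_id = max(regions) + 1
    while len(regions) > 1:
        # merge the most similar pair of adjacent regions
        a, b = min(
            (tuple(sorted(edge)) for edge in adjacency),
            key=lambda p: abs(regions[p[0]][0] - regions[p[1]][0]),
        )
        (ma, na), (mb, nb) = regions.pop(a), regions.pop(b)
        regions[next_id] = ((ma * na + mb * nb) / (na + nb), na + nb)  # weighted mean
        # rewire edges that touched a or b to the new parent node
        adjacency = {
            frozenset(next_id if r in (a, b) else r for r in edge) for edge in adjacency
        } - {frozenset({next_id})}
        merges.append((next_id, a, b))
        next_id += 1
    return merges

# Example: the two similar dark regions merge before joining the bright one.
regions = {0: (10.0, 100), 1: (12.0, 80), 2: (200.0, 120)}
print(build_bpt(regions, {frozenset({0, 1}), frozenset({1, 2})}))  # [(3, 0, 1), (4, 2, 3)]
```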