
    Semantic Analysis of High-definition MPEG-2 Soccer Video Using Bayesian Network

    In recent years, with the spread of broadband Internet access, video streaming has become commonplace, and terrestrial digital broadcasting and satellite services such as BS and CS digital broadcasting have rapidly increased the number of programs available to viewers. The hard-disk capacity of PCs and recorders has also grown, making it possible to store large numbers of programs (content); at the same time, the need for technology that can quickly find the scenes a viewer wants within this huge volume of video data is greater than ever. This study proposes a method for detecting replay scenes and highlight scenes near the goal in soccer video. For scene detection, we use a Bayesian network to describe the causal relationships between highlight scenes and features extracted from high-definition soccer video compressed by an MPEG-2 encoder. The Bayesian network makes it possible to infer probabilistically, from the extracted features, whether a highlight scene has occurred. Bayesian-network methods for detecting highlight scenes in soccer video have been proposed before, but they rely on features obtained by applying various pixel-level image-processing operations to every frame. Their computational cost therefore grows with the picture size, and real-time processing requires dedicated hardware. Because the method proposed here computes features from the coding parameters already contained in the MPEG-2 compressed data, it requires far less computation than conventional methods and can process even high-resolution video such as HDTV in real time on an ordinary PC. Furthermore, although conventional work has proposed Bayesian networks for various scene types, in all of them the scene-related events in the network model are defined frame by frame: for example, events related to goal scenes, such as the appearance of the goal gate, crowd cheering, and the occurrence of a replay, are all counted in units of frames. However, no method has been given for determining each event's start and end frames unambiguously, and in some cases this must be done by hand. As a result, errors can enter the event time intervals used to train the Bayesian network, and detecting when a scene starts and ends in test video is also difficult. In the proposed method, cut points, where the picture changes as the camera is switched, are first detected from characteristic changes in coding parameters extracted directly from the MPEG-2 compressed data, and the interval between two adjacent cut points is defined as a shot. Each shot is then classified into one of several event classes by examining its features, and scenes are detected by treating a scene as the occurrence of a characteristic event. In this method, the start and end times of each event are given unambiguously by the shot cut points and can be obtained automatically from the MPEG-2 compressed data. In detection experiments on real video data, event scenes near the goal were detected with a recall of 86.17% and a precision of 90.76%, and replay scenes with a recall of 81.00% and a precision of 92.57%. The conventional method, albeit on different video data, reported a recall of 71.1% and a precision of 89.8% for event scenes near the goal; the proposed method thus surpasses it in both recall and precision, with a particularly marked improvement in recall. These results confirm the effectiveness of the proposed method. The University of Electro-Communications, 201
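
    As a rough illustration of the inference step only, the following Python sketch evaluates a toy two-event Bayesian network by direct enumeration. The event names and every probability value are illustrative placeholders, not the conditional probability tables learned in the study.

        # Toy Bayesian-network inference for a goal-area highlight, given
        # two shot-level events. All probabilities are illustrative
        # placeholders, not values from the paper.

        def posterior_highlight(replay_seen: bool, goal_gate_seen: bool) -> float:
            p_h = 0.05  # prior probability that a shot is a highlight (assumed)
            p_replay = {True: 0.70, False: 0.10}  # P(replay observed | highlight?)
            p_gate = {True: 0.80, False: 0.15}    # P(goal gate seen | highlight?)

            def likelihood(h: bool) -> float:
                pr = p_replay[h] if replay_seen else 1.0 - p_replay[h]
                pg = p_gate[h] if goal_gate_seen else 1.0 - p_gate[h]
                return pr * pg

            num = likelihood(True) * p_h
            den = num + likelihood(False) * (1.0 - p_h)
            return num / den  # P(highlight | evidence), by Bayes' rule

        print(posterior_highlight(replay_seen=True, goal_gate_seen=True))

    Because the evidence comes from shot-level coding parameters rather than per-pixel processing, evaluating such a network is cheap, which is the basis of the abstract's real-time claim.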

    Using association rule mining to enrich semantic concepts for video retrieval

    In order to achieve true content-based information retrieval on video, we should analyse and index video with high-level semantic concepts in addition to using user-generated tags and structured metadata like title, date, etc. However, the range of such high-level semantic concepts, detected either manually or automatically, is usually limited compared to the richness of the information content in video and the potential vocabulary of concepts available for indexing. Even though there is work to improve the performance of individual concept classifiers, we should strive to make the best use of whatever partial sets of semantic concept occurrences are available to us. We describe in this paper our method for using association rule mining to automatically enrich the representation of video content through a set of semantic concepts based on concept co-occurrence patterns. We describe our experiments on the TRECVid 2005 video corpus annotated with the 449 concepts of the LSCOM ontology. The evaluation of our results shows the usefulness of our approach.
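
    The sketch below illustrates the general idea under simplifying assumptions: it mines single-antecedent association rules from toy concept co-occurrences and uses high-confidence rules to add likely concepts to a shot's annotation. The concept names and the 0.8 confidence threshold are assumptions for illustration, not the paper's settings or the LSCOM vocabulary.

        from collections import Counter
        from itertools import combinations

        # Toy shot-level concept annotations (illustrative names).
        shots = [
            {"sky", "boat", "water"},
            {"boat", "water"},
            {"road", "car"},
            {"car", "road", "building"},
            {"boat", "water", "person"},
        ]

        item_counts, pair_counts = Counter(), Counter()
        for concepts in shots:
            item_counts.update(concepts)
            pair_counts.update(combinations(sorted(concepts), 2))

        # Keep rules antecedent -> consequent with high enough confidence.
        MIN_CONF = 0.8  # assumed threshold
        rules = {}
        for (a, b), n_ab in pair_counts.items():
            for ante, cons in ((a, b), (b, a)):
                if n_ab / item_counts[ante] >= MIN_CONF:
                    rules.setdefault(ante, set()).add(cons)

        def enrich(concepts: set) -> set:
            """Add consequents of every rule whose antecedent is present."""
            extra = set().union(*(rules.get(c, set()) for c in concepts))
            return concepts | extra

        print(enrich({"boat"}))  # likely adds "water"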

    Automated classification of cricket pitch frames in cricket video

    The automated detection of the cricket pitch in a video recording of a cricket match is a fundamental step in content-based indexing and summarization of cricket videos. In this paper, we propose visual-content-based algorithms to automate the extraction of video frames with the cricket pitch in focus. As a preprocessing step, we first select a subset of frames with a view of the cricket field, of which the cricket pitch forms a part. This filtering process reduces the search space by eliminating frames that contain a view of the audience, close-up shots of specific players, advertisements, etc. The subset of frames containing the cricket field is then subjected to statistical modeling of the grayscale (brightness) histogram (SMoG). Since SMoG does not utilize color or domain-specific information such as the region of the frame where the pitch is expected to be located, we propose an alternative algorithm: component quantization based region of interest extraction (CQRE) for the extraction of pitch frames. Experimental results demonstrate that, regardless of the quality of the input, successive application of the two methods outperforms either one applied exclusively. The SMoG-CQRE combination for pitch frame classification yields an average accuracy of 98.6% in the best case (a high-resolution video with good contrast) and an average accuracy of 87.9% in the worst case (a low-resolution video with poor contrast). Since the extraction of pitch frames forms the first step in analyzing the important events in a match, we also present a post-processing step, viz., an algorithm to detect players in the extracted pitch frames.
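
    As a hedged illustration of the histogram-modeling idea behind SMoG (not the paper's actual formulation), the sketch below fits per-bin mean and variance of grayscale histograms over training frames and scores a new frame with a Gaussian log-likelihood; the bin count and the synthetic frames are assumptions.

        import numpy as np

        BINS = 32  # histogram resolution (assumed)

        def gray_hist(frame: np.ndarray) -> np.ndarray:
            """Normalized grayscale (brightness) histogram of one frame."""
            h, _ = np.histogram(frame, bins=BINS, range=(0, 256))
            return h / h.sum()

        def fit_model(training_frames):
            """Per-bin mean and variance modeling the pitch-frame class."""
            hists = np.stack([gray_hist(f) for f in training_frames])
            return hists.mean(axis=0), hists.var(axis=0) + 1e-6

        def log_likelihood(frame, mean, var) -> float:
            """Gaussian log-likelihood of a frame under the fitted model."""
            h = gray_hist(frame)
            return float(-0.5 * np.sum((h - mean) ** 2 / var + np.log(2 * np.pi * var)))

        # Toy usage with synthetic 8-bit "frames"; real input would be the
        # frames that survived the field-view filtering step.
        rng = np.random.default_rng(0)
        train = [rng.normal(120, 20, (72, 128)).clip(0, 255) for _ in range(10)]
        mean, var = fit_model(train)
        test = rng.normal(120, 20, (72, 128)).clip(0, 255)
        print(log_likelihood(test, mean, var))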

    Multi-sensor human action recognition with particular application to tennis event-based indexing

    The ability to automatically classify human actions and activities using visual sensors or by analysing body-worn sensor data has been an active research area for many years. Only recently, with advancements in both fields and the ubiquitous nature of low-cost sensors in our everyday lives, has automatic human action recognition become a reality. While traditional sports coaching systems rely on manual indexing of events from a single modality, such as visual or inertial sensors, this thesis investigates the possibility of capturing and automatically indexing events from multimodal sensor streams. In this work, we detail a novel approach to infer human actions by fusing multimodal sensors to improve recognition accuracy. State-of-the-art visual action recognition approaches are also investigated. Firstly, we apply these action recognition detectors to basic human actions in a non-sporting context. We then perform action recognition to infer tennis events in a tennis court instrumented with cameras and inertial sensing infrastructure. The system proposed in this thesis can use either visual or inertial sensors to automatically recognise the main tennis events during play. A complete event retrieval system is also presented to allow coaches to build advanced queries, which existing sports coaching solutions cannot facilitate without an inordinate amount of manual indexing. The event retrieval interface is evaluated against a leading commercial sports coaching tool in terms of both usability and efficiency.
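
    A minimal sketch of one plausible realization of the fusion step, weighted late fusion of per-event classifier scores; the event names, scores, and weight are illustrative assumptions, not necessarily the thesis's actual fusion rule.

        # Late fusion: combine per-event confidences from a visual
        # classifier and an inertial-sensor classifier, then pick the
        # highest-scoring event. Events and weight are assumptions.

        EVENTS = ["serve", "forehand", "backhand"]

        def fuse(visual: dict, inertial: dict, w_visual: float = 0.6):
            fused = {e: w_visual * visual[e] + (1 - w_visual) * inertial[e]
                     for e in EVENTS}
            return max(fused, key=fused.get), fused

        label, scores = fuse(
            {"serve": 0.7, "forehand": 0.2, "backhand": 0.1},
            {"serve": 0.4, "forehand": 0.5, "backhand": 0.1},
        )
        print(label, scores)  # the visual evidence outweighs the inertial here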

    Content And Multimedia Database Management Systems

    A database management system is a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. The main characteristic of the ‘database approach’ is that it increases the value of data by its emphasis on data independence. DBMSs, and in particular those based on the relational data model, have been very successful at the management of administrative data in the business domain. This thesis investigates data management in multimedia digital libraries and its implications for the design of database management systems. The main problem of multimedia data management is providing access to the stored objects. The content structure of administrative data is easily represented in alphanumeric values, and database technology has therefore primarily focused on handling the objects’ logical structure. In the case of multimedia data, however, representation of content is far from trivial and is not supported by current database management systems.

    Searching surveillance video contents using convolutional neural network

    Manual video inspection, searching, and analysis is exhausting and inefficient. This paper presents an intelligent system to search surveillance video contents using deep learning. The proposed system reduces the amount of work needed to perform video searching and improves speed and accuracy. A pre-trained VGG-16 CNN model is used for dataset training. In addition, key frames of videos are extracted in order to save space, reduce the amount of work, and reduce the execution time. The extracted key frames are processed using the Sobel edge detector and max-pooling in order to eliminate redundancy; this increases compactness and avoids near-duplicate extracted frames. A text file that contains the key frame index, time of occurrence, and the classification given by the VGG-16 model is produced, enabling humans to easily search for objects of interest. The VIRAT and IVY LAB datasets were used in the experiments, and 128 different classes, representing objects important to surveillance systems, were identified in them; users can also define other classes and apply the proposed methodology. Experiments and evaluation showed that the proposed system outperforms existing methods by an order of magnitude, achieving the best results in speed while providing high accuracy in classification.
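
    The sketch below illustrates the redundancy-elimination step described above (Sobel edge signatures followed by max-pooling, with near-duplicate frames dropped). The pooling window, distance threshold, and synthetic frames are assumptions; the VGG-16 classification and the index-file output are only noted in a comment.

        import numpy as np
        from scipy.ndimage import sobel

        POOL = 8  # max-pooling window (assumed)

        def signature(frame: np.ndarray) -> np.ndarray:
            """Edge-magnitude map of a frame, max-pooled to a coarse grid."""
            edges = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
            h, w = (d - d % POOL for d in edges.shape)
            edges = edges[:h, :w]
            return edges.reshape(h // POOL, POOL, w // POOL, POOL).max(axis=(1, 3))

        def key_frames(frames, threshold=0.15):
            """Keep a frame only if its signature differs enough from the last kept one."""
            kept, last = [], None
            for idx, frame in enumerate(frames):
                sig = signature(frame)
                if last is None or np.abs(sig - last).mean() > threshold * sig.max():
                    kept.append(idx)
                    last = sig
            return kept

        # Toy usage; a real pipeline would decode video frames, pass each
        # kept frame to the VGG-16 classifier, and append
        # "key frame index, time of occurrence, class label" to the index file.
        rng = np.random.default_rng(1)
        frames = [rng.random((64, 64)) for _ in range(5)]
        print(key_frames(frames))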