
    Video summarization with key frames

    Video summarization is an important tool for managing and browsing video content. The growing number of consumer-level video recording devices, combined with the availability of cheap high-bandwidth internet connections, has enabled ordinary people to become video content producers and publishers, resulting in a massive increase in online video content. Tools are needed for efficiently finding relevant content without having to watch the videos in full. Video summaries provide a condensed view of the actual video. They are most commonly presented either as static still images in the form of storyboards or as dynamic video skims, which are shorter versions of the actual videos. Although computer-assisted summarization methods have long been studied, practical implementations remain few. In this thesis, a semi-supervised workflow and a tool set for creating summaries are implemented. First, the implemented tool automatically creates a static storyboard summary of an input video. Users can then use the storyboard to select the most important content, and the selected content is used to create a video skim. A major part of the thesis consists of evaluating and finding the best methods for detecting single key frames that best depict the contents of a video; the evaluation focuses mainly on motion analysis based on optical flow histograms. In the experimental part, the performance of the implemented workflow is compared to a state-of-the-art automatic video summarization method. The results show that even a rather simple method can produce good results and that keeping the human in the loop for key frame selection is beneficial for generating meaningful video summaries.
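    As a rough illustration of the motion-analysis step described above, the sketch below computes per-frame optical flow histograms and scores frames by average flow magnitude. It assumes OpenCV's Farnebäck dense optical flow; the sampling step, the histogram parameters and the heuristic of treating low-motion frames as key frame candidates are illustrative assumptions, not the thesis's actual method.

    ```python
    import cv2
    import numpy as np

    def motion_profile(video_path, bins=16, step=5):
        """Per-sample motion score derived from dense optical flow histograms."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return []
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        scores = []  # (frame index, mean flow magnitude)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            idx += 1
            if idx % step:          # only sample every `step`-th frame
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Histogram of flow directions, weighted by magnitude (an HOF-style descriptor).
            hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
            scores.append((idx, hist.sum() / mag.size))
            prev_gray = gray
        cap.release()
        return scores

    def pick_key_frames(scores, n=10):
        """Illustrative heuristic: frames with the lowest motion (assumed stable shots)."""
        return [i for i, _ in sorted(scores, key=lambda s: s[1])[:n]]
    ```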

    Video summarization by group scoring

    In this paper, a new model for user-centered video summarization is presented. The involvement of more than one expert in generating the final video summary is the main use case for this algorithm. The approach consists of three major steps. First, the video frames are scored by a group of operators. Next, the assigned scores are averaged to produce a single value for each frame. Lastly, the highest-scored video frames, along with the corresponding audio and textual content, are extracted and inserted into the summary. The effectiveness of this approach has been evaluated by comparing the video summaries generated by this system against the results of a number of automatic summarization tools that use different modalities for abstraction.
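    The scoring-and-selection steps lend themselves to a compact sketch. Assuming the per-frame scores from each operator are already collected as a matrix (experts × frames), the following shows the averaging and top-k extraction; the function name, data layout and fixed summary length are illustrative, and the extraction of the accompanying audio and text is omitted.

    ```python
    import numpy as np

    def group_summary_frames(expert_scores, summary_len):
        """expert_scores: 2-D array of shape (n_experts, n_frames).
        Returns the indices of the chosen frames, in temporal order."""
        mean_scores = np.asarray(expert_scores, dtype=float).mean(axis=0)  # one value per frame
        top = np.argsort(mean_scores)[::-1][:summary_len]   # highest-scored frames
        return np.sort(top)                                  # restore temporal order

    # Example: three operators scoring eight frames on a 0-10 scale.
    scores = [[3, 7, 9, 2, 5, 8, 1, 6],
              [4, 6, 9, 3, 5, 7, 2, 6],
              [2, 8, 8, 2, 6, 9, 1, 5]]
    print(group_summary_frames(scores, summary_len=3))  # [1 2 5]
    ```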

    The TRECVID 2007 BBC rushes summarization evaluation pilot

    This paper provides an overview of a pilot evaluation of video summaries using rushes from several BBC dramatic series, carried out under the auspices of TRECVID. Twenty-two research teams submitted video summaries, each at most 4% of the duration of one of 42 individual rushes video files, aimed at compressing out redundant and insignificant material. The output of two baseline systems built on straightforward content reduction techniques was contributed by Carnegie Mellon University as a control. Procedures for developing ground truth lists of important segments from each video were developed at Dublin City University and applied to the BBC video. At NIST, each summary was judged by three humans with respect to how much of the ground truth was included, how easy the summary was to understand, and how much repeated material it contained. Additional objective measures included how long it took the system to create the summary, how long it took the assessor to judge it against the ground truth, and the summary's duration. Assessor agreement on finding desired segments averaged 78%, and the results indicate that while it is difficult to exceed the performance of the baselines, a few systems did.
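    The actual TRECVID judgments were made by human assessors at NIST, but the "fraction of ground truth included" measure can be illustrated with a small sketch. The interval representation of segments and the one-second overlap threshold below are assumptions for illustration only, not the evaluation's official procedure.

    ```python
    def ground_truth_inclusion(summary_segs, truth_segs, min_overlap=1.0):
        """Fraction of ground-truth segments that a summary covers.
        Segments are (start, end) tuples in seconds; a truth segment counts as
        'included' if at least min_overlap seconds of it appear in the summary."""
        def overlap(a, b):
            return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

        included = 0
        for t in truth_segs:
            covered = sum(overlap(s, t) for s in summary_segs)
            if covered >= min_overlap:
                included += 1
        return included / len(truth_segs) if truth_segs else 0.0

    # Example: two of the three desired segments are (partially) present.
    summary = [(10.0, 14.0), (60.0, 63.0)]
    truth   = [(11.0, 13.0), (30.0, 35.0), (61.0, 62.5)]
    print(ground_truth_inclusion(summary, truth))  # 0.666...
    ```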

    Video summarisation: A conceptual framework and survey of the state of the art

    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means of surveying that literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (the outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analysing information sourced directly from the video stream), external (analysing information not sourced directly from the video stream) and hybrid (analysing a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly unobtrusively sourced user-based information, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
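    For readers who find the taxonomy easier to see as a data structure, here is a minimal sketch of the framework's dimensions as described in the abstract; the class and field names are illustrative and not taken from the paper itself.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class TechniqueType(Enum):
        INTERNAL = "analyses information sourced directly from the video stream"
        EXTERNAL = "analyses information not sourced directly from the video stream"
        HYBRID = "analyses a combination of internal and external information"

    class ContentBasis(Enum):
        OBJECT = "object"
        EVENT = "event"
        PERCEPTION = "perception"
        FEATURE = "feature"

    @dataclass
    class VideoSummary:
        derived_from: ContentBasis
        interactive: bool      # interactive vs. static consumption
        personalised: bool     # personalised vs. generic

    @dataclass
    class SummarisationTechnique:
        technique_type: TechniqueType
        produces: VideoSummary
    ```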

    Effective video summarization approach based on visual attention

    Video summarization is applied to reduce redundancy and develop a concise representation of the key frames in a video. More recently, video summaries have been produced using visual attention modeling: the frames that stand out visually are extracted as key frames based on theories of human attention. Such visual attention schemes have proven effective for video summarization, but their high computational cost restricts their usability in everyday situations. In this context, we propose a key frame extraction (KFE) method built on an efficient and accurate visual attention model. The computational effort is minimized by using dynamic visual saliency based on the temporal gradient instead of traditional optical flow techniques. In addition, an efficient technique using the discrete cosine transform is used for static visual saliency. The dynamic and static visual attention measures are merged by means of a non-linear weighted fusion technique. The results of the system are compared with existing state-of-the-art techniques to assess accuracy. The experimental results indicate that the proposed model is efficient and extracts high-quality key frames. Qatar University grant No. IRCC-2021-010.
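    A minimal sketch of the two attention cues and their fusion follows, under several assumptions: a simple frame difference stands in for the paper's temporal-gradient measure, the DCT-energy split and the tanh-based weighted fusion are illustrative choices, and all parameter values are arbitrary.

    ```python
    import cv2
    import numpy as np

    def dynamic_saliency(prev_gray, gray):
        """Motion cue: mean absolute frame difference as a crude temporal-gradient estimate."""
        return float(np.mean(cv2.absdiff(gray, prev_gray)))

    def static_saliency(gray, low_freq_ratio=0.02):
        """Static cue: share of DCT energy outside the lowest-frequency block.
        The low_freq_ratio deciding what counts as 'low frequency' is an assumption."""
        h, w = gray.shape
        gray = gray[:h - h % 2, :w - w % 2]           # cv2.dct needs even-sized arrays
        coeffs = cv2.dct(np.float32(gray) / 255.0)
        bh = max(1, int(coeffs.shape[0] * low_freq_ratio))
        bw = max(1, int(coeffs.shape[1] * low_freq_ratio))
        low = np.abs(coeffs[:bh, :bw]).sum()
        total = np.abs(coeffs).sum()
        return float((total - low) / (total + 1e-8))

    def fused_attention(dynamic, static, alpha=0.6):
        """Illustrative non-linear weighted fusion of the two cues."""
        return alpha * np.tanh(dynamic / 25.0) + (1 - alpha) * static
    ```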

    VSCAN: An Enhanced Video Summarization using Density-based Spatial Clustering

    In this paper, we present VSCAN, a novel approach for generating static video summaries. The approach is based on a modified DBSCAN clustering algorithm that summarizes the video content using both color and texture features of the video frames. The paper also introduces an enhanced evaluation method that relies on color and texture features. Video summaries generated by VSCAN are compared with summaries generated by other approaches found in the literature and with those created by users. Experimental results indicate that the video summaries generated by VSCAN have a higher quality than those generated by other approaches. Comment: arXiv admin note: substantial text overlap with arXiv:1401.3590 by other authors without attribution.
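    The general idea (cluster frames by color and texture, keep one representative per cluster) can be sketched with scikit-learn's standard DBSCAN in place of the paper's modified variant. The HSV color histogram, the Sobel edge-orientation texture descriptor and the eps/min_samples values below are assumptions, not VSCAN's actual features or parameters.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    def frame_descriptor(frame):
        """Color (HSV histogram) + texture (edge-orientation histogram) features."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        color = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).flatten()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag, ang = cv2.cartToPolar(gx, gy)
        texture, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
        feat = np.concatenate([color, texture]).astype(float)
        return feat / (np.linalg.norm(feat) + 1e-8)

    def summarize(frames, eps=0.3, min_samples=3):
        """Cluster frame descriptors and keep the frame closest to each cluster centroid."""
        X = np.array([frame_descriptor(f) for f in frames])
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        key_frames = []
        for lab in sorted(set(labels) - {-1}):        # -1 marks noise frames
            members = np.where(labels == lab)[0]
            centroid = X[members].mean(axis=0)
            best = members[np.argmin(np.linalg.norm(X[members] - centroid, axis=1))]
            key_frames.append(int(best))
        return sorted(key_frames)
    ```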

    Personalized video summarization based on group scoring

    In this paper, an expert-based model for the generation of personalized video summaries is proposed. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores of the video segments assigned the highest priorities by end users are upgraded. Taking the required summary length into account, the highest-scored video frames are then inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have been compared against the results of a number of automatic and semi-automatic summarization tools that use different modalities for abstraction.
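    A minimal sketch of the personalization step, under assumed representations: expert scores as a matrix, user priorities as frame ranges, and a simple multiplicative boost. None of these choices are taken from the paper.

    ```python
    import numpy as np

    def personalized_summary(expert_scores, user_segments, summary_len, boost=1.5):
        """expert_scores: (n_experts, n_frames) array of annotations.
        user_segments: list of (start, end) frame ranges prioritized by the end user.
        Returns summary_len frame indices in temporal order."""
        scores = np.asarray(expert_scores, dtype=float).mean(axis=0)
        for start, end in user_segments:                 # upgrade user-preferred segments
            scores[start:end] *= boost
        chosen = np.argsort(scores)[::-1][:summary_len]  # respect the requested length
        return np.sort(chosen)

    # Example: the user prioritizes frames 4-6, pulling frame 4 into the summary.
    experts = [[5, 2, 8, 1, 4, 4, 3, 9],
               [6, 3, 7, 2, 5, 4, 2, 8]]
    print(personalized_summary(experts, user_segments=[(4, 7)], summary_len=3))  # [2 4 7]
    ```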
