    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) involving video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption, together with the inherent characteristics of video data, which in raw form are unwieldy and costly to handle, has become a driving force for the development of more effective solutions for presenting video content and enabling rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare the solutions with one another.

    Future-Viewer: An Efficient Framework for Navigating and Classifying Audio-Visual Documents

    In this paper we present an intuitive framework named Future-Viewer for the effective visualization of spatiotemporal low-level features, in the context of browsing and retrieval of a multimedia document. The tool facilitates access to the content and improves understanding of the semantics associated with the multimedia document under consideration. The main visualization paradigm represents a 2D feature space in which the shots of the video document are located. The features that characterize the axes of the 2D space can be selected by the user. Shots with similar content fall near each other, and the tool offers various functionalities for automatically finding and annotating shot clusters in the feature space. These annotations can also be stored in MPEG-7 format. Using the application to browse the content of a few audio-video sequences demonstrates very interesting capabilities.
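
    A minimal sketch (not the authors' implementation) of the core idea: each shot carries low-level feature values, the user picks two features as axes, and clusters of similar shots are found automatically in the resulting plane. The names, data layout, and choice of k-means are illustrative assumptions.

```python
# Sketch of the Future-Viewer paradigm: place shots in a user-selected
# 2D feature space and group similar shots into clusters.
from dataclasses import dataclass
import numpy as np
from sklearn.cluster import KMeans

@dataclass
class Shot:
    index: int
    features: dict  # e.g. {"motion_activity": 0.7, "color_variance": 0.3}

def place_and_cluster(shots, x_axis, y_axis, n_clusters=3):
    """Map shots onto the user-selected 2D feature space and cluster them."""
    points = np.array([[s.features[x_axis], s.features[y_axis]] for s in shots])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(points)
    return points, labels  # points drive the scatter plot; labels mark clusters
```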

    Associating characters with events in films

    The work presented here combines the analysis of a film's audiovisual features with the analysis of an accompanying audio description. Specifically, we describe a technique for semantic-based indexing of feature films that associates character names with meaningful events. The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script. In an evaluation with 215 events from 11 films, the technique performed the character detection task with a precision of 93% and a recall of 71%. We then go on to show how novel access modes to film content are enabled by our analysis. The specific examples illustrated include video retrieval via a combination of event type and character name, and our first steps toward visualization of narrative and character interplay based on character occurrence and co-occurrence in events.
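
    The fusion step lends itself to a simple illustration. The sketch below is hypothetical, not the authors' code: it associates each detected event with every character whose inferred on-screen interval overlaps the event's time span. The tuple layouts and function names are assumptions.

```python
# Associate character names with detected events by temporal overlap.
def overlaps(a_start, a_end, b_start, b_end):
    """True if the two half-open intervals intersect."""
    return a_start < b_end and b_start < a_end

def associate_characters(events, presences):
    """events: list of (start, end, event_type);
    presences: list of (start, end, character_name) inferred from the
    audio description script."""
    associations = []
    for e_start, e_end, e_type in events:
        names = {name for p_start, p_end, name in presences
                 if overlaps(e_start, e_end, p_start, p_end)}
        associations.append((e_type, (e_start, e_end), sorted(names)))
    return associations
```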

    Interactive visualization of video content and associated description for semantic annotation

    In this paper, we present an intuitive graphic framework for the effective visualization of video content and its associated audio-visual description, with the aim of facilitating a quick understanding and annotation of the semantic content of a video sequence. The basic idea is the visualization of a 2D feature space in which the shots of the considered video sequence are located. Moreover, the temporal position and the specific content of each shot can be displayed and analysed in more detail. The features are selected by the user and can be updated during the navigation session. In the main window, shots of the considered video sequence are displayed in a Cartesian plane, and the proposed environment offers various functionalities for automatically and semi-automatically finding and annotating the shot clusters in this feature space. With this tool the user can therefore explore graphically how the basic segments of a video sequence are distributed in the feature space, and can recognize and annotate the significant clusters and their structure. The experimental results show that browsing and annotating documents with the aid of the proposed visualization paradigms is easy and quick, since the user has fast and intuitive access to the audio-video content, even without having seen the document before.
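
    As a rough illustration of the semi-automatic annotation step, the sketch below (an assumption, not the paper's implementation) propagates a user-chosen semantic label from each cluster to every shot assigned to it.

```python
# Propagate per-cluster semantic labels to individual shots.
def annotate_clusters(shot_indices, cluster_ids, user_labels):
    """shot_indices: list of shot ids; cluster_ids: cluster id per shot;
    user_labels: dict mapping cluster id -> label chosen by the user."""
    return {shot: user_labels.get(cluster, "unlabeled")
            for shot, cluster in zip(shot_indices, cluster_ids)}

# Example: clusters 0 and 1 were labeled after a quick visual check.
annotations = annotate_clusters([0, 1, 2, 3], [0, 0, 1, 1],
                                {0: "dialogue", 1: "crowd scene"})
```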

    Casual Information Visualization on Exploring Spatiotemporal Data

    The goal of this thesis is to study how the diverse data on the Web that are familiar to everyone can be visualized, with special consideration of their spatial and temporal information. We introduce novel approaches and visualization techniques for different types of data content: interactively browsing large amounts of tags linked with geospace and time, navigating and locating spatiotemporal photos or videos in collections, and, especially, providing visual support for the exploration of diverse Web content on arbitrary webpages by means of augmented Web browsing.
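
    For the photo and video navigation part, the underlying query can be sketched minimally as filtering a collection by a spatial bounding box and a time window; the data layout below is an illustrative assumption, not the thesis's code.

```python
# Locate spatiotemporal items: keep those inside a bounding box
# and a time window.
from datetime import datetime

def locate(photos, lat_range, lon_range, time_range):
    """photos: list of dicts with 'lat', 'lon', 'time' (datetime) keys;
    each *_range is an inclusive (low, high) pair."""
    return [p for p in photos
            if lat_range[0] <= p["lat"] <= lat_range[1]
            and lon_range[0] <= p["lon"] <= lon_range[1]
            and time_range[0] <= p["time"] <= time_range[1]]
```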

    An Overview of Video Shot Clustering and Summarization Techniques for Mobile Applications

    The problem of content characterization of video programmes is of great interest because video appeals to large audiences, and its efficient distribution over various networks should contribute to the widespread use of multimedia services. In this paper we analyze several techniques proposed in the literature for content characterization of video programmes, including movies and sports, that could be helpful for mobile media consumption. In particular, we focus our analysis on shot clustering methods and effective video summarization techniques since, in the current video analysis scenario, they facilitate access to the content and help in quickly understanding the associated semantics. First we consider shot clustering techniques based on low-level features, using visual, audio and motion information, possibly combined in a multi-modal fashion. Then we concentrate on summarization techniques, such as static storyboards, dynamic video skimming and the extraction of sports highlights. The summarization methods discussed can be employed in the development of tools that would be greatly useful to most mobile users: these algorithms automatically shorten the original video while retaining only the most important content. The effectiveness of each approach is analyzed, showing that it depends mainly on the kind of video programme it relates to and the type of summary or highlights of interest.
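
    One of the surveyed strategies, the static storyboard, can be sketched compactly: cluster per-frame descriptors and keep, for each cluster, the frame nearest the centroid. This is a generic illustration under assumed inputs, not a method from any specific surveyed paper.

```python
# Static storyboard: one representative keyframe per feature cluster.
import numpy as np
from sklearn.cluster import KMeans

def storyboard(frame_features, n_keyframes=6):
    """frame_features: (n_frames, n_dims) array of low-level descriptors."""
    km = KMeans(n_clusters=n_keyframes, n_init=10).fit(frame_features)
    keyframes = []
    for c in range(n_keyframes):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(frame_features[members] - km.cluster_centers_[c],
                               axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)  # frame indices, restored to temporal order
```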

    Study on Scientific outputs of Scholars in the Field of Digital Libraries Using Altmetrics Indicators

    The current study aims to measure the relationship between the altmetric scores obtained from the viewing and dissemination of digital library resources in the Dimensions database and the number of citations received in the Scopus database. In addition, the study examines how well altmetric scores predict the number of Scopus citations. The research is applied in purpose and descriptive-survey in type, carried out with a scientometric method and an altmetric approach. The statistical population includes all articles in the field of digital libraries (24,183 records) indexed in the Scopus citation database during 1960-2020. The Dimensions database was used to evaluate the altmetric scores these articles obtained on social networks. Because of limited access to the required data in Scopus, the 2,000 most highly cited articles in the field were studied through the Dimensions database. Data were collected through the Scopus and Dimensions databases. The indicators drawn from Dimensions serve as the independent variables of the research; the dependent variable is the number of citations to articles in Scopus. Correlation tests and multiple regression between the studied indicators were used to examine the relationships between variables. The software used was Excel and SPSS version 23. The results show that Patent, Facebook, Wikipedia, and Twitter mentions have the highest correlation with the number of citations in the Dimensions database, while Blog, Google User, and Q&A mentions do not correlate significantly with the citations received in Dimensions. Patent, Wikipedia, and Twitter mentions have the highest correlation with the number of Scopus citations, while Blog, Google User, Pulse Source and Q&A mentions do not correlate significantly with the citations received. Among the citation sources studied, Mendeley shows the highest correlation with the number of citations. Further results indicate that the publication and viewing of documents on social networks cannot predict the number of citations in the Dimensions and Scopus databases.
    https://dorl.net/dor/20.1001.1.20088302.2022.20.4.10.
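
    The statistical procedure (run in SPSS and Excel in the study) can be re-created schematically. The sketch below shows the two analyses, pairwise Spearman correlation and multiple regression, with illustrative column names and a hypothetical data export; it is not the study's actual script.

```python
# Schematic re-creation of the correlation and regression analyses.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("altmetrics.csv")  # hypothetical export of the 2,000 articles
indicators = ["twitter", "facebook", "wikipedia", "patent", "mendeley"]

# Spearman correlation of each indicator with Scopus citations.
print(df[indicators + ["scopus_citations"]].corr(method="spearman"))

# Multiple regression: citations ~ altmetric indicators.
X = sm.add_constant(df[indicators])
print(sm.OLS(df["scopus_citations"], X).fit().summary())
```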

    Personalized video summarization by highest quality frames

    In this work, a user-centered approach is the basis for generating personalized video summaries. First, video experts score and annotate the video frames during an enrichment phase. The frame scores for the different video segments are then updated based on the captured priorities of end-users (distinct from the video experts) toward the existing video scenes. Finally, given a pre-defined skimming time, the highest-scored video frames are extracted and included in the personalized video summary. To evaluate the effectiveness of the proposed model, we compared the video summaries generated by our system against the results of four other summarization tools using different modalities.
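
    The final selection step follows directly from the description: given per-frame scores (expert scores re-weighted by end-user priorities) and a skimming time, keep the highest-scored frames that fit the budget and restore temporal order. This is a minimal sketch; function and parameter names are assumptions.

```python
# Select the highest-scored frames that fit the skimming-time budget.
def summarize(frame_scores, fps, skim_seconds):
    """frame_scores: list of (frame_index, score); fps: frames per second."""
    budget = int(skim_seconds * fps)  # how many frames the skim may contain
    top = sorted(frame_scores, key=lambda fs: fs[1], reverse=True)[:budget]
    return sorted(idx for idx, _ in top)  # restore temporal order
```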