11 research outputs found

    Multimedia content modeling and personalization


    An Overview of Video Shot Clustering and Summarization Techniques for Mobile Applications

    The problem of content characterization of video programmes is of great interest because video appeals to large audiences, and its efficient distribution over various networks should contribute to widespread usage of multimedia services. In this paper we analyze several techniques proposed in the literature for content characterization of video programmes, including movies and sports, that could be helpful for mobile media consumption. In particular, we focus our analysis on shot clustering methods and effective video summarization techniques since, in the current video analysis scenario, they facilitate access to the content and aid quick understanding of the associated semantics. First we consider shot clustering techniques based on low-level features, using visual, audio and motion information, possibly combined in a multi-modal fashion. Then we concentrate on summarization techniques such as static storyboards, dynamic video skimming and the extraction of sport highlights. The summarization methods discussed can be employed in the development of tools that would be greatly useful to mobile users: these algorithms automatically shorten the original video while preserving the most important events and content. The effectiveness of each approach is analyzed, showing that it mainly depends on the kind of video programme and on the type of summary or highlights of interest.
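
    As a toy illustration of shot clustering on low-level features, the sketch below groups shots whose colour-histogram intersection exceeds a threshold. The greedy single-pass strategy, the 4-bin histograms and the threshold value are all invented simplifications, not taken from any specific surveyed method.

    ```python
    # Minimal sketch of low-level shot clustering. Real systems would
    # extract colour histograms from decoded frames; here they are toy data.

    def hist_intersection(h1, h2):
        """Similarity in [0, 1] between two normalised histograms."""
        return sum(min(a, b) for a, b in zip(h1, h2))

    def cluster_shots(histograms, threshold=0.7):
        """Greedy single-pass clustering: each shot joins the first cluster
        whose representative histogram is similar enough, else starts a
        new cluster."""
        clusters = []  # list of (representative_histogram, [shot indices])
        for idx, h in enumerate(histograms):
            for rep, members in clusters:
                if hist_intersection(rep, h) >= threshold:
                    members.append(idx)
                    break
            else:
                clusters.append((h, [idx]))
        return [members for _, members in clusters]

    # Toy 4-bin histograms for five shots: shots 0, 1, 4 are similar,
    # shots 2 and 3 come from a visually different scene.
    shots = [
        [0.70, 0.10, 0.10, 0.10],
        [0.65, 0.15, 0.10, 0.10],
        [0.10, 0.10, 0.10, 0.70],
        [0.10, 0.10, 0.15, 0.65],
        [0.70, 0.10, 0.10, 0.10],
    ]
    print(cluster_shots(shots))  # → [[0, 1, 4], [2, 3]]
    ```

    A multi-modal variant would simply extend the feature vector with audio and motion descriptors before computing the similarity.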

    Information-theoretic content selection for automated home video editing

    In automated home video editing, selecting the most informative content from redundant footage is challenging. This paper proposes an information-theoretic approach to content selection that explores the dependence relations between who (characters) and where (scenes) in the video. First, the footage is segmented into basic units, each covering the same characters at the same scene. To compactly represent the dependence relations between scenes and characters, a contingency table is used to model their co-occurrence statistics. Modeling which characters appear at which scenes as two random variables, an optimal selection criterion based on joint entropy is proposed. To improve computational efficiency, a pruned N-Best heuristic algorithm is presented to search for the most informative video units. Experimental results demonstrate that the proposed approach is flexible and effective for automated content selection.
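
    The selection criterion can be illustrated with a small sketch: joint entropy is computed over a (character, scene) contingency table, and the subset of units that maximises it is chosen. The exhaustive search below is a stand-in for the paper's pruned N-Best heuristic, and the footage units are hypothetical.

    ```python
    import math
    from itertools import combinations

    def joint_entropy(counts):
        """Joint entropy H(C, S) of a character-by-scene contingency table
        given as a dict {(character, scene): count}."""
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values() if c > 0)

    def select_units(units, k):
        """Exhaustively pick the k units whose co-occurrence table has
        maximal joint entropy (the paper uses a pruned N-Best search
        instead of brute force)."""
        best, best_h = None, -1.0
        for subset in combinations(units, k):
            table = {}
            for who, where in subset:
                table[(who, where)] = table.get((who, where), 0) + 1
            h = joint_entropy(table)
            if h > best_h:
                best, best_h = subset, h
        return best, best_h

    # Hypothetical footage units as (character, scene) pairs.
    units = [("mom", "kitchen"), ("mom", "kitchen"), ("dad", "garden"),
             ("kid", "garden"), ("kid", "kitchen")]
    chosen, h = select_units(units, 3)
    print(chosen, round(h, 3))  # three distinct pairs, H = log2(3) ≈ 1.585
    ```

    Maximising joint entropy favours subsets that cover as many distinct character-scene combinations as possible, which is exactly the intuition behind discarding redundant footage.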

    A Literature Review on the Development of Multimedia Information Retrieval (MIR) and the Future Challenges

    Abstract: Multimedia information retrieval (MIR) is the process of searching for and retrieving information (information retrieval, IR) in multimedia content such as audio, images, video, and animation. This study uses a literature-review method to survey the current state of MIR and the challenges that researchers in the field of IR will face in the future. Current MIR research includes human-centred computation for information retrieval, enabling machines to learn (semantics), enabling machines to request corrections (feedback), the addition of new features or factors, research on new media, summarisation of information from multimedia content, high-performance indexing, and evaluation techniques. Looking ahead, promising directions for MIR research include the continued central role of humans in information retrieval, more diverse collaborative multimedia content, and the use of simple keywords (folksonomies). Keywords: multimedia information retrieval, multimedia, computation, semantics, information search

    Video summarisation: A conceptual framework and survey of the state of the art

    This is the post-print (final draft post-refereeing) version of the article. Copyright @ 2007 Elsevier Inc. Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user-based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    A Utility Framework for the Automatic Generation of Audio-Visual Skims

    In this paper, we present a novel algorithm for generating audio-visual skims from computable scenes. Skims are useful for browsing digital libraries and for on-demand summaries in set-top boxes. A computable scene is a chunk of data that exhibits consistencies with respect to chromaticity, lighting and sound. There are three key aspects to our approach: (a) visual complexity and grammar, (b) robust audio segmentation and (c) a utility model for skim generation. We define a measure of the visual complexity of a shot and map complexity to the minimum time needed to comprehend the shot. Then, we analyze the underlying visual grammar, since it makes the shot sequence meaningful. We segment the audio data into four classes and then detect significant phrases in the speech segments. The utility functions are defined in terms of the complexity and duration of each segment. The target skim is created using a general constrained utility maximization procedure that maximizes the information content and the coherence of the resulting skim. The objective function is constrained by multimedia synchronization constraints, visual syntax and penalty functions on audio and video segments. The user study results indicate that the optimal skims show statistically significant differences from other skims at compression rates of up to 90%.
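
    A minimal sketch of the skim-generation idea, with a greedy utility-density heuristic standing in for the paper's constrained utility maximisation; the segment identifiers, durations and utility scores below are invented, and real skims would additionally respect the comprehension-time lower bound derived from visual complexity.

    ```python
    def make_skim(segments, budget):
        """Greedy stand-in for constrained utility maximisation: pick
        segments in order of utility per second until the target skim
        duration (budget, in seconds) would be exceeded.
        Each segment is a (id, duration_seconds, utility) tuple."""
        ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
        skim, used = [], 0.0
        for seg_id, dur, util in ranked:
            if used + dur <= budget:
                skim.append(seg_id)
                used += dur
        return sorted(skim), used

    # Hypothetical segments: (id, duration in seconds, utility score).
    segments = [("s1", 10, 5.0), ("s2", 4, 4.0),
                ("s3", 8, 2.0), ("s4", 6, 5.4)]
    print(make_skim(segments, budget=12))  # → (['s2', 's4'], 10.0)
    ```

    The paper's actual optimisation additionally enforces synchronization constraints and visual-syntax penalties, which a greedy pass like this cannot capture.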

    Computergestützte Inhaltsanalyse von digitalen Videoarchiven (Computer-assisted content analysis of digital video archives)

    The transition from analogue to digital video has brought major changes to film archives in recent years. Digitisation in particular opens up new possibilities for the archives: wear and ageing of the film reels are ruled out, so quality is preserved unchanged, and network-based, and thus much simpler, access to the videos in the archives becomes possible. Additional services become available to archivists and users, providing extended search capabilities and easing navigation during playback. Search within video archives relies on metadata that provide further information about the videos. A large share of this metadata is entered manually by archivists, which is very time-consuming and expensive. Computer-assisted analysis of digital video makes it possible to reduce the effort of creating metadata for video archives. The first part of this dissertation presents new methods for recognising important semantic content in videos, in particular newly developed algorithms for cut detection, camera-motion analysis, object segmentation and classification, text recognition, and face recognition. The automatically derived semantic information is valuable because it eases work with digital video archives: it not only supports search in the archives but also enables new applications, which are presented in the second part of the dissertation. For example, computer-generated video summaries can be produced, or videos can be automatically adapted to the characteristics of a playback device. A further focus of this dissertation is the analysis of historical films. 
    Four European film archives provided a large number of historical video documentaries, shot in the early to mid twentieth century and digitised in recent years. Owing to decades of storage and wear of the film reels, many of these videos are heavily noisy and contain clearly visible image defects. The image quality of the historical black-and-white films differs significantly from that of current videos, so reliable analysis with existing methods is often impossible. This dissertation therefore presents new algorithms that enable reliable recognition of semantic content in historical videos as well.
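
    One of the basic building blocks mentioned in the abstract, cut detection, can be sketched with a simple histogram-difference test. The fixed threshold and toy 3-bin histograms below are invented; noisy archive footage is precisely where such a naive test fails, which motivates the dissertation's more robust algorithms.

    ```python
    def detect_cuts(frame_histograms, threshold=0.5):
        """Declare a shot cut wherever the normalised L1 distance between
        consecutive frame histograms exceeds a fixed threshold. Historical
        footage would need temporal smoothing and adaptive thresholds."""
        cuts = []
        for i in range(1, len(frame_histograms)):
            d = sum(abs(a - b) for a, b in
                    zip(frame_histograms[i - 1], frame_histograms[i])) / 2.0
            if d > threshold:
                cuts.append(i)  # cut between frame i-1 and frame i
        return cuts

    # Toy 3-bin frame histograms with a hard cut between frames 2 and 3.
    frames = [[0.8, 0.1, 0.1]] * 3 + [[0.1, 0.1, 0.8]] * 3
    print(detect_cuts(frames))  # → [3]
    ```

    Dividing the L1 distance by two keeps it in [0, 1] for normalised histograms, so the threshold has a scale-free interpretation.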