13 research outputs found

    Scene extraction in motion pictures

    Full text link
    This paper addresses the challenge of bridging the semantic gap between the rich meaning users desire when they query to locate and browse media and the shallowness of the media descriptions that can be computed in today's content management systems. To facilitate high-level semantics-based content annotation and interpretation, we tackle the problem of automatically decomposing motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs. We then investigate different rules and conventions followed as part of Film Grammar that guide and shape an algorithmic solution for determining a scene. Two different techniques using intershot analysis are proposed as solutions in this paper. In addition, we present different refinement mechanisms, such as film-punctuation detection founded on Film Grammar, to further improve the results. These refinement techniques demonstrate significant improvements in overall performance. Furthermore, we analyze errors in the context of film-production techniques, which offers useful insights into the limitations of our method.
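
    The abstract does not spell out the two intershot-analysis techniques; as a rough illustration of the general idea, the sketch below groups shots into scenes by colour-histogram coherence with recent neighbours. The window size and threshold are hypothetical, not values from the paper.

```python
# Illustrative sketch of intershot analysis for scene-boundary detection:
# a shot dissimilar to all of its recent neighbours starts a new scene.
# Not the paper's algorithm; window and threshold are assumptions.
import numpy as np

def shot_histogram(frames, bins=16):
    """Average, normalised per-channel colour histogram over a shot's frames."""
    hists = []
    for frame in frames:  # frame: HxWx3 uint8 array
        h = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
        hists.append(np.concatenate(h).astype(float))
    avg = np.mean(hists, axis=0)
    return avg / avg.sum()

def scene_boundaries(shot_hists, window=3, threshold=0.35):
    """Indices of shots that open a new scene."""
    boundaries = []
    for i in range(1, len(shot_hists)):
        past = shot_hists[max(0, i - window):i]
        # Histogram intersection with the most similar recent shot (in [0, 1]).
        best = max(np.minimum(shot_hists[i], p).sum() for p in past)
        if best < threshold:
            boundaries.append(i)
    return boundaries
```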

    On the extraction of thematic and dramatic functions of content in educational videos

    Full text link
    In this paper, we propose novel computational models for the extraction of high-level expressive constructs, namely the thematic and dramatic functions of the content shown in educational and training videos. Drawing on existing knowledge of film theory and the media-production rules and conventions used by filmmakers, we hypothesize the key aesthetic elements that contribute to conveying these functions of the content. Computational models to extract them are then formulated, and their performance is evaluated on a set of ten educational and training videos.

    Neighborhood coherence and edge based approaches to film scene extraction

    Full text link
    In order to enable high-level semantics-based video annotation and interpretation, we tackle the problem of automatic decomposition of motion pictures into meaningful story units, namely scenes. Since a scene is a complicated and subjective concept, we first propose guidelines from film production to determine when a scene change occurs in film. We examine different rules and conventions followed as part of Film Grammar to guide and shape our algorithmic solution for determining a scene boundary. Two different techniques are proposed as new solutions in this paper. Our experimental results on 10 full-length movies show that our technique based on shot sequence coherence performs well and is moderately better than the color-edge-based approach.
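
    The abstract gives no formulas for the edge-based baseline it mentions; the sketch below shows one plausible colour-edge style dissimilarity between shot keyframes. The gradient threshold is an assumption, and this is a stand-in rather than the paper's exact method.

```python
# Illustrative sketch of an edge-based shot comparison (not the paper's
# exact method): two keyframes are dissimilar when their edge maps differ.
import numpy as np

def edge_map(gray, thresh=30.0):
    """Binary edge map from gradient magnitude (gray: HxW float array)."""
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy) > thresh  # hypothetical threshold

def edge_dissimilarity(gray_a, gray_b):
    """Fraction of edge pixels not shared by two shot keyframes."""
    ea, eb = edge_map(gray_a), edge_map(gray_b)
    union = np.logical_or(ea, eb).sum()
    return np.logical_xor(ea, eb).sum() / union if union else 0.0
```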

    Horror film genre typing and scene labeling via audio analysis

    Full text link
    We examine localised sound energy patterns, or events, that we associate with high-level affect experienced with films. The study of sound energy events in conjunction with their intended affect enables the analysis of film at a higher conceptual level, such as genre. The various affective/emotional responses we investigate in this paper are brought about by well-established patterns of sound energy dynamics employed in the audio tracks of horror films. This allows the examination of the thematic content of the films in relation to horror elements. We analyse the frequency of sound energy and affect events at the film level as well as at the scene level, and propose measures indicative of film genre and scene content. Using 4 horror and 2 non-horror movies as experimental data, we establish a correlation between the sound energy event types and horrific thematic content within film, thus enabling an automated mechanism for genre typing and scene content labeling in film.
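
    The abstract does not define a sound-energy event precisely; one simple reading, sketched below, is a short-term energy spike well above the track's average. The window length and spike factor are hypothetical, not the paper's values.

```python
# Illustrative sketch of sound-energy event detection (not the paper's
# definition): flag windows whose RMS energy spikes far above the mean.
import numpy as np

def short_term_energy(samples, sr, win_s=0.05):
    """RMS energy per non-overlapping window of a mono audio signal."""
    win = int(sr * win_s)
    n = len(samples) // win
    frames = samples[:n * win].reshape(n, win).astype(float)
    return np.sqrt((frames ** 2).mean(axis=1))

def energy_events(energy, k=3.0):
    """Window indices where energy exceeds the mean by k standard deviations."""
    return np.flatnonzero(energy > energy.mean() + k * energy.std())
```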

    A Rule-Based Video Annotation System

    Full text link

    Generation of musical patterns using video features

    Get PDF
    With the growing interest in social media applications, mobile phones have also seen a dramatic improvement in the quality of their cameras. This has caused a surge in the number of videos made by ordinary users, who are now capable of capturing any scene anywhere. Such videos often lack an accompanying background music track. A simple solution is to attach an existing track that suits the video particularly well, yet it is also possible to create a completely new one. Research has thus far focused on recommending appropriate tracks for a given video, whereas the concept of automatic music generation is less studied. In either case, the addition of a new music track must rely exclusively on the features of the original video. In this study, a novel approach is used to extract different video features and generate new music from them. A desktop application has been designed for this purpose, containing a complete pipeline from importing the video to outputting the final video complemented with new music. To analyze the music quality, a user survey was conducted with roughly 100 participants. The survey contained several distinct videos, each presented in multiple variations with different musical settings. It revealed that most samples of the newly generated music had enough potential to accompany the video and make it more interesting and meaningful. The results suggest that a more detailed user survey is needed to identify the precise features listeners find appealing, with less variation in musical tempo but more in the instruments applied.
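
    The study's actual feature set and generation pipeline are not detailed in the abstract; the sketch below shows one plausible mapping from two simple video features (brightness and motion) to two musical parameters (tempo and a root note). All names, ranges, and constants here are assumptions.

```python
# Illustrative sketch of a video-feature-to-music mapping (not the
# study's pipeline): brighter clips get higher pitches, busier clips
# get faster tempos. All constants are arbitrary assumptions.
import numpy as np

def frame_features(frames):
    """Mean brightness and mean inter-frame motion for a clip."""
    gray = [f.mean(axis=2) for f in frames]  # HxWx3 uint8 -> HxW
    brightness = np.mean([g.mean() for g in gray])
    motion = np.mean([np.abs(b - a).mean() for a, b in zip(gray, gray[1:])])
    return brightness, motion

def music_parameters(brightness, motion):
    """Map features to a tempo in BPM and a MIDI root note."""
    tempo = 60 + 120 * min(motion / 30.0, 1.0)  # more motion -> faster
    root = 48 + int(24 * brightness / 255.0)    # brighter -> higher
    return round(tempo), root
```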

    Automatic indexing of video content via the detection of semantic events

    Get PDF
    The number and size of digital video databases are continuously growing. Unfortunately, most, if not all, of the video content in these databases is stored without any sort of indexing or analysis and without any associated metadata. If any of the videos do have metadata, it is usually the result of some manual annotation process rather than any automatic indexing. Thus, locating clips and browsing content is difficult, time consuming and generally inefficient. The task of automatically indexing movies is particularly difficult given their innovative creation process and the individual style of many filmmakers. However, there are a number of underlying film grammar conventions that are universally followed, from a Hollywood blockbuster to an underground movie with a limited budget. These conventions dictate many elements of film making such as camera placement and editing. By examining the use of these conventions it is possible to extract information about the events in a movie. This research aims to provide an approach that creates an indexed version of a movie to facilitate ease of browsing and efficient retrieval. In order to achieve this aim, all of the relevant events contained within a movie are detected and classified into a predefined index. The event detection process involves examining the underlying structure of a movie and utilising audiovisual analysis techniques, supported by machine learning algorithms, to extract information based on this structure. The result is an indexed movie that can be presented to users for browsing/retrieval of relevant events, as well as supporting user-specified searching. Extensive evaluation of the indexing approach is carried out. This evaluation indicates efficient performance of the event detection and retrieval system, and also highlights the subjective nature of video content.
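
    The thesis pairs audiovisual analysis with machine learning, but the abstract names no specific models; a generic shot-level event classifier of the kind such a system might use is sketched below. The event index, feature set, and choice of an SVM are all assumptions.

```python
# Illustrative sketch of shot-level event classification (not the
# thesis's system): an SVM labels each shot with an event class from
# a predefined index, given per-shot audiovisual feature vectors.
import numpy as np
from sklearn.svm import SVC

EVENT_INDEX = ["dialogue", "exciting", "montage"]  # hypothetical classes

def classify_shots(train_feats, train_labels, test_feats):
    """Train on labelled shots (labels are indices into EVENT_INDEX),
    then label unseen shots. A feature vector might combine, e.g.,
    shot length, motion intensity and audio energy."""
    clf = SVC(kernel="rbf")
    clf.fit(np.asarray(train_feats), np.asarray(train_labels))
    return [EVENT_INDEX[i] for i in clf.predict(np.asarray(test_feats))]
```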

    Motion and emotion: Semantic knowledge for Hollywood film indexing

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)

    Multimodal extraction of the narrative structure of TV series episodes

    Get PDF
    Our contributions concern the extraction of the narrative structure of TV series episodes at two hierarchical levels. The first level of structuring is to find scene transitions based on an analysis of colour information and of the speakers involved in the scenes. We show that speaker analysis improves the result of a colour-based segmentation into scenes. It is common to see several stories (or lines of action) told in parallel in a single TV series episode. Thus, the second level of structuring is to cluster scenes into stories. We seek to deinterlace the stories in order to visualize the different lines of action independently. The main difficulty is to determine the most relevant descriptors for grouping scenes belonging to the same story. At this level, we explore the use of descriptors from the three different modalities described above, and we propose methods to fuse the information coming from these three modalities. To address the variability of the narrative structure of TV series episodes, we propose a method that adapts to each episode by automatically selecting the most relevant clustering method among the various methods we propose. Finally, we developed StoViz, a tool for visualizing the structure of a TV series episode (scenes and stories). It facilitates navigation within an episode by revealing the different stories told in parallel. It also allows playback of an episode story by story, and the viewing of a short summary of the episode that gives an overview of each story told in it.
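
    The thesis fuses several modalities when clustering scenes into stories; as a simplified single-modality illustration, the sketch below groups scenes greedily by speaker overlap. The similarity measure and threshold are assumptions, not the thesis's method.

```python
# Illustrative sketch of clustering scenes into stories by speaker
# overlap (one modality only; the thesis fuses several). A scene joins
# the story whose accumulated speaker set it overlaps most, or starts
# a new story when the best overlap falls below a threshold.
def story_clusters(scene_speakers, min_overlap=0.5):
    """scene_speakers: one set of speaker identities per scene.
    Returns a story index for every scene."""
    stories, labels = [], []
    for speakers in scene_speakers:
        best, best_sim = None, 0.0
        for idx, story in enumerate(stories):
            union = speakers | story
            sim = len(speakers & story) / len(union) if union else 0.0
            if sim > best_sim:
                best, best_sim = idx, sim
        if best is None or best_sim < min_overlap:  # hypothetical threshold
            stories.append(set(speakers))
            labels.append(len(stories) - 1)
        else:
            stories[best] |= speakers
            labels.append(best)
    return labels
```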