
    How Do We Evaluate the Quality of Computational Editing Systems?

    One problem common to all researchers in the field of virtual cinematography and editing is assessing the quality of the output of their systems. There is a pressing need for appropriate evaluations of proposed models and techniques. Indeed, although papers are often accompanied by example videos, showing subjective results and occasionally providing qualitative comparisons with other methods or with human-created movies, they generally lack an extensive evaluation. The goal of this paper is to survey evaluation methodologies that have been used in the past and to review a range of other interesting methodologies, as well as a number of questions related to how we could better evaluate and compare future systems.

    Simulation of Past Life: Controlling Agent Behaviors from the Interactions between Ethnic Groups

    Many efforts have been made to preserve the history and culture of Penang and other regions of Malaysia since George Town was designated a UNESCO living heritage city. This paper presents a method to simulate life in a local trading port in the 1800s, where various populations with very different social rules interacted with each other. These populations included Indian coolies, Malay vendors, British colonists and Chinese traders. The challenge is to model these ethnic groups as autonomous agents, and to capture the changes of behavior due to inter-ethnic interactions and to the arrival of boats at the pier. Agents from each population are equipped with a specific set of steering methods which are selected and parameterized according to predefined behavioral patterns (graphs of states). In this paper, we propose a new formalism where interactions between the different ethnic groups and with the boats can be activated either globally or locally. Global interactions cause changes of states for all the agents belonging to the target population, while local interactions only take place between specific agents, and result in changes of states for these agents only. The main contributions of our method are: i) applying microscopic crowd simulation to the complex case of a multi-ethnic trading port, involving different behavioral patterns; ii) introducing a high-level control method, through the inter-ethnic interactions formalism. The resulting system generates a variety of real-time animations, all reflecting the appropriate social behaviors. Such a system would be particularly useful in a virtual tour application.
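
    The global/local interaction formalism described above can be sketched as follows; a minimal illustration in Python, where the class names, state labels and group labels are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """An autonomous agent following a behavioral pattern (graph of states)."""
    ethnic_group: str      # e.g. "Indian coolie", "Malay vendor" (labels are illustrative)
    state: str = "idle"    # current node in the agent's state graph

class Simulation:
    def __init__(self, agents):
        self.agents = agents

    def global_interaction(self, target_group, new_state):
        """Global interaction: switch the state of every agent in the target population,
        e.g. the arrival of a boat sending all coolies to the pier."""
        for agent in self.agents:
            if agent.ethnic_group == target_group:
                agent.state = new_state

    def local_interaction(self, agent_a, agent_b, state_a, state_b):
        """Local interaction: only the two agents involved change state,
        e.g. a vendor and a trader starting to bargain."""
        agent_a.state, agent_b.state = state_a, state_b

# Illustrative use: a boat arrives and all coolies switch to an 'unload' state.
crowd = [Agent("Indian coolie"), Agent("Indian coolie"), Agent("Malay vendor")]
sim = Simulation(crowd)
sim.global_interaction("Indian coolie", "unload")
```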

    Joint Attention for Automated Video Editing

    Joint attention refers to the shared focal points of attention for occupants in a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings in multi-camera environments from the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based edit that selects cameras at a fixed pacing using pose data, and (3) an editing algorithm that uses an LSTM (Long Short-Term Memory) model, trained on expert edits, to learn joint attention from both pose and audio data. The methods are evaluated qualitatively against the human edit, and quantitatively in a user study with 22 participants. Results indicate that LSTM-trained joint attention produces edits that are comparable to the expert edit, offering a wider range of camera views than the audio-based method, while being more generalizable than the rule-based method.
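
    A minimal sketch of what such an LSTM-based camera selector could look like, written in PyTorch; the feature dimensions, layer sizes, and the framing of the task as per-frame classification over the available cameras are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CameraSelector(nn.Module):
    """Sketch: map a sequence of per-frame features (head poses plus headset
    amplitudes for each participant) to a camera choice at every time step."""
    def __init__(self, feature_dim, num_cameras, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_cameras)

    def forward(self, features):          # features: (batch, time, feature_dim)
        hidden, _ = self.lstm(features)   # (batch, time, hidden_dim)
        return self.head(hidden)          # logits over cameras at each time step

# Illustrative training step on expert edits (camera index per frame).
model = CameraSelector(feature_dim=16, num_cameras=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 100, 16)             # dummy batch: 8 clips, 100 frames
expert_cameras = torch.randint(0, 4, (8, 100))  # dummy expert camera labels
optimizer.zero_grad()
logits = model(features)
loss = loss_fn(logits.reshape(-1, 4), expert_cameras.reshape(-1))
loss.backward()
optimizer.step()
```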

    Cinématographie et montage automatique dans des environnements virtuels

    The wide availability of high-resolution 3D models and the ease of creating new geometric and animated content with low-cost input devices open to many the possibility of becoming digital 3D storytellers. To date, however, there is a clear lack of accessible tools to easily create the cinematography (positioning and moving the cameras to create shots) and perform the editing of such stories (selecting appropriate cuts between the shots created by the cameras). Creating a movie requires knowledge of a significant number of empirical rules and established conventions. Most 3D animation packages do not encompass this expertise, calling for automatic approaches that would, at least partially, support users in their creative process. In this thesis we address both challenges: automating cinematography and editing in virtual environments. Using cameras to convey events and actions in dynamic environments is a major concern in many CG applications. In the context of crowd simulation, we present a novel approach to the challenge of controlling multiple cameras tracking groups of targets. In this first contribution we propose a system that relies on Reynolds' model of steering behaviors to control and locally coordinate a collection of autonomous camera agents evolving in dynamic 3D environments to shoot multi-scale events. Editing a movie is a complex and tedious endeavor that requires a lot of expertise in the field; automating the process therefore calls for a formalization of this knowledge. Using continuity editing -- the predominant style of editing -- as a benchmark for evaluating edits, we introduce a novel optimization-based approach for automatically creating well-edited movies from a 3D animation. We propose an efficient solution through dynamic programming, relying on a plausible semi-Markov assumption. Building upon our first contribution, we then propose a novel importance-driven approach to cinematic replay that exploits both the narrative and geometric information in games to automatically compute camera paths. Combined with our editing framework, our solution generates coherent cinematic replays of game sessions. Finally, drawing inspiration from standard practices in the movie industry, we introduce a novel approach to camera path planning. This solution ensures realistic trajectories by constraining camera motion to a virtual rail. The camera position and orientation are optimized over time along the rail to best satisfy visual properties. The computed shots constitute relevant inputs for the editing framework, which then generates compelling cinematographic content.
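
    A minimal sketch of the steering-behavior idea underlying the camera-agent contribution, in Python; the particular combination of a seek force toward the targets' centroid with a separation force between cameras, and all weights and names, are illustrative assumptions rather than the thesis' actual controller.

```python
import numpy as np

def seek(position, velocity, target, max_speed=2.0):
    """Reynolds-style seek: steer toward the centroid of the tracked targets."""
    desired = target - position
    norm = np.linalg.norm(desired)
    if norm > 1e-6:
        desired = desired / norm * max_speed
    return desired - velocity

def separation(position, others, radius=3.0):
    """Push camera agents apart so they do not all film from the same spot."""
    force = np.zeros(3)
    for other in others:
        offset = position - other
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < radius:
            force += offset / dist**2
    return force

def step_camera(position, velocity, targets, other_cameras, dt=0.033):
    """One integration step for a single autonomous camera agent."""
    centroid = np.mean(targets, axis=0)
    steering = seek(position, velocity, centroid) + 0.5 * separation(position, other_cameras)
    velocity = velocity + steering * dt
    position = position + velocity * dt
    return position, velocity
```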

    Implementing Hitchcock - the Role of Focalization and Viewpoint

    International audienceFocalization and viewpoint are important aspects of narrative movie-making that need to be taken into account by cinematography and editing. In this paper, we argue that viewpoint can be determined from the first principles of focalization in the screenplay and adherence to a slightly modified version of Hitchcock's rule in cinematography and editing. With minor changes to previous work in automatic cinematography and editing, we show that this strategy makes it possible to easily control the viewpoint in the movie by rewriting and annotating the screenplay. We illustrate our claim with four versions of a moderately complex movie scene obtained by focalizing on its four main characters, with dramatically different camera choices
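
    Hitchcock's rule, as invoked above, states that the size of a subject in the frame should be proportional to its narrative importance at that moment. A minimal, hypothetical sketch of how such a rule could drive shot-size selection; the importance values, the shot-size ladder, and the linear mapping are assumptions for illustration only.

```python
# Hypothetical shot-size ladder, from widest to tightest framing.
SHOT_SIZES = ["long shot", "medium shot", "medium close-up", "close-up"]

def shot_size_for(importance):
    """Map a normalized narrative importance (0..1) to a shot size, following
    the spirit of Hitchcock's rule: more important -> larger in the frame."""
    index = min(int(importance * len(SHOT_SIZES)), len(SHOT_SIZES) - 1)
    return SHOT_SIZES[index]

# Illustrative focalization annotation: the focalized character gets high importance.
importance = {"protagonist": 0.9, "bystander": 0.2}
print({name: shot_size_for(value) for name, value in importance.items()})
# -> {'protagonist': 'close-up', 'bystander': 'long shot'}
```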

    Comparing film-editing

    Through a precise 3D animated reconstruction of a key scene in the movie "Back to the Future", directed by Robert Zemeckis, we are able to make a detailed comparison of two very different versions of editing. The first version closely follows film editor Arthur Schmidt's original sequence of shots cut in the movie. The second version is automatically generated using our recent algorithm [GRLC15] with the same choice of cameras. A shot-by-shot and cut-by-cut comparison demonstrates that our algorithm provides a remarkably pleasant and valid solution, even in such a rich narrative context, while differing from the original version more than 60% of the time. Our explanation is that our version avoids stylistic effects, whereas the original version favors such effects and uses them effectively. As a result, we suggest that our algorithm can be thought of as a baseline ("film-editing zero degree") for future work on film-editing style.

    Continuity Editing for 3D Animation

    We describe an optimization-based approach for automatically creating well-edited movies from a 3D animation. While previous work has mostly focused on the problem of placing cameras to produce nice-looking views of the action, the problem of cutting and pasting shots from all available cameras has never been addressed extensively. In this paper, we review the main causes of editing errors in the literature and propose an editing model relying on a minimization of such errors. We make a plausible semi-Markov assumption, resulting in a dynamic programming solution which is computationally efficient. We also show that our method can generate movies with different editing rhythms, and we validate the results through a user study. Combined with state-of-the-art cinematography, our approach therefore promises to significantly extend the expressiveness and naturalness of virtual movie-making.
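
    A minimal sketch of the kind of semi-Markov dynamic program described above, in Python: the edit is a sequence of shots, each keeping one camera for a bounded duration, and the recurrence minimizes a per-shot cost plus a transition (cut) cost. The cost functions, the shot-length bound, and all names are placeholders, not the paper's actual error terms.

```python
import math

def best_edit(num_frames, cameras, shot_cost, cut_cost, max_shot_len=120):
    """Semi-Markov DP sketch: best[t][c] is the minimal cost of an edit of
    frames 0..t-1 whose last shot is taken by camera c and ends at frame t."""
    best = [{c: math.inf for c in cameras} for _ in range(num_frames + 1)]
    back = [{c: None for c in cameras} for _ in range(num_frames + 1)]
    for c in cameras:
        best[0][c] = 0.0

    for t in range(1, num_frames + 1):
        for c in cameras:
            # Enumerate admissible durations d of the last shot [t - d, t).
            for d in range(1, min(max_shot_len, t) + 1):
                s = t - d
                prev_cost, prev_cam = min(
                    (best[s][p] + (cut_cost(p, c, s) if s > 0 and p != c else 0.0), p)
                    for p in cameras
                )
                cost = prev_cost + shot_cost(c, s, t)
                if cost < best[t][c]:
                    best[t][c] = cost
                    back[t][c] = (s, prev_cam)

    # Backtrack the optimal sequence of (camera, start_frame, end_frame) shots.
    t = num_frames
    c = min(cameras, key=lambda cam: best[t][cam])
    shots = []
    while t > 0:
        s, prev_cam = back[t][c]
        shots.append((c, s, t))
        t, c = s, prev_cam
    return list(reversed(shots))

# Illustrative use with placeholder costs: prefer camera 0 and penalize every cut.
shots = best_edit(
    num_frames=50,
    cameras=[0, 1, 2],
    shot_cost=lambda cam, start, end: (end - start) * (0.1 if cam == 0 else 0.2),
    cut_cost=lambda prev_cam, cam, frame: 1.0,
)
print(shots)
```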