1,026 research outputs found

    Techniques de mise en scène pour le jeu vidéo et l'animation [Film directing techniques for video games and animation]

    Get PDF
    Eurographics State of the Art Report (STAR). Over the last forty years, researchers in computer graphics have proposed a large variety of theoretical models and computer implementations of a virtual film director, capable of creating movies from minimal input such as a screenplay or storyboard. The underlying film directing techniques are also in high demand to assist and automate the generation of movies in computer games and animation. The goal of this survey is to characterize the spectrum of applications that require film directing, to present a historical and up-to-date summary of research in algorithmic film directing, and to identify promising avenues and hot topics for future research.

    Dynamic Storyboard Generation in an Engine-based Virtual Environment for Video Production

    Full text link
    Amateurs working on mini-films and short-form videos usually spend a great deal of time and effort on the multi-round process of setting up and adjusting scenes, plots, and cameras to deliver satisfying shots. We present Virtual Dynamic Storyboard (VDS), which allows users to storyboard shots in virtual environments so that the filming staff can easily test shot settings before the actual filming. VDS runs in a "propose-simulate-discriminate" mode: given a formatted story script and a camera script as input, it generates several character-animation and camera-movement proposals that follow predefined story and cinematic rules, which an off-the-shelf simulation engine then renders as videos. To select the top-quality dynamic storyboard from the candidates, a shot-ranking discriminator scores them against shot-quality criteria learned from professionally created data. VDS is validated through extensive experiments and user studies, demonstrating its efficiency, effectiveness, and potential for assisting amateur video production. Project page: https://virtualfilmstudio.github.io
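    The abstract only outlines the propose-simulate-discriminate pipeline; the following minimal Python sketch shows how such a loop could be wired together. The class and function names are placeholders for illustration, not the authors' actual API.

```python
import random
from dataclasses import dataclass

@dataclass
class Proposal:
    character_animation: dict  # hypothetical keyframe data per character
    camera_movement: dict      # hypothetical camera path description

def propose(story_script: str, camera_script: str, n: int = 8) -> list:
    """Generate candidate proposals that follow predefined story and cinematic rules."""
    return [Proposal({"story": story_script, "seed": i},
                     {"camera": camera_script, "seed": i}) for i in range(n)]

def simulate(proposal: Proposal) -> dict:
    """Stand-in for rendering the proposal with an off-the-shelf engine."""
    return {"frames": [], "proposal": proposal}

def discriminate(video: dict) -> float:
    """Stand-in for the learned shot-ranking discriminator (random score here)."""
    return random.random()

def dynamic_storyboard(story_script: str, camera_script: str) -> dict:
    candidates = [simulate(p) for p in propose(story_script, camera_script)]
    return max(candidates, key=discriminate)  # keep the top-ranked candidate

best = dynamic_storyboard("INT. KITCHEN - DAY ...", "medium shot on ALICE, slow dolly-in")
```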

    Virtual Cinematography in Games: Investigating the Impact on Player Experience

    Get PDF

    Thinking like a director: Film editing patterns for virtual cinematographic storytelling

    Get PDF
    This paper introduces Film Editing Patterns (FEP), a language for formalizing film editing practices and stylistic choices found in movies. FEP constructs are constraints, expressed over one or more shots from a movie sequence, that characterize changes in cinematographic visual properties such as shot sizes, camera angles, or the layout of actors on the screen. We present the vocabulary of the FEP language, introduce its usage in analyzing styles from annotated film data, and describe how it can support users in the creative design of film sequences in 3D. More specifically, (i) we define the FEP language, (ii) we present an application for crafting filmic sequences from 3D animated scenes that uses FEPs as a high-level means to select cameras and perform cuts between cameras that follow best practices in cinema, and (iii) we evaluate the benefits of FEPs through user experiments in which professional filmmakers and amateurs had to create cinematographic sequences. The evaluation suggests that users generally appreciate the idea of FEPs, and that it can effectively help novice and moderately experienced users craft film sequences with little training.
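    As a rough illustration of the idea of editing patterns as constraints over consecutive shots, the sketch below encodes one hypothetical pattern in Python; the property names, shot-size codes, and the example pattern are assumptions, not the published FEP vocabulary.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Shot:
    size: str                          # e.g. "LS", "MS", "CU" (illustrative codes)
    angle: str                         # e.g. "eye", "low", "high"
    actors_left_to_right: List[str]    # on-screen layout of actors

@dataclass
class EditingPattern:
    name: str
    window: int                                # number of consecutive shots it covers
    predicate: Callable[[List[Shot]], bool]    # constraint over that window

def intensify(shots: List[Shot]) -> bool:
    """Shot sizes get progressively tighter (assumed ordering LS < MS < CU)."""
    order = {"LS": 0, "MS": 1, "CU": 2}
    return all(order[a.size] <= order[b.size] for a, b in zip(shots, shots[1:]))

def matches(pattern: EditingPattern, sequence: List[Shot]) -> List[int]:
    """Return the start indices where the pattern holds on the shot sequence."""
    return [i for i in range(len(sequence) - pattern.window + 1)
            if pattern.predicate(sequence[i:i + pattern.window])]

seq = [Shot("LS", "eye", ["A", "B"]), Shot("MS", "eye", ["A"]), Shot("CU", "eye", ["A"])]
print(matches(EditingPattern("intensify", 3, intensify), seq))  # -> [0]
```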

    The Prose Storyboard Language: A Tool for Annotating and Directing Movies (Version 2.0, Revised and Illustrated Edition)

    Get PDF
    The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and a limited vocabulary borrowed from working practices in traditional movie-making, and is intended to be readable by both machines and humans. The language has been designed over the last ten years to serve as a high-level user interface for intelligent cinematography and editing systems. In this new paper, we present the latest evolution of the language and the results of an extensive annotation exercise showing the benefits of the language in the task of annotating the sophisticated cinematography and film editing of classic movies.
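    For illustration only, the toy parser below reads a one-sentence-per-shot description in the spirit described above; its grammar and shot-size codes are simplified assumptions, not the actual Prose Storyboard Language specification.

```python
import re

# Toy grammar loosely inspired by "one sentence per shot":
#   "<SIZE> on <ACTOR> [and <ACTOR> ...]", e.g. "MS on Rick and Ilsa".
# The size codes and sentence structure are illustrative assumptions.
SHOT_RE = re.compile(r"^(ECU|CU|MCU|MS|MLS|LS|ELS) on (.+)$")

def parse_shot(sentence: str) -> dict:
    match = SHOT_RE.match(sentence.strip())
    if not match:
        raise ValueError(f"not a recognized shot description: {sentence!r}")
    size, actors = match.groups()
    return {"size": size, "actors": [a.strip() for a in actors.split(" and ")]}

print(parse_shot("MS on Rick and Ilsa"))
# -> {'size': 'MS', 'actors': ['Rick', 'Ilsa']}
```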

    Contrôle de caméra virtuelle à base de partitions spatiales dynamiques [Virtual camera control based on dynamic spatial partitions]

    Get PDF
    Virtual camera control is nowadays an essential component of many computer graphics applications. Despite its importance, current approaches remain limited in their expressiveness, interactivity, and performance. Typically, elements of directorial style and genre cannot easily be modeled or simulated, because existing systems cannot simultaneously control viewpoint computation, camera path planning, and editing. Second, the creative potential of coupling a human with an intelligent system to assist users in the complex task of designing cinematographic sequences remains largely unexplored. Finally, most techniques rely on computationally expensive optimization performed in a 6D search space, which prevents their application in real-time contexts. In this thesis, we first propose a unifying framework that handles four key aspects of cinematography (viewpoint computation, camera path planning, editing, and visibility computation) in an expressive model that accounts for some elements of directorial style. We then propose a workflow that combines automated intelligence with user interaction. We finally present a novel and efficient approach to virtual camera control that reduces the search space from 6D to 3D and has the potential to replace a number of existing formulations.
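    The abstract does not spell out the reduced parameterization. One generic way to cut the camera search space from 6D to 3D is to place the camera on a sphere around a subject and derive its orientation from a look-at constraint, as in the illustrative Python sketch below; this is an assumption for exposition, not necessarily the model developed in the thesis.

```python
import math
import numpy as np

def camera_from_spherical(target: np.ndarray, theta: float, phi: float, dist: float):
    """Reduce the 6D camera search (3D position + 3D orientation) to 3 parameters
    (azimuth theta, elevation phi, distance dist) by placing the camera on a sphere
    around a target and deriving the orientation from a look-at constraint.
    Assumes the camera is not looking straight up or down."""
    position = target + dist * np.array([
        math.cos(phi) * math.cos(theta),
        math.cos(phi) * math.sin(theta),
        math.sin(phi),
    ])
    forward = target - position
    forward /= np.linalg.norm(forward)
    up_world = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, up_world)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    return position, np.stack([right, up, -forward])  # position + rotation matrix

pos, rot = camera_from_spherical(np.array([0.0, 0.0, 1.7]),
                                 math.radians(30), math.radians(10), 3.0)
```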

    Narrative-Driven Camera Control for Cinematic Replay of Computer Games

    Get PDF
    This paper presents a system that generates cinematic replays for dialogue-based 3D video games. The system exploits the narrative and geometric information present in these games and automatically computes camera framings and edits to build a coherent cinematic replay of the gaming session. We propose a novel importance-driven approach to cinematic replay: rather than relying on actions performed by characters to drive the cinematography (as in idiom-based approaches), we rely on the importance of characters in the narrative. We first devise a mechanism to compute the varying importance of the characters. We then map character importances to different camera specifications and propose a novel technique that (i) automatically computes camera positions satisfying given specifications and (ii) provides smooth camera motions when transitioning between specifications. We demonstrate the features of our system by implementing three camera behaviors (one for master shots, one for shots on the player character, and one for reverse shots). We present results obtained by interfacing our system with a full-fledged serious game (Nothing for Dinner) containing several hours of 3D animated content.
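    To make the importance-to-camera mapping concrete, here is a small illustrative Python sketch; the importance heuristic, thresholds, and behavior labels are assumptions, not the paper's actual model.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class CameraSpec:
    behavior: str   # "master", "on_player", or "reverse"
    subject: str    # which character (or the whole scene) the framing favors

def character_importance(dialogue_lines: Dict[str, int], speaking_now: str) -> Dict[str, float]:
    """Toy importance: recent dialogue share, boosted for the current speaker."""
    total = max(sum(dialogue_lines.values()), 1)
    return {name: lines / total + (0.5 if name == speaking_now else 0.0)
            for name, lines in dialogue_lines.items()}

def choose_camera(importance: Dict[str, float], player: str) -> CameraSpec:
    """Map importances to one of three camera behaviors (illustrative thresholds)."""
    top, score = max(importance.items(), key=lambda kv: kv[1])
    if score < 0.4:
        return CameraSpec("master", subject="scene")     # no clear focus: wide master shot
    if top == player:
        return CameraSpec("on_player", subject=player)   # focus on the player character
    return CameraSpec("reverse", subject=top)            # focus on the other character

spec = choose_camera(character_importance({"Frank": 3, "Player": 1}, "Frank"), player="Player")
print(spec)  # CameraSpec(behavior='reverse', subject='Frank')
```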

    Machinima Filmmaking: The Integration of Immersive Technology for Collaborative Machinima Filmmaking

    Get PDF
    This Digital Media MS project proposes to create a flexible, intuitive virtual cinematography tool with a low barrier to entry that will enable participants engaged in human-computer interaction (HCI) activities to quickly stage, choreograph, rehearse, and capture performances in real time. Heretofore, Machinima developers have used limited forms of expressive input devices to puppeteer characters and record in-game live performances, relying on a gamepad, keyboard, mouse, and joystick to produce content. This has stagnated Machinima development, because machinimators have not embraced the current evolution of input devices for creating, capturing, and editing content, thereby missing game-engine programming possibilities that could exploit new 3D compositing techniques and alternatives to strengthen interactivity, collaboration, and efficiency in cinematic pipelines. Our system, which leverages consumer-affordable hardware and software, advances Machinima production by providing a foundation for alternative cinematic practices, creating a seamless form of human-computer interaction, and drawing more people toward Machinima filmmaking. We propose to produce an Unreal Engine 4 plugin that integrates virtual reality and eye tracking via the Oculus Rift DK2 and Tobii EyeX, respectively. The plugin will enable two people, in the roles of Director and Performer, to navigate and interact within a virtual 3D space for collaborative Machinima filmmaking. M.S., Digital Media -- Drexel University, 201