    Between Sense and Sensibility: Declarative narrativisation of mental models as a basis and benchmark for visuo-spatial cognition and computation focussed collaborative cognitive systems

    What lies between 'sensing' and 'sensibility'? In other words, what kind of cognitive processes mediate sensing capability and the formation of sensible impressions (e.g., abstractions, analogies, hypotheses and theory formation, beliefs and their revision, argument formation) in domain-specific problem solving, or in regular activities of everyday living, working and simply going around in the environment? How can knowledge and reasoning about such capabilities, as exhibited by humans in particular problem contexts, be used as a model and benchmark for the development of collaborative cognitive (interaction) systems concerned with human assistance, assurance, and empowerment? We pose these questions in the context of a range of assistive technologies concerned with visuo-spatial perception and cognition tasks encompassing aspects such as commonsense, creativity, and the application of specialist domain knowledge and problem-solving thought processes. Assistive technologies being considered include: (a) human activity interpretation; (b) high-level cognitive robotics; (c) people-centred creative design in domains such as architecture & digital media creation; and (d) qualitative analysis in geographic information systems. Computational narratives not only provide a rich cognitive basis, but they also serve as a benchmark of functional performance in our development of computational cognitive assistance systems. We posit that computational narrativisation pertaining to space, actions, and change provides a useful model of visual and spatio-temporal thinking within a wide range of problem-solving tasks and application areas where collaborative cognitive systems could serve an assistive and empowering function. (Comment: 5 pages, research statement summarising recent publication.)

    Intelligent Camera Control Using Behavior Trees

    Automatic camera systems produce very basic animations for virtual worlds. Users often view environments through two types of cameras: a camera that they control manually, or a very basic automatic camera that follows their character, minimizing occlusions. Real cinematography features much more variety, producing more robust stories. Cameras shoot establishing shots, close-ups, tracking shots, and bird's-eye views to enrich a narrative. Camera techniques such as zoom, focus, and depth of field contribute to framing a particular shot. We present an intelligent camera system that automatically positions, pans, tilts, zooms, and tracks events occurring in real-time while obeying traditional standards of cinematography. We design behavior trees that describe how a single intelligent camera might behave from low-level narrative elements assigned by "smart events". Camera actions are formed by hierarchically arranging behavior sub-trees encapsulating nodes that control specific camera semantics. This approach is more modular and particularly reusable for quickly creating complex camera styles and transitions, rather than focusing only on visibility. Additionally, our user interface allows a director to provide further camera instructions, such as prioritizing one event over another, drawing a path for the camera to follow, and adjusting camera settings on the fly. We demonstrate our method by placing multiple intelligent cameras in a complicated world with several events and storylines, and illustrate how to produce a well-shot "documentary" of the events constructed in real-time.
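    The composition described above, in which camera actions emerge from hierarchically arranged behavior sub-trees, can be sketched in a few lines. This is a minimal illustrative behavior tree, not the paper's implementation; all node and shot names are hypothetical.

    ```python
    # Minimal behavior-tree sketch for an intelligent camera: a Selector tries
    # higher-priority sub-trees first, a Sequence gates an Action on a Condition.
    from enum import Enum

    class Status(Enum):
        SUCCESS = 1
        FAILURE = 2

    class Selector:
        """Ticks children in order until one succeeds."""
        def __init__(self, *children):
            self.children = children
        def tick(self, camera, event):
            for child in self.children:
                if child.tick(camera, event) == Status.SUCCESS:
                    return Status.SUCCESS
            return Status.FAILURE

    class Sequence:
        """Ticks children in order; fails as soon as one fails."""
        def __init__(self, *children):
            self.children = children
        def tick(self, camera, event):
            for child in self.children:
                if child.tick(camera, event) == Status.FAILURE:
                    return Status.FAILURE
            return Status.SUCCESS

    class Condition:
        def __init__(self, predicate):
            self.predicate = predicate
        def tick(self, camera, event):
            return Status.SUCCESS if self.predicate(camera, event) else Status.FAILURE

    class Action:
        def __init__(self, effect):
            self.effect = effect
        def tick(self, camera, event):
            self.effect(camera, event)
            return Status.SUCCESS

    # Toy policy: important events get a close-up; otherwise track the subject.
    def frame_close_up(cam, ev): cam["shot"] = f"close-up on {ev['subject']}"
    def track_subject(cam, ev): cam["shot"] = f"tracking {ev['subject']}"

    camera_tree = Selector(
        Sequence(Condition(lambda c, e: e["priority"] >= 5), Action(frame_close_up)),
        Action(track_subject),
    )

    camera = {}
    camera_tree.tick(camera, {"subject": "hero", "priority": 7})
    print(camera["shot"])  # close-up on hero
    ```

    Swapping in a different camera style then amounts to rearranging or replacing sub-trees rather than rewriting a monolithic controller, which is the modularity the abstract emphasizes.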

    Directing Techniques for Video Games and Animation (Techniques de mise en scène pour le jeu vidéo et l'animation)

    Eurographics State of the Art Report (STAR). Over the last forty years, researchers in computer graphics have proposed a large variety of theoretical models and computer implementations of a virtual film director, capable of creating movies from minimal input such as a screenplay or storyboard. The underlying film directing techniques are also in high demand to assist and automate the generation of movies in computer games and animation. The goal of this survey is to characterize the spectrum of applications that require film directing, to present a historical and up-to-date summary of research in algorithmic film directing, and to identify promising avenues and hot topics for future research.

    Virtual Cinematography in Games: Investigating the Impact on Player Experience


    A Lightweight Intelligent Virtual Cinematography System for Machinima Production

    Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific, modular, and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.
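    An offline script-to-shots planner of the kind the abstract describes can be sketched as follows. This is a hypothetical illustration of the general approach (a shot library plus per-beat selection), not Cambot's actual algorithm; all shot names and costs are invented.

    ```python
    # Offline shot planning sketch: for each beat of a script, choose a shot
    # from a reusable library, penalising transitions between camera setups so
    # camera and actor blocking stay coordinated across consecutive beats.

    SHOT_LIBRARY = {
        "close-up": {"setup": "near", "cost": 1},
        "two-shot": {"setup": "mid",  "cost": 2},
        "wide":     {"setup": "far",  "cost": 3},
    }

    def transition_cost(prev, nxt):
        # Re-staging the camera between setups is costly, so the planner
        # prefers to remain in a compatible setup when it can.
        if prev is None or SHOT_LIBRARY[prev]["setup"] == SHOT_LIBRARY[nxt]["setup"]:
            return 0
        return 2

    def plan(script):
        """Greedy offline pass: per beat, pick the allowed shot minimising
        shot cost plus transition cost from the previous shot."""
        chosen, prev = [], None
        for beat in script:
            best = min(
                beat["allowed_shots"],
                key=lambda s: SHOT_LIBRARY[s]["cost"] + transition_cost(prev, s),
            )
            chosen.append((beat["line"], best))
            prev = best
        return chosen

    script = [
        {"line": "INT. HALL - Alice speaks", "allowed_shots": ["close-up", "two-shot"]},
        {"line": "Bob replies",              "allowed_shots": ["close-up", "wide"]},
        {"line": "Both exit",                "allowed_shots": ["wide", "two-shot"]},
    ]
    for line, shot in plan(script):
        print(f"{shot:9s} | {line}")
    ```

    Because the whole script is available up front, an offline planner like this can trade off choices across beats (here only greedily; a dynamic-programming pass over the same costs would be globally optimal), which is exactly what a purely reactive, online camera cannot do.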

    Movie Editing and Cognitive Event Segmentation in Virtual Reality Video

    Traditional cinematography has relied for over a century on a well-established set of editing rules, called continuity editing, to create a sense of situational continuity. Despite massive changes in visual content across cuts, viewers in general experience no trouble perceiving the discontinuous flow of information as a coherent set of events. However, Virtual Reality (VR) movies are intrinsically different from traditional movies in that the viewer controls the camera orientation at all times. As a consequence, common editing techniques that rely on camera orientations, zooms, etc., cannot be used. In this paper we investigate key relevant questions to understand how well traditional movie editing carries over to VR. To do so, we rely on recent cognition studies and the event segmentation theory, which states that our brains segment continuous actions into a series of discrete, meaningful events. We first replicate one of these studies to assess whether the predictions of such theory can be applied to VR. We next gather gaze data from viewers watching VR videos containing different edits with varying parameters, and provide the first systematic analysis of viewers' behavior and the perception of continuity in VR. From this analysis we make a series of relevant findings; for instance, our data suggests that predictions from the cognitive event segmentation theory are useful guides for VR editing; that different types of edits are equally well understood in terms of continuity; and that spatial misalignments between regions of interest at the edit boundaries favor a more exploratory behavior even after viewers have fixated on a new region of interest. In addition, we propose a number of metrics to describe viewers' attentional behavior in VR. We believe the insights derived from our work can be useful as guidelines for VR content creation
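    One simple attentional measure in the spirit of those the abstract proposes is the time viewers take, after an edit, to fixate the new region of interest (ROI). The sketch below is an illustrative 1-D (yaw-only) version under invented parameters, not the paper's actual metric.

    ```python
    # Time-to-ROI sketch: given timestamped gaze yaw samples, measure how long
    # after a cut the gaze first lands within `fov` degrees of the new ROI.

    def time_to_reach_roi(gaze_samples, cut_time, roi, fov=15.0):
        """Return seconds from the cut until gaze first falls within `fov`
        degrees of the ROI centre; None if it never does.
        gaze_samples: list of (timestamp_seconds, yaw_degrees) pairs."""
        for t, yaw in gaze_samples:
            if t < cut_time:
                continue
            # Shortest angular distance on the yaw circle (simplified to 1-D).
            diff = abs((yaw - roi + 180) % 360 - 180)
            if diff <= fov:
                return t - cut_time
        return None

    samples = [(0.0, 0.0), (0.5, 40.0), (1.0, 85.0), (1.5, 92.0)]
    print(time_to_reach_roi(samples, cut_time=0.2, roi=90.0))  # 0.8
    ```

    Aggregated over many viewers and edits, a measure like this lets one compare, for example, aligned versus misaligned ROIs at the cut boundary, the kind of contrast the analysis above draws.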

    An Interactive Narrative Architecture Based on Filmmaking Theory

    Designing and developing an interactive narrative experience includes development of story content as well as a visual composition plan for visually realizing that content. Theatre directors, filmmakers, and animators have emphasized the importance of visual design. Choices of character placement, lighting configuration, and camera movement have been documented by designers to have a direct impact on communicating the narrative, evoking emotions and moods, and engaging viewers. Many research projects have focused on adapting the narrative content to the interaction, yet little attention has been given to adapting the visual presentation. In this paper, I present a new approach to interactive narrative, one based on filmmaking theory. I propose an interactive narrative architecture that, in addition to dynamically selecting narrative events to suit the continuously changing situation, automatically and in real time reconfigures the visual design, integrating camera movements, lighting modulation, and character movements. The architecture utilizes rules extracted from filmmaking, cinematography, and visual arts theories. I argue that such adaptation will lead to increased engagement and an enriched interactive narrative experience.

    Camera Control through Cinematography in 3D Computer Games

    Modern 3D computer games have the potential to employ principles from cinematography in rendering the action in the game. Using principles of cinematography would take advantage of techniques that have been used to render action in cinematic films for more than a century. This paper outlines our proposal to develop a camera control system that uses principles of cinematography for 3D computer games, and provides a critical review of related research.