1,316 research outputs found

    Graphics Insertions into Real Video for Market Research


    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically-coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly optimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency. (Comment: 11 pages)
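
    As a rough illustration of the CAD-to-image alignment step described above, the sketch below jointly fits a rigid pose and small per-vertex offsets to 2D correspondences with SciPy; the variable names, the pinhole projection, and the simple offset regularizer are assumptions made for illustration, not the authors' implementation.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def project(K, X):
            # Pinhole projection of camera-space points X (N, 3) with intrinsics K (3, 3).
            x = X @ K.T
            return x[:, :2] / x[:, 2:3]

        def residuals(params, V, P, K, lam=0.1):
            # params = [axis-angle rotation (3), translation (3), per-vertex offsets (3N)].
            r, t = params[:3], params[3:6]
            D = params[6:].reshape(-1, 3)            # non-rigid per-vertex offsets
            R = Rotation.from_rotvec(r).as_matrix()
            X = (V + D) @ R.T + t                    # deform the CAD proxy, then pose it
            reproj = (project(K, X) - P).ravel()     # fit to 2D image correspondences
            smooth = lam * D.ravel()                 # penalize large deformation
            return np.concatenate([reproj, smooth])

        def align(V, P, K):
            # V: CAD vertices (N, 3); P: matched image points (N, 2); K: camera intrinsics.
            x0 = np.zeros(6 + 3 * len(V))
            x0[5] = 5.0                              # initial guess: object in front of camera
            return least_squares(residuals, x0, args=(V, P, K)).x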

    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, which is the topic addressed in this dissertation, computer analysis of the camera's motion can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points and therefore fail when only one vanishing point is available. In addition, current multiple-view techniques, which make use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations and uses them to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available; this result extends the current state of the art in single-view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited for applications such as calibrating a network of cameras in video surveillance systems and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, covering both simple (panning, tracking, and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
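
    For context on the single-view setting discussed above, the sketch below shows a standard least-squares estimate of a vanishing point from detected line segments in NumPy; it is textbook projective geometry, not the dissertation's single-vanishing-point method, and the segment input format is an assumption.

        import numpy as np

        def vanishing_point(segments):
            # segments: (N, 4) array of line-segment endpoints (x1, y1, x2, y2) in the image,
            # all belonging to one family of parallel scene lines.
            n = len(segments)
            p1 = np.column_stack([segments[:, :2], np.ones(n)])   # homogeneous endpoints
            p2 = np.column_stack([segments[:, 2:], np.ones(n)])
            lines = np.cross(p1, p2)                               # homogeneous line coefficients
            lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
            # The vanishing point v minimizes sum_i (l_i . v)^2, i.e. it is the smallest
            # right singular vector of the stacked line coefficients.
            v = np.linalg.svd(lines)[2][-1]
            return v[:2] / v[2]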

    The Virtual Worlds of Cinema Visual Effects, Simulation, and the Aesthetics of Cinematic Immersion

    This thesis develops a phenomenology of immersive cinematic spectatorship. During an immersive experience in the cinema, the images, sounds, events, emotions, and characters that form a fictional diegesis become so compelling that our conscious experience of the real world is displaced by a virtual world. Theorists and audiences have long recognized cinema’s ability to momentarily substitute for the lived experience of reality, but it remains an under-theorized aspect of cinematic spectatorship. The first aim of this thesis is therefore to examine these immersive responses to cinema from three perspectives – the formal, the technological, and the neuroscientific – to describe the exact mechanisms through which a spectator’s immersion in a cinematic world is achieved. A second aim is to examine the historical development of the technologies of visual simulation that are used to create these immersive diegetic worlds. My analysis shows a consistent increase in the vividness and transparency of simulative technologies, two factors that are crucial determinants of a spectator’s immersion. In contrast to the cultural anxiety that often surrounds immersive responses to simulative technologies, I examine immersive spectatorship as an aesthetic phenomenon that is central to our engagement with cinema. The ubiquity of narrative – written, verbal, cinematic – shows that the ability to achieve immersion is a fundamental property of the human mind found in cultures diverse in both time and place. This thesis is thus an attempt to illuminate this unique human ability and examine the technologies that allow it to flourish.