
    Interactive 3D video editing

    We present a generic and versatile framework for interactive editing of 3D video footage. Our framework combines the advantages of conventional 2D video editing with the power of more advanced, depth-enhanced 3D video streams. Our editor takes 3D video as input and writes both 2D and 3D video formats as output. Its underlying core data structure is a novel 4D spatio-temporal representation which we call the video hypervolume. Conceptually, the processing loop comprises three fundamental operators: slicing, selection, and editing. The slicing operator allows users to visualize arbitrary hyperslices of the 4D data set. The selection operator labels subsets of the footage for spatio-temporal editing; it includes a 4D graph-cut-based algorithm for object selection. The editing operators include cut & paste, affine transformations, and compositing with other media, such as images and 2D video. For high-quality rendering, we employ EWA splatting with view-dependent texturing and boundary matting. We demonstrate the applicability of our methods to the post-production of 3D video.
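The abstract's slicing operator, which extracts arbitrary hyperslices from the 4D video hypervolume, can be illustrated with a minimal sketch. The dense array layout, axis order, and shapes below are assumptions for illustration only; the paper's actual representation is more sophisticated.

```python
import numpy as np

# Hypothetical video hypervolume: axes (t, z, y, x), one intensity per sample.
# Axis order and dense storage are assumptions, not the paper's representation.
hypervolume = np.random.rand(8, 4, 32, 32)

def hyperslice(volume, axis, index):
    """Extract a 3D hyperslice by fixing one of the four axes."""
    return np.take(volume, index, axis=axis)

frame = hyperslice(hypervolume, axis=0, index=3)   # spatial snapshot at t=3
epi   = hyperslice(hypervolume, axis=3, index=10)  # (t, z, y) slice at x=10

print(frame.shape)  # (4, 32, 32)
print(epi.shape)    # (8, 4, 32)
```

Fixing the time axis yields an ordinary 3D frame, while fixing a spatial axis yields an epipolar-style spatio-temporal slice, which is what makes 4D visualization of the footage possible.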

    The Video Mesh: A Data Structure for Image-based Three-dimensional Video Editing

    This paper introduces the video mesh, a data structure for representing video as 2.5D “paper cutouts.” The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a triangulation over this point set, and per-pixel information is obtained by interpolation. The user rotoscopes occluding contours, and we introduce an algorithm to cut the video mesh along them. Object boundaries are refined with per-pixel alpha values. The video mesh is at its core a set of texture-mapped triangles; we leverage graphics hardware to enable interactive editing and rendering of a variety of effects. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D-to-3D video conversion.
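The per-pixel interpolation the abstract describes (sparse tracked vertices with depth, triangulated, values interpolated inside each triangle) is essentially barycentric interpolation. A minimal sketch follows; the vertex coordinates and depth values are invented for illustration.

```python
import numpy as np

# Three tracked mesh vertices (x, y) with per-vertex depth (illustrative values).
verts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
depth = np.array([1.0, 2.0, 3.0])

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    m = np.array([b - a, c - a]).T
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def interp_depth(p):
    """Per-pixel depth obtained by interpolating the sparse vertex depths."""
    w = barycentric(np.asarray(p, dtype=float), *verts)
    return float(w @ depth)

print(interp_depth((0.0, 0.0)))  # 1.0 (exactly the first vertex's depth)
print(interp_depth((5.0, 5.0)))  # 2.5 (midpoint of the edge between b and c)
```

The same weights can interpolate any per-vertex attribute (optical flow, texture coordinates), which is why a triangulation over sparse tracks suffices to drive dense effects on graphics hardware.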

    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically-coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entire new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly minimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behavior while preserving geometric and visual consistency.

    Integrating diverse digital elements and DVD authoring to design a promotional interactive DVD media

    a. Project considerations: The project will be an experimental design using DVD media. A few factors must be considered before beginning the project. Goal: use the features of DVD media to create an interactive DVD. Value: help users understand that interactive DVDs promote products better than traditional media. Design solutions: find the best solutions for the project. Timeline: define the timetable of the project process. Evaluation: understand problems and revise the final project.
    b. Product definition: The product being promoted largely determines the overall design style of the final DVD. I chose the CDJ-1000, a DJ turntable. Not only is it an attractive product, it is also fun and has the ability to remix audio. These characteristics fit the features of DVD media well, and that type of lifestyle can be promoted effectively through DVD media.
    c. Project structure (please reference diagram 1 of the project structure): UDF format: 1. Demonstration section: real people demonstrate the product through film shooting, video editing, sound editing and remixing, lighting, and special effects. 2. Main features: QTVR motion menu (3D modeling and animation, DVD scripting). 3. Training time: use multi-angle video and multi-channel audio to train people how to use the product, combined with a quiz (different-angle video editing, DVD scripting). 4. Terminology: a basic menu system provides a database-style information system. 5. Product specifications: same as terminology. 6. Credits: credit information. ISO format: 1. Product game: a beat game for DVD media. 2. Product information: QTVR movie and product information. 3. DVD information: DVD media information, credits, and web links.
    d. Project procedures: The project will explore new technology and create interactive DVD media. There are no examples or reference information for this new technology, which makes defined project procedures necessary. (Please reference diagram 2 of the project procedure.)

    Dialectical polyptych: an interactive movie

    Most video games developed by large software companies approach cinematic language in an attempt to create a perfect combination of narrative, visual technique, and interaction. Unlike most video games, interactive film narratives normally involve an interruption in time whenever the spectator has to make choices. “Dialectical Polyptych” is an interactive movie included in a project called “Characters looking for a spect-actor”, which aims to give the spectator on-the-fly control over film editing, thus exploiting the role of the spectator as an active subject in the presented narrative. This paper presents a system based on a 3D sensor for tracking the spectator's movements and positions, which allows seamless real-time interactivity with the movie. Different positions of the body prompt a change in the angle or shot within each narrative, and hand swipes allow the spectator to alternate between the two parallel narratives, both producing a complementary narrative.
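The interaction model described (body position selects the angle or shot, a hand swipe toggles between the two parallel narratives) can be sketched as a small state mapping. All function names, thresholds, and shot labels below are invented for illustration; the paper's actual sensor pipeline is not specified here.

```python
# Hypothetical mapping from tracked spectator state to on-the-fly editing
# decisions. Names and thresholds are assumptions, not the paper's design.
def select_shot(body_x, narrative):
    """Lateral body position (normalized to [-1, 1]) picks the camera angle."""
    if body_x < -0.3:
        return (narrative, "left_angle")
    if body_x > 0.3:
        return (narrative, "right_angle")
    return (narrative, "center_shot")

def on_hand_swipe(narrative):
    """A hand swipe alternates between the two parallel narratives (0 and 1)."""
    return 1 - narrative

state = 0
state = on_hand_swipe(state)    # spectator swipes: switch to narrative 1
print(select_shot(0.5, state))  # (1, 'right_angle')
```

Because both inputs map continuously to editing decisions, the film never pauses for an explicit menu choice, which is the interruption-free interactivity the project aims for.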

    Service oriented interactive media (SOIM) engines enabled by optimized resource sharing

    In the same way that cloud computing, Software as a Service (SaaS), and Content Centric Networking (CCN) triggered a new class of software architectures fundamentally different from traditional desktop software, service oriented networking (SON) suggests a new class of media engine technologies, which we call Service Oriented Interactive Media (SOIM) engines. This includes a new approach for game engines and, more generally, interactive media engines for entertainment, training, educational, and dashboard applications. Porting traditional game engines and interactive media engines to the cloud without fundamentally changing the architecture, as is done frequently, can already enable various advantages of cloud computing for such applications, for example simple and transparent upgrading of content and a unified user experience on all end-user devices. This paper discusses a new architecture for game engines and interactive media engines fundamentally designed for the cloud and SON. The main advantages of SOIM engines are significantly higher resource efficiency, leading to a fraction of the cloud hosting costs. SOIM engines achieve these benefits through multilayered data sharing, efficient handling of many input and output channels for video, audio, and 3D world synchronization, and smart user session and session-slot management. The architecture and results of a prototype implementation of a SOIM engine are discussed.
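The resource-sharing idea behind session slots can be sketched minimally: many user sessions reference a single shared world state, each slot holding only per-user state. The class names and fields below are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of multilayered data sharing via session slots.
# Names are hypothetical; the point is that heavy assets exist once.
class SharedWorld:
    def __init__(self):
        # Loaded once, referenced by every session slot.
        self.assets = {"level": "geometry+textures"}

class SessionSlot:
    def __init__(self, world, user_id):
        self.world = world        # reference, not a copy -> resource sharing
        self.user_id = user_id
        self.camera = (0.0, 0.0)  # per-user output-channel state only

world = SharedWorld()
slots = [SessionSlot(world, uid) for uid in range(3)]
print(all(s.world is world for s in slots))  # True: one copy of heavy assets
```

Per-session memory then scales with the small per-user state rather than with the full engine footprint, which is where the claimed hosting-cost reduction would come from.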

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant-shading and constant-reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications, such as recoloring of objects, relighting of scenes, and editing of material appearance.
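The role of a user stroke in constraining the ill-posed intrinsic model I = R · S can be shown with a toy example: under a constant-shading stroke, shading is uniform over the stroked pixels, so reflectance there is recoverable up to one scale factor. The image values, stroke mask, and scale choice below are invented for illustration; the paper's solver is far more involved.

```python
import numpy as np

# Toy intrinsic model: I = R * S per pixel (values are invented).
image = np.array([[0.2, 0.4],
                  [0.6, 0.8]])
stroke = np.array([[True, True],
                   [False, False]])  # hypothetical constant-shading stroke

# Inside the stroke, S is constant: S = s0. The scale s0 is a free choice;
# here we pick the mean stroked intensity.
s0 = image[stroke].mean()                            # 0.3
reflectance = np.where(stroke, image / s0, np.nan)   # R = I / S in the stroke
shading = np.where(stroke, s0, np.nan)

print(reflectance[0])  # approximately [0.667, 1.333]
```

Without such a constraint, any factorization I = R · S is admissible per pixel; each stroke removes a family of these ambiguous solutions, which is how user guidance improves the decomposition.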