    Real-Time Stylized Rendering for Large-Scale 3D Scenes

    While modern digital entertainment has seen a major shift toward photorealism in animation, there is still significant demand for stylized rendering tools. Stylized, or non-photorealistic rendering (NPR), applications generally sacrifice physical accuracy for artistic or functional visual output. Often, NPR applications focus on extracting specific features from a 3D environment and highlighting them in a unique manner. One application of interest involves recreating 2D hand-drawn art styles in a 3D-modeled environment. This task poses challenges in the form of spatial coherence, feature extraction, and stroke line rendering. Previous research on this topic has struggled to overcome specific performance bottlenecks, which have limited the use of this technology in real-time applications. Specifically, many stylized rendering techniques have difficulty operating on large-scale scenes, such as open-world terrain environments. In this paper, we describe several novel rendering techniques for mimicking hand-drawn art styles in a large-scale 3D environment, including modifications to existing methods for stroke rendering and hatch-line texturing. Our system focuses on providing various complex styles while maintaining real-time performance, to maximize user interactivity. Our results demonstrate improved performance over existing real-time methods and offer several unique style options for users, though the system still suffers from some visual inconsistencies.
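
    The abstract names hatch-line texturing as one of its modified techniques but gives no detail. As a rough illustration of the general idea behind that family of methods (in the spirit of tonal art maps, not the authors' system), the Python sketch below picks a hatch-texture density level from the Lambertian tone at a surface point; the function name and level count are assumptions.

    ```python
    import numpy as np

    def hatch_level(normal, light_dir, num_levels=6):
        """Map diffuse tone to a discrete hatch-texture level.

        Level 0 selects the densest hatch texture (darkest tone);
        level num_levels - 1 selects the lightest (nearly unhatched).
        """
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        tone = max(float(np.dot(n, l)), 0.0)   # Lambertian tone in [0, 1]
        return min(int(tone * num_levels), num_levels - 1)

    # A surface tilted 45 degrees from the light has tone ~0.71,
    # which lands in level 4 of 6 (a fairly light hatch).
    print(hatch_level(np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])))
    ```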

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes, and editing of material appearance.
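
    The "ill-posed decomposition problem" here is the standard multiplicative intrinsic image model, in which each pixel factors as image = reflectance x shading (I = R * S). The sketch below is a deliberately naive single-image baseline of that model; the paper instead fuses estimates in 3D and constrains an optimization with user strokes. The smoothing-based shading estimate and all names are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intrinsic_baseline(image, sigma=15.0):
        """Naive intrinsic decomposition of I into R * S, per pixel.

        Shading S is approximated as low-pass-filtered luminance (shading
        varies smoothly); reflectance R is whatever remains. Real methods
        solve a constrained optimization instead of filtering.
        """
        luminance = image.mean(axis=2)                      # H x W
        shading = gaussian_filter(luminance, sigma) + 1e-6  # smooth -> S
        reflectance = image / shading[..., None]            # R = I / S
        return reflectance, shading

    img = np.random.rand(64, 64, 3)            # stand-in RGB frame
    R, S = intrinsic_baseline(img)
    assert np.allclose(R * S[..., None], img)  # R * S reproduces I
    ```

    In a constrained formulation, the strokes described in the abstract become equality constraints: a constant-reflectance stroke ties R together across its pixels, and a constant-shading stroke does the same for S.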

    Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples

    This paper presents several methods for shading meshes from scanned paint samples that represent dark-to-light transitions. Our techniques emphasize artistic control of brush stroke texture and color. We first demonstrate how the texture of the paint sample can be separated from its color gradient. We then demonstrate three methods, two real-time and one off-line, for producing rendered, shaded images from the texture samples. All three techniques use texture synthesis to generate additional paint samples. Finally, we develop metrics for evaluating how well each method achieves our goal in terms of texture similarity, shading correctness, and temporal coherence.
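
    As an illustration of the first step, separating a paint sample's texture from its color gradient: one simple way (an assumption here, not necessarily the paper's method) is to fit a color ramp as the mean color per luminance bin along the dark-to-light axis, then divide the ramp back out to leave the brush-stroke texture as a residual.

    ```python
    import numpy as np

    def separate_texture(sample, num_bins=64):
        """Split a dark-to-light paint sample into a color ramp + texture.

        The ramp is the mean color per luminance bin; the texture is the
        per-pixel residual after dividing the ramp back out.
        """
        lum = sample.mean(axis=2)
        bins = np.clip((lum * num_bins).astype(int), 0, num_bins - 1)
        ramp = np.zeros((num_bins, 3))
        for b in range(num_bins):
            mask = bins == b
            ramp[b] = sample[mask].mean(axis=0) if mask.any() else ramp[b - 1]
        texture = sample / (ramp[bins] + 1e-6)   # residual stroke detail
        return ramp, texture

    sample = np.random.rand(32, 128, 3)   # stand-in for a scanned sample
    ramp, tex = separate_texture(sample)
    ```

    Texture synthesis, as the abstract describes, can then grow additional patches of the residual texture, and shading re-applies the ramp at the tone dictated by the lighting.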

    Higher level techniques for the artistic rendering of images and video

    EThOS - Electronic Theses Online Service, United Kingdom

    Design of 2D time-varying vector fields

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains and for manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. Key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and a time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields are applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
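
    The element-based design the abstract describes, a sum of basis fields with time-varying influence, can be sketched compactly as v(x, t) = sum_i w_i(t) * b_i(x). The element kinds, falloff, and weight schedules below are illustrative assumptions, not the paper's exact basis.

    ```python
    import numpy as np

    def element_field(x, center, kind):
        """Basis vector field b_i(x) of one design element."""
        d = x - center
        r2 = float(np.dot(d, d)) + 1e-6
        if kind == "source":                      # radial outflow
            v = d / r2
        else:                                     # counter-clockwise vortex
            v = np.array([-d[1], d[0]]) / r2
        return v * np.exp(-r2)                    # localize the influence

    def field(x, t, elements):
        """Time-varying field v(x, t) = sum_i w_i(t) * b_i(x)."""
        return sum(w(t) * element_field(x, c, kind) for c, kind, w in elements)

    # A vortex fades in while a source fades out as t runs from 0 to 1,
    # smoothly morphing the field's behavior over time.
    elements = [
        (np.array([0.0, 0.0]), "vortex", lambda t: t),
        (np.array([1.0, 0.0]), "source", lambda t: 1.0 - t),
    ]
    print(field(np.array([0.5, 0.5]), 0.25, elements))
    ```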

    Realizing the physics of motile cilia synchronization with driven colloids

    Cilia and flagella in biological systems often show large-scale cooperative behaviors such as the synchronization of their beats in "metachronal waves". These are beautiful examples of emergent dynamics in biology, and are essential for life, allowing diverse processes from the motility of eukaryotic microorganisms to nutrient transport and clearance of pathogens from mammalian airways. How these collective states arise is not fully understood, but it is clear that individual cilia interact mechanically, and that a strong and long-ranged component of the coupling is mediated by the viscous fluid. We review here the work by ourselves and others aimed at understanding the behavior of hydrodynamically coupled systems, and particularly a set of results that have been obtained both experimentally and theoretically by studying actively driven colloidal systems. In these controlled scenarios, it is possible to selectively test aspects of living motile cilia, such as the geometrical arrangement, the effects of the driving profile, and the distance to no-slip boundaries. We outline and give examples of how it is possible to link model systems to observations on living systems, which can be made on microorganisms, on cell cultures, or on tissue sections. This area of research has clear clinical application in the long term, as severe pathologies are associated with compromised cilia function in humans.
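
    As a toy picture of the physics being reviewed (a Kuramoto-style reduction that is an assumption here, not any specific model from the article): two driven rotors with slightly different intrinsic frequencies, coupled through the fluid with a strength that decays with their separation, pull each other into a phase-locked state whenever the coupling exceeds the frequency mismatch.

    ```python
    import numpy as np

    # Two hydrodynamically coupled phase oscillators ("rotors"). Each runs
    # at its own beat frequency and is nudged toward its neighbour with an
    # effective strength that stands in for the viscous coupling between
    # driven colloids, which decays with their separation.
    omega = np.array([1.00, 1.02])   # intrinsic frequencies (rad/s)
    coupling = 0.05                  # locking needs 2 * coupling > |1.02 - 1.00|
    phi = np.array([0.0, 2.0])       # initial phases
    dt = 1e-3

    for _ in range(100_000):         # forward-Euler integration
        phi = phi + (omega + coupling * np.sin(phi[::-1] - phi)) * dt

    # The phase difference settles to a constant: arcsin(0.02 / 0.1) ~ 0.2 rad.
    print("phase difference:", (phi[1] - phi[0]) % (2 * np.pi))
    ```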

    Colour videos with depth : acquisition, processing and evaluation

    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalents in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display. I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid, a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension that incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps, particularly in the presence of image noise. The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise. I show that these videos with depth empower a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, such as a proposed video relighting technique which requires high-quality surface normals to produce plausible results. In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically. These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result of this is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model using a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.
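
    The sensor-fusion step that raises the depth resolution using the colour frame can be illustrated with the standard joint bilateral upsampling idea, a related, well-known technique rather than the dissertation's own dual-cross-bilateral grid and spatiotemporal filter; all parameters below are assumptions. Each high-resolution pixel averages nearby low-resolution depth samples, weighted by spatial distance and by colour similarity in the guide image, so depth edges snap to colour edges.

    ```python
    import numpy as np

    def joint_bilateral_upsample(depth_lr, color_hr, scale,
                                 sigma_s=2.0, sigma_c=0.1, radius=2):
        """Upsample low-res depth, guided by a high-res colour image."""
        H, W = color_hr.shape[:2]
        h_lr, w_lr = depth_lr.shape
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                yl, xl = y // scale, x // scale   # matching low-res pixel
                num = den = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = yl + dy, xl + dx
                        if not (0 <= ny < h_lr and 0 <= nx < w_lr):
                            continue
                        # Spatial weight: distance in the low-res grid.
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # Range weight: colour similarity in the guide.
                        dc = color_hr[y, x] - color_hr[min(ny * scale, H - 1),
                                                       min(nx * scale, W - 1)]
                        wc = np.exp(-float(np.dot(dc, dc)) / (2 * sigma_c ** 2))
                        num += ws * wc * depth_lr[ny, nx]
                        den += ws * wc
                out[y, x] = num / max(den, 1e-9)
        return out

    depth = np.random.rand(16, 16)      # stand-in noisy low-res ToF depth
    color = np.random.rand(64, 64, 3)   # high-res colour frame as guide
    up = joint_bilateral_upsample(depth, color, scale=4)
    ```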