
    Scalable partitioning for parallel position based dynamics

    We introduce a practical partitioning technique designed for parallelizing Position Based Dynamics and exploiting the ubiquitous multi-core processors present in current commodity GPUs. The input is a set of particles whose dynamics are influenced by spatial constraints. In the initialization phase, we build a graph in which each node corresponds to a constraint and two constraints are connected by an edge if they influence at least one common particle. We introduce a novel greedy algorithm for inserting additional constraints (phantoms) into the graph such that the resulting topology is q-colourable, where q ≥ 2 is an arbitrary number. We color the graph, and constraints with the same color are assigned to the same partition. The set of constraints belonging to each partition is then solved in parallel during the animation phase. We demonstrate that, by using our partitioning technique, the performance hit caused by GPU kernel calls is significantly decreased, while the visual quality, robustness and speed of serial position based dynamics are left unaffected.
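
    As a concrete illustration, here is a minimal sketch of the graph-construction and greedy-coloring steps, assuming each constraint is simply the tuple of particle indices it influences. It omits the paper's phantom-constraint insertion that bounds the number of colors at q; all names are illustrative.

    ```python
    from collections import defaultdict

    def build_constraint_graph(constraints):
        # Two constraints are adjacent if they influence a common particle.
        particle_to_constraints = defaultdict(list)
        for ci, particles in enumerate(constraints):
            for p in particles:
                particle_to_constraints[p].append(ci)
        graph = defaultdict(set)
        for shared in particle_to_constraints.values():
            for a in shared:
                for b in shared:
                    if a != b:
                        graph[a].add(b)
        return graph

    def greedy_color(graph, n_constraints):
        # Give each constraint the smallest color unused by its neighbors.
        colors = [-1] * n_constraints
        for node in range(n_constraints):
            used = {colors[nb] for nb in graph[node] if colors[nb] != -1}
            c = 0
            while c in used:
                c += 1
            colors[node] = c
        return colors

    # Constraints in one color class share no particles, so each class
    # (partition) can be solved in a single parallel pass.
    constraints = [(0, 1), (1, 2), (2, 3), (0, 3)]  # distance constraints on a quad
    colors = greedy_color(build_constraint_graph(constraints), len(constraints))
    partitions = defaultdict(list)
    for ci, c in enumerate(colors):
        partitions[c].append(ci)
    print(dict(partitions))  # {0: [0, 2], 1: [1, 3]}
    ```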

    gMotion: A spatio-temporal grammar for the procedural generation of motion graphics

    Creating compelling 2D animations that choreograph several groups of shapes by hand requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both their temporal and spatial components. We combine both aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to construct animations expressively with few rules. The grammar animates shapes, represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators and show how we can produce 2D animations of geometric objects by combining the expressive power of the grammar model, the composability of the operators with themselves, and the capabilities that derive from a unified spatio-temporal representation of animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would take artists hours of tedious and time-consuming work. In particular, where shapes change frequently, our grammar can add motion detail to large collections of shapes, with greater control over per-shape animations and a compact rule structure.
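
    A minimal sketch of the unified spatio-temporal representation described above, assuming a shape is a list of 2D vertices and an operator maps (vertex position, time) to a transformed position; the operator names are illustrative, not the paper's grammar.

    ```python
    import math

    def rotate(speed):
        # Grammar-style operator: a time-varying 2D rotation about the origin.
        def op(x, y, t):
            a = speed * t
            return (math.cos(a) * x - math.sin(a) * y,
                    math.sin(a) * x + math.cos(a) * y)
        return op

    def translate(vx, vy):
        # Constant-velocity translation over time.
        def op(x, y, t):
            return (x + vx * t, y + vy * t)
        return op

    def compose(f, g):
        # Operators compose, which is what lets a few rules build rich motion.
        def op(x, y, t):
            return f(*g(x, y, t), t)
        return op

    def animate(shape, op, n_frames, dt=1.0 / 30.0):
        # Sample the transform uniformly in time, applying it to every vertex.
        return [[op(x, y, f * dt) for (x, y) in shape] for f in range(n_frames)]

    square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
    frames = animate(square, compose(translate(0.5, 0.0), rotate(2.0)), 60)
    ```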

    3DFlow: Continuous Summarization of Mesh Editing Workflows

    Mesh editing software is continually improving, allowing more detailed meshes to be created efficiently by skilled artists. Many of these artists are interested in sharing not only the final mesh but also their whole workflow, both for creating tutorials and for showcasing the artist's talent, style, and expertise. Unfortunately, while mesh creation tools are improving quickly, sharing editing workflows remains cumbersome, since time-lapsed or sped-up videos remain the most common medium. In this paper, we present 3DFlow, an algorithm that computes continuous summarizations of mesh editing workflows. 3DFlow takes as input a sequence of meshes and outputs a visualization of the workflow summarized at any level of detail. The output is enhanced by highlighting edited regions and, if provided, overlaying visual annotations that indicate the artist's work, e.g. summarizing brush strokes in sculpting. We tested 3DFlow with a large set of inputs using a variety of mesh editing techniques, from digital sculpting to low-poly modeling, and found that 3DFlow performed well for all of them. Furthermore, 3DFlow is independent of the modeling software used, since it requires only mesh snapshots, using additional information only for optional overlays. We open-source 3DFlow for artists to showcase their work and release all our datasets so other researchers can improve upon it.
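
    Since 3DFlow works from mesh snapshots alone, one of its basic ingredients is detecting which regions changed between snapshots. The naive sketch below assumes corresponding vertex indexing between consecutive snapshots; the real system must also handle topology changes and summarize at multiple levels of detail.

    ```python
    import numpy as np

    def edited_vertices(verts_a, verts_b, threshold=1e-5):
        # Flag vertices that moved more than `threshold` between two snapshots.
        return np.linalg.norm(verts_b - verts_a, axis=1) > threshold

    def summarize(snapshots, min_edit_fraction=0.05):
        # Crude stand-in for a summarization level of detail: keep a snapshot
        # only if a meaningful fraction of its vertices changed since the
        # last snapshot that was kept.
        kept = [snapshots[0]]
        for mesh in snapshots[1:]:
            if edited_vertices(kept[-1], mesh).mean() >= min_edit_fraction:
                kept.append(mesh)
        return kept
    ```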

    Toward Evaluating Lighting Design Interface Paradigms for Novice Users

    Lighting design is a complex and fundamental task in computer cinematography, involving the adjustment of light parameters to define the final scene appearance. Many lighting interfaces have been proposed to improve the lighting design workflow. They fall into three paradigm categories: direct light parameter manipulation, indirect light feature manipulation (e.g., shadow dragging), and goal-based optimization of lights through painting. To date, no formal evaluation of the relative effectiveness of these methods has been performed. In this paper, we present a first step toward evaluating the three paradigms in the form of a user study with novice users. We focus our evaluation on simple tasks that directly affect lighting features, such as highlights, shadows and intensity gradients, in scenes with up to 2 point lights and 5 objects under direct illumination. We perform quantitative experiments to measure the relative efficiency of the interfaces, together with qualitative input to explore the intuitiveness of the paradigms. Our results indicate that paint-based goal specification is more cumbersome than either direct or indirect manipulation. Furthermore, our investigation suggests improvements not only to the implementation of the paradigms, but also to the overall paradigm structure, for further exploration.
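
    To make the paradigms concrete, below is a hedged sketch of the indirect paradigm (shadow dragging): the user drags a shadow point, and the system solves for the light position rather than having the user edit it directly. The fixed-height convention and all names are assumptions for illustration, not the study's implementation.

    ```python
    import numpy as np

    def light_from_shadow(occluder, shadow_target, light_height):
        # A point light, an occluder point, and the shadow it casts are
        # collinear. Given the occluder and the dragged shadow position on
        # the ground plane (y = 0), place the light on that line at a
        # fixed height.
        o = np.asarray(occluder, dtype=float)
        s = np.asarray(shadow_target, dtype=float)
        direction = o - s                         # from shadow toward occluder
        t = (light_height - s[1]) / direction[1]  # assumes occluder above ground
        return s + t * direction

    light = light_from_shadow(occluder=(0.0, 1.0, 0.0),
                              shadow_target=(2.0, 0.0, 0.0),
                              light_height=3.0)
    # light == [-4., 3., 0.]; the occluder's shadow now falls at the dragged point.
    ```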

    Light-Based Sample Reduction Methods for Interactive Relighting of Scenes with Minute Geometric Scale

    Rendering production-quality cinematic scenes requires high computational and temporal costs. From an artist's perspective, one must wait several hours for feedback on even minute changes to light positions and parameters. Previous work approximates scenes so that adjustments to lights can be carried out with interactive feedback, as long as geometry and materials remain constant. We build on these methods by proposing means by which objects with high geometric complexity at the subpixel level, such as hair and foliage, can be approximated for real-time cinematic relighting. Our methods make no assumptions about the geometry or shaders in a scene, and as such are fully generalized. We show that clustering techniques can greatly reduce multisampling while still maintaining image fidelity, at an error significantly lower than sparse sampling without clustering, provided that no shadows are computed. Scenes that produce noise-like shadow patterns when sparse shadow samples are taken suffer from additional error introduced by those shadows. We present a viable solution to scalable scene approximation at lower sampling resolutions, provided that a robust solution to shadow approximation for sub-pixel geometry can be found in the future.
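
    A minimal sketch of the clustering idea, assuming each subpixel sample carries a surface normal and that binning normals is a good enough grouping; the attributes and the shading callback are illustrative stand-ins, not the paper's method.

    ```python
    import numpy as np

    def cluster_samples(normals, n_bins=8):
        # Quantize unit normals into a coarse 3D grid of bins; each occupied
        # bin becomes one cluster.
        keys = np.clip(np.floor((normals + 1.0) * 0.5 * n_bins), 0, n_bins - 1)
        keys = keys.astype(int)
        return keys[:, 0] * n_bins * n_bins + keys[:, 1] * n_bins + keys[:, 2]

    def shade_pixel(normals, shade_fn):
        # Shade one representative per cluster instead of every sample,
        # weighting each cluster by the fraction of samples it covers.
        clusters = cluster_samples(normals)
        color = np.zeros(3)
        for c in np.unique(clusters):
            members = normals[clusters == c]
            rep = members.mean(axis=0)
            color += shade_fn(rep) * (len(members) / len(normals))
        return color

    # Toy usage: 256 random unit normals, shaded by a stand-in shader.
    ns = np.random.randn(256, 3)
    ns /= np.linalg.norm(ns, axis=1, keepdims=True)
    pixel = shade_pixel(ns, shade_fn=lambda n: np.abs(n))
    ```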

    Toward Evaluating Progressive Rendering Methods in Appearance Design Tasks

    Progressive rendering is becoming a popular alternative to precomputation approaches for appearance design tasks. Images created by different progressive algorithms exhibit various kinds of visual artifacts at the early stages of computation. We present a user study that investigates the effects of these artifacts on user performance in appearance design tasks. Specifically, we ask both novice and expert subjects to perform lighting and material editing tasks with the following algorithms: random path tracing, quasi-random path tracing, progressive photon mapping, and virtual point light (VPL) rendering. Data collected from the experiments suggest that path tracing is strongly preferred to progressive photon mapping and VPL rendering by both experts and novices. There is no indication that quasi-random path tracing is systematically preferred to random path tracing or vice versa; the same holds between progressive photon mapping and VPL rendering. Interestingly, we did not observe any significant difference in user workflow across the different algorithms. As can be expected, experts are faster and more accurate than novices, but surprisingly both groups have similar subjective preferences and workflow.
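
    For context, the four algorithms compared share a common progressive structure: each pass yields a noisy image estimate, and the display is the running average, so early artifacts fade as passes accumulate. A toy sketch, with `render_pass` standing in for any of the studied methods:

    ```python
    import numpy as np

    def progressive_render(render_pass, shape, n_passes, on_update=None):
        accum = np.zeros(shape)
        for i in range(1, n_passes + 1):
            accum += render_pass()        # one noisy pass of the renderer
            if on_update:
                on_update(accum / i, i)   # refresh the editing view mid-render
        return accum / n_passes

    # Toy usage: a "renderer" that returns a noisy constant image; the
    # running average converges to 0.5 as passes accumulate.
    image = progressive_render(
        render_pass=lambda: 0.5 + np.random.normal(0.0, 0.1, (4, 4)),
        shape=(4, 4),
        n_passes=64)
    ```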