Scalable partitioning for parallel position based dynamics
We introduce a practical partitioning technique designed for parallelizing Position Based Dynamics and exploiting the many-core processors present in current commodity GPUs. The input is a set of particles whose dynamics are influenced by spatial constraints. In the initialization phase, we build a graph in which each node corresponds to a constraint and two constraints are connected by an edge if they influence at least one common particle. We introduce a novel greedy algorithm for inserting additional constraints (phantoms) into the graph such that the resulting topology is q-colourable, where q ≥ 2 is an arbitrary number. We color the graph, and constraints with the same color are assigned to the same partition. The set of constraints belonging to each partition is then solved in parallel during the animation phase. We demonstrate that with our partitioning technique the performance hit caused by GPU kernel calls is significantly decreased, leaving the visual quality, robustness and speed of serial position based dynamics unaffected.
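As an illustration of the constraint-graph construction and coloring described above, here is a minimal Python sketch. The graph building follows the abstract directly; the greedy q-coloring and the phantom handling are simplified stand-ins, since the paper's actual phantom-insertion algorithm restructures the graph rather than merely flagging over-constrained nodes.

```python
from collections import defaultdict

def build_constraint_graph(constraints):
    """Nodes are constraint indices; two constraints share an edge
    if they influence at least one common particle."""
    by_particle = defaultdict(list)
    for i, particles in enumerate(constraints):
        for p in particles:
            by_particle[p].append(i)
    adj = defaultdict(set)
    for members in by_particle.values():
        for a in members:
            for b in members:
                if a != b:
                    adj[a].add(b)
    return adj

def greedy_partition(constraints, q):
    """Greedy q-coloring; colors map directly to solver partitions.
    Constraints that cannot fit within q colors are recorded as needing
    a phantom (a hypothetical stand-in for the paper's algorithm)."""
    adj = build_constraint_graph(constraints)
    order = sorted(range(len(constraints)), key=lambda n: -len(adj[n]))
    color, needs_phantom = {}, []
    for node in order:
        used = {color[n] for n in adj[node] if n in color}
        free = [c for c in range(q) if c not in used]
        if free:
            color[node] = free[0]
        else:
            needs_phantom.append(node)
            color[node] = 0  # placeholder partition
    return color, needs_phantom

# Each constraint is the set of particle indices it influences.
constraints = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
print(greedy_partition(constraints, q=2))
```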
gMotion: A spatio-temporal grammar for the procedural generation of motion graphics
Creating compelling 2D animations that choreograph several groups of shapes by hand requires a large number of manual edits. We present a method to procedurally generate motion graphics with timeslice grammars. Timeslice grammars are to time what split grammars are to space. We use this grammar to formally model motion graphics, manipulating them in both their temporal and spatial components. We combine both aspects by representing animations as sets of affine transformations sampled uniformly in both space and time. Rules and operators in the grammar manipulate all spatio-temporal matrices as a whole, allowing us to construct expressive animations with few rules. The grammar animates shapes, represented as highly tessellated polygons, by applying the affine transforms to each shape vertex given the vertex position and the animation time. We introduce a small set of operators and show how we can produce 2D animations of geometric objects by combining the expressive power of the grammar model, the composability of the operators with themselves, and the capabilities that derive from using a unified spatio-temporal representation for animation data. Throughout the paper, we show how timeslice grammars can produce a wide variety of animations that would otherwise take artists hours of tedious and time-consuming work. In particular, in cases where shapes change frequently, our grammar can add motion detail to large collections of shapes with greater control over per-shape animations and a compact rule structure.
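As a minimal sketch of the unified spatio-temporal representation described above (not the gMotion grammar itself), the following snippet samples affine matrices uniformly in time and applies them to every vertex of a polygon; rotate_rule is a hypothetical stand-in for a grammar rule.

```python
import math

def sample_animation(rule, num_frames):
    """An animation is a uniform time-sampling of affine matrices
    produced by a rule (here a plain function; in gMotion, rules and
    operators manipulate these matrices as a whole)."""
    return [rule(t / (num_frames - 1)) for t in range(num_frames)]

def rotate_rule(t):
    """Hypothetical rule: one full rotation over the clip."""
    a = 2 * math.pi * t
    return [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a),  math.cos(a), 0.0]]

def apply_affine(m, shape):
    """Transform every vertex of a (highly tessellated) polygon."""
    return [(m[0][0] * x + m[0][1] * y + m[0][2],
             m[1][0] * x + m[1][1] * y + m[1][2]) for x, y in shape]

diamond = [(1, 0), (0, 1), (-1, 0), (0, -1)]
frames = [apply_affine(m, diamond)
          for m in sample_animation(rotate_rule, 24)]
```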
MeshGit: Diffing and Merging Polygonal Meshes
This paper presents MeshGit, a practical algorithm for diffing and merging polygonal meshes. Inspired by version control for text editing, we introduce the mesh edit distance as a measure of the dissimilarity between meshes. This distance is defined as the minimum cost of matching the vertices and faces of one mesh to those of another. We propose an iterative greedy algorithm to approximate the mesh edit distance, which scales well with model complexity, providing a practical solution to our problem. We translate the mesh correspondence into a set of mesh editing operations that transforms the first mesh into the second. The editing operations can be displayed directly to provide a meaningful visual difference between meshes. For merging, we compute the differences between two versions and their common ancestor as sets of editing operations. We robustly detect conflicting operations, automatically apply non-conflicting edits, and allow the user to choose how to merge the conflicting edits. We evaluate MeshGit by diffing and merging a variety of meshes and find that it works well in all cases.
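A rough sketch of the greedy-matching flavor of this approach is shown below, matching vertices only with a plain Euclidean cost and a flat charge for unmatched vertices; the actual MeshGit cost model also matches faces and accounts for adjacency.

```python
def greedy_match_cost(verts_a, verts_b, unmatched_cost=1.0):
    """Toy approximation of a mesh edit distance: greedily pair the
    closest remaining vertices, then charge a fixed cost for every
    vertex left unmatched (a deletion or an addition)."""
    pairs = sorted(
        (sum((a - b) ** 2 for a, b in zip(va, vb)) ** 0.5, i, j)
        for i, va in enumerate(verts_a)
        for j, vb in enumerate(verts_b))
    used_a, used_b, cost, matches = set(), set(), 0.0, []
    for d, i, j in pairs:
        # Skip already-matched vertices and pairs costlier than
        # simply leaving both unmatched.
        if i in used_a or j in used_b or d > unmatched_cost:
            continue
        used_a.add(i); used_b.add(j)
        matches.append((i, j))
        cost += d
    cost += unmatched_cost * (len(verts_a) - len(used_a))  # deletions
    cost += unmatched_cost * (len(verts_b) - len(used_b))  # additions
    return cost, matches

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
print(greedy_match_cost(a, b))  # small move plus one added vertex
```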
3DFlow: Continuous Summarization of Mesh Editing Workflows
Mesh editing software is continually improving, allowing more detailed meshes to be created efficiently by skilled artists. Many of these artists are interested in sharing not only the final mesh but also their whole workflow, both for creating tutorials and for showcasing the artist's talent, style, and expertise. Unfortunately, while mesh creation tools are improving quickly, sharing editing workflows remains cumbersome, since time-lapsed or sped-up videos remain the most common medium. In this paper, we present 3DFlow, an algorithm that computes continuous summarizations of mesh editing workflows. 3DFlow takes as input a sequence of meshes and outputs a visualization of the workflow summarized at any level of detail. The output is enhanced by highlighting edited regions and, if provided, overlaying visual annotations to indicate the artist's work, e.g. summarizing brush strokes in sculpting. We tested 3DFlow with a large set of inputs using a variety of mesh editing techniques, from digital sculpting to low-poly modeling, and found that 3DFlow performed well in all cases. Furthermore, 3DFlow is independent of the modeling software used, since it requires only mesh snapshots, using additional information only for optional overlays. We open-source 3DFlow so artists can showcase their work, and release all our datasets so other researchers can improve upon it.
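As a toy illustration of summarizing a snapshot sequence at a chosen level of detail (assuming, for simplicity, that all snapshots share a vertex count; 3DFlow's real selection and region highlighting are more involved):

```python
def summarize_workflow(snapshots, level):
    """Score each editing step by how far the mesh vertices moved,
    then keep the `level` largest steps as the summary."""
    def step_change(a, b):
        return sum(sum((p - q) ** 2 for p, q in zip(va, vb)) ** 0.5
                   for va, vb in zip(a, b))
    scores = [(step_change(snapshots[i], snapshots[i + 1]), i + 1)
              for i in range(len(snapshots) - 1)]
    keep = sorted(i for _, i in sorted(scores, reverse=True)[:level])
    return [0] + keep  # always show the starting state

# Three snapshots of a 2-vertex "mesh"; step 2 moves vertices the most.
snaps = [[(0.0, 0.0), (1.0, 0.0)],
         [(0.0, 0.1), (1.0, 0.0)],
         [(0.5, 0.9), (1.5, 0.8)]]
print(summarize_workflow(snaps, level=1))  # -> [0, 2]
```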
Visualizing Paths in Context
Data about movement through a space is increasingly becoming available for capture and analysis. In many applications, this data is captured or modeled as transitions between a small number of areas of interest, or a finite set of states, and these transitions constitute paths in the space. Similarities and differences between paths are of great importance to such analyses but can be difficult to assess. In this work we present a visualization approach for representing paths in context, where individual paths can be compared to other paths or to a group of paths. Our approach summarizes path behavior using a simple circular layout, including information about state and transition likelihood using Markov models, together with information about specific path and state behavior. The layout avoids line crossovers entirely, making it easy to observe patterns while reducing visual clutter. In our tool, paths can either be compared in their natural sequence or by aligning multiple paths using Multiple Sequence Alignment, which can better highlight path similarities. We applied our technique to eye tracking data and cell phone tower data used to capture human movement.
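Two of the ingredients above, estimating transition likelihoods from a set of paths and placing states on a circle, are simple enough to sketch in Python; the crossover-free edge routing of the actual tool is not reproduced here.

```python
import math
from collections import Counter

def transition_probabilities(paths):
    """Estimate first-order Markov transition likelihoods from a set
    of paths over a finite state set."""
    counts, outgoing = Counter(), Counter()
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[(a, b)] += 1
            outgoing[a] += 1
    return {(a, b): n / outgoing[a] for (a, b), n in counts.items()}

def circular_layout(states, radius=1.0):
    """Place states evenly on a circle; a basic stand-in for the
    paper's layout, which also orders states to reduce clutter."""
    step = 2 * math.pi / len(states)
    return {s: (radius * math.cos(i * step), radius * math.sin(i * step))
            for i, s in enumerate(sorted(states))}

paths = [["A", "B", "C"], ["A", "C", "C", "B"]]
probs = transition_probabilities(paths)   # e.g. P(A -> B) = 0.5
layout = circular_layout({s for p in paths for s in p})
```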
CrossComp: Comparing Multiple Artists Performing Similar Modeling Tasks
In two previous papers, we focused on summarizing and visualizing the edits of a single workflow, and on visualizing and merging the edits of two independent workflows. In this paper, we focus on visualizing the similarities and dissimilarities of many workflows in which digital artists perform similar tasks. The tasks were chosen so that each artist starts and ends with a common state. We show how to leverage the previous work to produce a visualization tool that allows for easy scanning through the workflows.
Toward Evaluating Lighting Design Interface Paradigms for Novice Users
Lighting design is a complex and fundamental task in computer cinematography, involving the adjustment of light parameters to define final scene appearance. Many lighting interfaces have been proposed to improve the lighting design workflow. These interfaces fall into three paradigm categories: direct light parameter manipulation, indirect light feature manipulation (e.g., shadow dragging), and goal-based optimization of lights through painting. To date, no formal evaluation of the relative effectiveness of these methods has been performed. In this paper, we present a first step toward evaluating the three paradigms in the form of a user study with novice users. We focus our evaluation on simple tasks that directly affect lighting features, such as highlights, shadows and intensity gradients, in scenes with up to 2 point lights and 5 objects under direct illumination. We perform quantitative experiments to measure relative efficiency between interfaces, together with qualitative input to explore the intuitiveness of the paradigms. Our results indicate that paint-based goal specification is more cumbersome than either direct or indirect manipulation. Furthermore, our investigation suggests improvements not only to the implementation of the paradigms, but also to the overall paradigm structure for further exploration.
Light-Based Sample Reduction Methods for Interactive Relighting of Scenes with Minute Geometric Scale
Rendering production-quality cinematic scenes requires high computational and temporal costs. From an artist's perspective, one must wait for several hours for feedback on even minute changes of light positions and parameters. Previous work approximates scenes so that adjustments to lights may be carried out with interactive feedback, so long as geometry and materials remain constant. We build on these methods by proposing means by which objects with high geometric complexity at the subpixel level, such as hair and foliage, can be approximated for real-time cinematic relighting. Our methods make no assumptions about the geometry or shaders in a scene, and as such are fully generalized. We show that clustering techniques can greatly reduce multisampling while still maintaining image fidelity, at an error significantly lower than sparse sampling without clustering, provided that no shadows are computed. Scenes that produce noise-like shadow patterns when sparse shadow samples are taken suffer from additional error introduced by those shadows. We present a viable solution to scalable scene approximation at lower sampling resolutions, provided that a robust solution to shadow approximation for sub-pixel geometry becomes available in the future.
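As a hedged sketch of the clustering idea, the snippet below runs a toy k-means over per-pixel geometry samples and replaces them with weighted representatives; the paper's actual clustering and error analysis are more sophisticated.

```python
import random

def cluster_samples(samples, k, iters=10, seed=0):
    """Toy k-means over sub-pixel samples (e.g. hair hit points given
    as feature tuples): reduce many samples to k representatives,
    each weighted by the fraction of samples it stands in for."""
    rng = random.Random(seed)
    centers = rng.sample(samples, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in samples:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(s, centers[c])))
            groups[nearest].append(s)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    weights = [len(g) / len(samples) for g in groups]
    return centers, weights

# Four (x, y, luminance) samples collapsing into two representatives.
samples = [(0.10, 0.20, 0.90), (0.12, 0.22, 0.88),
           (0.80, 0.70, 0.10), (0.82, 0.71, 0.12)]
print(cluster_samples(samples, k=2))
```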