Reconstruction of machine-made shapes from bitmap sketches
We propose a method of reconstructing 3D machine-made shapes from
bitmap sketches by separating an input image into individual patches and
jointly optimizing their geometry. We rely on two main observations: (1)
human observers interpret sketches of man-made shapes as a collection of
simple geometric primitives, and (2) sketch strokes often indicate occlusion
contours or sharp ridges between those primitives. Building on these observations, we design a system that takes a single bitmap image of a shape, estimates depth and a segmentation into primitives with neural networks,
and then fits primitives to the predicted depth while determining occlusion contours and aligning intersections with the input drawing via optimization.
Unlike previous work, our approach does not require additional input, annotation, or templates, and does not require retraining for a new category
of man-made shapes. Our method produces triangular meshes that display
sharp geometric features and are suitable for downstream applications, such
as editing, rendering, and shading.
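The fitting stage can be illustrated with a toy version: back-project the predicted depth map into 3D under a pinhole camera and fit one plane per segmentation label by least squares. Planes stand in for the paper's richer primitive set, and the focal lengths and centred principal point are illustrative assumptions, not the authors' actual optimization.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (normal, offset) with n . p = d."""
    centroid = points.mean(axis=0)
    # SVD of the centred points; the smallest singular vector is the normal
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

def fit_primitives(depth, seg, fx=1.0, fy=1.0):
    """Back-project a depth map to 3D and fit one plane per segment label."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([(xs - w / 2) * depth / fx,
                    (ys - h / 2) * depth / fy,
                    depth], axis=-1)
    return {label: fit_plane(pts[seg == label].reshape(-1, 3))
            for label in np.unique(seg)}
```

A full system would alternate such per-primitive fits with the joint alignment of intersections and occlusion contours described above.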
AdaptNet: Policy Adaptation for Physics-Based Character Control
Motivated by humans' ability to adapt existing skills when learning new ones,
this paper presents AdaptNet, an approach that modifies the latent space of
existing policies so that new behaviors can be learned from related tasks
much more quickly than from scratch. Building on top of a given
reinforcement learning controller, AdaptNet uses a two-tier hierarchy that
augments the original state embedding to support modest changes in a behavior
and further modifies the policy network layers to make more substantive
changes. The technique is shown to be effective for adapting existing
physics-based controllers to a wide range of new styles for locomotion, new
task targets, changes in character morphology, and extensive changes in the
environment. Furthermore, it exhibits a significant increase in learning
efficiency, as indicated by greatly reduced training times when compared to
training from scratch or using other approaches that modify existing policies.
Code is available at https://motion-lab.github.io/AdaptNet.
Comment: SIGGRAPH Asia 2023. Video: https://youtu.be/WxmJSCNFb28. Website: https://motion-lab.github.io/AdaptNet, https://pei-xu.github.io/AdaptNe
Interactive procedural simulation of paper tearing with sound
We present a phenomenological model for the real-time simulation of paper tearing and sound. The model takes as input the rotations of the hands, along with the index fingers and thumbs of the left and right hands, to drive the position and orientation of two regions of a sheet of paper. The motion of the hands produces a cone-shaped deformation of the paper and guides the formation and growth of the tear. We create a model for the direction of the tear based on empirical observation, and add detail to the tear with a directed noise model. Furthermore, we present a procedural sound synthesis method to produce tearing sounds during interaction. We show a variety of paper tearing examples and discuss applications and limitations.
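The direction-plus-noise idea can be sketched as a polyline that grows along a mean tear direction with bounded random deviation. This is a toy stand-in for the paper's empirically derived direction model; the function name, parameters, and uniform noise are all assumptions for illustration.

```python
import random

def tear_path(start, base_dir, steps, noise=0.2, seed=0):
    """Grow a tear as a polyline: a mean direction plus directed noise."""
    rng = random.Random(seed)
    x, y = start
    dx, dy = base_dir
    path = [(x, y)]
    for _ in range(steps):
        # each step follows the base direction, jittered within +/- noise
        x += dx + rng.uniform(-noise, noise)
        y += dy + rng.uniform(-noise, noise)
        path.append((x, y))
    return path
```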
Preserving Topology and Elasticity for Embedded Deformable Models
In this paper we introduce a new approach for the embedding of linear elastic deformable models. Our technique results in significant improvements in the efficient physically based simulation of highly detailed objects. First, our embedding takes topological details into account: disconnected parts that fall into the same coarse element are simulated independently. Second, we account for varying material properties by computing stiffness and interpolation functions for coarse elements that accurately approximate the behaviour of the embedded material. Finally, we also take into account empty space in the coarse embeddings, which provides a better simulation of the boundary. The result is a straightforward approach to simulating complex deformable models with the ease and speed associated with a coarse regular embedding, and with a quality of detail that would only be possible at a much finer resolution.
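The topological step, detecting disconnected parts that share a coarse element, amounts to a connected-component labelling of the fine material inside each coarse cell. A minimal 2D sketch under that assumption (a boolean fine-voxel grid and 4-neighbour connectivity, both simplifications of the 3D setting):

```python
from collections import deque

def components_in_cell(occupied):
    """Label connected components of fine voxels inside one coarse cell.

    Disconnected parts receive distinct labels, so an embedding can
    duplicate the coarse degrees of freedom per part and simulate them
    independently.
    """
    h, w = len(occupied), len(occupied[0])
    label = [[-1] * w for _ in range(h)]
    n = 0
    for i in range(h):
        for j in range(w):
            if occupied[i][j] and label[i][j] < 0:
                # breadth-first flood fill over 4-neighbours
                q = deque([(i, j)])
                label[i][j] = n
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < h and 0 <= y < w \
                                and occupied[x][y] and label[x][y] < 0:
                            label[x][y] = n
                            q.append((x, y))
                n += 1
    return label, n
```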
Finding Common Ground: A Survey of Capacitive Sensing in Human-Computer Interaction
For more than two decades, capacitive sensing has played a prominent role in human-computer interaction research. Capacitive sensing has become ubiquitous on mobile, wearable, and stationary devices---enabling fundamentally new interaction techniques on, above, and around them. The research community has also enabled human position estimation and whole-body gestural interaction in instrumented environments. However, the broad field of capacitive sensing research has become fragmented by different approaches and terminology used across the various domains. This paper strives to unify the field by advocating consistent terminology and proposing a new taxonomy to classify capacitive sensing approaches. Our extensive survey provides an analysis and review of past research and identifies challenges for future work. We aim to create a common understanding within the field of human-computer interaction, for researchers and practitioners alike, and to stimulate and facilitate future research in capacitive sensing.
Fast contact evolution for piecewise smooth surfaces
Dynamics simulation of smooth bodies in contact is a critical problem in
physically based animation and interactive virtual environments. We describe a
technique which uses reduced coordinates to evolve a single continuous contact between
Loop subdivision surfaces. The incorporation of both slip and no-slip friction
into our algorithm is straightforward. The dynamics equations, though slightly
more complex due to the reduced coordinate formulation, can be integrated easily
using explicit integrators without the need for constraint stabilization. The use
of reduced coordinates also confines integration errors to lie within the constraint
manifold which is preferable for visualization.
Our algorithm is suitable for piecewise parametric or parameterizable surfaces
with polygonal domain boundaries. Because a contact will not always remain
in the same patch, we demonstrate how a contact can be evolved across patch boundaries.
We also address the issue of non-regular parameterizations occurring in Loop
subdivision surfaces through surface replacement with n-sided S-patch surfaces.
Three simulations show our results. We partially verify our technique first
with a frictionless system and then with a blob sliding and rolling inside a bowl. Our
third simulation shows that our formulation correctly predicts the spin reversal of a
rattleback top. We also present timings of the various components of the algorithm.
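Evolving a contact across patch boundaries can be sketched with a single explicit step in (u, v) patch coordinates: when the point exits through an edge, it is re-expressed in the neighbouring patch's chart. The `east_of` adjacency map and the single-edge handoff below are illustrative assumptions; real subdivision surfaces need a full adjacency structure and coordinate transforms between charts.

```python
def advance_contact(patch, u, v, du, dv, east_of):
    """One explicit step of a contact point in (u, v) patch coordinates.

    If the point leaves through the u = 1 edge, it is handed off to the
    east neighbour by subtracting 1 from u (shared-edge parameterization
    assumed).
    """
    u, v = u + du, v + dv
    if u > 1.0:
        patch, u = east_of[patch], u - 1.0
    return patch, u, v
```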
Interaction capture and synthesis of human hands
This thesis addresses several issues in modelling interaction with human hands
in computer graphics and animation. Modifying motion capture to satisfy the
constraints of a new animation is difficult when contact is involved, because
physical interaction involves energy or power transfer between the system of
interest and the environment; this is a critical problem for computer animation of hands. Although
contact force measurements provide a means of monitoring this transfer, motion capture
as currently used for creating animation has largely ignored contact forces. We
present a system of capturing synchronized motion and contact forces, called interaction
capture. We transform interactions such as grasping into joint compliances and a
nominal reference trajectory in an approach inspired by the equilibrium point hypothesis
of human motor control. New interactions are synthesized through simulation of a
quasi-static compliant articulated model in a dynamic environment that includes friction.
This uses a novel position-based linear complementarity problem formulation
that includes friction, breaking contact, and coupled compliance between contacts
at different fingers. We present methods for reliable interaction capture, addressing
calibration, force estimation, and synchronization. Additionally, although joint compliances
are traditionally estimated with perturbation-based methods, we introduce
a technique that instead produces estimates without perturbation. We validate our
results with data from previous work and our own perturbation-based estimates. A
complementary goal of this work is hand-based interaction in virtual environments.
We present techniques for whole-hand interaction using the Tango, a novel sensor
that performs interaction capture by measuring pressure images and accelerations.
We approximate grasp hand-shapes from previously observed data through rotationally
invariant comparison of pressure measurements. We also introduce methods
involving heuristics and thresholds that make reliable drift-free navigation possible
with the Tango. Lastly, rendering the skin deformations of articulated characters
is a fundamental problem for computer animation of hands. We present a deformation
model, called EigenSkin, which provides a means of rendering physically- or
example-based deformation models at interactive rates on graphics hardware.
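The thesis's formulation couples friction, breaking contact, and compliance between fingers; as a minimal illustration of the complementarity machinery alone, here is a frictionless normal-force LCP solved with projected Gauss-Seidel. This is a standard textbook solver, not the thesis's position-based formulation.

```python
import numpy as np

def pgs_lcp(A, b, iters=100):
    """Projected Gauss-Seidel for the LCP: 0 <= x  complementary to  A x + b >= 0.

    Each sweep solves row i for x[i] holding the others fixed, then
    projects onto the non-negative orthant (forces cannot pull).
    """
    x = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] + A[i] @ x - A[i, i] * x[i]
            x[i] = max(0.0, -r / A[i, i])
    return x
```

At the solution, each contact either carries a positive force with zero gap velocity or carries no force, which is exactly the breaking-contact condition the thesis's richer formulation also enforces.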