Kinematic Motion Retargeting for Contact-Rich Anthropomorphic Manipulations
Hand motion capture data is now relatively easy to obtain, even for
complicated grasps; however, this data is of limited use without the ability to
retarget it onto the hands of a specific character or robot. The target hand
may differ dramatically in geometry, number of degrees of freedom (DOFs), or
number of fingers. We present a simple but effective framework capable of
kinematically retargeting multiple human hand-object manipulations from a
publicly available dataset to a wide assortment of kinematically and
morphologically diverse target hands through the exploitation of contact areas.
We do so by formulating the retargeting operation as a non-isometric shape
matching problem and using a combination of both surface contact and marker data
to progressively estimate, refine, and fit the final target hand trajectory
using inverse kinematics (IK). Foundational to our framework is the
introduction of a novel shape matching process, which we show enables
predictable and robust transfer of contact data over full manipulations while
providing an intuitive means for artists to specify correspondences with
relatively few inputs. We validate our framework through thirty demonstrations
across five different hand shapes and six motions of different objects. We
additionally compare our method against existing hand retargeting approaches.
Finally, we demonstrate that our method enables novel capabilities such as
object substitution and visualizing the impact of design choices over
full trajectories.
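The final IK fitting step can be sketched in miniature. The following is a toy illustration only, assuming a 2-link planar finger driven toward a single target contact point by damped least-squares IK; the paper's actual solver, hand models, and contact formulation are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def fk(thetas, lengths):
    """Forward kinematics: fingertip position of a planar joint chain."""
    angle, pos = 0.0, np.zeros(2)
    for t, l in zip(thetas, lengths):
        angle += t
        pos += l * np.array([np.cos(angle), np.sin(angle)])
    return pos

def ik_fit(target, lengths, iters=300, step=0.5):
    """Damped least-squares IK: drive the fingertip toward a target contact point."""
    thetas = np.zeros(len(lengths))
    eps = 1e-5
    for _ in range(iters):
        err = target - fk(thetas, lengths)
        # Numerical Jacobian of fingertip position w.r.t. joint angles.
        J = np.zeros((2, len(thetas)))
        for j in range(len(thetas)):
            d = np.zeros(len(thetas)); d[j] = eps
            J[:, j] = (fk(thetas + d, lengths) - fk(thetas, lengths)) / eps
        # Damped pseudo-inverse update; the damping term keeps the solve
        # stable near singular configurations (e.g. a fully extended finger).
        dtheta = J.T @ np.linalg.solve(J @ J.T + 1e-4 * np.eye(2), err)
        thetas += step * dtheta
    return thetas
```

A full retargeting pipeline would stack one such residual per contact point and marker, but the core fit-by-IK loop has this shape.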
Learning Silhouette Features for Control of Human Motion
We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system.
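The database lookup at the core of such a pipeline, matching a silhouette feature vector against precomputed features of motion-capture poses, can be reduced to a minimal nearest-neighbour sketch. The paper's learned discriminative features and temporal scaling are omitted; all names here are illustrative, not the authors' API.

```python
import numpy as np

def nearest_pose(query_features, db_features, db_poses):
    """Return the database pose whose silhouette features best match the query.

    query_features: 1-D feature vector extracted from the live silhouettes.
    db_features:    (N, D) array of precomputed features, one row per pose.
    db_poses:       N pose records aligned with the rows of db_features.
    """
    dists = np.linalg.norm(db_features - query_features, axis=1)
    return db_poses[np.argmin(dists)]
```

In practice the match would be run over short motion sequences rather than single poses, with the retrieved segments time-scaled to the performer's speed.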
Motion synthesis for sports using unobtrusive lightweight body-worn and environment sensing
The ability to accurately achieve performance capture of athlete motion during competitive play in near real-time promises to revolutionise not only broadcast sports graphics visualisation and commentary, but also potentially performance analysis, sports medicine, fantasy sports and wagering. In this paper, we present a highly portable, non-intrusive approach for synthesising human athlete motion in competitive game-play with lightweight instrumentation of both the athlete and field of play. Our data-driven puppetry technique relies on a pre-captured database of short segments of motion capture data to construct a motion graph augmented with interpolated motions and speed variations. An athlete's performed motion is synthesised by finding a related action sequence through the motion graph using a sparse set of measurements from the performance, acquired from both worn inertial and global location sensors. We demonstrate the efficacy of our approach in a challenging application scenario, with a high-performance tennis athlete wearing one or more lightweight body-worn accelerometers and a single overhead camera providing the athlete's global position and orientation data. However, the approach is flexible in both the number and variety of input sensor data used. The technique can also be adopted for searching a motion graph efficiently in linear time in alternative applications.
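A Viterbi-style dynamic program is one plausible way to realise the described search for a related action sequence through a motion graph. The sketch below assumes a per-segment mismatch cost `seg_cost` between a motion segment's predicted sensor signature and the measurement at each step; this interface is an assumption for illustration, not the paper's API. The work grows linearly with the number of steps, consistent with the linear-time claim above.

```python
def best_motion_path(graph, seg_cost, start_nodes, T):
    """Viterbi-style DP over a motion graph.

    graph:       dict mapping each motion segment to its valid successors.
    seg_cost(n, t): mismatch between segment n's predicted sensor signature
                    and the sparse measurement at step t (assumed given).
    Returns the minimum-cost sequence of T motion segments.
    """
    # Best (cost, path) reaching each segment so far.
    best = {n: (seg_cost(n, 0), [n]) for n in start_nodes}
    for t in range(1, T):
        nxt = {}
        for n, (c, path) in best.items():
            for m in graph.get(n, []):
                cand = c + seg_cost(m, t)
                if m not in nxt or cand < nxt[m][0]:
                    nxt[m] = (cand, path + [m])
        best = nxt
    return min(best.values(), key=lambda v: v[0])[1]
```

Each step touches every graph edge at most once, so the total cost is O(T · |E|), linear in the length of the performance.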
A Visualization Framework for Team Sports Captured using Multiple Static Cameras
DOI: http://dx.doi.org/10.1016/j.cviu.2013.09.006
We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this
location information to generate a visualization of the virtual offside line in soccer games. To compute the position of the offside line,
we need to localize players' positions, and identify their team roles. We solve the problem of fusing corresponding players' positional
information by finding minimum weight K-length cycles in a complete K-partite graph. Each partite of the graph corresponds to one of
the K cameras, whereas each node of a partite encodes the position and appearance of a player observed from a particular camera.
To find the minimum weight cycles in this graph, we use a dynamic programming based approach that varies over a continuum from
maximally to minimally greedy in terms of the number of graph-paths explored at each iteration. We present proofs for the efficiency
and performance bounds of our algorithms. Finally, we demonstrate the robustness of our framework by testing it on 82,000 frames of
soccer footage captured over eight different illumination conditions, play types, and team attire. Our framework runs in near-real time,
and processes video from three full-HD cameras in about 0.4 seconds per set of three corresponding frames.
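The minimum-weight K-length cycle search can be illustrated with a small beam-search DP, where the beam width plays the role of the greediness continuum described above: a beam of 1 is maximally greedy, while an unbounded beam keeps every partial path and is exhaustive. This is a simplified sketch with hypothetical names, not the authors' implementation.

```python
def min_weight_cycles(partites, weight, beam=None):
    """For each node in the first partite, find a minimum-weight cycle visiting
    one node per partite (i.e. one detection per camera).

    partites: list of K lists of nodes (detections grouped by camera).
    weight(u, v): edge cost, e.g. positional/appearance mismatch (assumed given).
    beam=1 is maximally greedy; beam=None keeps all partial paths (exhaustive).
    """
    cycles = []
    for start in partites[0]:
        frontier = [(0.0, [start])]
        for layer in partites[1:]:
            # Extend every surviving partial path into the next partite,
            # then prune to the `beam` cheapest ([:None] keeps them all).
            frontier = sorted(
                (c + weight(path[-1], v), path + [v])
                for c, path in frontier for v in layer
            )[:beam]
        # Close the cycle back to the start node.
        cycles.append(min(
            ((c + weight(path[-1], start), path) for c, path in frontier),
            key=lambda x: x[0],
        ))
    return cycles
```

Sweeping `beam` from 1 upward trades solution quality against the number of graph paths explored per iteration, mirroring the greedy-to-exhaustive continuum in the text.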
Two Methods for Display of High Contrast Images
High contrast images are common in night scenes and other scenes that include dark shadows and bright light sources. These scenes are difficult to display because their contrasts greatly exceed the range of most display devices for images. As a result, the image contrasts are compressed or truncated, obscuring subtle textures and details. Humans view and understand high contrast scenes easily, "adapting" their visual response to avoid compression or truncation with no apparent loss of detail. By imitating some of these visual adaptation processes, we developed two methods for the improved display of high contrast images. The first builds a display image from several layers of lighting and surface properties. Only the lighting layers are compressed, drastically reducing contrast while preserving much of the image detail. This method is practical only for synthetic images where the layers can be retained from the rendering process. The second method interactively adjusts the displayed image to preserve local contrasts in a small "foveal" neighborhood. Unlike the first method, this technique is usable on any image and includes a new tone reproduction operator. Both methods use a sigmoid function for contrast compression. This function has no effect when applied to small signals but compresses large signals to fit within an asymptotic limit. We demonstrate the effectiveness of these approaches by comparing processed and unprocessed images.
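The abstract does not give the exact sigmoid, but a representative member of this family — near-identity for signals small relative to an adaptation level, asymptotically bounded for large ones — is the luminance mapping L / (L + a). The form and the parameter name below are assumptions for illustration, not the paper's operator.

```python
import numpy as np

def sigmoid_compress(luminance, adapt):
    """Sigmoid contrast compression mapping world luminance into [0, 1).

    For luminance << adapt the mapping is approximately linear (L / adapt),
    so small signals pass through nearly unchanged; for luminance >> adapt
    the output approaches the asymptotic limit 1, compressing large signals.
    """
    l = np.asarray(luminance, dtype=float)
    return l / (l + adapt)
```

This makes the stated behaviour concrete: a six-order-of-magnitude luminance range is squeezed into the displayable unit interval while dim-region detail keeps its local contrast.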