Analysis of complex movements
In most everyday repetitive movements, such as walking, sitting, and reaching, humans exhibit a large degree of regularity. At the other end of the movement spectrum, however, in complex movement tasks such as retrieving an object from a cluttered environment or choosing balance positions for transporting a large, unwieldy object, humans are inventive problem solvers. Therefore, in the quest to understand the human movement system, it is essential to know whether general movements exhibit regularities across subjects, as this would provide an essential scaffold for the development of more detailed dynamic movement models.
This research aims to learn the principles behind large-scale arbitrary movements, particularly the variation between subjects. For example, given a goal-directed task, do movements appear similar across subjects, or are they highly individualized? The tasks for the research cover developing an interactive virtual reality environment to capture goal-directed whole-body human movements, gaining insight into the regularities underlying the resulting motion capture data (kinematics), and finally analyzing the corresponding energy cost using a forty-eight-degree-of-freedom dynamic human model (dynamics). The results illustrate that humans choose trajectories that are economical in energetic cost while accomplishing goal-directed tasks.
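The energy-cost analysis described in this abstract can be illustrated with a toy computation: given joint torques and angular velocities over time, a common proxy for energetic cost is the time integral of absolute mechanical joint power. The model size, trajectories, and cost function below are illustrative assumptions, not the thesis's actual 48-DOF implementation.

```python
import numpy as np

def mechanical_energy_cost(torques, velocities, dt):
    """Proxy for energetic cost: integral of absolute joint power.

    torques, velocities: arrays of shape (T, n_dof)
    dt: timestep in seconds
    Returns total absolute mechanical work (joules).
    """
    power = np.abs(torques * velocities)  # |tau * omega| per joint, per step
    return float(power.sum() * dt)

# Toy example: a 48-DOF model over 2 s of motion at 100 Hz
# (random stand-in trajectories, purely for illustration).
rng = np.random.default_rng(0)
T, n_dof, dt = 200, 48, 0.01
tau = rng.normal(0.0, 5.0, size=(T, n_dof))    # joint torques (N·m)
omega = rng.normal(0.0, 1.0, size=(T, n_dof))  # joint velocities (rad/s)
print(mechanical_energy_cost(tau, omega, dt))
```

Comparing this cost across candidate trajectories for the same goal is one simple way to test whether observed movements are economical.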
Speech-driven Animation with Meaningful Behaviors
Conversational agents (CAs) play an important role in human-computer interaction. Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling between gestures and speech. Past studies have mainly relied on rule-based or data-driven approaches. Rule-based methods focus on creating meaningful behaviors that convey the underlying message, but the gestures cannot be easily synchronized with speech. Data-driven approaches, especially speech-driven models, can capture the relationship between speech and gestures; however, they create behaviors that disregard the meaning of the message. This study proposes to bridge the gap between these two approaches, overcoming their limitations. The approach builds a dynamic Bayesian network (DBN) in which a discrete variable is added to condition the behaviors on an underlying constraint. The study implements and evaluates the approach with two constraints: discourse functions and prototypical behaviors. By constraining on discourse functions (e.g., questions), the model learns the characteristic behaviors associated with a given discourse class, learning the rules from the data. By constraining on prototypical behaviors (e.g., head nods), the approach can be embedded in a rule-based system as a behavior realizer, creating trajectories that are synchronized in time with speech. The study proposes a DBN structure and a training approach that (1) model the cause-effect relationship between the constraint and the gestures, (2) initialize the state configuration models, increasing the range of the generated behaviors, and (3) capture the differences in behaviors across constraints by enforcing sparse transitions between shared and exclusive states per constraint. Objective and subjective evaluations demonstrate the benefits of the proposed approach over an unconstrained model.
Comment: 13 pages, 12 figures, 5 tables
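The core idea of a discrete constraint variable gating transitions between shared and exclusive states can be sketched with a plain Markov chain: one transition matrix per constraint, with some states reachable under every constraint and others reachable under only one. This is a minimal illustration of the structural idea, not the paper's DBN or its learned parameters; all probabilities below are invented.

```python
import numpy as np

# Four gesture states: 0-1 shared across constraints,
# 2 exclusive to "question", 3 exclusive to "head_nod".
TRANSITIONS = {
    "question": np.array([
        [0.5, 0.3, 0.2, 0.0],
        [0.3, 0.5, 0.2, 0.0],
        [0.4, 0.4, 0.2, 0.0],
        [0.0, 0.0, 0.0, 1.0],   # state 3 unreachable under this constraint
    ]),
    "head_nod": np.array([
        [0.5, 0.3, 0.0, 0.2],
        [0.3, 0.5, 0.0, 0.2],
        [0.0, 0.0, 1.0, 0.0],   # state 2 unreachable under this constraint
        [0.4, 0.4, 0.0, 0.2],
    ]),
}

def sample_states(constraint, length, seed=0):
    """Sample a gesture-state sequence conditioned on the constraint label."""
    rng = np.random.default_rng(seed)
    P = TRANSITIONS[constraint]
    state, seq = 0, [0]
    for _ in range(length - 1):
        state = rng.choice(4, p=P[state])
        seq.append(state)
    return seq

print(sample_states("question", 10))
```

Because each constraint's matrix zeroes out the other constraint's exclusive state, sampled sequences share the common states 0-1 but differ in which characteristic state they can visit.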
Human Motion Retrieval Using Video or Drawn Sketch
The importance of motion retrieval is increasing nowadays. The majority of existing motion retrieval methods are labor intensive; there has been a recent paradigm shift in the animation industry, with increasing use of pre-recorded movement for animating figures. An essential requirement for using motion capture data is an efficient method for indexing and accessing movements. In this work, a novel sketching interface for this problem is provided. This simple strategy allows the user to specify the desired movement by drawing several motion strokes over a drawn character, which requires less effort and extends the user's expressiveness. To support the real-time interface, specific processing of the movements and of the hand-drawn query is needed. Here we implement the Conjugate Gradient method for retrieving motion from a hand-drawn sketch or video; it is a prominent iterative optimization method that is fast and uses a small amount of storage.
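The abstract names the Conjugate Gradient method but not its objective, so here is a generic sketch of the algorithm itself: CG minimizes a quadratic, i.e., solves A x = b for a symmetric positive-definite A, the kind of system a least-squares sketch-to-motion matching cost could reduce to. The small system used below is illustrative, not from the paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.

    Equivalent to minimizing 0.5 x^T A x - b^T x. Stores only a few
    vectors, which is why CG is fast and memory-light.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate direction
        rs = rs_new
    return x

# Small SPD system as a stand-in for a matching objective.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)
```

For an n x n system, CG needs only matrix-vector products and O(n) extra storage, which matches the abstract's claim of speed and a small memory footprint.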
Implantation of subcutaneous heart rate data loggers in southern elephant seals (Mirounga leonina)
Unlike most phocid species (Phocidae), Mirounga leonina (the southern elephant seal) undergoes a catastrophic moult, replacing not only its hair but also its epidermis while ashore for approximately one month. Few studies have investigated the behavioural and physiological adaptations of southern elephant seals during the moult fast, a particularly energetically costly phase of the life cycle. Recording heart rate is a reliable technique for estimating energy expenditure in the field. For the first time, subcutaneous heart rate data loggers were successfully implanted during the moult in two free-ranging southern elephant seals over 3–6 days. No substantial postoperative complications were encountered, and consistent heart rate data were obtained. This promising surgical technique opens new opportunities for monitoring heart rate in phocid seals.
Structure from Recurrent Motion: From Rigidity to Recurrency
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing a recurrent, possibly repetitive, dynamic action. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is in fact a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field.
Comment: To appear in CVPR 201
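A classical basis for the kind of rigidity check mentioned above is the Tomasi-Kanade factorization result: under an affine camera, 2D tracks of a rigid scene stacked into a 2F x P measurement matrix have rank at most 3 after removing per-frame translation. The sketch below tests near-rank-3 via singular values as an illustrative rigidity check; it is not the paper's actual algorithm, and the threshold is an assumption.

```python
import numpy as np

def rigidity_score(tracks, thresh=1e-6):
    """Test the Tomasi-Kanade rank constraint on centered 2D tracks.

    tracks: array of shape (F, P, 2) -- P points tracked over F frames.
    Returns (is_rigid, ratio) with ratio = sigma_4 / sigma_3 of the
    centered 2F x P measurement matrix; a tiny ratio is consistent
    with a rigid scene under the affine camera model.
    """
    F, P, _ = tracks.shape
    W = tracks.transpose(0, 2, 1).reshape(2 * F, P)  # stack x,y rows
    W = W - W.mean(axis=1, keepdims=True)            # remove translation
    s = np.linalg.svd(W, compute_uv=False)
    ratio = s[3] / s[2]
    return bool(ratio < thresh), float(ratio)

# Synthetic rigid scene: fixed 3D points seen by random affine cameras.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 20))                              # 20 rigid 3D points
tracks = np.stack([(rng.normal(size=(2, 3)) @ X).T for _ in range(8)])
print(rigidity_score(tracks))
```

Grouping frames whose joint measurement matrix passes such a check is one natural way to cluster camera views into "rigid" subsets, in the spirit of reducing a recurrent non-rigid sequence to rigid subproblems.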
Retrieving, annotating and recognizing human activities in web videos
Recent efforts in computer vision tackle the problem of human activity understanding in video sequences. Traditionally, these algorithms require annotated video data to learn models. In this work, we introduce a novel data collection framework to take advantage of the large amount of video data available on the web. We use this framework to retrieve videos of human activities and build training and evaluation datasets for computer vision algorithms. We rely on Amazon Mechanical Turk workers to obtain high-accuracy annotations. An agglomerative clustering technique makes it possible to achieve reliable and consistent annotations for the temporal localization of human activities in videos. Using two datasets, Olympics Sports and our novel Daily Human Activities dataset, we show that our collection/annotation framework can produce robust annotations of human activities in large amounts of video data.
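The aggregation step described in this abstract can be sketched with off-the-shelf hierarchical clustering: several workers' (start, end) annotations of the same video are grouped by temporal proximity, and each group is reduced to a consensus interval. The distance threshold, average linkage, and median-consensus rule below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def consensus_intervals(annotations, max_dist=2.0):
    """Merge workers' (start, end) annotations into consensus intervals.

    annotations: sequence of (start, end) times in seconds.
    Agglomerative (average-linkage) clustering groups temporally close
    annotations; each cluster is summarized by its median interval.
    """
    ann = np.asarray(annotations, dtype=float)
    Z = linkage(ann, method="average")               # hierarchical merge tree
    labels = fcluster(Z, t=max_dist, criterion="distance")
    return sorted(
        tuple(np.median(ann[labels == k], axis=0))
        for k in np.unique(labels)
    )

# Toy example: five workers annotate two activity instances in one video.
ann = [(10.0, 15.2), (10.4, 15.0), (9.8, 14.9),   # instance 1
       (40.1, 44.0), (39.7, 44.5)]                 # instance 2
print(consensus_intervals(ann))
```

Taking the per-cluster median makes the consensus robust to a single careless worker, which is one reason to aggregate rather than trust any individual annotation.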