Learning to Represent Haptic Feedback for Partially-Observable Tasks
The sense of touch, being the earliest sensory system to develop in the human
body [1], plays a critical role in our daily interaction with the environment.
In order to successfully complete a task, many manipulation interactions
require incorporating haptic feedback. However, manually designing a feedback
mechanism can be extremely challenging. In this work, we consider manipulation
tasks that need to incorporate tactile sensor feedback in order to modify a
provided nominal plan. To incorporate partial observation, we present a new
framework that models the task as a partially observable Markov decision
process (POMDP) and learns an appropriate representation of haptic feedback
which can serve as the state for a POMDP model. The model, parametrized by
deep recurrent neural networks, uses variational Bayes methods to
optimize the approximate posterior. Finally, we build on deep Q-learning to be
able to select the optimal action in each state without access to a simulator.
We test our model on a PR2 robot for multiple tasks of turning a knob until it
clicks.

Comment: IEEE International Conference on Robotics and Automation (ICRA), 201
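As a rough illustration of the pipeline the abstract describes (not the authors' implementation), a recurrent encoder can fold a stream of haptic observations into a latent state, and a Q-function over that state can then pick actions greedily; all dimensions, parameter names, and the simulated readings below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, STATE_DIM, N_ACTIONS = 8, 16, 4

# Hypothetical weights of a recurrent encoder and a linear Q-head.
W_in = rng.normal(scale=0.1, size=(STATE_DIM, OBS_DIM))
W_rec = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))
W_q = rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))

def encode(observations):
    """Fold a sequence of haptic observations into a latent state.

    Stands in for the learned representation that serves as the
    POMDP state in the abstract."""
    h = np.zeros(STATE_DIM)
    for o in observations:
        h = np.tanh(W_in @ o + W_rec @ h)
    return h

def q_values(h):
    """One Q-value per action, as in deep Q-learning over the latent state."""
    return W_q @ h

def act(observations):
    """Greedy action selection from the history of haptic readings."""
    return int(np.argmax(q_values(encode(observations))))

# Usage: a short stream of simulated force/torque readings.
stream = [rng.normal(size=OBS_DIM) for _ in range(5)]
print(act(stream))  # index of the greedy action
```

In the paper the encoder weights would be trained with the variational objective and the Q-head with Q-learning; here both are random, so only the data flow is meaningful.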
Gulf Estuarine Research Society 2014 Meeting
Table of Contents: Thank You to Our Sponsors! (p. 3) -- About the Gulf Estuarine Research Society (p. 4) -- Student Travel Award winners (p. 5) -- Abbreviated Schedule (p. 7) -- 2014 Plenary Speaker – Dr. Michael Osland (p. 8) -- 2014 Plenary Speaker – Dr. Maggie Walser (p. 9) -- Full Schedule (p. 10) -- Poster Session Directory (p. 17) -- Oral Presentation Abstracts (p. 21) -- Poster Presentation Abstracts (p. 38) -- Things to Do in Port Aransas (p. 52) -- Greening the Meeting (p. 53) -- Map of University of Texas Marine Science Institute (p. 54)

Sponsors: Coastal and Estuarine Research Foundation, Port Aransas, Gulf of Mexico Foundation, Coastal Bend Bays & Estuaries Program, Lotek Wireless Fish & Wildlife Monitoring, Sea Grant Mississippi-Alabama, Sea Grant Louisiana, Sea Grant Texas, The University of Austin Marine Science Institute, Mission-Aransas National Estuarine Research Reserve

Marine Scienc
Speech-driven Animation with Meaningful Behaviors
Conversational agents (CAs) play an important role in human computer
interaction. Creating believable movements for CAs is challenging, since the
movements have to be meaningful and natural, reflecting the coupling between
gestures and speech. Studies in the past have mainly relied on rule-based or
data-driven approaches. Rule-based methods focus on creating meaningful
behaviors conveying the underlying message, but the gestures cannot be easily
synchronized with speech. Data-driven approaches, especially speech-driven
models, can capture the relationship between speech and gestures. However, they
create behaviors that disregard the meaning of the message. This study
proposes to bridge the gap between these two approaches, overcoming their
limitations. The approach builds a dynamic Bayesian network (DBN), where a
discrete variable is added to condition the behaviors on the underlying
constraint. The study
implements and evaluates the approach with two constraints: discourse functions
and prototypical behaviors. By constraining on the discourse functions (e.g.,
questions), the model learns the characteristic behaviors associated with a
given discourse class, learning the rules from the data. By constraining on
prototypical behaviors (e.g., head nods), the approach can be embedded in a
rule-based system as a behavior realizer, creating trajectories that are
tightly synchronized with speech. The study proposes a DBN structure and a training
approach that (1) models the cause-effect relationship between the constraint
and the gestures, (2) initializes the state configuration models, increasing the
range of the generated behaviors, and (3) captures the differences in the
behaviors across constraints by enforcing sparse transitions between shared and
exclusive states per constraint. Objective and subjective evaluations
demonstrate the benefits of the proposed approach over an unconstrained model.

Comment: 13 pages, 12 figures, 5 tables
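The mechanism the abstract describes — a discrete constraint variable selecting among transition models with shared and exclusive states, with sparse cross-transitions — can be sketched as follows. This is a toy stand-in, not the paper's DBN: the state count, the constraint labels, and the transition probabilities are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N_STATES = 5  # e.g. states 0-2 shared across constraints, 3-4 exclusive

def make_transitions(exclusive_state):
    """Hypothetical per-constraint transition matrix: most mass on
    self-transitions and on the constraint's exclusive state, with only
    a sparse trickle elsewhere (mirroring the enforced sparsity)."""
    T = np.full((N_STATES, N_STATES), 0.02)
    for s in range(N_STATES):
        T[s, exclusive_state] += 0.5
        T[s, s] += 0.4
    return T / T.sum(axis=1, keepdims=True)  # normalize rows

# One transition model per value of the discrete constraint variable
# (e.g. a discourse function vs. a prototypical behavior).
TRANSITIONS = {"question": make_transitions(3), "head_nod": make_transitions(4)}

def sample_trajectory(constraint, length, start=0):
    """Roll out a gesture-state sequence conditioned on the constraint."""
    T = TRANSITIONS[constraint]
    states = [start]
    for _ in range(length - 1):
        states.append(int(rng.choice(N_STATES, p=T[states[-1]])))
    return states

print(sample_trajectory("question", 10))
```

Switching the constraint label switches which exclusive states the trajectory gravitates toward, which is the cause-effect direction the abstract's point (1) models.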
Hearing meanings: the revenge of context
According to the perceptual view of language comprehension, listeners typically recover high-level linguistic properties such as utterance meaning without inferential work. The perceptual view is subject to the Objection from Context: since utterance meaning is massively context-sensitive, and context-sensitivity requires cognitive inference, the perceptual view is false. In recent work, Berit Brogaard provides a challenging reply to this objection. She argues that in language comprehension context-sensitivity is typically exercised not through inferences, but rather through top-down perceptual modulations or perceptual learning. This paper provides a complete formulation of the Objection from Context and evaluates Brogaard's reply to it. Drawing on conceptual considerations and empirical examples, we argue that the exercise of context-sensitivity in language comprehension does, in fact, typically involve inference.
Holistic corpus-based dialectology
This paper is concerned with sketching future directions for corpus-based dialectology. We advocate a holistic approach to the study of geographically conditioned linguistic variability, and we present a suitable methodology, 'corpus-based dialectometry', in exactly this spirit. Specifically, we argue that in order to live up to the potential of the corpus-based method, practitioners need to (i) abandon their exclusive focus on individual linguistic features in favor of the study of feature aggregates, (ii) draw on computationally advanced multivariate analysis techniques (such as multidimensional scaling, cluster analysis, and principal component analysis), and (iii) aid interpretation of empirical results by marshalling state-of-the-art data visualization techniques. To exemplify this line of analysis, we present a case study which explores joint frequency variability of 57 morphosyntactic features in 34 dialects all over Great Britain.
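Steps (i)-(iii) can be sketched end to end: aggregate per-dialect feature frequencies into a distance matrix, then project it to two dimensions with classical multidimensional scaling for visualization. The matrix sizes and random frequencies below are purely illustrative (the case study itself uses 57 features across 34 dialects):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical aggregate data: rows = dialects, columns = normalized
# frequencies of morphosyntactic features.
n_dialects, n_features = 6, 10
freqs = rng.random((n_dialects, n_features))

# Step (i)-(ii): aggregate over all features at once via a pairwise
# Euclidean distance matrix between dialects.
diff = freqs[:, None, :] - freqs[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=2))

# Step (iii): classical multidimensional scaling — double-center the
# squared distances and keep the top two eigenvectors as coordinates.
n = n_dialects
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

print(coords.shape)  # one 2-D point per dialect, ready to plot
```

Cluster analysis or PCA could be slotted in at the same point; the key move is that distances are computed over the whole feature aggregate rather than one feature at a time.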