23 research outputs found
Stabilize to Act: Learning to Coordinate for Bimanual Manipulation
Key to rich, dexterous manipulation in the real world is the ability to
coordinate control across two hands. However, while the promise afforded by
bimanual robotic systems is immense, constructing control policies for dual arm
autonomous systems brings inherent difficulties. One such difficulty is the
high-dimensionality of the bimanual action space, which adds complexity to both
model-based and data-driven methods. We counteract this challenge by drawing
inspiration from humans to propose a novel role assignment framework: a
stabilizing arm holds an object in place to simplify the environment while an
acting arm executes the task. We instantiate this framework with BimanUal
Dexterity from Stabilization (BUDS), which uses a learned restabilizing
classifier to alternate between updating a learned stabilization position to
keep the environment unchanged, and accomplishing the task with an acting
policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of
varying complexities on real-world robots, such as zipping jackets and cutting
vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success
across our task suite, and generalizes to out-of-distribution objects within a
class with a 52.7% success rate. BUDS is 56.0% more successful than an
unstructured baseline that instead learns a BC stabilizing policy, owing to the
precision these complex tasks demand. Supplementary material and videos
can be found at https://sites.google.com/view/stabilizetoact. Comment: Conference on Robot Learning, 202
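The role-assignment loop described above can be sketched as a simple controller that alternates between restabilizing and acting. This is a minimal illustration only; the class and callable names here are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of a BUDS-style stabilize-then-act control loop.
# All names (BUDSController, the policy callables) are hypothetical.

class BUDSController:
    def __init__(self, needs_restabilize, stabilize_policy, acting_policy):
        self.needs_restabilize = needs_restabilize  # learned restabilizing classifier
        self.stabilize = stabilize_policy           # predicts a new hold position
        self.act = acting_policy                    # acting policy learned from demos

    def step(self, obs):
        """Return per-arm commands for one control step."""
        if self.needs_restabilize(obs):
            # Stabilizing arm re-grasps so the rest of the scene stays unchanged.
            return {"stabilizing_arm": self.stabilize(obs), "acting_arm": "pause"}
        # Object is held fixed; the acting arm carries out the task.
        return {"stabilizing_arm": "hold", "acting_arm": self.act(obs)}
```

The key design choice the abstract highlights is that the stabilizing arm's job is regression to a hold pose rather than full closed-loop behavior cloning, which is what the baseline comparison isolates.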
Exploring the need of an assistive robot to support reading process: A pilot study
Reading is one of the main activities people engage in throughout their daily lives, and it is accompanied by challenges that may cause disengagement during the process. Recently, assistive robotics technologies have shown powerful effects in helping users tackle various domain-specific problems. From that perspective, the main goal of this pilot study is to investigate the problems that readers encounter during the reading process. In addition, it aims to probe the need for an assistive robot that makes reading less challenging. A questionnaire survey was distributed to 100 students at Universiti Utara Malaysia, and the analysis of the results showed that an assistive robot is a promising means of supporting the reading process. The study also details the embodiment and design aspects that need to be applied when designing an assistive reading robot.
Motion for cooperation and vitality in Human-robot interaction
In social interactions, human movement is a rich source of information for all those who
take part in the collaboration. In fact, a variety of intuitive messages are communicated
through motion and continuously inform the partners about the future unfolding of the
actions. A similar exchange of implicit information could support movement coordination
in the context of Human-Robot Interaction. The style of an action, i.e. the way it is
performed, also has a strong influence on interaction between humans. The same gesture has
different consequences when it is performed aggressively or kindly, and humans are very
sensitive to these subtle differences in others' behaviors. During the three years of my
PhD, I focused on these two aspects of human motion. In a first study, we investigated how
implicit signaling in an interaction with a humanoid robot can lead to emergent coordination
in the form of automatic speed adaptation. In particular, we assessed whether different
cultures – specifically Japanese and Italian – have a different impact on motor resonance and
synchronization in HRI. Japanese people show a higher general acceptance toward robots
when compared with Western cultures. Since acceptance, or better affiliation, is tightly
connected to imitation and mimicry, we hypothesized a higher degree of speed imitation for
Japanese participants when compared to Italians. In the experimental studies undertaken
both in Japan and in Italy, we observed that cultural differences do not affect the natural
predisposition of subjects to adapt to the robot. In a second study, we investigated how to
endow a humanoid robot with behaviors expressing different vitality forms, by modulating
robot action kinematics and voice. Drawing inspiration from humans, we modified actions
and voice commands performed by the robot to convey an aggressive or kind attitude. In
a series of experiments we demonstrated that the humanoid was consistently perceived as
aggressive or kind. Human behavior changed in response to the different robot attitudes and
matched the behavior of iCub: participants were faster when the robot was aggressive
and slower when the robot was gentle. The ability of a humanoid to express
vitality enriches the array of nonverbal communication that robots can exploit to
foster seamless interaction. Such behavior might be crucial in emergencies and in authoritative
situations in which the robot should instinctively be perceived as assertive and in charge, as
in the case of police robots or teachers.
Safe and Efficient Robot Action Choice Using Human Intent Prediction in Physically-Shared Space Environments.
Emerging robotic systems are capable of autonomously planning and executing well-defined tasks, particularly when the environment can be accurately modeled. Robots supporting human space exploration must be able to safely interact with human astronaut companions during intravehicular and extravehicular activities. Given a shared workspace, efficiency can be gained by leveraging robotic awareness of its human companion. This dissertation presents a modular architecture that allows a human and a robotic manipulator to efficiently complete independent sets of tasks in a shared physical workspace without the robot requiring oversight or situational awareness from its human companion. We propose that a robot requires four capabilities to act safely and optimally with awareness of its companion: sense the environment and the human within it; translate sensor data into a form useful for decision-making; use this data to predict the human’s future intent; and then use this information to inform its action choice based also on the robot’s goals and safety constraints. We first present a series of human subject experiments demonstrating that human intent can help a robot predict and avoid conflict, and that sharing the workspace need not degrade human performance so long as the manipulator does not distract or introduce conflict. We describe an architecture that relies on Markov Decision Processes (MDPs) to support robot decision-making. A key contribution of our architecture is its decomposition of the decision problem into two parts: human intent prediction (HIP) and robot action choice (RAC). This decomposition is made possible by the assumption that the robot’s actions will not influence human intent. Presuming an observer that can feed back human actions in real time, we leverage the well-known space environment and the task scripts astronauts rehearse in advance to devise models for human intent prediction and robot action choice.
We describe a series of case studies for HIP and RAC using a minimal set of state attributes, including an abbreviated action history. MDP policies are evaluated in terms of model fitness and safety/efficiency performance tradeoffs. Simulation results indicate that incorporating both observed and predicted human actions improves robot action choice. Future work could extend to more general human-robot interaction.
PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/107160/1/cmcghan_1.pd
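The HIP/RAC decomposition described above can be sketched as two sequential steps: predict the human's intent from recent actions, then choose the robot action that maximizes expected value under that prediction. The function names, state encodings, and lookup-table models below are invented for illustration; the dissertation's actual models are MDP policies fit to rehearsed task scripts.

```python
# Sketch of the two-stage HIP -> RAC pipeline. Models are plain dicts here,
# standing in for learned MDP components. All names are hypothetical.

def predict_human_intent(action_history, intent_model):
    """HIP: map the human's recent action history to a distribution over intents."""
    return intent_model.get(tuple(action_history), {"idle": 1.0})

def choose_robot_action(robot_state, intent_dist, value_table):
    """RAC: pick the robot action maximizing expected value over predicted intents."""
    actions = value_table[robot_state]
    return max(actions,
               key=lambda a: sum(p * actions[a].get(intent, 0.0)
                                 for intent, p in intent_dist.items()))
```

Separating the two stages is only sound under the stated assumption that the robot's actions do not influence human intent; otherwise the prediction would have to condition on the robot's own choices.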
Incorporating Human Expertise in Robot Motion Learning and Synthesis
With the exponential growth of robotics and the fast development of advanced cognitive and motor capabilities, one can start to envision humans and robots jointly working together in unstructured environments. Yet for that to be possible, robots need to be programmed for such complex scenarios, which demands significant domain knowledge in robotics and control. One viable approach to enabling robots to acquire skills more flexibly and efficiently is to give them the capability to learn autonomously from human demonstrations and expertise through interaction. Such a framework makes skill creation in robots more social and less demanding of programming and robotics expertise. Yet current imitation learning approaches suffer from significant limitations, mainly in the flexibility and efficiency with which they represent, learn, and reason about motor tasks. This thesis addresses this problem by exploring cost-function-based approaches to learning robot motion control, perception, and the interplay between them. To begin with, the thesis proposes an efficient probabilistic algorithm to learn an impedance controller that accommodates motion contacts. The learning algorithm is able to incorporate important domain constraints, e.g., about force representation and decomposition, which are nontrivial to handle with standard techniques. Compliant handwriting motions are developed on an articulated robot arm and a multi-fingered hand. This work provides a flexible approach to learning robot motion that conforms to both task and domain constraints. Furthermore, the thesis also contributes techniques to learn from and reason about demonstrations with partial observability. The proposed approach combines inverse optimal control and ensemble methods, yielding tractable learning of cost functions with latent variables. Two task priors are further incorporated.
The first, a human kinematics prior, results in a model that synthesizes rich and believable dynamical handwriting. The second prior enforces dynamics on the latent variable and facilitates real-time recognition of human intention and on-line motion adaptation in collaborative robot tasks. Finally, the thesis establishes a link between the control and perception modalities. This work offers an analysis that bridges inverse optimal control and deep generative models, as well as a novel algorithm that learns cost features and embeds the modal-coupling prior. It contributes an end-to-end system for synthesizing arm joint motion from letter image pixels, and the results highlight its robustness against noisy and out-of-sample sensory inputs. Overall, the proposed approach endows robots with the potential to reason about diverse unstructured data, which is nowadays pervasive but hard for current imitation learning approaches to process.
Proceedings of the Scientific-Practical Conference "Research and Development - 2016"
talent management; sensor arrays; automatic speech recognition; dry separation technology; oil production; oil waste; laser technology