24,243 research outputs found
Effective methods for human-robot-environment interaction by means of haptic robotics
University of Technology, Sydney. Faculty of Engineering and Information Technology.
Industrial robots have been widely used to perform well-defined repetitive tasks in carefully constructed, simple environments such as manufacturing factories. The futuristic vision for industrial robots is to operate in complex, unstructured and unknown (or partially known) environments, to assist human workers in undertaking hazardous tasks such as sandblasting in steel bridge maintenance. Fully autonomous operation of industrial robots in such environments is ideal, but semi-autonomous or manual operation with human interaction is a practical solution because it combines human intelligence and experience with the power and accuracy of an industrial robot. To achieve human-interactive operation, several challenges need to be addressed: environmental awareness, effective robot-environment interaction and human-robot interaction.
This thesis aims to develop methodologies that enable natural and efficient Human-Robot-Environment Interaction (HREI) and apply them in a steel bridge maintenance robotic system. Three research issues are addressed: Robot-Environment Interaction (REI), the haptic device-robot interface, and intuitive human-robot interaction.
To enable efficient robot-environment interaction, a potential field-based Virtual Force Field (VF2) approach has been investigated. The VF2 approach includes an Attractive Force (AF) method and a force control algorithm for robot motion control, and a 3D Virtual Force Field (3D-VF2) method for real-time collision avoidance. Results obtained from simulation, laboratory experiments and field tests have verified and validated these methods.
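The general potential-field idea behind such an approach can be sketched as follows; this is a generic textbook formulation, not the thesis's exact VF2 algorithm, and the gains and influence radius below are illustrative:

```python
import math

def attractive_force(pos, goal, k_att=1.0):
    # Linear attractive force, proportional to the displacement to the goal.
    return [k_att * (g - p) for p, g in zip(pos, goal)]

def repulsive_force(pos, obstacle, k_rep=1.0, radius=1.0):
    # Repulsive force, active only inside the obstacle's influence radius.
    d = math.dist(pos, obstacle)
    if d >= radius or d == 0.0:
        return [0.0 for _ in pos]
    scale = k_rep * (1.0 / d - 1.0 / radius) / d**3
    return [scale * (p - o) for p, o in zip(pos, obstacle)]

def total_force(pos, goal, obstacles):
    # Sum the attractive pull toward the goal and all repulsive pushes.
    f = attractive_force(pos, goal)
    for obs in obstacles:
        f = [a + b for a, b in zip(f, repulsive_force(pos, obs))]
    return f
```

With no obstacles nearby, the resultant force simply pulls the end effector toward the goal; an obstacle inside the influence radius deflects the motion away from it.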
A haptic device-robot interface has been developed to provide intuitive human-robot interaction. Haptic devices are normally small compared to industrial robots, so the workspace of a haptic device is much smaller than that of a large industrial manipulator. A novel workspace mapping method, which includes drifting control, scaling control and edge motion control, has been investigated for mapping the small haptic workspace to the large workspace of the manipulator, with the aim of providing natural kinesthetic feedback to an operator and smooth control of robot operation. A haptic force control approach has also been studied for transferring the virtual contact force (between the robot and the environment) and the inertia of the manipulator to the operator's hand through a force feedback function.
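The scaling part of such a small-to-large workspace mapping can be illustrated with a minimal per-axis transform (the drifting and edge-motion controls are more involved; the function below is a hypothetical sketch, not the thesis's method):

```python
def map_haptic_to_robot(haptic_pos, haptic_range, robot_range, scale=None):
    """Map a haptic-device position into the robot workspace, per axis.

    haptic_range / robot_range: (min, max) per axis. If no explicit scale
    is given, each axis uses the ratio of the two workspace extents.
    """
    robot_pos = []
    for p, (h_min, h_max), (r_min, r_max) in zip(haptic_pos, haptic_range,
                                                 robot_range):
        s = scale if scale is not None else (r_max - r_min) / (h_max - h_min)
        # Centre the haptic coordinate, scale it, then re-centre it in the
        # robot workspace.
        h_centre = (h_min + h_max) / 2.0
        r_centre = (r_min + r_max) / 2.0
        robot_pos.append(r_centre + s * (p - h_centre))
    return robot_pos
```

A pure scaling like this amplifies hand tremor by the same factor, which is one motivation for combining it with drifting and edge-motion control rather than using a fixed scale alone.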
Human factors have a significant effect on the performance of haptic-based human-robot interaction. An eXtended Hand Movement (XHM) model for eye-guided hand movement has been investigated in this thesis with the aim of providing natural and comfortable interaction between a human operator and a robot, and improving operational performance. The model has been studied for increasing the speed of the manipulator while maintaining control accuracy. It has been applied in a robotic system and verified by various experiments.
These theoretical methods and algorithms have been successfully implemented in a steel bridge maintenance robotic system, and tested both in the laboratory and at a bridge maintenance site in Sydney.
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities in order to allow ordinary users to program them
easily and intuitively. One way of such programming is teaching work tasks by
interactive demonstration. To make this effective and convenient for the user,
the machine must be capable of establishing a common focus of attention and be
able to use and integrate spoken instructions, visual perceptions, and
non-verbal cues such as gestural commands. We report progress in building a
hybrid architecture that combines statistical methods, neural networks, and
finite state machines into an integrated system for instructing grasping tasks
by man-machine interaction. The system combines the GRAVIS-robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal task-oriented man-machine communication with respect to
dextrous robot manipulation of objects.
Comment: 7 pages, 8 figures
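How a fusion module might reconcile a spoken instruction with a pointing gesture can be sketched minimally; this is a hypothetical illustration of cross-modal agreement checking, not the actual GRAVIS interface:

```python
def fuse(speech, gesture):
    """Fuse a parsed spoken instruction with a visual/gestural referent.

    speech: e.g. {"action": "grasp", "object": "cup"} from linguistic
    interpretation; gesture: e.g. {"target": "cup"} from visual attention.
    A task command is issued only when both modalities name the same
    referent; otherwise the system should ask for clarification.
    """
    if speech.get("object") and gesture.get("target"):
        if speech["object"] == gesture["target"]:
            return {"action": speech["action"], "target": speech["object"]}
    return None  # ambiguous: modalities disagree or one is missing
```

Requiring agreement before acting is one simple way to establish the "common focus of attention" the abstract mentions.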
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
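A common way to arbitrate between human input and an autonomous policy is a confidence-weighted linear blend of the two command streams; the sketch below is a generic illustration of that idea, not the paper's specific arbitration scheme, and the cap on assistance is an assumed design choice:

```python
def arbitrate(user_cmd, auto_cmd, confidence, max_assist=0.75):
    """Linearly blend user and autonomous velocity commands.

    confidence in [0, 1] is the intent-inference confidence; max_assist
    caps how much authority the autonomous policy may take, so the
    operator always retains some direct control.
    """
    alpha = min(confidence, max_assist)
    return [alpha * a + (1.0 - alpha) * u for u, a in zip(user_cmd, auto_cmd)]
```

At low confidence the noisy BCI command dominates; as intent inference becomes more certain, the autonomous controller takes over more of the motion, up to the cap.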
Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing
Robotic assistance presents an opportunity to benefit the lives of many
people with physical disabilities, yet accurately sensing the human body and
tracking human motion remain difficult for robots. We present a
multidimensional capacitive sensing technique that estimates the local pose of
a human limb in real time. A key benefit of this sensing method is that it can
sense the limb through opaque materials, including fabrics and wet cloth. Our
method uses a multielectrode capacitive sensor mounted to a robot's end
effector. A neural network model estimates the position of the closest point on
a person's limb and the orientation of the limb's central axis relative to the
sensor's frame of reference. These pose estimates enable the robot to move its
end effector with respect to the limb using feedback control. We demonstrate
that a PR2 robot can use this approach with a custom six electrode capacitive
sensor to assist with two activities of daily living: dressing and bathing. The
robot pulled the sleeve of a hospital gown onto able-bodied participants' right
arms, while tracking human motion. When assisting with bathing, the robot moved
a soft wet washcloth to follow the contours of able-bodied participants' limbs,
cleaning their surfaces. Overall, we found that multidimensional capacitive
sensing presents a promising approach for robots to sense and track the human
body during assistive tasks that require physical human-robot interaction.
Comment: 8 pages, 16 figures, International Conference on Rehabilitation Robotics 201
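The feedback-control step, moving the end effector so it holds a desired offset from the estimated closest point on the limb, can be sketched as a simple proportional controller (the gain and names here are illustrative, not from the paper, and a real controller would also use the estimated limb-axis orientation and enforce speed limits):

```python
def follow_limb(ee_pos, limb_point, desired_offset, gain=0.5):
    """Proportional feedback toward a desired offset from the limb.

    ee_pos: current end-effector position; limb_point: estimated closest
    point on the person's limb (from the capacitive sensor model);
    desired_offset: where the end effector should sit relative to that
    point. Returns a velocity command.
    """
    return [gain * ((lp + off) - ee)
            for ee, lp, off in zip(ee_pos, limb_point, desired_offset)]
```

Because the pose estimate updates in real time, this loop lets the end effector track the limb even as the person moves.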
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotic
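For the familiar-object case, similarity matching can be sketched as nearest-neighbour ranking over stored object descriptors; this is a generic illustration of the idea, not any specific surveyed method, and the feature vectors are assumed to come from some upstream perception stage:

```python
import math

def rank_grasps_by_similarity(query_feat, database):
    """Rank stored grasps for a 'familiar' object by feature similarity.

    database: list of (feature_vector, grasp) pairs from previously
    encountered objects. Grasps learned on more similar objects are
    ranked first, on the assumption that they transfer better.
    """
    return [grasp for feat, grasp in
            sorted(database, key=lambda e: math.dist(query_feat, e[0]))]
```

The quality of such a ranking rests entirely on the object representation: the survey's point is that the chosen features must make "similar" objects graspable in similar ways.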
Assistance strategies for robotized laparoscopy
Robotizing laparoscopic surgery allows better accuracy than conventional manual surgery, not only through motion scaling between master and slave and the use of tools with 3 DoF, which cannot be used in conventional manual surgery, but also through additional computer support. Relying on computer assistance, different strategies that facilitate the surgeon's task can be incorporated, whether in the form of autonomous navigation or cooperative guidance, by providing sensory or visual feedback, or by imposing certain movement limitations. This paper describes different forms of assistance aimed at improving the surgeon's work capacity and achieving greater safety for the patient, and presents the results obtained with the prototype developed at UPC.
Peer Reviewed. Postprint (author's final draft)
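One of the simplest movement limitations is a virtual fixture that clamps the commanded tool position to a safe region; the sketch below is a hypothetical illustration of that idea using an axis-aligned region, not the UPC prototype's implementation:

```python
def clamp_to_safe_region(tool_pos, region_min, region_max):
    """Constrain a commanded tool position to an axis-aligned safe region.

    region_min / region_max give the per-axis bounds; commands outside
    the region are projected back onto its boundary, so the tool can
    slide along the limit but never cross it.
    """
    return [min(max(p, lo), hi)
            for p, lo, hi in zip(tool_pos, region_min, region_max)]
```

In practice such fixtures are often defined in patient-specific coordinates around forbidden anatomy, and combined with force feedback so the surgeon feels the boundary rather than merely being stopped by it.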