Interactive learning gives the tempo to an intrinsically motivated robot learner
This paper studies an interactive learning system that couples internally guided learning and social interaction for robot learning of motor skills. We present Socially Guided Intrinsic Motivation with Interactive learning at the Meta level (SGIM-IM), an algorithm for learning forward and inverse models in high-dimensional, continuous and non-preset environments. The robot actively self-determines, at the meta level, a strategy: whether to use active autonomous learning or social learning; and, at the task level, a goal task in autonomous exploration. We illustrate through two experimental setups that SGIM-IM efficiently combines the advantages of social learning and intrinsic motivation: it can produce a wide range of effects in the environment and develop precise control policies in large spaces, while minimising its reliance on the teacher and offering a flexible interaction framework with humans.
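The meta-level strategy choice described above can be illustrated with a toy progress-based selector: track recent competence per strategy and prefer the one whose competence is improving fastest. This is a generic sketch of intrinsically motivated strategy selection, not the authors' SGIM-IM algorithm; the class name, window size and epsilon are illustrative assumptions.

```python
import random

class StrategySelector:
    """Toy meta-level selector: pick the strategy (autonomous vs. social)
    whose recent competence progress is highest (illustrative only)."""

    def __init__(self, strategies=("autonomous", "social"), window=10, epsilon=0.1):
        self.history = {s: [] for s in strategies}  # recent competence scores
        self.window = window
        self.epsilon = epsilon  # occasional random choice to keep exploring

    def progress(self, strategy):
        h = self.history[strategy][-self.window:]
        if len(h) < 2:
            return float("inf")  # untried strategies are maximally interesting
        half = len(h) // 2
        # progress = mean of recent competences minus mean of older ones
        return sum(h[half:]) / len(h[half:]) - sum(h[:half]) / len(h[:half])

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.history))
        return max(self.history, key=self.progress)

    def update(self, strategy, competence):
        self.history[strategy].append(competence)
```

A strategy that keeps yielding the same competence (no progress) is abandoned in favour of one whose competence is still climbing, which is the core intuition behind progress-driven intrinsic motivation.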
A Collision Detection Algorithm For Virtual Robot-Centered Flexible Manufacturing Cell
Collision detection is crucial in virtual manufacturing applications such as virtual prototyping, virtual assembly and virtual robot path planning. For accurate simulation of manufacturing systems and processes in a virtual environment, physical interaction with the objects in the scene is triggered by collision detection. This thesis presents a collision detection algorithm for accurate simulation of a virtual flexible manufacturing cell. The technique uses a narrow-phase approach to detect collisions of non-convex objects by testing collision between a basic primitive and a polygon. The algorithm is implemented in a virtual flexible manufacturing cell for the loading and unloading process performed by the robot. The robot's gripper is treated as a non-convex object; the potential point of collision is represented with a virtual sphere, and collision is tested between the virtual sphere and the polygon. To verify the collision detection algorithm, it was tested with different positions and heights of the storage system during simulation of the virtual flexible manufacturing cell. The results showed that the collision detection algorithm can support the concept of hardware reconfigurability of an FMC, which can be achieved by changing, removing, recombining or rearranging its manufacturing elements in order to meet new demands such as the introduction of a new product or a product change.
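The sphere-versus-polygon narrow-phase test the abstract describes can be sketched with the standard closest-point-on-triangle construction: find the point on the triangle nearest the sphere centre and compare that distance against the radius. This is a generic sketch after the well-known Voronoi-region method (Ericson, "Real-Time Collision Detection"), not the thesis' actual implementation.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def closest_point_on_triangle(p, a, b, c):
    """Classify p against the triangle's Voronoi regions (vertex, edge,
    face) and return the nearest point on triangle abc."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                     # vertex region a
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                     # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:              # edge region ab
        v = d1 / (d1 - d3)
        return (a[0] + v * ab[0], a[1] + v * ab[1], a[2] + v * ab[2])
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                     # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:              # edge region ac
        w = d2 / (d2 - d6)
        return (a[0] + w * ac[0], a[1] + w * ac[1], a[2] + w * ac[2])
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:  # edge region bc
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return (b[0] + w * (c[0] - b[0]),
                b[1] + w * (c[1] - b[1]),
                b[2] + w * (c[2] - b[2]))
    denom = 1.0 / (va + vb + vc)                     # interior: barycentric
    v, w = vb * denom, vc * denom
    return (a[0] + v * ab[0] + w * ac[0],
            a[1] + v * ab[1] + w * ac[1],
            a[2] + v * ab[2] + w * ac[2])

def sphere_triangle_collide(center, radius, a, b, c):
    """Sphere collides with the triangle iff the squared distance from the
    sphere centre to the closest point on the triangle is <= radius^2."""
    q = closest_point_on_triangle(center, a, b, c)
    d = sub(center, q)
    return dot(d, d) <= radius * radius
```

Running this test per polygon of the gripper mesh against the primitive gives a narrow-phase check in the spirit described above; a broad phase (e.g. bounding volumes) would normally prune the polygon pairs first.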
Soft Legged Wheel-Based Robot with Terrestrial Locomotion Abilities
In recent years robotics has been influenced by a new approach, soft robotics, built on the idea that safe interaction with users and better adaptation to the environment can be achieved by exploiting easily deformable materials and flexible components in the structure of robots. In 2016, the soft-robotics community promoted a new robotics challenge, named the RoboSoft Grand Challenge, with the aim of bringing together different opinions on the usefulness and applicability of softness and compliance in robotics. In this paper we describe the design and implementation of a terrestrial robot based on two soft legged wheels. The tasks predefined by the challenge were set as targets in the robot design, which ultimately succeeded in accomplishing all of them. The wheels of the robot can passively climb over stairs and adapt to slippery grounds using two soft legs embedded in their structure. The soft legs, fabricated by integrating soft and rigid materials and mounted on the circumference of a conventional wheel, enhance its functionality and easily adapt to unknown grounds. The robot has a semi-stiff tail that helps in stabilisation and stair climbing. An active wheel is embedded at the extremity of the tail to increase the robot's manoeuvrability in narrow environments. Moreover, two parallelogram linkages let the robot reconfigure and shrink its size, allowing it to enter gates smaller than its initial dimensions.
Attention-based Robot Learning of Haptic Interaction
Moringen A, Fleer S, Walck G, Ritter H. Attention-based Robot Learning of Haptic Interaction. In: Nisky I, Hartcher-O’Brien J, Wiertlewski M, Smeets J, eds. Haptics: Science, Technology, Applications. 12th International Conference, EuroHaptics 2020, Leiden, The Netherlands, September 6–9, 2020, Proceedings. Lecture Notes in Computer Science. Vol 12272. Cham: Springer; 2020: 462-470.

Haptic interaction, involved in almost any physical interaction humans perform with their environment, is a highly sophisticated and to a large extent computationally unmodelled process. Unlike humans, who seamlessly handle a complex mixture of haptic features and profit from their integration over space and time, even the most advanced robots are strongly constrained in performing contact-rich interaction tasks. In this work we approach this problem by demonstrating the success of our online haptic interaction learning approach on an example task: haptic identification of four unknown objects. Building upon our previous work with a floating haptic sensor array, here we show the functionality of our approach within a fully-fledged robot simulation. To this end, we utilize the haptic attention model (HAM), a meta-controller neural network architecture trained with reinforcement learning. HAM learns to optimally parameterize a sequence of so-called haptic glances, primitive haptic control actions derived from elementary human haptic interaction. By coupling a simulated KUKA robot arm with the haptic attention model, we aim to mimic the functionality of a finger.
Our modeling strategy allowed us to arrive at a tactile reinforcement learning architecture and to characterize some of its advantages. Owing to the rudimentary experimental setting and the easy acquisition of simulated data, we believe our approach to be particularly useful for both time-efficient robot training and flexible algorithm prototyping.
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, to allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase the
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.

Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
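One plausible way to incorporate a polygonal virtual border into a robot's navigational map, as described above, is to rasterize the user-defined polygon into an occupancy grid so the planner treats the region as off-limits. This is a minimal sketch under assumptions of my own (grid layout, resolution, occupancy value); it is not the paper's implementation.

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray casting: is point (x, y) inside the polygon,
    given as a list of (x, y) vertices?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the horizontal ray through y; test crossing side.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def mark_virtual_border(grid, poly, resolution=1.0, occupied=100):
    """Mark every grid cell whose centre lies inside the user-defined
    polygon as occupied, so a planner will route around the region."""
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            cx = (col + 0.5) * resolution  # cell centre in world coords
            cy = (row + 0.5) * resolution
            if point_in_polygon(cx, cy, poly):
                grid[row][col] = occupied
    return grid
```

In a real stack the occupied cells would be injected into the navigation costmap (e.g. as lethal obstacles) rather than into a bare 2-D list, but the rasterization step is the same.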
Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges
Human-swarm interaction (HSI) involves a number of human factors impacting
human behaviour throughout the interaction. As the technologies used within HSI
advance, it becomes increasingly tempting to raise the level of swarm autonomy within the
interaction to reduce the workload on humans. Yet, the prospective negative
effects of high levels of autonomy on human situational awareness can hinder
this process. Flexible autonomy aims at trading off these effects by changing
the level of autonomy within the interaction when required; with
mixed-initiatives combining human preferences and automation's recommendations
to select an appropriate level of autonomy at a certain point of time. However,
the effective implementation of mixed-initiative systems raises fundamental
questions on how to combine human preferences and automation recommendations,
how to realise the selected level of autonomy, and what the future impacts on
the cognitive states of a human are. We explore open challenges that hamper the
process of developing effective flexible autonomy. We then highlight the
potential benefits of using system modelling techniques in HSI by illustrating
how they provide HSI designers with an opportunity to evaluate different
strategies for assessing the state of the mission and for adapting the level of
autonomy within the interaction to maximise mission success metrics.

Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
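A mixed-initiative selection of the autonomy level, as discussed above, might be sketched as a confidence-weighted blend of the human preference and the automation's recommendation, snapped to the nearest discrete level. This is a toy illustration; the function, its parameters and the four-level scale are hypothetical, not taken from the paper.

```python
def select_autonomy_level(human_pref, automation_rec, human_confidence,
                          levels=(0, 1, 2, 3)):
    """Blend a human-preferred autonomy level with the automation's
    recommendation, weighted by an estimate of the human's current
    situational awareness (1.0 = defer fully to the human,
    0.0 = defer fully to automation), then snap to a discrete level."""
    blended = (human_confidence * human_pref
               + (1 - human_confidence) * automation_rec)
    # Choose the discrete autonomy level closest to the blended value.
    return min(levels, key=lambda l: abs(l - blended))
```

For instance, a low situational-awareness estimate pushes the chosen level toward the automation's recommendation, reflecting the trade-off between operator workload and awareness described in the abstract.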
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile
robot to ensure a human-aware navigation. This is especially of relevance for
non-expert users living in human-robot shared spaces, e.g. home environments,
since they want to keep the control of their mobile robots, such as vacuum
cleaning or companion robots. Therefore, we introduce virtual borders that are
respected by a robot while performing its tasks. For this purpose, we employ a
RGB-D Google Tango tablet as human-robot interface in combination with an
augmented reality application to flexibly define virtual borders. We evaluated
our system with 15 non-expert users concerning accuracy, teaching time and
correctness and compared the results with other baseline methods based on
visual markers and a laser pointer. The experimental results show that our
method features an equally high accuracy while reducing the teaching time
significantly compared to the baseline methods. This holds for different border
lengths, shapes and variations in the teaching process. Finally, we
demonstrated the correctness of the approach, i.e. the mobile robot changes its
navigational behavior according to the user-defined virtual borders.

Comment: Accepted at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), supplementary video: https://youtu.be/oQO8sQ0JBR
A Framework for Interactive Teaching of Virtual Borders to Mobile Robots
The increasing number of robots in home environments leads to an emerging
coexistence between humans and robots. Robots undertake common tasks and
support the residents in their everyday life. People appreciate the presence of
robots in their environment as long as they keep control over them. One
important aspect is the control of a robot's workspace. Therefore, we introduce
virtual borders to precisely and flexibly define the workspace of mobile
robots. First, we propose a novel framework that allows a person to
interactively restrict a mobile robot's workspace. To show the validity of this
framework, a concrete implementation based on visual markers is presented.
Afterwards, the mobile robot is capable of performing its tasks while
respecting the new virtual borders. The approach is accurate, flexible and less
time consuming than explicit robot programming. Hence, even non-experts are
able to teach virtual borders to their robots, which is especially interesting
in domains like vacuum-cleaning or service robots in home environments.

Comment: 7 pages, 6 figures
Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers
The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified.