Emerging robot swarm traffic
We discuss traffic patterns generated by swarms of robots commuting to and from a base station. The overall question is whether the traffic must be explicitly organised or whether a certain regularity develops 'naturally'.
Human-driven motorized traffic is rigidly structured in two lanes. Army ants, however, develop a three-lane pattern in their traffic, while human pedestrians generate a main trail and secondary trails in either direction.
Our robot swarm approach is bottom-up: designing individual agents, we first investigate the mathematics of the cases that occur when applying the artificial potential field method to three 'perfect' robots. We show that the traffic lane pattern is not disturbed by the internal system of forces. Next, we define models of sensor designs to account for the practical fact that robots (and ants) have limited visibility, and compare the sensor models in groups of three robots. In the final step we define layouts of a highway: an unbounded open space, a trail with surpassable edges, and a hard-defined (walled) highway.
Having defined the preliminaries, we run swarm simulations and look for emerging traffic patterns. Depending on the initial situation, a variety of lane patterns occurs; however, high traffic densities delay the emergence of traffic lanes considerably. Overall we conclude that regularities do emerge naturally and can be turned into an advantage to obtain efficient robot traffic.
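The artificial potential field method mentioned above can be sketched as a sum of an attractive force toward the goal and repulsive forces from nearby robots. This is a minimal illustration, not the paper's implementation; the gains `k_att`, `k_rep` and the influence radius are illustrative assumptions.

```python
import numpy as np

def potential_field_force(pos, goal, neighbors,
                          k_att=1.0, k_rep=0.5, influence=2.0):
    """Net force on one robot: attraction toward the goal plus
    repulsion from nearby robots (hypothetical gain values)."""
    # Attractive component pulls the robot toward the goal.
    force = k_att * (goal - pos)
    for other in neighbors:
        diff = pos - other
        d = np.linalg.norm(diff)
        # Repulsion acts only inside the influence radius.
        if 0 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return force
```

Each robot follows its own force vector; lane patterns, if they emerge, are a collective effect of these purely local rules.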
Vine Robots: Design, Teleoperation, and Deployment for Navigation and Exploration
A new class of continuum robots has recently been explored, characterized by
tip extension, significant length change, and directional control. Here, we
call this class of robots "vine robots," due to their similar behavior to
plants with the growth habit of trailing. Due to their growth-based movement,
vine robots are well suited for navigation and exploration in cluttered
environments, but until now, they have not been deployed outside the lab.
Portability of these robots and steerability at length scales relevant for
navigation are key to field applications. In addition, intuitive
human-in-the-loop teleoperation enables movement in unknown and dynamic
environments. We present a vine robot system that is teleoperated using a
custom designed flexible joystick and camera system, long enough for use in
navigation tasks, and portable for use in the field. We report on deployment of
this system in two scenarios: a soft robot navigation competition and
exploration of an archaeological site. The competition course required movement
over uneven terrain, past unstable obstacles, and through a small aperture. The
archaeological site required movement over rocks and through horizontal and
vertical turns. The robot tip successfully moved past the obstacles and through
the tunnels, demonstrating the capability of vine robots to achieve navigation
and exploration tasks in the field.
Comment: IEEE Robotics and Automation Magazine, 2019. Video available at
https://youtu.be/9NtXUL69g_
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors, such as cameras or laser scanners, so that it can drive to its goal. Most research to date has focused on developing a large and smart brain to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties
Push recovery with stepping strategy based on time-projection control
In this paper, we present a simple control framework for on-line push
recovery with dynamic stepping properties. Due to relatively heavy legs in our
robot, we need to take swing dynamics into account and thus use a linear model
called 3LP, which is composed of three pendulums that simulate swing and torso
dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and
use a particular time-projection method to continuously adjust the next
footstep location on-line during the motion. This adjustment, which is based
on both pelvis and swing foot tracking errors, naturally takes the swing
dynamics into account. Suggested adjustments are added to the Cartesian 3LP
gaits and converted to joint-space trajectories through inverse kinematics.
Fixed and adaptive foot lift strategies also ensure enough ground clearance in
perturbed walking conditions. The proposed structure is robust, yet uses very
simple state estimation and basic position tracking. We rely on the physical
series elastic actuators to absorb impacts while introducing simple laws to
compensate their tracking bias. Extensive experiments demonstrate the
functionality of different control blocks and prove the effectiveness of
time-projection in extreme push recovery scenarios. We also show self-produced
and emergent walking gaits when the robot is subject to continuous dragging
forces. These gaits feature dynamic walking robustness due to the relatively
soft springs in the ankles and the absence of any Zero Moment Point (ZMP)
control in our proposed architecture.
Comment: 20 pages, journal paper
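The discrete LQR controllers mentioned above can be illustrated with the standard Riccati recursion for a discrete-time linear system. This is a generic sketch, not the paper's 3LP controller: the matrices A, B, Q, R are placeholders the reader supplies, and the fixed-iteration convergence loop is an assumption for simplicity.

```python
import numpy as np

def discrete_lqr(A, B, Q, R, iters=200):
    """Discrete-time LQR gain K (control law u = -K x), computed by
    iterating the Riccati recursion until it settles."""
    P = Q.copy()
    for _ in range(iters):
        # K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```

In the paper's setting, such a gain would be applied to the 3LP state, with the time-projection step mapping tracking errors at any moment of the gait cycle into a footstep adjustment.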
Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction
This paper introduces a novel neural network-based reinforcement learning
approach for robot gaze control. Our approach enables a robot to learn and to
adapt its gaze control strategy for human-robot interaction without the use of
external sensors or human supervision. The robot learns to focus
its attention onto groups of people from its own audio-visual experiences,
independently of the number of people, of their positions and of their physical
appearances. In particular, we use a recurrent neural network architecture in
combination with Q-learning to find an optimal action-selection policy; we
pre-train the network using a simulated environment that mimics realistic
scenarios that involve speaking/silent participants, thus avoiding the need for
tedious sessions of a robot interacting with people. Our experimental
evaluation suggests that the proposed method is robust with respect to
parameter estimation, i.e., the estimated parameter values do not have a
decisive impact on the performance. The best results are obtained when both
audio and visual information is jointly used. Experiments with the Nao robot
indicate that our framework is a step forward towards the autonomous learning
of socially acceptable gaze behavior.
Comment: Paper submitted to Pattern Recognition Letters
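The Q-learning component of the approach reduces to the standard temporal-difference update toward the target r + γ·max Q(s', a'). The paper approximates Q with a recurrent neural network; the tabular version below, with hypothetical state and action names, only illustrates that core update.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the TD target
    r + gamma * max_a' Q(s_next, a'). The paper replaces this table
    with a recurrent network trained on the same target."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q
```

In the gaze-control setting, the state would encode recent audio-visual observations, the actions would be head movements, and the reward would favor keeping speakers in the field of view.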
3LP: a linear 3D-walking model including torso and swing dynamics
In this paper, we present a new model of biped locomotion which is composed
of three linear pendulums (one per leg and one for the whole upper body) to
describe stance, swing and torso dynamics. In addition to double support, this
model has different actuation possibilities in the swing hip and stance ankle
which can be widely used to produce different walking gaits. Without the need
for numerical time-integration, closed-form solutions help find periodic
gaits, which can simply be scaled in certain dimensions to modulate the motion
online. Thanks to its linearity, the proposed model provides a
computationally fast platform for model predictive controllers to predict the
future and consider meaningful inequality constraints that ensure feasibility
of the motion. This property comes from describing the dynamics directly with
joint torques, and therefore reflecting hardware limitations more precisely,
even in the very abstract high-level template space. The proposed model produces
human-like torque and ground reaction force profiles and thus, compared to
point-mass models, it is more promising for precise control of humanoid robots.
Despite being linear and lacking many other features of human walking like CoM
excursion, knee flexion and ground clearance, we show that the proposed model
can predict one of the main optimality trends in human walking, i.e., the
nonlinear speed-frequency relationship. In this paper, we mainly focus on
describing the model and its capabilities, comparing it with human data, and
calculating optimal human gait variables. Setting up control problems and
advanced biomechanical analysis remain for future work.
Comment: Journal paper under review
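The closed-form solutions that let 3LP avoid numerical time-integration follow the same pattern as for a single linear inverted pendulum, whose dynamics x'' = ω²x admit an analytic solution. The sketch below shows that simpler one-pendulum case, assuming a constant ω = sqrt(g / z0); it is not the full three-pendulum 3LP solution.

```python
import numpy as np

def lip_closed_form(x0, v0, omega, t):
    """Analytic CoM state of a linear inverted pendulum x'' = omega^2 * x.
    No numerical integration is needed: position and velocity at any
    time t follow directly from the initial state (x0, v0)."""
    x = x0 * np.cosh(omega * t) + (v0 / omega) * np.sinh(omega * t)
    v = x0 * omega * np.sinh(omega * t) + v0 * np.cosh(omega * t)
    return x, v
```

Because the solution is a linear map of the initial state, a model predictive controller can propagate the state over a whole stride with a single matrix product, which is what makes linear template models computationally attractive.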
Tele-operated high speed anthropomorphic dextrous hands with object shape and texture identification
This paper reports on the development of two tele-operated high-speed anthropomorphic dextrous robotic hands. The aim of developing these hands was to achieve a system that interfaces seamlessly between humans and robots. To provide sensory feedback to a remote operator, tactile sensors were developed to be mounted on the robotic hands. Two sensing systems were developed: the first is a skin sensor capable of shape reconstruction, placed on the palm of the hand to feed back the shape of grasped objects; the second is a highly sensitive tactile array for surface texture identification.