Inverted tracking algorithm for the field survey through artificial vision and robotics
Artificial vision and robotics have made important advances in the recognition and tracking of objects, not only in indoor scenes but also in outdoor ones, and these methods and algorithms have driven major technological progress across many areas of knowledge. In Precision Agriculture, the main difficulty lies in applying them to field surveys: a cultivated field contains fixed objects (seedlings) at established spacings (furrows and plots), but in an uncontrolled environment. Determining crop density and the distance between furrows, among other data, is in many cases relevant to yield. The purpose of this paper is to automate the sensing of these data through the use of cameras and artificial vision techniques. In this work, an inverted tracking algorithm is defined in order to automatically determine the necessary shot-points from which the cameras, mounted as sensors on a robotic platform, capture scene images. This helps to survey the density and spacing of the crop to be analyzed.
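The abstract does not specify how shot-points are derived; a minimal geometric sketch, assuming a downward-facing camera at known height and field of view, is to space trigger positions along a furrow so that consecutive ground footprints overlap. The function name and the overlap parameter are illustrative, not taken from the paper.

```python
import math

def shot_points(row_length_m, cam_height_m, fov_deg, overlap=0.2):
    """Camera trigger positions along a furrow so that consecutive
    ground footprints overlap by the given fraction (hypothetical scheme)."""
    # Ground footprint of a nadir camera: 2 * h * tan(FOV / 2).
    footprint = 2 * cam_height_m * math.tan(math.radians(fov_deg) / 2)
    step = footprint * (1 - overlap)
    # Trigger every `step` metres until the end of the row is covered.
    pts, pos = [], 0.0
    while pos <= row_length_m + 1e-9:
        pts.append(pos)
        pos += step
    return pts
```

With a 60° lens at 1 m height, the footprint is about 1.15 m, so shots land roughly every 0.92 m along the row.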
Teaching humanoid robotics by means of human teleoperation through RGB-D sensors
This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The target is to safely lift an object by teleoperating a small humanoid. Students have to map human limbs into robot joints, guarantee the robot's stability during the motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotics classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as teaching methodology. The learning objectives of both course and project are introduced and compared with the students' background. The design decisions and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solving the tasks. A set of evaluation results is provided in order to validate the authors' purpose, including the students' personal feedback. A discussion about possible future improvements is reported, hoping to encourage further spread of educational robotics in schools at all levels.
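The limb-to-joint mapping step can be illustrated with a common approach: computing a joint angle from three 3-D skeleton keypoints delivered by an RGB-D sensor (e.g. shoulder, elbow, wrist for the elbow joint). This is a generic sketch of that geometry, not the course's actual mapping code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (radians) formed by segments b->a and b->c.
    a, b, c are 3-D points, e.g. shoulder, elbow, wrist from an RGB-D skeleton;
    the result can be retargeted onto the corresponding robot joint."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

A fully bent arm gives an angle near 0, a straight arm near π; the teleoperation layer would then clamp the value to the robot joint's limits.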
Motion Planning
Motion planning is a fundamental function in robotics and numerous intelligent machines. The global concept of planning involves multiple capabilities, such as path generation, dynamic planning, optimization, tracking, and control. This book organizes different planning topics into three general perspectives, classified by the type of robotic application. The chapters are a selection of recent developments in a) planning and tracking methods for unmanned aerial vehicles, b) heuristically based methods for navigation planning and route optimization, and c) control techniques developed for path planning of autonomous wheeled platforms.
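As a concrete instance of the heuristically based navigation planning mentioned in category b), a standard A* search on an occupancy grid can serve as a minimal sketch; it is not drawn from the book itself, and the 4-connected grid and Manhattan heuristic are illustrative choices.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]   # priority queue ordered by f = g + h
    g_best = {start: 0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:              # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_best[cur] + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

On a small grid with a wall, the planner routes around the obstacle and returns the shortest cell sequence.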
Data-driven learning for robot physical intelligence
The physical intelligence, which emphasizes physical capabilities such as dexterous manipulation and dynamic mobility, is essential for robots to physically coexist with humans. Much research on robot physical intelligence has achieved success on hyper robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robot acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations handling dynamic skills and compound actions. In this dissertation, a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation is proposed. This method tackles advanced motor skills that require dynamic time-critical maneuver, complex contact control, and handling partly soft partly rigid objects. Besides, the power of crowdsourcing is brought to tackle case-specific engineering problem in the robot physical intelligence. Crowdsourcing has demonstrated great potential in recent development of artificial intelligence. Constant learning from a large group of human mentors breaks the limit of learning from one or a few mentors in individual cases, and has achieved success in image recognition, translation, and many other cyber applications. A robot learning scheme that allows a robot to synthesize new physical skills using knowledge acquired from crowdsourced human mentors is proposed. The work is expected to provide a long-term and big-scale measure to produce advanced robot physical intelligence
Visual Feedback Stabilisation of a Cart Inverted Pendulum
Vision-based object stabilisation is an exciting and challenging area of research, and is one that promises great technical advancements in the field of computer vision. As humans, we are capable of a tremendous array of skilful interactions, particularly when balancing unstable objects that have complex, non-linear dynamics. These complex dynamics impose a difficult control problem, since the object must be stabilised through collaboration between applied forces and vision-based feedback. To coordinate our actions and facilitate delivery of precise amounts of muscle torque, we primarily use our eyes to provide feedback in a closed-loop control scheme. This ability to control an inherently unstable object by vision-only feedback demonstrates an exceptionally high degree of voluntary motor skill. Despite the pervasiveness of vision-based stabilisation in humans and animals, relatively little is known about the neural strategies used to achieve this task.
In the last few decades, with advancements in technology, we have tried to impart the skill of vision-based object stabilisation to machines, with varying degrees of success. Within the context of this research, we continue this pursuit by employing the classic Cart Inverted Pendulum, an inherently unstable, non-linear system, to investigate dynamic object balancing by vision-only feedback. The Inverted Pendulum is considered one of the most fundamental benchmark systems in control theory; as a platform, it provides us with a strong, well-established test bed for this research.
We seek to discover what strategies are used to stabilise the Cart Inverted Pendulum, and to determine whether these strategies can be deployed in real time using cost-effective solutions. The thesis confronts and overcomes the problems imposed by low-bandwidth USB cameras, such as poor colour balance, image noise and low frame rates, to successfully achieve vision-based stabilisation.
The thesis presents a comprehensive vision-based control system that is capable of balancing an inverted pendulum with a resting oscillation of approximately ±1°. We employ a novel, segment-based location and tracking algorithm, which was found to have excellent noise immunity and enhanced robustness. We successfully demonstrate the resilience of the tracking and pose estimation algorithm against visual disturbances in real time, and with minimal recovery delay. The algorithm was evaluated against peer-reviewed research in terms of processing time, amplitude of oscillation, measurement accuracy and resting oscillation; for each key performance indicator, our system was found in many cases to be superior to that found in the literature.
The thesis also delivers a complete test software environment, where vision-based algorithms can be evaluated. This environment includes a flexible tracking model generator to allow customisation of the visual markers used by the system. We conclude by successfully performing off-line optimisation of our method by means of Artificial Neural Networks, to achieve a significant improvement in angle measurement accuracy.
Goodrich Engine Control Systems and Balfour Beatty Rail Technologie
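The closed-loop idea behind vision-based balancing can be sketched with a heavily simplified model: a single pendulum angle measured each frame (standing in for the thesis's vision pipeline) and fed back through a PD law. The dynamics, gains and parameters below are illustrative, not the thesis's actual controller or vision system.

```python
import math

def simulate(theta0=0.1, dt=0.001, t_end=3.0, g=9.81, l=0.5, kp=40.0, kd=8.0):
    """Angle-only inverted-pendulum toy model under PD feedback.
    theta is the pole angle from upright (rad); the 'measurement' stands
    in for a vision-based angle estimate. Returns the final angle."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        u = -(kp * theta + kd * omega)           # feedback torque per unit inertia
        alpha = (g / l) * math.sin(theta) + u    # gravity destabilises, control corrects
        omega += alpha * dt                      # forward-Euler integration
        theta += omega * dt
    return theta
```

With kp greater than g/l the upright equilibrium becomes stable, and the angle decays toward zero within a few seconds; a real system must additionally cope with camera latency, noise and frame-rate limits, which is precisely the thesis's contribution.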
Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control
Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks. However, the majority of autonomous RL algorithms require a large number of interactions with the environment. A large number of interactions may be impractical in many real-world applications, such as robotics, and many practical systems have to obey limitations in the form of state space or control constraints. To reduce the number of system interactions while simultaneously handling constraints, we propose a model-based RL framework based on probabilistic Model Predictive Control (MPC). In particular, we propose to learn a probabilistic transition model using Gaussian Processes (GPs) to incorporate model uncertainty into long-term predictions, thereby reducing the impact of model errors. We then use MPC to find a control sequence that minimises the expected long-term cost. We provide theoretical guarantees for first-order optimality in the GP-based transition models with deterministic approximate inference for long-term planning. We demonstrate that our approach not only achieves state-of-the-art data efficiency, but is also a principled approach to RL in constrained environments.
Comment: Accepted at AISTATS 2018
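The pipeline the abstract describes, learning a GP transition model and planning through it with MPC, can be sketched in miniature. This toy uses a fixed-hyperparameter RBF kernel, propagates only the predicted mean (the paper instead uses deterministic approximate inference over full distributions), and replaces gradient-based optimisation with random shooting; the dynamics, cost and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_step(x, u):
    return 0.8 * x + u  # "unknown" scalar dynamics the GP must learn

# --- collect a small batch of random interactions ---
X = rng.uniform(-2, 2, size=(60, 2))  # columns: state, action
y = np.array([true_step(s, a) for s, a in X]) + 0.01 * rng.standard_normal(60)

# --- GP transition model (RBF kernel, fixed hyper-parameters) ---
def rbf(A, B, ell=1.0, sf=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d / ell**2)

K = rbf(X, X) + 1e-4 * np.eye(len(X))
Kinv = np.linalg.inv(K)
alpha = Kinv @ y

def gp_predict(xu):
    """Posterior mean and variance of the next state at input (state, action)."""
    ks = rbf(xu[None, :], X)[0]
    mean = ks @ alpha
    var = max(1.0 - ks @ Kinv @ ks, 0.0)
    return mean, var

# --- random-shooting MPC over the learned model ---
def mpc_action(x, horizon=5, n_cand=200, var_penalty=1.0):
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_cand):
        us = rng.uniform(-1, 1, horizon)
        s, cost = x, 0.0
        for u in us:
            mean, var = gp_predict(np.array([s, u]))
            cost += mean**2 + var_penalty * var  # expected cost + uncertainty penalty
            s = mean                             # propagate the predicted mean only
        if cost < best_cost:
            best_u, best_cost = us[0], cost
    return best_u

x = 1.5
for _ in range(10):                  # apply first action, replan each step
    x = true_step(x, mpc_action(x))
```

Even this crude version drives the state toward the origin after a handful of replanning steps, illustrating why a probabilistic model lets planning proceed from so little data.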
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
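The working principle described above, a pixel firing whenever its log-intensity moves by a contrast threshold since the last event, can be sketched for a single pixel. This follows the standard idealised event-generation model; the threshold value and function name are illustrative.

```python
import math

def events_from_signal(intensities, times, C=0.3):
    """Generate (time, sign) events from one pixel's intensity samples.
    An event fires each time log-intensity has moved by the contrast
    threshold C since the last event, as in the idealised pixel model."""
    ref = math.log(intensities[0])   # log-intensity at the last event
    out = []
    for I, t in zip(intensities[1:], times[1:]):
        logI = math.log(I)
        # A large brightness step can emit several events at once.
        while abs(logI - ref) >= C:
            sign = 1 if logI > ref else -1
            ref += sign * C          # reference advances one threshold per event
            out.append((t, sign))
    return out
```

A brightness step from 1 to e (a log change of 1.0) with C = 0.3 produces three positive events, timestamped at the sample where the change was observed; a real sensor would spread them over the transition with microsecond resolution.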