Learning cloth manipulation with demonstrations
Recent advances in Deep Reinforcement Learning (RL) and in the computational capabilities of GPUs have led to a variety of research on the learning side of robotics. The main aim is to build autonomous robots that are capable of learning how to solve a task on their own, with minimal engineering on the planning, vision, or control side. Efforts have been made to learn the manipulation of rigid objects with the help of human demonstrations, specifically in tasks such as stacking multiple blocks on top of each other or inserting a pin into a hole. These Deep RL algorithms successfully learn to complete tasks involving the manipulation of rigid objects, but the autonomous manipulation of textile objects such as clothes through Deep RL algorithms remains largely unstudied in the community.
The main objectives of this work are: 1) implementing state-of-the-art Deep RL algorithms for rigid object manipulation and gaining a deep understanding of how these various algorithms work, 2) creating an open-source simulation environment for simulating textile objects such as clothes, and 3) designing Deep RL algorithms for learning autonomous manipulation of textile objects through demonstrations.
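Learning manipulation from demonstrations, as in objective 3), is typically bootstrapped with behavior cloning: fitting a policy to the expert's state-action pairs. A minimal sketch, assuming a linear policy and entirely synthetic demonstration data (nothing below comes from the work itself):

```python
import numpy as np

# Hypothetical demonstration data: states (e.g. features of the grasped
# cloth) paired with the expert's actions. Everything here is synthetic.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 4))                  # 200 demos, 4-D states
expert_w = np.array([[0.5, -1.0],
                     [0.2,  0.3],
                     [-0.7, 0.1],
                     [0.0,  0.9]])                  # unknown expert policy
actions = states @ expert_w                         # expert's 2-D actions

# Behavior cloning: choose W minimizing ||states @ W - actions||^2.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy maps new, unseen states to actions.
predicted_action = rng.normal(size=(1, 4)) @ W
```

With noiseless linear demonstrations the least-squares fit recovers the expert exactly; demonstration learning in practice replaces the linear map with a deep network trained by gradient descent.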
Agile Autonomous Driving using End-to-End Deep Imitation Learning
We present an end-to-end imitation learning system for agile, off-road
autonomous driving using only low-cost sensors. By imitating a model predictive
controller equipped with advanced sensors, we train a deep neural network
control policy to map raw, high-dimensional observations to continuous steering
and throttle commands. Compared with recent approaches to similar tasks, our
method requires neither state estimation nor on-the-fly planning to navigate
the vehicle. Our approach relies on, and experimentally validates, recent
imitation learning theory. Empirically, we show that policies trained with
online imitation learning overcome well-known challenges related to covariate
shift and generalize better than policies trained with batch imitation
learning. Built on these insights, our autonomous driving system demonstrates
successful high-speed off-road driving, matching the state-of-the-art
performance.
Comment: 13 pages, Robotics: Science and Systems (RSS) 201
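The covariate-shift advantage of online over batch imitation learning can be illustrated with a DAgger-style loop: the learner acts, the expert labels the states the learner actually visits, and the policy is refit on the aggregated data. The dynamics, expert, and linear policy class below are assumptions for the sketch, not the paper's MPC expert or neural-network policy:

```python
import numpy as np

def expert(s):
    return -0.8 * s                     # expert steers the state toward 0

def rollout(w, s0=5.0, horizon=20):
    """Run the learner's linear policy a = w*s; return visited states."""
    states, s = [], s0
    for _ in range(horizon):
        states.append(s)
        s = s + w * s                   # toy dynamics: s' = s + a
    return np.array(states)

# Batch imitation would fit once on the expert's own trajectory. Online
# imitation (DAgger) instead aggregates expert labels on the states the
# *learner* visits, which corrects covariate shift.
w = 0.0                                 # initial (bad) policy
X, Y = np.empty(0), np.empty(0)
for _ in range(5):
    visited = rollout(w)
    X = np.concatenate([X, visited])
    Y = np.concatenate([Y, expert(visited)])  # expert labels learner's states
    w = float(np.dot(X, Y) / np.dot(X, X))    # least-squares refit

print(round(w, 3))                      # converges to the expert gain -0.8
```

In this noiseless toy the learner recovers the expert gain; the point of the online scheme is that the training distribution always matches the states the current policy actually reaches.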
Scene Understanding for Autonomous Manipulation with Deep Learning
Over the past few years, deep learning techniques have achieved tremendous success
in many visual understanding tasks such as object detection, image segmentation,
and caption generation. Despite thriving in computer vision and natural language
processing, deep learning has not yet shown significant impact in robotics.
Due to the gap between theory and application, there are many challenges when
applying the results of deep learning to the real robotic systems. In this study,
our long-term goal is to bridge the gap between computer vision and robotics by
developing visual methods that can be used in real robots. In particular, this work
tackles two fundamental visual problems for autonomous robotic manipulation: affordance
detection and fine-grained action understanding. Theoretically, we propose
different deep architectures that further improve the state of the art in each problem.
Empirically, we show that the outcomes of our proposed methods can be applied to
real robots and allow them to perform useful manipulation tasks.
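Affordance detection of the kind described above is commonly cast as per-pixel classification over class score maps. A toy sketch, assuming a network has already produced the score maps; the class names and map sizes are illustrative, not taken from the work:

```python
import numpy as np

# Hypothetical affordance classes a manipulation robot might care about.
CLASSES = ["background", "grasp", "cut"]

# Stand-in for a network's output: one score map per class (C x H x W).
rng = np.random.default_rng(2)
score_maps = rng.normal(size=(len(CLASSES), 8, 8))

# Per-pixel affordance label = argmax over the class dimension.
labels = score_maps.argmax(axis=0)                  # H x W label map

# A robot can then act on, e.g., the pixels labelled 'grasp'.
grasp_pixels = np.argwhere(labels == CLASSES.index("grasp"))
```

The deep architectures proposed in the work would replace the random score maps here; the decoding from score maps to an actionable label map is the part this sketch shows.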
Design of an Autonomous Agriculture Robot for Real Time Weed Detection using CNN
Agriculture has always been an integral part of the world. As the human
population keeps rising, the demand for food increases, and so does the
dependency on the agriculture industry. In today's scenario, however, low
yield, scarce rainfall, and similar factors have created a dearth of manpower
in the agricultural sector; people are moving to the cities, and villages
are becoming more and more urbanized. On the other hand, the field of robotics
has seen tremendous development in the past few years. Concepts like Deep
Learning (DL), Artificial Intelligence (AI), and Machine Learning (ML) are
being incorporated with robotics to create autonomous systems for various
sectors like automotive, agriculture, and assembly-line management. Deploying
such autonomous systems in the agricultural sector helps in many aspects, such
as reducing manpower and improving the yield and nutritional quality of crops. In this
paper, the system design of an autonomous agricultural robot which primarily
focuses on weed detection is described. A modified deep-learning model for the
purpose of weed detection is also proposed. The primary objective of this robot
is real-time weed detection without any human involvement, but the design can
also be extended to robots for various other farming applications, such as weed
removal, plowing, and harvesting, in turn making the farming industry more
efficient. Source code and other details can be found at
https://github.com/Dhruv2012/Autonomous-Farm-Robot
Comment: Published at the AVES 2021 conference.
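The weed-detection step can be pictured as a small convolutional pipeline: convolve the camera frame, apply a nonlinearity, pool, and score. A minimal single-layer sketch in NumPy; the actual system uses a modified deep-learning model, and all sizes and weights here are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernel, weights, bias):
    """Conv -> ReLU -> global average pool -> logistic 'weed' score."""
    feat = np.maximum(conv2d(image, kernel), 0.0)   # ReLU feature map
    pooled = feat.mean()                            # global average pooling
    logit = weights * pooled + bias
    return 1.0 / (1.0 + np.exp(-logit))             # probability of 'weed'

rng = np.random.default_rng(1)
frame = rng.random((16, 16))            # stand-in for a camera frame
kernel = rng.normal(size=(3, 3))
score = classify(frame, kernel, weights=2.0, bias=-1.0)
```

A deployed detector stacks many such layers with learned kernels and localizes weeds spatially rather than producing one score per frame, but the conv-activate-pool-score structure is the same.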