Autonomous Robot Navigation with Rich Information Mapping in Nuclear Storage Environments
This paper presents our method for enabling an unmanned ground vehicle (UGV) to perform inspection tasks in nuclear environments using rich information maps. To reduce inspectors' exposure to elevated radiation levels, we developed an autonomous navigation framework for the UGV to carry out routine inspections such as counting containers, recording their ID tags, and performing gamma measurements on some of them. To achieve autonomy, a rich information map is generated that includes not only the 2D global cost map of obstacle locations for path planning, but also the location and orientation of the objects of interest from the inspector's perspective. The UGV's autonomy framework uses this information to prioritize the locations to navigate to for the inspections. In this paper, we present our method for generating this rich information map, originally developed to meet the requirements of the International Atomic Energy Agency (IAEA) Robotics Challenge. We demonstrate the performance of our method in a simulated testbed environment containing uranium hexafluoride (UF6) storage container mock-ups.
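To make the idea concrete, here is a minimal Python sketch of what such a "rich information map" could look like: a 2D cost grid augmented with per-object pose annotations and a simple prioritization query. All class and field names are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a rich information map: a 2D global cost map
# augmented with object-of-interest annotations, per the abstract.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectOfInterest:
    object_id: str          # e.g. a container ID tag
    xy: tuple               # position in the global map frame (meters)
    yaw: float              # orientation to face the object from (radians)
    needs_gamma_scan: bool  # whether a gamma measurement is still pending

@dataclass
class RichInformationMap:
    cost_map: np.ndarray    # 2D global cost map used for path planning
    resolution: float       # meters per grid cell
    objects: list = field(default_factory=list)

    def next_inspection_goal(self, robot_xy):
        """Pick the nearest object with a pending measurement; a simple
        stand-in for the paper's prioritization policy."""
        pending = [o for o in self.objects if o.needs_gamma_scan]
        if not pending:
            return None
        return min(pending, key=lambda o: np.hypot(o.xy[0] - robot_xy[0],
                                                   o.xy[1] - robot_xy[1]))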
Development of a bio-inspired vision system for mobile micro-robots
In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from how locusts detect fast-approaching objects. Research suggests that locusts respond to imminent collisions using a wide-field visual neuron called the lobula giant movement detector (LGMD). We applied this locust vision mechanism to the motion control of a mobile robot. The selected image processing method is implemented on a custom extension module built around a low-cost, fast ARM processor. The vision module is mounted on top of a micro-robot to control its trajectory and avoid obstacles. Results from several experiments demonstrate that the developed extension module and the bio-inspired vision system are feasible as a vision module for obstacle avoidance and motion control.
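The LGMD mechanism lends itself to a very compact implementation. The following Python sketch approximates the neuron's response with frame-differencing excitation suppressed by lateral inhibition; this illustrates the general mechanism under assumed parameters, not the paper's exact pipeline.

# Minimal, illustrative LGMD-style looming detector.
import numpy as np

def lgmd_response(prev_frame, frame, decay=0.5, inhibition=0.3):
    """Return a scalar 'membrane potential' in (0, 1) from two grayscale frames."""
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Lateral inhibition: the average excitation of the 4-neighbourhood
    # suppresses each cell, so only coherent expanding edges accumulate.
    pad = np.pad(excitation, 1, mode='edge')
    neighbours = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    summed = np.maximum(excitation - inhibition * neighbours, 0).sum()
    # Sigmoid squashing, normalized by frame size, mimics the firing rate.
    return 1.0 / (1.0 + np.exp(-summed / (frame.size * decay)))

def steer(potential, threshold=0.85):
    """Trigger an avoidance turn when the potential exceeds the threshold."""
    return "turn_away" if potential > threshold else "go_straight"

A rising potential across consecutive frames signals an expanding (approaching) object, which is what makes this cheap enough for a milliwatt-class ARM module.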
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully-autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. AI-based visual navigation approaches, such as deep neural networks (DNNs), are becoming pervasive for standard-size drones, but are considered out of reach for nano-drones only a few centimeters in size. In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end, DNN-based visual navigation. To achieve this goal, we developed a complete methodology for the parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology, we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner, it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.
Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication in the IEEE Internet of Things Journal (IEEE IOTJ).
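The closed-loop pattern described here can be sketched in a few lines of Python. We assume a DroNet-style network that outputs a steering angle and a collision probability, and hypothetical camera/drone interfaces; the 6 fps budget comes from the abstract.

# Sketch of the closed-loop visual navigation pattern: each frame feeds an
# on-board CNN whose outputs drive the drone's velocity setpoints.
# camera, dnn, and drone are assumed interfaces, not a real API.
import time

FRAME_PERIOD = 1.0 / 6.0  # the paper's 6 fps real-time constraint

def navigation_loop(camera, dnn, drone, v_max=1.0):
    while drone.is_flying():
        t0 = time.time()
        frame = camera.grab()                     # grayscale frame from the nano-drone
        steering, p_collision = dnn.infer(frame)  # runs on the parallel ULP processor
        # Slow down as the collision probability rises; steer from the CNN output.
        drone.set_velocity(forward=v_max * (1.0 - p_collision),
                           yaw_rate=steering)
        # Sleep out the remainder of the frame budget to hold the loop rate.
        time.sleep(max(0.0, FRAME_PERIOD - (time.time() - t0)))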
Generative Adversarial Super-Resolution at the Edge with Knowledge Distillation
Single-Image Super-Resolution can support robotic tasks in environments where a reliable visual stream is required to monitor the mission, handle teleoperation, or study relevant visual details. In this work, we propose an efficient Generative Adversarial Network model for real-time Super-Resolution. We adopt a tailored architecture of the original SRGAN and apply model quantization to boost execution on CPU and Edge TPU devices, achieving up to 200 fps inference. We further optimize our model by distilling its knowledge into a smaller version of the network, obtaining remarkable improvements compared to the standard training approach. Our experiments show that our fast and lightweight model largely preserves image quality compared to heavier state-of-the-art models. Finally, we conduct experiments on image transmission with bandwidth degradation to highlight the advantages of the proposed system for mobile robotic applications.
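For readers unfamiliar with distillation in this setting, here is an illustrative PyTorch sketch of the core objective: the small student is trained to match both the ground-truth high-resolution image and the frozen teacher's output. The loss choice and weighting are assumptions, not the paper's exact recipe.

# Illustrative knowledge-distillation objective for super-resolution.
import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, lr_img, hr_img, alpha=0.5):
    """Blend a supervised L1 term with a teacher-matching L1 term."""
    with torch.no_grad():
        teacher_sr = teacher(lr_img)             # frozen teacher prediction
    student_sr = student(lr_img)
    loss_gt = F.l1_loss(student_sr, hr_img)      # match the ground truth
    loss_kd = F.l1_loss(student_sr, teacher_sr)  # match the teacher
    return alpha * loss_kd + (1.0 - alpha) * loss_gt

The appeal of this setup is that the teacher's outputs provide a smoother training signal than the ground truth alone, which is what lets the compact quantized student close much of the quality gap.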
A Cost-Effective Person-Following System for Assistive Unmanned Vehicles with Deep Learning at the Edge
The vital statistics of the last century highlight a sharp increase in the average age of the world population, with a consequent growth in the number of older people. Service robotics applications have the potential to provide systems and tools that support older adults in living autonomously and self-sufficiently in their own homes, avoiding the need for monitoring by third parties. In this context, we propose a cost-effective, modular solution to detect and follow a person in an indoor domestic environment. We exploit the latest advancements in deep learning optimization techniques and compare different neural network accelerators to provide a robust and flexible person-following system at the edge. Our proposed cost-effective and power-efficient solution is fully integrable with pre-existing navigation stacks and lays the foundations for the development of fully-autonomous and self-contained service robotics applications.
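A person-following pipeline of this kind typically couples an edge-accelerated detector with a simple proportional controller. The Python sketch below shows one plausible arrangement; the detector, robot, and detection fields are all hypothetical interfaces assumed for illustration.

# Hypothetical person-following step: detect, pick the best person,
# and steer to keep them centered at a set following distance.
def follow_person(camera, detector, robot,
                  target_dist=1.2, k_yaw=1.5, k_fwd=0.8):
    frame = camera.grab()
    detections = detector.detect(frame)   # e.g. a quantized detector on an edge accelerator
    people = [d for d in detections if d.label == "person"]
    if not people:
        robot.set_velocity(forward=0.0, yaw_rate=0.0)  # stop if the target is lost
        return
    target = max(people, key=lambda d: d.score)
    # Horizontal offset of the bounding-box center, normalized to [-1, 1].
    err_x = (target.box_center_x / frame.width) * 2.0 - 1.0
    # Distance error from depth or estimated box height (assumed available).
    err_d = target.distance - target_dist
    robot.set_velocity(forward=k_fwd * err_d, yaw_rate=-k_yaw * err_x)

Running this step at the camera frame rate yields the smooth following behavior the abstract describes, and the velocity commands slot directly into a pre-existing navigation stack.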