Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.
A survey on fractional order control techniques for unmanned aerial and ground vehicles
In recent years, numerous applications of science and engineering for the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems based on fractional calculus have been realized. The extra fractional-order derivative terms allow the performance of these systems to be optimized. The review presented in this paper focuses on the control problems of UAVs and UGVs that have been addressed by fractional-order techniques over the last decade.
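As a rough illustration of the fractional-order control idea this survey covers, the sketch below implements a PI^λD^μ controller using the truncated Grünwald-Letnikov approximation of the fractional derivative. All gains, orders, and the memory length are illustrative assumptions, not values taken from the survey.

```python
# Sketch of a fractional-order PI^lambda D^mu controller using the
# truncated Grunwald-Letnikov (GL) approximation. Gains and orders
# below are illustrative assumptions, not from the survey.

def gl_coeffs(alpha, n):
    """Coefficients (-1)^j * C(alpha, j) via the standard GL recurrence."""
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(history, alpha, h):
    """Approximate D^alpha of a sampled signal.

    history[0] is the newest sample; h is the sampling period.
    A negative alpha yields a fractional integral instead.
    """
    c = gl_coeffs(alpha, len(history))
    return sum(cj * xj for cj, xj in zip(c, history)) / h ** alpha

class FractionalPID:
    def __init__(self, kp, ki, kd, lam, mu, h, memory=200):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.h = lam, mu, h
        self.memory = memory   # truncated GL memory length (assumption)
        self.errors = []       # newest sample first

    def update(self, error):
        self.errors.insert(0, error)
        del self.errors[self.memory:]
        i_term = gl_derivative(self.errors, -self.lam, self.h)  # D^(-lambda)
        d_term = gl_derivative(self.errors, self.mu, self.h)    # D^mu
        return self.kp * error + self.ki * i_term + self.kd * d_term
```

With λ = μ = 1 the GL coefficients reduce to the first-difference weights [1, -1, 0, ...], recovering a classical PID, which is one way to sanity-check the recurrence.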
Design and control of laser micromachining workstation
The production process of miniature devices and microsystems requires the use of non-conventional micromachining techniques. In the past few decades, laser micromachining has become the micro-manufacturing technique of choice for many industrial and research applications. This paper discusses the design of the motion control system for a laser micromachining workstation, with particulars about the automatic focusing and the control of the work platform used in the workstation. The automatic focusing is solved in a sliding-mode optimization framework, and a preview controller is used to control the motion platform. Experimental results of both motion control and actual laser micromachining are presented.
Discovery and recognition of motion primitives in human activities
We present a novel framework for the automatic discovery and recognition of
motion primitives in videos of human activities. Given the 3D pose of a human
in a video, human motion primitives are discovered by optimizing the `motion
flux', a quantity which captures the motion variation of a group of skeletal
joints. A normalization of the primitives is proposed in order to make them
invariant to a subject's anatomical variations and to the data sampling
rate. The discovered primitives are unknown and unlabeled, and are
collected without supervision into classes via a hierarchical non-parametric
Bayesian mixture model. Once classes are determined and labeled, they are
further analyzed to establish models for recognizing the discovered
primitives. Each primitive model is defined by a set of learned parameters.
Given new video data and the estimated pose of the subject appearing in the
video, the motion is segmented into primitives, which are recognized with a
probability computed from the parameters of the learned models.
Using our framework we build a publicly available dataset of human motion
primitives, using sequences taken from well-known motion capture datasets. We
expect that our framework, by providing an objective way for discovering and
categorizing human motion, will be a useful tool in numerous research fields
including video analysis, human-inspired motion generation, learning by
demonstration, intuitive human-robot interaction, and human behavior analysis.
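The abstract describes the motion flux only informally, as a quantity capturing the motion variation of a group of skeletal joints. The sketch below is one plausible reading of that idea, accumulating joint-group speed over a pose sequence; the paper's actual definition may differ, and the windowing and joint grouping here are assumptions for illustration.

```python
# Hedged sketch in the spirit of the `motion flux' the abstract mentions:
# accumulate the speed of a chosen group of skeletal joints over a pose
# sequence. This is an illustrative assumption, not the paper's definition.

import math

def joint_speeds(prev_pose, pose, dt):
    """Per-joint speed between two 3D skeleton poses.

    Each pose is a list of (x, y, z) joint positions; dt is the frame period.
    """
    return [math.dist(p, q) / dt for p, q in zip(prev_pose, pose)]

def motion_flux(poses, joint_group, dt):
    """Accumulate the speed of a joint group over consecutive frames.

    High values flag windows where the group moves a lot, which is where
    primitive boundaries would plausibly be discovered.
    """
    total = 0.0
    for prev_pose, pose in zip(poses, poses[1:]):
        speeds = joint_speeds(prev_pose, pose, dt)
        total += sum(speeds[j] for j in joint_group) * dt
    return total
```

A stationary joint contributes nothing, so comparing the flux of different joint groups gives a simple, pose-only way to rank which body parts drive a motion segment.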
RGB-D datasets using Microsoft Kinect or similar sensors: a survey
RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle, and scale. With the introduction of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, and they are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
Adaptive SLAM with synthetic stereo dataset generation for real-time dense 3D reconstruction
In robotic mapping and navigation, of prime importance today with the trend toward autonomous cars, simultaneous localization and mapping (SLAM) algorithms often use stereo vision to extract 3D information about the surrounding world. Whereas the number of creative methods for stereo-based SLAM is continuously increasing, the variety of datasets is relatively poor and the size of their contents relatively small. This size issue is increasingly problematic with the recent explosion of deep-learning-based approaches, since several methods require a large amount of data. These techniques enhance the precision of both localization estimation and mapping estimation to a point where the accuracy of the sensors used to obtain the ground truth might be questioned. Finally, because most of these technologies are now embedded in on-board systems, power consumption and real-time constraints turn out to be key requirements. Our contribution is twofold: we propose an adaptive SLAM method that reduces the number of processed frames with minimal impact on error, and we make available a synthetic, flexible stereo dataset with absolute ground truth, which allows new benchmarks to be run for visual odometry challenges. This dataset is available online at http://alastor.labri.fr/
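The abstract does not say which criterion the adaptive method uses to decide when a frame can be skipped. One common heuristic, shown below purely as an illustrative assumption, is to process a frame only when the estimated motion since the last processed keyframe exceeds fixed translation or rotation thresholds.

```python
# Hedged sketch of one plausible frame-selection heuristic for an
# adaptive SLAM front end: skip a frame while the estimated camera
# motion since the last processed keyframe stays below fixed thresholds.
# The thresholds and the simplified pose are assumptions, not the
# criterion used in the paper.

import math

def should_process(pose_last, pose_now,
                   trans_thresh=0.05, rot_thresh=math.radians(2.0)):
    """pose_* = (x, y, z, yaw): a toy pose; a real SLAM system uses SE(3)."""
    dx = pose_now[0] - pose_last[0]
    dy = pose_now[1] - pose_last[1]
    dz = pose_now[2] - pose_last[2]
    translation = math.sqrt(dx * dx + dy * dy + dz * dz)
    rotation = abs(pose_now[3] - pose_last[3])
    return translation > trans_thresh or rotation > rot_thresh

def filter_frames(poses):
    """Return the indices of frames the adaptive pipeline would process."""
    kept = [0]  # always process the first frame
    for i in range(1, len(poses)):
        if should_process(poses[kept[-1]], poses[i]):
            kept.append(i)
    return kept
```

On a slowly moving sequence most frames fall below the thresholds and are dropped, which is where the compute savings with small accuracy impact would come from.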
Real2Sim2Real Transfer for Control of Cable-driven Robots via a Differentiable Physics Engine
Tensegrity robots, composed of rigid rods and flexible cables, exhibit high
strength-to-weight ratios and extreme deformations, enabling them to navigate
unstructured terrain and even survive harsh impacts. However, they are hard to
control due to their high dimensionality, complex dynamics, and coupled
architecture. Physics-based simulation is one avenue for developing locomotion
policies that can then be transferred to real robots, but modeling tensegrity
robots is a complex task, so simulations experience a substantial sim2real gap.
To address this issue, this paper describes a Real2Sim2Real strategy for
tensegrity robots. This strategy is based on a differentiable physics engine that
can be trained given limited data from a real robot (i.e. offline measurements
and one random trajectory) and achieve a high enough accuracy to discover
transferable locomotion policies. Beyond the overall pipeline, key
contributions of this work include computing non-zero gradients at contact
points, a loss function, and a trajectory segmentation technique that avoid
conflicts in gradient evaluation during training. The proposed pipeline is
demonstrated and evaluated on a real 3-bar tensegrity robot.
Comment: Submitted to ICRA202
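To see why "non-zero gradients at contact points" matters, note that a hard penalty contact force has exactly zero derivative whenever bodies are apart, giving a gradient-based trainer no signal. The sketch below contrasts that with a standard softplus smoothing; it illustrates the general idea of differentiable contact handling, not the engine or formulation used in the paper, and the stiffness and softness constants are arbitrary.

```python
# Hedged sketch: why contact gradients vanish for a hard penalty model,
# and one standard smoothing that keeps them non-zero. Constants are
# illustrative; this is not the paper's engine.

import math

K = 1000.0   # contact stiffness (assumption)
BETA = 50.0  # softness of the smoothed model (assumption)

def hard_contact_force(gap):
    """Penalty force k * max(0, -gap); gap > 0 means no contact.

    Its derivative w.r.t. gap is exactly 0 whenever gap > 0, so an
    optimizer gets no signal to create or avoid contact.
    """
    return K * max(0.0, -gap)

def soft_contact_force(gap):
    """Softplus-smoothed penalty: (k / beta) * log(1 + exp(-beta * gap)).

    Approaches the hard model as beta grows, yet stays differentiable
    with a non-zero slope at every finite gap.
    """
    return K * math.log1p(math.exp(-BETA * gap)) / BETA

def soft_contact_grad(gap):
    """Analytic d(force)/d(gap) = -k / (exp(beta * gap) + 1): non-zero
    even when the bodies are slightly apart."""
    return -K / (math.exp(BETA * gap) + 1.0)
```

The analytic gradient can be checked against a central finite difference of the smoothed force, which is a useful sanity test for any hand-written contact Jacobian.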
Artificial Intelligence Applications for Drones Navigation in GPS-denied or degraded Environments
The abstract is in the attachment.