
    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    Get PDF
    In recent years, numerous applications of fractional calculus to the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems have been realized in science and engineering. The extra fractional order derivative terms make it possible to optimize the performance of these systems. The review presented in this paper focuses on the control problems of UAVs and UGVs that have been addressed by fractional order techniques over the last decade.
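    To make the fractional order idea concrete, here is a minimal sketch of a PI^lambda D^mu (fractional-order PID) controller using the short-memory Grünwald-Letnikov approximation; the gains, orders, time step and memory length below are illustrative choices, not values taken from any surveyed work.

```python
import numpy as np

def gl_weights(alpha, n):
    # Binomial weights w_k = (-1)^k * C(alpha, k), via the recurrence
    # w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

class FOPID:
    """PI^lambda D^mu controller with a short-memory GL approximation."""

    def __init__(self, kp, ki, kd, lam, mu, dt, memory=200):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.dt = lam, mu, dt
        self.memory = memory
        self.w_i = gl_weights(-lam, memory)  # order -lambda: fractional integral
        self.w_d = gl_weights(mu, memory)    # order +mu: fractional derivative
        self.errors = []                     # newest error first

    def update(self, error):
        self.errors.insert(0, error)
        del self.errors[self.memory:]
        e = np.asarray(self.errors)
        n = e.size
        # GL operator: D^alpha e(t) ~= dt**(-alpha) * sum_k w_k * e(t - k*dt)
        i_term = self.dt ** self.lam * float(self.w_i[:n] @ e)
        d_term = self.dt ** -self.mu * float(self.w_d[:n] @ e)
        return self.kp * error + self.ki * i_term + self.kd * d_term

# Example: lam = mu = 1 recovers an ordinary discrete PID, so the fractional
# orders act as two extra tuning knobs.
ctrl = FOPID(kp=1.2, ki=0.5, kd=0.1, lam=0.9, mu=0.7, dt=0.01)
u = ctrl.update(error=0.3)
```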

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    Full text link
    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in vivo by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
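    The vehicle-in-the-loop idea can be summarized in a few lines of data flow: real dynamics and proprioception come from the physical vehicle in the motion-capture arena, while exteroception is rendered synthetically around the measured pose. The sketch below is schematic only; every name in it is a hypothetical placeholder, not the FlightGoggles API.

```python
# Schematic vehicle-in-the-loop step. All classes and functions here are
# hypothetical placeholders illustrating the data flow, not the FlightGoggles API.

class MocapStub:
    def read_pose(self):
        return (0.0, 0.0, 1.0, 0.0, 0.0, 0.0)  # x, y, z, roll, pitch, yaw

class RendererStub:
    def render(self, pose):
        return [[0] * 640 for _ in range(480)]  # placeholder image buffer

def vehicle_in_the_loop_step(mocap, renderer, estimate, control, send_command):
    pose = mocap.read_pose()        # in vivo: real dynamics, real aerodynamics
    image = renderer.render(pose)   # in silico: synthetic camera at that pose
    state = estimate(image)         # perception runs on the rendered image
    send_command(control(state))    # command goes back to the physical vehicle

vehicle_in_the_loop_step(
    MocapStub(), RendererStub(),
    estimate=lambda img: (0.0, 0.0, 1.0),
    control=lambda s: (0.5, 0.5, 0.5, 0.5),
    send_command=lambda u: None,
)
```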

    Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios

    Full text link
    Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output little information when the amount of motion is limited, such as when the camera is almost still. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good lighting scenarios), but they fail severely in the case of fast motion or difficult lighting such as high dynamic range or low-light scenes. In this paper, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly-coupled manner. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines, and 85% over standard-frames-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate - to the best of our knowledge - the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes. Comment: 8 pages, 9 figures, 2 tables
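    As a rough illustration of the event-based half of such a pipeline, the sketch below aggregates a slice of an event stream into a signed frame-like image; the actual method uses motion-compensated event frames inside a tightly-coupled visual-inertial back end, which is not reproduced here.

```python
import numpy as np

def accumulate_events(events, height, width, t0, t1):
    """Aggregate a slice of an event stream into a signed image.

    `events` is an iterable of (t, x, y, polarity) tuples with polarity in
    {-1, +1}; each event marks a pixel-level brightness change. This is a
    simplified stand-in for the motion-compensated event frames used by
    event-based visual-inertial pipelines.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += p
    return frame

# Toy usage: three events fall inside the time window, one falls outside.
evts = [(0.001, 10, 20, +1), (0.002, 11, 20, -1), (0.003, 10, 21, +1), (0.02, 5, 5, +1)]
img = accumulate_events(evts, height=180, width=240, t0=0.0, t1=0.01)
```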

    Application of Simultaneous Localization and Mapping Algorithms for Haptic Teleoperation of Aerial Vehicles

    Get PDF
    In this thesis, a new type of haptic teleoperator system for remote control of Unmanned Aerial Vehicles (UAVs) has been developed, in which Simultaneous Localization and Mapping (SLAM) algorithms are implemented for the purpose of generating the haptic feedback. Specifically, the haptic feedback is provided to the human operator through interaction with an artificial potential field built around the obstacles in a virtual environment located at the master site of the teleoperator system. The obstacles in the virtual environment replicate essential features of the actual remote environment where the UAV executes its tasks. The state of the virtual environment is generated and updated in real time using Extended Kalman Filter (EKF) SLAM algorithms, based on measurements performed by the UAV in the actual remote environment. Two methods for building haptic feedback from SLAM algorithms have been developed. The basic SLAM-based haptic feedback algorithm uses a fixed-size potential field around the obstacles, while the robust SLAM-based haptic feedback algorithm changes the size of the potential field around an obstacle depending on the amount of uncertainty in the obstacle's location, as represented by the covariance estimate provided by the EKF. Simulations and experimental results are presented that evaluate the performance of the proposed teleoperator system.
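    The robust variant's core idea lends itself to a short sketch: inflate the repulsive potential field's radius with an n-sigma bound derived from the EKF covariance of the obstacle estimate, so that more uncertain obstacles push back on the haptic master from farther away. The gains, the 3-sigma inflation and the classic Khatib-style force law below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def repulsive_force(vehicle_pos, obstacle_mean, obstacle_cov,
                    base_radius=1.0, gain=2.0, n_sigma=3.0):
    """Repulsive force from one obstacle in an EKF-SLAM map.

    The influence radius is a fixed base radius inflated by an n-sigma
    bound on the obstacle-position uncertainty (largest covariance
    eigenvalue), mirroring the idea of scaling the potential field with
    the EKF covariance. Gains and the 3-sigma choice are illustrative.
    """
    sigma = np.sqrt(np.max(np.linalg.eigvalsh(obstacle_cov)))
    radius = base_radius + n_sigma * sigma
    diff = np.asarray(vehicle_pos, float) - np.asarray(obstacle_mean, float)
    dist = np.linalg.norm(diff)
    if dist >= radius or dist == 0.0:
        return np.zeros_like(diff)  # outside the field: no haptic force
    # Classic potential-field magnitude, growing as the vehicle penetrates the field.
    magnitude = gain * (1.0 / dist - 1.0 / radius) / dist**2
    return magnitude * diff / dist

# A more uncertain obstacle produces a larger field, so the operator feels
# the repulsive force on the haptic master earlier.
f = repulsive_force([1.0, 0.5], [2.0, 0.5], np.diag([0.3, 0.3]))
```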

    Perception-aware time optimal path parameterization for quadrotors

    Full text link
    The increasing popularity of quadrotors has given rise to a class of predominantly vision-driven vehicles. This paper addresses the problem of perception-aware time optimal path parametrization for quadrotors. Although many different choices of perceptual modalities are available, the low weight and power budgets of quadrotor systems make a camera ideal for on-board navigation and estimation algorithms. However, this does come with a set of challenges. The limited field of view of the camera can restrict the visibility of salient regions in the environment, which dictates the necessity to consider perception and planning jointly. The main contribution of this paper is an efficient time optimal path parametrization algorithm for quadrotors with limited field of view constraints. We show in a simulation study that a state-of-the-art controller can track the planned trajectories, and we validate the proposed algorithm on a quadrotor platform in experiments. Comment: Accepted to appear at ICRA 2020
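    The field-of-view constraint at the heart of the problem can be stated compactly: a landmark is visible when the angle between the camera's optical axis and the bearing to the landmark stays below the half-angle of the camera's cone of view. The sketch below implements only this visibility test; the paper's contribution, coupling it with time optimal path parametrization, is not reproduced.

```python
import numpy as np

def in_fov(cam_pos, cam_axis, landmark, half_angle_deg=45.0):
    """True if `landmark` lies inside the camera's cone of view.

    `cam_axis` is the optical axis expressed in the world frame. The
    half-angle value is an illustrative assumption.
    """
    bearing = np.asarray(landmark, float) - np.asarray(cam_pos, float)
    bearing /= np.linalg.norm(bearing)
    axis = np.asarray(cam_axis, float)
    axis /= np.linalg.norm(axis)
    angle = np.degrees(np.arccos(np.clip(axis @ bearing, -1.0, 1.0)))
    return angle <= half_angle_deg

# Check visibility of a hypothetical landmark along sampled trajectory points.
positions = [np.array([s, 0.0, 1.0]) for s in np.linspace(0.0, 4.0, 9)]
visible = [in_fov(p, cam_axis=[1.0, 0.0, 0.0], landmark=[5.0, 0.5, 1.0]) for p in positions]
```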

    Mixed marker-based/marker-less visual odometry system for mobile robots

    Get PDF
    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drifts in the computed estimates, making autonomous operations really hard to accomplish. This paper proposes a solution that alleviates the impact of the above issues by combining two vision-based pose estimation techniques working on relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which is capable of estimating the relative frame-to-frame movements. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which keeps the drift bounded while also providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated using an off-the-shelf quadrotor via extensive experimental tests.
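    A minimal 2D sketch of the combination is given below: the pose is dead-reckoned from relative frame-to-frame odometry increments and snapped to an absolute estimate whenever a marker at a known map position is observed. All names are illustrative, and a real system would fuse the two estimates rather than simply overwrite one with the other.

```python
import numpy as np

class MixedOdometry:
    """Dead-reckon from relative VO increments; snap to absolute marker fixes.

    A minimal 2D illustration of the mixed scheme: marker observations at
    known map positions bound the drift that pure frame-to-frame visual
    odometry accumulates.
    """

    def __init__(self, x=0.0, y=0.0, yaw=0.0):
        self.pose = np.array([x, y, yaw])

    def apply_vo_increment(self, dx, dy, dyaw):
        # Relative motion expressed in the robot frame, composed onto the pose.
        c, s = np.cos(self.pose[2]), np.sin(self.pose[2])
        self.pose[0] += c * dx - s * dy
        self.pose[1] += s * dx + c * dy
        self.pose[2] += dyaw

    def apply_marker_fix(self, marker_world_pose):
        # A marker at a known position yields an absolute pose: reset the drift.
        self.pose = np.asarray(marker_world_pose, dtype=float).copy()

odo = MixedOdometry()
for _ in range(100):
    odo.apply_vo_increment(0.05, 0.0, 0.001)  # slightly biased increments drift
odo.apply_marker_fix([5.0, 0.0, 0.0])         # absolute correction from a marker
```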

    Fire detection of Unmanned Aerial Vehicle in a Mixed Reality-based System

    Get PDF
    This paper proposes the employment of a low-cost micro-electro-mechanical system comprising an inertial measurement unit (IMU), a consumer-grade digital camera, and a fire detection algorithm on a nano unmanned aerial vehicle for inspection applications. The video stream (monocular camera) and navigation data (IMU) feed a state-of-the-art indoor/outdoor navigation system. The system combines the Robot Operating System and computer vision techniques to render the metric scale of monocular vision and gravity observable, and to provide robust, accurate and novel inter-frame motion estimates. The collected onboard data are communicated to the ground station and processed using a Simultaneous Localisation and Mapping (SLAM) system. A robust and efficient re-localisation SLAM step recovers from tracking failure, motion blur and frame loss in the received data. The fire detection algorithm is based on the fire's colour, its movement attributes, the temporal variation of its intensity and its accumulation around a point. A cumulative time derivative matrix is used to detect areas with the fire's high-frequency luminance flicker (a random characteristic) by analysing the frame-by-frame changes. Colour, surface coarseness, boundary roughness and skewness features are considered while the quadrotor flies autonomously within cluttered and congested areas. A Mixed Reality system was adopted to visualise and test the proposed system in a physical/virtual environment. The results showed that the UAV could successfully detect fire and flame, fly towards and hover around it, communicate with the ground station, and build the SLAM map.
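    The cumulative time derivative matrix admits a compact sketch: sum the absolute frame-to-frame intensity differences over a sliding window and threshold the result, so that pixels exhibiting the fire's high-frequency luminance flicker stand out against a static background. The threshold below is an illustrative assumption; the full detector additionally gates on colour, surface coarseness, boundary roughness and skewness.

```python
import numpy as np

def flicker_map(frames, threshold=15.0):
    """Cumulative time-derivative matrix over a window of grayscale frames.

    Sums |I_t - I_{t-1}| pixel-wise across the window; pixels whose
    accumulated temporal variation exceeds `threshold` are candidate
    fire regions (high-frequency luminance flicker). The threshold is
    illustrative only.
    """
    frames = np.asarray(frames, dtype=np.float32)  # shape (T, H, W)
    derivative = np.abs(np.diff(frames, axis=0))   # shape (T-1, H, W)
    accumulated = derivative.sum(axis=0)           # cumulative matrix
    return accumulated > threshold

# Toy window: one flickering pixel stands out against a static background.
T, H, W = 16, 32, 32
window = np.zeros((T, H, W), dtype=np.float32)
window[::2, 10, 10] = 255.0                        # alternating bright/dark
mask = flicker_map(window)
assert mask[10, 10] and not mask[0, 0]
```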

    Implementing Tracking Error Control for Quadrotor UAV

    Get PDF

    Unmanned Robotic Systems and Applications

    Get PDF
    This book presents recent studies of unmanned robotic systems and their applications. With its five chapters, the book brings together important contributions from renowned international researchers. Unmanned autonomous robots are ideal candidates for applications such as rescue missions, especially in areas that are difficult to access. Swarm robotics (multiple robots working together) is another exciting application of unmanned robotic systems: for example, coordinated search by an interconnected group of moving robots for the purpose of finding a source of hazardous emissions. These robots can behave like individuals working in a group without centralized control.
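    As a toy illustration of such decentralized coordination, the sketch below has each robot compare its own concentration reading with those of its network neighbours and step toward the best reading it can see, with no central controller involved; the Gaussian plume, ring topology and step size are all made-up assumptions.

```python
import numpy as np

def swarm_search_step(positions, measure, neighbors, step=0.2):
    """One decentralised search step toward a hazardous-emission source.

    Each robot compares its own concentration reading with its neighbours'
    and moves toward the best one; coordination emerges from local
    communication alone. `measure` is the field being sampled.
    """
    readings = np.array([measure(p) for p in positions])
    new_positions = positions.copy()
    for i, nbrs in enumerate(neighbors):
        best = max(nbrs, key=lambda j: readings[j], default=i)
        if readings[best] > readings[i]:
            direction = positions[best] - positions[i]
            new_positions[i] = positions[i] + step * direction / np.linalg.norm(direction)
    return new_positions

# Toy Gaussian plume centred at the origin; a ring topology links the robots.
field = lambda p: np.exp(-np.dot(p, p))
pos = np.array([[2.0, 2.0], [2.5, 1.0], [1.0, 2.5], [3.0, 3.0]])
ring = [[1, 2], [0, 3], [0, 3], [1, 2]]
for _ in range(50):
    pos = swarm_search_step(pos, field, ring)
```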