34 research outputs found

    Optic-Flow Based Control of a 46g Quadrotor

    We aim to develop autonomous miniature hovering flying robots capable of navigating in unstructured, GPS-denied environments. A major challenge is the miniaturization of the embedded sensors and processors that allow such platforms to fly autonomously. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in real time on a microcontroller. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors, or assumptions about the environment. Key to this method is the introduction of the translational optic-flow direction constraint (TOFDC), which uses only the direction of the optic flow, not its scale, to correct for inertial sensor drift during changes of direction. This solution requires considerably simpler electronics and sensors and works in environments of any geometry. We demonstrate an implementation of this algorithm on a miniature 46 g quadrotor for closed-loop position control.
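
    To illustrate the general idea behind a direction-only optic-flow constraint, the sketch below (a minimal Python illustration, not the authors' implementation; all function and variable names are hypothetical) compares the direction of the measured, derotated optic flow with the direction predicted from the current velocity estimate. Because the distance to the scene only scales the flow, it cancels out of the direction, which is why no distance sensor or environment assumption is needed.

```python
import numpy as np

def translational_flow_direction(v_body, view_dir):
    """Direction of the translational optic flow seen by a sensor looking
    along view_dir when the robot moves with body velocity v_body.
    The distance to the scene scales the flow but not its direction."""
    d = view_dir / np.linalg.norm(view_dir)
    flow = -(v_body - np.dot(v_body, d) * d)   # component of -v orthogonal to the viewing direction
    n = np.linalg.norm(flow)
    return flow / n if n > 1e-9 else np.zeros(3)

def direction_innovation(flow_meas_derotated, v_est, view_dir):
    """Direction-only residual: mismatch between measured and predicted
    optic-flow directions; the flow magnitude is deliberately ignored."""
    m = flow_meas_derotated / max(np.linalg.norm(flow_meas_derotated), 1e-9)
    return m - translational_flow_direction(v_est, view_dir)
```

    In a filter, a residual of this kind could correct the drifting velocity estimate whenever the robot changes direction, without requiring the distance to the observed surface.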

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are numerous. Significantly higher degrees of situational awareness and environmental understanding can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Subterranean (SubT) Challenge. This thesis delves into the broad topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner. I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations, and I examine the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication when a sufficient degree of awareness of neighboring robots is available; these algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
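
    As a rough illustration of how UWB ranging can support positioning in GNSS-denied environments, the following Python sketch solves a generic least-squares multilateration problem from ranges to fixed anchors. It is not the estimator developed in the thesis; the anchor positions and range values are made-up assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Known UWB anchor positions (metres) and ranges measured by the tag (hypothetical values).
anchors = np.array([[0.0, 0.0, 0.5],
                    [5.0, 0.0, 0.5],
                    [5.0, 5.0, 2.0],
                    [0.0, 5.0, 2.0]])
ranges = np.array([2.9, 3.6, 4.4, 3.7])

def residuals(p):
    # Difference between predicted anchor-to-tag distances and measured UWB ranges.
    return np.linalg.norm(anchors - p, axis=1) - ranges

# Least-squares position estimate of the tag, initialized at the anchor centroid.
estimate = least_squares(residuals, x0=anchors.mean(axis=0)).x
print("estimated tag position:", estimate)
```

    The same residual formulation carries over to relative localization, where the "anchors" are other robots whose positions are themselves estimated.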

    3D Scene Reconstruction with Micro-Aerial Vehicles and Mobile Devices

    Scene reconstruction is the process of building an accurate geometric model of one's environment from sensor data. We explore the problem of real-time, large-scale 3D scene reconstruction in indoor environments using small laser range-finders and low-cost RGB-D (color plus depth) cameras. We focus on computationally constrained platforms such as micro-aerial vehicles (MAVs) and mobile devices. These platforms present a set of fundamental challenges: estimating the state and trajectory of the device as it moves within its environment, and utilizing lightweight, dynamic data structures to hold the representation of the reconstructed scene. The system needs to be computationally and memory-efficient so that it can run in real time, onboard the platform. In this work, we present three scene reconstruction systems. The first system uses a laser range-finder and operates onboard a quadrotor MAV. We address the issues of autonomous control, state estimation, path planning, and teleoperation. We propose the multi-volume occupancy grid (MVOG), a novel data structure for building 3D maps from laser data, which provides a compact, probabilistic scene representation. The second system uses an RGB-D camera to recover the 6-DoF trajectory of the platform by aligning sparse features observed in the current RGB-D image against a model of previously seen features. We discuss our work on camera calibration and the depth measurement model. We apply the system onboard an MAV to produce occupancy-based 3D maps, which we utilize for path planning. Finally, we present our contributions to a scene reconstruction system for mobile devices with built-in depth sensing and motion-tracking capabilities. We demonstrate reconstructing and rendering a global mesh on the fly, using only the mobile device's CPU, in very large (300 square meter) scenes, at a resolution of 2-3 cm. To achieve this, we divide the scene into spatial volumes indexed by a hash map. Each volume contains the truncated signed distance function for that area of space, as well as the mesh segment derived from the distance function. This approach allows us to focus computational and memory resources only on areas of the scene which are currently observed, as well as leverage parallelization techniques for multi-core processing.
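
    The volume-hashing idea described at the end can be sketched as follows (a simplified Python illustration, not the system's actual data structure; the block size and field names are assumptions): the scene is split into fixed-size volumes addressed by their integer block coordinates in a hash map, and each volume lazily allocates a small grid of truncated signed distance values only when that part of space is observed.

```python
import numpy as np

BLOCK_SIZE = 16          # voxels per side of one volume (assumption)
VOXEL_SIZE = 0.02        # 2 cm voxels, matching the reported resolution

class TSDFBlock:
    """One spatial volume: truncated signed distance values plus fusion weights."""
    def __init__(self):
        self.tsdf = np.ones((BLOCK_SIZE,) * 3, dtype=np.float32)
        self.weight = np.zeros((BLOCK_SIZE,) * 3, dtype=np.float32)

blocks = {}  # hash map: integer block coordinates -> TSDFBlock

def block_index(point):
    """Integer coordinates of the block containing a 3D point (metres)."""
    return tuple(np.floor(point / (BLOCK_SIZE * VOXEL_SIZE)).astype(int))

def get_block(point):
    """Allocate a block lazily, so memory is spent only on observed space."""
    idx = block_index(point)
    if idx not in blocks:
        blocks[idx] = TSDFBlock()
    return blocks[idx]

# Example: touching a point allocates (or retrieves) the volume that contains it.
get_block(np.array([1.0, 0.3, 0.2]))
```

    Because only observed volumes are allocated, memory grows with the surface actually seen rather than with the bounding box of the whole scene.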

    QuadAALper - The Ambient Assisted Living Quadcopter


    Airborne Navigation by Fusing Inertial and Camera Data

    Unmanned aircraft systems (UASs) are often used as measuring systems, so precise knowledge of their position and orientation is required. This thesis provides research in the conception and realization of a system which combines GPS-assisted inertial navigation systems with the advances in the area of camera-based navigation. It is shown how these complementary approaches can be used in a joint framework. In contrast to widely used concepts utilizing only one of the two approaches, a more robust overall system is realized. The presented algorithms are based on the mathematical concepts of rigid-body motions. After deriving the underlying equations, the methods are evaluated in numerical studies and simulations. Based on the results, real-world systems are used to collect data, which is evaluated and discussed. Two approaches for the system calibration, which describes the offsets between the coordinate frames of the sensors, are proposed. The first approach integrates the parameters of the system calibration into the classical bundle adjustment; the optimization is presented descriptively in a graph-based formulation. It requires a high-precision INS and data from a measurement flight. In contrast to classical methods, a flexible flight course can be used and no cost-intensive ground control points are required. The second approach enables the calibration of inertial navigation systems with low positional accuracy; line observations are used to optimize the rotational part of the offsets. Knowledge of the offsets between the coordinate frames of the sensors allows measurements to be transformed in both directions. This is the basis for a fusion concept combining measurements from the inertial navigation system with an approach for visual navigation. As a result, more robust estimates of the system's own position and orientation are achieved. Moreover, the map created from the camera images is georeferenced. It is shown how this map can be used to navigate an unmanned aerial system back to its starting position in the case of disturbed or failed GPS reception. The high precision of the map allows navigation through previously unexplored areas by taking into consideration the maximal drift of the camera-only navigation. The evaluated concept provides insight into the possibility of robust navigation of unmanned aerial systems with complementary sensors. The constantly increasing computing power allows the evaluation of large amounts of data and the development of new concepts to fuse the information. Future navigation systems will use the data of all available sensors to achieve the best navigation solution at any time.
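
    As a minimal illustration of why the system calibration (the offsets between the sensor coordinate frames) matters, the Python sketch below applies a fixed rotation and lever-arm offset between the INS and camera frames so that camera measurements can be expressed in the georeferenced INS frame. It is a generic rigid-body transform example with made-up values, not the thesis implementation.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Calibration offset between INS and camera frames (hypothetical values):
# a small boresight rotation about z and a lever arm of a few centimetres.
angle = np.deg2rad(1.5)
R_ins_cam = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                      [np.sin(angle),  np.cos(angle), 0.0],
                      [0.0,            0.0,           1.0]])
T_ins_cam = make_transform(R_ins_cam, np.array([0.10, 0.02, -0.05]))

# A pose of the INS in a georeferenced frame (e.g. from GPS-assisted navigation) ...
T_world_ins = make_transform(np.eye(3), np.array([367201.0, 5612400.0, 110.0]))

# ... lets camera measurements be georeferenced by composing the two transforms.
T_world_cam = T_world_ins @ T_ins_cam
print("camera position in the georeferenced frame:", T_world_cam[:3, 3])
```

    With the offsets known, the transform can equally be inverted to map INS poses into the camera frame, which is the bidirectional use mentioned above.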