
    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on simple yet stable visual navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of representing the entire 3D formation as a convex hull projected along a desired path that the group has to follow. This approach provides a collision-free solution and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
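    The paper's full formulation (a 3D convex hull driven along the desired path) is not reproduced here, but the receding-horizon idea behind MPC-based formation following can be sketched compactly. The snippet below is a minimal, illustrative stand-in: a single 2D double-integrator follower tracks the leader's path shifted by a fixed formation offset, with the finite-horizon cost minimized by scipy.optimize.minimize. The horizon length, offset, weights and all function names are assumptions, not the authors' implementation.

    # Minimal receding-horizon (MPC-style) tracking sketch for one follower in a
    # leader-follower formation. Simplified stand-in for the paper's convex-hull
    # formulation: the follower is a 2D double integrator that tracks the leader's
    # path shifted by a fixed formation offset. All names and parameters are
    # illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    DT, HORIZON = 0.2, 8            # time step [s] and prediction horizon length
    OFFSET = np.array([-1.0, 0.5])  # desired follower position relative to leader

    def rollout(x0, controls):
        """Propagate double-integrator dynamics x = [px, py, vx, vy]."""
        x = x0.copy()
        states = []
        for u in controls.reshape(HORIZON, 2):
            x[:2] += DT * x[2:]
            x[2:] += DT * u
            states.append(x.copy())
        return np.array(states)

    def mpc_cost(controls, x0, leader_path):
        """Penalize deviation from the offset leader path plus control effort."""
        states = rollout(x0, controls)
        refs = leader_path[:HORIZON] + OFFSET
        tracking = np.sum((states[:, :2] - refs) ** 2)
        effort = 1e-2 * np.sum(controls ** 2)
        return tracking + effort

    def mpc_step(x0, leader_path):
        """One receding-horizon step: optimize the control sequence, apply the first input."""
        u0 = np.zeros(2 * HORIZON)
        res = minimize(mpc_cost, u0, args=(x0, leader_path), method="L-BFGS-B")
        return res.x[:2]

    # Example: follower starts at rest while the leader moves along a straight line.
    leader_path = np.stack([np.linspace(0, 5, 40), np.zeros(40)], axis=1)
    state = np.array([0.0, 0.0, 0.0, 0.0])
    print("first commanded acceleration:", mpc_step(state, leader_path))

    In a real deployment the optimizer would additionally carry collision and visibility constraints, which is where the paper's convex-hull representation of the formation enters.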

    2007 Annual Report of the Graduate School of Engineering and Management, Air Force Institute of Technology

    The Graduate School's Annual Report highlights research focus areas, new academic programs, faculty accomplishments and news, and provides top-level sponsor-funded research data and information.

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization on many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among alternative sensing mechanisms, cameras are an attractive choice for an onboard sensor due to the richness of the information they capture, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods of time or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to a task rather than a single vehicle. Collaboration between multiple vehicles allows for better estimation accuracy, task distribution and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path planning frameworks. Although these were created as two separate steps, the ideal application would combine them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, the dissertation first investigates methods for feature-based localization, with the possibility of fusing two types of localization data: one computed onboard each MAV, and the other derived from relative measurements between the vehicles. Feature-based methods were preferred over direct methods because tangible data packets can be transferred between vehicles with relative ease, and because feature data requires minimal data transfer compared to full images. Inspired by techniques from multiple view geometry and structure from motion, the localization algorithm provides decentralized, full 6-degree-of-freedom pose estimation, complete with a consistent fusion methodology that yields robust estimates at discrete instants and thus does not require constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning, with a focus on uncertainty-aware planning, where the algorithm is responsible not only for generating a valid, collision-free path, but also for ensuring that this path allows successful localization throughout. As joint multi-robot planning can be computationally intractable, planning was divided into two steps from a vision-aware perspective. Since the first step toward improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed that computes the viewpoints from which multiple vehicles can best improve an existing sparse reconstruction.
    This algorithm uses a cost function built from vision-based heuristics that evaluates the quality of the images expected from any set of viewpoints; the cost is minimized with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an efficient evolutionary strategy that can handle very high-dimensional search spaces. In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed, which incorporates similar vision heuristics into an information-gain-based framework in order to drive individual vehicles toward areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated in simulation.
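    As a rough illustration of the next-best-multi-view step, the sketch below minimizes a placeholder viewpoint cost over the joint positions of three cameras. The cost (keep every sparse map point observable from roughly 2 m while spreading the cameras apart) and the plain isotropic (mu, lambda) evolution strategy are simplifications standing in for the dissertation's vision heuristics and for full CMA-ES; every name and constant here is an assumption.

    # Toy next-best-multi-view optimization sketch. The viewpoint cost is a
    # placeholder heuristic, not the dissertation's actual cost, and a plain
    # isotropic (mu, lambda) evolution strategy stands in for full CMA-ES.
    import numpy as np

    rng = np.random.default_rng(0)
    N_MAVS = 3                                     # number of vehicles (3D position each)
    DIM = 3 * N_MAVS                               # joint viewpoint vector dimension
    map_points = rng.uniform(-5, 5, size=(50, 3))  # hypothetical sparse reconstruction

    def viewpoint_cost(x):
        """Lower is better: keep cameras ~2 m from the point cloud and spread them apart."""
        cams = x.reshape(N_MAVS, 3)
        dists = np.linalg.norm(map_points[None, :, :] - cams[:, None, :], axis=2)
        coverage = np.sum((dists.min(axis=0) - 2.0) ** 2)     # each point seen from ~2 m
        spread = -np.sum(np.linalg.norm(cams[:, None] - cams[None, :], axis=2))
        return coverage + 0.1 * spread

    def evolution_strategy(cost, dim, iters=100, pop=20, elite=5, sigma=1.0):
        """Minimal (mu, lambda) ES: sample around the mean, recombine the best samples."""
        mean = np.zeros(dim)
        for _ in range(iters):
            samples = mean + sigma * rng.standard_normal((pop, dim))
            order = np.argsort([cost(s) for s in samples])
            mean = samples[order[:elite]].mean(axis=0)   # recombination of the elite
            sigma *= 0.97                                # simple step-size decay
        return mean

    best = evolution_strategy(viewpoint_cost, DIM)
    print("suggested viewpoints:\n", best.reshape(N_MAVS, 3))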

    Aerial collective systems

    Deployment of multiple flying robots has attracted the interest of several research groups in recent times, both because such a feat presents many interesting scientific challenges and because aerial collective systems have huge potential in terms of applications. By working together, multiple robots can perform a given task more quickly or more efficiently than a single system. Furthermore, multiple robots can share computing, sensing and communication payloads, leading to lighter robots that can be safer than a larger system, easier to transport and even disposable in some cases. Deploying a fleet of unmanned aerial vehicles instead of a single aircraft allows rapid coverage of a relatively large area or volume. Collaborating airborne agents can help each other by relaying communication or by providing navigation means to their neighbours. Flying in formation provides an effective way of decongesting the airspace. Aerial swarms also have enormous artistic potential because they allow the creation of physical 3D structures that can dynamically change their shape over time. However, the challenges in actually building and controlling aerial swarms are numerous. First of all, a flying platform is often more complicated to engineer than a terrestrial robot because of the inherent weight constraints and the absence of a mechanical link to any inertial frame that could provide mechanical stability and a state reference. In the first section of this chapter, we therefore review these challenges and provide pointers to state-of-the-art methods for solving them. As soon as flying robots need to interact with each other, all sorts of problems arise, such as wireless communication from and to rapidly moving objects and relative positioning. The aim of Section 3 is therefore to review possible approaches to technically enable coordination among flying systems. Finally, Section 4 tackles the challenge of designing individual controllers that enable coherent behaviour at the level of the swarm. This challenge is made even more difficult with flying robots because of their 3D nature and their motion constraints, which are often tied to the specific architectures of the underlying physical platforms. This final section is complementary to the rest of this book as it focuses only on methods that have been designed for aerial collective systems.

    Control and Stabilization of an Unmanned Helicopter Following a Dynamic Trajectory

    Department of Cybernetics

    Toward a robot swarm protecting a group of migrants

    Various geopolitical conflicts of recent years have led to the mass migration of several civilian populations. These migrations take place in militarized zones, posing real danger to the populations; indeed, civilians are increasingly targeted during military assaults. Defense and security needs have increased, and there is therefore a need to prioritize the protection of migrants. Very few or no arrangements are available to manage the scale of displacement and the protection of civilians during migration. In order to increase their security during mass migration through inhospitable territory, this article proposes an assistive system using a team of mobile robots, labeled a rover swarm, that is able to provide a safety area around the migrants. We suggest a coordination algorithm combining a convolutional neural network (CNN) and fuzzy logic that allows the swarm to synchronize its movements and provide better sensor coverage of the environment. Implementation is carried out on a reduced-scale rover to enable evaluation of the functionality of the suggested software architecture and algorithms. The results bring new perspectives to helping and protecting migrants with a swarm that operates in a complex and dynamic environment.
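    As a toy illustration of the fuzzy-logic side of such a coordination scheme (the CNN perception stage is not modeled), the sketch below maps one rover's measured distance to the migrant group onto a speed command using a Sugeno-style weighted-average defuzzification. The membership functions, rule consequents and thresholds are invented for illustration and are not the article's controller.

    # Tiny fuzzy-inference sketch for one rover in a protective swarm: map the
    # measured distance to the group onto a speed command. All shapes, rules and
    # numbers are illustrative assumptions.
    import numpy as np

    def shoulder_low(x, a, b):
        """Membership 1 below a, falling linearly to 0 at b."""
        return np.clip((b - x) / (b - a), 0.0, 1.0)

    def shoulder_high(x, a, b):
        """Membership 0 below a, rising linearly to 1 at b."""
        return np.clip((x - a) / (b - a), 0.0, 1.0)

    def tri(x, a, b, c):
        """Triangular membership with support [a, c] and peak at b."""
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

    def fuzzy_speed(distance_m):
        """Rules: NEAR -> slow down, GOOD -> cruise, FAR -> catch up."""
        near = shoulder_low(distance_m, 2.0, 6.0)
        good = tri(distance_m, 4.0, 8.0, 12.0)
        far = shoulder_high(distance_m, 10.0, 16.0)
        speeds = np.array([0.2, 1.0, 1.8])        # m/s consequent for each rule
        weights = np.array([near, good, far])
        return float(np.dot(weights, speeds) / (weights.sum() + 1e-9))

    for d in (3.0, 8.0, 15.0):
        print(f"distance {d:4.1f} m -> commanded speed {fuzzy_speed(d):.2f} m/s")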

    Flexible collaborative transportation by a team of rotorcraft

    We propose a combined method for the collaborative transportation of a suspended payload by a team of rotorcraft. A recent distance-based formation-motion control algorithm, based on assigning distance disagreements among robots, generates the acceleration signals to be tracked by the vehicles. In particular, the proposed method requires neither global positions nor the tracking of prescribed trajectories for the motion of the team members. The acceleration signals are tracked accurately by an Incremental Nonlinear Dynamic Inversion controller designed for rotorcraft that measures and resists the tensions from the payload. Our approach allows us to analyze the accelerations and forces involved in the system, so that we can explicitly calculate worst-case conditions that guarantee nominal performance, provided that the payload starts at rest at the 2D centroid of the formation and is not subject to significant disturbances. For example, we can calculate the maximum safe deformation of the team with respect to its desired shape. We demonstrate our method with a team of four rotorcraft carrying a suspended object two times heavier than the maximum payload of an individual vehicle. Last but not least, our proposed algorithm is available to the community in the open-source autopilot Paparazzi. (Comment: ICRA 2019, 6+1 pages)
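    To give a flavor of distance-based formation control in general (not the paper's distance-disagreement algorithm, and without the Incremental Nonlinear Dynamic Inversion inner loop or the suspended payload), the sketch below runs the standard gradient law u_i = -k * sum_j (||p_i - p_j||^2 - d_ij^2)(p_i - p_j) for four single-integrator agents converging to a unit square; the gains and simulation setup are assumptions.

    # Generic distance-based formation control: a textbook gradient law steering
    # four 2D agents toward a prescribed square of inter-agent distances. Not the
    # paper's algorithm; purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    # Desired squared distances for a unit square (sides of length 1, diagonals sqrt(2)).
    D2 = np.array([[0, 1, 2, 1],
                   [1, 0, 1, 2],
                   [2, 1, 0, 1],
                   [1, 2, 1, 0]], dtype=float)

    def formation_control(p, gain=0.5):
        """u_i = -gain * sum_j (||p_i - p_j||^2 - d_ij^2) (p_i - p_j)."""
        u = np.zeros_like(p)
        for i in range(len(p)):
            for j in range(len(p)):
                if i == j:
                    continue
                e = p[i] - p[j]
                u[i] -= gain * (e @ e - D2[i, j]) * e
        return u

    # Single-integrator simulation from a random start.
    p = rng.uniform(-1, 1, size=(4, 2))
    for _ in range(2000):
        p += 0.01 * formation_control(p)

    # Achieved inter-agent distances should approach sqrt(D2).
    print(np.round(np.linalg.norm(p[:, None] - p[None, :], axis=2), 3))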
