
    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments. Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    Available Bandwidth Estimation for Adaptive Video Streaming in Mobile Ad Hoc Networks

    We propose in this paper an algorithm for available bandwidth estimation in mobile ad hoc networks and its integration into a conventional routing protocol like AODV for improving rate-adaptive video streaming. Our approach introduces a local estimation of the available bandwidth as well as a prediction of the consumed bandwidth. This information allows the video application to adjust its transmission rate, avoiding network congestion. We conducted a performance evaluation of our solution through simulation experiments using two network scenarios. In the simulation study, transmission of video streams encoded with the H.264/MPEG-4 advanced video coding standard was evaluated. The results reveal performance improvements in terms of packet loss, delay and PSNR. Castellanos, W.; Guerri Cebollada, J. C.; Arce Vila, P. (2019). Available Bandwidth Estimation for Adaptive Video Streaming in Mobile Ad Hoc Networks. International Journal of Wireless Information Networks, 26(3):218-229. https://doi.org/10.1007/s10776-019-00431-0
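
    As a rough illustration of the idea described above, the sketch below estimates available bandwidth from locally observed channel idle time, predicts the flow's consumed bandwidth with an exponentially weighted average, and picks the highest feasible rate from an encoding ladder. All function names, constants and the rate ladder are illustrative assumptions, not the paper's AODV-integrated algorithm.

```python
# Minimal sketch of rate-adaptive sending based on an available-bandwidth
# estimate; names and constants are illustrative, not the paper's protocol.

def available_bandwidth(capacity_bps: float, idle_fraction: float) -> float:
    """Estimate available bandwidth from locally observed channel idle time."""
    return capacity_bps * idle_fraction

def predict_consumed(history_bps, alpha: float = 0.3) -> float:
    """Exponentially weighted prediction of the bandwidth our flow will consume."""
    est = history_bps[0]
    for sample in history_bps[1:]:
        est = alpha * sample + (1 - alpha) * est
    return est

def choose_video_rate(capacity_bps, idle_fraction, history_bps, ladder_bps):
    """Pick the highest encoding rate that fits under the estimated headroom."""
    headroom = available_bandwidth(capacity_bps, idle_fraction) - predict_consumed(history_bps)
    feasible = [r for r in ladder_bps if r <= headroom]
    return max(feasible) if feasible else min(ladder_bps)

# Example: 11 Mb/s channel, 40% idle, past consumption around 1.5 Mb/s.
print(choose_video_rate(11e6, 0.4, [1.4e6, 1.6e6, 1.5e6], [0.5e6, 1e6, 2e6, 4e6]))
```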

    The 1st International Electronic Conference on Algorithms

    This book presents 22 of the accepted presentations at the 1st International Electronic Conference on Algorithms, which was held completely online from September 27 to October 10, 2021. It contains 16 proceedings papers as well as 6 extended abstracts. The works presented in the book cover a wide range of fields dealing with the development of algorithms. Many of the contributions are related to machine learning, in particular deep learning. Another main focus among the contributions is on problems dealing with graphs and networks, e.g., in connection with evacuation planning problems.

    A distributed architecture for unmanned aerial systems based on publish/subscribe messaging and simultaneous localisation and mapping (SLAM) testbed

    A dissertation submitted in fulfilment of the degree of Master of Science, School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg, South Africa, November 2017. The increased capabilities and lower cost of Micro Aerial Vehicles (MAVs) unveil big opportunities for a rapidly growing number of civilian and commercial applications. Some missions require direct control using a receiver in a point-to-point connection, involving one or very few MAVs. An alternative class of mission is remotely controlled, with the control of the drone automated to a certain extent using mission planning software and autopilot systems. For most emerging missions, there is a need for more autonomous, cooperative control of MAVs, as well as more complex data processing from sensors like cameras and laser scanners. In the last decade, this has given rise to extensive research from both academia and industry. This research direction applies robotics and computer vision concepts to Unmanned Aerial Systems (UASs). However, UASs are often designed for specific hardware and software, thus providing limited integration, interoperability and re-usability across different missions. In addition, there are numerous open issues related to UAS command, control and communication (C3), and multi-MAVs. We argue and elaborate throughout this dissertation that some of the recent standard-based publish/subscribe communication protocols can solve many of these challenges and meet the non-functional requirements of MAV robotics applications. This dissertation assesses the MQTT, DDS and TCPROS protocols in a distributed architecture of a UAS control system and Ground Control Station software. While TCPROS has been the leading robotics communication transport for ROS applications, MQTT and DDS are lightweight enough to be used for data exchange between distributed systems of aerial robots. Furthermore, MQTT and DDS are based on industry standards to foster communication interoperability of “things”. Both protocols have been extensively presented to address many of today’s needs related to networks based on the Internet of Things (IoT). For example, MQTT has been used to exchange data with space probes, whereas DDS was employed for aerospace defence and smart city applications. We designed and implemented a distributed UAS architecture based on each publish/subscribe protocol: TCPROS, MQTT and DDS. The proposed communication systems were tested with a vision-based Simultaneous Localisation and Mapping (SLAM) system involving three Parrot AR Drone2 MAVs. Within the context of this study, MQTT and DDS messaging frameworks serve the purpose of abstracting UAS complexity and heterogeneity. Additionally, these protocols are expected to provide low-latency communication and scale up to meet the requirements of real-time remote sensing applications. The most important contribution of this work is the implementation of a complete distributed communication architecture for multi-MAVs. Furthermore, we assess the viability of this architecture and benchmark the performance of the protocols in relation to an autonomous quadcopter navigation testbed composed of a SLAM algorithm, an extended Kalman filter and a PID controller.
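
    A minimal sketch of the publish/subscribe pattern the dissertation evaluates, written against the paho-mqtt 1.x client API; the broker address, topic name and JSON pose payload are assumptions for illustration, not the dissertation's actual message schema.

```python
# Minimal MQTT publish/subscribe sketch (paho-mqtt 1.x API); topic names and
# payload format are hypothetical, not the dissertation's message schema.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    pose = json.loads(msg.payload)          # e.g. {"x": ..., "y": ..., "z": ...}
    print(f"{msg.topic}: {pose}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)           # broker shared by GCS and MAVs
client.subscribe("uas/mav1/pose")           # ground station listens for pose updates
client.loop_start()

# An MAV-side node would publish its SLAM pose estimate on the same topic:
client.publish("uas/mav1/pose", json.dumps({"x": 1.2, "y": 0.4, "z": 3.0}))
```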

    Cooperative Navigation for Low-bandwidth Mobile Acoustic Networks.

    This thesis reports on the design and validation of estimation and planning algorithms for underwater vehicle cooperative localization. While attitude and depth are easily instrumented with bounded error, autonomous underwater vehicles (AUVs) have no internal sensor that directly observes XY position. The global positioning system (GPS) and other radio-based navigation techniques are not available because of the strong attenuation of electromagnetic signals in seawater. The navigation algorithms presented herein fuse local body-frame rate and attitude measurements with range observations between vehicles within a decentralized architecture. The acoustic communication channel is both unreliable and low bandwidth, precluding many state-of-the-art terrestrial cooperative navigation algorithms. We exploit the underlying structure of a post-process centralized estimator in order to derive two real-time decentralized estimation frameworks. First, the origin state method enables a client vehicle to exactly reproduce the corresponding centralized estimate within a server-to-client vehicle network. Second, a graph-based navigation framework produces an approximate reconstruction of the centralized estimate onboard each vehicle. Finally, we present a method to plan a locally optimal server path to localize a client vehicle along a desired nominal trajectory. The planning algorithm introduces a probabilistic channel model into prior Gaussian belief space planning frameworks. In summary, cooperative localization reduces XY position error growth within underwater vehicle networks. Moreover, these methods remove the reliance on static beacon networks, which do not scale to large vehicle networks and limit the range of operations. Each proposed localization algorithm was validated in full-scale AUV field trials. The planning framework was evaluated through numerical simulation. PhD thesis, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113428/1/jmwalls_1.pd
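
    The fusion of onboard dead reckoning with inter-vehicle range observations can be illustrated with a textbook extended Kalman filter range update; the sketch below is a generic two-dimensional example under assumed noise values, not the origin-state or graph-based estimators developed in the thesis.

```python
# Generic EKF update with a single inter-vehicle range measurement; a textbook
# range update, not the thesis's origin-state or graph-based methods.
import numpy as np

def range_update(x, P, server_xy, z_range, sigma_r=1.0):
    """x: [x, y] client position estimate, P: 2x2 covariance,
    server_xy: broadcast server position, z_range: measured acoustic range."""
    dx, dy = x[0] - server_xy[0], x[1] - server_xy[1]
    r_pred = np.hypot(dx, dy)
    H = np.array([[dx / r_pred, dy / r_pred]])     # Jacobian of range w.r.t. [x, y]
    S = H @ P @ H.T + sigma_r**2
    K = P @ H.T / S                                 # Kalman gain (2x1)
    x_new = x + (K * (z_range - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x, P = np.array([10.0, -5.0]), np.eye(2) * 25.0
x, P = range_update(x, P, server_xy=(0.0, 0.0), z_range=12.0, sigma_r=0.5)
print(x)
```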

    A collaborative monocular visual simultaneous localization and mapping solution to generate a semi-dense 3D map.

    The utilization and generation of indoor maps are critical in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques used for such map generation. In SLAM, an agent generates a map of an unknown environment while approximating its own location in it. The prevalence and affordability of cameras encourage the use of Monocular Visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of indoor maps, thus requiring a distributed computational framework. Each agent generates its own local map, which can then be combined with those of other agents into a map covering a larger area. In doing so, they cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of collaborative SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. We propose a system comprised of multiple monocular agents with unknown relative starting positions to generate a semi-dense global map of the environment.
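
    Once overlapping keyframes between two agents are recognised, their local maps can be merged by estimating the rigid transform between matched 3-D map points. The sketch below is a standard Horn/Umeyama-style alignment (ignoring the scale ambiguity of monocular SLAM), not the specific merging pipeline proposed in the work.

```python
# Rigid alignment of two overlapping local maps from matched 3-D map points;
# a generic Kabsch/Umeyama-style sketch without scale, not the thesis pipeline.
import numpy as np

def align_maps(points_a, points_b):
    """Return R, t such that R @ points_b + t approximates points_a (both 3xN)."""
    mu_a = points_a.mean(axis=1, keepdims=True)
    mu_b = points_b.mean(axis=1, keepdims=True)
    H = (points_b - mu_b) @ (points_a - mu_a).T    # cross-covariance of matches
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_a - R @ mu_b
    return R, t

# Toy usage with three matched points per map (columns are points).
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B = np.array([[1.0, 1.0, 2.0], [2.0, 3.0, 2.0], [0.0, 0.0, 0.0]])
R, t = align_maps(A, B)
print(R @ B + t)   # should approximate A
```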

    Advances in point process filters and their application to sympathetic neural activity

    This thesis is concerned with the development of techniques for analyzing the sequences of stereotypical electrical impulses within neurons known as spikes. Sequences of spikes, also called spike trains, transmit neural information; decoding them often provides details about the physiological processes generating the neural activity. Here, the statistical theory of event arrivals, called point processes, is applied to human muscle sympathetic spike trains, a peripheral nerve signal responsible for cardiovascular regulation. A novel technique that uses observed spike trains to dynamically derive information about the physiological processes generating them is also introduced. Despite the emerging usage of individual spikes in the analysis of human muscle sympathetic nerve activity, the majority of studies in this field remain focused on bursts of activity at or below cardiac rhythm frequencies. Point process theory applied to multi-neuron spike trains captured both fast and slow spiking rhythms. First, analysis of high-frequency spiking patterns within cardiac cycles was performed and, surprisingly, revealed fibers with no cardiac rhythmicity. Modeling spikes as a function of average firing rates showed that individual nerves contribute substantially to the differences in the sympathetic stressor response across experimental conditions. Subsequent investigation of low-frequency spiking identified two physiologically relevant frequency bands, and modeling spike trains as a function of hemodynamic variables uncovered complex associations between spiking activity and biophysical covariates at these two frequencies. For example, exercise-induced neural activation enhances the relationship of spikes to respiration but does not affect the extremely precise alignment of spikes to diastolic blood pressure. Additionally, a novel method of utilizing point process observations to estimate an internal state process with partially linear dynamics was introduced. Separation of the linear components of the process model and reduction of the sampled space dimensionality improved the computational efficiency of the estimator. The method was tested on an established biophysical model by concurrently computing the dynamic electrical currents of a simulated neuron and estimating its conductance properties. Computational load reduction, improved accuracy, and applicability outside neuroscience establish the new technique as a valuable tool for decoding large dynamical systems with linear substructure and point process observations
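
    As a hedged illustration of the point-process framework, the sketch below implements a one-dimensional filter with Poisson spike-count observations and an exponential link, tracking a latent log-intensity from binned spikes; the bin width, noise parameter and synthetic spike train are assumptions, and this is not the thesis's estimator for partially linear dynamics.

```python
# One-dimensional point-process filter sketch (Poisson observations with an
# exponential link), in the spirit of stochastic state point-process filters;
# a generic illustration, not the estimator developed in the thesis.
import numpy as np

def pp_filter(spike_counts, dt=0.001, q=1e-4):
    """Track a latent log-intensity x_k from binned spike counts dN_k."""
    x, p = 0.0, 1.0                      # posterior mean and variance
    xs = []
    for dN in spike_counts:
        p_pred = p + q                   # random-walk prediction of the state
        lam = np.exp(x)                  # conditional intensity lambda_k
        p = 1.0 / (1.0 / p_pred + lam * dt)
        x = x + p * (dN - lam * dt)      # innovation: observed minus expected spikes
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
counts = rng.poisson(20 * 0.001, size=2000)       # ~20 Hz synthetic spike train
rates = np.exp(pp_filter(counts))                 # estimated firing rate over time
print(rates[-1])
```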

    Visual/acoustic detection and localisation in embedded systems

    © Cranfield University. The continuous miniaturisation of sensing and processing technologies is increasingly offering a variety of embedded platforms, enabling the accomplishment of a broad range of tasks using such systems. Motivated by these advances, this thesis investigates embedded detection and localisation solutions using vision and acoustic sensors. Focus is particularly placed on surveillance applications using sensor networks. Existing vision-based detection solutions for embedded systems suffer from sensitivity to environmental conditions. In the literature, there seems to be no algorithm able to simultaneously tackle all the challenges inherent to real-world videos. Regarding the acoustic modality, many research works have investigated acoustic source localisation solutions in distributed sensor networks. Nevertheless, it is still a challenging task to develop an efficient algorithm that deals with the experimental issues, approaches the performance required by these systems and performs the data processing in a distributed and robust manner. The movement of scene objects is generally accompanied by sound emissions with features that vary from one environment to another. Therefore, considering the combination of the visual and acoustic modalities would offer a significant opportunity for improving the detection and/or localisation using the described platforms. In the light of the described framework, we investigate in the first part of the thesis the use of a cost-effective vision-based method that can deal robustly with the issue of motion detection in static, dynamic and moving background conditions. For motion detection in static and dynamic backgrounds, we present the development and performance analysis of a spatio-temporal form of the Gaussian mixture model. On the other hand, the problem of motion detection in moving backgrounds is addressed by accounting for registration errors in the captured images. By adopting a robust optimisation technique that takes into account the uncertainty about the visual measurements, we show that high detection accuracy can be achieved. In the second part of this thesis, we investigate solutions to the problem of acoustic source localisation using a trust-region-based optimisation technique. The proposed method shows an overall higher accuracy and convergence improvement compared to a linear-search-based method. More importantly, we show that by characterising the errors in measurements, which is a common problem for such platforms, higher accuracy in the localisation can be attained. The last part of this work studies the different possibilities of combining visual and acoustic information in a distributed sensor network. In this context, we first propose to include the acoustic information in the visual model. The obtained new augmented model provides promising improvements in the detection and localisation processes. The second investigated solution consists in the fusion of the measurements coming from the different sensors. An evaluation of the accuracy of localisation and tracking using a centralised/decentralised architecture is conducted in various scenarios and experimental conditions. Results have shown the capability of this fusion approach to yield higher accuracy in the localisation and tracking of an active acoustic source than by using a single type of data.
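
    For context, the sketch below runs baseline Gaussian-mixture background subtraction with OpenCV's stock MOG2 implementation on a hypothetical video file; the thesis develops a spatio-temporal extension of this model, which the stock implementation does not include, and the motion threshold used here is arbitrary.

```python
# Baseline Gaussian-mixture background subtraction with OpenCV's MOG2; the
# input file name and the 1% foreground threshold are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("scene.avi")                 # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                          # per-pixel foreground mask
    moving = cv2.countNonZero(mask)
    if moving > 0.01 * mask.size:                   # crude motion trigger
        print("motion detected:", moving, "foreground pixels")
cap.release()
```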

    Architectures for embedded multimodal sensor data fusion systems in the robotics and airport traffic surveillance domain

    Smaller autonomous robots and embedded sensor data fusion systems often suffer from limited computational and hardware resources. Many ‘real-time’ algorithms for multimodal sensor data fusion cannot be executed on such systems, at least not in real time and sometimes not at all, because of the computational and energy resources needed, resulting from the architecture of the computational hardware used in these systems. Alternative hardware architectures for generic tracking algorithms could provide a solution to overcome some of these limitations. For tracking and self-localization, sequential Bayesian filters, in particular particle filters, have been shown to be able to handle a range of tracking problems that could not be solved with other algorithms. But particle filters have some serious disadvantages when executed on the serial computational architectures used in most systems. The potential increase in performance for particle filters is huge, as many of the computational steps can be done concurrently. A generic hardware solution for particle filters can relieve the central processing unit from the computational load associated with the tracking task. The general topic of this research is hardware-software architectures for multimodal sensor data fusion in embedded systems, in particular tracking, with the goal to develop a high-performance computational architecture for embedded applications in the robotics and airport traffic surveillance domain. The primary concern of the research is therefore: the integration of domain-specific concept support into hardware architectures for low-level multimodal sensor data fusion, in particular embedded systems for tracking with Bayesian filters; and a distributed hardware-software tracking system for airport traffic surveillance and control systems. Runway incursions are occurrences at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the protected area of a surface designated for the landing and take-off of aircraft. The growing traffic volume has kept runway incursions on the NTSB’s ‘Most Wanted’ list for safety improvements for over a decade. Recent incidents show that the problem still exists. Technological responses that have been deployed in significant numbers are ASDE-X and A-SMGCS. Although these technical responses are a significant improvement and reduce the frequency of runway incursions, some runway incursion scenarios are not optimally covered by these systems, detection of runway incursion events is not as fast as desired, and they are too expensive for all but the biggest airports. Local, short-range sensors could be a solution to provide the necessary affordable surveillance accuracy for runway incursion prevention. In this context the following objectives shall be reached. 1) Show the feasibility of runway incursion prevention systems based on localized surveillance. 2) Develop a design for a local runway incursion alerting system. 3) Realize a prototype of the system design using the developed tracking hardware.
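
    A minimal bootstrap particle filter for a one-dimensional tracking problem is sketched below; it shows the predict, weight and resample steps whose inherent parallelism motivates the proposed hardware architecture, but it is a toy serial illustration with assumed noise parameters, not the design developed in the dissertation.

```python
# Minimal bootstrap particle filter for a 1-D tracking problem; the motion and
# measurement noise values and particle count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(measurements, n=1000, motion_std=1.0, meas_std=2.0):
    particles = rng.normal(0.0, 10.0, n)                         # initial hypotheses
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, motion_std, n)              # predict
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)     # weight by likelihood
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                         # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return estimates

print(particle_filter([0.5, 1.1, 2.0, 2.8, 4.1]))
```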

    Multi-robot Collaborative Visual Navigation with Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs), particularly multi-rotor MAVs, have gained significant popularity in the autonomous robotics research field. The small size and agility of these aircraft make them safe to use in contained environments. As such, MAVs have numerous applications in both the commercial and research fields, such as Search and Rescue (SaR), surveillance, inspection and aerial mapping. In order for an autonomous MAV to safely and reliably navigate within a given environment, the control system must be able to determine the state of the aircraft at any given moment. The state consists of a number of extrinsic variables such as the position, velocity and attitude of the MAV. The most common approach for outdoor operations is the Global Positioning System (GPS). While GPS has been widely used for long-range navigation in open environments, its performance degrades significantly in constrained environments and is unusable indoors. As a result, state estimation for MAVs in such constrained environments is a popular and exciting research area. Many successful solutions have been developed using laser range-finder sensors. These sensors provide very accurate measurements at the cost of increased power and weight requirements. Cameras offer an attractive alternative state estimation sensor; they offer high information content per image coupled with light weight and low power consumption. As a result, much recent work has focused on state estimation on MAVs where a camera is the only exteroceptive sensor. Much of this recent work focuses on single MAVs; however, it is the author's belief that the full potential and benefits of the MAV platform can only be realised when teams of MAVs are able to cooperatively perform tasks such as SaR or mapping. Therefore the work presented in this thesis focuses on the problem of vision-based navigation for MAVs from a multi-robot perspective. Multi-robot visual navigation presents a number of challenges, as not only must the MAVs be able to estimate their state from visual observations of the environment, but they must also be able to share the information they gain about their environment with other members of the team in a meaningful fashion. The meaningful sharing of observations is achieved when the MAVs have a common frame of reference for both positioning and observations. Such meaningful information sharing is key to achieving cooperative multi-robot navigation. In this thesis two main ideas are explored to address these issues. Firstly, the idea of appearance-based (re)localisation is explored as a means of establishing a common reference frame for multiple MAVs. This approach allows a team of MAVs to very easily establish a common frame of reference prior to starting their mission. The common reference frame allows all subsequent operations, such as surveillance or mapping, to proceed with direct cooperation between all MAVs. The second idea focuses on the structure and nature of the inter-robot communication with respect to visual navigation; the thesis explores how a partially distributed architecture can be used to vastly improve the scalability and robustness of a multi-MAV visual navigation framework. A navigation framework would not be complete without a means of control. In the multi-robot setting the control problem is complicated by the need for inter-robot collision avoidance.
This thesis presents a MAV trajectory controller based on a combination of classical control theory and distributed Velocity Obstacle (VO) based collision avoidance. Once a means of control is established, an autonomous multi-MAV team requires a mission. One such mission is the task of exploration; that is, exploration of a previously unknown environment in order to produce a map and/or search for objects of interest. This thesis also addresses the problem of multi-robot exploration using only the sparse interest-point data collected from the visual navigation system. In a multi-MAV exploration scenario, the problem of task allocation, assigning areas to each MAV to explore, can be a challenging one. An auction-based protocol is considered to address the task allocation problem. The two applications discussed, VO-based trajectory control and auction-based environment exploration, form two case studies which serve as the partial basis of the evaluation of the navigation solutions presented in this thesis. In summary, the visual navigation systems presented in this thesis allow MAVs to cooperatively perform tasks such as collision avoidance and environment exploration in a robust and efficient manner, with large teams of MAVs. The work presented is a step in the direction of fully autonomous teams of MAVs performing complex, dangerous and useful tasks in the real world.
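
    The auction idea can be illustrated with a toy sequential single-item auction in which each MAV bids its straight-line travel cost to a region and the cheapest bid wins; the positions, region centres and cost model below are assumptions, not the protocol implemented in the thesis.

```python
# Minimal sequential single-item auction for assigning exploration regions to
# MAVs: each MAV bids its straight-line travel cost and the cheapest bid wins.
# A toy illustration of the idea, not the protocol implemented in the thesis.
import math

def auction_allocate(mav_positions, region_centres):
    assignment = {}
    for region, centre in region_centres.items():
        bids = {mav: math.dist(pos, centre) for mav, pos in mav_positions.items()}
        winner = min(bids, key=bids.get)       # lowest travel cost wins the region
        assignment[region] = winner
        mav_positions[winner] = centre         # winner will travel there next
    return assignment

print(auction_allocate({"mav1": (0, 0), "mav2": (10, 0)},
                       {"A": (2, 1), "B": (9, 3), "C": (5, 5)}))
```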