
    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to achieve higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; and 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
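
    As a rough illustration of the visual-servoing idea described above, the sketch below steers a quadcopter so that a detected subject stays centred and fills a target fraction of the frame. It is a minimal proportional controller with hypothetical bounding-box input, gains, and command names; it is not the dissertation's RL-learned policy.

        def servo_command(box, frame_w, frame_h,
                          target_area_frac=0.15, k_yaw=1.5, k_z=1.0, k_fwd=0.8):
            """One proportional visual-servoing step toward a detected subject.

            box : (x, y, w, h) bounding box in pixels, or None if no detection.
            Returns forward, vertical and yaw-rate velocity commands (illustrative units).
            """
            if box is None:
                return {"vx": 0.0, "vz": 0.0, "yaw_rate": 0.2}   # slow search spin

            x, y, w, h = box
            cx_err = (x + w / 2) / frame_w - 0.5            # horizontal offset in [-0.5, 0.5]
            cy_err = (y + h / 2) / frame_h - 0.5            # vertical offset
            area_err = target_area_frac - (w * h) / (frame_w * frame_h)

            return {
                "yaw_rate": -k_yaw * cx_err,    # rotate to centre the subject horizontally
                "vz": -k_z * cy_err,            # climb/descend to centre it vertically
                "vx": k_fwd * area_err,         # move in/out until the subject fills the frame
            }

        # Example: subject detected left of centre and too small in a 640x480 frame.
        print(servo_command((100, 180, 80, 120), 640, 480))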

    Coastal Eye: Monitoring Coastal Environments Using Lightweight Drones

    Monitoring coastal environments is a challenging task, both because of the logistical demands involved in in-situ data collection and because of the dynamic nature of the coastal zone, where multiple processes operate over varying spatial and temporal scales. Remote sensing products derived from spaceborne and airborne platforms have proven highly useful in the monitoring of coastal ecosystems, but they often fail to capture fine-scale processes, and there remains a lack of cost-effective and flexible methods for coastal monitoring at these scales. Proximal sensing technology such as lightweight drones and kites has greatly improved the ability to capture fine spatial resolution data at user-dictated visit times. These approaches are democratising, allowing researchers and managers to collect data themselves, at locations and times of their choosing. In this thesis I develop our scientific understanding of the application of proximal sensing within coastal environments. Two critical review pieces consolidate disparate information on the application of kites as a proximal sensing platform and on the often overlooked hurdles of conducting drone operations in challenging environments. The empirical work presented then tests the use of this technology in three different coastal environments spanning the land-sea interface. Firstly, I used kite aerial photography and uncertainty-assessed structure-from-motion multi-view stereo (SfM-MVS) processing to track changes in coastal dunes over time, and found that sub-decimetre changes (both erosion and accretion) can be detected with this methodology. Secondly, I used lightweight drones to capture fine spatial resolution optical data of intertidal seagrass meadows, and found that estimations of plant cover were more similar to in-situ measures in sparsely populated meadows than in densely populated ones. Lastly, I developed a novel technique utilising lightweight drones and SfM-MVS to measure benthic structural complexity in tropical coral reefs, and found that structural complexity measures were obtainable from SfM-MVS derived point clouds, but that the technique was influenced by glint-type artefacts in the image data. Collectively, this work advances the knowledge of proximal sensing in the coastal zone, identifying both the strengths and weaknesses of its application across several ecosystems. Funder: Natural Environment Research Council (NERC)
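
    To make the dune-change workflow above concrete, the sketch below shows the generic DEM-of-difference calculation with a propagated-uncertainty level of detection that is commonly paired with uncertainty-assessed SfM-MVS surveys; the variable names, the 95% confidence factor and the toy numbers are illustrative assumptions, not values from the thesis.

        import numpy as np

        def dem_change(dem_t1, dem_t2, sigma_t1, sigma_t2, z=1.96):
            """DEM of difference masked by a per-cell level of detection (LoD).

            dem_t1, dem_t2     : elevation grids (m) from two surveys
            sigma_t1, sigma_t2 : vertical uncertainty of each survey (m)
            """
            dod = dem_t2 - dem_t1                           # + accretion, - erosion
            lod = z * np.sqrt(sigma_t1**2 + sigma_t2**2)    # 95% level of detection
            significant = np.where(np.abs(dod) >= lod, dod, np.nan)
            return dod, significant

        # Toy grids with 0.03 m vertical uncertainty per survey (LoD ~ 0.083 m).
        before = np.zeros((3, 3))
        after = np.array([[ 0.12, 0.01, -0.15],
                          [ 0.02, 0.00,  0.04],
                          [-0.20, 0.09,  0.01]])
        _, detected = dem_change(before, after, 0.03, 0.03)
        print(detected)   # only changes exceeding the LoD remain; the rest are NaN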

    Future technological factors affecting unmanned aircraft systems (UAS): a South African perspective towards 2025

    Because pilots are not physically situated in the aircraft, the current standards applicable to manned aircraft are not suitable for UAS operations (FAA, 2013). The FAA (2013:18) states that "removing the pilot from the aircraft creates a series of performance considerations between manned and unmanned aircraft that need to be fully researched and understood to determine acceptability and potential impact on safe operations in the NAS". According to ERSG (2013), not all technologies necessary to ensure the safe integration of civil UASs into civilian airspace are available today. The extrapolation that can be made from these arguments is that the advancement of UAS technologies will likely have a significant bearing on the safe integration of UASs into civilian airspace. Therefore, addressing this identified research gap, the main objective of this research is to identify future technological factors affecting Unmanned Aircraft Systems in the Republic of South Africa towards the year 2025.

    Intelligently Assisting Human-Guided Quadcopter Photography

    Drones are a versatile platform for both amateur and professional photographers, enabling them to capture photos that are impossible to shoot with ground-based cameras. However, when guided by inexperienced pilots, they have a high incidence of collisions, crashes, and poorly framed photographs. This paper presents an intelligent user interface for photographing objects that is robust against navigation errors and reliably collects high-quality photographs. By retaining the human in the loop, our system is faster and more selective than purely autonomous UAVs that employ simple coverage algorithms. The intelligent user interface operates in multiple modes, allowing the user to either directly control the quadcopter or fly in a semi-autonomous mode around a target object in the environment. To evaluate the interface, users completed a dataset collection task in which they were asked to photograph objects from multiple views. Our sketch-based control paradigm facilitated task completion, reduced crashes, and was favorably reviewed by the participants.
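
    As a purely hypothetical illustration of the semi-autonomous mode described above, the sketch below generates evenly spaced, inward-facing viewpoints around a user-designated target; the paper's actual sketch-based interface and flight controller are not reproduced here.

        import math

        def orbit_waypoints(target_xy, radius_m, altitude_m, n_views=8):
            """Evenly spaced camera positions circling a target, each facing inward."""
            tx, ty = target_xy
            waypoints = []
            for i in range(n_views):
                theta = 2 * math.pi * i / n_views
                x = tx + radius_m * math.cos(theta)
                y = ty + radius_m * math.sin(theta)
                yaw = math.atan2(ty - y, tx - x)   # point the camera at the target
                waypoints.append((x, y, altitude_m, yaw))
            return waypoints

        # Eight views at 3 m radius and 2 m altitude around a target at (5, 5).
        for wp in orbit_waypoints((5.0, 5.0), 3.0, 2.0):
            print(tuple(round(v, 2) for v in wp))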

    Unmanned Aerial Vehicle (UAV)-Enabled Wireless Communications and Networking

    The emerging massive density of human-held and machine-type nodes implies larger traffic deviations in the future than we are facing today. The future network will be characterized by a high degree of flexibility, allowing it to adapt smoothly, autonomously, and efficiently to quickly changing traffic demands in both time and space. This flexibility cannot be achieved while the network’s infrastructure remains static. To this end, the topic of UAV (unmanned aerial vehicle)-enabled wireless communications and networking has received increased attention. As mentioned above, the network must serve a massive density of nodes that can be either human-held (user devices) or machine-type nodes (sensors). If we wish to serve these nodes properly and optimize their data traffic, a proper wireless connection is fundamental. This can be achieved by using UAV-enabled communications and networks. This Special Issue addresses the many issues that must still be resolved before UAV-enabled wireless communications and networking can be properly rolled out.

    Technology in conservation: towards a system for in-field drone detection of invasive vegetation

    Remote sensing can assist in monitoring the spread of invasive vegetation. The adoption of camera-carrying unmanned aerial vehicles, commonly referred to as drones, as remote sensing tools has yielded images of higher spatial resolution than traditional techniques. Drones also have the potential to interact with the environment through the delivery of bio-control or herbicide, as seen with their adoption in precision agriculture. Unlike in agricultural applications, however, invasive plants do not have a predictable position relative to each other within the environment. To facilitate the adoption of drones as an environmental monitoring and management tool, drones need to be able to intelligently distinguish between invasive and non-invasive vegetation on the fly. In this thesis, we present the augmentation of a commercially available drone with a deep machine learning model to investigate the viability of differentiating between an invasive shrub and other vegetation. As a case study, this was applied to the shrub genus Hakea, which originates in Australia and is invasive in several countries including South Africa. For this research, however, the methodology is more important than the chosen target plant. A dataset was collected using the available drone and manually annotated to facilitate the supervised training of the model. Two approaches were explored, namely classification and semantic segmentation. For each of these, several models were trained and evaluated to find the optimal one. The chosen model was then interfaced with the drone via an Android application on a mobile device, and its performance was preliminarily evaluated in the field. Based on these findings, refinements were made, and thereafter a thorough field evaluation was performed to determine the best conditions for model operation. Results from the classification task show that deep learning models are capable of distinguishing between target and other shrubs in ideal candidate windows. However, classification in this manner is restricted by the proposal of such candidate windows. End-to-end image segmentation using deep learning overcomes this problem, classifying the image in a pixel-wise manner. Furthermore, the use of appropriate loss functions was found to improve model performance. Field tests show that illumination and shadow pose challenges to the model, but that good recall can be achieved when conditions are ideal. False-positive detection remains an issue that could be improved. This approach shows the potential of drones as an environmental monitoring and management tool when coupled with deep machine learning techniques, and outlines potential problems that may be encountered.
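
    The abstract above notes that appropriate loss functions improved segmentation performance but does not name them here. As one illustrative possibility, the sketch below implements a soft Dice loss, a common choice for class-imbalanced binary segmentation such as sparse invasive shrubs against background vegetation; PyTorch and all names below are assumptions for the example, not the thesis's implementation.

        import torch

        def soft_dice_loss(logits, targets, eps=1e-6):
            """Soft Dice loss for binary segmentation (target shrub vs. background).

            logits  : raw model outputs, shape (N, 1, H, W)
            targets : binary ground-truth masks of the same shape
            """
            probs = torch.sigmoid(logits)
            dims = (1, 2, 3)
            intersection = (probs * targets).sum(dims)
            union = probs.sum(dims) + targets.sum(dims)
            dice = (2 * intersection + eps) / (union + eps)
            return 1.0 - dice.mean()

        # Sanity check: a confident, correct prediction drives the loss toward zero.
        mask = torch.zeros(1, 1, 8, 8)
        mask[..., 2:6, 2:6] = 1.0
        confident_logits = (mask * 2 - 1) * 20          # large +/- logits
        print(float(soft_dice_loss(confident_logits, mask)))   # ~0.0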

    Canonical explorations of 'TEL' environments for computer programming

    This paper applies a novel technique of canonical gradient analysis, pioneered in the ecological sciences, with the aim of exploring student performance and behaviours (such as communication and collaboration) while undertaking formative and summative tasks in technology enhanced learning (TEL) environments for computer programming. The research emphasis is, therefore, on revealing complex patterns, trends, tacit communications and technology interactions associated with a particular type of learning environment, rather than on the testing of discrete hypotheses. The study is based on observations of first-year programming modules in BSc Computing and closely related joint-honours courses with software engineering, web and game development. This research extends earlier work and evaluates the suitability of canonical approaches for exploring the complex dimensional gradients represented by multivariate and technology-enhanced learning environments. The advancements represented here are: (1) an extended context, beyond the use of the ‘Ceebot’ learning platform, to include learning achievement following advanced instruction using an industry-standard integrated development environment (IDE) for engineering software; and (2) a longitudinal comparison of the consistency of findings across cohort years. Direct findings (from analyses based on code tests, module assessment and questionnaire surveys) reveal overall engagement with, and high acceptance of, collaborative working and the TEL environments used, but an inconsistent relationship between deeply learned programming skills and module performance. The paper also discusses the research findings in the context of established and emerging teaching practices for computer programming, as well as government policies and commercial requirements for improved capacity in computer-science-related industries.
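
    The canonical gradient analysis referred to above belongs to the ecological ordination family (for example, canonical correspondence analysis). As a loose, simplified illustration of the canonical idea of relating two multivariate tables, the sketch below applies scikit-learn's canonical correlation analysis to synthetic behaviour and performance data; this is a related but simpler linear method and entirely hypothetical data, not the paper's procedure or results.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)

        # Synthetic tables: per-student behaviour metrics (collaboration,
        # communication, IDE interactions) and performance metrics
        # (code-test score, module mark). Purely illustrative data.
        behaviour = rng.normal(size=(40, 3))
        performance = np.column_stack([
            0.8 * behaviour[:, 0] + 0.2 * rng.normal(size=40),   # tied to collaboration
            rng.normal(size=40),                                  # unrelated noise
        ])

        cca = CCA(n_components=2)
        cca.fit(behaviour, performance)
        beh_scores, perf_scores = cca.transform(behaviour, performance)

        # Correlation along each canonical axis; the first should dominate.
        for i in range(2):
            r = np.corrcoef(beh_scores[:, i], perf_scores[:, i])[0, 1]
            print(f"canonical axis {i + 1}: r = {r:.2f}")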

    Strategic Latency Unleashed: The Role of Technology in a Revisionist Global Order and the Implications for Special Operations Forces

    The article of record may be found at https://cgsr.llnl.gov. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory, in part under Contract W-7405-Eng-48 and in part under Contract DE-AC52-07NA27344. The views and opinions of the author expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC. ISBN 978-1-952565-07-6. LCCN 2021901137. LLNL-BOOK-818513. TID-59693.