397 research outputs found

    DEVELOPMENT OF AN UNMANNED AERIAL VEHICLE FOR LOW-COST REMOTE SENSING AND AERIAL PHOTOGRAPHY

    The paper describes major features of an unmanned aerial vehicle, designed under safety and performance requirements for missions of aerial photography and remote sensing in precision agriculture. Unmanned aerial vehicles have vast potential as observation and data-gathering platforms for a wide variety of applications. The goal of the project was to develop a small, low-cost, electrically powered unmanned aerial vehicle designed in conjunction with a payload of imaging equipment to obtain remote sensing images of agricultural fields. The results indicate that this concept was feasible for obtaining high-quality aerial images.

    Open source R for applying machine learning to RPAS remote sensing images

    The increase in the number of remote sensing platforms, ranging from satellites to close-range Remotely Piloted Aircraft Systems (RPAS), is leading to a growing demand for new image processing and classification tools. This article presents a comparison of the Random Forest (RF) and Support Vector Machine (SVM) machine-learning algorithms for extracting land-use classes from an RPAS-derived orthomosaic using open source R packages. The camera used in this work captures the reflectance of the Red, Blue, Green and Near Infrared channels of a target, so the full dataset is a 4-channel raster image. The classification performance of the two methods is tested at varying sizes of training sets. SVM and RF are evaluated using the Kappa index, classification accuracy and classification error as accuracy metrics. The training sets are randomly obtained as subsets of 2 to 20% of the total number of raster cells, with stratified sampling according to the land-use classes. Ten runs are made for each training set size to calculate the variance in results. The control dataset consists of an independent classification obtained by photointerpretation. The validation is carried out (i) using K-fold cross-validation, (ii) using the pixels from the validation test set, and (iii) using the pixels from the full test set. Validation with K-fold and with the validation dataset shows that SVM gives better results, but RF performs better when the training set is larger. Classification error and classification accuracy follow the trend of the Kappa index.
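    The Kappa index used in this evaluation corrects raw accuracy for agreement expected by chance. As a minimal illustration (a Python sketch, not the article's R code), Cohen's kappa, classification accuracy, and classification error can be computed from reference and predicted labels as follows:

```python
from collections import Counter

def classification_metrics(true_labels, pred_labels):
    """Compute overall accuracy, classification error, and Cohen's kappa."""
    n = len(true_labels)
    # Observed agreement: fraction of cells classified correctly.
    p_o = sum(t == p for t, p in zip(true_labels, pred_labels)) / n
    # Expected chance agreement, from the class marginals of both maps.
    true_counts = Counter(true_labels)
    pred_counts = Counter(pred_labels)
    p_e = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / n ** 2
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return {"accuracy": p_o, "error": 1 - p_o, "kappa": kappa}
```

    For example, a 4-cell map with one misclassified cell gives accuracy 0.75 but kappa 0.5, since half of the remaining agreement is what the class marginals alone would produce.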

    Development of a reconfigurable protective system for multi-rotor UAS

    The purpose of this study is to illustrate how the design and deployment of a minimal protective system for multi-rotorcraft can cater for changes in legislation and allow greater use both indoors and outdoors. A methodology is presented to evaluate the design and development of a system which protects both single-axial and co-axial rotorcraft. The key emphasis of the development presented is the scenario in which the multi-rotorcraft can fly at increased speed, including the capability of flying through windows and doors without fear of system failure due to rotor disruption. Also discussed are the degree of autonomy the reconfigurable system should feature and the effects of drag and added component mass on the performance of the system.

    Vision-based Learning for Drones: A Survey

    Drones as advanced cyber-physical systems are undergoing a transformative shift with the advent of vision-based learning, a field that is rapidly gaining prominence due to its profound impact on drone autonomy and functionality. Different from existing task-specific surveys, this review offers a comprehensive overview of vision-based learning in drones, emphasizing its pivotal role in enhancing their operational capabilities under various scenarios. We start by elucidating the fundamental principles of vision-based learning, highlighting how it significantly improves drones' visual perception and decision-making processes. We then categorize vision-based control methods into indirect, semi-direct, and end-to-end approaches from the perception-control perspective. We further explore various applications of vision-based drones with learning capabilities, ranging from single-agent systems to more complex multi-agent and heterogeneous system scenarios, and underscore the challenges and innovations characterizing each area. Finally, we explore open questions and potential solutions, paving the way for ongoing research and development in this dynamic and rapidly evolving field. With the growth of large language models (LLMs) and embodied intelligence, vision-based learning for drones provides a promising but challenging road towards artificial general intelligence (AGI) in the 3D physical world.

    COUNTER-UXS ENERGY AND OPERATIONAL ANALYSIS

    At present, there exists a prioritization of identifying novel and innovative approaches to managing the small Unmanned Aircraft Systems (sUAS) threat. The near-future sUAS threat to U.S. forces and infrastructure indicates that current Counter-UAS (C-UAS) capabilities and tactics, techniques, and procedures (TTPs) need to evolve to pace the threat. An alternative approach utilizes a networked squadron of unmanned aerial vehicles (UAVs) designed for sUAS threat interdiction. This approach leverages high performance and Size, Weight, and Power (SWaP) conformance to create less expensive, but more capable, C-UAS devices to augment existing capabilities. This capstone report documents efforts to develop C-UAS technologies to reduce energy consumption and collaterally disruptive signal footprint while maintaining operational effectiveness. This project utilized Model Based System Engineering (MBSE) techniques to explore and assess these technologies within a mission context. A Concept of Operations was developed to provide the C-UAS Operational Concept. Operational analysis led to development of operational scenarios to define the System of Systems (SoS) concept, operating conditions, and required system capabilities. Resource architecture was developed to define the functional behaviors and system performance characteristics for C-UAS technologies. Lastly, a modeling and simulation (M&S) tool was developed to evaluate mission scenarios for C-UAS. Outstanding Thesis. Civilian, Department of the Navy. Approved for public release. Distribution is unlimited.

    U.S. Unmanned Aerial Vehicles (UAVS) and Network Centric Warfare (NCW) impacts on combat aviation tactics from Gulf War I through 2007 Iraq

    Unmanned aerial vehicles (UAVs) are an increasingly important element of many modern militaries. Their success on battlefields in Afghanistan, Iraq, and around the globe has driven demand for a variety of types of unmanned vehicles. Their proven value consists in low risk and low cost, and their capabilities include persistent surveillance, tactical and combat reconnaissance, resilience, and dynamic re-tasking. This research evaluates past, current, and possible future operating environments for several UAV platforms to survey the changing dynamics of combat-aviation tactics and make recommendations regarding UAV employment scenarios to the Turkish military. While UAVs have already established their importance in military operations, ongoing evaluations of UAV operating environments, capabilities, technologies, concepts, and organizational issues inform the development of future systems. To what extent will UAV capabilities increasingly define tomorrow's missions, requirements, and results in surveillance and combat tactics? Integrating UAVs and concepts of operations (CONOPS) on future battlefields is an emergent science. Managing a transition from manned to unmanned and remotely piloted aviation platforms involves new technological complexity and new aviation personnel roles, especially for combat pilots. Managing a UAV military transformation involves cultural change, which can be measured in decades. http://archive.org/details/usunmannedaerial109454211 Turkish Air Force authors. Approved for public release; distribution is unlimited.

    Best practices for using drones in seabird monitoring and research

    Over the past decade, drones have become increasingly popular in environmental biology and have been used to study wildlife on all continents. Drones have become of global importance for surveying breeding seabirds, by providing opportunities to transform monitoring techniques and allow new research on some of the most threatened birds. However, such fast-changing and increasingly available technology presents challenges to regulators responding to requests to carry out surveys, and to researchers ensuring their work follows best practice and meets legal and ethical standards. Following a workshop convened at the 14th International Seabird Group Conference and a subsequent literature search, we collate information from over 100 studies and present a framework comprising eight steps to ensure drone-seabird surveys are safe, effective, and within the law: 1) Objectives and Feasibility; 2) Technology and Training; 3) Site Assessment and Permission; 4) Disturbance Mitigation; 5) Pre-deployment Checks; 6) Flying; 7) Data Handling and Analysis; and 8) Reporting. The audience is wide-ranging, with sections having relevance for different users, including prospective and experienced drone-seabird pilots, landowners, and licensors. Regulations vary between countries and are frequently changing, but common principles exist. Taking off, landing, and conducting in-flight changes in altitude and speed at ≥ 50 m from the study area, and flying at ≥ 50 m above ground-nesting seabirds or at ≥ 50 m horizontal distance from vertical colonies, should have limited disturbance impact on many seabird species, although surveys should stop if disturbance occurs. Compared to automated methods, manual or semi-automated image analyses are, at present, more suitable for infrequent drone surveys and surveys of relatively small colonies.
    When deciding if drone-seabird surveys are an appropriate long-term monitoring method, the cost, risks, and results obtained should be compared to traditional field monitoring where possible. Accurate and timely reporting of surveys is essential to developing adaptive guidelines for this increasingly common technology.

    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, also commonly known as a drone) in a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists in exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as the landing platform to alleviate the need for a UAV to be airborne for long periods of time, whilst the latter can increase overall environmental awareness by covering large portions of the surrounding environment with one or more onboard cameras. There are numerous potential applications for such a system, including search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre is accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as the landing platform. The raison d'être of the thesis was to propose a new solution for autonomous landing that relies solely on onboard sensors and requires minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. When tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution that exploits the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks.
    The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control, as in playing the Atari video-game suite, represented a fascinating yet challenging new way to see and address the landing problem. Therefore, novel architectures were designed for approximating the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches were implemented on a simulated test-bed based on the Gazebo simulator and the model of the Parrot AR-Drone. The solution based on DRL was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these methods are feasible and practicable, not only in an outdoor marine scenario but also in indoor ones.
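    The Q-learning principle underlying the DRL landing controller can be illustrated with a deliberately simplified tabular sketch (a hypothetical toy problem, not the thesis's network architecture): an agent on a one-dimensional line of positions must move over the landing pad before issuing a "land" action, and learns this purely from rewards.

```python
import random

random.seed(0)

N_POS, PAD = 5, 2                 # positions 0..4; the landing pad is at position 2
ACTIONS = ["left", "right", "land"]
Q = {(s, a): 0.0 for s in range(N_POS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.95, 0.2

def step(pos, action):
    """Return (next_pos, reward, done) for the toy landing MDP."""
    if action == "land":
        return pos, (1.0 if pos == PAD else -1.0), True
    nxt = max(0, min(N_POS - 1, pos + (1 if action == "right" else -1)))
    return nxt, -0.01, False      # small per-move cost encourages short paths

# Tabular Q-learning with epsilon-greedy exploration.
for _ in range(5000):
    pos, done = random.randrange(N_POS), False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(pos, act)])
        nxt, r, done = step(pos, a)
        target = r if done else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(pos, a)] += alpha * (target - Q[(pos, a)])
        pos = nxt

def greedy_landing(pos):
    """Follow the learned greedy policy until landing; return the final position."""
    for _ in range(20):
        a = max(ACTIONS, key=lambda act: Q[(pos, act)])
        pos, _, done = step(pos, a)
        if done:
            return pos
    return pos
```

    After training, the greedy policy steers the agent over the pad from any start position before landing. The thesis replaces the lookup table with a neural network that approximates the same action-value function directly from grey-scale images.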

    Next generation mine countermeasures for the very shallow water zone in support of amphibious operations

    This report describes system engineering efforts exploring next generation mine countermeasure (MCM) systems to satisfy high priority capability gaps in the Very Shallow Water (VSW) zone in support of amphibious operations. A thorough exploration of the problem space was conducted, including stakeholder analysis, MCM threat analysis, and current and future MCM capability research. Solution-neutral requirements and functions were developed for a bounded next generation system. Several alternative architecture solutions were developed, including a critical evaluation that compared performance and cost. The resulting MCM system effectively removes the man from the minefield through employment of autonomous capability, reduces operator burden with sensor data fusion and processing, and provides real-time communication for command and control (C2) support to reduce or eliminate post-mission analysis. http://archive.org/details/nextgenerationmi109456968

    Autonomous High-Precision Landing on an Unmanned Surface Vehicle

    The main goal of this thesis is the development of an autonomous high-precision landing system for a UAV on an autonomous boat. In this dissertation, a collaborative method for Multi Rotor Vertical Takeoff and Landing (MR-VTOL) Unmanned Aerial Vehicle (UAV) autonomous landing is presented. The majority of common UAV autonomous landing systems adopt an approach in which the UAV scans the landing zone for a predetermined pattern, establishes relative positions, and uses those positions to execute the landing. These techniques have some shortcomings, such as extensive processing being carried out by the UAV itself, which requires a lot of computational power. The fact that most of these techniques only work while the UAV is already flying at a low altitude, since the pattern's elements must be plainly visible to the UAV's camera, creates an additional issue. An RGB camera positioned in the landing zone and pointed up at the sky is the foundation of the methodology described throughout this dissertation. Because the sky is a very static and homogeneous background, Convolutional Neural Networks and Inverse Kinematics approaches can be used to isolate and analyse the distinctive motion patterns the UAV presents. Following real-time visual analysis, a terrestrial or maritime robotic system can transmit orders to the UAV. The ultimate result is a model-free technique, i.e. one that is not based on established patterns, that can help the UAV perform its landing manoeuvre. The method is trustworthy enough to be used independently or in conjunction with more established techniques to create a system that is more robust. The object detection neural network approach was able to detect the UAV in 91.57% of the assessed frames with a tracking error under 8%, according to experimental simulation findings derived from a dataset comprising three different videos.
    A high-level relative position control system was also created, based on the idea of an approach zone around the helipad: every potential three-dimensional point within the zone corresponds to a UAV velocity command with a certain orientation and magnitude. The control system worked flawlessly to conduct the UAV's landing within 6 cm of the target during testing in a simulated setting.
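    The approach-zone idea, each 3-D point in a zone around the helipad mapping to a velocity command, can be sketched as a simple proportional rule. This is an illustrative assumption, not the dissertation's actual controller; the function name and the `max_speed`/`slow_radius` parameters are invented for the example.

```python
import math

def velocity_command(uav_pos, helipad_pos, max_speed=2.0, slow_radius=5.0):
    """Map a 3-D UAV position to a velocity command pointing at the helipad.

    Speed scales linearly with distance inside `slow_radius`, so the UAV
    decelerates as it nears the pad, and saturates at `max_speed` outside it.
    """
    dx = [h - u for u, h in zip(uav_pos, helipad_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist < 1e-9:
        return (0.0, 0.0, 0.0)    # on the pad: hold position
    speed = max_speed * min(1.0, dist / slow_radius)
    return tuple(speed * d / dist for d in dx)
```

    Directly above the pad at 10 m, for example, the command points straight down at full speed, then shrinks proportionally once the UAV enters the 5 m slow-down radius.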