
    Unmanned Aerial Vehicles (UAVs) in environmental biology: A Review

    Acquiring information about the environment is a key step in any study in the field of environmental biology, at every level from an individual species to community and biome. However, obtaining information about the environment is frequently difficult because of, for example, phenological timing, the spatial distribution of a species or the limited accessibility of a particular area for field survey. Moreover, remote sensing technology, which enables observation of the Earth’s surface and is currently very common in environmental research, has many limitations, such as insufficient spatial, spectral and temporal resolution and a high cost of data acquisition. Since the 1990s, researchers have been exploring the potential of different types of unmanned aerial vehicles (UAVs) for monitoring the Earth’s surface. The present study reviews recent scientific literature dealing with the use of UAVs in environmental biology. Amongst numerous papers, short communications and conference abstracts, we selected 110 original studies of how UAVs can be used in environmental biology and which organisms can be studied in this manner. Most of these studies concerned the use of UAVs to measure vegetation parameters such as crown height, volume and number of individuals (14 studies) and to quantify the spatio-temporal dynamics of vegetation changes (12 studies). UAVs were also frequently applied to count birds and mammals, especially those living in the water. Generally, the analytical part of the present study was divided into the following sections: (1) detecting, assessing and predicting threats to vegetation, (2) measuring the biophysical parameters of vegetation, (3) quantifying the dynamics of changes in plants and habitats and (4) population and behaviour studies of animals. Finally, we synthesised this information to show, amongst other things, the advances in environmental biology made possible by UAV application. Considering that 33% of the studies found and included in this review were published in 2017 and 2018, the number and variety of applications of UAVs in environmental biology are expected to increase in the future.

    Reliable and safe autonomy for ground vehicles in unstructured environments

    This thesis is concerned with the algorithms and systems that are required to enable safe autonomous operation of an unmanned ground vehicle (UGV) in an unstructured and unknown environment; one in which there is no specific infrastructure to assist the vehicle autonomy and complete a priori information is not available. Under these conditions it is necessary for an autonomous system to perceive the surrounding environment, in order to perform safe and reliable control actions with respect to the context of the vehicle, its task and the world. Specifically, exteroceptive sensors measure physical properties of the world. This information is interpreted to extract a higher level perception, then mapped to provide a consistent spatial context. This map of perceived information forms an integral part of the autonomous UGV (AUGV) control system architecture, therefore any perception or mapping errors reduce the reliability and safety of the system. Currently, commercially viable autonomous systems achieve the requisite level of reliability and safety by using strong structure within their operational environment. This permits the use of powerful assumptions about the world, which greatly simplify the perception requirements. For example, in an urban context, things that look approximately like roads are roads. In an indoor environment, vertical structure must be avoided and everything else is traversable. By contrast, when this structure is not available, little can be assumed and the burden on perception is very large. In these cases, reliability and safety must currently be provided by a tightly integrated human supervisor. The major contribution of this thesis is to provide a holistic approach to identify and mitigate the primary sources of error in typical AUGV sensor feedback systems (comprising perception and mapping), to promote reliability and safety. This includes an analysis of the geometric and temporal errors that occur in the coordinate transformations that are required for mapping and methods to minimise these errors in real systems. Interpretive errors are also studied and methods to mitigate them are presented. These methods combine information theoretic measures with multiple sensor modalities, to improve perceptive classification and provide sensor redundancy. The work in this thesis is implemented and tested on a real AUGV system, but the methods do not rely on any particular aspects of this vehicle. They are all generally and widely applicable. This thesis provides a firm base at a low level, from which continued research in autonomous reliability and safety at ever higher levels can be performed.
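
    As a rough, hypothetical illustration of why the geometric and temporal aspects of these coordinate transformations matter (this is not the thesis's implementation), the sketch below maps a single sensor observation into the world frame using a static sensor-to-vehicle transform and a vehicle pose interpolated to the measurement timestamp; sampling the pose at the wrong time would shift the mapped point.

        import numpy as np

        def pose_matrix(x, y, yaw):
            # 2D homogeneous transform for a planar vehicle pose (simplified model).
            c, s = np.cos(yaw), np.sin(yaw)
            return np.array([[c, -s, x],
                             [s,  c, y],
                             [0.0, 0.0, 1.0]])

        def interpolate_pose(t, t0, p0, t1, p1):
            # Linear interpolation of (x, y, yaw) between two timestamped pose samples.
            a = (t - t0) / (t1 - t0)
            return tuple((1 - a) * v0 + a * v1 for v0, v1 in zip(p0, p1))

        # Two vehicle poses (x, y, yaw) 0.1 s apart and an observation taken between them.
        pose_a, pose_b = (0.0, 0.0, 0.00), (0.5, 0.0, 0.05)
        t_a, t_b, t_obs = 0.0, 0.1, 0.04

        sensor_to_vehicle = pose_matrix(1.2, 0.0, 0.0)   # sensor mounted 1.2 m ahead of the vehicle origin
        point_in_sensor = np.array([4.0, -1.0, 1.0])     # observed point in homogeneous sensor coordinates

        vehicle_to_world = pose_matrix(*interpolate_pose(t_obs, t_a, pose_a, t_b, pose_b))
        point_in_world = vehicle_to_world @ sensor_to_vehicle @ point_in_sensor
        print(point_in_world[:2])                        # world-frame position of the observation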

    Airborne laser sensors and integrated systems

    The underlying principles and technologies enabling the design and operation of airborne laser sensors are introduced and a detailed review of state-of-the-art avionic systems for civil and military applications is presented. Airborne lasers including Light Detection and Ranging (LIDAR), Laser Range Finders (LRF), and Laser Weapon Systems (LWS) are extensively used today and new promising technologies are being explored. Most laser systems are active devices that operate in a manner very similar to microwave radars but at much higher frequencies (e.g., LIDAR and LRF). Other devices (e.g., laser target designators and beam-riders) are used to precisely direct Laser Guided Weapons (LGW) against ground targets. The integration of both functions is often encountered in modern military avionics navigation-attack systems. The beneficial effects of airborne lasers, including the use of smaller components and remarkable angular resolution, have resulted in a host of manned and unmanned aircraft applications. On the other hand, laser sensor performance is much more sensitive to the vagaries of the atmosphere, and such sensors are thus generally restricted to shorter ranges than microwave systems. Hence, it is of paramount importance to analyse the performance of laser sensors and systems in various weather and environmental conditions. Additionally, it is important to define airborne laser safety criteria, since several systems currently in service operate in the near infrared with considerable risk for the naked human eye. Therefore, appropriate methods for predicting and evaluating the performance of infrared laser sensors/systems are presented, taking into account laser safety issues. For aircraft experimental activities with laser systems, it is essential to define test requirements taking into account the specific conditions for operational employment of the systems in the intended scenarios and to verify the performance in realistic environments at the test ranges. To support the development of such requirements, useful guidelines are provided for the test and evaluation of airborne laser systems, including laboratory, ground and flight test activities.
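
    To make the ranging principle and the atmospheric sensitivity concrete, the short sketch below computes range from pulse time of flight and uses a simple two-way Beer-Lambert term to show how extinction weakens the return and thus limits usable range; the extinction coefficients are arbitrary illustrative values, not figures from the review.

        import math

        C = 299_792_458.0  # speed of light in vacuum, m/s

        def range_from_tof(round_trip_time_s):
            # Pulsed LIDAR/LRF ranging: the pulse travels to the target and back.
            return C * round_trip_time_s / 2.0

        def two_way_transmission(range_m, extinction_per_km):
            # Simplified Beer-Lambert atmospheric transmission (out and back),
            # ignoring beam geometry and target reflectivity.
            return math.exp(-2.0 * extinction_per_km * range_m / 1000.0)

        r = range_from_tof(6.67e-6)             # roughly 1000 m for a 6.67 microsecond round trip
        print(r)
        print(two_way_transmission(r, 0.1))     # clear atmosphere: strong return
        print(two_way_transmission(r, 2.0))     # haze/fog: return attenuated by roughly 98%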

    Sensing Mountains

    Sensing mountains by close-range and remote techniques is a challenging task. The 4th edition of the international Innsbruck Summer School of Alpine Research 2022 – Close-range Sensing Techniques in Alpine Terrain brings together early career and experienced scientists from technical-, geo- and environmental-related research fields. The interdisciplinary setting of the summer school creates a creative space for exchanging and learning new concepts and solutions for mapping, monitoring and quantifying mountain environments under ongoing conditions of change

    Quantitative Analysis of Non-Linear Probabilistic State Estimation Filters for Deployment on Dynamic Unmanned Systems

    The work conducted in this thesis is part of an EU Horizon 2020 research initiative known as DigiArt. This part of the DigiArt project presents and explores the design, formulation and implementation of probabilistically orientated state estimation algorithms, with a focus on unmanned system positioning and three-dimensional (3D) mapping. State estimation algorithms are an influential aspect of any dynamic system with autonomous capabilities. The ability to predictively estimate future conditions enables effective decision making and the anticipation of possible changes in the environment. Initial experimental procedures utilised a wireless ultra-wide band (UWB) communication network. This system functioned through statically situated beacon nodes used to localise a dynamically operating node. The simultaneous deployment of this UWB network, an unmanned system and a Robotic Total Station (RTS) with active and remote tracking features enabled the characterisation of the range measurement errors associated with the UWB network. These range error metrics were then integrated into a Range-based Extended Kalman Filter (R-EKF) state estimation algorithm with active outlier identification, which outperformed the native approach used by the UWB system for two-dimensional (2D) pose estimation. The study was then expanded to focus on state estimation in 3D, where a Six Degree-of-Freedom EKF (6DOF-EKF) was designed using Light Detection and Ranging (LiDAR) as its primary observation source. A two-step method was proposed which extracted information between consecutive LiDAR scans. Firstly, motion estimation concerning Cartesian states x, y and the unmanned system’s heading (ψ) was achieved through a 2D feature matching process. Secondly, the extraction and alignment of ground planes from the LiDAR scans enabled motion extraction for Cartesian position z and the attitude angles roll (θ) and pitch (φ). Results showed that ground plane alignment failed when two scans were at a 10.5° offset. Therefore, to overcome this limitation, an Error-State Kalman Filter (ES-KF) was formulated and deployed as a sub-system within the 6DOF-EKF. This enabled the successful tracking of roll and pitch and the calculation of z. The 6DOF-EKF was seen to outperform the R-EKF and the native UWB approach, as it was much more stable, produced less noise in its position estimates and provided full 3D pose estimation.
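
    The sketch below is a minimal, generic EKF predict/update cycle for range-only beacon measurements, in the spirit of the R-EKF described above; the two-dimensional constant-position state, the noise values and the chi-square-style outlier gate are illustrative assumptions rather than the thesis's actual formulation.

        import numpy as np

        def predict(x, P, Q):
            # Constant-position motion model (identity transition); a real system
            # would use the platform's kinematics here.
            F = np.eye(2)
            return F @ x, F @ P @ F.T + Q

        def update_range(x, P, z, beacon, R):
            # Non-linear range measurement h(x) = ||x - beacon||, linearised via its Jacobian.
            diff = x - beacon
            r_pred = np.linalg.norm(diff)
            H = (diff / r_pred).reshape(1, 2)
            y = z - r_pred                              # innovation
            S = H @ P @ H.T + R                         # innovation covariance
            if y**2 / S[0, 0] > 9.0:                    # crude outlier gate (assumed threshold)
                return x, P
            K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
            x_new = x + (K @ np.array([y])).ravel()
            P_new = (np.eye(2) - K @ H) @ P
            return x_new, P_new

        # Toy usage: one static beacon at the origin and a single noisy range observation.
        x, P = np.array([1.0, 1.0]), 0.5 * np.eye(2)
        Q, R = 0.01 * np.eye(2), np.array([[0.05]])
        x, P = predict(x, P, Q)
        x, P = update_range(x, P, z=1.35, beacon=np.array([0.0, 0.0]), R=R)
        print(x)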

    On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

    The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfall events that can cause nearly unpredictable flash floods. Consequently, spatio-temporally high-resolution monitoring of rivers is becoming increasingly important. Water gauging stations continuously and precisely measure water levels. However, they are rather expensive to purchase and maintain and are preferentially installed at water bodies relevant for water management. Small-scale catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and facilitate the observation of water levels at virtually any time, even at the smallest rivers. A possible solution is the development of a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section to determine the water level. Today’s smartphones integrate high-resolution cameras, a variety of sensors, powerful processors, and mass storage. However, they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. In order to investigate their potential for mobile measurement applications, research was conducted on the smartphone as a photogrammetric measurement instrument as part of this doctoral project. The studies deal with the geometric stability of smartphone cameras with respect to device-internal temperature changes and with the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the camera should be calibrated at the time of measurement. The results of the sensor investigations show considerable inaccuracies when measuring rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by global navigation satellite system (GNSS) receivers built into smartphones. According to the literature, positional accuracies of about 5 m are possible under the best conditions; otherwise, errors of several tens of metres are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged. In consideration of these results, the water gauging app Open Water Levels (OWL) was developed, whose methodological development and implementation constituted the core of the thesis project. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, which is used to calculate a spatio-temporal texture that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the situation is created from existing 3D data of the river section to be investigated.
Necessary approximations of the image orientation parameters are measured by the smartphone sensors and GNSS. Matching the camera image to the synthetic image allows the interior and exterior orientation parameters to be determined by space resection and, finally, the image-measured 2D water line to be transferred into 3D object space to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL shows an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image content and that the water line can be reliably detected. In the present dissertation, the related geometric and radiometric problems are comprehensively discussed. Furthermore, possible solutions, based on ongoing developments in smartphone technology and image processing as well as the increasing availability of 3D reference data, are presented in the synthesis of the work. The app Open Water Levels, which is currently available as a beta version and has been tested on selected devices, provides a basis which, with continuous further development, aims to reach a final release for crowdsourcing water levels towards the establishment of new and the expansion of existing monitoring networks.
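
    As a hypothetical illustration of the space resection step (not the OWL code itself), the sketch below uses OpenCV's solvePnP to recover the exterior orientation of a camera from 2D-3D correspondences; the interior orientation, the 3D bank points and the simulated "true" pose are made-up values for demonstration only.

        import numpy as np
        import cv2

        # Hypothetical interior orientation (pixels); OWL estimates these parameters on the fly.
        K = np.array([[3000.0, 0.0, 960.0],
                      [0.0, 3000.0, 540.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)                       # assume lens distortion already corrected

        # 3D points on the river bank in the reference system of the 3D data (metres).
        object_pts = np.array([[0.0, 0.0, 0.0],
                               [5.0, 0.0, 0.0],
                               [5.0, 2.0, 0.5],
                               [0.0, 2.0, 0.5],
                               [2.5, 1.0, 0.2],
                               [1.0, 0.5, 0.1]])

        # Simulate image measurements by projecting the points with a known "true" camera pose.
        rvec_true = np.array([0.1, -0.2, 0.05]).reshape(3, 1)
        tvec_true = np.array([-2.0, -1.0, 8.0]).reshape(3, 1)
        image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

        # Space resection: recover the exterior orientation from the 2D-3D correspondences.
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        camera_position = -R.T @ tvec            # projection centre in object space
        print(ok, camera_position.ravel())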

    Error budget for geolocation of spectroradiometer point observations from an unmanned aircraft system

    We investigate footprint geolocation uncertainties of a spectroradiometer mounted on an unmanned aircraft system (UAS). Two microelectromechanical systems-based inertial measurement units (IMUs) and global navigation satellite system (GNSS) receivers were used to determine the footprint location and extent of the spectroradiometer. Errors originating from the on-board GNSS/IMU sensors were propagated through an aerial data georeferencing model, taking into account a range of values for the spectroradiometer field of view (FOV), integration time, UAS flight speed, above ground level (AGL) flying height, and IMU grade. The spectroradiometer under nominal operating conditions (8° FOV, 10 m AGL height, 0.6 s integration time, and 3 m/s flying speed) resulted in a footprint extent of 140 cm across-track and 320 cm along-track, and a geolocation uncertainty of 11 cm. Flying height and orientation measurement accuracy had the largest influence on the geolocation uncertainty, whereas the FOV, integration time, and flying speed had the biggest impact on the size of the footprint. Furthermore, with an increase in flying height, the rate of increase in geolocation uncertainty was found to be highest for a low-grade IMU. To increase the footprint geolocation accuracy, we recommend reducing the flying height while increasing the FOV, which compensates for the loss of footprint area and increases the signal strength. The disadvantage of a lower flying height and a larger FOV is a higher sensitivity of the footprint size to changing distance from the target. To assist in matching the footprint size-to-uncertainty ratio with an appropriate spatial scale, we list the expected ratio for a range of IMU grades, FOVs and AGL heights.
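
    A back-of-the-envelope check of the nominal footprint figures quoted above, assuming a simple flat-terrain, nadir-pointing geometry in which the across-track footprint follows from the FOV and flying height, and the along-track extent adds the motion smear accumulated during the integration time:

        import math

        def footprint(fov_deg, agl_m, speed_ms, integration_s):
            # Across-track footprint of a circular FOV over flat terrain, plus the
            # along-track extent including smear from platform motion during integration.
            across = 2.0 * agl_m * math.tan(math.radians(fov_deg) / 2.0)
            along = across + speed_ms * integration_s
            return across, along

        # Nominal conditions from the abstract: 8 deg FOV, 10 m AGL, 3 m/s, 0.6 s integration.
        across, along = footprint(8.0, 10.0, 3.0, 0.6)
        print(f"across-track ~ {across * 100:.0f} cm, along-track ~ {along * 100:.0f} cm")
        # Prints roughly 140 cm and 320 cm, consistent with the values reported above.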

    Getting it right: integrating the intelligence, surveillance and reconnaissance enterprise

    This paper examines the nature of the intelligence, surveillance and reconnaissance challenge confronting Australia and how that challenge is currently being met. Introduction: Understanding the environment in which a conflict is being or will be conducted has always been a central element of military thinking. In today’s world, this understanding is embraced by three elements: Intelligence, Surveillance and Reconnaissance (ISR). Whilst ISR has traditionally focussed on military operations, the last century has seen an increasing emergence of ISR as a construct and capability that might support a broader ‘national interest’. Indeed, today the national security community is engaged as both a user and contributor, and the need has recently emerged for an ISR capability that supports border protection in which a ‘national’ or ‘sovereign’ interest, as opposed to a ‘military’ paradigm, has come to the fore. Conceptually, an ISR capability allows for the observation and analysis of events and the production of useful, timely information to support a national interest. In reality, this simple ISR construct is challenged by several factors: the number of events; the ability to observe; processing the observed events and the increasing amount of data; the time taken to conduct an analysis; the time to determine a course of action; and the time taken to respond. The simple ISR construct is further challenged when the many networked and linked sensors used to observe events are taken into consideration. Increased sensor inputs provide greater situational awareness and better predictive intelligence necessary to achieve superior decision-making and, hence, more effective operations. However, modern-day ISR systems have also significantly reduced the available time in the decision cycle for making sense of what is occurring and for carrying out an action as a result. The challenge, therefore, is to balance the greater situational awareness and better predictive intelligence with ensuring that decisions are not delayed waiting for additional information. The purpose of this Kokoda ISR Project is to develop new ideas for a future Australian ISR Enterprise that complements the emerging national security framework and positions ISR as a sovereign capability. Concerns have been expressed that the opportunities, challenges and risks confronting the National ISR Community have increased and become more diverse in recent years. Consequently, the potential for extending the current Whole-of-Government approach to exploiting ISR and better accommodating Industry into the National ISR infrastructure needs to be explored. Innovation and integration of new ISR methods, systems, and concepts will be important for future success. For the immediate future, Australia’s military and law-enforcement organisations will need to embrace strategic, operational, organisational, technological, process, and cultural change in a tough fiscal climate, and demonstrate how they can achieve more with existing assets and organisations. They will face challenges as they seek to cooperate more closely, yet feel the need to retain some of their traditional boundaries (noting that many of the traditional boundaries are set in legislation). They will need to meet the public expectation of effectiveness, responsiveness and accountability, and a well-integrated and robust ISR function will be critical in this respect. 
This Kokoda Paper examines the nature of the ISR challenge confronting Australia and how that challenge is currently being met. It argues for an extension of current policy approaches, making the most of Australia’s organisations, capabilities, and international and national cooperation. It identifies other key areas for improved policy and argues the importance of adopting a whole-of-nation approach, improving public engagement, accelerating the data-to-decision cycle, and synchronising ISR capabilities; and it recommends specific proposals for pursuing these policy outcomes.