14 research outputs found

    Optimising mobile laser scanning for underground mines

    Despite several technological advancements, underground mines still rely largely on visual inspections or discretely placed direct-contact measurement sensors for routine monitoring. Such approaches are manual and often yield inconclusive, unreliable and unscalable results, while also exposing mine personnel to field hazards. Mobile laser scanning (MLS) promises an automated approach that can generate comprehensive information by accurately capturing large-scale 3D data. Currently, the application of MLS in mining has remained relatively limited due to challenges in the post-registration of scans and the unavailability of suitable processing algorithms to provide a fully automated mapping solution. Additionally, constraints such as the absence of a spatial positioning network and the scarcity of distinguishable features in underground mining spaces pose challenges for mobile mapping. This thesis aims to address these challenges in mine inspections by optimising different aspects of MLS: (1) collection of large-scale registered point cloud scans of underground environments, (2) geological mapping of structural discontinuities, and (3) inspection of structural support features. Firstly, a spatial positioning network was designed using novel three-dimensional unique identifier (3DUID) tags and a 3D registration workflow (3DReG) to accurately obtain georeferenced and co-registered point cloud scans, enabling multi-temporal mapping. Secondly, two fully automated methods were developed for mapping structural discontinuities from point cloud scans: clustering on local point descriptors (CLPD) and amplitude and phase decomposition (APD). These methods were tested on both surface and underground rock masses for discontinuity characterisation and kinematic analysis of failure types. The developed algorithms significantly outperformed existing approaches, including the conventional method of compass and tape measurements.
Finally, different machine learning approaches were used to automate the recognition of structural support features, i.e., roof bolts, from point clouds in a computationally efficient manner. Mapping roof bolts from a scanned point cloud provided insight into their installation pattern, underpinning the applicability of laser scanning to rapid roof-support inspection. Overall, the outcomes of this study reduce human involvement in field assessments of underground mines using MLS, demonstrating its potential for routine multi-temporal monitoring.
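Neither CLPD nor APD is publicly specified in this abstract, so the following is only a generic sketch of the underlying idea of discontinuity-set mapping: grouping point cloud surface normals by orientation. The toy spherical k-means, the synthetic joint-set normals, and the `cluster_normals` helper are all invented for illustration, not the thesis's algorithms.

```python
import numpy as np

def cluster_normals(normals, k=3, iters=50):
    """Tiny spherical k-means grouping unit normals into k discontinuity sets.

    Uses |dot| similarity because a plane normal's sign is ambiguous."""
    # farthest-point initialisation keeps the k seeds well separated
    centers = [normals[0]]
    for _ in range(k - 1):
        sims = np.abs(normals @ np.array(centers).T).max(axis=1)
        centers.append(normals[sims.argmin()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.abs(normals @ centers.T).argmax(axis=1)
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                # flip members onto a common hemisphere before averaging
                members = members * np.sign(members @ members[0])[:, None]
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels, centers

# three synthetic joint sets: noisy normals around mutually orthogonal poles
rng = np.random.default_rng(1)
poles = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
normals = np.vstack([p + 0.05 * rng.standard_normal((100, 3)) for p in poles])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

labels, centers = cluster_normals(normals)
print(np.bincount(labels))   # three sets of roughly 100 normals each
```

In a real pipeline the normals would come from local plane fits on the scanned rock face, and each recovered centre would be converted to a dip/dip-direction pair for kinematic analysis.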

    Approximate Image Mappings Between Nearly Boresight Aligned Optical and Range Sensors.


    Digital surface maps from satellite images without ground control points

    Generation of Digital Surface Models (DSMs) from stereo satellite (spaceborne) images is classically performed with Ground Control Points (GCPs), which require site visits and precise measurement equipment. However, collecting GCPs is not always possible, and this requirement limits the usage of spaceborne imagery. This study aims at developing a fast, fully automatic, GCP-free workflow for DSM generation. The problems caused by the GCP-free workflow are overcome using freely available, low-resolution static DSMs (LR-DSMs). The LR-DSM is registered to the reference satellite image, and the registered LR-DSM is used for (i) correspondence generation and (ii) initial estimate generation for 3D reconstruction. Novel methods are developed for bias removal in LR-DSM registration and bias equalization for the projection functions of satellite imaging. The LR-DSM registration is also shown to be useful for computing the parameters of simple, piecewise empirical projective models. Recent computer vision approaches for stereo correspondence generation and dense depth estimation are tested and adopted for spaceborne DSM generation. The study also presents a complete, fully automatic scheme for GCP-free DSM generation and demonstrates that GCP-free DSM generation is possible and can be performed considerably faster. The resulting DSM can be used in various remote sensing applications, including building extraction, disaster monitoring and change detection.
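The thesis's bias-removal method is not reproduced here; as a minimal sketch of the general registration idea — estimating a translational bias between a reference surface and a coarsely georeferenced LR-DSM — the snippet below uses plain phase correlation on synthetic terrain. The data and the `phase_correlation_shift` helper are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift of `mov` relative to `ref`."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12             # keep phase information only
    corr = np.fft.ifft2(cross).real
    shift = np.array(np.unravel_index(corr.argmax(), corr.shape), float)
    # fold shifts beyond half the image size back to negative offsets
    shape = np.array(ref.shape)
    shift[shift > shape / 2] -= shape[shift > shape / 2]
    return shift

# synthetic smooth terrain; a circularly shifted copy stands in for the
# translational bias of a coarsely georeferenced LR-DSM
rng = np.random.default_rng(0)
dsm = np.cumsum(np.cumsum(rng.standard_normal((128, 128)), axis=0), axis=1)
biased = np.roll(dsm, (5, -3), axis=(0, 1))
print(phase_correlation_shift(dsm, biased))    # ≈ (5, -3)
```

A real LR-DSM would additionally need resampling to the reference grid and subpixel peak interpolation; the circular shift here makes the correlation peak exact.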

    Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data

    The use of GNSS to aid navigation has become widespread in aircraft. The long-term accuracy of an INS is enhanced by frequent updates from the highly precise position estimates GNSS provides. Unfortunately, operational environments exist where a constant signal or the requisite number of satellites is unavailable, significantly degraded, or intentionally denied. This thesis describes a novel algorithm that uses scanning LiDAR range data, computer vision features, and a reference database to generate aircraft position estimates to update drifting INS estimates. The algorithm uses a single calibrated scanning LiDAR to sample the range and angle to the ground as an aircraft flies, forming a point cloud. The point cloud is orthorectified into a coordinate system common to a previously recorded reference of the flyover region, and is then interpolated into a Digital Elevation Model (DEM) of the ground. Range-based SIFT features are extracted from both the airborne and reference DEMs. Features common to both the collected and reference range images are selected using a SIFT descriptor search. Geometrically inconsistent features are filtered out using RANSAC outlier removal, and surviving features are projected back to their source coordinates in the original point cloud. The point cloud features are used to calculate a least-squares correspondence transform that aligns the collected features to the reference features. The correspondence that best aligns the ground features is then applied to the nominal aircraft position, creating a new position estimate. The algorithm was tested on legacy flight data and typically produces position estimates within 10 meters of truth under threshold conditions.
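The least-squares correspondence transform in the final step is, in essence, the classic Kabsch/Procrustes alignment of matched 3D point sets. The sketch below shows that step on synthetic, noise-free correspondences; the data, the simulated drift values, and the `rigid_align` helper are illustrative, not the thesis's code.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with R @ src + t ≈ dst (Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# synthetic matched features: reference DEM points and a rotated+shifted copy
rng = np.random.default_rng(0)
ref = rng.standard_normal((50, 3)) * 100.0     # reference feature coords (m)
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([120.0, -40.0, 8.0])         # stand-in for INS drift
collected = ref @ R_true.T + t_true

R, t = rigid_align(collected, ref)             # map collected -> reference
aligned = collected @ R.T + t
print(np.abs(aligned - ref).max())             # ~0 for noise-free data
```

With real matches the residual after RANSAC would be nonzero, and the recovered translation is what corrects the nominal aircraft position.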

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to improvements in sensor performance and the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, camera calibration for intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest growing stem volume estimation, road management, image denoising, and touchscreens.

    Integrated estimation of UAV image orientation with a generalised building model

    The estimation of the position and attitude of a camera, addressed as image orientation in photogrammetry, is an important task for obtaining information on where a platform is located in the world or relative to objects. Unmanned aerial vehicles (UAVs), as an increasingly popular platform, have led to new applications, some of which involve low flight altitudes and specific requirements such as low weight and low cost of sensors. Image orientation needs additional information to retrieve not only relative measurements but also position and attitude in a world coordinate system. Given these sensor requirements, and especially for flights between obstacles in urban environments, the classically used information from Global Navigation Satellite Systems (GNSS) and Inertial Measurement Units (IMUs), or from specially marked ground control points (GCPs), is often inaccurate or unavailable. The idea addressed within this thesis is to improve UAV image orientation based on an existing generalised building model. Such models are increasingly available and provide ground control that helps compensate for inaccurate or unavailable camera positions measured by GNSS and for drift effects of image orientation. Typically, for UAV applications in street corridors, the geometric accuracy and level of detail of such models are low compared to the high accuracy and high geometric resolution of the image measurements. Therefore, although the building model differs from the observed scene due to its generalisation, relations of the photogrammetric measurements to the building model are formulated and used in the determination of image orientation. Three approaches to assign tie points to model planes in object space are presented, and a sliding-window as well as a global hybrid bundle adjustment are set up for image orientation aided by a generalised building model.
The assignments lead to fictitious observations of the distance of tie points to model planes and are iteratively refined by bundle adjustment. Experiments with an image sequence captured while flying between buildings show an improvement of image orientation from the metre range, with purely GNSS-based measurements, to the decimetre range when using the generalised building model with the simplest assignment method, based on point-to-plane distances. No improvement is observed from searching for planes in the tie point cloud to indirectly establish the relations of tie points to model planes. The results are compared to a building model of higher detail, and systematic effects are investigated. In summary, the developed method is found to significantly improve UAV image orientation using a generalised building model.
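The fictitious point-to-plane observations are linear in a constant position offset, which is why they can absorb a GNSS bias in the adjustment. Below is a minimal sketch of that observation model on an invented scene (two synthetic facade planes, a simulated GNSS bias, and a `point_plane_residuals` helper — none of it the thesis's actual adjustment), showing that only the directions spanned by the plane normals are observable.

```python
import numpy as np

def point_plane_residuals(points, normals, ds):
    """Fictitious observation per tie point: signed distance to plane n·x = d."""
    return np.einsum('ij,ij->i', points, normals) - ds

# synthetic scene: tie points lying on two vertical facades (model planes)
rng = np.random.default_rng(0)
planes = [(np.array([1.0, 0.0, 0.0]), 2.0),    # facade x = 2
          (np.array([0.0, 1.0, 0.0]), -5.0)]   # facade y = -5
pts, nrm, dd = [], [], []
for n, d in planes:
    q = rng.standard_normal((30, 3))
    q -= (q @ n - d)[:, None] * n              # project points onto the plane
    pts.append(q); nrm.append(np.tile(n, (30, 1))); dd.append(np.full(30, d))
points, normals, ds = np.vstack(pts), np.vstack(nrm), np.concatenate(dd)

# a constant GNSS bias shifts every tie point; residuals are linear in the
# bias with Jacobian J = normals, so one least-squares step recovers it
bias = np.array([0.8, -0.3, 0.0])
r = point_plane_residuals(points - bias, normals, ds)
est, *_ = np.linalg.lstsq(normals, -r, rcond=None)
print(est)   # ≈ [0.8, -0.3, 0.0]; z is unobservable from vertical facades
```

The unobservable vertical component illustrates why the full method still needs tie points on differently oriented surfaces, or other observations, inside the bundle adjustment.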

    Morphology-based landslide monitoring with an unmanned aerial vehicle

    Landslides represent major natural phenomena with often disastrous consequences. Monitoring landslides with time-series surface observations can help mitigate such hazards. Unmanned aerial vehicles (UAVs) employing compact digital cameras, in conjunction with Structure-from-Motion (SfM) and modern Multi-View Stereo (MVS) image matching approaches, have become commonplace in the geoscience research community. These methods offer a relatively low-cost and flexible solution for many geomorphological applications. The SfM-MVS pipeline has expedited the generation of digital elevation models at high spatio-temporal resolution. Conventionally, ground control points (GCPs) are required for co-registration. This task is often expensive and impractical in hazardous terrain. This research has developed a strategy for processing UAV visible-wavelength imagery that can provide multi-temporal surface morphological information for landslide monitoring, in an attempt to overcome the reliance on GCPs. This morphology-based strategy applies the attribute of curvature, in combination with the scale-invariant feature transform algorithm, to generate pseudo GCPs. Openness is applied to extract relatively stable regions from which pseudo GCPs are selected. Image cross-correlation functions integrated with openness and slope are employed to track landslide motion, with subsequent elevation differences and planimetric surface displacements produced. Accuracy assessment evaluates unresolved biases with the aid of benchmark datasets. This approach was tested at two sites in the UK: first at Sandford, with artificial surface change, and then at an active landslide at Hollin Hill. At Sandford, the strategy detected ±0.120 m 3D surface change from three-epoch SfM-MVS products derived from a consumer-grade UAV. For the Hollin Hill landslide, six-epoch datasets spanning eighteen months were used, providing a ±0.221 m minimum change.
Decimetre-level annual displacement rates were estimated, with optimal results over winter periods. Levels of accuracy and spatial resolution comparable to previous studies demonstrate the potential of the morphology-based strategy for time-efficient and cost-effective monitoring of inaccessible areas.
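The curvature/SIFT pseudo-GCP pipeline itself is not reproduced here; the sketch below only illustrates the co-register-then-difference idea at its core: estimate the registration bias over stable terrain (the role openness plays in the thesis), then difference the epochs. The DEMs, the mask placement, and the bias/change magnitudes are all invented for illustration.

```python
import numpy as np

# two DEM epochs: the second has real surface change plus a registration bias
rng = np.random.default_rng(0)
dem_t0 = np.cumsum(rng.standard_normal((64, 64)), axis=0)   # epoch-1 terrain
dem_t1 = dem_t0.copy()
dem_t1[20:40, 20:40] -= 1.5     # simulated landslide lowering (m)
dem_t1 += 0.30                  # simulated vertical co-registration bias (m)

# mask of relatively stable terrain, excluding the moving region plus margin
stable = np.ones(dem_t0.shape, dtype=bool)
stable[10:50, 10:50] = False

# estimate the bias over stable ground, then difference the epochs
bias = np.median((dem_t1 - dem_t0)[stable])
change = (dem_t1 - bias) - dem_t0
print(round(float(bias), 3), round(float(change[20:40, 20:40].mean()), 3))
```

Without the stable-area correction, the 0.30 m bias would masquerade as surface change everywhere; the median makes the estimate robust to a few unstable cells leaking into the mask.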

    Eye-to-Eye Calibration: Extrinsic Calibration of Multi-Camera Systems Using Hand-Eye Calibration Methods

    The problem addressed in this thesis is the extrinsic calibration of embedded multi-camera systems without overlapping views, i.e., determining the positions and orientations of rigidly coupled cameras with respect to a common coordinate frame from captured images. Such camera systems are of increasing interest for computer vision applications due to their large combined field of view, providing practical use for visual navigation and 3D scene reconstruction. However, in order to propagate observations from one camera to another, the parameters of the coordinate transformation between both cameras have to be determined accurately. Classical methods for extrinsic camera calibration, which rely on spatial correspondences between images, cannot be applied here. The central topic of this work is an analysis of methods based on hand-eye calibration that exploit constraints of rigidly coupled motions to solve this problem from visual camera ego-motion estimation only, without the need for additional sensors for pose tracking such as inertial measurement units or vehicle odometry. The resulting extrinsic calibration methods are referred to as "eye-to-eye calibration". We provide solutions based on pose measurements (geometric eye-to-eye calibration), decoupling the actual pose estimation from the extrinsic calibration, and solutions based on image measurements (visual eye-to-eye calibration), integrating both steps within a general Structure from Motion framework. Specific solutions are also proposed for critical motion configurations such as planar motion, which often occurs in vehicle-based applications.
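The geometric eye-to-eye methods are not reproduced here; as a minimal sketch of the rotational core shared with hand-eye calibration, note that rigid coupling gives the constraint R_A R_X = R_X R_B for paired relative motions, so the rotation axes of A and B are related by the unknown inter-camera rotation R_X. The snippet recovers R_X from synthetic, noise-free motions via an SVD fit of the axes; all motion values are invented for illustration.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues formula)."""
    a = np.asarray(axis, float); a /= np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rot_axis(R):
    """Unit rotation axis of R (rotation angle assumed in (0, pi))."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

# unknown inter-camera rotation to be recovered
R_x = rot([1, 2, 3], 0.7)

# rigidly coupled motions: R_A R_X = R_X R_B  =>  axis(R_A) = R_X axis(R_B)
R_b = [rot([1, 0, 0], 0.5), rot([0, 1, 0], 1.1), rot([0, 0, 1], 0.9)]
R_a = [R_x @ Rb @ R_x.T for Rb in R_b]

A = np.array([rot_axis(R) for R in R_a])   # axes seen from camera 1
B = np.array([rot_axis(R) for R in R_b])   # axes seen from camera 2

# best rotation mapping B-axes onto A-axes (orthogonal Procrustes)
U, _, Vt = np.linalg.svd(A.T @ B)
R_est = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
print(np.abs(R_est - R_x).max())           # ~0 for noise-free motions
```

At least two motions with non-parallel rotation axes are needed, which is exactly why pure planar motion is a critical configuration requiring the thesis's special-case treatment.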

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef, forestry, and volcanology applications; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.