18 research outputs found

    A review of laser scanning for geological and geotechnical applications in underground mining

    Full text link
    Laser scanning can provide timely assessments of mine sites despite the adverse operational environment. Although several articles on laser scanning have been published, there is a need to review them in the context of underground mining applications. To this end, a holistic review of laser scanning is presented, covering progress in 3D scanning systems, data capture and processing techniques, and primary applications in underground mines. Laser scanning technology has advanced significantly in terms of mobility and mapping, but coherent and consistent data collection remains constrained at certain mines by feature deficiency, scene dynamics, and environmental influences such as dust and water. Studies suggest that laser scanning has matured over the years for change detection, clearance measurements and structure mapping applications. However, there is scope for improvement in lithology identification, surface parameter measurements, logistics tracking and autonomous navigation. Laser scanning has the potential to provide real-time solutions, but the lack of infrastructure in underground mines for data transfer, geodetic networking and processing capacity remains a limiting factor. Nevertheless, laser scanners are becoming an integral part of mine automation thanks to their affordability, accuracy and mobility, which should support their widespread usage in years to come.
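The change-detection application the review mentions can be illustrated with a minimal sketch: flag points of a repeat scan that have no close counterpart in a reference scan. All names and the 5 cm threshold here are illustrative, and the comparison is brute force; real pipelines work on registered scans with spatial indexing such as KD-trees or octrees.

```python
import numpy as np

def detect_changes(reference, repeat, threshold=0.05):
    """Flag points of a repeat scan whose nearest reference point is
    farther away than `threshold` (here metres) -- brute force O(n*m)."""
    dists = np.linalg.norm(repeat[:, None, :] - reference[None, :, :], axis=2)
    return dists.min(axis=1) > threshold

# three points on a tunnel wall; the repeat scan shows one displaced point
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
repeat = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, 0.0], [2.0, 0.3, 0.0]])
changed = detect_changes(reference, repeat)  # only the displaced point is flagged
```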

    Recording, compression and representation of dense light fields

    Get PDF
    The concept of light fields allows image-based capture of scenes, providing, on a recorded dataset, many of the features available in computer graphics, like simulation of different viewpoints or change of core camera parameters, including depth of field. Due to the increase in the recorded dimension from two for a regular image to four for a light field recording, previous works mainly concentrate on small or undersampled light field recordings. This thesis is concerned with the recording of a dense light field dataset, including the estimation of suitable sampling parameters, as well as the implementation of the required capture, storage and processing methods. Towards this goal, the influence of an optical system on the possibly band-unlimited light field signal is examined, deriving the required sampling rates from the bandlimiting effects of the camera and optics. To increase storage capacity and bandwidth, a very fast image compression method is introduced, providing an order of magnitude faster compression than previous methods and reducing the I/O bottleneck for light field processing. A fiducial marker system is provided for the calibration of the recorded dataset, which provides a higher number of reference points than previous methods, improving camera pose estimation. In conclusion, this work demonstrates the feasibility of dense sampling of a large light field, and provides a dataset which may be used for evaluation or as a reference for light field processing tasks like interpolation, rendering and sampling.
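One of the light-field features the abstract mentions, simulating depth of field after capture, can be sketched as a naive shift-and-add refocus over a 4D array. This is a toy illustration, not the thesis's pipeline: the `lf[u, v, y, x]` layout, integer-pixel shifts, and wrap-around borders are all simplifying assumptions.

```python
import numpy as np

def refocus(lf, shift):
    """Naive shift-and-add refocusing over a 4D light field lf[u, v, y, x]:
    every sub-aperture image is shifted in proportion to its aperture
    coordinate, then all views are averaged."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - U // 2)))
            dx = int(round(shift * (v - V // 2)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# a tiny synthetic 3x3-view light field with 8x8-pixel views
lf = np.random.default_rng(0).random((3, 3, 8, 8))
refocused = refocus(lf, shift=1)
```

With `shift=0` the result is simply the average over all views, i.e. the image focused at the plane the views were aligned to.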

    Digital Image Access & Retrieval

    Get PDF
    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Towards Optimization and Robustification of Data-Driven Models

    Get PDF
    In the past two decades, data-driven models have experienced a renaissance, with notable success achieved through the use of models such as deep neural networks (DNNs) in various applications. However, complete reliance on intelligent machine learning systems is still a distant dream. Nevertheless, the initial success of data-driven approaches presents a promising path for building trustworthy data-oriented models. This thesis aims to take a few steps toward improving the performance of existing data-driven frameworks in both the training and testing phases. Specifically, we focus on several key questions: 1) How to efficiently design optimization methods for learning algorithms that can be used in parallel settings and also when first-order information is unavailable? 2) How to revise existing adversarial attacks on DNNs to structured attacks with minimal distortion of benign samples? 3) How to integrate attention models such as Transformers into data-driven inertial navigation systems? 4) How to address the lack of data problem for existing data-driven models and enhance the performance of existing semi-supervised learning (SSL) methods? In terms of parallel optimization methods, our research focuses on investigating a delay-aware asynchronous variance-reduced coordinate descent approach. Additionally, we explore the development of a proximal zeroth-order algorithm for nonsmooth nonconvex problems when first-order information is unavailable. We also extend our study to zeroth-order stochastic gradient descent problems. As for robustness, we develop a structured white-box adversarial attack to enhance research on robust machine learning schemes. Furthermore, our research investigates a group threat model in which adversaries can only perturb image segments rather than the entire image to generate adversarial examples. 
We also explore the use of attention models, specifically Transformer models, for deep inertial navigation systems based on the Inertial Measurement Unit (IMU). In addressing the problem of data scarcity during the training process, we propose a solution that involves quantifying the uncertainty of the unlabeled data and corresponding pseudo-labels and incorporating it into the loss term to compensate for noisy pseudo-labeling. We also extend the generic semi-supervised method to data-driven noise suppression frameworks by utilizing a reinforcement learning (RL) model to learn contrastive features in an SSL fashion. Each chapter of the thesis presents the problem and our solutions using concrete algorithms. We verify our approach through comparisons with existing methods on different benchmarks and discuss future research directions.
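The zeroth-order setting raised in question 1 can be illustrated with the classic two-point gradient estimator plugged into plain gradient descent. This is a generic textbook sketch, not the thesis's proximal or variance-reduced algorithms; the smoothing parameter, step size, and test function are arbitrary choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate along a random Gaussian
    direction u:  g = (f(x + mu*u) - f(x)) / mu * u."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zo_sgd(f, x0, lr=0.05, steps=500, seed=0):
    """Gradient-free descent: plug the zeroth-order estimate into SGD."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * zo_gradient(f, x, rng=rng)
    return x

# minimise a smooth quadratic without ever evaluating its gradient
f = lambda x: float(np.sum((x - 1.0) ** 2))
x_min = zo_sgd(f, np.zeros(3))  # converges near the minimiser (1, 1, 1)
```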

    On the GNSS-Based Determination of Position and Velocity in Airborne Gravimetry

    Get PDF
    The Global Navigation Satellite System (GNSS) plays a significant role in the field of airborne gravimetry. The objective of this thesis is to develop reliable GNSS algorithms and software for highly precise kinematic GNSS data analysis in airborne gravimetry. Based on the requirements of practical airborne and shipborne gravimetry projects, the core research and contributions of this thesis are summarized as follows: Estimation Algorithms: Based on the accuracy requirements for precise GNSS positioning in airborne gravimetry, least-squares estimation including the elimination of nuisance parameters, as well as a two-way Kalman filter, are applied to kinematic GNSS data post-processing. The goal of these adjustment methods is to calculate non-epoch parameters (such as system error estimates or carrier phase ambiguity parameters) using all data in a first step, followed by the calculation of epoch parameters (such as position and velocity of the kinematic platform) at every epoch. These methods are highly efficient when dealing with massive amounts of data and give highly precise results for the analyzed GNSS data. Accuracy Evaluation and Reliability Analysis: The accuracy evaluation and reliability analysis of the results from precise kinematic GNSS positioning are studied.
A special accuracy evaluation method in GNSS kinematic positioning is proposed, where the known distances among multiple antennas of GNSS receivers are taken as an accuracy evaluation index. The effect of the GNSS receiver clock error on the accuracy of GNSS kinematic positioning results of a high-speed motion platform is studied and a solution is proposed. Kinematic Positioning Based on Multiple Reference Station Algorithms: In order to overcome the problem of decreasing accuracy in GNSS relative kinematic positioning for long baselines, a new relative kinematic positioning method based on a priori constraints for multiple reference stations is proposed. This algorithm increases the accuracy and reliability of kinematic positioning results for large regions or long baselines. GNSS Precise Positioning Based on Robust Estimation: In order to solve the problem of outliers in positioning results caused by the presence of gross errors in the GNSS observations, a robust estimation algorithm is applied to eliminate the effects of gross errors in the results of GNSS kinematic precise positioning. Kinematic Positioning Based on Multiple Kinematic Stations: In airborne gravimetry, multiple antennas of GNSS receivers are usually mounted on the kinematic platform. Firstly, a GNSS kinematic positioning method based on multiple kinematic stations is proposed: using the known constant distances among the multiple GNSS antennas, a priori distance constraints are introduced into the position estimation to improve the reliability of the system. Secondly, such an approach is also used for the estimation of a common atmospheric wet delay parameter among the multiple GNSS antennas mounted on the platform. This method not only reduces the number of estimated parameters, but also decreases the correlation among the atmospheric parameters. 
Kinematic Positioning Based on GNSS Integration: To improve the reliability and accuracy of kinematic positioning, a kinematic positioning method using the integration of multiple GNSS systems (i.e. GPS and GLONASS) is addressed. Furthermore, a GNSS integration algorithm based on Helmert's variance component estimation is proposed to adjust the relative weights in a reasonable way. This improves the results when combining data of the different GNSS systems. Velocity Determination Using GNSS Doppler Data: Airborne gravimetry requires instantaneous velocity results, thus raw Doppler observations are used to determine the kinematic instantaneous velocity in high-dynamic environments. Furthermore, carrier-phase-derived Doppler observations are used to obtain precise velocity estimates in low-dynamic environments. A method of Doppler velocity determination based on GNSS integration with Helmert's variance component estimation and robust estimation is also studied. Software Development and Application: In order to fulfill the actual requirements of airborne as well as shipborne gravimetry on GNSS precise positioning, a software system (HALO_GNSS) for precise kinematic GNSS trajectory and velocity determination for kinematic platforms has been developed. In this software, the algorithms proposed in this thesis were adopted and applied. In order to evaluate the effectiveness of the proposed algorithms and the HALO_GNSS software, it was applied in airborne as well as shipborne gravimetry projects of GFZ Potsdam. All results are compared and examined, and it is shown that the applied approaches can effectively improve the reliability and accuracy of the kinematic position and velocity determination. The software allows kinematic positioning with an accuracy of 1-2 cm and velocity determination with an accuracy of approximately 1 cm/s using raw, and approximately 1 mm/s using carrier-phase-derived, Doppler observations.
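In its simplest form, the Doppler-based velocity determination described above reduces to a small least-squares problem per epoch: each Doppler-derived range rate constrains the receiver velocity along the line of sight to one satellite. The sketch below is a generic single-epoch illustration with invented variable names and noise-free data; it omits the tropospheric, clock, and robust-weighting modelling of the actual HALO_GNSS processing.

```python
import numpy as np

def velocity_from_doppler(los, sat_vel, range_rates):
    """Single-epoch least-squares receiver velocity from Doppler-derived
    range rates, using the linear model
        range_rate_i = los_i . (v_sat_i - v_rx) + clk_rate
    los: (n, 3) unit line-of-sight vectors from receiver to satellites."""
    n = los.shape[0]
    y = range_rates - np.einsum("ij,ij->i", los, sat_vel)  # remove known satellite part
    A = np.hstack([-los, np.ones((n, 1))])  # unknowns: v_rx (3) and clock-rate term
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[:3], x[3]

# simulate six satellites and noise-free observations
rng = np.random.default_rng(1)
los = rng.standard_normal((6, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)
sat_vel = 3000.0 * rng.standard_normal((6, 3))
v_true, clk_true = np.array([100.0, 0.0, -2.0]), 0.5
range_rates = np.einsum("ij,ij->i", los, sat_vel - v_true) + clk_true
v_est, clk_est = velocity_from_doppler(los, sat_vel, range_rates)
```

With noise-free observations the estimate recovers the simulated velocity and clock-rate term exactly; with real data, the weighting of the residuals (e.g. by variance component estimation) becomes the interesting part.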

    Physics and astrophysics with gravitational waves from compact binary coalescence in ground based interferometers

    Get PDF
    Advanced ground-based laser interferometer gravitational wave detectors are due to come online in late 2015 and are expected to make the first direct detections of gravitational waves, with compact binary coalescence widely regarded as one of the most promising sources for detection. In Chapter I, I compare two techniques for predicting the uncertainty of sky localization of these sources with full Bayesian inference. I find that timing triangulation alone tends to over-estimate the uncertainty and that average predictions can be brought into better agreement by the inclusion of phase-consistency information in timing-triangulation techniques. Gravitational wave signals will provide a testing ground for the strong-field dynamics of GR. Bayesian data analysis pipelines are being developed to test GR in this new regime, as presented in Chapter III and Appendix B. In Chapter II and Appendix C, I compare the predicted form of the Bayes factor, presented by Cornish et al. and Vallisneri, with full Bayesian inference. I find that the approximate scheme predicts exact results with good accuracy above fitting factors of ~0.9. The expected rate of detection of compact binary coalescence signals has large associated uncertainties due to unknown merger rates. The tool presented in Chapter III provides a way to estimate the expected rate of specified CBC systems in a selected detector.
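The timing-triangulation idea can be illustrated for a single detector pair: the arrival-time delay between two sites constrains the angle between the source direction and the baseline, so a timing error maps directly to an angular error. This is a back-of-the-envelope sketch with an illustrative baseline and timing uncertainty, not the thesis's comparison against full Bayesian inference.

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def timing_sky_error(baseline_m, sigma_t_s, theta_rad):
    """One detector pair: the arrival-time delay is dt = (d/c)*cos(theta),
    so a timing uncertainty sigma_t propagates to an angular uncertainty
    sigma_theta = c * sigma_t / (d * sin(theta))."""
    return C * sigma_t_s / (baseline_m * np.sin(theta_rad))

# ~3000 km baseline, 0.1 ms timing uncertainty, source 60 degrees off-axis
sigma_theta = timing_sky_error(3.0e6, 1.0e-4, np.radians(60.0))  # radians
```

The estimate degrades as the source approaches the baseline axis (sin(theta) → 0), which is one reason a network of detectors, and the phase information noted above, improve localization.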

    Development of in-vitro µ-channel devices for continuous long-term monitoring of neuron circuit development

    Get PDF
    In this thesis, various methods are presented towards long-term electrophysiological monitoring of in-vitro neuron cultures in µ-channel devices. A new µ-channel device has been developed: the StarPoM device offers multiple culture chambers connected by µ-channels, allowing the study of communication between neuron populations. For its fabrication, an advanced multi-level SU-8 soft-lithography master was developed that can mold µ-channels and culture wells simultaneously. The problem of aligning features across a thick SU-8 layer has been solved by integrating a chrome mask into the substrate and then using backside exposure through the chrome mask. Long-term monitoring of neuron electrophysiological activity has been conducted continuously over 14 days in the StarPoM device. For the analysis of the recorded dataset, a new software tool-chain has been created with the goal of high processing performance. The two most advanced components, O1Plot and ISI viewer, offer high-performance visualization of time series data with event or interval annotation, and visualization of inter-spike interval histograms for fast discovery of correlations between spike units on a device. The analysis of the 14-day recording revealed that signals can be recorded from day 4/5 onwards. While maximum spike amplitudes kept rising during the 14 days and reached up to 3.16 mV, the average spike amplitudes reached their maximum of 0.1-0.3 mV within 6 to 8 days and then remained stable. To better understand the biophysics of signal generation in µ-channels, the influence of µ-channel length on signal amplitude was studied. A model based on passive cable theory was developed, showing that spike amplitude rises with channel length for µ-channels < 250 µm. In longer µ-channels, further growth of spike amplitude is inhibited by cancellation of the positive and negative spike phases. Also, clogging of the µ-channel entrances by cells and debris helps to enhance signal amplification.
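The inter-spike-interval histograms visualized by the ISI viewer can be sketched in a few lines: sort the spike times of one unit, take successive differences, and bin them. This is a generic illustration with invented names and a synthetic spike train, not the high-performance implementation described above.

```python
import numpy as np

def isi_histogram(spike_times_ms, bin_ms=1.0, max_ms=50.0):
    """Inter-spike-interval histogram of one spike unit: histogram the
    differences between successive (sorted) spike times."""
    isis = np.diff(np.sort(spike_times_ms))
    bins = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, edges = np.histogram(isis, bins=bins)
    return counts, edges

# a perfectly regular unit firing every 10 ms over 200 ms
spikes = np.arange(0.0, 200.0, 10.0)
counts, edges = isi_histogram(spikes)  # all 19 intervals land in the 10 ms bin
```

A sharp peak like this indicates regular firing; comparing such histograms across units is what makes correlations between spike units quick to spot.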

    Optimising mobile laser scanning for underground mines

    Full text link
    Despite several technological advancements, underground mines still rely largely on visual inspections or discretely placed direct-contact measurement sensors for routine monitoring. Such approaches are manual and often yield inconclusive, unreliable and unscalable results, besides exposing mine personnel to field hazards. Mobile laser scanning (MLS) promises an automated approach that can generate comprehensive information by accurately capturing large-scale 3D data. Currently, the application of MLS has remained relatively limited in mining due to challenges in the post-registration of scans and the unavailability of suitable processing algorithms to provide a fully automated mapping solution. Additionally, constraints such as the absence of a spatial positioning network and the deficiency of distinguishable features in underground mining spaces pose challenges in mobile mapping. This thesis aims to address these challenges in mine inspections by optimising different aspects of MLS: (1) collection of large-scale registered point cloud scans of underground environments, (2) geological mapping of structural discontinuities, and (3) inspection of structural support features. Firstly, a spatial positioning network was designed using novel three-dimensional unique identifier (3DUID) tags and a 3D registration workflow (3DReG) to accurately obtain georeferenced and co-registered point cloud scans, enabling multi-temporal mapping. Secondly, two fully automated methods were developed for mapping structural discontinuities from point cloud scans: clustering on local point descriptors (CLPD) and amplitude and phase decomposition (APD). These methods were tested on both surface and underground rock masses for discontinuity characterisation and kinematic analysis of failure types. The developed algorithms significantly outperformed existing approaches, including the conventional method of compass-and-tape measurements. 
Finally, different machine learning approaches were used to automate the recognition of structural support features, i.e. roof bolts, from point clouds in a computationally efficient manner. Mapping roof bolts from a scanned point cloud provided insight into their installation pattern, which underpinned the applicability of laser scanning to rapid roof support inspection. Overall, the outcomes of this study reduce human involvement in field assessments of underground mines using MLS, demonstrating its potential for routine multi-temporal monitoring.
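At its core, the discontinuity-mapping step can be caricatured as grouping point normals into orientation sets. The sketch below runs a tiny k-means with cosine similarity on unit normals; it is a deliberately simplified stand-in for the CLPD/APD methods named above, with invented function names and synthetic data.

```python
import numpy as np

def cluster_normals(normals, k=2, iters=50):
    """Tiny k-means on unit normal vectors with cosine similarity,
    seeded deterministically from the input."""
    idx = np.linspace(0, len(normals) - 1, k).astype(int)
    centers = normals[idx].copy()
    for _ in range(iters):
        labels = np.argmax(normals @ centers.T, axis=1)  # most-aligned center
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
                centers[j] /= np.linalg.norm(centers[j])  # re-project to unit sphere
    return labels, centers

# two synthetic joint sets: normals near +z and normals near +x
rng = np.random.default_rng(1)
a = np.tile([0.0, 0.0, 1.0], (20, 1)) + 0.01 * rng.standard_normal((20, 3))
b = np.tile([1.0, 0.0, 0.0], (20, 1)) + 0.01 * rng.standard_normal((20, 3))
normals = np.vstack([a, b])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
labels, centers = cluster_normals(normals)  # recovers the two sets
```

Each recovered center corresponds to the mean orientation of one discontinuity set, from which dip and dip direction can then be derived.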