Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combination of statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, and multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. In detail, we investigate both the correlation
between the DEM height uncertainty resulting from the gridding procedure and
the uncertainty of the corresponding simulated erosion/deposition depths, and the effect of
the interpolation algorithms on the simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline with tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
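As a minimal illustration of the gridding step discussed above, the sketch below implements Inverse Distance to a Power (IDW) interpolation in plain numpy. The function name, exponent, and toy sample layout are our own; production DEM generation would use a dedicated GIS library.

```python
import numpy as np

def idw_interpolate(xy, z, grid_xy, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each grid node receives a weighted
    average of the sampled heights, with weights 1 / distance**power."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * z).sum(axis=1) / w.sum(axis=1)

# four sampled points on the corners of a unit square
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])
centre = idw_interpolate(xy, z, np.array([[0.5, 0.5]]))
print(centre)  # symmetric layout -> exactly the mean height, 1.0
```

Because the centre node is equidistant from all four samples, every weight is equal and IDW degenerates to the arithmetic mean, which makes the toy case easy to verify by hand.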
Opportunistic timing signals for pervasive mobile localization
The proliferation of handheld devices and the pressing need for location-based services call for
precise and accurate ubiquitous geographic mobile positioning that can serve a vast set of devices.
Despite the large investments and efforts in academic and industrial communities, a pin-point solution
is however still far from reality. Mobile devices mainly rely on Global Navigation Satellite
System (GNSS) to position themselves. GNSS systems are known to perform poorly in dense urban
areas and indoor environments, where the visibility of GNSS satellites is reduced drastically.
In order to ensure interoperability between the technologies used indoor and outdoor, a pervasive
positioning system should still rely on GNSS, yet complemented with technologies that can
guarantee reliable radio signals in indoor scenarios. The key fact that we exploit is that GNSS signals
are made of data with timing information. We then investigate solutions where opportunistic
timing signals can be extracted out of terrestrial technologies. These signals can then be used as
additional inputs of the multi-lateration problem. Thus, we design and investigate a hybrid system
that combines range measurements from the Global Positioning System (GPS), the world’s
most utilized GNSS system, and terrestrial technologies; the most suitable one to consider in our
investigation is WiFi, thanks to its large deployment in indoor areas. In this context, we first start
investigating standalone WiFi Time-of-flight (ToF)-based localization. Time-of-flight echo techniques
have been recently suggested for ranging mobile devices over WiFi radios. However, these
techniques have yielded only moderate accuracy in indoor environments because WiFi ToF measurements
suffer from extensive device-related noise which makes it challenging to differentiate
between direct path from non-direct path signal components when estimating the ranges. Existing
multipath mitigation techniques tend to fail at identifying the direct path when the device-related
Gaussian noise is in the same order of magnitude, or larger than the multipath noise. In order to
address this challenge, we propose a new method for filtering ranging measurements that is better
suited for the inherent large noise as found in WiFi radios. Our technique combines statistical
learning and robust statistics in a single filter. The filter is lightweight in the sense that it does not
require specialized hardware, the intervention of the user, or cumbersome on-site manual calibration.
This makes the method we propose as the first contribution of the present work particularly
suitable for indoor localization in large-scale deployments using existing legacy WiFi infrastructures.
We evaluate our technique for indoor mobile tracking scenarios in multipath environments,
and, through extensive evaluations across four different testbeds covering areas up to 1000 m², the filter achieves a median ranging error between 1.7 and 2.4 meters.
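The ranging principle underlying these measurements can be sketched as follows: a ToF range is the measured round-trip time, minus the device-related processing delay, scaled by the speed of light. The constant and the toy delay value are illustrative only; the thesis's filter combining statistical learning and robust statistics is not reproduced here.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_range(rtt_s, device_delay_s):
    """Convert a measured round-trip time into a one-way range, after
    subtracting the (device-specific) processing delay."""
    return C * (rtt_s - device_delay_s) / 2.0

# a 10 m range adds ~66.7 ns of round-trip propagation time;
# the 100 ns chipset delay here is a made-up stand-in
rtt = 2 * 10.0 / C + 100e-9
print(round(tof_range(rtt, 100e-9), 6))  # -> 10.0
```

The example makes the noise problem concrete: nanosecond-level timing errors translate into metre-level ranging errors, which is why robust filtering of the raw measurements matters so much.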
The next step we envisioned towards preparing theoretical and practical basis for the aforementioned
hybrid positioning system is a deep inspection and investigation of WiFi and GPS ToF
ranges, and initial foundations of single-technology self-localization. Self-localization systems
based on the Time-of-Flight of radio signals are highly susceptible to noise and their performance
therefore heavily rely on the design and parametrization of robust algorithms. We study the noise
sources of GPS and WiFi ToF ranging techniques and compare the performance of different self-positioning
algorithms at a mobile node using those ranges. Our results show that the localization
error varies greatly depending on the ranging technology, algorithm selection, and appropriate
tuning of the algorithms. We characterize the localization error using real-world measurements
and different parameter settings to provide guidance for the design of robust location estimators
in realistic settings.
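A standard way to turn such ranges into a position estimate is nonlinear least squares; the Gauss-Newton sketch below is a generic multi-lateration baseline, not one of the specific algorithms compared in the thesis, and the anchor layout is invented for illustration.

```python
import numpy as np

def trilaterate(anchors, ranges, x0, iters=20):
    """Least-squares self-positioning from ranges to known anchors,
    via Gauss-Newton on the model r_i = ||x - a_i||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (n, 2) offsets to anchors
        pred = np.linalg.norm(diff, axis=1)   # predicted ranges
        J = diff / pred[:, None]              # Jacobian d r_i / d x
        residual = ranges - pred
        x = x + np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
print(trilaterate(anchors, ranges, x0=[1.0, 1.0]).round(3))
```

With noisy real-world ranges, the residual vector no longer goes to zero, and the choice of loss function and initialization drives the error behaviour the study characterizes.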
These tools and foundations are necessary to tackle the problem of hybrid positioning system
providing high localization capabilities across indoor and outdoor environments. In this context,
the lack of a single positioning system that is able to fulfill the specific requirements of
diverse indoor and outdoor application settings has led to the development of a multitude of localization
technologies. Existing mobile devices such as smartphones therefore commonly rely on
a multi-RAT (Radio Access Technology) architecture to provide pervasive location information
in various environmental contexts as the user is moving. Yet, existing multi-RAT architectures
consider the different localization technologies as monolithic entities and choose the final navigation
position from the RAT that is foreseen to provide the highest accuracy in the particular
context. In contrast, we propose in this work to fuse timing range (Time-of-Flight) measurements
of diverse radio technologies in order to circumvent the limitations of the individual radio access
technologies and improve the overall localization accuracy in different contexts. We introduce
an Extended Kalman filter, modeling the unique noise sources of each ranging technology. As a
rich set of multiple ranges can be available across different RATs, the intelligent selection of the
subset of ranges with accurate timing information is critical to achieve the best positioning accuracy.
We introduce a novel geometrical-statistical approach to best fuse the set of timing ranging
measurements. We also address practical problems of the design space, such as the removal of WiFi
chipset and environmental calibration, to make the positioning system as autonomous as possible.
Experimental results show that our solution considerably outperforms the use of monolithic
technologies and methods based on classical fault detection and identification typically applied in
standalone GPS technology.
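The core of the proposed fusion, an EKF measurement update with a single timing range, can be sketched generically as below. The state model, noise values, and anchor layout are illustrative stand-ins, not the per-technology noise models described above.

```python
import numpy as np

def ekf_range_update(x, P, anchor, r, sigma_r):
    """One EKF measurement update with a range r ~ ||x - anchor||.
    Each ranging technology (GPS, WiFi, ...) supplies its own sigma_r."""
    diff = x - anchor
    pred = np.linalg.norm(diff)
    H = (diff / pred).reshape(1, -1)         # Jacobian of the range model
    S = float(H @ P @ H.T) + sigma_r ** 2    # innovation covariance
    K = P @ H.T / S                          # Kalman gain, shape (2, 1)
    x = x + (K * (r - pred)).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
x, P = np.array([5.0, 5.0]), np.eye(2) * 100.0   # vague initial belief
for _ in range(5):                                # repeated update passes
    for a in anchors:
        x, P = ekf_range_update(x, P, a, np.linalg.norm(true_pos - a), 1.0)
print(x.round(2))
```

In the actual system, each RAT would plug its own variance into `sigma_r`, and the geometrical-statistical range selection would decide which updates to apply at all.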
All the contributions and research questions described previously in localization and positioning
related topics suppose full knowledge of the anchors positions. In the last part of this work, we
study the problem of deriving proximity metrics without any prior knowledge of the positions of
the WiFi access points based on WiFi fingerprints, that is, tuples of WiFi Access Points (AP) and
respective received signal strength indicator (RSSI) values. Applications that benefit from proximity
metrics are movement estimation of a single node over time, WiFi fingerprint matching for localization systems and attacks on privacy. Using a large-scale, real-world WiFi fingerprint data
set consisting of 200,000 fingerprints resulting from a large deployment of wearable WiFi sensors,
we show that metrics from related work perform poorly on real-world data. We analyze the
cause for this poor performance, and show that imperfect observations of APs with commodity
WiFi clients in the neighborhood are the root cause. We then propose improved metrics to provide
such proximity estimates, without requiring knowledge of location for the observed AP. We
address the challenge of imperfect observations of APs in the design of these improved metrics.
Our metrics allow deriving a relative distance estimate from two observed WiFi fingerprints.
We demonstrate that their performance is superior to the metrics from related work. This work has been supported by IMDEA Networks Institute.
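A minimal proximity metric over two fingerprints, here plain Jaccard similarity on the observed AP sets, shows the kind of baseline that suffers from imperfect AP observations; the dictionary format and AP names are hypothetical.

```python
def fingerprint_similarity(fp_a, fp_b):
    """Set-overlap proximity metric between two WiFi fingerprints,
    given as {AP identifier: RSSI} dicts. A missed AP observation
    shrinks the intersection, which is the failure mode the improved
    metrics are designed to tolerate."""
    seen_a, seen_b = set(fp_a), set(fp_b)
    union = seen_a | seen_b
    return len(seen_a & seen_b) / len(union) if union else 0.0

fp1 = {"ap1": -40, "ap2": -60, "ap3": -75}
fp2 = {"ap2": -55, "ap3": -70, "ap4": -80}
print(fingerprint_similarity(fp1, fp2))  # 2 shared of 4 observed -> 0.5
```

Metrics of this family ignore the RSSI values entirely; incorporating them, while remaining robust to missed observations, is precisely the design space the improved metrics explore.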
Quantification of Structural Changes in Cortical Bone by Estimating Thickness, Sound Velocity, and Pore Size Distribution
The quantitative bone ultrasound (QUS) method has been introduced as a promising alternative for diagnosing osteoporosis and assessing fracture risk. The latest QUS technologies aim to quantitatively assess structural cortical bone characteristics, e.g., cortical porosity, cortical thickness (Ct.Th) and cortical speed of sound at cortical measurement regions. Large cortical pores and reduced Ct.Th in the tibia have been proposed as an indication of reduced hip strength and structural deterioration.
In this work two novel ultrasound methods were studied using a conventional ultrasound transducer to measure cortical bone properties at the tibia. The first method is a refraction and phase aberration corrected multifocus (MF) imaging approach that measures Ct.Th and the compressional sound velocity traveling in the radial bone direction (Ct.ν11). The second method is a novel cortical backscatter (CortBS) method that assesses microstructural properties in cortical bone. Both methods were validated in silico on bone models, ex vivo on bone samples and in vivo on 55 postmenopausal women at the anteromedial tibia midshaft. The aim of this work was to study the precision, accuracy, and fragility fracture discrimination performance of CortBS and MF parameters in comparison to clinical High-resolution peripheral quantitative computed tomography (HR-pQCT) and Dual-energy X-ray absorptiometry (DXA) measurements.
The results of the MF approach show precise and accurate estimation of Ct.Th and Ct.ν11. The comparison of the measured Ct.Th with reference thicknesses from HR-pQCT measurements has also shown accurate determination of Ct.Th (R2=0.94, RMSE=0.17 mm). Future simulation studies with real bone structures from HR-pQCT measurements should target the validation of accurate Ct.ν11 estimation. For the first time, CortBS assessed the distribution of cortical pore size and viscoelastic properties of cortical bone in vivo. The short-term in vivo precision was observed between 1.7% and 13.9%. Fragility fracture discrimination performance was retrieved using multivariate partial least squares regression. The combination of CortBS+MF showed superior fracture discrimination performance compared with DXA and similar fracture discrimination performance compared with HR-pQCT. Further clinical studies with larger cohort sizes should target the potential to demonstrate the ability of CortBS and MF parameters for individual fracture risk assessment.
In conclusion, alteration in cortical microstructure and viscoelasticity caused by the aging process and the progression of osteoporosis can be measured by CortBS and MF. These methods have high potential to identify patients at high risk for fragility fractures.
Kinematic GPS survey as validation of LIDAR strips accuracy
As a result of the catastrophic hydrogeological events which occurred in May 1998 in Campania, in the south
of Italy, the distinctive features of airborne laser scanning mounted on a helicopter were used to survey the
landslides at Sarno and Quindici. In order to survey the entire zone of interest, approximately 21 km², it was
necessary to scan 12 laser strips. Many problems arose during the survey: difficulties in receiving the GPS
signal, complex terrain features and unfavorable atmospheric conditions. These problems were investigated
and it emerged that one of the most influential factors is the quality of GPS signals. By analysing the original
GPS data, the traces obtained by fixing phase ambiguity with an On The Fly (OTF) algorithm were isolated
from those with smoothed differential GPS solution (DGPS). Processing and analysis of laser data
showed that not all the overlapping laser strips were congruent with each other. Since an external survey to
verify the laser data accuracy was necessary, it was decided to utilize the kinematic GPS technique. The laser
strips were subsequently adjusted, using the kinematic GPS data as reference points. Bearing in mind that in
mountainous areas like the one studied here it is not possible to obtain nominal precision and accuracy, a
good result was nevertheless obtained with a Digital Terrain Model (DTM) of all the zones of interest.
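The strip adjustment described above can be sketched, in its simplest vertical-offset form, as estimating one bias per laser strip against the kinematic GPS check points. The median estimator and the toy heights are our own choices for illustration, not the paper's exact adjustment procedure.

```python
import numpy as np

def strip_vertical_bias(strip_z, gps_z):
    """Estimate the vertical offset of a laser strip against kinematic
    GPS check points and return the corrected heights. The median of
    the differences is robust to a few bad laser returns."""
    strip_z = np.asarray(strip_z)
    bias = np.median(strip_z - np.asarray(gps_z))
    return strip_z - bias, bias

strip = [101.30, 98.72, 105.11, 99.40]   # laser heights at check points
gps   = [101.05, 98.47, 104.86, 99.15]   # same points, kinematic GPS
corrected, bias = strip_vertical_bias(strip, gps)
print(round(bias, 2))  # every point sits 0.25 m high -> bias 0.25
```

A full adjustment would also solve for tilts and planimetric shifts per strip, but the per-strip vertical bias is the dominant term that makes overlapping strips congruent.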
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
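The de-facto standard formulation mentioned above casts SLAM as least-squares estimation over a factor graph. The toy below solves a 1-D pose graph with two odometry edges and one loop-closure edge; it is linear only because the example is 1-D, whereas real SLAM problems are nonlinear and iteratively relinearized.

```python
import numpy as np

# Tiny 1-D pose graph: poses x1, x2 (x0 fixed at the origin).
# Edge residuals, one row each:
#   odometry:     x1 - x0 = 1.0
#   odometry:     x2 - x1 = 1.0
#   loop closure: x2 - x0 = 2.2   (slightly inconsistent on purpose)
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])
b = np.array([1.0, 1.0, 2.2])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x.round(3))  # the 0.2 m inconsistency is spread over all edges
```

Even this toy shows the key property of the graph formulation: the loop-closure residual is not assigned to any single pose but distributed optimally over the whole trajectory.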
Advances in Waveform and Photon Counting Lidar Processing for Forest Vegetation Applications
Full waveform (FW) and photon counting LiDAR (PCL) data have garnered greater attention due to increasing data availability, the wealth of information they contain, and promising prospects for large-scale vegetation mapping. However, many factors such as complex processing steps and scarce non-proprietary tools preclude extensive and practical uses of these data for vegetation characterization. Therefore, the overall goal of this study is to develop algorithms to process FW and PCL data and to explore their potential in real-world applications.
Study I explored classical waveform decomposition methods such as the Gaussian decomposition, Richardson–Lucy (RL) deconvolution, and a newly introduced optimized Gold deconvolution to process FW LiDAR data. Results demonstrated the advantages of the deconvolution and decomposition methods; all three approaches generated satisfactory results, while the best performer varied with the evaluation criteria.
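Of the three approaches, Richardson–Lucy deconvolution is easy to sketch in a few lines of numpy. The pulse shape, iteration count, and toy waveform below are illustrative, not the study's actual configuration.

```python
import numpy as np

def richardson_lucy(y, psf, iters=200):
    """Richardson-Lucy deconvolution of a recorded waveform y with the
    system pulse psf: iteratively sharpens overlapping return echoes
    while keeping the estimate non-negative."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    x = np.full_like(y, y.mean())            # flat positive start
    for _ in range(iters):
        est = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(est, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

# two overlapping echoes blurred by a Gaussian transmit pulse
truth = np.zeros(64)
truth[20], truth[28] = 1.0, 0.6
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
y = np.convolve(truth, pulse / pulse.sum(), mode="same")
x = richardson_lucy(y, pulse)
print(int(np.argmax(x)))  # strongest recovered echo near sample 20
```

Deconvolution recovers sharper echo positions than the blurred waveform alone, which is exactly why it competes with direct Gaussian decomposition in the comparison above.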
Built upon Study I, Study II applied the Bayesian non-linear modeling concepts for waveform decomposition and quantified the propagation of error and uncertainty along the processing steps. The performance evaluation and uncertainty analysis at the parameter, derived point cloud and surface model levels showed that the Bayesian decomposition could enhance the credibility of decomposition results in a probabilistic sense to capture the true error of estimates and trace the uncertainty propagation along the processing steps.
In Study III, we exploited FW LiDAR data to classify tree species by integrating machine learning methods (Random forests (RF) and Conditional inference forests (CF)) with a Bayesian inference method. Classification accuracy results highlighted that the Bayesian method was a superior alternative to the machine learning methods, and gave users more confidence for interpreting and applying classification results to real-world tasks such as forest inventory.
Study IV focused on developing a framework to derive terrain elevation and vegetation canopy height from test-bed sensor data and to pre-validate the capacity of the upcoming Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) mission. The methodology developed in this study illustrates plausible ways of processing the data that are structurally similar to expected ICESat-2 data and holds the potential to be a benchmark for further method adjustment once genuine ICESat-2 data are available.
Calibration of full-waveform airborne laser scanning data for 3D object segmentation
Airborne Laser Scanning (ALS) is a fully commercial technology, which has seen rapid uptake from the photogrammetry and remote sensing community to classify surface features and enhance automatic object recognition and extraction processes. 3D object segmentation is considered as one of the major research topics in the field of laser scanning for feature recognition and object extraction applications. The demand for automatic segmentation has significantly increased with the emergence of full-waveform (FWF) ALS, which potentially offers an unlimited number of return echoes. FWF has shown potential to improve available segmentation and classification techniques through exploiting the additional physical observables which are provided alongside the standard geometric information. However, use of the FWF additional information is not recommended without prior radiometric calibration, taking into consideration all the parameters affecting the backscattered energy.
The main focus of this research is to calibrate the additional information from FWF to develop the potential of point clouds for segmentation algorithms. Echo amplitude normalisation as a function of local incidence angle was identified as a particularly critical aspect, and a novel echo amplitude normalisation approach, termed the Robust Surface Normal (RSN) method, has been developed. Following the radar equation, a comprehensive radiometric calibration routine is introduced to account for all variables affecting the backscattered laser signal. Thereafter, a segmentation algorithm is developed, which utilises the raw 3D point clouds to estimate the normal for individual echoes based on the RSN method. The segmentation criterion is selected as the normal vector augmented by the calibrated backscatter signals. The developed segmentation routine aims to fully integrate FWF data to improve feature recognition and 3D object segmentation applications. The routine was tested over various feature types from two datasets with different properties to assess its potential. The results are compared to those delivered through utilising only geometric information, without the additional FWF radiometric information, to assess performance over existing methods. The results confirmed the potential of the FWF additional observables to improve segmentation algorithms. The new approach was validated against manual segmentation results, revealing a successful automatic implementation and achieving an accuracy of 82%.
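A first-order version of the echo amplitude normalisation discussed above is a simple cosine correction by the local incidence angle between the laser beam and the surface normal. The thesis's RSN method estimates the normal more robustly than this, so the sketch only conveys the principle, and the numbers are invented.

```python
import numpy as np

def normalise_amplitude(amplitude, incidence_deg):
    """Cosine correction of an echo amplitude for the local incidence
    angle: a surface tilted away from the beam returns less energy,
    so its recorded amplitude is scaled back up."""
    return amplitude / np.cos(np.radians(incidence_deg))

# a 60-degree incidence angle halves the returned energy (cos 60° = 0.5)
print(round(normalise_amplitude(50.0, 60.0), 1))  # -> 100.0
```

Without such a correction, the same material produces different backscatter values on differently oriented facets, which would corrupt any segmentation criterion built on the radiometric observables.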