VGC 2023 - Unveiling the dynamic Earth with digital methods: 5th Virtual Geoscience Conference: Book of Abstracts
Conference proceedings of the 5th Virtual Geoscience Conference, 21-22 September 2023, held in Dresden. The VGC is a multidisciplinary forum for researchers in geoscience, geomatics and related disciplines to share their latest developments and applications.

Contents:
Short Courses
Workshop Stream 1
Workshop Stream 2
Workshop Stream 3
Session 1 – Point Cloud Processing: Workflows, Geometry & Semantics
Session 2 – Visualisation, Communication & Teaching
Session 3 – Applying Machine Learning in Geosciences
Session 4 – Digital Outcrop Characterisation & Analysis
Session 5 – Airborne & Remote Mapping
Session 6 – Recent Developments in Geomorphic Process and Hazard Monitoring
Session 7 – Applications in Hydrology & Ecology
Poster Contributions
Very High Resolution (VHR) Satellite Imagery: Processing and Applications
Recently, growing interest has emerged in using remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture in the management of soils, crops, and pests; for urban planning; for disaster monitoring; and more. However, for these maps to achieve their full potential, periodic monitoring and analysis of multi-temporal changes are essential. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information for implementing spatially based conservation actions. Moreover, they enable observation of environmental parameters at broader spatial and finer temporal scales than field observation alone allows. Recent VHR satellite technologies and image processing algorithms thus present the opportunity to develop quantitative techniques with the potential to improve upon traditional techniques in cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, and environmental monitoring. This book collects new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.
Ship recognition on the sea surface using aerial images taken by UAV: a deep learning approach
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Oceans are vitally important to mankind: they are a major source of food, they have a large impact on the global environmental equilibrium, and most of the world's commerce is carried over them. Maritime surveillance and monitoring, in particular identifying the ships in use, is therefore of great importance for overseeing activities such as fishing, marine transportation, navigation in general, illegal border encroachment, and search and rescue operations. In this thesis, we used images obtained with Unmanned Aerial Vehicles (UAVs) over the Atlantic Ocean to identify what type of ship (if any) is present at a given location. Images generated by UAV cameras suffer from camera motion, scale variability, variability in the sea surface, and sun glare. Extracting information from these images is challenging and is mostly done by human operators, but advances in computer vision and the development of deep learning techniques in recent years have made it possible to do so automatically. We used four state-of-the-art pretrained deep learning network models, namely VGG16, Xception, ResNet, and InceptionResNet, trained on the ImageNet dataset, modified their original structure using transfer-learning-based fine-tuning techniques, and then trained them on our dataset to create new models. We achieved very high accuracy (99.6% to 99.9% correct classifications) when classifying the ships that appear in the images of our dataset. With such a high success rate (albeit at the cost of high computing power), these algorithms can be implemented on maritime patrol UAVs, thus improving Maritime Situational Awareness.
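The transfer-learning setup described in the abstract, a pretrained base kept frozen while a new task-specific head is trained, can be shown with a minimal, self-contained sketch. Here a fixed random projection merely stands in for an ImageNet-pretrained backbone such as VGG16; the toy data and all names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained convolutional base: a fixed ("frozen")
# random projection followed by ReLU. In the thesis this role is played
# by ImageNet-pretrained networks such as VGG16 or ResNet.
W_frozen = rng.normal(size=(64, 32)) / 8.0

def features(x):
    # Frozen feature extractor: these weights are never updated.
    return np.maximum(x @ W_frozen, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy two-class "ship / no ship" data with a simple linear ground truth.
X = rng.normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[y]

# Transfer learning: only the new classification head is trained.
W_head = np.zeros((32, 2))
for _ in range(500):
    P = softmax(features(X) @ W_head)
    W_head -= 0.5 * features(X).T @ (P - Y) / len(X)

acc = (softmax(features(X) @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In practice the head is a new dense layer replacing the original 1000-class ImageNet classifier, and "fine-tuning" may also unfreeze some top layers of the base at a low learning rate.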
Continuous Modeling of 3D Building Rooftops From Airborne LIDAR and Imagery
In recent years, a number of mega-cities have provided 3D photorealistic virtual models to support decision-making for maintaining the cities' infrastructure and environment more effectively. 3D virtual city models are static snapshots of the environment and represent the status quo at the time of their data acquisition. However, cities are dynamic systems that continuously change over time. Accordingly, their virtual representations need to be updated regularly and in a timely manner to allow for accurate analysis and the simulated results that decisions are based upon. The concept of "continuous city modeling" is to progressively reconstruct city models by accommodating changes recognized in the spatio-temporal domain while preserving unchanged structures. Developing a universal intelligent machine enabling continuous modeling, however, remains a challenging task. This thesis therefore proposes a novel research framework for continuously reconstructing 3D building rooftops using multi-sensor data. To achieve this goal, we first propose a 3D building rooftop modeling method using airborne LiDAR data. The main focus is the implementation of an implicit regularization method that imposes data-driven building regularity on the noisy boundaries of roof planes to reconstruct 3D building rooftop models. The implicit regularization process is implemented in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). Secondly, we propose a context-based geometric hashing method to align newly acquired image data with existing building models. The novelty is the use of context features to achieve robust and accurate matching. Thirdly, the existing building models are refined by a newly proposed sequential fusion method. The main advantage of the proposed method is its ability to progressively refine the modeling errors frequently observed in LiDAR-driven building models.
The refinement process is conducted in the framework of MDL combined with HAT. Markov Chain Monte Carlo (MCMC) coupled with Simulated Annealing (SA) is employed to perform a global optimization. The results demonstrate that the proposed continuous rooftop modeling methods show promise for supporting various critical decisions, not only by reconstructing 3D rooftop models accurately but also by updating the models using multi-sensor data.
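The MCMC-with-simulated-annealing optimization mentioned above follows a standard Metropolis scheme. The skeleton below is a generic illustration rather than the thesis' implementation: `energy` and `propose` stand in for the problem-specific MDL objective and the set of rooftop-model moves, and the 1-D quadratic is a toy objective.

```python
import math
import numpy as np

def simulated_annealing(energy, propose, x0, t0=1.0, cooling=0.995,
                        steps=2000, rng=None):
    """Generic Metropolis / simulated-annealing skeleton (illustrative).

    `energy` and `propose` are placeholders for a problem-specific
    objective (e.g. an MDL score over model hypotheses) and move set.
    """
    rng = rng or np.random.default_rng()
    x, e, t = x0, energy(x0), t0
    for _ in range(steps):
        x_new = propose(x, rng)
        e_new = energy(x_new)
        # Metropolis rule: always accept improvements; accept worse
        # moves with probability exp(-(e_new - e) / t).
        if e_new < e or rng.random() < math.exp((e - e_new) / t):
            x, e = x_new, e_new
        t *= cooling          # geometric cooling schedule
    return x, e

# Toy 1-D objective with its minimum at x = 2.
x, e = simulated_annealing(lambda x: (x - 2.0) ** 2,
                           lambda x, rng: x + rng.normal(0.0, 0.5),
                           x0=10.0, rng=np.random.default_rng(4))
print(f"found x = {x:.2f}")
```

At high temperature the chain explores freely; as the temperature decays, worse moves are accepted less often and the search settles into a low-energy configuration.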
Spectral-spatial Feature Extraction for Hyperspectral Image Classification
As an emerging technology, hyperspectral imaging provides huge opportunities in both remote sensing and computer vision. Its advantage comes from its high resolution and wide coverage in the electromagnetic spectral domain, which reflects the intrinsic properties of object materials. By combining spatial and spectral information, it is possible to extract more comprehensive and discriminative representations of objects of interest than with traditional methods, thus facilitating basic pattern recognition tasks such as object detection, recognition, and classification. With advanced imaging technologies gradually becoming available to universities and industry, there is increased demand for new methods that can fully explore the information embedded in hyperspectral images. In this thesis, three spectral-spatial feature extraction methods are developed for salient object detection, hyperspectral face recognition, and remote sensing image classification.
Object detection is an important task for many applications based on hyperspectral imaging. While most traditional methods rely on the pixel-wise spectral response, many recent efforts have been devoted to extracting spectral-spatial features. In the first approach, we extend Itti's visual saliency model to the spectral domain and introduce a spectral-spatial distribution based saliency model for object detection. This procedure enables the extraction of salient spectral features in the scale space, which relate to the material properties and spatial layout of objects.
Traditional 2D face recognition has been studied for many years and has achieved great success. Nonetheless, there is high demand to explore information beyond the structures and textures of faces in the spatial domain. Hyperspectral imaging meets this requirement by providing additional spectral information on objects, complementing the traditional spatial features extracted from 2D images. In the second approach, we propose a novel 3D high-order texture pattern descriptor for hyperspectral face recognition which effectively exploits both spatial and spectral features in hyperspectral images. Based on the local derivative pattern, our method encodes hyperspectral faces with multi-directional derivatives and a binarization function in spectral-spatial space. Compared to traditional face recognition methods, ours can describe distinctive micro-patterns that integrate the spatial and spectral information of faces.
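As a rough illustration of binarized derivative micro-patterns in spectral-spatial space (a much-simplified first-order version, not the high-order multi-directional descriptor proposed in the thesis), one can binarize first-order derivatives along the two spatial axes and the spectral axis of a hyperspectral cube and histogram the resulting codes:

```python
import numpy as np

def spectral_spatial_pattern(cube):
    """Simplified spectral-spatial derivative pattern (illustrative only).

    cube: (rows, cols, bands) hyperspectral image. For each interior
    voxel, binarize the first-order derivatives along the two spatial
    axes and the spectral axis, pack the three sign bits into a 3-bit
    code, and return the normalized 8-bin code histogram.
    """
    dx = np.diff(cube, axis=0)[:, :-1, :-1] > 0   # derivative along rows
    dy = np.diff(cube, axis=1)[:-1, :, :-1] > 0   # derivative along cols
    ds = np.diff(cube, axis=2)[:-1, :-1, :] > 0   # derivative along bands
    codes = dx.astype(int) * 4 + dy * 2 + ds      # 3-bit micro-pattern code
    hist = np.bincount(codes.ravel(), minlength=8)
    return hist / hist.sum()

# Toy 8x8 patch with 5 spectral bands.
rng = np.random.default_rng(1)
desc = spectral_spatial_pattern(rng.normal(size=(8, 8, 5)))
print(desc.shape)  # (8,)
```

The actual descriptor additionally uses higher-order derivatives along multiple directions, yielding far richer codes than the eight shown here.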
Mathematical morphology operations are limited to extracting spatial features from two-dimensional data and cannot cope with hyperspectral images because of the so-called ordering problem. In the third approach, we propose a novel multi-dimensional morphology descriptor, the tensor morphology profile (TMP), for hyperspectral image classification. TMP is a general framework for extracting multi-dimensional structures in high-dimensional data. The n-order morphology profile is proposed to work with the n-order tensor, which can capture the inner high-order structures. By treating a hyperspectral image as a tensor, it is possible to extend morphology to high-dimensional data so that powerful morphological tools can be used to analyze hyperspectral images with fused spectral-spatial information.
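For context, the classical 2D morphological profile that TMP generalizes stacks openings and closings of a single band at increasing structuring-element sizes. A minimal sketch using SciPy's grayscale morphology (the function name and sizes here are illustrative choices, not the thesis' configuration):

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphology_profile(band, sizes=(3, 5, 7)):
    """Classical 2D morphological profile of one image band.

    Stacks the band with its openings and closings at increasing
    structuring-element sizes; the tensor morphology profile (TMP)
    generalizes this idea to the full n-order hyperspectral tensor.
    """
    layers = [band]
    for s in sizes:
        layers.append(grey_opening(band, size=(s, s)))  # removes bright details < s
        layers.append(grey_closing(band, size=(s, s)))  # removes dark details < s
    return np.stack(layers, axis=-1)

rng = np.random.default_rng(2)
profile = morphology_profile(rng.normal(size=(16, 16)))
print(profile.shape)  # (16, 16, 7)
```

The profile's layers record how structures disappear as the structuring element grows, which is what makes morphological features informative about spatial scale.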
Finally, we discuss the sampling strategy used to evaluate spectral-spatial methods in remote sensing hyperspectral image classification. We find that the traditional pixel-based random sampling strategy used for spectral processing leads to unfair or biased performance evaluation in the spectral-spatial processing context. When training and testing samples are randomly drawn from the same image, the dependence caused by overlap between them may be artificially enhanced by some spatial processing methods. It is then hard to determine whether an improvement in classification accuracy comes from incorporating spatial information into the classifier or from increasing the overlap between training and testing samples. To partially solve this problem, we propose a novel controlled random sampling strategy for spectral-spatial methods. It significantly reduces the overlap between training and testing samples and provides a more objective and accurate evaluation.
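One simple way to see the principle behind controlled sampling is a spatially buffered split: draw training pixels at random, then exclude from the test set every pixel within a spatial buffer of any training pixel, so spatial filters cannot leak training information into the test samples. This sketch illustrates the idea only, not the exact strategy proposed in the thesis; all names and parameters are invented.

```python
import numpy as np

def buffered_split(labels, n_train_per_class, buffer, rng):
    """Spatially buffered train/test split (illustrative sketch)."""
    coords = np.argwhere(labels >= 0)
    # Randomly draw training pixels per class, as in plain random sampling.
    train_parts = []
    for c in np.unique(labels[labels >= 0]):
        cls = np.argwhere(labels == c)
        pick = rng.choice(len(cls), size=n_train_per_class, replace=False)
        train_parts.append(cls[pick])
    train = np.concatenate(train_parts)
    # Block a square buffer zone around each training pixel.
    blocked = np.zeros_like(labels, dtype=bool)
    for r, c in train:
        blocked[max(0, r - buffer):r + buffer + 1,
                max(0, c - buffer):c + buffer + 1] = True
    # Test set: every labeled pixel outside all buffer zones.
    test = np.array([rc for rc in coords if not blocked[rc[0], rc[1]]])
    return train, test

rng = np.random.default_rng(3)
labels = rng.integers(0, 3, size=(20, 20))      # toy 3-class label map
train, test = buffered_split(labels, n_train_per_class=5, buffer=2, rng=rng)

# Every test pixel is more than `buffer` away (Chebyshev distance)
# from every training pixel.
d = np.abs(test[:, None, :] - train[None, :, :]).max(axis=2).min(axis=1)
print(d.min() > 2)  # True
```

The cost of the buffer is fewer usable test pixels, but the resulting accuracy estimate is no longer inflated by train/test spatial overlap.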