UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.
Ureilite meteorites and the unknown proto-planet: using EBSD to construct a geological history
The ureilites are a group of ultramafic achondrite meteorites composed primarily of olivine and pigeonite, with accessory minerals and a high abundance of carbon in the form of graphite and diamond. There are many hypotheses as to how the ureilite group formed, but the majority of authors now agree that they represent a mantle restite of a now-destroyed planetesimal that may have been as large as Mercury (Nabiei et al., 2018). This planetesimal was large enough for the ureilites to form through igneous processing, but not large enough to become a full planet. At some point, possibly within the first 10 million years of its life (Rai et al., 2020), the ureilite parent body (UPB) was subjected to a catastrophic impact which destroyed the planetesimal and created daughter asteroids that are the current parent bodies of the ureilites (Goodrich et al., 2015). This study aims to construct a comprehensive geological history of the samples using Electron Backscatter Diffraction (EBSD), Energy Dispersive Spectroscopy (EDS), Raman spectroscopy, and geochemical data. Here we show, using Raman peak Full Width at Half Maximum (FWHM) data, that the majority of diamonds present in the ureilite suite formed through shock-related processes. This is combined with the EBSD and optical microscopy data to discuss a range of shock features present within the ureilites, such as mosaicism. Various slip systems are shown to be activated across the samples, indicating that deformation occurred under a variety of temperature and pressure conditions throughout ureilite formation. Evidence of shear processes affecting the majority of the samples studied is also presented using the EBSD datasets. A proposed geological history is presented to tie the shock and shear features together. Our results agree with recent studies of diamond formation on the UPB (Nestola et al., 2020), which goes some way towards negating the need for a large planetesimal to explain ureilite formation.
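As a rough illustration of the FWHM measurement underlying this classification, here is a minimal Python sketch that fits a Lorentzian to the diamond Raman band near 1332 cm^-1 and reports its full width at half maximum; the fitting window, initial guesses, and synthetic spectrum are our own placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp, offset):
    """Lorentzian line shape; gamma is the half-width at half-maximum."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2) + offset

def raman_fwhm(wavenumber, intensity, center=1332.0, window=40.0):
    """Fit the diamond band near `center` (cm^-1) and return its FWHM (cm^-1)."""
    mask = np.abs(wavenumber - center) < window
    x, y = wavenumber[mask], intensity[mask]
    p0 = [center, 5.0, y.max() - y.min(), y.min()]
    (x0, gamma, amp, offset), _ = curve_fit(lorentzian, x, y, p0=p0)
    return 2.0 * abs(gamma)  # FWHM of a Lorentzian is twice its HWHM

# Synthetic example: a band at 1332 cm^-1 with a true FWHM of 10 cm^-1.
# Broad bands are consistent with poorly ordered, shock-produced diamond;
# narrow bands indicate well-crystallized material.
x = np.linspace(1280.0, 1380.0, 500)
y = lorentzian(x, 1332.0, 5.0, 100.0, 10.0) + np.random.normal(0.0, 1.0, x.size)
print(raman_fwhm(x, y))  # approximately 10 cm^-1
```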
Soundscape in Urban Forests
This Special Issue of Forests explores the role of soundscapes in urban forested areas. It comprises 11 papers on soundscape studies conducted in urban forests in Asia and Africa. The collection spans six research fields: (1) the ecological patterns and processes of forest soundscapes; (2) boundary effects and perceptual topology; (3) natural soundscapes and human health; (4) the experience of multi-sensory interactions; (5) environmental behavior and cognitive disposition; and (6) soundscape resource management in forests.
Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring
Artificially intelligent perception is increasingly present in the lives of
every one of us. Vehicles are no exception, (...) In the near future, pattern
recognition will have an even stronger role in vehicles, as self-driving cars
will require automated ways to understand what is happening around (and within)
them and act accordingly. (...) This doctoral work focused on advancing
in-vehicle sensing through the research of novel computer vision and pattern
recognition methodologies for both biometrics and wellbeing monitoring. The
main focus has been on electrocardiogram (ECG) biometrics, a trait well-known
for its potential for seamless driver monitoring. Major efforts were devoted to
achieving improved performance in identification and identity verification in
off-the-person scenarios, well-known for increased noise and variability. Here,
end-to-end deep learning ECG biometric solutions were proposed and important
topics were addressed such as cross-database and long-term performance,
waveform relevance through explainability, and interlead conversion. Face
biometrics, a natural complement to the ECG in seamless unconstrained
scenarios, was also studied in this work. The open challenges of masked face
recognition and interpretability in biometrics were tackled in an effort to
evolve towards algorithms that are more transparent, trustworthy, and robust to
significant occlusions. Within the topic of wellbeing monitoring, improved
solutions to multimodal emotion recognition in groups of people and
activity/violence recognition in in-vehicle scenarios were proposed. Finally, we also proposed a novel way to learn template security within end-to-end models, dispensing with separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure both data security and optimal performance. (...)
Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Porto.
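As a loose illustration of what an end-to-end ECG biometric model can look like, here is a minimal PyTorch sketch that maps raw single-lead ECG segments to identity logits; the layer sizes, segment length, and number of enrolled subjects are arbitrary assumptions, and this is not the architecture proposed in the thesis.

```python
import torch
import torch.nn as nn

class ECGIdentifier(nn.Module):
    """Toy end-to-end ECG biometric model: raw single-lead segments in,
    identity logits out (segment length and class count are assumed)."""
    def __init__(self, n_subjects: int = 100, segment_len: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_subjects)

    def forward(self, x):                      # x: (batch, 1, segment_len)
        z = self.features(x).squeeze(-1)       # (batch, 32) embedding
        return self.classifier(z)              # identity logits

model = ECGIdentifier()
logits = model(torch.randn(8, 1, 1000))        # e.g. 5-second segments at 200 Hz
```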
Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments
Despite the impressive progress of telepresence systems for room-scale scenes with static and dynamic scene entities, expanding their capabilities to scenarios with larger dynamic environments beyond a fixed size of a few square meters remains challenging.
In this paper, we aim to share 3D live-telepresence experiences in large-scale environments beyond room scale, with both static and dynamic scene entities, at practical bandwidth requirements, based only on lightweight scene capture with a single moving consumer-grade RGB-D camera. To this end, we present a system built upon a novel hybrid volumetric scene representation: a voxel-based representation for static content, which stores not only the reconstructed surface geometry but also information about object semantics and accumulated dynamic movement over time, combined with a point-cloud-based representation for dynamic scene parts, where the separation from static parts is achieved using semantic and instance information extracted from the input frames. Static and dynamic content are streamed independently yet simultaneously, potentially moving but currently static scene entities are seamlessly integrated into the static model until they become dynamic again, and static and dynamic data are fused at the remote client; as a result, our system achieves VR-based live-telepresence at close to real-time rates. Our evaluation demonstrates the potential of our novel approach in terms of visual quality and performance, and includes ablation studies on the involved design choices.
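To make the static/dynamic split described above more concrete, here is a minimal Python sketch that routes the labelled points of one RGB-D frame either into a voxel map (static content) or a per-frame dynamic point cloud; the voxel size, the set of potentially dynamic classes, and the dictionary-based voxel map are illustrative stand-ins, not the paper's data structures.

```python
import numpy as np

VOXEL_SIZE = 0.05                        # metres; illustrative static-map resolution
DYNAMIC_CLASSES = {"person", "chair"}    # assumed set of potentially moving classes

static_voxels: dict[tuple[int, int, int], np.ndarray] = {}  # voxel index -> fused point

def integrate_frame(points: np.ndarray, labels: list[str], moving: np.ndarray):
    """Route each labelled point of one frame.

    Points whose class is potentially dynamic *and* currently moving go to the
    per-frame dynamic point cloud; everything else is fused into the voxel map.
    """
    dynamic_points = []
    for p, cls, is_moving in zip(points, labels, moving):
        if cls in DYNAMIC_CLASSES and is_moving:
            dynamic_points.append(p)                          # streamed per frame
        else:
            key = tuple((p // VOXEL_SIZE).astype(int))
            prev = static_voxels.get(key)
            static_voxels[key] = p if prev is None else 0.5 * (prev + p)  # running fuse
    return np.array(dynamic_points)
```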
Precise Facial Landmark Detection by Reference Heatmap Transformer
Most facial landmark detection methods predict landmarks by mapping input facial appearance features to landmark heatmaps, and they have achieved promising results. However, when the face image suffers from large poses, heavy occlusions, or complicated illumination, these methods can learn neither discriminative feature representations nor effective facial shape constraints, nor can they accurately predict the value of each element in the landmark heatmap, limiting their detection accuracy. To address this problem, we propose a novel Reference
Heatmap Transformer (RHT) by introducing reference heatmap information for more
precise facial landmark detection. The proposed RHT consists of a Soft
Transformation Module (STM) and a Hard Transformation Module (HTM), which can
cooperate with each other to encourage the accurate transformation of the
reference heatmap information and facial shape constraints. Then, a Multi-Scale
Feature Fusion Module (MSFFM) is proposed to fuse the transformed heatmap
features and the semantic features learned from the original face images to
enhance feature representations for producing more accurate target heatmaps. To
the best of our knowledge, this is the first study to explore how to enhance
facial landmark detection by transforming the reference heatmap information.
The experimental results from challenging benchmark datasets demonstrate that
our proposed method outperforms the state-of-the-art methods in the literature.
Comment: Accepted by IEEE Transactions on Image Processing, March 2023.
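As a rough sketch of fusing transformed reference-heatmap features with image features, in the spirit of the MSFFM described above, the following PyTorch module upsamples the heatmap features, concatenates them with the image features, and predicts per-landmark heatmaps; channel counts, scales, and the concat-and-convolve design are our own assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusion(nn.Module):
    """Fuse transformed heatmap features with image features and predict
    per-landmark target heatmaps (all sizes are illustrative)."""
    def __init__(self, n_landmarks: int = 68, channels: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.head = nn.Conv2d(channels, n_landmarks, kernel_size=1)

    def forward(self, heatmap_feat, image_feat):
        # Bring the (possibly coarser) heatmap features to the image-feature scale.
        heatmap_feat = F.interpolate(heatmap_feat, size=image_feat.shape[-2:],
                                     mode="bilinear", align_corners=False)
        fused = torch.relu(self.fuse(torch.cat([heatmap_feat, image_feat], dim=1)))
        return self.head(fused)      # (batch, n_landmarks, H, W) target heatmaps

out = SimpleFusion()(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 64, 64))
```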
NeTO:Neural Reconstruction of Transparent Objects with Self-Occlusion Aware Refraction-Tracing
We present a novel method, called NeTO, for capturing the 3D geometry of solid transparent objects from 2D images via volume rendering. Reconstructing transparent objects is a very challenging task that is ill-suited to general-purpose reconstruction techniques because of specular light transport phenomena. Although existing refraction-tracing methods, designed specifically for this task, achieve impressive results, they still suffer from unstable optimization and loss of fine details, since the explicit surface representation they adopt is difficult to optimize and the self-occlusion problem is ignored during refraction tracing. In this paper, we propose to leverage an implicit Signed Distance Function (SDF) as the surface representation and to optimize the SDF field via volume rendering with self-occlusion-aware refractive ray tracing. The implicit representation enables our method to produce high-quality reconstructions even from a limited set of images, and the self-occlusion-aware strategy makes it possible to accurately reconstruct the self-occluded regions. Experiments show that our method achieves faithful reconstruction results and outperforms prior works by a large margin. Visit our project page at \url{https://www.xxlong.site/NeTO/}.
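The two ingredients named in the abstract, an SDF queried along rays and refraction at the recovered surface, can be sketched as follows in Python; the analytic sphere SDF, the fixed index of refraction, and the simple sphere tracer are placeholders standing in for the learned SDF field and the paper's renderer.

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    """Placeholder SDF: a unit sphere stands in for the learned SDF field."""
    return np.linalg.norm(p) - radius

def sphere_trace(origin, direction, max_steps=128, eps=1e-4):
    """March along the ray until the SDF says we have reached the surface."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf_sphere(p)
        if d < eps:
            return p
        t += d
    return None  # ray missed the object

def refract(direction, normal, eta):
    """Snell's law refraction; eta = n_incident / n_transmitted. Returns None on TIR."""
    cos_i = -np.dot(normal, direction)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return eta * direction + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * normal

hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
if hit is not None:
    normal = hit / np.linalg.norm(hit)                 # analytic normal of the sphere
    bent = refract(np.array([0.0, 0.0, 1.0]), normal, eta=1.0 / 1.5)  # air to glass
```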
Multi-view 3D Face Reconstruction Based on Flame
At present, 3D face reconstruction has broad application prospects in various fields, but research on it is still at an early stage. In this paper, we aim to achieve better 3D face reconstruction quality by combining a multi-view training framework with the parametric face model Flame, and we propose a multi-view training and testing model, MFNet (Multi-view Flame Network). We build a self-supervised training framework and implement constraints such as a multi-view optical flow loss and a face landmark loss, finally obtaining the complete MFNet. We propose novel implementations of the multi-view optical flow loss and the covisible mask. We test our model on the AFLW and FaceScape datasets, and we also take pictures of our own faces to reconstruct them in 3D while simulating real capture scenarios as closely as possible, achieving good results. Our work mainly addresses the problem of combining parametric face models with multi-view 3D face reconstruction and explores the implementation of a Flame-based multi-view training and testing framework, contributing to the field of 3D face reconstruction.
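As an illustration of a face landmark loss of the kind mentioned above, here is a minimal PyTorch sketch that projects 3D model landmarks with an assumed weak-perspective camera and penalizes their distance to detected 2D landmarks; the shapes, the camera parameterization, and the plain L2 penalty are our own simplifications, and MFNet's actual losses (including the multi-view optical flow term) are more involved.

```python
import torch

def landmark_loss(pred_3d_landmarks, gt_2d_landmarks, camera):
    """L2 loss between projected model landmarks and detected 2D landmarks.

    pred_3d_landmarks: (N, 68, 3) landmarks from the parametric face model
    gt_2d_landmarks:   (N, 68, 2) detected landmarks in image coordinates
    camera:            (N, 3) assumed weak-perspective parameters [scale, tx, ty]
    """
    scale = camera[:, :1].unsqueeze(-1)                       # (N, 1, 1)
    trans = camera[:, 1:].unsqueeze(1)                        # (N, 1, 2)
    projected = scale * pred_3d_landmarks[..., :2] + trans    # orthographic projection
    return ((projected - gt_2d_landmarks) ** 2).sum(-1).mean()

loss = landmark_loss(torch.randn(4, 68, 3), torch.randn(4, 68, 2), torch.randn(4, 3))
```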
A Survey on Deep Multi-modal Learning for Body Language Recognition and Generation
Body language (BL) refers to the non-verbal communication expressed through
physical movements, gestures, facial expressions, and postures. It is a form of
communication that conveys information, emotions, attitudes, and intentions
without the use of spoken or written words. It plays a crucial role in
interpersonal interactions and can complement or even override verbal
communication. Deep multi-modal learning techniques have shown promise in
understanding and analyzing these diverse aspects of BL. The survey emphasizes
their applications to BL generation and recognition. Several common BLs are considered, i.e., Sign Language (SL), Cued Speech (CS), Co-speech (CoS), and Talking Head (TH), and we analyze and establish the connections among these four BLs for the first time. Their generation and recognition often involve multi-modal approaches. Benchmark datasets for BL research are collected and organized, along with evaluations of SOTA methods on these datasets. The survey highlights challenges such as limited labeled data, multi-modal learning, and the need for domain adaptation to generalize models to unseen speakers or languages. Future research directions are presented, including exploring self-supervised learning techniques, integrating contextual information from other modalities, and exploiting large-scale pre-trained multi-modal models. In summary, this survey paper provides, for the first time, a comprehensive understanding of deep multi-modal learning for the various BL generation and recognition tasks. By analyzing advancements, challenges, and future directions, it serves as a valuable resource for researchers and practitioners in advancing this field. In addition, we maintain a continuously updated paper list for deep multi-modal learning for BL recognition and generation: https://github.com/wentaoL86/awesome-body-language
A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery
Semantic segmentation (classification) of Earth Observation imagery is a
crucial task in remote sensing. This paper presents a comprehensive review of
technical factors to consider when designing neural networks for this purpose.
The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural
Networks (RNNs), Generative Adversarial Networks (GANs), and transformer
models, discussing prominent design patterns for these ANN families and their
implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data such as augmentation, transfer learning, and domain adaptation. By
encompassing both the technical aspects of neural network design and the
data-related considerations, this review provides researchers and practitioners
with a comprehensive and up-to-date understanding of the factors involved in
designing effective neural networks for semantic segmentation of Earth
Observation imagery.
Comment: 145 pages with 32 figures.
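As a concrete illustration of two of the pre-processing steps the review covers, the following Python sketch standardizes each band of an Earth Observation image and chips the scene into fixed-size tiles; the chip size, stride, and in-memory NumPy array are illustrative choices only.

```python
import numpy as np

def normalize_bands(image: np.ndarray) -> np.ndarray:
    """Per-band standardization of an (H, W, bands) image to zero mean, unit variance."""
    mean = image.mean(axis=(0, 1), keepdims=True)
    std = image.std(axis=(0, 1), keepdims=True) + 1e-8
    return (image - mean) / std

def chip(image: np.ndarray, size: int = 256, stride: int = 256):
    """Yield (size, size, bands) chips; edges not covered by a full chip are skipped."""
    h, w, _ = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

scene = normalize_bands(np.random.rand(1024, 1024, 4).astype(np.float32))
chips = list(chip(scene))   # 16 non-overlapping 256x256 chips
```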