Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated against observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advances in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold: first, to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models; and second, to estimate river discharge using satellite video combined with high-resolution topographic data.

In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge.

Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
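To make the image-velocimetry step concrete, the sketch below shows the normalised cross-correlation at the core of LSPIV-style velocity estimation, assuming two consecutive, orthorectified grayscale frames; the window size, search margin, frame interval and ground sampling distance are illustrative assumptions rather than values used in the thesis.

```python
# Minimal LSPIV-style sketch: track water-surface texture between two frames
# by normalised cross-correlation of interrogation windows (illustrative only).
import numpy as np
import cv2

def surface_velocities(frame_a, frame_b, win=32, search=16, dt=0.04, gsd=0.05):
    """Return (x, y, u, v) tuples with velocities in m/s.

    frame_a, frame_b -- consecutive uint8 grayscale frames
    dt  -- frame interval in seconds (e.g. 25 fps -> 0.04 s), assumed
    gsd -- ground sampling distance in m/pixel from orthorectification, assumed
    """
    h, w = frame_a.shape
    vectors = []
    for y in range(search, h - win - search, win):
        for x in range(search, w - win - search, win):
            template = frame_a[y:y + win, x:x + win]
            region = frame_b[y - search:y + win + search,
                             x - search:x + win + search]
            # Correlate the window against a slightly larger search region in
            # the next frame; the correlation peak gives the displacement.
            ncc = cv2.matchTemplate(region, template, cv2.TM_CCORR_NORMED)
            _, _, _, peak = cv2.minMaxLoc(ncc)
            dx, dy = peak[0] - search, peak[1] - search
            vectors.append((x, y, dx * gsd / dt, dy * gsd / dt))
    return vectors
```

In practice, LSPIV implementations layer seeding checks, sub-pixel peak fitting and outlier filtering on top of this basic correlation step.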
Advances in machine learning algorithms for financial risk management
In this thesis, three novel machine learning techniques are introduced to address distinct
yet interrelated challenges involved in financial risk management tasks. These approaches
collectively offer a comprehensive strategy, beginning with the precise classification of credit
risks, advancing through the nuanced forecasting of financial asset volatility, and ending
with the strategic optimisation of financial asset portfolios.
Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique has been proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk
assessment. The key process involves the creation of heuristically balanced datasets to effectively address the problem. It uses a resampling technique based on Gaussian mixture
modelling to generate a synthetic minority class from the minority class data and concurrently uses k-means clustering on the majority class. Feature selection is then performed
using the Extra Trees ensemble technique. A cost-sensitive logistic regression
model is then applied to predict the probability of default using the heuristically balanced
datasets. The results underscore the effectiveness of our proposed technique, with superior
performance observed in comparison to other preprocessing approaches for imbalanced data. This
advancement in credit risk classification lays a solid foundation for understanding individual
financial behaviours, a crucial first step in the broader context of financial risk management.
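A minimal sketch of such a pipeline is given below, using off-the-shelf scikit-learn components as stand-ins; the mixture size, cluster counts, cost weights and balance point are illustrative assumptions, not the settings used in the thesis.

```python
# Sketch of the hybrid dual-resampling + cost-sensitive idea described above:
# GMM oversampling of the minority class, k-means undersampling of the
# majority class, Extra-Trees feature selection, cost-sensitive logistic
# regression. Hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

def balance_dataset(X_min, X_maj):
    """Build a heuristically balanced dataset (assumes len(X_min) < n_target)."""
    n_target = len(X_maj) // 2                     # illustrative balance point
    gmm = GaussianMixture(n_components=5, random_state=0).fit(X_min)
    X_min_syn, _ = gmm.sample(n_target - len(X_min))   # synthetic minority rows
    km = KMeans(n_clusters=n_target, n_init=10, random_state=0).fit(X_maj)
    X_bal = np.vstack([X_min, X_min_syn, km.cluster_centers_])
    y_bal = np.array([1] * n_target + [0] * n_target)  # 1 = default (minority)
    return X_bal, y_bal

def fit_default_model(X, y):
    """Extra-Trees feature selection, then cost-sensitive logistic regression
    (the class_weight dict penalises missed defaults more heavily)."""
    selector = SelectFromModel(
        ExtraTreesClassifier(n_estimators=200, random_state=0)).fit(X, y)
    clf = LogisticRegression(class_weight={0: 1, 1: 5}, max_iter=1000)
    clf.fit(selector.transform(X), y)
    return selector, clf
```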
Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model that combines a
Triple Discriminator Generative Adversarial Network with a continuous wavelet transform
is proposed. The model decomposes volatility time series into signal-like and noise-like frequency components, allowing the separate detection and monitoring of non-stationary volatility data. The network comprises a wavelet transform component consisting of continuous and inverse wavelet transforms, an auto-encoder component made up of encoder and decoder networks, and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. During training, the model employs an ensemble of losses: an unsupervised adversarial loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets are
employed to demonstrate the effectiveness of the proposed model. This approach not only
enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis.
Finally, the thesis proposes a novel technique for portfolio optimisation. This involves a model-free reinforcement learning strategy that takes the historical Low, High, and Close prices of assets as input and produces asset weights as output. A deep Capsule Network is employed to simulate the investment strategy, which reallocates the different assets to maximise the expected return
on investment based on deep reinforcement learning. To provide more learning stability in
an online training process, a Markov Differential Sharpe Ratio reward function has been
proposed as the reinforcement learning objective function. Additionally, a Multi-Memory
Weight Reservoir has been introduced to facilitate the learning process and the optimisation of computed asset weights, helping to sequentially re-balance the portfolio throughout
a specified trading period. Incorporating the insights gained from volatility forecasting into this strategy reflects the interconnected nature of the financial markets. Comparative experiments with other models demonstrated that our proposed technique achieves
superior results based on risk-adjusted reward performance measures.
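For concreteness, the sketch below implements a differential Sharpe ratio reward in the style of Moody and Saffell, the quantity that a Markov Differential Sharpe Ratio objective builds on; the decay rate is an illustrative assumption.

```python
# Incremental, per-step Sharpe-ratio-based reward for online RL training
# (a classical differential Sharpe ratio; illustrative sketch only).
class DifferentialSharpeReward:
    def __init__(self, eta=0.01):
        self.eta = eta   # EMA decay for the return moments (assumed)
        self.A = 0.0     # running first moment of portfolio returns
        self.B = 0.0     # running second moment of portfolio returns

    def step(self, portfolio_return):
        dA = portfolio_return - self.A
        dB = portfolio_return ** 2 - self.B
        denom = (self.B - self.A ** 2) ** 1.5
        reward = 0.0 if denom <= 1e-12 else (
            (self.B * dA - 0.5 * self.A * dB) / denom)
        # Update the moving moments only after computing the reward.
        self.A += self.eta * dA
        self.B += self.eta * dB
        return reward

# Usage: after each re-balancing step, reward = dsr.step(r_t) feeds the
# reinforcement learning objective.
```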
In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through improving the understanding of market volatility, to optimising investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
In the name of status: Adolescent harmful social behavior as strategic self-regulation
Adolescent harmful social behavior is behavior that benefits the person who exhibits it but could harm (the interests of) another. The traditional perspective on adolescent harmful social behavior is that it is what happens when something goes wrong in the developmental process, classifying such behaviors as a self-regulation failure. Yet, theories drawing from evolutionary theory underscore the adaptiveness of harmful social behavior and argue that such behavior is enacted as a means to gain important resources for survival and reproduction, such as a position of power. This dissertation aims to examine whether adolescent harmful social behavior can indeed be strategic self-regulation, and formulates two questions: Can adolescent harmful social behavior be seen as strategic attempts to obtain social status? And how can we incorporate this status-pursuit perspective more into current interventions that aim to reduce harmful social behavior? To answer these questions, I conducted a meta-review, a meta-analysis, two experimental studies, and an individual participant data meta-analysis (IPDMA). The meta-review findings of this dissertation underscore that when engaging in particular behavior leads to the acquisition of important peer-status-related goals, harmful social behavior may also develop from adequate self-regulation. Empirical findings indicate that the prospect of status affordances can motivate adolescents to engage in harmful social behavior and that descriptive and injunctive peer norms can convey such status prospects effectively. IPDMA findings illustrate that we can elicit more adolescent cooperation and collectivism than current interventions promote. In this dissertation, I argue we can do this in two ways: one, teach adolescents how they can achieve status by behaving prosocially; and two, change peer norms that reward harmful social behavior with popularity.
UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.
UMSL Bulletin 2022-2023
The 2022-2023 Bulletin and Course Catalog for the University of Missouri St. Louis.
Deep-Learning-based Fast and Accurate 3D CT Deformable Image Registration in Lung Cancer
Purpose: In some proton therapy facilities, patient alignment relies on two
2D orthogonal kV images, taken at fixed, oblique angles, as no 3D on-the-bed
imaging is available. The visibility of the tumor in kV images is limited since
the patient's 3D anatomy is projected onto a 2D plane, especially when the
tumor is behind high-density structures such as bones. This can lead to large
patient setup errors. A solution is to reconstruct the 3D CT image from the kV
images obtained at the treatment isocenter in the treatment position.
Methods: An asymmetric autoencoder-like network built with vision-transformer
blocks was developed. The data were collected from one head-and-neck patient: 2
orthogonal kV images (1024x1024 pixels), 1 3D CT with padding (512x512x512 voxels)
acquired from the in-room CT-on-rails before the kV images were taken, and 2
digitally reconstructed radiograph (DRR) images (512x512 pixels) based on the CT. We
resampled the kV images every 8 voxels and the DRR and CT images every 4 voxels, thus
forming a dataset of 262,144 samples, in which the images have a dimension of
128 for each direction. In training, both kV and DRR images were utilized, and
the encoder was encouraged to learn the joint feature map from both kV and
DRR images. In testing, only independent kV images were used. The full-size
synthetic CT (sCT) was assembled by concatenating the sCTs generated by the
model according to their spatial information. The image quality of the sCT
was evaluated using the mean absolute error (MAE) and the
per-voxel-absolute-CT-number-difference volume histogram (CDVH).
Results: The model generated a full sCT in 2.1 s with an MAE of <40 HU. The CDVH
showed that <5% of the voxels had a per-voxel-absolute-CT-number-difference
larger than 185 HU.
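The two reported metrics can be sketched as follows, assuming the reference and synthetic CTs are NumPy arrays of HU values; the threshold grid is an illustrative assumption.

```python
# Sketch of the evaluation metrics named above: mean absolute error (MAE) and
# the per-voxel-absolute-CT-number-difference volume histogram (CDVH).
import numpy as np

def mae_hu(ct_ref, ct_syn):
    """Mean absolute CT-number error (HU) between reference and synthetic CT."""
    return float(np.mean(np.abs(ct_ref.astype(np.float64) - ct_syn)))

def cdvh(ct_ref, ct_syn, thresholds=np.arange(0, 400, 5)):
    """Fraction of voxels whose absolute HU difference exceeds each threshold."""
    diff = np.abs(ct_ref.astype(np.float64) - ct_syn)
    return [(t, float(np.mean(diff > t))) for t in thresholds]

# E.g. the reported result corresponds to cdvh(...) returning a fraction
# below 0.05 at the 185 HU threshold.
```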
Conclusion: A patient-specific vision-transformer-based network was developed
and shown to reconstruct 3D CT images from kV images accurately and efficiently.
BaseFold: Efficient Field-Agnostic Polynomial Commitment Schemes from Foldable Codes
Interactive Oracle Proofs of Proximity (IOPPs) are a powerful tool for constructing succinct non-interactive arguments of knowledge (SNARKs) in the random oracle model, which are fast and plausibly post-quantum secure. The Fast Reed-Solomon IOPP (FRI) is the most widely used in practice, while tensor-code IOPPs (such as Brakedown) achieve significantly faster prover times at the cost of much larger proofs. IOPPs are used to construct polynomial commitment schemes (PCS), which are not only an important building block for SNARKs but also have a wide range of independent applications.
This work introduces Basefold, a generalization of the FRI IOPP to a broad class of linear codes beyond Reed-Solomon, which we call foldable codes. We construct a new family of foldable linear codes, which are a special type of randomly punctured Reed-Muller code, and prove tight bounds on their minimum distance. Finally, we introduce a new construction of a multilinear PCS from any foldable linear code, which is based on interleaving Basefold with the classical sumcheck protocol for multilinear polynomial evaluation. As a special case, this gives a new multilinear PCS from FRI.
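To illustrate the folding that FRI-style IOPPs (including Basefold) iterate, here is a toy coefficient-level sketch: split f(X) = f_e(X^2) + X·f_o(X^2) and combine the halves with a verifier challenge. The modulus and challenge are illustrative assumptions, and real protocols fold committed codewords with random challenges rather than bare coefficient lists.

```python
# Toy folding step over a stand-in prime field (illustrative only).
P = 2**61 - 1  # a Mersenne prime modulus, chosen for the sketch

def fold(coeffs, alpha):
    """Fold low-to-high coefficient list: f'(X) = f_e(X) + alpha * f_o(X)."""
    even, odd = coeffs[0::2], coeffs[1::2]
    return [(e + alpha * o) % P for e, o in zip(even, odd)]

# Each fold halves the degree; iterating reduces f to a constant, and the
# prover commits to (an encoding of) each intermediate polynomial.
f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree-7 toy polynomial
while len(f) > 1:
    f = fold(f, alpha=7)       # in a real protocol alpha is a random challenge
```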
In addition to these theoretical contributions, the Basefold PCS instantiated with our new foldable linear codes offers a more reasonable tradeoff between prover time, proof size, and verifier time than prior constructions. For instance, for polynomials over a -bit field with variables, the Basefold prover is faster than both Brakedown and FRI-PCS ( times faster than Brakedown and times faster than FRI-PCS), and its proof is times smaller than Brakedown's. On the other hand, for polynomials with variables, Basefold's prover is times faster than FRI-PCS, its proof is times smaller than Brakedown's and its verifier is times faster. Using Basefold to compile the Hyperplonk PIOP [CBBZ23] results in an extremely fast implementation of Hyperplonk, which in addition to having competitive performance on general circuits, is particularly fast for circuits with high-degree custom gates (e.g., signature verification and table lookups). Hyperplonk with Basefold is approximately as fast as Hyperplonk with Brakedown, but with a proof size that is more than times smaller. Finally, Basefold maintains performance across a wider variety of field choices than FRI, which requires FFT-friendly fields. Thus, Basefold can have an extremely fast prover compared to SNARKs from FRI for special applications. Benchmarking a circom ECDSA verification circuit with curve secp256k1, Hyperplonk with Basefold has a prover time that is more than faster than with FRI, and its proof size is times smaller than Hyperplonk with Brakedown.
Policy options for food system transformation in Africa and the role of science, technology and innovation
As recognized by the Science, Technology and Innovation Strategy for Africa 2024 (STISA-2024), science, technology and innovation (STI) offer many opportunities for addressing the main constraints on food system transformation in Africa, while important lessons can be learned from successful interventions, including policy and institutional innovations, in those African countries that have already made significant progress towards food system transformation. This chapter identifies opportunities for African countries and the region to take proactive steps to harness the potential of the food and agriculture sector so as to ensure future food and nutrition security by applying STI solutions and by drawing on transformational policy and institutional innovations across the continent. Potential game-changing solutions and innovations for food system transformation serving people and ecology apply to (a) raising production efficiency and restoring and sustainably managing degraded resources; (b) finding innovation in the storage, processing and packaging of foods; (c) improving human nutrition and health; (d) addressing equity and vulnerability at the community and ecosystem levels; and (e) establishing preparedness and accountability systems. Effectiveness in these areas will require institutional coordination; clear, food-safety- and health-conscious regulatory environments; greater and timely access to information; and transparent monitoring and accountability systems.
Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on the application of the combination of synthetic aperture radar and deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. Synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.
A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery
Semantic segmentation (classification) of Earth Observation imagery is a
crucial task in remote sensing. This paper presents a comprehensive review of
technical factors to consider when designing neural networks for this purpose.
The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural
Networks (RNNs), Generative Adversarial Networks (GANs), and transformer
models, discussing prominent design patterns for these ANN families and their
implications for semantic segmentation. Common pre-processing techniques for
ensuring optimal data preparation are also covered. These include methods for
image normalization and chipping, as well as strategies for addressing data
imbalance in training samples, and techniques for overcoming limited data,
including augmentation techniques, transfer learning, and domain adaptation. By
encompassing both the technical aspects of neural network design and the
data-related considerations, this review provides researchers and practitioners
with a comprehensive and up-to-date understanding of the factors involved in
designing effective neural networks for semantic segmentation of Earth
Observation imagery.
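As a concrete example of two pre-processing steps such reviews cover, the sketch below normalizes each band of a scene and chips it into overlapping tiles; the chip size and stride are illustrative assumptions.

```python
# Sketch of per-band normalization and chipping for Earth Observation scenes.
import numpy as np

def normalize_bands(scene):
    """Scale each spectral band of an (H, W, C) scene to zero mean, unit std."""
    mean = scene.mean(axis=(0, 1), keepdims=True)
    std = scene.std(axis=(0, 1), keepdims=True)
    return (scene - mean) / np.maximum(std, 1e-8)

def chip(scene, size=256, stride=128):
    """Yield overlapping size x size chips from an (H, W, C) scene."""
    h, w, _ = scene.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield scene[y:y + size, x:x + size]

# Usage: chips = list(chip(normalize_bands(scene))) before training.
```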