Applications of Deep Learning Models in Financial Forecasting
In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored applications of deep learning models, developing insights and methodologies aimed at advancing financial forecasting.
The crux of the research problem lies in the applications of predictive models within financial domains, characterised by high volatility and uncertainty. This thesis investigated the application of advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with
approaches such as encoding financial time series into images. Together, transfer learning, convolutional and recurrent architectures, generative modelling, and image encoding of time series data offered a comprehensive toolkit for extracting meaningful insights from financial data.
The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity to learn compact representations of financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to
financial prediction capabilities, a deep multi-model architecture was developed that harnessed the power of pre-trained computer vision models. This approach aimed to predict the VVIX, exploiting the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
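As an illustration of the image-encoding idea mentioned above, the Gramian Angular Field is one widely used scheme for turning a time series into an image that a computer vision model can consume. The sketch below is a minimal pure-Python version; the specific encoding choice and the price values are illustrative assumptions, not details taken from the thesis.

```python
import math

def gramian_angular_field(series):
    # Rescale the series to [-1, 1], map each value to an angle via arccos,
    # then build the Gramian Angular Summation Field: G[i][j] = cos(phi_i + phi_j).
    lo, hi = min(series), max(series)
    scaled = [2.0 * (x - lo) / (hi - lo) - 1.0 for x in series]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi]

prices = [100.0, 101.5, 99.8, 102.3, 103.1, 101.0]  # illustrative price series
gaf = gramian_angular_field(prices)                  # 6x6 "image" of the series
```

The resulting matrix preserves temporal correlations in its rows and columns, which is what lets image classifiers pick up patterns from one-dimensional data.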
On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse
The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people's lives. End users would therefore like to be protected against potential harm. One popular way to achieve this is to provide end users with algorithmic recourse, which gives those negatively affected by algorithmic decisions the opportunity to reverse unfavorable decisions, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end-user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model's latent space, allowing us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes to the recourses prescribed to end users, for instance when they are noisily implemented in practice, are likely to invalidate the suggested recourse. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data deletion requests. Third, the lack of a commonly used codebase for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments. In summary, our work contributes to a more reliable interaction between end users and machine-learned models by covering fundamental aspects of the recourse process, and suggests new solutions towards generating realistic and robust counterfactual explanations for algorithmic recourse.
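The latent-space idea can be sketched in a few lines: instead of perturbing the input directly, search in the latent space of a generative model, so that every decoded candidate stays in a region with data support. Everything below, the linear "decoder", the classifier weights, and the step size, is a toy assumption for illustration, not the thesis's actual models.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy linear "decoder" from a 2-d latent space to a 3-d input space,
# and a fixed linear classifier. All weights are illustrative.
W = [[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]]   # decoder: x = W @ z
w, b = [0.8, -0.2, 0.4], -1.0               # classifier logit: w @ x + b

def decode(z):
    return [sum(W[i][k] * z[k] for k in range(2)) for i in range(3)]

def predict(z):
    x = decode(z)
    return sigmoid(sum(w[i] * x[i] for i in range(3)) + b)

# With a linear decoder and classifier, the gradient of the logit
# with respect to z is the constant vector W^T w.
grad = [sum(W[i][k] * w[i] for i in range(3)) for k in range(2)]

z = [0.0, 0.0]        # latent code of the (rejected) instance
steps = 0
while predict(z) < 0.5 and steps < 1000:
    z = [z[k] + 0.05 * grad[k] for k in range(2)]
    steps += 1

counterfactual = decode(z)  # lies on the decoder's range by construction
```

The same loop works with a nonlinear decoder and autodiff; the point is only that the search variable is z, so the decoded counterfactual cannot leave the generative model's manifold.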
Bayesian inference for challenging scientific models
Advances in technology and computation have led to ever more complicated
scientific models of phenomena across a wide variety of fields. Many of these
models present challenges for Bayesian inference, as a result of computationally
intensive likelihoods, high-dimensional parameter spaces or large dataset sizes.
In this thesis we show how we can apply developments in probabilistic machine
learning and statistics to do inference with examples of these types of models.
As a demonstration of an applied inference problem involving a non-trivial
likelihood computation, we show how a combination of optimisation and
MCMC methods along with careful consideration of priors can be used to infer
the parameters of an ODE model of the cardiac action potential.
We then consider the problem of pileup, a phenomenon that occurs in
astronomy when using CCD detectors to observe bright sources. It complicates
the fitting of even simple spectral models by introducing an observation model
with a large number of continuous and discrete latent variables that scales with
the size of the dataset. We develop an MCMC-based method that can work in
the presence of pileup by explicitly marginalising out discrete variables and
using adaptive HMC on the remaining continuous variables. We show with
synthetic experiments that it allows us to fit spectral models in the presence
of pileup without biasing the results. We also compare it to neural Simulation-
Based Inference approaches, and find that they perform comparably to the
MCMC-based approach whilst being able to scale to larger datasets.
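The marginalisation trick described above can be illustrated with a toy mixture: summing the discrete latent out of the likelihood in log space leaves a function of continuous parameters only, which is exactly what HMC requires. The two-component pile-up model and all numbers below are illustrative assumptions, not the thesis's observation model.

```python
import math

def log_gauss(x, mu, sigma):
    # Log-density of a univariate Gaussian.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def logsumexp(vals):
    # Numerically stable log(sum(exp(v))) over a list of log-values.
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def log_likelihood(x, mu):
    # Marginalise the discrete photon count k out of the likelihood:
    # log p(x | mu) = logsumexp_k [ log p(k) + log p(x | k, mu) ].
    # Toy model: k=1 photon with probability 0.7, k=2 piled photons
    # (doubling the expected signal) with probability 0.3.
    terms = [
        math.log(0.7) + log_gauss(x, mu, 1.0),
        math.log(0.3) + log_gauss(x, 2 * mu, 1.0),
    ]
    return logsumexp(terms)
```

Because `log_likelihood` is smooth in `mu`, a gradient-based sampler such as HMC can be run on it directly, with the discrete variable never appearing in the state.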
As an example of a problem where we wish to do inference with extremely
large datasets, we consider the Extreme Deconvolution method. The method
fits a probability density to a dataset where each observation has Gaussian
noise added with a known sample-specific covariance, originally intended
for use with astronomical datasets. The existing fitting method is batch EM,
which would not normally be applied to large datasets such as the Gaia catalog
containing noisy observations of a billion stars. In this thesis we propose two
minibatch variants of extreme deconvolution, based on an online variation of
the EM algorithm, and direct gradient-based optimisation of the log-likelihood,
both of which can run on GPUs. We demonstrate that these methods provide
faster fitting, whilst being able to scale to much larger models for use with
larger datasets.
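A minimal sketch of the gradient-based minibatch variant, reduced to one dimension and optimising only the mean for brevity: each observation carries a known noise variance s_i, so its marginal likelihood is Gaussian with variance (model variance + s_i). The data, learning rate, and simplifications below are illustrative; the thesis fits full Gaussian mixtures on GPUs.

```python
import random

random.seed(0)

# Simulate noisy observations: true value plus Gaussian noise with a
# known, sample-specific variance s_i (all values are illustrative).
true_mu, true_var = 2.0, 0.5
data = []
for _ in range(5000):
    s_i = random.uniform(0.1, 1.0)                    # known noise variance
    x_i = random.gauss(true_mu, (true_var + s_i) ** 0.5)
    data.append((x_i, s_i))

# Minibatch gradient descent on the negative log-likelihood of
# x_i ~ N(mu, var + s_i), optimising mu only.
mu, var, lr = 0.0, true_var, 0.05
for epoch in range(20):
    random.shuffle(data)
    for start in range(0, len(data), 100):            # minibatches of 100
        batch = data[start:start + 100]
        grad = sum((mu - x) / (var + s) for x, s in batch) / len(batch)
        mu -= lr * grad
```

The same minibatch loop extends to mixture weights and covariances (with autodiff and a GPU backend), which is what makes billion-row catalogues tractable.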
We then extend the extreme deconvolution approach to work with non-
Gaussian noise, and to use more flexible density estimators such as normalizing
flows. Since both adjustments lead to an intractable likelihood, we resort to
amortized variational inference in order to fit them. We show that for some
datasets flows can outperform Gaussian mixtures for extreme deconvolution,
and that fitting with non-Gaussian noise is now possible.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Taking Politics at Face Value: How Features Expose Ideology
Previous studies using computer vision neural networks to analyze facial images have uncovered patterns in the feature-extracted output that are indicative of individual dispositions. For example, Wang and Kosinski (2018) were able to predict the sexual orientation of a target from his or her facial image with surprising accuracy, while Kosinski (2021) was able to do the same with regard to political orientation. These studies suggest that computer vision neural networks can be used to classify people into categories using only their facial images. However, there is some ambiguity regarding the degree to which the features extracted from facial images incorporate facial morphology when used to make predictions. Critics have suggested that a subject’s transient facial features, such as using makeup, having a tan, donning a beard, or wearing glasses, might be subtly indicative of group belonging (Agüera y Arcas et al., 2018). Further, previous research in this domain has found that accurate image categorization can occur without utilizing facial morphology at all, instead relying upon image brightness, color dominance, or the background of the image to make successful classifications (Leuner, 2019; Wang, 2022).
This dissertation seeks to bring some clarity to this domain. Using an application programming interface (API) for the popular social networking site Twitter, a sample of nearly a quarter million images of ideological organization followers was created. These images depicted followers of organizations supportive of, or oppositional to, the polarizing political issues of gun control and immigration. Through a series of strong comparisons, this research tests for the influence of facial morphology in image categorization. Facial images were converted into point and mesh coordinate representations of the subjects’ faces, thus eliminating the influence of transient facial features. Images could be classified using facial morphology alone at rates well above chance (64% accuracy across all models utilizing only facial points, 62% using facial mesh). These results provide the strongest evidence to date that images can be categorized into social categories by their facial morphology alone.
Duet: efficient and scalable hybriD neUral rElation undersTanding
Learned cardinality estimation methods have achieved high precision compared
to traditional methods. Among learned methods, query-driven approaches have
long faced the data and workload drift problem. Although both data-driven and
hybrid methods have been proposed to avoid this problem, even the
state-of-the-art among them suffers from high training and estimation costs,
limited scalability, instability, and long-tailed distribution problems on
high-cardinality and high-dimensional tables, which seriously affects the
practical application of learned cardinality estimators. In this paper, we
prove that most of these problems are directly caused by the widely used
progressive sampling. We solve this problem by introducing predicate
information into the autoregressive model and propose Duet, a stable,
efficient, and scalable hybrid method that estimates cardinality directly
without sampling or any non-differentiable process. Duet not only reduces
inference complexity from O(n) to O(1) compared to Naru and UAE, but also
achieves higher accuracy on high-cardinality and high-dimensional tables.
Experimental results show that Duet achieves all the design goals above, is
much more practical, and even has a lower inference cost on CPU than most
learned methods have on GPU.
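The sampling-free autoregressive idea can be illustrated on a toy table: factorise the joint distribution as P(a) · P(b | a), then read an equality-predicate cardinality directly off the conditionals with O(1) lookups and no progressive sampling. This sketch uses exact empirical conditionals on illustrative data and is a simplification for intuition, not Duet itself.

```python
from collections import Counter, defaultdict

# Toy two-column table with correlated values (illustrative data).
rows = [("x", 1), ("x", 1), ("x", 2), ("y", 2),
        ("y", 2), ("y", 2), ("z", 3), ("z", 1)]
n = len(rows)

# Autoregressive factorisation: P(a, b) = P(a) * P(b | a).
count_a = Counter(a for a, _ in rows)
count_b_given_a = defaultdict(Counter)
for a, b in rows:
    count_b_given_a[a][b] += 1

def estimate(a, b):
    # Estimated cardinality of (A = a AND B = b): n * P(a) * P(b | a),
    # computed directly from the conditionals, with no sampling step.
    if count_a[a] == 0:
        return 0.0
    return n * (count_a[a] / n) * (count_b_given_a[a][b] / count_a[a])
```

With exact conditionals the estimate matches the true count; a learned estimator replaces the counters with a neural autoregressive model that generalises across predicates.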
Automated identification and behaviour classification for modelling social dynamics in group-housed mice
Mice are often used in biology as exploratory models of human conditions, due to their similar genetics and physiology. Unfortunately, research on behaviour has traditionally been limited to studying individuals in isolated environments and over short periods of time. This can miss critical time-effects, and, since mice are social creatures, bias results.
This work addresses this gap in research by developing tools to analyse the individual behaviour of group-housed mice in the home-cage over several days and with minimal disruption. Using data provided by the Mary Lyon Centre at MRC Harwell we designed an end-to-end system that (a) tracks and identifies mice in a cage, (b) infers their behaviour, and subsequently (c) models the group dynamics as functions of individual activities. In support of the above, we also curated and made available a large dataset of mouse localisation and behaviour classifications (IMADGE), as well as two smaller annotated datasets for training/evaluating the identification (TIDe) and behaviour inference (ABODe) systems. This research constitutes the first of its kind in terms of the scale and challenges addressed. The data source (side-view single-channel video with clutter and no identification markers for mice) presents challenging conditions for analysis, but has the potential to give richer information while using industry standard housing.
A Tracking and Identification module was developed to automatically detect, track and identify the (visually similar) mice in the cluttered home-cage using only single-channel IR video and coarse position from RFID readings. Existing detectors and trackers were combined with a novel Integer Linear Programming formulation to assign anonymous tracks to mouse identities. This utilised a probabilistic weight model of affinity between detections and RFID pickups.
The next task necessitated the implementation of the Activity Labelling module that classifies the behaviour of each mouse, handling occlusion to avoid giving unreliable classifications when the mice cannot be observed. Two key aspects of this were (a) careful feature-selection, and (b) judicious balancing of the errors of the system in line with the repercussions for our setup.
Given these sequences of individual behaviours, we analysed the interaction dynamics between mice in the same cage by collapsing the group behaviour into a sequence of interpretable latent regimes using both static and temporal (Markov) models. Using a permutation matrix, we were able to automatically assign mice to roles in the hidden Markov model (HMM), fit a global model to a group of cages, and analyse abnormalities in data from a different demographic.
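The track-to-identity assignment step described above can be shown in miniature: given an affinity matrix between anonymous tracks and RFID-derived identities, choose the one-to-one assignment with the highest total affinity. The thesis solves this at scale with an Integer Linear Programming formulation; the brute-force enumeration and the affinity values below are illustrative only.

```python
from itertools import permutations

# Affinity between anonymous tracks (rows) and identities (columns);
# the probabilistic weights here are made up for illustration.
affinity = [
    [0.9, 0.05, 0.05],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
]

def best_assignment(aff):
    # Enumerate every one-to-one assignment and keep the one with the
    # highest total affinity. Only viable for tiny instances; an ILP
    # (or the Hungarian algorithm) handles realistic sizes.
    n = len(aff)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(aff[track][perm[track]] for track in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm

assignment = best_assignment(affinity)  # track index -> identity index
```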
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and attendant economic hopes and fears, utopias and dystopias that are associated with the current and future development of artificial intelligence.
BayesDLL: Bayesian Deep Learning Library
We release a new Bayesian neural network library for PyTorch for large-scale
deep networks. Our library implements mainstream approximate Bayesian inference
algorithms: variational inference, MC-dropout, stochastic-gradient MCMC, and
Laplace approximation. The main differences from other existing Bayesian neural
network libraries are as follows: 1) Our library can deal with very large-scale
deep networks including Vision Transformers (ViTs). 2) We need virtually zero
code modifications for users (e.g., the backbone network definition code does
not need to be modified at all). 3) Our library also allows the pre-trained
model weights to serve as a prior mean, which is very useful for performing
Bayesian inference with the large-scale foundation models like ViTs that are
hard to optimise from scratch with the downstream data alone. Our code is
publicly available at: \url{https://github.com/SamsungLabs/BayesDLL}\footnote{A
mirror repository is also available at:
\url{https://github.com/minyoungkim21/BayesDLL}.}
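As a flavour of one of the algorithms listed, MC-dropout keeps dropout active at prediction time and uses the spread of repeated stochastic forward passes as a crude predictive uncertainty. The tiny hand-rolled network below is purely illustrative and does not use BayesDLL's actual API.

```python
import random

random.seed(0)

# A fixed one-hidden-layer ReLU network; all weights are illustrative.
W = [[0.5, -0.3], [0.2, 0.8], [-0.6, 0.1]]   # 3 hidden units, 2 inputs
v = [1.0, -0.5, 0.7]                          # output weights
p_drop = 0.2

def forward(x, mc_dropout=True):
    h = [max(0.0, W[i][0] * x[0] + W[i][1] * x[1]) for i in range(3)]  # ReLU
    if mc_dropout:
        # Inverted dropout, left ON at prediction time for MC sampling.
        h = [0.0 if random.random() < p_drop else hi / (1 - p_drop) for hi in h]
    return sum(v[i] * h[i] for i in range(3))

# Average many stochastic passes; their variance is the uncertainty signal.
samples = [forward([1.0, 2.0]) for _ in range(500)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Libraries like the one described automate exactly this pattern (and the other listed approximations) for full-scale networks without requiring the backbone code to change.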