
    Semi-Federated Learning of an Embedding Space Across Multiple Machine Clusters

    Provided are systems and methods for privacy-preserving learning of a shared embedding space for data split across multiple separate clusters of computing machines. In one example, the multiple separate clusters of computing machines can correspond to multiple separate data silos.
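
    The abstract above does not spell out a training protocol, but one common way to realize privacy-preserving cross-silo learning of a shared embedding space is federated averaging: each silo trains a local copy of the model on data that never leaves the silo, and only the parameters are aggregated. The sketch below is a minimal illustration under that assumption; EmbeddingNet, federated_round and the silo data loaders are hypothetical names, not the method described in the patent.

```python
# Minimal sketch of federated averaging for a shared embedding model.
# Assumption: this illustrates the general technique, not the patented
# method; all names here are hypothetical.
import copy
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Maps raw features into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, embed_dim))

    def forward(self, x):
        return self.net(x)

def federated_round(global_model, silo_loaders, epochs=1, lr=1e-3):
    """One round: every silo trains locally, then parameters are averaged."""
    local_states = []
    for loader in silo_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        for _ in range(epochs):
            for x, target in loader:  # target: silo-local supervision signal
                opt.zero_grad()
                loss = nn.functional.mse_loss(local(x), target)
                loss.backward()
                opt.step()
        local_states.append(local.state_dict())
    # Only parameters cross silo boundaries; raw data stays local.
    avg = {k: torch.stack([s[k] for s in local_states]).mean(dim=0)
           for k in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```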

    Applications of Deep Learning Models in Financial Forecasting

    Deep learning techniques have sparked a revolution in financial markets, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting. The crux of the research problem lies in the application of predictive models within financial domains characterised by high volatility and uncertainty. This thesis investigated the application of advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series into images. Together with transfer learning and generative modelling, these methodologies offered a comprehensive toolkit for extracting meaningful insights from financial data. The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity for representation learning on financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model framework was developed that harnessed the power of pre-trained computer vision models. This innovative approach aimed to predict the VVIX, utilising the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
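
    As a concrete illustration of the CNN-LSTM family the thesis investigates, the sketch below combines 1-D convolutions, which extract local patterns from a price series, with an LSTM, which models longer-range temporal dependencies. The layer sizes, the five input features and the binary directional-change target are assumptions made for illustration, not the architecture used in the thesis.

```python
# Hedged sketch of a CNN-LSTM time-series classifier; hyperparameters and the
# two-class DC-event target are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # 1-D convolutions pick up local patterns in the series...
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # ...and the LSTM captures longer-range temporal structure.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        z = self.conv(x.transpose(1, 2))   # -> (batch, 32, time)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])       # classify from the last time step

model = CNNLSTM(n_features=5)
logits = model(torch.randn(8, 120, 5))     # 8 windows of 120 time steps each
```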

    On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse

    The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people’s lives. Therefore, end users would like to be insured against potential harm. One popular way to achieve this is to provide end users access to algorithmic recourse, which gives end users negatively affected by algorithmic decisions the opportunity to reverse unfavorable decisions, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end-user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model’s latent space, which allows us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes applied to the recourses prescribed to end users are likely to invalidate the suggested recourse after it is noisily implemented in practice. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data deletion requests. Third, the lack of a commonly used code base for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments. In summary, our work contributes to a more reliable interaction between end users and machine-learned models by covering fundamental aspects of the recourse process, and it suggests new solutions towards generating realistic and robust counterfactual explanations for algorithmic recourse.
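
    The latent-space shift described above can be made concrete with a short sketch: starting from the encoding of the original input, gradient descent searches for a nearby latent point whose decoding the classifier accepts, so the counterfactual stays in a region with data support. The encoder, decoder and clf modules below are assumed pretrained placeholders; this illustrates the general technique rather than reproducing the thesis's algorithms.

```python
# Sketch of counterfactual search in a generative model's latent space.
# Assumption: `encoder`, `decoder` and `clf` are pretrained torch modules;
# `x` is a single input of shape (1, d).
import torch
import torch.nn.functional as F

def latent_recourse(x, encoder, decoder, clf, target=1,
                    steps=200, lr=0.05, dist_weight=0.1):
    """Return a counterfactual for x that the classifier labels `target`."""
    z0 = encoder(x).detach()            # latent code of the factual input
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    target_t = torch.tensor([target])
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)               # decoding keeps us on the data manifold
        loss = (F.cross_entropy(clf(x_cf), target_t)       # reach target class
                + dist_weight * (z - z0).pow(2).sum())     # stay close to z0
        loss.backward()
        opt.step()
    return decoder(z).detach()          # the suggested recourse
```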

    Assessing the Role and Regulatory Impact of Digital Assets in Decentralizing Finance

    This project will explore the development of decentralized finance (DeFi) markets since the first introduction of digital assets created through the application of a form of distributed ledger technology (DLT), known as blockchain, in 2008. More specifically, a qualitative inquiry into the role of digital assets in relation to traditional financial market infrastructure will be conducted in order to answer the following questions: (i) can the digital asset and decentralized financial markets examined in this thesis co-exist with traditional assets and financial markets, and, if so, (ii) are traditional or novel forms of regulation (whether financial or otherwise) needed or desirable for the digital asset and decentralized financial markets examined herein? The aim of this project will be to test a preliminary hypothesis that traditional and decentralized finance can be compatible, provided that governments and other centralized authorities approach market innovations as an opportunity to improve existing monetary infrastructure and the delivery of financial services (in both the public and private sectors), rather than as an existential threat. Thus, this thesis seeks to establish that, by collaborating with private markets to identify the public good to which DeFi markets contribute, the public sector can foster an environment that is both promotive and protective of the public interest without unduly stifling innovation and progress.

    Improved stacking ensemble learning based on feature selection to accurately predict warfarin dose

    Background: With the rapid development of artificial intelligence, prediction of warfarin dose via machine learning has received increasing attention. Since dose prediction involves both linear and nonlinear problems, traditional machine learning algorithms cannot solve both effectively at once.
    Objective: Based on the characteristics of clinical data from Chinese warfarin patients, to develop an improved stacking ensemble learning method that achieves higher prediction accuracy.
    Methods: Information on 641 patients from southern China who had reached a steady state on warfarin was collected, including demographic information, medical history, genotype, and co-medication status. The dataset was randomly divided into a training set (90%) and a test set (10%), and predictive capability was evaluated on the test set. Additional factors associated with warfarin dose were discovered by feature selection methods.
    Results: The newly proposed heuristic-stacking ensemble learning performed better than traditional stacking ensemble learning on key metrics: accuracy of ideal dose (73.44% vs. 71.88%), mean absolute error (0.11 vs. 0.13 mg/day), root mean square error (0.18 vs. 0.20 mg/day) and R² (0.87 vs. 0.82).
    Conclusions: The developed heuristic-stacking ensemble learning can satisfactorily predict warfarin dose with high accuracy. A relationship between hypertension, a history of severe preoperative embolism, and warfarin dose was found, which provides a useful reference for warfarin dose administration in the future.
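
    For orientation, a baseline stacking pipeline with an explicit feature selection step can be written in a few lines of scikit-learn. The synthetic data, the choice of base learners and the k=15 selection below are placeholders for illustration, not the paper's heuristic-stacking method.

```python
# Sketch of a traditional stacking regressor preceded by feature selection.
# Data and models are illustrative placeholders, not the paper's setup.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

X, y = make_regression(n_samples=641, n_features=30, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

stack = make_pipeline(
    SelectKBest(f_regression, k=15),           # keep the 15 strongest features
    StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)),
                    ("svr", SVR())],           # base learners: nonlinear models
        final_estimator=Ridge(),               # linear meta-learner on top
    ),
)
stack.fit(X_tr, y_tr)
print("R^2 on the held-out 10%:", stack.score(X_te, y_te))
```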

    Assessing the feasibility of applying machine learning to diagnosing non-effusive feline infectious peritonitis

    Feline infectious peritonitis (FIP) is a severe feline coronavirus-associated syndrome in cats, which is invariably fatal without anti-viral treatment. In the majority of non-effusive FIP cases encountered in practice, confirmatory diagnostic testing is not undertaken and reliance is placed on the interpretation of valuable, but essentially non-specific, clinical signs and laboratory markers. We hypothesised that it may be feasible to develop a machine learning (ML) approach that could be applied to the analysis of clinical data to aid in the diagnosis of the disease. A dataset encompassing 1939 suspected FIP cases was scored for clinical suspicion of FIP on the basis of history, signalment, clinical signs and laboratory results, using published guidelines, comprising 683 FIP (35.2%) and 1256 non-FIP (64.8%) cases. This dataset was used to train, validate and evaluate two diagnostic machine learning ensemble models. These models, which analysed signalment and laboratory data alone, allowed the accurate discrimination of FIP and non-FIP cases in line with expert opinion. To evaluate whether these models may have value as a diagnostic tool, they were applied to a collection of 80 cases for which the FIP status had been confirmed (FIP: n = 58 (72.5%), non-FIP: n = 22 (27.5%)). Both ensemble models detected FIP with an accuracy of 97.5%, an area under the curve (AUC) of 0.969, a sensitivity of 95.45% and a specificity of 98.28%. This work demonstrates that, in principle, ML can be usefully applied to the diagnosis of non-effusive FIP. Further work is required before ML may be deployed in the laboratory as a diagnostic tool, such as training models on datasets of confirmed cases and accounting for inter-laboratory variation. Nevertheless, these results illustrate the potential benefit of applying ML to standardising and accelerating the interpretation of clinical pathology data, thereby improving the diagnostic utility of existing laboratory tests.
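
    The reported metrics follow directly from a fitted model's confusion matrix and predicted probabilities, as the sketch below illustrates on synthetic data with the study's class balance. The gradient-boosting classifier is an assumption for illustration; it is not one of the study's ensemble models.

```python
# Sketch of computing accuracy, sensitivity, specificity and AUC for a binary
# classifier; the data is synthetic and the model choice is an assumption.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Roughly 64.8% negative (non-FIP) vs. 35.2% positive, as in the dataset.
X, y = make_classification(n_samples=1939, weights=[0.648], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:   ", accuracy_score(y_te, pred))
print("sensitivity:", tp / (tp + fn))   # recall on positive (FIP) cases
print("specificity:", tn / (tn + fp))   # recall on negative (non-FIP) cases
print("AUC:        ", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```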

    Reading Greek and Hellenistic-Roman Spolia: Objects, Appropriation and Cultural Change

    Plundering and taking home precious objects from a defeated enemy was a widespread activity in the Greek and Hellenistic-Roman world. In this volume, literary critics, historians and archaeologists join forces in investigating this phenomenon in terms of appropriation and cultural change. In-depth interpretations of famous ancient spoliations, like that of the Greeks after Plataea or the Romans after the capture of Jerusalem, reveal a fascinating paradox: while the material record shows an eager incorporation of new objects, the texts display abhorrence of the negative effects they were thought to bring along. As this volume demonstrates, both reactions testify to the crucial innovative impact objects from abroad may have.

    How Do We Learn What We Cannot Say?

    The contributions of this thesis are two-fold. First, this thesis presents UDTube, easy-to-use software developed to perform morphological analysis in a multi-task fashion. This work shows the strong performance of UDTube versus the current state of the art, UDPipe, across eight languages, primarily in the annotation of morphological features. The second contribution of this thesis is an exploration of the study of defectivity. UDTube is used to annotate a large amount of data in Greek and Russian, which is ultimately used to investigate the plausibility of Indirect Negative Evidence (INE), a popular approach to the acquisition of morphological defectivity. The reported findings raise a challenge to INE.
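
    UDTube's multi-task design can be pictured as a single shared pretrained encoder feeding separate classification heads, one per annotation layer (for example, part-of-speech tags and morphological feature bundles). The sketch below illustrates that general architecture; the encoder checkpoint and the tag-inventory sizes are placeholders, and this is not UDTube's actual code.

```python
# Sketch of a multi-task morphological tagger: one shared encoder, one head
# per task. Checkpoint name and label counts are illustrative assumptions.
import torch.nn as nn
from transformers import AutoModel

class MultiTaskTagger(nn.Module):
    def __init__(self, checkpoint="bert-base-multilingual-cased",
                 n_upos=18, n_feats=200):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)
        dim = self.encoder.config.hidden_size
        self.upos_head = nn.Linear(dim, n_upos)    # part-of-speech tags
        self.feats_head = nn.Linear(dim, n_feats)  # morphological features

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        # Both heads share the encoder and are trained jointly, each against
        # its own gold annotation layer.
        return self.upos_head(h), self.feats_head(h)
```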