
    Advances in machine learning algorithms for financial risk management

    In this thesis, three novel machine learning techniques are introduced to address distinct yet interrelated challenges in financial risk management. Together they offer a comprehensive strategy, beginning with the precise classification of credit risks, advancing through the nuanced forecasting of financial asset volatility, and ending with the strategic optimisation of financial asset portfolios. Firstly, a Hybrid Dual-Resampling and Cost-Sensitive technique is proposed to combat the prevalent issue of class imbalance in financial datasets, particularly in credit risk assessment. The key step is the creation of heuristically balanced datasets: a resampling technique based on Gaussian mixture modelling generates a synthetic minority class from the minority class data, while k-means clustering is concurrently applied to the majority class. Feature selection is then performed using the Extra Tree Ensemble technique, and a cost-sensitive logistic regression model is applied to the heuristically balanced datasets to predict the probability of default. The results underscore the effectiveness of the proposed technique, which outperforms other imbalance-preprocessing approaches. This advance in credit risk classification lays a solid foundation for understanding individual financial behaviours, a crucial first step in the broader context of financial risk management. Building on this foundation, the thesis then explores the forecasting of financial asset volatility, a critical aspect of understanding market dynamics. A novel model is proposed that combines a Triple Discriminator Generative Adversarial Network with a continuous wavelet transform. The model can decompose volatility time series into signal-like and noise-like frequency components, allowing non-stationary volatility data to be detected and monitored separately. The network comprises a wavelet transform component (continuous and inverse wavelet transforms), an auto-encoder component (encoder and decoder networks), and a Generative Adversarial Network consisting of triple Discriminator and Generator networks. During training, the model employs an ensemble of losses: an unsupervised loss derived from the Generative Adversarial Network component, a supervised loss, and a reconstruction loss. Data from nine financial assets demonstrate the effectiveness of the proposed model. This approach not only enhances our understanding of market fluctuations but also bridges the gap between individual credit risk assessment and macro-level market analysis. Finally, the thesis ends with a novel technique for portfolio optimisation: a model-free reinforcement learning strategy that takes historical Low, High, and Close prices of assets as input and outputs asset weights. A deep Capsule Network simulates the investment strategy, reallocating the different assets to maximise the expected return on investment through deep reinforcement learning. To provide more learning stability in an online training process, a Markov Differential Sharpe Ratio reward function is proposed as the reinforcement learning objective. Additionally, a Multi-Memory Weight Reservoir is introduced to facilitate the learning and optimisation of the computed asset weights, helping to sequentially re-balance the portfolio throughout a specified trading period. Feeding the insights gained from volatility forecasting into this strategy reflects the interconnected nature of financial markets. Comparative experiments with other models demonstrate that the proposed technique achieves superior results on risk-adjusted reward performance measures. In a nutshell, this thesis not only addresses individual challenges in financial risk management but also incorporates them into a comprehensive framework: from enhancing the accuracy of credit risk classification, through improved forecasting and understanding of market volatility, to the optimisation of investment strategies. These methodologies collectively show the potential of machine learning to improve financial risk management.
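
    For illustration, a minimal sketch of the resampling-plus-cost-sensitive pipeline described in the first contribution, assuming scikit-learn. The GMM oversampler, the k-means undersampler, and all parameter values (n_target, component counts, cost weights) are illustrative placeholders rather than the thesis's exact procedure; X_train, y_train and X_test are assumed to be given.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import KMeans
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression

    def heuristically_balance(X, y, n_target, seed=0):
        """Balance a binary dataset (y == 1 is the minority/default class).
        Assumes len(minority) < n_target <= len(majority)."""
        X_min, X_maj = X[y == 1], X[y == 0]
        # Oversample: fit a Gaussian mixture to the minority class and
        # draw synthetic minority samples from it.
        gmm = GaussianMixture(n_components=3, random_state=seed).fit(X_min)
        X_syn, _ = gmm.sample(n_target - len(X_min))
        # Undersample: summarise the majority class by k-means centroids.
        km = KMeans(n_clusters=n_target, n_init=10, random_state=seed).fit(X_maj)
        X_bal = np.vstack([X_min, X_syn, km.cluster_centers_])
        y_bal = np.hstack([np.ones(n_target), np.zeros(n_target)])
        return X_bal, y_bal

    # Extra Trees feature selection, then a cost-sensitive logistic regression
    # (misclassification costs expressed through class_weight).
    X_bal, y_bal = heuristically_balance(X_train, y_train, n_target=2000)
    selector = SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0))
    X_sel = selector.fit_transform(X_bal, y_bal)
    model = LogisticRegression(class_weight={0: 1.0, 1: 5.0}, max_iter=1000).fit(X_sel, y_bal)
    p_default = model.predict_proba(selector.transform(X_test))[:, 1]  # probability of default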

    Cerebrovascular dysfunction in cerebral small vessel disease

    INTRODUCTION: Cerebral small vessel disease (SVD) is the cause of a quarter of all ischaemic strokes and is postulated to have a role in up to half of all dementias. SVD pathophysiology remains unclear, but cerebrovascular dysfunction may be important. If confirmed, this would matter because many licensed medications have mechanisms of action targeting vascular function, potentially enabling new treatments via drug repurposing. Knowledge is limited, however, as most studies assessing cerebrovascular dysfunction are small, single-centre, single-imaging-modality studies, owing to the complexities of measuring cerebrovascular dysfunction in humans. This thesis describes the development and application of imaging techniques measuring several cerebrovascular dysfunctions, to investigate SVD pathophysiology and to trial medications that may improve small blood vessel function in SVD. METHODS: Participants with minor ischaemic strokes were recruited to a series of studies utilising advanced MRI techniques to measure cerebrovascular dysfunction. Specifically, MRI scans measured the ability of different tissues in the brain to change blood flow in response to breathing carbon dioxide (cerebrovascular reactivity; CVR), and the flow and pulsatility through the cerebral arteries, venous sinuses and CSF spaces. A single-centre observational study optimised and established the feasibility of the techniques and tested associations of cerebrovascular dysfunctions with clinical and imaging phenotypes. A randomised pilot clinical trial then tested the ability of two medications (cilostazol and isosorbide mononitrate) to improve CVR and pulsatility over a period of eight weeks. The techniques were then expanded to include imaging of blood-brain barrier permeability and utilised in multi-centre studies investigating cerebrovascular dysfunction in both sporadic and monogenic SVDs. RESULTS: Imaging protocols were feasible, consistently being completed with usable data in over 85% of participants. After correcting for the effects of age, sex and systolic blood pressure, lower CVR was associated with higher white matter hyperintensity volume, Fazekas score and perivascular space counts. Lower CVR was associated with higher pulsatility of blood flow in the superior sagittal sinus and lower CSF flow stroke volume at the foramen magnum. Cilostazol and isosorbide mononitrate increased CVR in white matter. The CVR, intra-cranial flow and pulsatility techniques, alongside blood-brain barrier permeability and microstructural integrity imaging, were successfully employed in a multi-centre observational study. A clinical trial assessing the effects of drugs targeting blood pressure variability is nearing completion. DISCUSSION: Cerebrovascular dysfunction in SVD has been confirmed and may play a more direct role in disease pathogenesis than previously established risk factors. Advanced imaging measures assessing cerebrovascular dysfunction are feasible in multi-centre studies and trials. Identifying drugs that improve cerebrovascular dysfunction using these techniques may be useful in selecting candidates for definitive clinical trials, which require large sample sizes and long follow-up periods to show improvement against outcomes of stroke and dementia incidence and cognitive function.
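
    For orientation only: CVR of the kind measured here is commonly quantified as the regression slope of a tissue's MRI signal against the end-tidal CO2 trace, expressed as percent signal change per mmHg. The sketch below (numpy, hypothetical inputs, with alignment and haemodynamic delay correction omitted) illustrates that generic calculation, not the thesis's actual processing pipeline.

    import numpy as np

    def cvr_percent_per_mmHg(signal, etco2):
        """signal: 1-D MRI time series for a tissue region; etco2: end-tidal
        CO2 trace (mmHg), both resampled to the same time points."""
        slope, _ = np.polyfit(etco2, signal, 1)       # signal change per mmHg
        return 100.0 * slope / np.mean(signal)        # % signal change per mmHg

    # e.g. cvr_wm = cvr_percent_per_mmHg(white_matter_ts, etco2_trace)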

    Breaking Implicit Assumptions of Physical Delay-Feedback Reservoir Computing

    The Reservoir Computing (RC) paradigm is a supervised machine learning scheme that uses the natural computational ability of dynamical systems. Dynamical systems that incorporate time delays showcase particularly intricate dynamics, and this richness, especially the system's transient response to external stimuli, makes them suitable for RC. A subset of RC, Delay-Feedback Reservoir Computing (DFRC), is distinguished by its unique structure: a system consisting of a single nonlinear node and a delay-line, with `virtual' nodes defined along the delay-line by a time-multiplexing procedure applied to the input. These characteristics make DFRC particularly attractive for hardware integration. In this thesis, the aim is to break the implicit assumptions made in the design of physical DFRC based on the Mackey-Glass dynamical system. The first assumption we address is that the performance of DFRC is not affected by attenuation in the physical delay-line because the nodes defined along it are `virtual'. Our experimental results contradict this. To mitigate the impact of losses along the delay-line, we propose a methodology, `Devirtualisation', which taps directly into the delay-line at the position of a `virtual' node rather than at the delay-line's end; it trades off the DFRC system's read-out frequency against the number of output lines. Masking plays a crucial role in DFRC, as it defines the `virtual' nodes along the delay-line. The second assumption is that the mask should consist of randomly generated numbers uniformly distributed in [-u,u]. We experimentally compare a Binary Weight Mask (BWM) with a Random Weight Mask (RWM) under different scenarios, and investigate the impact of the randomness of the BWM signal distribution. The third implicit assumption is that DFRC is designed to solve time series prediction tasks involving a single input and output with no external feedback. To break this assumption, we propose two approaches for mixing multiple input signals into DFRC; to validate them, we propose a novel task for DFRC that inherently necessitates multiple inputs: the control of a forced Van der Pol oscillator system.
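
    A minimal sketch of a Mackey-Glass-type DFRC with a masked, time-multiplexed input and a linear read-out, using a common simplified discrete map for the virtual-node states. Note that it embodies exactly the idealisations the thesis interrogates: it ignores delay-line losses and the nonlinear node's inertia, and all parameter values are illustrative rather than those used in the thesis.

    import numpy as np

    def dfrc_states(u, n_virtual=50, eta=0.5, gamma=0.05, p=1, seed=0):
        rng = np.random.default_rng(seed)
        mask = rng.uniform(-1.0, 1.0, n_virtual)   # Random Weight Mask (RWM)
        x = np.zeros(n_virtual)                    # virtual-node states
        states = np.empty((len(u), n_virtual))
        for k, u_k in enumerate(u):                # one delay period per input sample
            for i in range(n_virtual):
                a = eta * x[i] + gamma * mask[i] * u_k   # delayed feedback + masked input
                x[i] = a / (1.0 + abs(a) ** p)           # Mackey-Glass-style saturation
            states[k] = x
        return states

    def train_readout(states, target, ridge=1e-6):
        # Linear read-out trained by ridge regression on the virtual-node states.
        S = np.hstack([states, np.ones((len(states), 1))])   # bias column
        return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ target)

    # e.g. one-step-ahead prediction: W = train_readout(dfrc_states(u)[:-1], u[1:])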

    Towards a tricorder: clinical, health economic, and ethical investigation of point-of-care artificial intelligence electrocardiogram for heart failure

    Heart failure (HF) is an international public health priority and a focus of the NHS Long Term Plan. There is a particular need in primary care for screening and early detection of heart failure with reduced ejection fraction (HFrEF) – the most common and serious HF subtype, and the only one with an abundant evidence base for effective therapies. Digital health technologies (DHTs) integrating artificial intelligence (AI) could improve diagnosis of HFrEF. Specifically, through a convergence of DHTs and AI, a single-lead electrocardiogram (ECG) can be recorded by a smart stethoscope and interrogated by AI (AI-ECG) to potentially serve as a point-of-care HFrEF test. However, there are concerning evidence gaps for such DHTs applying AI, across intersecting clinical, health economic, and ethical considerations. My thesis therefore investigates hypotheses that AI-ECG is 1.) Reliable, accurate, unbiased, and can be patient self-administered, 2.) Of justifiable health economic impact for primary care deployment, and 3.) Appropriate across ethical domains for deployment as a tool for patient self-administered screening. The theoretical basis for this work is presented in the Introduction (Chapter 1). Chapter 2 describes the first large-scale, multi-centre independent external validation study of AI-ECG, prospectively recruiting 1,050 patients and highlighting impressive performance: area under the curve, sensitivity, and specificity up to 0·91 (95% confidence interval: 0·88–0·95), 91·9% (78·1–98·3), and 80·2% (75·5–84·3) respectively, and absence of bias by age, sex, and ethnicity. Performance was independent of operator, and usability of the tool extended to patients being able to self-examine. Chapter 3 presents a clinical and health economic outcomes analysis using a contemporary digital repository of 2.5 million NHS patient records. A propensity-matched cohort was derived using all patients diagnosed with HF from 2015-2020 (n = 34,208). Novel findings included the unacceptable reality that 70% of index HF diagnoses are made through hospitalisation, whereas index diagnosis through primary care conferred a medium-term survival advantage and long-term cost saving (£2,500 per patient). This underpins a health economic model for the deployment of AI-ECG across primary care. Chapter 4 approaches a normative ethical analysis focusing on equity, agency, data rights, and responsibility for safe, effective, and trustworthy implementation of an unprecedented at-home patient self-administered AI-ECG screening programme. I propose approaches to mitigating potential harms, towards preserving and promoting trust, patient engagement, and public health. Collectively, this thesis marks novel work highlighting AI-ECG as a tool with the potential to address major cardiovascular public health priorities. Scrutiny through complementary clinical, health economic, and ethical considerations can directly serve patients and health systems by blueprinting best practice for the evaluation and implementation of DHTs integrating AI – building the conviction needed to realise the full potential of such technologies.
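
    The cohort derivation in Chapter 3 rests on propensity matching. As a rough illustration of that generic technique only (the covariates, matching ratio, and any caliper used in the thesis may well differ), one can estimate propensity scores with a logistic regression and pair each exposed patient with the nearest-scoring control:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def propensity_match(X, exposed):
        """X: covariate matrix; exposed: boolean array marking the exposure of
        interest (e.g. index HF diagnosis in primary care vs. via hospitalisation).
        Returns, for each exposed patient, the index of the closest control."""
        ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
        controls = np.flatnonzero(~exposed)
        nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
        _, j = nn.kneighbors(ps[exposed].reshape(-1, 1))
        return controls[j.ravel()]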

    Revolutionizing Healthcare through Health Monitoring Applications with Wearable Biomedical Devices

    The Internet of Things (IoT) has revolutionized the connectivity and communication of tangible objects, and it serves as a versatile and cost-effective solution in the healthcare sector, particularly in regions with limited healthcare infrastructure. This research explores the application of sensors such as the LM35, AD8232, and MAX30100 for the detection of vital health indicators, including body temperature, pulse rate, electrocardiogram (ECG), and oxygen saturation levels, with data transmission through an IoT cloud, offering real-time parameter access via an Android application for non-invasive remote patient monitoring. The study aims to expand healthcare services to various settings, such as hospitals, commercial areas, educational institutions, workplaces, and residential neighborhoods. After the COVID-19 pandemic, IoT-enabled continuous monitoring of critical health metrics such as temperature and pulse rate has become increasingly crucial for early illness detection and efficient communication with healthcare providers. Our low-cost wearable device, which includes ECG monitoring, aims to bridge the accessibility gap for people with limited financial resources, with the primary goal of providing efficient healthcare solutions to underserved rural areas while also contributing valuable data to future medical research. Our proposed system is a low-cost, high-efficiency solution that outperforms existing systems in healthcare data collection and patient monitoring. It improves access to vital health data and shows economic benefits, indicating a significant advancement in healthcare technology.
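
    A hypothetical gateway-side sketch of the transmission path described above: vital signs are packaged as JSON and published to an IoT cloud broker over MQTT using the paho-mqtt client. The read_* helpers and the broker address are placeholders standing in for the real LM35/AD8232/MAX30100 drivers and cloud endpoint, not the system's actual firmware.

    import json
    import time
    import paho.mqtt.client as mqtt

    def read_lm35():
        return 36.8                   # placeholder: body temperature, deg C

    def read_max30100():
        return 72, 98                 # placeholder: pulse (bpm), SpO2 (%)

    def read_ad8232():
        return 0.42                   # placeholder: one ECG sample, mV

    client = mqtt.Client()            # paho-mqtt 1.x-style constructor
    client.connect("broker.example.org", 1883)   # hypothetical broker address
    while True:
        pulse, spo2 = read_max30100()
        payload = {"temp_c": read_lm35(), "pulse_bpm": pulse, "spo2_pct": spo2,
                   "ecg_mv": read_ad8232(), "ts": time.time()}
        client.publish("clinic/patient42/vitals", json.dumps(payload))
        time.sleep(5)                 # slow vitals; ECG would stream at a higher rate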

    Determining the Optimal Intervention Time for Degenerative Mitral Regurgitation Using Left Ventricle Mechanics

    Background: Uncertainty remains about the timing of intervention in primary mitral regurgitation (MR). The incremental and clinical value of newer techniques, including LV and LA deformation, 3D LV volumes, myocardial work and cardiac biomarkers, is poorly understood. We aimed to examine whether advanced echocardiographic imaging techniques, exercise tests and blood biomarkers can identify the earliest signs of LV dysfunction, predict post-operative outcomes and objectively detect symptoms in patients with primary MR, which may help guide the timing of intervention. Method: In the asymptomatic cohort, resting and exercise echocardiography combined with CPET were performed prospectively in 97 asymptomatic patients with moderate to severe or severe primary MR. In the surgery cohort, echocardiography was performed at baseline and one year after MV surgery in 98 patients with severe degenerative MR. Results: In the asymptomatic cohort, 54% of patients had reduced exercise capacity, i.e. VO2 peak < 84% of the predicted value, and 18% of patients stopped the exercise test because of dyspnoea. Higher resting PASP was a predictor of dyspnoea during exercise testing, while LV end-diastolic volume was a better predictor of subsequent mitral surgery. In the surgery cohort, after mitral surgery, 6 (6%) patients died and LV dysfunction (LVEF < 50%) developed in 12 (12%) patients. Reservoir LA strain and global work index were associated with post-operative LVEF; however, pre-operative GLS and NT-proBNP were independent predictors of post-operative LVEF. LA strain parameters and NT-proBNP were associated with the presence of symptoms; however, GWI and PASP were independently associated with the occurrence of symptoms. Conclusion: This thesis demonstrated that the presence of adverse markers such as impaired myocardial deformation, reduced myocardial work index, pulmonary hypertension and high NT-proBNP was associated with poor prognosis. These markers are non-invasive, safe and relatively easy to obtain. The combination of CPET and exercise echocardiography provides unique data in the assessment of symptomatic status; it should be used much more frequently in the assessment of MR and perhaps even incorporated as a standard part of clinical practice.

    2023-2024 Boise State University Undergraduate Catalog

    This catalog is primarily for and directed at students, though it serves many audiences, such as high school counselors, academic advisors, and the public. In this catalog you will find an overview of Boise State University and information on admission, registration, grades, tuition and fees, financial aid, housing, student services, and other important policies and procedures. Most of this catalog, however, is devoted to describing the various programs and courses offered at Boise State.

    Deep Multi Temporal Scale Networks for Human Motion Analysis

    The movement of human beings appears to be governed by a complex motor system that contains signals at different hierarchical levels. For example, an action such as ``grasping a glass on a table'' is a high-level action, but to perform this task the body needs several motor inputs, including the activation of different joints of the body (shoulder, arm, hand, fingers, etc.). Each of these joints/muscles has a different size, responsiveness, and precision, within a complex, non-linearly stratified temporal dimension where every muscle has its own temporal scale: parts such as the fingers respond much faster to brain input than more voluminous body parts such as the shoulder. The cooperation of these parts when we perform an action produces smooth, effective, and expressive movement in a complex, multiple-temporal-scale cognitive task. Following this layered structure, the human body can be described as a kinematic tree consisting of connected joints. Although it is nowadays well known that human movement and its perception are characterised by multiple temporal scales, very few works in the literature focus on studying this particular property. In this thesis, we focus on the analysis of human movement using data-driven techniques, in particular the non-verbal aspects of human movement, with an emphasis on full-body movements. Data-driven methods can interpret the information in the data by searching for rules, associations or patterns that represent the relationships between input (e.g. the human action acquired with sensors) and output (e.g. the type of action performed). Furthermore, these models may represent a new research frontier, as they can analyse large masses of data and focus on aspects that even an expert user might miss. The literature on data-driven models proposes two families of methods that can process time series and human movement. The first family, called shallow models, extracts features from the time series that help the learning algorithm find associations in the data; these features are identified and designed by domain experts who can select the best ones for the problem at hand. The second family avoids this human extraction phase, since the models themselves can identify the best set of features to optimise their own learning. In this thesis, we provide a method that applies the multiple-temporal-scale property of the human motion domain to deep learning models, the only data-driven models that can be extended to handle this property. We ask ourselves two questions: what happens if we apply knowledge about how human movements are performed to deep learning models? Can this knowledge improve current automatic recognition standards? To test the validity of our study, we collected data and tested our hypotheses in specially designed experiments. Results support both the proposal and the need for deep multi-scale models as a tool to better understand human movement and its multiple-time-scale nature.
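
    One common way to realise the multiple-temporal-scale idea in a deep model is to run parallel temporal branches with different receptive fields over the joint trajectories. The PyTorch sketch below illustrates that general pattern only, with illustrative channel counts and kernel sizes; it is not the architecture developed in the thesis.

    import torch
    import torch.nn as nn

    class MultiTemporalScaleNet(nn.Module):
        """Parallel 1-D temporal convolution branches with different kernel
        sizes act as fast/medium/slow time scales over the joint channels."""
        def __init__(self, n_channels, n_classes, scales=(3, 9, 27)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv1d(n_channels, 64, kernel_size=k, padding=k // 2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1))    # pool each branch over time
                for k in scales])
            self.head = nn.Linear(64 * len(scales), n_classes)

        def forward(self, x):                   # x: (batch, joints * coords, time)
            feats = [branch(x).squeeze(-1) for branch in self.branches]
            return self.head(torch.cat(feats, dim=1))

    # e.g. 25 joints x 3 coordinates over 120 frames, batch of 8:
    logits = MultiTemporalScaleNet(n_channels=75, n_classes=10)(torch.randn(8, 75, 120))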