6,434 research outputs found

    Bayesian Forecasting in Economics and Finance: A Modern Review

    Full text link
    The Bayesian statistical paradigm provides a principled and coherent approach to probabilistic forecasting. Uncertainty about all unknowns that characterize any forecasting problem -- model, parameters, latent states -- can be quantified explicitly and factored into the forecast distribution via the process of integration or averaging. Allied with the elegance of the method, Bayesian forecasting is now underpinned by the burgeoning field of Bayesian computation, which enables Bayesian forecasts to be produced for virtually any problem, no matter how large or complex. The current state of play in Bayesian forecasting in economics and finance is the subject of this review. The aim is to provide the reader with an overview of modern approaches to the field, set in some historical context, and with sufficient computational detail to assist with implementation. Comment: The paper is now published online at: https://doi.org/10.1016/j.ijforecast.2023.05.00
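
    As a concrete reminder of the averaging described above, the Bayesian forecast distribution in its simplest textbook form is the posterior predictive, obtained by integrating the conditional forecast density against the parameter posterior, with model uncertainty handled by a further weighted average over models (the notation here is generic, not specific to the review):

        p(y_{T+1} \mid y_{1:T}) = \int p(y_{T+1} \mid \theta, y_{1:T}) \, p(\theta \mid y_{1:T}) \, d\theta,
        \qquad
        p(y_{T+1} \mid y_{1:T}) = \sum_{k} p(y_{T+1} \mid y_{1:T}, M_k) \, p(M_k \mid y_{1:T}).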

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    Get PDF
    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. All of these processes must be investigated to evaluate which variant may be causal for the deleterious phenotype. Variant effect scores are a great help in this regard. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment of the model. Here, I present a generalized workflow for this process. It makes it simple to configure how information is converted into model features, enabling rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation, and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural-network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
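
    The workflow described above lends itself to a compact illustration. The following is a minimal, hypothetical sketch of the generic steps (load annotated variants, tune hyperparameters by cross-validation, validate on held-out data, then score variants); the loader, the feature set and the logistic-regression stand-in are illustrative assumptions, not CADD's actual annotations or model.

    # Hypothetical sketch of a variant-effect-score training workflow:
    # annotate -> train/tune -> validate -> genome-wide scoring.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import roc_auc_score

    def load_annotated_variants():
        """Stand-in loader: feature matrix X (one row per variant, one column
        per annotation) and proxy labels y (1 = putatively deleterious)."""
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 20))                     # e.g. conservation, splicing scores, ...
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
        return X, y

    X, y = load_annotated_variants()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Hyperparameter optimization via cross-validated grid search.
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    search.fit(X_train, y_train)

    # Validation on held-out variants before deployment.
    print("held-out AUC:", roc_auc_score(y_test, search.predict_proba(X_test)[:, 1]))

    # Deployment: score any set of annotated variants.
    scores = search.predict_proba(X)[:, 1]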

    Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation

    Get PDF
    Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have improved with recent technological advancements and allow data collection during flights. The recorded data help to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic have increased the momentum for reducing flight risk using artificial intelligence. Discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools needed to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating flights exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During flight training, the prediction model is used in flight to identify potential risks, and the simulation model is used post-flight to review the flight behavior. The prediction model identifies potential risks by predicting the flight parameters that best characterize landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat. The predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind because it is a relatively significant factor during the final approach, when the aircraft is typically stabilized. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities. These differences are crucial for learning how to mitigate adverse flight conditions. Both models are built with neural networks. The main challenges in developing a neural network model are that the model design space is unique to each problem and cannot accommodate multiple problems, and that it can be very large, depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behavior during the final approach. A series of experiments is performed to observe how model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies and window sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states.
    The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily interpreted. The methodology is then demonstrated with a sample flight exposed to an overrun precursor (high approach speed) to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is evaluating the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing only the model structures, this work optimized both data and model structures to ensure that the data adequately capture the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to converge quickly to an optimal configuration while exploring the design space. With the optimized models, this work identified the data features from the final approach with the highest contribution to predicting airspeed, vertical speed, and pitch angle near touchdown. The top contributing features are altitude, angle of attack, core rpm, and airspeeds. For both the prediction and the simulation models, this study examines the impact of various data preprocessing methods on model accuracy. The results may help future studies identify suitable data preprocessing methods for their work. Another contribution of this work is evaluating how flight control and wind affect both the prediction and the simulation models. This is achieved by mapping model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy across different levels of control surface deflection and wind conditions, indicating that neural network-based models are effective for building robust prediction and simulation models of aircraft during the final approach. The results also showed that data frequency has a significant impact on prediction and simulation accuracy, so it is important to have sufficient training data in the conditions in which the models will be used. The final contribution of this work is demonstrating how the prediction and the simulation models can be used to increase awareness of runway overrun.
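
    To make the prediction side concrete, here is a minimal sketch of a GRU model mapping a final-approach time series to flight states near touchdown. The feature set, window length, layer sizes and the synthetic data are illustrative assumptions, not the optimized configuration or the FOQA data used in this work.

    # Sketch: GRU predictor of touchdown flight states from final-approach data.
    import numpy as np
    import tensorflow as tf

    n_flights, timesteps, n_features = 1000, 60, 4            # e.g. altitude, AoA, core rpm, airspeed
    X = np.random.rand(n_flights, timesteps, n_features).astype("float32")
    y = np.random.rand(n_flights, 3).astype("float32")        # airspeed, vertical speed, pitch at touchdown

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        tf.keras.layers.GRU(32),                              # recurrent encoding of the approach
        tf.keras.layers.Dense(3),                             # predicted flight states near touchdown
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

    # In-flight use: predict touchdown states from the latest window and
    # compare them against overrun-precursor thresholds to raise an alert.
    predicted = model.predict(X[:1], verbose=0)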

    Applications of Nonlinear Dynamics in Semiconductor Lasers With Time-Delayed Feedback in Microwave Photonics

    Get PDF
    The main objective of this research is to investigate the rich nonlinear dynamics of a semiconductor laser diode (LD) subjected to time-delayed optoelectronic (OE) feedback, emphasizing applications in microwave photonics and communications. A semiconductor LD with OE feedback constitutes an oscillator that produces self-sustained optical output modulation through the intrinsic nonlinearities of the system, without needing any external modulators. To explore the wide variety of dynamics in the optical intensity, the LD needs to be perturbed out of its steady-state free-running behavior, so the photodetected optical signal is appropriately amplified before being fed back into the LD injection terminal. The complex dynamics of such an oscillator have been studied theoretically and experimentally in recent decades. In this work, however, we report several novel dynamical effects by re-examining this rich nonlinear system with state-of-the-art experiments, supported by comprehensive modelling. In particular, we have identified operating conditions that exhibit high-order locking between LD relaxation oscillations and harmonics of the feedback delay frequency for OE feedback with a large delay. We also observe that this system exhibits a stepwise change in LD oscillation frequency as the feedback level is varied. Further, upon varying the injection current near threshold, we can also generate a periodic pulse train with a repetition rate at the feedback delay frequency, arising from gain-switching between the on and off states of the LD. This pulse train grows into pulse clusters as the current is increased. In addition, driving an LD at very high currents and strong feedback results in square-wave pulses whose repetition rate is determined by the feedback delay of the OE loop. The square waves at a fixed current have been shown to exhibit a double-peaked optical spectrum that depends on the feedback level. These discoveries advance the understanding of the nonlinear OE oscillator and could find applications in communications, sensing, measurement, and spectroscopy.
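
    For readers who want a feel for the delayed-feedback mechanism, the sketch below integrates a standard dimensionless class-B laser rate-equation model in which the photodetected intensity is amplified and re-injected into the pump term after a delay. The model form and all parameter values are textbook-style assumptions, not the experimental system or the comprehensive model used in this thesis.

    # Sketch: dimensionless LD rate equations with delayed optoelectronic feedback.
    # Time is measured in units of the photon lifetime.
    import numpy as np

    gamma = 1e-3        # photon-to-carrier lifetime ratio (illustrative)
    mu0   = 2.0         # free-running pump level above threshold
    eta   = 0.3         # feedback gain (amplified photocurrent)
    theta = 500.0       # feedback delay
    dt, T = 0.01, 5000.0

    steps, d = int(T / dt), int(theta / dt)
    s = np.empty(steps)       # normalized photon number (optical intensity)
    n = np.empty(steps)       # normalized carrier inversion
    s[0], n[0] = 1.01, 1.0    # small kick off the free-running steady state

    for k in range(steps - 1):
        s_delayed = s[k - d] if k >= d else s[0]
        mu = mu0 + eta * s_delayed                    # delayed intensity re-injected as pump current
        s[k + 1] = s[k] + dt * (n[k] - 1.0) * s[k]
        n[k + 1] = n[k] + dt * gamma * (mu - n[k] * (1.0 + s[k]))

    # The spectrum of s reveals the interplay between relaxation oscillations
    # and harmonics of the inverse delay, i.e. the locking discussed above.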

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    Get PDF
    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and benefit readers from both academia and industry.

    On noise, uncertainty and inference for computational diffusion MRI

    Get PDF
    Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential in mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and of quantifying uncertainty are of great importance, and are the topic of this thesis. First, we look into approaches for reducing uncertainty by denoising the dMRI signal. Thermal noise can have detrimental effects for modalities where the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise, both in increasing variance and in introducing bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal but do not address the bias effects introduced by the noise floor. Our framework also allows better characterisation of scenarios where denoising can be beneficial (e.g. when done in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries. Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon the univariate random-walk Metropolis-Hastings MCMC, an extensively used method for sampling from the posterior distribution of model parameters given the data. We devise an efficient adaptive multivariate MCMC scheme, relying on the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases sampling efficiency while preserving the accuracy and precision of estimates. We show results using both synthetic and in-vivo dMRI data. In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the need to repeat the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures to perform conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of 2-3 orders of magnitude in the inference process while maintaining accuracy in the estimates.
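
    As an illustration of the first (classical Bayesian) ingredient, the sketch below implements a generic multivariate random-walk Metropolis-Hastings sampler whose proposal covariance is periodically re-estimated from the chain history. The toy log-posterior and the adaptation schedule are assumptions for illustration; they are not the thesis's adaptive scheme or a dMRI forward model.

    # Sketch: adaptive multivariate random-walk Metropolis-Hastings.
    import numpy as np

    def log_posterior(theta):
        # Toy stand-in: standard Gaussian posterior over three parameters.
        return -0.5 * np.sum(theta ** 2)

    rng = np.random.default_rng(0)
    dim, n_samples, adapt_every = 3, 20000, 500
    samples = np.empty((n_samples, dim))
    theta = np.zeros(dim)
    logp = log_posterior(theta)
    cov = 0.1 * np.eye(dim)                                    # initial proposal covariance

    for i in range(n_samples):
        proposal = rng.multivariate_normal(theta, cov)         # joint proposal for all parameters
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:            # Metropolis accept/reject
            theta, logp = proposal, logp_prop
        samples[i] = theta
        # Re-estimate the proposal covariance so correlated parameters move jointly.
        if i >= 1000 and i % adapt_every == 0:
            cov = np.cov(samples[:i].T) * (2.38 ** 2 / dim) + 1e-6 * np.eye(dim)

    posterior_mean = samples[5000:].mean(axis=0)               # discard burn-in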

    Adaptive measurement filter: efficient strategy for optimal estimation of quantum Markov chains

    Full text link
    Continuous-time measurements are instrumental for a multitude of tasks in quantum engineering and quantum control, including the estimation of dynamical parameters of open quantum systems monitored through the environment. However, such measurements do not extract the maximum amount of information available in the output state, so finding alternative optimal measurement strategies is a major open problem. In this paper we solve this problem in the setting of discrete-time input-output quantum Markov chains. We present an efficient algorithm for optimal estimation of one-dimensional dynamical parameters which consists of an iterative procedure for updating a `measurement filter' operator and determining successive measurement bases for the output units. A key ingredient of the scheme is the use of a coherent quantum absorber as a way to post-process the output after the interaction with the system. This is designed adaptively such that the joint system and absorber stationary state is pure at a reference parameter value. The scheme offers an exciting prospect for optimal continuous-time adaptive measurements, but more work is needed to find realistic practical implementations. Comment: 25 pages, 7 figures

    An aluminum optical clock setup and its evaluation using Ca+

    Get PDF
    This thesis reports on the progress of the aluminum ion clock being set up at the German national metrology institute, Physikalisch-Technische Bundesanstalt (PTB), in Braunschweig. All known relevant systematic frequency shifts are discussed. The systematic shifts were measured on the co-trapped logic ion 40Ca+, which is advantageous due to its higher sensitivity to external fields compared to 27Al+. The observation of the clock transition of 27Al+ and an analysis of the detection error are described. DFG/DQ-mat/Project-ID 274200144 – SFB 1227/E
    • 

    corecore