
    The History of the Quantitative Methods in Finance Conference Series. 1992-2007

    This report charts the history of the Quantitative Methods in Finance (QMF) conference from its beginning in 1993 to the 15th conference in 2007. It lists alphabetically the 1037 speakers who presented across the 15 conferences, together with the titles of their papers.

    Seizure-onset mapping based on time-variant multivariate functional connectivity analysis of high-dimensional intracranial EEG : a Kalman filter approach

    The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased by interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize electrographic seizure onsets. Due to their high computational cost, these methods have so far been applied to a limited number of iEEG time series (< 60). The aim of this study was to test two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to recover the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the feasibility of fully data-driven multivariate connectivity estimation on high-dimensional iEEG datasets using the Kalman filter approach.
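
    To make the approach concrete, here is a minimal Python/NumPy sketch of the two ingredients the abstract names: a Kalman-filter estimate of a time-variant multivariate AR model, and the partial directed coherence computed from the estimated coefficients. This is an illustrative order-1 implementation with made-up noise parameters `q` and `r`, not the Arnold et al. (1998) code or the paper's simplified derivation:

```python
import numpy as np

def kalman_mvar(y, q=1e-4, r=1.0):
    """Time-variant order-1 MVAR fit via a Kalman filter (illustrative sketch).

    y : (T, d) array of iEEG samples (one column per channel).
    The state is vec(A_t), the AR coefficient matrix evolving as a random
    walk; the observation equation is y_t = (y_{t-1}^T kron I_d) vec(A_t) + e_t.
    q, r are made-up adaptation and measurement noise levels.
    """
    T, d = y.shape
    n = d * d
    x = np.zeros(n)                   # vec(A), column-stacked
    P = np.eye(n)                     # state covariance
    Q, R = q * np.eye(n), r * np.eye(d)
    A = np.zeros((T, d, d))
    for t in range(1, T):
        H = np.kron(y[t - 1][None, :], np.eye(d))     # (d, d*d) observation matrix
        P = P + Q                                     # predict (random-walk state)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (y[t] - H @ x)                    # correct with innovation
        P = (np.eye(n) - K @ H) @ P
        A[t] = x.reshape(d, d, order="F")
    return A

def pdc(A, f, fs):
    """Partial directed coherence at frequency f for one order-1 MVAR matrix."""
    Abar = np.eye(A.shape[0]) - A * np.exp(-2j * np.pi * f / fs)
    return np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))
```

    An outdegree map in the spirit of the paper would then sum, for each channel, its outgoing PDC values above some significance threshold.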

    Action Functional Gradient Descent algorithm for estimating escape paths in Stochastic Chemical Reaction Networks

    We first derive the Hamilton-Jacobi theory underlying continuous-time Markov processes, and then use the construction to develop a variational algorithm for estimating escape (least improbable, or first-passage) paths for a generic stochastic chemical reaction network that exhibits multiple fixed points. The algorithm is designed so that it is independent of the underlying dimensionality of the system, its discretization control parameters are updated towards the continuum limit, and it comes with an easy-to-calculate measure of the correctness of its solution. We consider several applications of the algorithm and verify them against computationally expensive means such as the shooting method and stochastic simulation. While we employ theoretical techniques from mathematical physics, numerical optimization and chemical reaction network theory, we hope that our work finds practical applications with an interdisciplinary audience including chemists, biologists, optimal control theorists and game theorists.

    Comment: 38 pages, 21 figures
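
    The variational object behind this construction can be sketched with the standard WKB / Freidlin-Wentzell form for a reaction network with propensities a_r(x) and stoichiometric change vectors ν_r (a generic sketch of the textbook construction, not necessarily the paper's exact notation):

```latex
\[
  H(x,p) \,=\, \sum_r a_r(x)\left(e^{\,p\cdot\nu_r}-1\right),
  \qquad
  S[x(\cdot)] \,=\, \int_0^T \left(p\cdot\dot{x} - H(x,p)\right)\,dt .
\]
```

    Gradient descent on a discretization of S, with the path endpoints pinned to two fixed points of the deterministic rate equations, then yields the escape (least improbable) path.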

    Exploring Two Novel Features for EEG-based Brain-Computer Interfaces: Multifractal Cumulants and Predictive Complexity

    In this paper, we introduce two new features for the design of electroencephalography (EEG) based Brain-Computer Interfaces (BCI): one based on multifractal cumulants and one based on the predictive complexity of the EEG time series. The multifractal cumulants feature measures the signal regularity, while the predictive complexity measures how difficult the future of the signal is to predict from its past, hence its degree of complexity. We evaluated the performance of these two novel features on EEG data corresponding to motor imagery, and compared them to the most successful features used in the BCI field, namely band-power features. We evaluated the three kinds of features and their combinations on EEG signals from 13 subjects. The results show that our novel features can lead to BCI designs with improved classification performance, notably when the three kinds of features (band power, multifractal cumulants, predictive complexity) are used and combined together.

    Comment: Updated with more subjects. Separated out the band-power comparisons into a companion article after reviewer feedback. Source code and companion article are available at http://nicolas.brodu.numerimoire.net/en/recherche/publication
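
    For orientation, the baseline feature the paper compares against is easy to state in code. The Python/SciPy sketch below computes log band-power features per channel; the band edges and the function name are illustrative choices, and the paper's multifractal-cumulant and predictive-complexity features are substantially more involved than this:

```python
import numpy as np
from scipy.signal import welch

def band_power_features(epoch, fs, bands=((8, 12), (16, 24))):
    """Log band-power features for one EEG epoch of shape (channels, samples).

    A sketch of the classical baseline feature only; the band edges
    (mu and beta ranges here) are illustrative.
    """
    freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[-1], 256))
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[..., mask].sum(axis=-1)))  # one value per channel
    return np.concatenate(feats)   # (channels * n_bands,) feature vector

# Illustrative usage on a fake 2-second epoch, 32 channels sampled at 256 Hz:
rng = np.random.default_rng(0)
x = band_power_features(rng.standard_normal((32, 512)), fs=256)
```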

    Materials genes of heterogeneous catalysis from clean experiments and artificial intelligence

    The performance in heterogeneous catalysis is an example of a complex materials function, governed by an intricate interplay of several processes (e.g., the different surface chemical reactions, and the dynamic restructuring of the catalyst material at reaction conditions). Modeling the full catalytic progression via first-principles statistical mechanics is impractical, if not impossible. Instead, we show here how a tailored artificial-intelligence approach can be applied, even to a small number of materials, to model catalysis and determine the key descriptive parameters (“materials genes”) reflecting the processes that trigger, facilitate, or hinder catalyst performance. We start from a consistent experimental set of “clean data”, containing nine vanadium-based oxidation catalysts. These materials were synthesized, fully characterized, and tested according to standardized protocols. By applying the symbolic-regression SISSO approach, we identify correlations between the few most relevant materials properties and their reactivity. This approach highlights the underlying physicochemical processes and accelerates catalyst design.
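
    SISSO has its own published implementation; purely as a toy illustration of the two ingredients behind the name (sure independence screening followed by a sparse l0-type search over a constructed descriptor space), here is a minimal Python version. All function names, operator choices, and parameters are made up for illustration:

```python
import itertools
import numpy as np

def toy_sisso(X, names, y, n_sis=20, dim=2):
    """Toy SISSO-style descriptor search (illustration only, not the real code).

    X : (samples, primary features), names : feature labels, y : target
    property (e.g., measured catalytic activity).
    Step 1 builds simple nonlinear candidate features, step 2 (SIS) keeps
    the n_sis candidates most correlated with y, step 3 fits every
    dim-tuple by least squares (an exhaustive l0-type search).
    Assumes nonzero feature values for the ratio candidates.
    """
    cands, labels = [], []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        for c, lab in ((X[:, i] * X[:, j], f"{names[i]}*{names[j]}"),
                       (X[:, i] / X[:, j], f"{names[i]}/{names[j]}"),
                       (X[:, i] - X[:, j], f"{names[i]}-{names[j]}")):
            cands.append(c)
            labels.append(lab)
    C = np.column_stack(cands)
    score = np.abs(np.corrcoef(C.T, y)[-1, :-1])     # screening scores vs. y
    top = np.argsort(score)[::-1][:n_sis]            # SIS: keep best candidates
    best_rmse, best_desc = np.inf, None
    for combo in itertools.combinations(top, dim):   # exhaustive l0 step
        D = np.column_stack([C[:, k] for k in combo] + [np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        rmse = np.sqrt(np.mean((D @ coef - y) ** 2))
        if rmse < best_rmse:
            best_rmse, best_desc = rmse, [labels[k] for k in combo]
    return best_desc, best_rmse
```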

    Simulation methods with extended stability for stiff biochemical kinetics

    Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value is inversely related to the size of the propensities of the different reaction channels and which needs to be re-evaluated after every firing event. Such a discrete-event simulation can be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the reaction channels. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step by allowing all reactions to fire within that step, with counts drawn from a Poisson or binomial distribution. Although the expected values of the different species in the reactive system are maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows.

    Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that, with a proper selection of the coefficients, the variance of the extended τ-leap can be well behaved, leading to significantly larger step sizes.

    Conclusions: The benefit of adapting the extended method to RK frameworks is clear in terms of computational speed, as only one set of evaluations of the Poisson distribution is needed per time step, as in the original τ-leap method. The approach paves the way for new multiscale methods to simulate (bio)chemical systems.
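
    For reference, the plain Poisson τ-leap that the paper generalizes to Runge-Kutta variants can be sketched in a few lines of Python (the RK extension itself is not reproduced here; the guard against negative populations and the example network are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_tau_leap(x0, nu, propensities, tau, n_steps):
    """Plain Poisson tau-leap, the baseline the paper extends to RK variants.

    x0 : initial copy numbers; nu : (reactions, species) stoichiometry;
    propensities : callable x -> per-reaction rates; tau : fixed step size.
    Real codes pick tau adaptively; the clipping below is a crude guard
    against negative populations, for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        k = rng.poisson(propensities(x) * tau)  # firings of each channel in tau
        x = np.maximum(x + k @ nu, 0.0)
        traj.append(x.copy())
    return np.array(traj)

# Illustrative network: dimerisation 2A -> B and decay B -> 0
nu = np.array([[-2.0,  1.0],
               [ 0.0, -1.0]])
prop = lambda x: np.array([1e-3 * x[0] * (x[0] - 1) / 2, 0.1 * x[1]])
traj = poisson_tau_leap([1000, 0], nu, prop, tau=0.1, n_steps=200)
```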

    Stochastic modelling of new phenomena in financial markets

    University of Technology Sydney, Faculty of Business.

    The Global Financial Crisis (GFC) has revealed a number of new phenomena in financial markets, to which stochastic models have to be adapted. This dissertation presents two new methodologies, one for modeling the “basis spread” and the other for “rough volatility”. The former gained prominence during the GFC and continues to persist, while the latter has become increasingly evident since 2014.

    The dissertation commences with a study of the interest rate market. Since 2008, this market has exhibited “basis spreads” added to one side of single-currency floating-for-floating swaps. The persistence of these spreads indicates that the market is pricing a risk that is not captured by existing models. The risks driving the spreads are closely related to the risks affecting the funding of banks participating in benchmark interest rate panels, specifically “roll-over” risk: the risk of being unable to refinance borrowing at the benchmark interest rate. We explicitly model funding liquidity and credit risk, the two components of roll-over risk, developing first a model framework and then considering a specific instance of this framework based on affine term structure models.

    Subsequently, another specific instance of the roll-over risk model is constructed using polynomial processes. Instead of pricing options through closed-form expressions for conditional moments with respect to the observed process, the price of a zero-coupon bond is expressed as a polynomial of finite degree in the sense of Cheng & Tehranchi (2015). A formula for discrete-tenor benchmark interest rates (e.g., LIBOR) under roll-over risk is constructed, which depends on a quotient of polynomial processes. It is shown how such a model can be calibrated to market data for the discount factors bootstrapped from the overnight index swap (OIS) rate curve.

    This is followed by a chapter on a numerical method for the valuation of financial derivatives with a two-dimensional underlying risk, applied in particular to the problem of pricing spread options. As is common, analytically closed-form solutions for pricing these payoffs are unavailable, and numerical pricing methods turn out to be non-trivial. We price spread options in a model where asset prices are driven by a multivariate normal inverse Gaussian (NIG) process, considering pricing problems in the fixed-income market, specifically on cross-currency interest rate spreads and on LIBOR-OIS spreads.

    The final contribution of the dissertation tackles regime switching in a rough-volatility Heston model, which incorporates two important features. The regime switching is motivated by fundamental economic changes, and a Markov chain is proposed to model the switches in the long-term mean of the volatility. The rough behaviour is a more local property, motivated by the stylized fact that volatility is less regular than a standard Brownian motion; the instantaneous volatility process is endowed with a kernel that induces rough behaviour in the model. Pricing formulae for call and put options are derived and implemented using the Fourier-inversion formula of Gil-Pelaez (1951).
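
    The Gil-Pelaez inversion step mentioned at the end is generic enough to sketch: it turns the characteristic function of the log-price into the two exercise probabilities of a European call. Below, a Black-Scholes characteristic function stands in for the regime-switching rough Heston one derived in the thesis; function names, the integration truncation, and the sanity check are illustrative:

```python
import numpy as np
from scipy.integrate import quad

def gil_pelaez_call(phi, S0, K, r, T, umax=200.0):
    """European call price via Gil-Pelaez (1951) inversion.

    phi : characteristic function of the log-price under the pricing
    measure. Generic sketch: the thesis plugs in a regime-switching rough
    Heston characteristic function; the truncation umax is illustrative
    and model-dependent.
    """
    k = np.log(K)
    def prob(cf):  # P(log S_T > log K) under the measure implied by cf
        integrand = lambda u: (np.exp(-1j * u * k) * cf(u) / (1j * u)).real
        return 0.5 + quad(integrand, 1e-8, umax, limit=500)[0] / np.pi
    P1 = prob(lambda u: phi(u - 1j) / phi(-1j))   # share (stock) measure
    P2 = prob(phi)                                # risk-neutral measure
    return S0 * P1 - K * np.exp(-r * T) * P2

# Sanity check with a Black-Scholes characteristic function:
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
phi_bs = lambda u: np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                          - 0.5 * sigma**2 * u**2 * T)
print(gil_pelaez_call(phi_bs, S0, K, r, T))  # ~8.92, the Black-Scholes value
```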