    High order approximation theory for Banach space valued functions

    Here we study quantitatively the high degree of approximation of sequences of linear operators acting on Banach space valued differentiable functions to the unit operator. These operators are bounded by real positive linear companion operators. The Banach spaces considered here are general, and no positivity assumption is made on the initial linear operators whose approximation properties we study. We derive pointwise and uniform estimates which imply the convergence of these operators to the unit operator, assuming differentiability of the functions. At the end we study the special case where the high-order derivative of the function at hand fulfills a convexity condition, resulting in sharper estimates.
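    As a hedged illustration only (not the paper's exact statement), quantitative estimates of this type often take the following Taylor-remainder form, where L_n are the operators under study, \widetilde{L}_n their positive companion operators, f^{(j)} the derivatives of f, and \omega_1 the modulus of continuity; the normalization L_n(1) = 1 is an assumption of the sketch:

    \[
    \big\| (L_n f)(x) - f(x) \big\|
      \le \sum_{j=1}^{k} \frac{\| f^{(j)}(x) \|}{j!}\, \widetilde{L}_n\big( |\cdot - x|^{j} \big)(x)
      + \frac{1}{k!}\, \widetilde{L}_n\Big( |\cdot - x|^{k}\, \omega_1\big( f^{(k)}, |\cdot - x| \big) \Big)(x).
    \]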

    Computational Characterization of the Cellular Origins of Electroencephalography

    Electroencephalography (EEG) is a non-invasive technique used to measure brain activity. Despite its near ubiquitous presence in neuroscience, very little research has gone into connecting the electrical potentials it measures on the scalp to the underlying network activity which generates those signals. This results in most EEG analyses being more macroscopically focused (e.g. coherence and correlation analyses). Despite the many uses of macroscopically focused analyses, limiting research to only these analyses neglects the insights which can be gained from studying network and microcircuit architecture. The ability to study these things through non-invasive techniques like EEG depends upon the ability to understand how the activity of individual neurons affects the electrical potentials recorded by EEG electrodes on the scalp. The research presented here is designed to take the first steps towards providing that link. Current dipole moments generated by multiple multi-compartment, morphologically accurate, three-dimensional neuron models were characterized into a single time series called a dipole response function (DRF). We found that when the soma of a neuron is directly stimulated to threshold, the resulting action potential caused an excess of current which backpropagated up the dendritic tree, activating voltage-gated ion channels along the way. This backpropagation created a dipole whose magnitude and duration were greater than those of the current dipoles created by neurons that were synaptically activated to near threshold. Additionally, we presented a novel technique where, through the combination of the DRFs with point-source network activity via convolution, dipoles generated by populations of neurons can be simulated (see the sketch below). We validated this technique at multiple spatial scales using data from both animal models and human subjects. Our results show that this technique can provide a reasonable representation of the extracellular fields and EEG signals generated in their physiological counterparts. Finally, analysis of a simulated evoked potential generated via the proposed convolutional methodology showed that ∼98% of the variability of the simulated signal could be accounted for by the dipoles originating from DRFs of spiking pyramidal cells.
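    A minimal sketch of that convolutional step, assuming a precomputed DRF kernel per cell class and binary spike trains as the point-source network activity; all names, shapes, and the toy waveform are illustrative assumptions, not the thesis code:

    import numpy as np

    def population_dipole(drf, spikes):
        """Convolve each cell's spike train with the DRF and sum over the population."""
        total = np.zeros(spikes.shape[1] + drf.size - 1)
        for train in spikes:
            total += np.convolve(train, drf)  # each spike stamps one dipole waveform
        return total

    # usage with synthetic data: 100 cells, 0.1 ms bins, a toy 50 ms dipole kernel
    rng = np.random.default_rng(0)
    t = np.arange(0, 0.05, 1e-4)
    drf = np.exp(-t / 0.01) * np.sin(2 * np.pi * 40 * t)
    spikes = (rng.random((100, 5000)) < 0.001).astype(float)
    dipole = population_dipole(drf, spikes)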

    Calendar Time Sampling of High Frequency Financial Asset Price and the Verdict on Jumps

    In the current paper, we investigate the bias introduced through calendar time sampling of the price process of financial assets. We analyze results from a Monte Carlo simulation which point to the conclusion that the multitude of jumps reported in the literature might be, to a large extent, an artifact of the bias introduced through the previous-tick sampling scheme used for the time homogenization of the price series. We advocate the use of Akima cubic splines as an alternative to the popular previous-tick method. Monte Carlo simulation results confirm the suitability of Akima cubic splines in high-frequency applications and their advantages over other calendar time sampling schemes, such as linear interpolation and the previous-tick method. Empirical results from the FX market complement the analysis.
    Keywords: sampling schemes, previous tick method, quadratic variation, jumps, stochastic volatility, realized measures, high-frequency data
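    A minimal sketch contrasting the two calendar time sampling schemes discussed above, using scipy's Akima1DInterpolator; the synthetic tick series and the one-second grid are assumptions for illustration:

    import numpy as np
    from scipy.interpolate import Akima1DInterpolator

    rng = np.random.default_rng(1)
    tick_times = np.sort(rng.uniform(0, 600, 5000))   # irregular tick times (seconds)
    tick_prices = 100 * np.exp(np.cumsum(1e-4 * rng.standard_normal(5000)))

    grid = np.arange(tick_times[0], tick_times[-1], 1.0)  # 1-second calendar grid

    # previous-tick scheme: last observed price at or before each grid point
    idx = np.searchsorted(tick_times, grid, side="right") - 1
    prev_tick_prices = tick_prices[idx]

    # Akima cubic spline scheme: smooth interpolation through the tick series
    akima_prices = Akima1DInterpolator(tick_times, tick_prices)(grid)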

    Functional quantization-based stratified sampling methods

    In this article, we propose several quantization-based stratified sampling methods to reduce the variance of a Monte Carlo simulation. Theoretical aspects of stratification lead to a strong link between optimal quadratic quantization and the variance reduction that can be achieved with stratified sampling. We first put the emphasis on the consistency of quantization for partitioning the state space in stratified sampling methods, in both the finite and infinite dimensional cases. We show that the proposed quantization-based strata design has uniform efficiency among the class of Lipschitz continuous functionals. Then a stratified sampling algorithm based on product functional quantization is proposed for path-dependent functionals of multi-factor diffusions. The method is also available for other Gaussian processes such as the Brownian bridge or Ornstein-Uhlenbeck processes; we derive the Ornstein-Uhlenbeck case in detail. We also study the balance between the algorithmic complexity of the simulation and the variance reduction factor.
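    A minimal one-dimensional sketch of the stratification idea, where the strata are equiprobable quantile cells of N(0,1) rather than the optimal quadratic quantizer used in the article; the functional f and the allocation are illustrative assumptions:

    import numpy as np
    from scipy.stats import norm

    def stratified_estimate(f, n_strata=20, n_per_stratum=500, seed=0):
        """Estimate E[f(X)], X ~ N(0,1), by sampling within equiprobable strata."""
        rng = np.random.default_rng(seed)
        p = np.linspace(0.0, 1.0, n_strata + 1)   # stratum boundary probabilities
        means = []
        for lo, hi in zip(p[:-1], p[1:]):
            u = rng.uniform(lo, hi, n_per_stratum)
            means.append(f(norm.ppf(u)).mean())   # conditional sample via inverse CDF
        return np.mean(means)                     # strata have equal weight 1/n_strata

    # usage: a Lipschitz functional of X ~ N(0,1)
    print(stratified_estimate(lambda x: np.maximum(x - 0.5, 0.0)))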

    Aleatoric Uncertainty Modelling in Regression Problems using Deep Learning

    [eng] Nowadays, we live in a world that is intrinsically uncertain from our perspective. We do not know what will happen in the future but, to infer it, we build so-called models. These models are abstractions of the world we live in which allow us to conceive how the world works; they are, essentially, validated against our previous experience and discarded if their predictions prove to be incorrect in the future. This common scientific process of inference has several non-deterministic steps. First of all, our measuring instruments may be inaccurate. That is, the information we use a priori to know what will happen may already contain some irreducible error. Besides, our past experience in building the model may be biased (and, therefore, we would incorrectly infer the future, as the models would be based on unrepresentative data). On the other hand, our model itself may be an oversimplification of reality (which would lead us to unrealistic generalizations). Furthermore, the overall task of inferring the future may be downright non-deterministic. This often happens when the information we have a priori is incomplete or partial for the task to be performed (i.e. it depends on factors we cannot observe at the time of prediction) and we are, consequently, obliged to consider that what we want to predict is not a deterministic value. One way to model all of these uncertainties is through a probabilistic approach that mathematically formalizes these sources of uncertainty in order to create specific methods that capture them. Accordingly, the general aim of this thesis is to define a probabilistic approach that contributes to artificial intelligence based systems (specifically, deep learning) becoming robust and reliable systems capable of being applied to high-risk problems, where good generic performance is not enough and critical errors with high costs must also be avoided. In particular, the thesis highlights the current divergence in the literature when it comes to dividing and naming the different types of uncertainty, and proposes a procedure to follow. In addition, based on a real problem arising from the industrial nature of this thesis, the importance of investigating the last type of uncertainty is emphasized: the uncertainty that arises from the lack of a priori information with which to infer the future deterministically, the so-called aleatoric uncertainty. The thesis delves into different literature models for capturing aleatoric uncertainty using deep learning and analyzes their limitations, and it proposes new state-of-the-art approaches that resolve the limitations exposed during the thesis. As a result of applying aleatoric uncertainty modelling to real-world problems, the problem of modelling the uncertainty of black-box systems arises. Generically, a black-box system is a pre-existing predictive system which does not originally model uncertainty and about whose internals no requirements or assumptions are made. The goal, therefore, is to build a new system that wraps the black box and models the uncertainty of the original system. In this scenario, not all of the previously introduced aleatoric uncertainty modelling approaches can be considered, which implies that flexible methods such as Quantile Regression need to be modified in order to be applied in this context.
    Subsequently, the Quantile Regression study brings the need to solve a critical problem in the QR literature, the so-called crossing quantile phenomenon, which motivates the proposal of new additional models to solve it (a minimal sketch follows the abstract). Finally, all of the above research is summarized in visualization and evaluation methods for the predicted uncertainty, in order to produce uncertainty-tailored methods.
    [cat] We are surrounded by uncertainty. Every decision we make has a probability of turning out as we expect and, depending on it, we often condition our decisions. In the same way, autonomous systems must know how to interpret these uncertain scenarios. Even so, despite the great advances in the field of artificial intelligence, we currently find ourselves at a point where the inability of these systems to identify a priori a higher-risk scenario prevents their inclusion in solutions that could revolutionize society as we know it. The challenge is significant, and it is therefore essential that these systems learn to model and manage all sources of uncertainty. Starting from a probabilistic approach, this thesis proposes to formalize the different types of uncertainty and, in particular, focuses its research on the type known as aleatoric uncertainty, since it was identified as the main decisive uncertainty to be addressed in the original financial problem that motivated this industrial doctorate. Building on this investigation, the thesis proposes new models to improve the state of the art in aleatoric uncertainty modelling, and introduces a new problem, arising from a real industrial need, which appears when a predictive system already in production does not model uncertainty and one wants to model that uncertainty a posteriori and independently. This problem is denoted as black-box uncertainty modelling, and it motivates the proposal of new models specialized in preserving predictive advantages, such as those of Quantile Regression (QR), while adapting them to the black-box problem. Subsequently, the research in QR motivates the proposal of new models to solve a fundamental problem in the QR literature, known as the quantile crossing phenomenon, which appears when, in predicting several quantiles simultaneously, the order between quantiles is not preserved. Finally, all of the above research is summarized in visualization and evaluation methods for the reported uncertainty, in order to produce methods that use this extra information to make more robust decisions.
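    A minimal sketch of the quantile-regression building blocks discussed above: the pinball loss and one simple way to avoid crossing quantiles by predicting the lowest quantile plus non-negative increments. This is an illustrative construction under assumed names and shapes, not the models proposed in the thesis:

    import torch

    def pinball_loss(pred, target, taus):
        """Pinball (quantile) loss averaged over levels; pred: (batch, n_quantiles)."""
        err = target.unsqueeze(1) - pred
        taus = torch.as_tensor(taus).view(1, -1)
        return torch.maximum(taus * err, (taus - 1) * err).mean()

    class MonotoneQuantileHead(torch.nn.Module):
        """Predicts the lowest quantile and softplus increments, so outputs never cross."""
        def __init__(self, in_dim, n_quantiles):
            super().__init__()
            self.linear = torch.nn.Linear(in_dim, n_quantiles)

        def forward(self, x):
            raw = self.linear(x)
            base, deltas = raw[:, :1], torch.nn.functional.softplus(raw[:, 1:])
            return torch.cat([base, base + torch.cumsum(deltas, dim=1)], dim=1)

    # usage: three non-crossing quantiles of a toy heteroscedastic target
    taus = [0.1, 0.5, 0.9]
    x = torch.randn(256, 4)
    y = x[:, 0] + (0.5 + x[:, 1].abs()) * torch.randn(256)
    head = MonotoneQuantileHead(4, len(taus))
    pinball_loss(head(x), y, taus).backward()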