    Pricing options and computing implied volatilities using neural networks

    This paper proposes a data-driven approach, by means of an Artificial Neural Network (ANN), to value financial options and to calculate implied volatilities with the aim of accelerating the corresponding numerical methods. With ANNs being universal function approximators, this method trains an optimized ANN on a data set generated by a sophisticated financial model, and runs the trained ANN as a fast surrogate for the original solver. We test this approach on three different types of solvers: the analytic solution of the Black-Scholes equation, the COS method for the Heston stochastic volatility model, and Brent's iterative root-finding method for the calculation of implied volatilities. The numerical results show that the ANN solver can reduce the computing time significantly.
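
    As a point of reference for the solvers mentioned above, the sketch below (an illustration under stated assumptions, not the paper's code) shows the Black-Scholes analytic call price and Brent's root-finding method for implied volatility; in the paper's setup an ANN is trained on data generated by such solvers and then queried in their place.

```python
# A minimal sketch (not the paper's code) of two of the reference solvers the
# ANN is trained to approximate: the Black-Scholes analytic formula and
# Brent's root-finding method for implied volatility.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Recover the volatility that reproduces an observed call price."""
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

# Generate a small synthetic training set (inputs -> prices); in the paper's
# setup an ANN is fitted to such data and then queried instead of the solver.
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.05, 0.8, 1000)
prices = bs_call(100.0, 100.0, 1.0, 0.02, sigmas)
print(implied_vol(prices[0], 100.0, 100.0, 1.0, 0.02), sigmas[0])
```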

    On the data-driven COS method

    In this paper, we present the data-driven COS method, ddCOS. It is a Fourier-based financial option valuation method which assumes the availability of asset data samples: a characteristic function of the underlying asset's probability density function is not required. As such, the presented technique represents a generalization of the well-known COS method [1]. The convergence of the proposed method is in line with Monte Carlo methods for pricing financial derivatives. The ddCOS method is thus particularly interesting for density recovery and for the efficient computation of the option sensitivities Delta and Gamma. These are often used in risk management, and can be obtained at a higher accuracy with ddCOS than with plain Monte Carlo methods.
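
    The sketch below illustrates what is presumably the core data-driven step: estimating the Fourier-cosine coefficients of a density from samples rather than from a characteristic function, and using them for density recovery. It is a minimal sketch of the idea, not the ddCOS implementation; the truncation range [a, b] and the number of terms are illustrative choices.

```python
# A minimal sketch of the data-driven idea behind ddCOS (an assumed rendering
# of the core mechanism, not the paper's implementation): the Fourier-cosine
# density coefficients are estimated directly from samples.
import numpy as np

def ddcos_density(samples, a, b, n_terms=64):
    """Recover a density on [a, b] from samples via empirical cosine coefficients."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    # A_k ~ (2/(b-a)) * E[cos(k*pi*(Y-a)/(b-a))], estimated by a sample mean
    A = 2.0 / (b - a) * np.mean(np.cos(np.outer(u, samples - a)), axis=1)
    A[0] *= 0.5  # the k = 0 term enters the cosine series with half weight

    def f(y):
        y = np.atleast_1d(y)
        return np.cos(np.outer(y - a, u)) @ A
    return f

# Example: recover a standard normal density from 100k samples
rng = np.random.default_rng(1)
f = ddcos_density(rng.standard_normal(100_000), a=-6.0, b=6.0)
print(f(0.0))  # close to 1/sqrt(2*pi) ~ 0.3989
```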

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms: it can be applied to neurons firing multiple spikes and, in principle, to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding.
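
    The article's learning rule itself is not reproduced here; as a minimal illustration of the kind of neuron model such algorithms target, the sketch below simulates a leaky integrate-and-fire neuron emitting multiple spikes (the time constant and threshold values are illustrative assumptions).

```python
# A minimal leaky integrate-and-fire (LIF) simulation, shown only to illustrate
# the kind of multi-spike neuron model targeted by such learning rules; it is
# not the article's algorithm.
import numpy as np

def lif_spike_times(current, dt=1e-4, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times of a LIF neuron driven by an input current trace."""
    v = v_rest
    spikes = []
    for i, I in enumerate(current):
        v += dt / tau * (-(v - v_rest) + I)   # leaky integration
        if v >= v_thresh:                     # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset                       # reset after each spike
    return spikes

# A constant supra-threshold input produces a regular multi-spike train
t = np.arange(0, 0.5, 1e-4)
print(lif_spike_times(np.full_like(t, 1.5)))
```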

    Temporal Convolution in Spiking Neural Networks: a Bio-mimetic Paradigm

    Recent spectacular advances in Artificial Intelligence (AI) can, in large part, be attributed to developments in Deep Learning (DL). In essence, DL is not a new concept. In many respects, DL shares characteristics of "traditional" types of Neural Network (NN). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features. Each layer convolves the output of the previous one, simplifying it and applying a function to a subsection of that layer. Deep Learning's remarkable success can be attributed to dedicated researchers experimenting with many different groundbreaking techniques, but some of its triumph can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the internet and the rapid development of GPUs. As such, DL is an almost perfect fit for today's technologies. However, DL is only a very rough approximation of how the brain works. More recently, Spiking Neural Networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes of data rather than as a continuous weight or a differentiable activation function. In practical terms this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g. in terms of calculations needed and therefore overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not "fit" well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a "deep" level. The difficulty is that, in essence, "deep" and "spiking" refer to fundamentally different characteristics of a neural network: "spiking" focuses on the activation of individual neurons, whereas "deep" concerns itself with the network architecture itself [1]. However, these two methods are in fact not contradictory, but have so far been developed in isolation from each other, due to the prevailing technology driving each technique and the fundamental conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, new technologies are going to have to be developed, and the contradictory aspects of DL and SNNs are going to have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways [2], one of the most exciting being the creation of a specific hierarchical learning paradigm in Recurrent SNNs (RSNNs) called e-prop [3]. However, this paper posits that such attempts have been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if it is included in a new model then the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell and the unacknowledged role it plays in neural processing. In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms - for example the "weight transport" problem - but also act as the "glue" (pun intended) between these two paradigms.
    This idea has a direct parallel with the idea of convolution in DL, but with the added dimension of time: in this new paradigm it is important not only where events happen but also when they occur. The synergy between these two powerful paradigms gives hints at the direction and potential of what could be an important part of the next wave of development in AI.
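
    As a hypothetical illustration of convolution extended along the time axis (a generic sketch, not the paper's glial or e-prop model), the code below filters a binary spike train with a causal exponential kernel, so that the output depends on when as well as where spikes occur.

```python
# A minimal, hypothetical illustration of "convolution with a time dimension":
# a binary spike train filtered with a causal exponential kernel. This is a
# generic sketch, not the paper's glial/e-prop model.
import numpy as np

dt = 1e-3                                      # 1 ms time step
t = np.arange(0, 0.2, dt)
spikes = np.zeros_like(t)
spikes[[20, 25, 120, 160]] = 1.0               # a sparse spike train

tau = 0.02                                     # 20 ms synaptic time constant
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau)
trace = np.convolve(spikes, kernel)[: len(t)]  # causal temporal convolution

print(trace[:30].round(2))                     # rises at each spike, then decays
```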

    Image Processing with Spiking Neuron Networks

    Artificial neural networks have been well developed so far. The first two generations of neural networks have had many successful applications. Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks, which have the potential to solve problems related to biological stimuli. They derive their strength and interest from an accurate modelling of synaptic interactions between neurons, taking into account the time of spike emission. SNNs surpass the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Moreover, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this chapter, we present how SNNs can be applied effectively to image clustering, segmentation and edge detection. The results obtained confirm the validity of the approach.
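
    One common input stage for spike-based image processing is latency (time-to-first-spike) coding, in which brighter pixels fire earlier; the sketch below illustrates that general idea only and is not the chapter's specific pipeline.

```python
# A minimal sketch of a common input stage for spike-based image processing:
# time-to-first-spike (latency) coding, where brighter pixels fire earlier.
# This illustrates the general idea only, not the chapter's exact pipeline.
import numpy as np

def latency_encode(image, t_max=0.1):
    """Map pixel intensities in [0, 1] to spike times in [0, t_max]."""
    intensity = np.clip(image, 0.0, 1.0)
    return (1.0 - intensity) * t_max   # bright pixel -> early spike

img = np.array([[0.0, 0.5], [0.9, 1.0]])
print(latency_encode(img))             # dark pixels spike late, bright ones early
```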

    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories, and actually constitute a symbolic coding in important explicit examples (the so-called gIF models). On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures to describe the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.
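
    For reference, the standard maximum-entropy form consistent with this setting is sketched below: among distributions reproducing the empirical averages of prescribed observables of spike blocks, the Gibbs distribution maximizes entropy (the notation here is generic, not the paper's).

```latex
% Standard maximum-entropy (Gibbs) form: among all distributions reproducing
% the empirical averages of prescribed observables \phi_k of spike blocks
% \omega, the entropy-maximizing one is
\begin{equation}
  \mu_\lambda(\omega) \;=\; \frac{1}{Z(\lambda)}
  \exp\!\Big(\sum_k \lambda_k\,\phi_k(\omega)\Big),
  \qquad
  Z(\lambda) \;=\; \sum_{\omega} \exp\!\Big(\sum_k \lambda_k\,\phi_k(\omega)\Big),
\end{equation}
% where the multipliers \lambda_k are chosen so that the expectation
% \mathbb{E}_{\mu_\lambda}[\phi_k] matches the observed empirical average.
```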

    Cell Microscopic Segmentation with Spiking Neuron Networks

    Spiking Neuron Networks (SNNs) surpass the computational power of neural networks made of threshold or sigmoidal units. Indeed, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this paper, we present how SNNs can be applied effectively to microscopic cell image segmentation. The results obtained confirm the validity of the approach. The strategy is performed on cytological color images. Quantitative measures are used to evaluate the resulting segmentations.
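
    The paper's specific quantitative measures are not listed in the abstract; as one common example of such a measure, the sketch below computes the Dice coefficient between a predicted binary segmentation and a ground-truth mask.

```python
# A minimal sketch of one common quantitative measure for evaluating a binary
# segmentation against a ground-truth mask (the Dice coefficient). This is a
# generic illustration; the paper's specific evaluation measures are assumed.
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, truth))  # 2*2/(3+3) ~ 0.667
```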