27,807 research outputs found

    Asymptotic analysis of first passage time in complex networks

    The first passage time (FPT) distribution for random walks on complex networks is calculated through an asymptotic analysis. For a network of size N with short relaxation time τ≪N, the computed mean first passage time (MFPT), which is the inverse of the decay rate of the FPT distribution, is inversely proportional to the degree of the destination node. These results are verified numerically on paradigmatic networks with excellent agreement. We show that the analytical results are valid for networks with a short relaxation time and a high mean degree, conditions that hold for many real networks.
    Comment: 6 pages, 4 figures, 1 table
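    As a quick numerical illustration (not from the paper), the sketch below simulates random walks on a Barabási-Albert graph with networkx and compares the simulated MFPT to a few target nodes against the fast-mixing prediction MFPT ≈ 2E/k, where E is the number of edges and k the target's degree; the graph and parameters are arbitrary choices.

```python
import random
import networkx as nx

def simulate_mfpt(G, target, n_walks=200, max_steps=100_000):
    """Average number of steps for a random walk started at a uniformly
    random node to first reach `target`."""
    nodes = list(G.nodes())
    total = 0
    for _ in range(n_walks):
        node = random.choice(nodes)
        steps = 0
        while node != target and steps < max_steps:
            node = random.choice(list(G.neighbors(node)))
            steps += 1
        total += steps
    return total / n_walks

random.seed(1)
G = nx.barabasi_albert_graph(500, 4, seed=1)
for target in [0, 100, 400]:  # low-index nodes tend to be hubs in BA graphs
    # fast-mixing prediction: MFPT ~ 1/pi_target = 2|E| / k_target
    predicted = 2 * G.number_of_edges() / G.degree(target)
    print(f"node {target}: degree={G.degree(target)}, "
          f"simulated={simulate_mfpt(G, target):.0f}, predicted={predicted:.0f}")
```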

    Testing for Neglected Nonlinearity in Cointegrating Relationships

    This paper proposes pure significance tests for the absence of nonlinearity in cointegrating relationships. No assumption about the functional form of the nonlinearity is made. It is envisaged that applying such tests could form the first step towards specifying a nonlinear cointegrating relationship for empirical modelling. The asymptotic and small-sample properties of our tests are investigated, with special attention paid to the role of nuisance parameters and a potential resolution using the bootstrap.
    Keywords: Cointegration, Nonlinearity, Neural networks, Bootstrap
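    The flavor of such a test can be sketched as follows. This is not the authors' procedure, only an illustration in the spirit of neural-network nonlinearity tests (Lee-White-Granger): regress the linear cointegrating residuals on random logistic transforms of the regressor and bootstrap the resulting n·R² statistic under the linear null. The nuisance-parameter subtleties the paper studies are glossed over, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_stat(y, x, n_hidden=5):
    """n*R^2-type statistic: do random logistic transforms of x explain
    the residuals of the linear cointegrating regression of y on x?"""
    X = np.column_stack([np.ones_like(x), x])
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    g = rng.normal(size=(n_hidden, 2))                       # random weights
    H = 1.0 / (1.0 + np.exp(-(g[:, :1] + g[:, 1:] * x).T))   # hidden units
    Z = np.column_stack([X, H])
    u2 = u - Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    return len(y) * (1.0 - (u2 @ u2) / (u @ u))

# synthetic data: linear cointegration plus a nonlinear distortion
n = 400
x = np.cumsum(rng.normal(size=n))                 # I(1) regressor
y = 1.0 + 0.5 * x + 0.3 * np.tanh(x) + rng.normal(size=n)
stat = nn_stat(y, x)

# crude residual bootstrap under the linear null (ignores the
# nuisance-parameter issues that the paper addresses)
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
boot = [nn_stat(X @ beta + rng.choice(resid, n, replace=True), x)
        for _ in range(200)]
print("statistic:", stat, "bootstrap p-value:", np.mean(np.array(boot) >= stat))
```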

    Nonparametric Neural Network Estimation of Lyapunov Exponents and a Direct Test for Chaos

    This paper derives the asymptotic distribution of the nonparametric neural network estimator of the Lyapunov exponent in a noisy system. Positivity of the Lyapunov exponent is an operational definition of chaos. We introduce a statistical framework for testing the chaotic hypothesis based on the estimated Lyapunov exponents and a consistent variance estimator. A simulation study evaluating small-sample performance is reported. We also apply our procedures to daily stock return data. In most cases, the hypothesis of chaos in the stock return series is rejected at the 1% level, with the exception of some higher-power-transformed absolute returns.
    Keywords: Artificial neural networks, nonlinear dynamics, nonlinear time series, nonparametric regression, sieve estimation
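    A stripped-down version of the general idea (not the paper's sieve estimator or its asymptotic theory) can be sketched in a few lines: fit a neural net to the one-step map of a noisy chaotic series and average the log absolute slope of the fitted map along the orbit. The logistic-map data, network size, and finite-difference derivative are all illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# noisy logistic map; the clean map's Lyapunov exponent is ln 2 ~ 0.693
x = np.empty(2000)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = np.clip(4 * x[t] * (1 - x[t]) + 1e-3 * rng.normal(), 0.0, 1.0)

# fit the one-step map x_{t+1} = f(x_t) with a small neural network
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(x[:-1].reshape(-1, 1), x[1:])

# Lyapunov exponent estimate: average log |f'(x_t)| along the orbit,
# with f' obtained by central finite differences on the fitted net
h = 1e-4
xs = x[:-1]
slope = (net.predict((xs + h).reshape(-1, 1))
         - net.predict((xs - h).reshape(-1, 1))) / (2 * h)
print("estimated Lyapunov exponent:",
      np.mean(np.log(np.abs(slope) + 1e-12)))  # positive suggests chaos
```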

    Incorporating prior financial domain knowledge into neural networks for implied volatility surface prediction

    In this paper we develop a novel neural network model for predicting the implied volatility surface. Prior financial domain knowledge is taken into account: a new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price, and financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
    Comment: 8 pages, SIGKDD 202
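    A hedged sketch of one ingredient, embedding a financial condition in the loss, is shown below. The paper's architecture, smile-adapted activation, and full constraint set are not reproduced; this only illustrates adding a soft calendar-arbitrage penalty (total implied variance non-decreasing in maturity) to a data loss, using PyTorch and synthetic placeholder data.

```python
import torch

torch.manual_seed(0)

def iv_loss(model, k, tau, iv_obs, lam=1.0):
    """Data loss plus a soft no-calendar-arbitrage penalty: total implied
    variance w = iv^2 * tau should be non-decreasing in maturity tau."""
    tau = tau.requires_grad_(True)
    iv = model(torch.stack([k, tau], dim=1)).squeeze(-1)
    data_loss = torch.mean((iv - iv_obs) ** 2)
    w = iv ** 2 * tau
    dw_dtau = torch.autograd.grad(w.sum(), tau, create_graph=True)[0]
    penalty = torch.mean(torch.relu(-dw_dtau))   # penalize decreasing w
    return data_loss + lam * penalty

model = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
k = torch.randn(128)              # log-moneyness (synthetic placeholder)
tau = torch.rand(128) + 0.05      # maturities in years (placeholder)
iv_obs = 0.2 + 0.1 * k ** 2       # toy smile-shaped targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    iv_loss(model, k, tau, iv_obs).backward()
    opt.step()
```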

    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. In the process, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case.
    Comment: 24 pages, 4 figures
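    As a rough illustration of one of the three mechanisms, the sketch below implements a simplified localized-learning rule for a one-dimensional SOM: the learning rate of each update is scaled by a power of a crude estimate of the input density at the winner, which shifts the map's magnification. The density estimator, parameter values, and 1-D setting are simplifications, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def som_localized(data, n_units=20, epochs=30, eps0=0.1, sigma=2.0, m=0.5):
    """1-D SOM whose learning rate is scaled by (local density)^m at the
    winner -- a crude stand-in for localized learning."""
    W = rng.uniform(data.min(), data.max(), size=n_units)
    idx = np.arange(n_units)
    for _ in range(epochs):
        for x in rng.permutation(data):
            s = np.argmin(np.abs(W - x))                    # winner unit
            dens = np.mean(np.abs(data - W[s]) < 0.05)      # density estimate
            eps = eps0 * (dens + 1e-9) ** m                 # localized rate
            h = np.exp(-((idx - s) ** 2) / (2 * sigma ** 2))
            W += eps * h * (x - W)                          # neighborhood update
    return np.sort(W)

data = rng.beta(2, 5, size=2000)   # non-uniform 1-D input density
print(som_localized(data))         # weights crowd where the density is high
```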

    Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays

    This Letter is concerned with the global asymptotic stability analysis problem for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time-delays. By utilizing a Lyapunov–Krasovskii functional, using the well-known S-procedure, and conducting stochastic analysis, we show that the addressed neural networks are robustly globally asymptotically stable if a convex optimization problem is feasible. The stability criteria are derived in terms of linear matrix inequalities (LMIs), which can be solved effectively by standard numerical packages. The main results are also extended to the multiple-time-delay case. Two numerical examples are given to demonstrate the usefulness of the proposed global stability condition.
    This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
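    To make the LMI-feasibility idea concrete, here is a hedged sketch of a simpler, standard delay-independent condition (not the Letter's criteria): for x'(t) = -A x(t) + W f(x(t-τ)) with 1-Lipschitz activations, global asymptotic stability follows if matrices P, Q ≻ 0 and an S-procedure multiplier α > 0 make a block matrix negative definite. The feasibility check uses cvxpy; A and W are arbitrary example values.

```python
import cvxpy as cp
import numpy as np

A = np.diag([2.0, 3.0])                  # self-decay rates (example values)
W = np.array([[0.5, -0.4], [0.3, 0.6]])  # delayed connection weights
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
alpha = cp.Variable(nonneg=True)         # S-procedure multiplier

# LMI from V = x'Px + integral of x'Qx over the delay window, with the
# sector bound f(x)'f(x) <= x'x handled via the S-procedure
M = cp.bmat([
    [-A.T @ P - P @ A + Q, np.zeros((n, n)), P @ W],
    [np.zeros((n, n)), -Q + alpha * np.eye(n), np.zeros((n, n))],
    [W.T @ P, np.zeros((n, n)), -alpha * np.eye(n)],
])
Ms = (M + M.T) / 2                       # symmetrize for the PSD constraint
eps = 1e-6
prob = cp.Problem(cp.Minimize(0), [
    P >> eps * np.eye(n),
    Q >> eps * np.eye(n),
    Ms << -eps * np.eye(3 * n),
])
prob.solve(solver=cp.SCS)
print("LMI feasible, hence globally asymptotically stable:",
      prob.status == cp.OPTIMAL)
```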

    On the number of limit cycles in asymmetric neural networks

    Understanding the mechanisms underlying the functioning of complexly interconnected networks is one of the main goals of neuroscience. In this work, we investigate how the structure of recurrent connectivity influences the ability of a network to store patterns, and in particular limit cycles, by modeling a recurrent neural network with McCulloch-Pitts neurons as a content-addressable memory system. A key role in such models is played by the connectivity matrix, which, for neural networks, corresponds to a schematic representation of the "connectome": the set of chemical synapses and electrical junctions among neurons. The shape of the recurrent connectivity matrix plays a crucial role in the process of storing memories. This relation was already explored in the work of Tanaka and Edwards, which presents a theoretical approach to evaluating the mean number of fixed points in a fully connected model in the thermodynamic limit. Interestingly, further studies on the same kind of model, but with a finite number of nodes, have shown how the symmetry parameter influences the types of attractors featured in the system. Our study extends the work of Tanaka and Edwards by providing a theoretical evaluation of the mean number of attractors of any given length L for different degrees of symmetry in the connectivity matrices.
    Comment: 35 pages, 12 figures
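    For intuition at small sizes, attractors and their cycle lengths can be counted by brute force. The sketch below (finite-size numerics, not the paper's theoretical evaluation) enumerates all states of an 8-neuron McCulloch-Pitts network under synchronous updates s → sign(Js) and tallies cycles of each length L; the linear interpolation used here for the symmetry parameter is an illustrative choice.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 8
B = rng.normal(size=(N, N))
eta = 0.0   # symmetry parameter: 1 -> symmetric J, -1 -> antisymmetric J
J = (1 + eta) / 2 * (B + B.T) / 2 + (1 - eta) / 2 * (B - B.T) / 2
np.fill_diagonal(J, 0)

def step(s):
    """Synchronous McCulloch-Pitts update s -> sign(J s)."""
    return tuple(1 if v >= 0 else -1 for v in J @ np.array(s))

cycle_counts, seen = {}, set()
for s0 in itertools.product([-1, 1], repeat=N):
    trajectory, s = {}, s0
    while s not in trajectory and s not in seen:
        trajectory[s] = len(trajectory)
        s = step(s)
    if s in trajectory:                          # found a new cycle
        L = len(trajectory) - trajectory[s]      # cycle length
        cycle_counts[L] = cycle_counts.get(L, 0) + 1
    seen.update(trajectory)
print(cycle_counts)   # {L: number of attractors of length L}
```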