
    The use of neural networks for fitting complex kinetic data

    ESCAPE-3 Congress: 3rd European Symposium on Computer Aided Process Engineering, Graz, Austria, 1993. In this paper the use of neural networks for fitting complex kinetic data is discussed. To assess the validity of the approach, two different neural network architectures are compared with traditional kinetic identification methods for two cases: the homogeneous esterification reaction between propionic anhydride and 2-butanol, catalysed by sulphuric acid, and the heterogeneous liquid-liquid toluene mononitration by mixed acid. A large set of experimental data obtained by adiabatic and heat-flux calorimetry and by gas chromatography is used for the training of the neural networks. The results indicate that the neural network approach can be used to fit complex kinetic data and obtain an approximate reaction rate function in a limited amount of time, which can then be used for design improvement or optimisation when, owing to small production levels or time constraints, it is not possible to develop a detailed kinetic analysis.
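
    As a minimal sketch of the general idea described above (not the authors' architecture), the hypothetical example below fits a small multilayer perceptron to synthetic reaction rate data with scikit-learn; the Arrhenius-like generating function, concentrations, and noise level are placeholders standing in for calorimetric and chromatographic measurements.

    # Minimal sketch: approximate a reaction rate r(cA, cB, T) with a small MLP.
    # The synthetic "measurements" below stand in for calorimetric/GC data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    cA = rng.uniform(0.1, 2.0, n)        # anhydride concentration [mol/L] (placeholder)
    cB = rng.uniform(0.1, 2.0, n)        # alcohol concentration [mol/L] (placeholder)
    T = rng.uniform(300.0, 340.0, n)     # temperature [K]

    # Assumed second-order Arrhenius rate, used only to create example data.
    k = 1e6 * np.exp(-6000.0 / T)
    rate = k * cA * cB * (1.0 + 0.05 * rng.standard_normal(n))

    X = np.column_stack([cA, cB, T])
    X_scaled = StandardScaler().fit_transform(X)

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X_scaled, rate)
    print("training R^2:", model.score(X_scaled, rate))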

    A study of the classification of low-dimensional data with supervised manifold learning

    Supervised manifold learning methods learn data representations by preserving the geometric structure of the data while enhancing the separation between data samples from different classes. In this work, we propose a theoretical study of supervised manifold learning for classification. We consider nonlinear dimensionality reduction algorithms that yield linearly separable embeddings of the training data and present generalization bounds for algorithms of this type. A necessary condition for satisfactory generalization performance is that the embedding allow the construction of a sufficiently regular interpolation function in relation to the separation margin of the embedding. We show that for supervised embeddings satisfying this condition, the classification error decays at an exponential rate with the number of training samples. Finally, we examine the separability of supervised nonlinear embeddings that aim to preserve the low-dimensional geometric structure of the data based on graph representations. The proposed analysis is supported by experiments on several real data sets.
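
    To make the pipeline concrete, the sketch below embeds training data with a supervised method, extends the embedding to test points with a smooth interpolator, and classifies by nearest class centroid. It is only an illustration under stated assumptions: LDA stands in for the nonlinear graph-based embeddings the paper analyzes, and the RBF interpolator plays the role of the "sufficiently regular interpolation function"; dataset, kernel, and smoothing are arbitrary choices.

    # Sketch: supervised embedding + smooth out-of-sample extension + nearest centroid.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from scipy.interpolate import RBFInterpolator

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Supervised 2-D embedding of the training samples (stand-in for a nonlinear method).
    embed = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)
    Z_tr = embed.transform(X_tr)

    # Regular interpolation function from the ambient space to the embedding
    # (smoothing > 0 keeps the map from oscillating between nearby samples).
    interp = RBFInterpolator(X_tr, Z_tr, kernel="thin_plate_spline", smoothing=1e-3)
    Z_te = interp(X_te)

    # Classify out-of-sample points by the nearest class centroid in the embedding.
    centroids = np.stack([Z_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])
    pred = np.argmin(((Z_te[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    print("test accuracy:", (pred == y_te).mean())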

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data"; sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimensions. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
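
    As a toy illustration of the sample-starved regime discussed above (n fixed, p large), the sketch below screens a p x p sample correlation matrix for pairs exceeding a threshold; the data, the planted correlated pair, and the hard-coded threshold are placeholders, not the paper's phase-transition analysis.

    # Toy correlation-mining sketch in the sample-starved regime (p >> n):
    # screen the sample correlation matrix for variable pairs above a threshold.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 30, 2000                      # few samples, many variables
    X = rng.standard_normal((n, p))
    X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)   # plant one correlated pair

    R = np.corrcoef(X, rowvar=False)     # p x p sample correlation matrix
    np.fill_diagonal(R, 0.0)

    # Placeholder screening threshold; the paper characterizes how this must
    # scale with p (for fixed n) to keep false discoveries under control.
    rho = 0.8
    i, j = np.where(np.triu(np.abs(R) > rho, k=1))
    print("discovered pairs:", list(zip(i.tolist(), j.tolist())))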

    Wavelet probabilistic neural networks

    In this article, a novel wavelet probabilistic neural network (WPNN) is proposed: a generative-learning wavelet neural network that relies on wavelet-based estimation of the class probability densities. In this new approach, the number of basis functions employed is independent of the number of data inputs, and in that sense it overcomes the well-known drawback of traditional probabilistic neural networks (PNNs). Since the parameters of the proposed network are updated at a low and constant computational cost, it is particularly aimed at data stream classification and anomaly detection, in both off-line settings and online environments where the length of the data is assumed to be unconstrained. Both synthetic and real-world datasets are used to assess the proposed WPNN, and significant performance enhancements are attained compared to state-of-the-art algorithms.
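
    For context on the drawback mentioned above, a classical probabilistic neural network classifies a point by evaluating a kernel density estimate per class and choosing the class with the largest estimated density. The sketch below is a plain Gaussian Parzen PNN, shown only as a baseline for that idea: it uses one kernel per training sample (exactly the limitation the WPNN removes) and substitutes Gaussian kernels and a hand-picked bandwidth for the paper's wavelet density estimates.

    # Baseline Parzen/PNN classifier: one Gaussian kernel per training sample,
    # class decided by the largest estimated class-conditional density.
    # (The WPNN replaces this with a wavelet density estimate whose size does
    # not grow with the number of samples; this baseline sketch does not.)
    import numpy as np
    from sklearn.datasets import load_wine
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    sigma = 0.8                                     # kernel bandwidth (placeholder)

    def class_density(x, samples, sigma):
        # Mean of isotropic Gaussian kernels centered at the class's samples.
        d2 = ((samples - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2)).mean()

    classes = np.unique(y_tr)
    pred = np.array([
        classes[np.argmax([class_density(x, X_tr[y_tr == c], sigma) for c in classes])]
        for x in X_te
    ])
    print("test accuracy:", (pred == y_te).mean())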

    Universal Approximators for Direct Policy Search in Multi-Purpose Water Reservoir Management: A Comparative Analysis

    This study presents a novel approach that combines direct policy search and multi-objective evolutionary algorithms to solve water resources problems with high-dimensional state and control spaces involving multiple, conflicting, and non-commensurable objectives. In such a multi-objective context, the use of universal function approximators is generally suggested to provide flexibility to the shape of the control policy. In this paper, we comparatively analyze Artificial Neural Networks (ANN) and Radial Basis Functions (RBF) under different sets of inputs to estimate their scalability to high-dimensional state space problems. The multi-purpose Hoa Binh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while the ANN solutions are often Pareto-dominated by the RBF ones.
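
    The comparison hinges on how the control policy is parametrized. As a hypothetical sketch (not the authors' implementation), the code below defines an RBF policy that maps a normalized reservoir state to a normalized release decision from a flat parameter vector, which is the object a multi-objective evolutionary algorithm would tune; the state variables, dimensions, and bounds are placeholders.

    # Sketch of an RBF policy parametrization for direct policy search:
    # release = clip( sum_i w_i * exp(-sum_j ((s_j - c_ij)/b_ij)^2), 0, 1 ).
    # A multi-objective evolutionary algorithm would tune the flat vector `theta`.
    import numpy as np

    N_RBF, N_STATE = 4, 3                 # number of basis functions, state size (placeholders)

    def unpack(theta):
        theta = np.asarray(theta, dtype=float)
        c = theta[: N_RBF * N_STATE].reshape(N_RBF, N_STATE)                  # centers
        b = np.abs(theta[N_RBF * N_STATE: 2 * N_RBF * N_STATE]).reshape(N_RBF, N_STATE) + 1e-6  # radii
        w = theta[2 * N_RBF * N_STATE:]                                       # weights
        return c, b, w

    def rbf_policy(state, theta):
        """Map a normalized state in [0, 1]^N_STATE to a normalized release in [0, 1]."""
        c, b, w = unpack(theta)
        phi = np.exp(-(((state - c) / b) ** 2).sum(axis=1))
        return float(np.clip(w @ phi, 0.0, 1.0))

    # Example: a random candidate policy evaluated on one state
    # (normalized storage, inflow, day of year -- placeholder values).
    rng = np.random.default_rng(0)
    theta = rng.uniform(-1, 1, 2 * N_RBF * N_STATE + N_RBF)
    print(rbf_policy(np.array([0.6, 0.3, 0.5]), theta))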

    A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks

    We propose a nonparametric method for estimating derivative financial asset pricing formulae using learning networks. To demonstrate feasibility, we first simulate Black-Scholes option prices and show that learning networks can recover the Black-Scholes formula from a two-year training set of daily options prices, and that the resulting network formula can be used successfully to both price and delta-hedge options out-of-sample. For comparison, we estimate models using four popular methods: ordinary least squares, radial basis functions, multilayer perceptrons, and projection pursuit. To illustrate practical relevance, we also apply our approach to S&P 500 futures options data from 1987 to 1991.
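
    As a minimal sketch of the recovery experiment (not the paper's training setup or data), the code below simulates Black-Scholes call prices on a grid of moneyness S/K and time to maturity, fits a small multilayer perceptron to the normalized price C/K, and checks the learned delta against the analytic delta by finite differences; the volatility, interest rate, and network size are arbitrary placeholders.

    # Sketch: fit a learning network to simulated Black-Scholes call prices
    # (as a function of moneyness S/K and time to maturity), then recover
    # the delta of the fitted pricing function by finite differences.
    import numpy as np
    from scipy.stats import norm
    from sklearn.neural_network import MLPRegressor

    def bs_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    r, sigma, K = 0.05, 0.2, 1.0                    # placeholder parameters
    rng = np.random.default_rng(0)
    m = rng.uniform(0.8, 1.2, 4000)                 # moneyness S/K
    tau = rng.uniform(0.05, 1.0, 4000)              # time to maturity (years)
    price = bs_call(m * K, K, tau, r, sigma) / K    # normalized price C/K

    X = np.column_stack([m, tau])
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    net.fit(X, price)

    # Delta of the fitted formula by finite differences vs. the analytic delta.
    m0, tau0, eps = 1.0, 0.5, 1e-3
    delta_net = (net.predict([[m0 + eps, tau0]]) - net.predict([[m0 - eps, tau0]]))[0] / (2 * eps)
    d1 = (np.log(m0) + (r + 0.5 * sigma ** 2) * tau0) / (sigma * np.sqrt(tau0))
    print("network delta:", round(float(delta_net), 3), " analytic delta:", round(float(norm.cdf(d1)), 3))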