    JDNN: Jacobi Deep Neural Network for Solving Telegraph Equation

    In this article, a new deep learning architecture, the Jacobi Deep Neural Network (JDNN), is proposed to approximate numerical solutions to Partial Differential Equations (PDEs). The JDNN is capable of solving high-dimensional equations, and it is demonstrated here on various types of telegraph equations. The model uses orthogonal Jacobi polynomials as activation functions to increase the accuracy and stability of the method for solving partial differential equations. A finite difference time discretization is used to reduce the computational complexity of the given equation. The proposed scheme uses a Graphics Processing Unit (GPU) to accelerate the learning process by taking advantage of neural network platforms. Compared with existing methods, the numerical experiments show that the proposed approach can efficiently learn the dynamics of the physical problem.
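    As a rough illustration of the core idea (not the authors' implementation), the sketch below applies a fixed-degree Jacobi polynomial as the nonlinearity of a dense layer. The degree n, the parameters alpha and beta, the layer shapes, and the tanh squashing into the polynomial's natural domain [-1, 1] are all assumptions for the example.

```python
# A minimal sketch, assuming a fixed-degree Jacobi activation; not the JDNN code.
import numpy as np
from scipy.special import eval_jacobi

def jacobi_activation(x, n=3, alpha=1.0, beta=1.0):
    # Squash pre-activations into [-1, 1], the natural domain of Jacobi
    # polynomials, then evaluate the degree-n polynomial elementwise.
    return eval_jacobi(n, alpha, beta, np.tanh(x))

rng = np.random.default_rng(0)
W, b = rng.normal(size=(16, 2)), np.zeros(16)   # hypothetical layer shapes

def layer(x):
    return jacobi_activation(W @ x + b)

print(layer(np.array([0.5, -0.2])).shape)       # (16,)
```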

    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring, and other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical industry, bioprocess industry, and steel industry. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and huge, though not yet completely realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
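    To make the idea concrete, here is a minimal sketch of a data-driven Soft Sensor: a partial least squares regression (one common modelling technique in this field) that infers a hard-to-measure quality variable from routinely logged process signals. The data and variable names are synthetic and purely illustrative.

```python
# A minimal soft-sensor sketch on synthetic data; not tied to any case study above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                              # easy-to-measure signals (T, p, flows, ...)
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)    # lab-measured quality target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
sensor = PLSRegression(n_components=3).fit(X_tr, y_tr)     # the soft sensor itself
print("R^2 on held-out samples:", sensor.score(X_te, y_te))
```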

    A uniform estimation framework for state of health of lithium-ion batteries considering feature extraction and parameters optimization

    State of health is one of the most critical parameters characterizing the inner status of lithium-ion batteries in electric vehicles. In this study, a uniform estimation framework is proposed to simultaneously estimate the state of health and optimize the health features it relies on, which are extracted from the charging voltage curves within a fixed range. A fixed-size least squares support vector machine is employed to estimate the state of health with reduced computational load, and a genetic algorithm is applied to search for the optimal charging voltage range and the parameters of the fixed-size least squares support vector machine. In this manner, the raw data measured during the charging process can be fed directly into the estimation model without any pretreatment. The estimation performance of the proposed algorithm is validated over different voltage ranges and sampling times, and compared with three other traditional machine learning algorithms. The experimental results highlight that the presented estimation framework not only restricts the prediction error of the state of health to within 2%, but also features high robustness and universality.
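    The sketch below mirrors the shape of this pipeline under loose stand-ins: sklearn's SVR in place of the fixed-size least squares support vector machine, and a random hyperparameter search in place of the genetic algorithm. The features and state-of-health labels are synthetic assumptions, not the paper's data.

```python
# A hedged sketch of the estimation pipeline; SVR and RandomizedSearchCV are
# stand-ins for the paper's fixed-size LS-SVM and genetic algorithm.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(1)
# Hypothetical features, e.g. statistics of a fixed charging-voltage window.
X = rng.uniform(size=(200, 3))
soh = 1.0 - 0.3 * X[:, 0] + 0.05 * rng.normal(size=200)    # synthetic SOH labels

search = RandomizedSearchCV(
    SVR(kernel="rbf"),
    {"C": np.logspace(-1, 3, 50), "gamma": np.logspace(-3, 1, 50)},
    n_iter=20, cv=5, random_state=0,
).fit(X, soh)
print("best params:", search.best_params_)
```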

    Modeling and Optimization of the Microwave PCB Interconnects Using Macromodel Techniques

    The abstract is in the attachment.

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (Comment: 232 pages.)
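    As a concrete handle on the tensor train format, here is a minimal TT-SVD sketch in NumPy, a standard construction rather than code from the monograph: a dense tensor is factorized into a chain of 3-way cores by sequential truncated SVDs.

```python
# A minimal TT-SVD sketch; a standard construction, not the monograph's code.
import numpy as np

def tt_svd(tensor, max_rank):
    """Factorize a dense tensor into tensor-train cores via sequential SVDs."""
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, S.size)                   # truncate to the target TT-rank
        cores.append(U[:, :r].reshape(rank, d, r))  # 3-way core of shape (r_prev, d, r)
        mat, rank = S[:r, None] * Vt[:r], r         # carry the remainder forward
    cores.append(mat.reshape(rank, dims[-1], 1))    # last core closes the train
    return cores

T = np.random.default_rng(0).normal(size=(4, 5, 6, 7))
print([c.shape for c in tt_svd(T, max_rank=3)])
# [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```

    The chain of small cores replaces the dense tensor, which is how the TT format sidesteps the exponential storage cost the abstract refers to as the curse of dimensionality.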
