    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (232 pages.)
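
    The compression described above is easiest to see in the TT-SVD algorithm, which factors a dense tensor into a chain of small third-order cores by repeated truncated SVDs. The following is a minimal NumPy sketch of that standard algorithm, not code from the monograph itself; the tensor shape and the rank cap are arbitrary illustrative choices.

        import numpy as np

        def tt_svd(tensor, max_rank):
            """Decompose a dense ndarray into tensor-train cores via repeated truncated SVDs."""
            dims = tensor.shape
            cores, r_prev = [], 1
            mat = tensor.reshape(r_prev * dims[0], -1)
            for k, n in enumerate(dims[:-1]):
                U, s, Vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(s))                  # truncate to the rank cap
                cores.append(U[:, :r].reshape(r_prev, n, r))
                # Carry the remainder forward, regrouped for the next mode.
                mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(mat.reshape(r_prev, dims[-1], 1))
            return cores

        # A 4th-order tensor with 10^4 entries compresses into four small cores.
        X = np.random.rand(10, 10, 10, 10)
        print([c.shape for c in tt_svd(X, max_rank=5)])
        # -> [(1, 10, 5), (5, 10, 5), (5, 10, 5), (5, 10, 1)]

    For an order-d tensor with mode size n and TT ranks bounded by r, storage falls from n^d entries to O(d n r^2), which is exactly the scalability the abstract refers to.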

    Deep learning methods for protein torsion angle prediction

    Background: Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. Results: We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), the recurrent variants chosen because protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM, and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, about 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Conclusions: Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architectures perform slightly better than the deep feed-forward architectures, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
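
    One detail worth making concrete is that the MAE of dihedral angles must respect their 360° periodicity (for example, -175° and 170° differ by only 15°). The paper's evaluation code is not shown here; below is a minimal NumPy sketch of such a periodic MAE, with all angle values hypothetical.

        import numpy as np

        def angular_mae(pred_deg, true_deg):
            """MAE between angle arrays in degrees, honoring 360-degree wrap-around."""
            diff = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
            return np.mean(np.minimum(diff, 360.0 - diff))

        # Hypothetical phi predictions vs. true values for a three-residue stretch.
        pred = [-60.0, -65.0,  170.0]
        true = [-57.0, -70.0, -175.0]      # -175 and 170 are only 15 degrees apart
        print(angular_mae(pred, true))     # -> 7.666...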

    Market-based transmission congestion management using extended optimal power flow techniques

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 5/9/2001.
    This thesis describes research into the problem of transmission congestion management. The causes, remedies, pricing methods, and other issues of transmission congestion are briefly reviewed. The aim of this research is to develop market-based approaches that manage transmission congestion in the real-time, short-run, and long-run timeframes efficiently, economically, and fairly. Extended OPF techniques play key roles in many aspects of electricity markets; here, Primal-Dual Interior Point Linear Programming and Quadratic Programming are applied to solve the various congestion management optimization problems proposed in the thesis. A coordinated real-time optimal dispatch method for unbundled electricity markets is proposed for system balancing and congestion management. With this method, almost all available resources in the different electricity markets, including operating reserves and bilateral transactions, can be used to eliminate real-time congestion according to their bids into the balancing market. Spot pricing theory is applied to real-time congestion pricing. Under the same framework, a Lagrangian Relaxation based region decomposition OPF algorithm is presented to handle real-time active power congestion management across multiple regions. Inter- and intra-regional congestion can be relieved without the regional ISOs exchanging any information other than the Lagrange multipliers. In the day-ahead spot market, a new optimal dispatch method is proposed for congestion and price risk management, particularly for bilateral transaction curtailment. Individual revenue adequacy constraints, which include payments from financial instruments, are incorporated into the original dispatch problem, and an iterative procedure is applied to solve this special optimization problem with both primal and dual variables appearing in its constraints. Finally, an optimal Financial Transmission Rights (FTR) auction model is presented as an approach to long-term congestion management. Two types of series FACTS devices are incorporated into the auction problem using the Power Injection Model to maximize the auction revenue, and the TCSC's operating limits are treated in a new way that keeps the auction problem linear.
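
    To make the congestion mechanism concrete, here is a minimal sketch of a two-bus DC dispatch with one congested line, solved as a plain LP with SciPy rather than with the thesis's primal-dual interior point code; all network data are invented for illustration.

        from scipy.optimize import linprog

        c = [10.0, 30.0]          # $/MWh offers of the generators at bus 1 and bus 2
        d = 150.0                 # MW load at bus 2
        line_limit = 100.0        # MW thermal limit of the single line 1 -> 2

        # Power balance: g1 + g2 = d.
        A_eq, b_eq = [[1.0, 1.0]], [d]
        # Congestion constraint: the flow on line 1 -> 2 equals g1 <= line_limit.
        A_ub, b_ub = [[1.0, 0.0]], [line_limit]
        bounds = [(0.0, 200.0), (0.0, 200.0)]   # generator capacity limits

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        g1, g2 = res.x
        print(f"g1 = {g1:.0f} MW, g2 = {g2:.0f} MW, cost = ${res.fun:.0f}")
        # -> g1 = 100 MW, g2 = 50 MW, cost = $2500

    The cheap remote unit is capped at the 100 MW line limit and the remaining 50 MW is served by the expensive local unit; the dual variables of the binding constraints are what spot pricing theory turns into congestion prices.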

    Review of Neural Network Algorithms

    The artificial neural network is the core tool with which machine learning realizes intelligence, and it has shown its advantages in fields such as sound and image processing. Since the beginning of the 21st century, advances in science and technology, together with the pursuit of artificial intelligence, have driven an upsurge in artificial neural network research. This paper first introduces the application background and development history of artificial neural networks in order to clarify the research context. Five branches, the single-layer perceptron, the linear neural network, the BP neural network, the Hopfield neural network, and the deep neural network, are then analyzed in detail together with their applications. The analysis shows that artificial neural networks are developing in a more general, flexible, and intelligent direction. Finally, future developments of artificial neural networks in training modes, learning modes, functional extensions, and technology combinations are discussed.
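
    As a concrete instance of the first branch reviewed, the sketch below trains a single-layer perceptron with the classic perceptron learning rule on a toy AND dataset; the dataset, learning rate, and epoch count are illustrative choices, not taken from the paper.

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0, 0, 0, 1])         # logical AND is linearly separable

        w, b, lr = np.zeros(2), 0.0, 0.1   # weights, bias, learning rate
        for _ in range(20):                # a few epochs suffice here
            for xi, target in zip(X, y):
                pred = 1 if xi @ w + b > 0 else 0
                # Perceptron rule: nudge the weights by the signed error.
                w += lr * (target - pred) * xi
                b += lr * (target - pred)

        print([1 if xi @ w + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]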

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques, bridging the boundary between theory and application and developing novel, theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
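
    The learn-on-arrival behavior described above can be illustrated with the classic least-mean-squares (LMS) adaptive filter, a staple of sensor signal processing; this sketch is purely illustrative and much simpler than the nonlinear methods the book develops.

        import numpy as np

        rng = np.random.default_rng(0)
        true_w = np.array([0.5, -0.3, 0.2])   # unknown sensor channel to identify
        w = np.zeros(3)                        # adaptive filter taps
        mu = 0.05                              # step size

        x_buf = np.zeros(3)                    # sliding window of recent samples
        for _ in range(2000):                  # stream of incoming sensor data
            x_buf = np.roll(x_buf, 1)
            x_buf[0] = rng.standard_normal()   # a new, unseen sample arrives
            d = true_w @ x_buf + 0.01 * rng.standard_normal()   # noisy reference
            e = d - w @ x_buf                  # prediction error
            w += mu * e * x_buf                # LMS update: learn from the new data

        print(np.round(w, 2))                  # ~ [ 0.5, -0.3,  0.2]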