
    Model migration neural network for predicting battery aging trajectories

    Accurate prediction of batteries' future degradation is key to relieving users' anxiety about battery lifespan and electric vehicles' driving range. Technical challenges arise from the highly nonlinear dynamics of battery aging. In this paper, a feed-forward migration neural network is proposed to predict batteries' aging trajectories. Specifically, a base model that describes the capacity decay over time is first established from an existing battery aging dataset. This base model is then transformed by an input-output slope-and-bias-correction (SBC) structure to capture the degradation of the target cell. To enhance the model's nonlinear transfer capability, the SBC model is further integrated into a four-layer neural network and easily trained via the gradient correlation algorithm. The proposed migration neural network is experimentally verified on four different commercial batteries. The prediction RMSEs are all below 2.5% when only the first 30% of the aging trajectories are used for neural network training. In addition, illustrative results demonstrate that a small feed-forward neural network (down to a 1-5-5-1 structure) is sufficient for battery aging trajectory prediction.
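
    To make the slope-and-bias-correction (SBC) idea concrete, here is a minimal sketch that migrates a hypothetical exponential base capacity model to a target cell by fitting a slope and a bias on early-life data. The exponential form, all parameter values, and the least-squares fit are illustrative assumptions; the paper embeds the correction inside a small neural network rather than solving it in closed form.

```python
import numpy as np

def base_model(t, a=1.0, b=0.02):
    """Hypothetical base capacity-fade curve fitted on a source dataset:
    capacity fraction as a function of cycle number t."""
    return a * np.exp(-b * t)

def sbc_correct(t, slope, bias):
    """Slope-and-bias correction: migrate the base model toward a target cell."""
    return slope * base_model(t) + bias

# Fit slope and bias on the first 30% of the target cell's trajectory
# (least squares here; the paper trains these inside a 1-5-5-1 network).
t_obs = np.arange(0, 300)                    # observed cycles (early life)
y_obs = 0.9 * np.exp(-0.025 * t_obs) + 0.05  # synthetic target-cell data
A = np.column_stack([base_model(t_obs), np.ones(len(t_obs))])
slope, bias = np.linalg.lstsq(A, y_obs, rcond=None)[0]

t_future = np.arange(300, 1000)
y_pred = sbc_correct(t_future, slope, bias)  # extrapolated aging trajectory
```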

    Parameters Identification for a Composite Piezoelectric Actuator Dynamics

    This work presents an approach for identifying a dynamic model of a composite piezoelectric (PZT) bimorph actuator, with the objective of creating a robust model that can be used under various operating conditions. The actuator exhibits nonlinear behavior that can be described using backlash and hysteresis. A linear dynamic model with a damping matrix that incorporates the Bouc–Wen hysteresis model and backlash operators is developed. This work proposes identifying the actuator's model parameters using a hybrid master-slave genetic algorithm neural network (HGANN). In this algorithm, the neural network exploits the global search ability of the genetic algorithm to optimize its structure, weights, biases, and transfer functions so as to perform time series analysis efficiently. A total of nine datasets (cases), representing three different voltage amplitudes excited at three different frequencies, are used to train and validate the model. Four cases are used to train the NN architecture, connection weights, bias weights, and learning rules; the remaining five cases are used to validate the model, which produced results that closely match the experimental ones. The analysis shows that the damping parameters are inversely proportional to the excitation frequency, which indicates that the suggested hysteresis model is too general for the PZT actuator modeled in this work. It also suggests that backlash appears only when dynamic forces become dominant.
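
    For readers unfamiliar with the Bouc–Wen operator mentioned above, the following is a minimal, self-contained simulation of its hysteresis state equation under a sinusoidal displacement input. All parameter values are illustrative placeholders, not the identified values from this work.

```python
import numpy as np

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate the Bouc-Wen hysteresis state z driven by displacement x(t):
    dz/dt = A*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n  (forward Euler)."""
    z = np.zeros_like(x)
    for k in range(len(x) - 1):
        dx = (x[k + 1] - x[k]) / dt
        dz = (A * dx
              - beta * abs(dx) * abs(z[k]) ** (n - 1) * z[k]
              - gamma * dx * abs(z[k]) ** n)
        z[k + 1] = z[k] + dz * dt
    return z

t = np.linspace(0.0, 2.0, 2000)
x = np.sin(2 * np.pi * 5 * t)      # 5 Hz sinusoidal excitation
z = bouc_wen(x, dt=t[1] - t[0])    # hysteretic restoring component
```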

    BAMBI: blind accelerated multimodal Bayesian inference

    In this paper we present an algorithm for rapid Bayesian analysis that combines the benefits of nested sampling and artificial neural networks. The blind accelerated multimodal Bayesian inference (BAMBI) algorithm implements the MultiNest package for nested sampling as well as the training of an artificial neural network (NN) to learn the likelihood function. In the case of computationally expensive likelihoods, this allows the substitution of a much more rapid approximation in order to significantly increase the speed of the analysis. We begin by demonstrating, with a few toy examples, the ability of a NN to learn complicated likelihood surfaces. BAMBI's ability to decrease running time for Bayesian inference is then demonstrated in the context of estimating cosmological parameters from Wilkinson Microwave Anisotropy Probe and other observations. We show that valuable speed increases are achieved in addition to obtaining NNs trained on the likelihood functions for the different model and data combinations. These NNs can then be used for an even faster follow-up analysis using the same likelihood and different priors. This is a fully general algorithm that can be applied, without any pre-processing, to other problems with computationally expensive likelihood functions.
    Comment: 12 pages, 8 tables, 17 figures; accepted by MNRAS; v2 reflects minor changes in the published version
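
    The core surrogate idea can be sketched in a few lines: collect the (parameter, log-likelihood) pairs a sampler evaluates anyway, fit a small neural network regressor to them, and substitute the network for the expensive call once it is accurate enough. The toy Gaussian likelihood and scikit-learn regressor below are stand-ins under stated assumptions, not BAMBI's actual MultiNest-integrated implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.neural_network import MLPRegressor

def log_like(theta):
    """Toy 'expensive' log-likelihood: a 2-D standard Gaussian."""
    return multivariate_normal.logpdf(theta, mean=[0.0, 0.0])

rng = np.random.default_rng(0)
thetas = rng.uniform(-3, 3, size=(2000, 2))   # points a sampler would visit
logls = np.array([log_like(th) for th in thetas])

# Train a small NN surrogate on the accumulated (theta, logL) pairs.
surrogate = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=2000,
                         random_state=0).fit(thetas, logls)

# Once sufficiently accurate, the surrogate replaces the expensive call.
theta_new = np.array([[0.5, -0.2]])
print(surrogate.predict(theta_new), log_like(theta_new[0]))
```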

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that would guarantee minimum levels of desired performance while reducing energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continue to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting into sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior, which can be exploited by novel optimization techniques spanning all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques for prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help readers interested in employing prediction in the optimization of multicore processor systems.
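
    As a toy illustration of the DVFS example above, the sketch below forecasts next-interval CPU utilization with exponential smoothing and maps the forecast to a frequency level. The thresholds, mode names, and smoothing factor are hypothetical, and real governors use far richer predictors.

```python
def ewma_predict(history, alpha=0.5):
    """One-step-ahead utilization forecast via exponential smoothing."""
    pred = history[0]
    for u in history[1:]:
        pred = alpha * u + (1 - alpha) * pred
    return pred

def choose_dvfs_mode(predicted_util):
    """Map predicted utilization to a (hypothetical) frequency level."""
    if predicted_util > 0.75:
        return "high"    # full frequency for performance-critical phases
    elif predicted_util > 0.40:
        return "medium"  # balanced voltage/frequency pair
    return "low"         # deep power-saving mode

utilization_trace = [0.2, 0.3, 0.8, 0.9, 0.85, 0.4]
print(choose_dvfs_mode(ewma_predict(utilization_trace)))
```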

    Pairwise meta-rules for better meta-learning-based algorithm ranking

    In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can significantly improve the overall performance of meta-learning for algorithm ranking. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.
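
    A minimal sketch of the one-against-one comparison idea: for each pair of base learners and each dataset, emit a meta-feature recording which learner wins, with a tie margin. The performance table, margin, and +1/0/-1 encoding are illustrative assumptions, not the paper's exact rule format.

```python
from itertools import combinations

# Hypothetical per-dataset performance table (accuracy of each base learner).
performance = {
    "dataset_1": {"knn": 0.81, "tree": 0.77, "svm": 0.85},
    "dataset_2": {"knn": 0.70, "tree": 0.74, "svm": 0.69},
}

def pairwise_meta_features(scores, margin=0.01):
    """One-against-one meta-features: +1 if learner a clearly beats b,
    -1 if b wins, 0 if the difference is within the tie margin."""
    feats = {}
    for a, b in combinations(sorted(scores), 2):
        diff = scores[a] - scores[b]
        feats[f"{a}_vs_{b}"] = 0 if abs(diff) <= margin else (1 if diff > 0 else -1)
    return feats

for name, scores in performance.items():
    print(name, pairwise_meta_features(scores))
```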

    Neural Network Local Navigation of Mobile Robots in a Moving Obstacles Environment

    IFAC Intelligent Components and Instruments for Control Applications, Budapest, Hungary, 1994. This paper presents a local navigation method based on generalized predictive control. A modified cost function to avoid moving and static obstacles is presented. An Extended Kalman Filter is proposed to predict the motions of the obstacles. A neural network implementation of this method is analysed. Simulation results are shown.
    Ministerio de Ciencia y Tecnología TAP93-0408
    Ministerio de Ciencia y Tecnología TAP93-058
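
    To make the obstacle-prediction step concrete, here is a minimal Kalman predict/update cycle under a constant-velocity obstacle model; the paper's Extended Kalman Filter reduces to this linear case when the motion and measurement models are linear. All matrices and noise values below are illustrative assumptions.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, 0.0, dt, 0.0],   # state: [x, y, vx, vy]
              [0.0, 1.0, 0.0, dt],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0, 0.0],  # only position is measured
              [0.0, 1.0, 0.0, 0.0]])
Q = 0.01 * np.eye(4)                 # process noise covariance
R = 0.05 * np.eye(2)                 # measurement noise covariance

def predict(x, P):
    """Propagate the obstacle state one step ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the estimate with a position measurement z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.1, 0.0]), np.array([0.2, 0.05])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
x_next, _ = predict(x, P)            # predicted obstacle pose one step ahead
```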

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation formats, symbol rates, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.