
    Volumetric efficiency modelling of internal combustion engines based on a novel adaptive learning algorithm of artificial neural networks

    Air mass flow is one of the main control variables of internal combustion engines, and the effectiveness of the intake air system is evaluated through the volumetric efficiency coefficient. Characterizing intake air systems by means of physical models requires either a significant amount of input data or notable calculation times. Because of these drawbacks, empirical approaches are often used instead, in the form of black-box models based on Artificial Neural Networks. As an alternative to the standard gradient descent method, an adaptive learning algorithm is developed based on increasing the update speed of the hidden-layer weights. The results presented in this paper show that the proposed adaptive learning method achieves higher learning speed, reduced computational cost and lower network complexity. A parametric study of several Multiple Layer Perceptron (MLP) networks is carried out, varying the number of epochs, the number of hidden neurons, the momentum coefficient and the learning algorithm. The training and validation data are obtained from steady-state tests carried out on an automotive turbocharged diesel engine. (C) 2017 Elsevier Ltd. All rights reserved.
    The authors acknowledge the "Apoyo para la Investigación y Desarrollo (PAID)" grant for doctoral studies (FPI S1 2015 2512) of the Universitat Politècnica de València.
    Luján, JM.; Climent, H.; García-Cuevas González, LM.; Moratal-Martínez, AA. (2017). Volumetric efficiency modelling of internal combustion engines based on a novel adaptive learning algorithm of artificial neural networks. Applied Thermal Engineering. 123:625-634. https://doi.org/10.1016/j.applthermaleng.2017.05.087
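
    The abstract states only that the adaptive rule speeds up the hidden-layer weight updates, without giving its exact formulation. Purely as a hedged illustration of that idea (not the authors' method), the sketch below trains a one-hidden-layer MLP with gradient descent plus momentum and applies a larger effective learning rate to the hidden layer; hidden_lr_gain is an assumed stand-in for the paper's adaptive rule, and all sizes and data are placeholders.

    ```python
    # Minimal sketch (not the authors' code): one-hidden-layer MLP for
    # volumetric-efficiency regression, trained with gradient descent plus
    # momentum. Per the abstract, the hidden-layer update is sped up via a
    # larger effective learning rate (hidden_lr_gain is an assumption).
    import numpy as np

    rng = np.random.default_rng(0)

    def train_mlp(X, y, n_hidden=8, epochs=500,
                  lr=0.01, momentum=0.9, hidden_lr_gain=3.0):
        n_in = X.shape[1]
        W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.1, (n_hidden, 1));    b2 = np.zeros(1)
        vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
        for _ in range(epochs):
            # forward pass
            h = np.tanh(X @ W1 + b1)
            err = (h @ W2 + b2) - y.reshape(-1, 1)
            # backward pass for a mean-squared-error loss
            gW2 = h.T @ err / len(X)
            dh = (err @ W2.T) * (1 - h ** 2)
            gW1 = X.T @ dh / len(X)
            # momentum updates; the hidden layer moves faster (assumed rule)
            vW2 = momentum * vW2 - lr * gW2
            vW1 = momentum * vW1 - lr * hidden_lr_gain * gW1
            W2 += vW2; W1 += vW1
            b2 -= lr * err.mean(axis=0)
            b1 -= lr * hidden_lr_gain * dh.mean(axis=0)
        return W1, b1, W2, b2

    # e.g. X = engine speed and intake conditions, y = volumetric efficiency
    X = rng.random((200, 3)); y = rng.random(200)
    train_mlp(X, y)
    ```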

    Extending Memory for Language Modelling

    Breakthroughs in deep learning and memory networks have enabled major advances in natural language understanding. Language is sequential, and the information carried through a sequence can be captured by memory networks; learning the sequence is one of the key aspects of learning a language. However, memory networks are not capable of holding infinitely long sequences in their memories and are limited by constraints such as the vanishing or exploding gradient problem. Natural language understanding models therefore degrade when presented with long sequential text. We introduce the Long Term Memory network (LTM) to learn from infinitely long sequences. LTM gives priority to the current inputs so that they have a high impact. Since language modeling is an important factor in natural language understanding and requires long-term memory, we test LTM on this task using the Penn Treebank, Google Billion Word and WikiText-2 datasets, and compare it with other language models that require long-term memory.
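
    The abstract does not give the LTM cell equations, so the following is only a minimal sketch of the stated idea: a recurrent cell whose update gate is biased toward the current input, so that new tokens have a high impact on the state. The class name LTMCellSketch and the input_bias parameter are hypothetical, not taken from the paper.

    ```python
    # Illustrative sketch only: a recurrent cell that mixes a candidate
    # state with the old state through a gate biased toward the input.
    import torch
    import torch.nn as nn

    class LTMCellSketch(nn.Module):
        def __init__(self, n_in, n_hidden, input_bias=1.0):
            super().__init__()
            self.x2h = nn.Linear(n_in, n_hidden)
            self.h2h = nn.Linear(n_hidden, n_hidden)
            self.gate = nn.Linear(n_in + n_hidden, n_hidden)
            self.input_bias = input_bias  # shifts the gate toward the input

        def forward(self, x, h):
            cand = torch.tanh(self.x2h(x) + self.h2h(h))
            # positive bias keeps the gate > 0.5 on average, so the
            # current input dominates the state update (assumed behaviour)
            g = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1))
                              + self.input_bias)
            return g * cand + (1 - g) * h

    cell = LTMCellSketch(16, 32)
    h = torch.zeros(4, 32)
    for x in torch.randn(10, 4, 16):   # (time, batch, features)
        h = cell(x, h)
    ```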

    Handbook of Computational Intelligence in Manufacturing and Production Management

    Artificial intelligence (AI) is, simply put, a way of enabling a computer or a machine to think intelligently like human beings. Since human intelligence is a complex abstraction, scientists have only recently begun to understand how people think, to make certain assumptions about it, and to apply these assumptions to the design of AI programs. AI is a vast, knowledge-based discipline that covers reasoning, machine learning, planning, intelligent search, and perception building. Traditional AI was unable to meet the increasing demand for search, optimization, and machine learning in large biological and commercial database information systems and in the management of factory automation for industries such as power, automobile, aerospace, and chemical plants. The drawbacks of classical AI became more pronounced after the successive failures of the decade-long Japanese project on fifth-generation computing machines. These limitations of traditional AI gave rise to new computational methods for various engineering and management problems. As a result, these computational techniques emerged as a new discipline called computational intelligence (CI).

    Collapse warning system using LSTM neural networks for construction disaster prevention in extreme wind weather

    Strong wind during extreme weather events (e.g., typhoons) is one of the natural factors that cause the collapse of frame-type scaffolds used in façade work. This study developed an alert system for determining whether a scaffold structure can withstand the stress of the wind force. Conceptually, the scaffold-collapse warning system developed in this study contains three modules. The first module establishes the wind-velocity prediction models; the study employed various deep learning and machine learning techniques, namely deep neural networks, long short-term memory (LSTM) neural networks, support vector regression, random forests, and k-nearest neighbors. The second module analyses the wind force acting on the scaffolds. The third module develops the scaffold-collapse evaluation approach. The study area was Taichung City, Taiwan, and meteorological data were collected from ground stations from 2012 to 2019. Results revealed that the system successfully predicted the possible collapse time of scaffolds 1 to 6 h in advance and issued warnings in time. Overall, the warning system can provide practical warning information about scaffold failure to the construction teams that need it, reducing the risk of damage.
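
    As a hedged illustration of the first module (wind-velocity prediction), the sketch below implements an LSTM regressor that maps a window of past meteorological measurements to wind speeds 1 to 6 h ahead. The feature count, window length, and layer sizes are assumptions for the example, not values from the study.

    ```python
    # Minimal sketch of an LSTM wind-velocity predictor, assuming hourly
    # meteorological features and a 6-step forecast horizon.
    import torch
    import torch.nn as nn

    class WindLSTM(nn.Module):
        def __init__(self, n_features=5, n_hidden=64, horizon=6):
            super().__init__()
            self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
            self.head = nn.Linear(n_hidden, horizon)  # wind speed at t+1..t+6

        def forward(self, x):              # x: (batch, window, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])   # predict from the last time step

    model = WindLSTM()
    past24h = torch.randn(8, 24, 5)        # 24 h of 5 station measurements
    forecast = model(past24h)              # (8, 6): wind speed 1..6 h ahead
    ```

    In the full system, such forecasts would feed the second and third modules, which translate predicted wind speed into wind force on the scaffold and then into a collapse warning.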

    Learning long-term dependencies in segmented-memory recurrent neural networks with backpropagation of error

    In general, recurrent neural networks have difficulties in learning long-term dependencies. The segmented-memory recurrent neural network (SMRNN) architecture, together with the extended real-time recurrent learning (eRTRL) algorithm, was proposed to circumvent this problem. Due to its computational complexity, however, eRTRL becomes impractical with increasing network size. We therefore introduce the less complex extended backpropagation through time (eBPTT) algorithm for SMRNN, together with a layer-local unsupervised pre-training procedure. A comparison on the information latching problem showed that eRTRL is better able to latch information over longer periods of time, even though eBPTT guaranteed better generalisation when training was successful. Furthermore, pre-training significantly improved the ability of eBPTT to learn long-term dependencies. The proposed eBPTT algorithm is therefore suited to tasks that require large networks, where eRTRL is impractical. The pre-training procedure itself is independent of the supervised learning algorithm and can improve learning in SMRNNs in general.
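
    For readers unfamiliar with the architecture, the following forward-pass sketch shows the segmented-memory idea as commonly described: a symbol-level state updated at every step, and a segment-level state updated only at segment boundaries, which shortens the paths through which gradients must flow. The segment interval d, the layer sizes, and the random weights are illustrative assumptions, not details from the paper.

    ```python
    # numpy sketch of an SMRNN forward pass with untrained random weights.
    import numpy as np

    rng = np.random.default_rng(0)

    def smrnn_forward(xs, d=5, n_hid=16, n_seg=16):
        n_in = xs.shape[1]
        Wxh = rng.normal(0, 0.1, (n_in, n_hid))    # input -> symbol state
        Whh = rng.normal(0, 0.1, (n_hid, n_hid))   # symbol recurrence
        Whs = rng.normal(0, 0.1, (n_hid, n_seg))   # symbol -> segment state
        Wss = rng.normal(0, 0.1, (n_seg, n_seg))   # segment recurrence
        h = np.zeros(n_hid); s = np.zeros(n_seg)
        for t, x in enumerate(xs, start=1):
            h = np.tanh(x @ Wxh + h @ Whh)          # every time step
            if t % d == 0:                          # segment boundary only
                s = np.tanh(h @ Whs + s @ Wss)
        return s                                    # long-range summary

    xs = rng.random((50, 8))
    print(smrnn_forward(xs).shape)   # (16,)
    ```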