
    Selective Neuron Re-Computation (SNRC) for Error-Tolerant Neural Networks

    Artificial Neural Networks (ANNs) are widely used to solve classification problems in many machine learning applications. When errors occur in the computational units of an ANN implementation, for example due to radiation effects, the result of an arithmetic operation can be changed and the predicted classification class may be erroneously affected. This is not acceptable when ANNs are used in safety-critical applications, because an incorrect classification may result in a system failure. Existing error-tolerant techniques usually rely on physically replicating parts of the ANN implementation or incur a significant computation overhead. Therefore, efficient protection schemes are needed for ANNs that run on a processor in resource-limited platforms. A technique referred to as Selective Neuron Re-Computation (SNRC) is proposed in this paper. Based on the ANN structure and its algorithmic properties, SNRC can identify the cases in which errors have no impact on the outcome; therefore, errors only need to be handled by re-computation when the classification result is detected as unreliable. Compared with existing temporal redundancy-based protection schemes, SNRC saves more than 60 percent of the re-computation overhead (more than 90 percent in many cases) while achieving complete error protection, as assessed over a wide range of datasets. Different activation functions are also evaluated. This research was supported by the National Science Foundation Grants CCF-1953961 and 1812467, by the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation, and by the Madrid Community research project TAPIR-CM P2018/TCS-4496.
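
    The following minimal Python sketch illustrates the selective re-computation idea in the abstract; it is not the paper's implementation. The margin between the two largest output scores stands in for the paper's reliability check, and the threshold and toy linear "network" are illustrative assumptions.

        # Hedged sketch: re-compute only when the classification looks unreliable.
        import numpy as np

        def classify_with_selective_recompute(forward, x, margin_threshold=0.05):
            scores = forward(x)
            top2 = np.sort(scores)[-2:]
            if top2[1] - top2[0] >= margin_threshold:
                return int(np.argmax(scores))      # large margin: an error cannot flip the class
            recomputed = forward(x)                # unreliable case: selective re-computation
            return int(np.argmax(recomputed))

        # Toy usage: a fixed linear map stands in for the trained ANN's forward pass.
        rng = np.random.default_rng(0)
        W, b = rng.normal(size=(3, 8)), rng.normal(size=3)
        print(classify_with_selective_recompute(lambda v: W @ v + b, rng.normal(size=8)))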

    Flood. An open source neural networks C++ library

    The multilayer perceptron is an important model of neural network, and much of the literature in the field refers to that model. The multilayer perceptron has found a wide range of applications, which include function regression, pattern recognition, time series prediction, optimal control, optimal shape design or inverse problems. All these problems can be formulated as variational problems. That neural network can learn either from databases or from mathematical models. Flood is a comprehensive class library which implements the multilayer perceptron in the C++ programming language. It has been developed following the functional analysis and calculus of variations theories. In this regard, this software tool can be used for the whole range of applications mentioned above. Flood also provides a workaround for the solution of function optimization problems.

    A multivariate approach for multi-step demand forecasting in assembly industries: Empirical evidence from an automotive supply chain

    Preprint. Demand forecasting works as a basis for operating, business and production planning decisions in many supply chain contexts. Yet, how to accurately predict the manufacturer's demand for components in the presence of end-customer demand uncertainty remains poorly understood. Assigning the proper order quantities of components to suppliers thus becomes a nontrivial task, with a significant impact on planning, capacity and inventory-related costs. This paper introduces a multivariate approach to predict the manufacturer's demand for components throughout multiple forecast horizons using different leading indicators of demand shifts. We compare the autoregressive integrated moving average model with exogenous inputs (ARIMAX) with Machine Learning (ML) models. Using a real case study, we empirically evaluate the forecasting and supply chain performance of the multivariate regression models over the component's life-cycle. The experiments show that the proposed multivariate approach provides superior forecasting and inventory performance compared with traditional univariate benchmarks. Moreover, it proves applicable throughout the component's life-cycle, not just at a single stage. In particular, we found that demand signals at the beginning of the life-cycle are predicted better by the ARIMAX model, but it is outperformed by ML-based models in later life-cycle stages. INCT-EN - Instituto Nacional de Ciência e Tecnologia para Excitotoxicidade e Neuroproteção (UIDB/00319/2020).
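
    A hedged Python sketch, not the paper's code, of the kind of comparison the abstract describes: an ARIMAX-style model versus an ML regressor, both driven by a leading indicator of demand shifts. The synthetic series, lag structure and 6-step horizon are illustrative assumptions.

        # ARIMAX vs. ML benchmark on a synthetic component-demand series.
        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n, horizon = 120, 6
        indicator = rng.normal(size=n).cumsum()              # leading indicator of demand shifts
        demand = 100 + 5 * np.roll(indicator, 2) + rng.normal(scale=3, size=n)

        train_y, train_x = demand[:-horizon], indicator[:-horizon]
        test_x = indicator[-horizon:]

        # ARIMAX: ARIMA(1,1,1) with the indicator as an exogenous regressor.
        arimax = SARIMAX(train_y, exog=train_x.reshape(-1, 1), order=(1, 1, 1)).fit(disp=False)
        print("ARIMAX forecast:", np.round(arimax.forecast(steps=horizon, exog=test_x.reshape(-1, 1)), 1))

        # ML benchmark: random forest on lagged demand plus the indicator
        # (one-step-ahead regression, iterated for multi-step use).
        lags = 3
        rows = n - horizon - lags
        X = np.column_stack([demand[i:i + rows] for i in range(lags)] + [indicator[lags:lags + rows]])
        y = demand[lags:lags + rows]
        ml = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        print("ML one-step prediction:", round(ml.predict(X[-1:])[0], 1))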

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate through statistical multiplexing large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. The thesis studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed. It is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between the past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic conditions and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome the drawback of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process via matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
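
    As a hedged illustration of the large-deviations machinery the abstract relies on (not the thesis code), the Python sketch below computes a Chernoff-bound overflow probability for homogeneous on/off sources and the Bahadur-Rao style prefactor refinement; the peak rate, activity factor and link capacity are assumed values.

        # Chernoff bound and Bahadur-Rao refinement for bufferless multiplexing of on/off sources.
        import numpy as np
        from scipy.optimize import minimize_scalar

        N, peak, rho, C = 100, 2.0, 0.4, 100.0    # sources, peak rate, on-probability, capacity

        def cgf(theta):
            """Per-source cumulant generating function log E[exp(theta * rate)]."""
            return np.log(1 - rho + rho * np.exp(theta * peak))

        a = C / N                                  # per-source capacity share
        res = minimize_scalar(lambda t: -(t * a - cgf(t)), bounds=(1e-9, 50), method="bounded")
        theta_star = res.x
        I = theta_star * a - cgf(theta_star)       # large-deviations rate function I(a)

        chernoff = np.exp(-N * I)                  # P(aggregate rate > C) <= exp(-N * I(a))

        # Bahadur-Rao refinement: divide by theta* * sigma * sqrt(2 pi N), with sigma^2
        # the second derivative of the CGF at theta* (estimated numerically here).
        h = 1e-5
        sigma2 = (cgf(theta_star + h) - 2 * cgf(theta_star) + cgf(theta_star - h)) / h**2
        bahadur_rao = chernoff / (theta_star * np.sqrt(2 * np.pi * N * sigma2))

        print(f"Chernoff bound:       {chernoff:.3e}")
        print(f"Bahadur-Rao estimate: {bahadur_rao:.3e}")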

    Quality analysis modelling for development of a process controller in resistance spot welding using neural networks techniques

    Student Number: 9811923K - PhD thesis - School of Mechanical Engineering - Faculty of Engineering and the Built Environment. Methods are presented for obtaining models used for predicting welded sample resistance and effective weld current (RMS) for a desired weld diameter (weld quality) in the resistance spot welding process. These models were used to design predictive controllers for the welding process. A suitable process model forms an important step in the development and design of process controllers for achieving good weld quality with good reproducibility. Effective current, dynamic resistance and applied electrode force are identified as important input parameters necessary to predict the output weld diameter. These input parameters are used for the process model and the design of a predictive controller. A three-parameter empirical model with dependent and independent variables was used for curve fitting the nonlinear half-wave dynamic resistance. The estimates of the parameters were used to develop charts for determining the overall resistance of samples for any desired weld diameter. Estimating resistance for samples welded on the machines from which the chart data were obtained yielded accurate results; however, using these charts to estimate sample resistance for new and unknown machines yielded high estimation error. To improve the prediction accuracy, the same set of data generated from the model was used to train four different neural network types: the Generalised Feed Forward (GFF) neural network, the Multilayer Perceptron (MLP) network, the Radial Basis Function (RBF) network and the Recurrent Neural Network (RNN). Of the four network types trained, the MLP had the lowest mean square error for training and cross validation (0.00037 and 0.00039 respectively), with a linear correlation coefficient in testing of 0.999 and a maximum estimation error ranging from 0.1% to 3%, corresponding to a prediction accuracy of about 97% to 99.9%. This model was selected for the design and implementation of the controller for predicting overall sample resistance. Using this predicted overall sample resistance and the applied electrode force, a second model was developed for predicting the required effective weld current for any desired weld diameter. The prediction accuracy of this model was in the range of 94% to 99%. The neural network predictive controller was designed using the MLP neural network models. The controller outputs the effective current for any desired weld diameter and is observed to track the desired output accurately, with the same prediction accuracy as the underlying model, about 94% to 99%. The controller works by utilizing the neural network output embedded in Microsoft Excel as a dynamic link library and is able to generate outputs for given inputs on activating the process by the push of a command button.
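
    A hedged Python sketch of the kind of MLP process model the abstract describes, mapping effective weld current, dynamic resistance and electrode force to weld diameter; it is not the thesis implementation, and the synthetic relationship and parameter ranges are assumptions.

        # MLP regressor for weld diameter from (current, resistance, force) on synthetic data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        n = 2000
        current = rng.uniform(6, 12, n)        # effective (RMS) weld current, kA
        resistance = rng.uniform(80, 200, n)   # dynamic resistance, micro-ohm
        force = rng.uniform(2, 5, n)           # electrode force, kN
        diameter = 0.8 * current + 0.01 * resistance - 0.3 * force + rng.normal(0, 0.1, n)

        X = np.column_stack([current, resistance, force])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                                           random_state=0))
        model.fit(X, diameter)

        # Query: predicted weld diameter for one candidate operating point.
        print(model.predict([[9.0, 140.0, 3.5]]))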

    Statistical modelling by neural networks

    In this thesis the two disciplines of Statistics and Artificial Neural Networks are combined into an integrated study of a data set from a weather modification experiment. An extensive literature study on artificial neural network methodology has revealed the strongly interdisciplinary nature of the research and the applications in this field. As artificial neural networks are becoming increasingly popular with data analysts, statisticians are becoming more involved in the field. A recursive algorithm is developed to optimize the number of hidden nodes in a feedforward artificial neural network, to demonstrate how existing statistical techniques such as nonlinear regression and the likelihood-ratio test can be applied in innovative ways to develop and refine neural network methodology. This pruning algorithm is an original contribution to the field of artificial neural network methodology that simplifies the process of architecture selection, thereby reducing the number of training sessions needed to find a model that fits the data adequately. In addition, a statistical model to classify weather modification data is developed using both a feedforward multilayer perceptron artificial neural network and a discriminant analysis. The two models are compared and the effectiveness of applying an artificial neural network model to a relatively small data set is assessed. The formulation of the problem, the approach that has been followed to solve it and the novel modelling application all combine to make an original contribution to the interdisciplinary fields of Statistics and Artificial Neural Networks as well as to the discipline of meteorology. Mathematical Sciences. D. Phil. (Statistics)
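
    A hedged Python sketch of the general idea of selecting hidden nodes by likelihood-ratio comparisons of nested fits; it is an assumption-laden stand-in, not the thesis algorithm. Gaussian errors are assumed so the likelihood-ratio statistic reduces to n * log(RSS_small / RSS_large), and the synthetic data and stopping rule are illustrative.

        # Grow hidden nodes until an extra node gives no significant improvement.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(400, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 400)

        def rss(h):
            net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000, random_state=0).fit(X, y)
            return np.sum((y - net.predict(X)) ** 2)

        n, prev_rss, chosen = len(y), rss(1), 1
        for h in range(2, 11):
            cur_rss = rss(h)
            lr = n * np.log(prev_rss / cur_rss)       # likelihood-ratio statistic
            extra_params = X.shape[1] + 2             # weights added by one more hidden node
            if lr < chi2.ppf(0.95, df=extra_params):  # no significant improvement: stop
                break
            chosen, prev_rss = h, cur_rss

        print("selected hidden nodes:", chosen)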

    Recent Advances and Applications of Machine Learning in Metal Forming Processes

    Machine learning (ML) technologies are emerging in Mechanical Engineering, driven by the increasing availability of datasets coupled with the exponential growth in computer performance. In fact, there has been a growing interest in evaluating the capabilities of ML algorithms to approach topics related to metal forming processes, such as: classification, detection and prediction of forming defects; material parameter identification; material modelling; process classification and selection; and process design and optimization. The purpose of this Special Issue is to disseminate state-of-the-art ML applications in metal forming processes, covering 10 papers about the abovementioned and related topics.