41,013 research outputs found

    Some thoughts on neural network modelling of micro-abrasion-corrosion processes

    There is increasing interest in the interactions of micro-abrasion, involving small particles of less than 10 μm in size, with corrosion, because such interactions occur in many environments ranging from the offshore to the health-care sector. In particular, micro-abrasion-corrosion can occur in oral processing, where the abrasive components of food, interacting with the acidic environment, can degrade the surface dentine of teeth. Artificial neural networks (ANNs) are computing mechanisms inspired by the biological brain. They are very effective in areas such as modelling, classification and pattern recognition, and have been successfully applied in almost all areas of engineering and in many practical industrial applications. Hence, in this paper an attempt has been made to use ANNs to model the data obtained in micro-abrasion-corrosion experiments on a polymer/steel couple and a ceramic/Lasercarb coating couple. A multilayer perceptron (MLP) neural network is applied, and the results obtained from modelling the tribocorrosion processes are compared with those obtained from a relatively new class of neural networks, namely the resource allocation network.
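    The abstract above does not give the experimental variables or network architecture, but the general approach (an MLP regressing a wear-corrosion response from process conditions) can be sketched as follows. All inputs, the target function, and the layer sizes here are invented for illustration, not taken from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical inputs: applied load (N), sliding speed (m/s), slurry pH.
X = rng.uniform([1.0, 0.01, 2.0], [10.0, 0.1, 7.0], size=(200, 3))
# Hypothetical target: a smooth nonlinear wear-corrosion rate plus noise.
y = (0.5 * X[:, 0] * X[:, 1]
     + 0.3 * (7.0 - X[:, 2]) ** 2
     + rng.normal(0.0, 0.05, size=200))

# A small MLP; scaling the inputs first is standard practice for MLPs.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(f"training R^2 = {model.score(X, y):.3f}")
```

    In practice the experimental data set would replace the synthetic `X` and `y`, and a held-out test split would be used to judge generalization rather than the training score printed here.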

    A Deep Reinforcement Learning-Based Model for Optimal Resource Allocation and Task Scheduling in Cloud Computing

    The advent of cloud computing has dramatically altered how information is stored and retrieved. However, the effectiveness and speed of cloud-based applications can be significantly impacted by inefficiencies in resource distribution and task scheduling. Such issues have long been challenging, but machine learning and deep learning methods have shown great potential in recent years. This paper proposes a new technique, the Deep Q-Network and Actor-Critic (DQNAC) model, that enhances cloud computing efficiency by optimizing resource allocation and task scheduling. We evaluate our approach using a dataset of real-world cloud workload traces and demonstrate that it can significantly improve resource utilization and overall performance compared to traditional approaches. Furthermore, our findings indicate that deep reinforcement learning (DRL)-based methods can be potent and effective for optimizing cloud computing, improving the efficiency and flexibility of cloud-based applications.
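    The abstract does not spell out the DQNAC architecture, reward design, or environment. As a loose, heavily simplified stand-in for the value-learning side of such a scheduler, the sketch below uses tabular Q-learning (no deep network, no actor-critic) on an invented toy environment where tasks are assigned to machines and the reward penalizes load imbalance. Every detail here (state, reward, hyperparameters) is an assumption for illustration only:

```python
import random

random.seed(0)

N_MACHINES = 3
EPISODES = 2000
TASKS_PER_EPISODE = 9
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: state = tuple of current machine loads, action = machine index.
Q = {}

def get_q(state):
    return Q.setdefault(state, [0.0] * N_MACHINES)

for _ in range(EPISODES):
    loads = [0] * N_MACHINES          # queue length per machine
    for _ in range(TASKS_PER_EPISODE):
        state = tuple(loads)
        if random.random() < eps:     # epsilon-greedy exploration
            action = random.randrange(N_MACHINES)
        else:
            qs = get_q(state)
            action = qs.index(max(qs))
        loads[action] += 1
        # Toy reward: penalize the load spread after the assignment.
        reward = -(max(loads) - min(loads))
        best_next = max(get_q(tuple(loads)))
        q = get_q(state)
        q[action] += alpha * (reward + gamma * best_next - q[action])

# Follow the learned greedy policy: tasks should spread roughly evenly.
loads = [0] * N_MACHINES
for _ in range(TASKS_PER_EPISODE):
    qs = get_q(tuple(loads))
    loads[qs.index(max(qs))] += 1
print("greedy assignment loads:", loads)
```

    The paper's DQNAC would replace the table with neural function approximators and add an actor-critic policy head, which is what makes the approach scale to realistic workload-trace state spaces.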

    Lazy learning in radial basis neural networks: A way of achieving more accurate models

    Radial basis neural networks have been successfully used in a large number of applications, with their rapid convergence time being one of their most important advantages. However, their level of generalization is usually poor and very dependent on the quality of the training data, because some of the training patterns can be redundant or irrelevant. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be approximated. This training method follows a lazy learning strategy, in the sense that it builds approximations centred on the novel sample. The proposed method has been applied to three different domains: an artificial regression problem and two time series prediction problems. Results have been compared to the standard training method using the complete training data set, and the new method shows better generalization abilities.
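    The core idea, deferring training until a query arrives and fitting a radial basis model only on the patterns nearest that query, can be sketched as below. The pattern-selection rule here is plain k-nearest-neighbours and the data are synthetic; the paper's actual selection criterion may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression data, invented for illustration.
X_train = rng.uniform(-3.0, 3.0, size=(100, 1))
y_train = np.sin(X_train[:, 0]) + rng.normal(0.0, 0.02, size=100)

def lazy_rbf_predict(x_query, X, y, k=15, width=0.5, ridge=1e-6):
    """Lazily fit a local Gaussian-RBF model on the k training
    patterns nearest to x_query, then evaluate it at x_query."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]               # the k most relevant patterns
    Xk, yk = X[idx], y[idx]
    # RBF design matrix with centres at the selected patterns.
    D = np.linalg.norm(Xk[:, None, :] - Xk[None, :, :], axis=2)
    Phi = np.exp(-(D / width) ** 2)
    # Small ridge term keeps the solve numerically stable.
    w = np.linalg.solve(Phi + ridge * np.eye(k), yk)
    phi_q = np.exp(-(np.linalg.norm(Xk - x_query, axis=1) / width) ** 2)
    return phi_q @ w

pred = lazy_rbf_predict(np.array([1.0]), X_train, y_train)
print(f"prediction at x=1.0: {pred:.3f} (true sin(1.0) = {np.sin(1.0):.3f})")
```

    Because the model is rebuilt per query from only nearby patterns, redundant or irrelevant training points elsewhere in the input space cannot distort the local fit, which is the generalization advantage the abstract claims.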