3,850 research outputs found
Selective Neuron Re-Computation (SNRC) for Error-Tolerant Neural Networks
Artificial Neural Networks (ANNs) are widely used to solve classification problems in many machine learning applications. When errors occur in the computational units of an ANN implementation, due to radiation effects for example, the result of an arithmetic operation can be changed and the predicted classification class may be erroneously affected. This is not acceptable when ANNs are used in safety-critical applications, because an incorrect classification may result in a system failure. Existing error-tolerant techniques usually rely on physically replicating parts of the ANN implementation or incur a significant computation overhead. Therefore, efficient protection schemes are needed for ANNs that run on a processor in resource-limited platforms. A technique referred to as Selective Neuron Re-Computation (SNRC) is proposed in this paper. Using the ANN structure and algorithmic properties, SNRC identifies the cases in which errors have no impact on the outcome; errors therefore only need to be handled by re-computation when the classification result is detected as unreliable. Compared with existing temporal-redundancy-based protection schemes, SNRC saves more than 60 percent of the re-computation overhead (more than 90 percent in many cases) needed to achieve complete error protection, as assessed over a wide range of datasets. Different activation functions are also evaluated. This research was supported by the National Science Foundation Grants CCF-1953961 and 1812467, by the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation, and by the Madrid Community research project TAPIR-CM P2018/TCS-4496.
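The core idea of re-computing only when a classification looks unreliable can be sketched as follows. This is an illustrative approximation, not the paper's actual criterion: here the margin between the top two output scores (a hypothetical `margin_threshold`) serves as the unreliability test, and the single-layer `classify` function stands in for a full ANN.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the output scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(weights, x):
    # Single-layer stand-in for the network; a real ANN has hidden layers.
    return softmax(weights @ x)

def snrc_predict(weights, x, margin_threshold=0.2):
    """Re-compute only when the top-2 score margin makes the result unreliable.

    margin_threshold is an assumed, illustrative parameter, not one from
    the paper; SNRC's actual criterion is based on the ANN's structure.
    """
    probs = classify(weights, x)
    top_two = np.sort(probs)[-2:]
    margin = top_two[1] - top_two[0]
    if margin < margin_threshold:      # result deemed unreliable
        probs = classify(weights, x)   # temporal redundancy: re-compute
    return int(np.argmax(probs))
```

A clearly separated winner skips the re-computation entirely, which is where the overhead savings over full temporal redundancy come from.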
Identification of control chart patterns using neural networks
To produce products with consistent quality, manufacturing processes need to be closely monitored for any deviations. Proper analysis of the control charts used to determine the state of a process requires not only a thorough knowledge and understanding of the underlying distribution theories associated with control charts, but also the decision-making experience of an expert. The present work proposes a modified backpropagation neural network methodology to identify and interpret the various patterns of variation that can occur in a manufacturing process. Control charts, primarily in the form of the X-bar chart, are widely used to identify situations in which control actions are needed in manufacturing systems. Various types of patterns are observed in control charts, and identification of these control chart patterns (CCPs) can provide clues to potential quality problems in the manufacturing process. Each type of control chart pattern has its own geometric shape, which can be represented by various related features. This project formulates Shewhart mean (X-bar) and range (R) control charts for diagnosis and interpretation by artificial neural networks. Neural networks are trained to discriminate between samples from probability distributions considered within control limits and samples that have shifted in both location and variance. Neural networks are also trained to recognize samples and predict future points from processes that exhibit long-term or cyclical drift. The advantages and disadvantages of neural control charts compared to traditional statistical process control are discussed. In a process, the causes of variation may be categorized as chance (unassignable) causes and special (assignable) causes. Variations due to chance causes are inevitable and difficult to detect and identify; variations due to special causes, on the other hand, prevent the process from being stable and predictable.
Such variations should be detected effectively and eliminated from the process by taking the necessary corrective actions, both to keep the process in control and to improve product quality. In this study, a multilayered neural network trained with a backpropagation algorithm was applied to pattern recognition on control charts. The neural network was evaluated on a set of generated data.
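As an illustrative sketch (not the authors' code), generating the kinds of synthetic control-chart patterns such a study trains on might look like the following; the trend slope, shift magnitude, and cycle amplitude are assumed values for illustration.

```python
import numpy as np

def make_pattern(kind, n=32, rng=None):
    """Generate one synthetic control-chart pattern of length n.

    Pattern parameters (slope 0.1, shift 2.0, amplitude 1.5, period 8)
    are illustrative assumptions, not values from the study.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, n)   # in-control chance variation
    t = np.arange(n)
    if kind == "normal":
        return noise
    if kind == "upward_trend":
        return noise + 0.1 * t                         # long-term drift
    if kind == "upward_shift":
        return noise + np.where(t >= n // 2, 2.0, 0.0)  # mean shift mid-series
    if kind == "cycle":
        return noise + 1.5 * np.sin(2 * np.pi * t / 8)  # cyclical drift
    raise ValueError(f"unknown pattern kind: {kind}")
```

Labeled samples of each kind would then be fed to the backpropagation network as training data, one class per pattern type.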
Developing An Automated Forecasting Framework For Predicting Operation Room Block Time
Operating rooms are among the most important parts of a hospital, since they have the highest influence on its financial state. Because of the high uncertainty in surgery case demand and duration, scheduling surgeries is a very challenging and critical issue in hospitals. One of the most common approaches to handling this uncertainty is the use of block times, which are time intervals allocated to surgery groups in the hospital. Assigning a sufficient amount of time to each block is very important: overestimation leads to wasted resources, while underestimation causes overtime staffing and possibly surgery cancellations. The objective of this study is to develop an automated forecasting framework that applies high-performance forecasting methods to predict future block time intervals for surgical groups. The main property of the proposed forecasting framework is the elimination of human intervention, meaning the system follows defined algorithms to perform the forecasting. In this framework we apply four different methods, namely exponential smoothing, ARMA, an artificial neural network, and a hybrid ANN-ARMA methodology; multi-criteria decision analysis is then applied to select the most effective method. Accurate forecasting can reduce total waiting time, idle time, and overtime costs. We illustrate this with the results of a case study conducted with real-world data at the John D. Dingell Detroit VA Medical Center.
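Of the four methods listed, simple exponential smoothing is the easiest to sketch. The version below is a generic illustration; the smoothing factor `alpha` and the one-step-ahead usage are assumptions, not details from the study.

```python
def ses_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast.

    The level is updated as alpha * observation + (1 - alpha) * level,
    so recent observations are weighted more heavily. alpha=0.3 is an
    assumed default, not a value taken from the study.
    """
    if not series:
        raise ValueError("series must be non-empty")
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

A next-period block time would then be forecast from the historical usage series for a surgical group, e.g. `ses_forecast(past_block_hours)`; the framework's multi-criteria step would compare this against the ARMA, ANN, and hybrid forecasts.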
Reclaiming Fault Resilience and Energy Efficiency With Enhanced Performance in Low Power Architectures
Rapid development in the AI domain has revolutionized the computing industry through the introduction of state-of-the-art AI architectures. This growth is accompanied by a massive increase in power consumption. Near-Threshold Computing (NTC) has emerged as a viable solution, offering significant savings in power consumption and paving the way for an energy-efficient design paradigm. However, these benefits are accompanied by a deterioration in performance due to severe process variation and slower transistor switching at Near-Threshold operation. These problems severely restrict the use of Near-Threshold operation in commercial applications. In this work, a novel AI architecture, the Tensor Processing Unit, operating at NTC is thoroughly investigated to tackle the issues hindering system performance. Research problems are demonstrated in a scientific manner and unique opportunities are explored to propose novel design methodologies.