Evidential-EM Algorithm Applied to Progressively Censored Observations
The Evidential-EM (E2M) algorithm is an effective approach for computing maximum likelihood estimates under finite mixture models, especially when the data carry uncertain information. In this paper we present an extension of the E2M method to a particular case of incomplete data, where the loss of information is due to both the mixture model and censored observations. The prior uncertain information is expressed by belief functions, while the pseudo-likelihood function is derived from the imprecise observations and the prior knowledge. The E2M method is then invoked to maximize the generalized likelihood function and obtain the optimal parameter estimates. Numerical examples show that the proposed method effectively integrates the uncertain prior information with the imprecise knowledge conveyed by the observed data.
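The E2M extension itself, with belief functions over censored observations, is beyond a short sketch; but the classical expectation-maximization loop it builds on can be illustrated with a minimal two-component Gaussian mixture. All parameters, the initialization scheme, and the synthetic data below are illustrative assumptions, not the paper's method:

```python
import math
import random

def em_two_gaussians(data, iters=60):
    """Classical EM for a two-component univariate Gaussian mixture."""
    # Initialization from the data extremes (a simple, hypothetical choice).
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        resp = []
        for x in data:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-9
    return w, mu, var

# Demo on synthetic data from two well-separated components.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(10.0, 1.0) for _ in range(200)])
weights, means, variances = em_two_gaussians(data)
```

The E2M variant replaces the E-step expectation with one taken with respect to a belief function over the incomplete data; the alternating structure of the loop is unchanged.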
A Survey of Prediction and Classification Techniques in Multicore Processor Systems
In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques for prediction and classification in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
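As a concrete illustration of the prediction-driven DVFS idea, a minimal sketch of a history-based utilization predictor feeding a mode selector; the smoothing factor, thresholds, and mode names are hypothetical, not taken from any surveyed technique:

```python
def ewma_predict(history, alpha=0.5):
    """Predict next-interval utilization as an exponentially weighted
    moving average of past observations (alpha is a hypothetical choice)."""
    pred = history[0]
    for u in history[1:]:
        pred = alpha * u + (1 - alpha) * pred
    return pred

def choose_dvfs_mode(predicted_util, thresholds=(0.3, 0.7)):
    """Map a predicted utilization to a frequency level (hypothetical thresholds)."""
    if predicted_util < thresholds[0]:
        return "low"
    if predicted_util < thresholds[1]:
        return "medium"
    return "high"
```

More sophisticated schemes discussed in such surveys replace the EWMA with learned classifiers, but the proactive control structure, predict then act, is the same.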
Modified generalized linear failure rate distribution: Properties and reliability analysis
This paper introduces a new comprehensive four-parameter distribution called the modified generalized linear failure rate (MGLFR) distribution. It generalizes some well-known and widely used distributions in reliability, such as the exponential, Rayleigh, linear failure rate, generalized linear failure rate and modified Weibull distributions. The study also investigates some essential properties of this new distribution and considers the evaluation of system reliability by describing component lifetimes with a fuzzy MGLFR distribution and by developing fuzzy reliability characteristics. The results can be applied to determine the reliability of real objects whose lifetime parameters are subject to uncertainty.
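The abstract does not give the four-parameter MGLFR form; as a reference point, the three-parameter generalized linear failure rate (GLFR) distribution it extends has the standard CDF F(t) = [1 - exp(-(a t + b t^2 / 2))]^alpha, which reduces to the exponential for b = 0, alpha = 1 and to the Rayleigh for a = 0, alpha = 1. A minimal sketch:

```python
import math

def glfr_cdf(t, a, b, alpha):
    """GLFR CDF: F(t) = [1 - exp(-(a*t + b*t**2 / 2))]**alpha, for t >= 0."""
    return (1.0 - math.exp(-(a * t + 0.5 * b * t * t))) ** alpha

def glfr_reliability(t, a, b, alpha):
    """Reliability (survival) function R(t) = 1 - F(t)."""
    return 1.0 - glfr_cdf(t, a, b, alpha)
```

With b = 0 and alpha = 1 this gives R(t) = exp(-a t), the exponential special case mentioned above; the fuzzy analysis in the paper replaces the crisp parameters (a, b, alpha) with fuzzy numbers.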
Interval reliability inference for multi-component systems
This thesis is a collection of investigations on applications of imprecise probability theory to system reliability engineering, with emphasis on using survival signatures for modelling complex systems. Survival signatures provide an efficient representation of system structure and facilitate several reliability assessments by separating the computationally expensive combinatorial part from the subsequent evaluations, which then require only polynomial complexity. This proves useful in situations that also involve statistical inference on component lifetime distributions, where Bayesian methods require repeated numerical propagation for samples from the posterior distribution. Similarly, statistical methods involving imprecise probabilistic models, composed of sets of precise probability distributions, also benefit from the simplification offered by the signature representation. We argue the pragmatic benefits of using statistical models based on imprecise probabilities in reliability engineering from the perspective of inferential validity and the provision of objective guarantees for the statistical procedures. Imprecise probability methods generally require solving an optimization problem to obtain bounds on the assessments of interest, but monotone system structures simplify these problems without much additional complexity. This simplification extends to survival signature models; therefore, many reliability assessments with imprecise (interval) component lifetime models tend to be tractable, as demonstrated on several examples.
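The separation described above can be made concrete. For a system of m exchangeable components of one type, the survival signature Phi(l) is the probability the system works given exactly l working components, and the system survival function factorizes as P(T_s > t) = sum over l of Phi(l) * C(m, l) * p^l * (1 - p)^(m - l), where p is the component survival probability at t. A minimal sketch for a 2-out-of-3 system (the example system is an illustrative assumption):

```python
from math import comb

def system_survival(phi, m, p):
    """Survival-signature formula for m exchangeable components with
    survival probability p: sum over the number l of working components."""
    return sum(phi[l] * comb(m, l) * p ** l * (1 - p) ** (m - l)
               for l in range(m + 1))

# 2-out-of-3 system: works iff at least two of its three components work.
phi_2oo3 = [0.0, 0.0, 1.0, 1.0]
```

The combinatorial part (phi) is computed once; because the expression is monotone in p, an interval [p_lo, p_hi] for the component survival probability yields system-level bounds simply by evaluating at the interval endpoints, which reflects the tractability argued above.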
Uncertainty in Engineering
This open access book provides an introduction to uncertainty quantification in engineering. Starting with preliminaries on Bayesian statistics and Monte Carlo methods, followed by material on imprecise probabilities, it then focuses on reliability theory and simulation methods for complex systems. The final two chapters discuss various aspects of aerospace engineering, considering stochastic model updating from an imprecise Bayesian perspective, and uncertainty quantification for aerospace flight modelling. Written by experts in the subject, and based on lectures given at the Second Training School of the European Research and Training Network UTOPIAE (Uncertainty Treatment and Optimization in Aerospace Engineering), which took place at Durham University (United Kingdom) from 2 to 6 July 2018, the book offers an essential resource for students as well as scientists and practitioners
Automatic programming methodologies for electronic hardware fault monitoring
This paper presents three variants of Genetic Programming (GP) approaches for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits is best performed with the stressor-susceptibility interaction model: a circuit or system is considered failed once a stressor exceeds its susceptibility limit. For online prediction, validated stressor vectors may be obtained by direct measurement or from sensors; after pre-processing and standardization, these are fed into the GP models. Empirical results are compared with artificial neural networks trained using the backpropagation algorithm and with classification and regression trees. The performance of the proposed method is evaluated by comparing the experimental results with the actual failure model values. The developed model reveals that GP could play an important role in future fault monitoring systems. This research was supported by the International Joint Research Grant of the IITA (Institute of Information Technology Assessment) foreign professor invitation program of the MIC (Ministry of Information and Communication), Korea.
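The stressor-susceptibility failure criterion described above can be sketched as a simple predicate; the function and argument names are hypothetical, not the paper's interface:

```python
def circuit_failed(stressor_vector, susceptibility_limits):
    """Failure occurs once any measured stressor exceeds its
    corresponding susceptibility limit (hypothetical interface)."""
    return any(s > limit
               for s, limit in zip(stressor_vector, susceptibility_limits))
```

The GP, neural network, and regression tree models in the paper learn to predict this failure condition from pre-processed stressor measurements rather than evaluating it directly.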
Neural Networks and Evolutionary Optimization Techniques and Their Applications in Fatigue Life Assessment of Composite Materials: A Brief Review
Modeling the fatigue life of composite materials under various loading and environmental conditions is an important and challenging task from the viewpoint of performance and reliability, as it forms a basis for lifetime assessment of composite structures under complex, variable states of stress. The application of soft computing techniques as a new approach to modelling the fatigue lives of composite materials has attracted great interest recently. The applications of soft computing techniques in fatigue life assessment of composite materials are reviewed and discussed in this paper.