Validation of Expert System Performance
Most definitions of an expert system include some reference to the ability of the system to perform at a level close to human expert performance. Yet the validation of expert systems, that is, the testing of systems so as to ascertain that they achieve an acceptable level of performance, has (with a few exceptions) been ad-hoc, informal, and in some cases of dubious value. This paper attempts to establish validation as an important concern in expert systems research and development. The problems in validating an expert system are discussed, and a number of methods for validating expert systems, both qualitative and quantitative, are presented.
Implementation of Combat Simulation Through Expert Support Systems
Battlefield simulation is often faced with a bewildering array of conflicting stresses and challenges. Communication is currently slower and more costly than computation. Expert system technologies such as production rule systems allow one to acquire and represent collections of heuristic rules in computer-compatible form. The system also includes master control programs that determine the order in which these rules should be applied against the monitored system performance to arrive at appropriate system control. These expert systems are used in two modes: as an intelligent assistant to the expert, amplifying the capacity and quality of his work, and as a surrogate for the expert when he is not available. An Expert Support System (ESS) designed and developed for combat simulation is described in this article. The quality and reliability of the inferred tactical situation are improved by using PROLOG. This formal AI language is used for validating and checking sensor detections for consistency and logical plausibility. PROLOG's strengths in creating and interrogating a database help maintain a reasonably coherent picture of the tactical situation. The perils and pitfalls of working with expert systems are also underscored.
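The plausibility checking the abstract attributes to PROLOG can be illustrated with a minimal sketch, written here in Python rather than PROLOG for self-containment. The detections, the track identifiers, and the speed limit are all invented for illustration; the article's actual rule base is not shown here.

```python
from math import hypot

# Hypothetical sensor detections: (track_id, time_s, x_km, y_km)
detections = [
    ("T1", 0, 0.0, 0.0),
    ("T1", 60, 1.0, 0.5),
    ("T1", 120, 40.0, 2.0),   # implausible jump for a ground vehicle
]

MAX_SPEED_KM_S = 0.03  # ~108 km/h, an illustrative ground-vehicle limit

def implausible(detections):
    """Flag consecutive detections of the same track whose implied
    speed exceeds the physical limit (a consistency rule)."""
    by_track = {}
    for det in detections:
        by_track.setdefault(det[0], []).append(det)
    flagged = []
    for track, dets in by_track.items():
        dets.sort(key=lambda d: d[1])
        for a, b in zip(dets, dets[1:]):
            dt = b[1] - a[1]
            dist = hypot(b[2] - a[2], b[3] - a[3])
            if dt > 0 and dist / dt > MAX_SPEED_KM_S:
                flagged.append((track, a[1], b[1]))
    return flagged

print(implausible(detections))  # [('T1', 60, 120)]
```

A PROLOG version would express the same rule declaratively over an asserted fact base of detections; the control flow above is what such a rule amounts to procedurally.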
Comparing the Validity of Alternative Belief Languages: An Experimental Approach
The problem of modeling uncertainty and inexact reasoning in rule-based expert systems is challenging on normative as well as on cognitive grounds. First, the modular structure of the rule-based architecture does not lend itself to standard Bayesian inference techniques. Second, there is no consensus on how to model human (expert) judgement under uncertainty. These factors have led to a proliferation of quasi-probabilistic belief calculi which are widely used in practice. This paper investigates the descriptive and external validity of three well-known "belief languages": the Bayesian, ad-hoc Bayesian, and certainty factors languages. These models are implemented in many commercial expert system shells, and their validity is clearly an important issue for users and designers of expert systems. The methodology consists of a controlled, within-subject experiment designed to measure the relative performance of alternative belief languages. The experiment pits the judgement of human experts against the recommendations generated by their simulated expert systems, each using a different belief language. Special emphasis is given to the general issues of validating belief languages and expert systems at large. Information Systems Working Papers Series
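The contrast between the belief languages compared here can be made concrete with a small sketch. The priors, likelihood ratios, and rule CFs below are illustrative numbers, not values from the paper; the point is only that the two calculi update belief differently given comparable evidence.

```python
def bayes_update(prior, lr):
    """Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def cf_combine(cf1, cf2):
    """Certainty-factor combination (both-positive case of the MYCIN rule)."""
    return cf1 + cf2 * (1 - cf1)

# Bayesian language: prior 0.1, two evidence items with likelihood ratios 4 and 2.5
p = bayes_update(bayes_update(0.1, 4.0), 2.5)

# Certainty-factors language: two rules each lending CF 0.4 to the same hypothesis
cf = cf_combine(0.4, 0.4)

print(round(p, 3), cf)  # 0.526 0.64
```

The Bayesian result depends on the prior and multiplies evidence in odds space, while the CF result is prior-free and asymptotically approaches 1; the experiment described above measures which behavior better tracks expert judgement.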
Certainty Factor-based Expert System for Meat Classification within an Enterprise Resource Planning Framework
The demand for halal products in the Islamic context remains high, requiring adherence to halal and haram laws in consuming food and beverages. However, individuals face the challenge of distinguishing haram meat from permissible halal meat. This study addresses that challenge by designing an expert system application within an ERP framework, increasing the usability of the system, that can differentiate between beef, pork, or a mixture of both based on the physical characteristics of the meat. The aim is to determine halal products permissible for consumption by Muslims. The research methodology includes a data collection process involving 30 meat samples from various sources, and the criteria used to classify the meat are determined from an analysis of its physical characteristics. System administrators use the expert system to ensure proper treatment of meat during administrative processes, including separating halal beef from pork and applying different inventory procedures. The Certainty Factor (CF) inference engine deals with uncertainty, and the expert system's accuracy is relatively good across several rules; however, these results require further study because the approach relies on expert opinion, so the CF values must be set correctly for accurate classification. The CF inference engine facilitates reasoned conclusions in meat classification. Functional testing confirms the smooth running of the system, validating its reliability and performance. The accuracy assessment yields a commendable rate of 90%, and the expert system performs robustly on various meat samples, classifying meat types with high precision. This study highlights the design of an expert system for meat classification in determining halal products using the Certainty Factor method.
In conclusion, this expert system provides an efficient and reliable approach to classifying meat and supports the production and consumption of halal products according to Islamic principles.
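A CF inference engine of the kind described can be sketched minimally as follows. The combination formulas are the standard MYCIN-style rules; the features, hypotheses, and CF values are invented for illustration and are not the study's actual rule base.

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors for one hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def classify(evidence, rules):
    """Accumulate CFs per hypothesis from every rule whose premise is observed.
    rules: list of (feature, hypothesis, rule_cf); evidence: feature -> user CF."""
    belief = {}
    for feature, hypothesis, rule_cf in rules:
        if feature in evidence:
            cf = rule_cf * evidence[feature]   # CF(conclusion) = CF(rule) * CF(evidence)
            prev = belief.get(hypothesis)
            belief[hypothesis] = cf if prev is None else combine_cf(prev, cf)
    return belief

# Illustrative rules linking physical traits to meat type (CF values are made up)
rules = [
    ("coarse_fiber", "beef", 0.8),
    ("dark_red_color", "beef", 0.6),
    ("pale_pink_color", "pork", 0.7),
]
evidence = {"coarse_fiber": 1.0, "dark_red_color": 0.8}
print(classify(evidence, rules))  # {'beef': 0.896}  (0.8 + 0.48 * (1 - 0.8))
```

As the abstract notes, the output is only as good as the expert-assigned rule CFs, which is why calibrating those values matters for classification accuracy.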
Novel Rule Base Development from IED-Resident Big Data for Protective Relay Analysis Expert System
Many Expert Systems for intelligent electronic device (IED) performance analysis, such as those for protective relays, have been developed to ascertain operations, maximize availability, and subsequently minimize misoperation risks. However, manual handling of the overwhelming volume of relay-resident big data, together with heavy dependence on protection experts' contrasting knowledge and voluminous relay manuals, has hindered the maintenance of these Expert Systems. Thus, the objective of this chapter is to study the design of an Expert System called the Protective Relay Analysis System (PRAY), which is embedded with a rule base construction module. This module provides the facility to intelligently maintain the knowledge base of PRAY through the prior discovery of relay operation (association) rules from a novel integrated data mining approach combining Rough-Set-Genetic-Algorithm-based rule discovery with a Rule Quality Measure. The developed PRAY runs its relay analysis by first validating whether a protective relay under test operates correctly as expected, by way of comparison between hypothesized and actual relay behavior. In the case of relay maloperations or misoperations, it diagnoses the presented symptoms by identifying their causes. This study illustrates how, with such hybrid-data-mining-based knowledge base maintenance of an Expert System, power utility entities can conveniently carry out regular and rigorous analyses of protective relay performance.
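One simple stand-in for the rule quality scoring mentioned above is support and confidence computed over event records; the toy relay events and the boolean attributes below are invented, and the chapter's actual Rule Quality Measure may differ.

```python
# Hypothetical relay event records: (pickup, breaker_tripped, fault_cleared)
events = [
    (1, 1, 1), (1, 1, 1), (1, 0, 0), (0, 0, 0), (1, 1, 0), (0, 0, 0),
]

def rule_quality(events, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent,
    a basic proxy for scoring candidate association rules."""
    covered = [e for e in events if antecedent(e)]
    correct = [e for e in covered if consequent(e)]
    support = len(correct) / len(events)
    confidence = len(correct) / len(covered) if covered else 0.0
    return support, confidence

# Candidate rule: relay pickup implies breaker trip
s, c = rule_quality(events, lambda e: e[0] == 1, lambda e: e[1] == 1)
print(s, c)  # 0.5 0.75
```

In the chapter's scheme, rough-set reduction and a genetic algorithm generate the candidate rules; a quality measure like this one then filters which rules enter the knowledge base.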
Validation of Expert Systems: Personal Choice Expert -- A Flexible Employee Benefit System
A method for validating expert systems, based on the psychological validation literature and Turing's imitation game, is applied to a flexible benefits expert system. Expert system validation entails determining whether a difference exists between expert and novice decisions (construct validity), whether the system uses the same inputs and processes to make its decisions as experts (content validity), and whether the system produces the same results as experts (criterion-related validity). If these criteria are satisfied, then the system is indistinguishable from experts in its domain and satisfies Turing's imitation game.
The methods developed in this paper are applied to a human resource expert system, Personal Choice Expert (PCE), designed to help employees choose a benefits package in a flexible benefits system. Expert and novice recommendations are compared to those generated by PCE. PCE's recommendations do not significantly differ from those given by experts. High inter-expert agreement exists for some benefit recommendations (e.g. Dental Care and Long-Term Disability) but not for others (e.g. Short-Term Disability and Life Insurance). Insights offered by this method are illustrated and examined.
Rule Based Forecasting [RBF] - Improving Efficacy of Judgmental Forecasts Using Simplified Expert Rules
Rule-based Forecasting (RBF) has emerged as an effective forecasting model compared to well-accepted benchmarks. However, the original RBF model, introduced in 1992, incorporates 99 production rules and is therefore difficult to apply judgmentally. In this research study, we present a core rule set from RBF that can be used to inform both judgmental forecasting practice and pedagogy. The simplified rule set, called coreRBF, is validated by asking forecasters to judgmentally apply the rules to time series forecasting tasks. Results demonstrate that forecasting accuracy from judgmental use of coreRBF is not statistically different from that reported for similar applications of RBF. Further, we benchmarked these coreRBF forecasts against forecasts from (a) untrained forecasters, (b) an expert system based on RBF, and (c) the original 1992 RBF study. Forecast accuracies were in the hypothesized direction, arguing for the generalizability and validity of the coreRBF rules.
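The flavour of a rule-based forecast, where simple rules adjust how component forecasts are blended, can be sketched as follows. This toy rule (damp the trend when the latest movement contradicts the long-run direction) is invented for illustration and is NOT one of the published coreRBF rules.

```python
def toy_rule_based_forecast(series, horizon=1):
    """Blend a random-walk level with a long-run trend, damping the trend
    when the most recent change disagrees with it (an illustrative rule)."""
    last = series[-1]
    trend = (series[-1] - series[0]) / (len(series) - 1)  # long-run slope
    recent = series[-1] - series[-2]                      # latest change
    damp = 0.5 if recent * trend < 0 else 1.0             # rule fires on conflict
    return last + damp * trend * horizon

print(toy_rule_based_forecast([10, 12, 13, 15, 14]))  # 14.5 (trend damped)
print(toy_rule_based_forecast([10, 12, 14]))          # 16.0 (trend kept)
```

The actual RBF and coreRBF rule sets condition on richer series features (causal forces, instability, seasonality), but each rule has this same shape: a condition on the series, and an adjustment to the combined forecast.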
Diabetes Prediction Using Artificial Neural Network
Diabetes is one of the most common diseases worldwide, and no cure has yet been found for it. Caring for people with diabetes costs a great deal of money every year, so it is important that prediction be accurate and based on a reliable method. One such method is the use of artificial intelligence systems, in particular Artificial Neural Networks (ANN). In this paper, we used an artificial neural network to predict whether a person is diabetic or not. The criterion was to minimize the error function in neural network training. After training the ANN model, the average error of the neural network was 0.01 and the accuracy of predicting whether a person is diabetic was 87.3%.
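The training setup described above can be sketched with a minimal feed-forward network trained by gradient descent. The data here is synthetic (the paper used real patient attributes), the architecture and hyperparameters are illustrative, and cross-entropy is used as the error function for a clean gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a diabetes dataset: 2 features, binary label
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (2 -> 8 -> 1), full-batch gradient descent
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probability of diabetes
    dz2 = (p - y) / len(X)          # gradient of mean cross-entropy at output
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = dz2 @ W2.T * h * (1 - h)  # backpropagate through hidden layer
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.3f}")
```

On real clinical data one would also hold out a test set and tune the architecture; the paper's reported 87.3% accuracy refers to its own dataset, not this sketch.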