1,543,883 research outputs found
An hierarchical approach to performance evaluation of expert systems
The number and size of expert systems are growing rapidly. Formal evaluation of these systems - which many systems never undergo - increases their acceptability to the user community and hence their success. Hierarchical evaluation, previously conducted for computer systems, is applied here to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.
Building an Expert System for Evaluation of Commercial Cloud Services
Commercial Cloud services are increasingly supplied to customers in industry. To facilitate customers' decision making, such as cost-benefit analysis or Cloud provider selection, evaluation of those Cloud services is becoming more and more crucial. However, compared with the evaluation of traditional computing systems, more challenges inevitably appear when evaluating rapidly-changing and user-uncontrollable commercial Cloud services. This paper proposes an expert system for Cloud evaluation that addresses emerging evaluation challenges in the context of Cloud Computing. Based on the knowledge and data accumulated by exploring the existing evaluation work, this expert system has been conceptually validated to be able to give suggestions and guidelines for implementing new evaluation experiments. As such, users can conveniently obtain evaluation experience by using this expert system, which essentially makes existing efforts in Cloud services evaluation reusable and sustainable.
Comment: 8 pages, Proceedings of the 2012 International Conference on Cloud and Service Computing (CSC 2012), pp. 168-175, Shanghai, China, November 22-24, 2012
Expert systems built by the Expert: An evaluation of OPS5
Two expert systems were written in OPS5 by the expert, a Ph.D. astronomer with no prior experience in artificial intelligence or expert systems, without the use of a knowledge engineer. The first system was built from scratch and uses 146 rules to check for duplication of scientific information within a pool of prospective observations. The second system was grafted onto another expert system and uses 149 additional rules to estimate the spacecraft and ground resources consumed by a set of prospective observations. The small vocabulary, the "IF this occurs THEN do that" logical structure of OPS5, and the ability to follow program execution allowed the expert to design and implement these systems with only the data structures and rules of another OPS5 system as an example. The modularity of the rules in OPS5 allowed the second system to modify the rulebase of the system onto which it was grafted without changing the code or the operation of that system. These experiences show that experts are able to develop their own expert systems due to the ease of programming and code reusability in OPS5.
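The rule modularity described above - independent IF-THEN rules that a second system can graft onto an existing rulebase without touching it - can be illustrated with a minimal forward-chaining sketch. This is not OPS5 (which matches structured working-memory elements, not plain facts), and the rule names and facts are invented for illustration:

```python
# Minimal forward-chaining rule engine in the spirit of OPS5's
# "IF this occurs THEN do that" rules. Illustrative sketch only;
# rule names and facts are invented, not taken from the paper.

class RuleBase:
    def __init__(self):
        self.rules = []  # each rule: (name, condition, action)

    def add_rule(self, name, condition, action):
        # Rules are independent units, so another system can graft
        # new rules on without changing existing rules or code.
        self.rules.append((name, condition, action))

    def run(self, facts):
        """Fire rules repeatedly until no rule adds a new fact."""
        changed = True
        while changed:
            changed = False
            for name, cond, act in self.rules:
                derived = act(facts) if cond(facts) else set()
                if not derived <= facts:
                    facts |= derived
                    changed = True
        return facts

rb = RuleBase()
rb.add_rule("duplicate-check",
            lambda f: "obs-A" in f and "obs-A-copy" in f,
            lambda f: {"duplication-flagged"})
facts = rb.run({"obs-A", "obs-A-copy"})
```

A grafted system would simply call `add_rule` with its own rules; the original rules and their behavior are untouched, mirroring how the second OPS5 system extended the first.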
Measuring and comparing the reliability of the structured walkthrough evaluation method with novices and experts
Effective evaluation of websites for accessibility remains problematic. Automated evaluation tools still require a significant manual element, and there is also a significant expertise and evaluator effect. The Structured Walkthrough method is the translation of a manual, expert accessibility evaluation process adapted for use by novices. The method is embedded in the Accessibility Evaluation Assistant (AEA), a web accessibility knowledge management tool. Previous trials examined the pedagogical potential of the tool when incorporated into an undergraduate computing curriculum. The evaluations carried out by novices yielded promising, consistent levels of validity and reliability. This paper presents the results of an empirical study that compares the reliability of accessibility evaluations produced by two groups (novices and experts). The main results of this study indicate that the overall reliability of expert evaluations was 76%, compared to 65% for evaluations produced by novices. The potential of the Structured Walkthrough method as a useful and viable tool for expert evaluators is also examined. Copyright 2014 ACM.
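One simple way to read reliability figures like the 76% vs. 65% above is as percent agreement between an evaluator's verdicts and a reference evaluation. The sketch below assumes that reading; the checkpoint names and verdicts are invented, and the paper's actual reliability measure may differ:

```python
# Percent agreement between one evaluation and a reference evaluation.
# Hypothetical checkpoints and verdicts for illustration only.

def percent_agreement(evaluation, reference):
    """Share of checkpoints where the verdicts match, as a percentage."""
    matches = sum(1 for checkpoint, verdict in reference.items()
                  if evaluation.get(checkpoint) == verdict)
    return 100.0 * matches / len(reference)

reference = {"alt-text": "fail", "headings": "pass",
             "contrast": "fail", "labels": "pass"}
novice = {"alt-text": "fail", "headings": "pass",
          "contrast": "pass", "labels": "pass"}
print(percent_agreement(novice, reference))  # 75.0
```

Agreement measures like this are sensitive to how checkpoints are weighted and whether partial matches count, which is one reason evaluator effects show up so strongly in accessibility studies.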
Expert system verification and validation study. Delivery 1: Survey and interview questions
The NASA-funded questionnaire is presented to help define the state of the practice in the formal evaluation of Expert Systems on current NASA and industry applications. The answers to this questionnaire, together with follow-up interviews, will provide realistic answers to the following questions: (1) How much evaluation is being performed? (2) What evaluation techniques are in use? and (3) What, if any, are the unique issues in evaluating Expert Systems?
ORGAP Project – Evaluation toolbox for the evaluation of action plans for organic food and farming
The ORGAP project has developed an evaluation toolbox for the evaluation of European and/or national action plans, based on analysis of national action plans and expert/stakeholder consultation.
Reliability and performance evaluation of systems containing embedded rule-based expert systems
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
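The first stage above builds a Markov reliability model. The sketch below shows what such a model computes in the discrete-time case: propagate a state distribution through a transition matrix and sum the probability mass remaining in operational states. The states and transition probabilities are invented assumptions, not taken from the paper:

```python
# Tiny discrete-time Markov reliability model. The three states and
# their transition probabilities are illustrative assumptions only.
# States: 0 = nominal, 1 = degraded (knowledge-base uncertainty),
#         2 = failed (absorbing).

def step(dist, P):
    """Advance a state distribution one time step: dist' = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def reliability(P, start, operational, steps):
    """Probability mass remaining in operational states after `steps`."""
    dist = list(start)
    for _ in range(steps):
        dist = step(dist, P)
    return sum(dist[s] for s in operational)

P = [
    [0.95, 0.04, 0.01],   # nominal -> nominal / degraded / failed
    [0.00, 0.90, 0.10],   # degraded -> degraded / failed
    [0.00, 0.00, 1.00],   # failed is absorbing
]

print(round(reliability(P, [1.0, 0.0, 0.0], {0, 1}, 10), 4))
```

In the paper's second stage, the transition probabilities themselves would come from a probabilistic model of knowledge-base uncertainties or from sensitivity analyses, rather than being fixed constants as in this sketch.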
Developing Mathematics-Students Worksheet Based On Realistic Approach For Junior High School In Bilingual Program
The aim of this study is to develop a mathematics-student worksheet based on a realistic approach for junior high school in a bilingual program. The criteria used for the resulting product are validity and practicality.
This research applied a Research and Development design adapted from the development model of Borg and Gall (1983). Its stages are (1) needs analysis, (2) product development, (3) organization of the learning material prototype, (4) trial run, (5) product revision, and (6) final result.
In its development, the mathematics-student worksheet was tested through formative evaluation covering several stages: content or material expert review, instructional design expert review, instructional media expert review, individual test, small group test, and field test. The formative evaluation produced suggestions, responses, and assessments used as feedback in revising and finishing the mathematics-student worksheet.
The content or material expert review, the instructional design expert review, and the instructional media expert review of the mathematics-student worksheet all resulted in the "very good" category, with a percentage of 90%. According to the students' responses, the worksheet was very interesting and exciting and helped the students comprehend the concepts.
Key Words: Student worksheet, Realistic Mathematics
