SHELL-BASED EXPERT SYSTEMS IN BUSINESS: A RETURN ON INVESTMENT PERSPECTIVE
This paper examines an important issue emerging in information systems management--the decision to proceed with an expert system application in a business setting. The focus is knowledge-based systems at the lower end of the complexity spectrum--small, tightly focused systems that can be implemented with shell-based development environments. This group represents the majority of expert systems currently being implemented and has characteristics quite different from those of larger systems. A classification scheme is suggested to differentiate three levels of ES development, from multi-million-dollar life-cycle-cost ES environments to those in the low five-figure range. The Low End segment of the range, the focus of this paper, is characterized by lower unit costs, powerful development tools and a large number of small, successful applications. The important role of Low End systems is discussed, with particular emphasis on their relatively high yield in standalone applications. Such systems do not meet the AI demands of moderately or very complex problems, but there is a surprising breadth in their use. A group of key success factors for Low End systems is proposed, based on a synthesis of the applications literature. To operationalize these factors, three actual cases using Low End technology--from marketing, government and agribusiness--are briefly described. Low End systems are not all gain: their low unit costs can often mask the risks of proceeding headlong into an application without careful examination of the variables that can predict successful results. An agenda for action is offered, proposing specific management policies for the planning of knowledge-based applications.
Application of knowledge-based techniques to fault diagnosis of 16 QAM digital microwave radio equipment
SIGLE. Available from British Library Document Supply Centre - DSC:D86372 / BLDSC - British Library Document Supply Centre, GB, United Kingdom.
The development of an expert systems approach to the statistical analysis of experimental data
This thesis is concerned with the application of expert systems techniques in the field of statistics. An expert statistician in industry has a twofold role: undertaking the design and analysis of data from complex experiments, and providing supervision and help for research workers who analyse data from simpler designs. There is, therefore, a potential role for a statistical expert system which research workers could use to carry out valid analyses. The expert statistician would be freed from the more straightforward analyses and would only need to deal with referrals from the system and to 'tune' the system initially to their own application area. The design and development of such a prototype expert system, THESEUS, is the basis of this work.
The area of application chosen for the prototype system is completely randomised designs with one trial factor. It was initially important to limit the area of study so that knowledge acquisition for the system would be a manageable task. However, once the difficulties in developing an expert system had been tackled, much of the expertise used in analysing this simple type of study could be readily extended to more complex designs.
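As an illustration of the kind of analysis involved, the following is a minimal Python sketch of a one-way ANOVA for a completely randomised design with one trial factor. The data and group names are invented for illustration; the thesis prototype itself called statistical routines written in Pascal.

```python
# One-way ANOVA for a completely randomised design with one trial
# factor: the routine analysis the expert system is meant to guide.
# (Sketch with invented data; not code from the THESEUS prototype.)

groups = {
    "control": [4.1, 3.8, 4.4, 4.0],
    "treat_a": [5.2, 5.6, 4.9, 5.3],
    "treat_b": [4.6, 4.2, 4.8, 4.5],
}

all_obs = [x for g in groups.values() for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)

# F ratio: between-group mean square over within-group mean square.
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}")
```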
The knowledge acquisition phase, the most time-consuming part of developing any expert system, concentrated on developing a rational prototype rule base by reviewing the available literature, interviewing practising statisticians and holding workshops where the analysis of particular data sets was discussed.
The prototype software is a production rule system and is written in Turbo Pascal on an IBM-AT. Pascal was chosen because of the need to access statistical routines during the consultation process. The prototype uses a combination of forward and backward chaining to process the rules. Information required by the system can come from the user, the data or the rules.
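To illustrate the control strategy, here is a minimal sketch, in Python rather than the Turbo Pascal of the prototype, of a production-rule system that combines forward and backward chaining. All rule and fact names are hypothetical, not taken from THESEUS.

```python
# Minimal production-rule engine mixing forward and backward chaining.
# (Illustrative sketch; rule and fact names are invented.)

# A rule: if all premises hold, the conclusion holds.
RULES = [
    ({"one_trial_factor", "residuals_normal"}, "use_anova"),
    ({"residuals_skewed"}, "transform_data"),
]

def forward_chain(facts):
    """Apply every rule repeatedly until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, ask):
    """Prove `goal` from known facts, via rules, or by asking the user."""
    if goal in facts:
        return True
    for premises, conclusion in RULES:
        if conclusion == goal and all(
            backward_chain(p, facts, ask) for p in premises
        ):
            facts.add(goal)
            return True
    # As in the prototype, missing information is requested from the user.
    if ask(goal):
        facts.add(goal)
        return True
    return False

if __name__ == "__main__":
    facts = {"one_trial_factor"}
    print(forward_chain(facts))
    ask = lambda g: input(f"Is '{g}' true? (y/n) ").strip() == "y"
    print(backward_chain("use_anova", set(facts), ask))
```

The engine works forward from the data where it can and falls back to goal-directed questioning where it cannot, matching the abstract's note that information may come from the user, the data or the rules.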
The overall system design also includes facilities for entering and editing data, altering and adding knowledge, and a report generator. Implementation of these facilities does not form part of this thesis.
A small number of trial sites were selected for industrial trials in order to validate the system and to evaluate the results of the local experts' 'tuning' of the rule base to their own particular application areas.
Logic, parallelism and semantic networks: the binary predicate execution model
This thesis develops the Binary Predicate Execution Model: a distributed, massively parallel system for semantic networks and knowledge bases that is built on a subset of first-order predicate logic. The use of logic gives the model an easily understood programming paradigm and a well-defined semantics of execution. When knowledge is expressed in binary predicates, a simple graphical interpretation can be used. All program facts are represented in an assertion graph: each vertex is associated with a term appearing in a fact, and the edges are labeled with the predicate names. Similar graphs are also associated with each rule body and with the query. Finding all possible solutions corresponds to finding all possible matches between the query graph and the assertion graph. Invoking a rule corresponds to substituting the graph of its body, constrained by the dependencies between its arguments. This can be implemented in a parallel, message-passing fashion in which the assertion-graph vertices are active processing elements that asynchronously exchange messages identifying the parts of the query that remain to be matched, together with any binding information from previous matching needed to accomplish this. The model is data-driven, since every message can be processed immediately without the need for centralized control or centralized memory. By restricting how functional terms can occur, distributed data structures and remote data look-ups for unification are eliminated; thus, in most cases the model's performance scales up on increasingly larger problems given increasingly larger machines. Architectural support for the model is investigated, and simulation results of a relatively simple software implementation are reported; these suggest performance on the order of 10^5 logical inferences per second for 256 processing elements in an n-cube configuration. Further research directions, including increasing efficiency, are discussed.
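To make the graphical interpretation concrete, the following is a minimal sequential sketch in Python: binary-predicate facts form a labeled edge set, and answering a conjunctive query amounts to matching its edge pattern against the assertion graph. The facts, variables and query are invented, and the model itself performs this matching through asynchronous message passing among vertex processors rather than the sequential search shown here.

```python
# Assertion-graph sketch: binary-predicate facts are labeled edges,
# and a conjunctive query is answered by matching its edges against
# the graph. (Illustrative only; the Binary Predicate Execution Model
# does this matching with asynchronous messages between vertices.)

# Facts: predicate(subject, object) stored as labeled edges.
FACTS = {
    ("parent", "ann", "bob"),
    ("parent", "bob", "cal"),
}

def match(query, binding=None):
    """Enumerate all bindings of query variables (strings starting
    with '?') against the assertion edges, one edge at a time."""
    binding = binding or {}
    if not query:
        yield binding
        return
    pred, s, o = query[0]
    for fp, fs, fo in FACTS:
        if fp != pred:
            continue
        b = dict(binding)
        ok = True
        for var, val in ((s, fs), (o, fo)):
            if var.startswith("?"):
                if b.setdefault(var, val) != val:
                    ok = False
            elif var != val:
                ok = False
        if ok:
            yield from match(query[1:], b)

# Query: grandparent(?x, ?z), expressed as two parent edges sharing ?y.
GRANDPARENT = [("parent", "?x", "?y"), ("parent", "?y", "?z")]

if __name__ == "__main__":
    for b in match(GRANDPARENT):
        print(b)  # {'?x': 'ann', '?y': 'bob', '?z': 'cal'}
```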