Second CLIPS Conference Proceedings, volume 1
Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems
Enabling flexibility through strategic management of complex engineering systems
“Flexibility is a highly desired attribute of many systems operating in changing or uncertain conditions. It is a common theme in complex systems to identify where flexibility is generated within a system and how to model the processes needed to maintain and sustain flexibility. The key research question that is addressed is: how do we create a new definition of workforce flexibility within a human-technology-artificial intelligence environment?
Workforce flexibility is the management of organizational labor capacities and capabilities in operational environments using a broad and diffuse set of tools and approaches to mitigate system imbalances caused by uncertainties or changes. We establish a baseline reference for managers to use in choosing flexibility methods for specific applications and we determine the scope and effectiveness of these traditional flexibility methods.
The unique contributions of this research are: a) a new definition of workforce flexibility for a human-technology work environment versus traditional definitions; b) using a system of systems (SoS) approach to create and sustain that flexibility; and c) applying a coordinating strategy for optimal workforce flexibility within the human-technology framework. This dissertation research fills the gap of how we can model flexibility using SoS engineering to show where flexibility emerges and what strategies a manager can use to manage flexibility within this technology construct”--Abstract, page iii
Third Conference on Artificial Intelligence for Space Applications, part 1
The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed
State-of-the-Art Report on Systems Analysis Methods for Resolution of Conflicts in Water Resources Management
Water is an important factor in conflicts among stakeholders at the local, regional, and even international level. Water conflicts have taken many forms, but they almost always arise from the fact that the freshwater resources of the world are neither partitioned to match political borders nor evenly distributed in space and time. Two or more countries share the watersheds of 261 major rivers, and nearly half of the land area of the world lies in international river basins. Water has been a military and political goal. Water has been a weapon of war. Water systems have been targets during war. The role of the systems approach in resolving conflicts over water has been investigated in this report. A review of the systems approach provides basic knowledge of tools and techniques as they apply to water management and conflict resolution. The report provides a classification and description of water conflicts by addressing issues of scale, integrated water management, and the role of stakeholders. Four large-scale examples are selected to illustrate the application of the systems approach to water conflicts: (a) hydropower development in Canada; (b) multipurpose use of the Danube River in Europe; (c) an international water conflict between the USA and Canada; and (d) the Aral Sea in Asia. The water conflict resolution process involves various sources of uncertainty. One section of the report provides examples of systems tools that can be used to address objective and subjective uncertainties, with special emphasis on the utility of fuzzy set theory. Systems analysis is known to be driven by the development of computer technology. The last section of the report provides one view of the future and of the systems tools that will be used for water resources management. The role of virtual databases and of computer and communication networks is investigated in the context of water conflicts and their resolution.
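The report's use of fuzzy set theory for subjective uncertainty can be illustrated with a minimal sketch. This is not the report's model; the triangular membership function and the reservoir-level example are hypothetical, standard illustrations of how a crisp value is mapped to a degree of membership.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function: rises from 0 at a to 1 at b,
    then falls back to 0 at c. A common way to encode a subjective notion
    such as 'acceptable reservoir level'."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical: 'acceptable' level is fully satisfied at 5 m,
# and not at all below 0 m or above 10 m.
print(tri_membership(5.0, 0.0, 5.0, 10.0))   # 1.0 (ideal level)
print(tri_membership(7.5, 0.0, 5.0, 10.0))   # 0.5 (partially acceptable)
```

Stakeholders' conflicting, vaguely stated preferences can then be compared on the common [0, 1] scale of membership degrees rather than forced into crisp thresholds.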
Fault isolation detection expert (FIDEX). Part 1: Expert system diagnostics for a 30/20 Gigahertz satellite transponder
LeRC has recently completed the design of a Ka-band satellite transponder system, as part of the Advanced Communication Technology Satellite (ACTS) System. To enhance the reliability of this satellite, NASA funded the University of Akron to explore the application of an expert system to provide the transponder with an autonomous diagnosis capability. The result of this research was the development of a prototype diagnosis expert system called FIDEX (fault-isolation and diagnosis expert). FIDEX is a frame-based expert system that was developed in the NEXPERT Object development environment by Neuron Data, Inc. It is a Microsoft Windows version 3.0 application and was designed to operate on an Intel i80386-based personal computer system
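The frame-based style of knowledge representation described above can be sketched minimally. The class, slot names, and component values below are illustrative assumptions, not FIDEX's actual knowledge base; the point is only the characteristic frame mechanism of slots inherited down a hierarchy.

```python
class Frame:
    """A minimal frame: named slots, with inheritance from a parent frame."""
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Look in this frame's own slots first, then walk up the hierarchy.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

# Hypothetical hierarchy: a generic transponder component and a receiver
# that inherits the default expected status but adds its own slot.
component = Frame("component", expected_status="nominal")
receiver = Frame("receiver", parent=component, gain_db=30)

print(receiver.get("expected_status"))  # inherited from 'component': nominal
print(receiver.get("gain_db"))          # local slot: 30
```

Inheritance is what makes frames attractive for diagnosing structured hardware: defaults stated once for a component class apply to every specific unit unless overridden.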
Knowledge-based automatic tolerance analysis system
Tolerance measure is an important part of engineering; however, to date the application of this important technology has been left to the assessment of the engineer using appropriate guidelines. This work offers a major departure from the trial-and-error or random-number-generation techniques that have been used previously, by using a knowledge-based system to ensure intelligent optimisation within the manufacturing system. A system to optimise the manufacturing tolerance allocation of a part, known as Knowledge-based Automatic Tolerance Analysis (KATA), has been developed. KATA is a knowledge-based system shell built within AutoCAD. It has the ability to create geometry in CAD and the capability to optimise tolerances heuristically as an expert system. Besides the worst-case tolerancing equation used to optimise the tolerance allocation, KATA's algorithm is supported by actual production information such as machine capability, types of cutting tools, materials, process capabilities, etc. KATA's prototype is currently able to analyse a cylindrical workpiece and a simple prismatic part. Analyses of tolerance include dimensional tolerance and geometrical tolerance. KATA is also able to handle angular cuts such as tapers and chamfers. The investigation has also led to the significant development of the single-tolerance-reference technique. This method departs from the common practice of the multiple-tolerance-referencing technique for optimising tolerance allocation. Utilisation of this new technique has eradicated the error of tolerance stack-up. Three tests have been undertaken: two are cylindrical parts meant to test dimensional tolerance and an angular cut, and the third is a simple prismatic part to experiment with the geometrical tolerance analysis.
The ability to optimise tolerance allocation is based on real production data rather than imaginary or random-number generation, and this has improved the accuracy of the expected result after manufacturing. Any failure caused by machining parameters is flagged at an early stage, before an actual production run has commenced. Thus, the manufacturer is assured that the manufactured product will be within the required tolerance limits. Being the central database for all production-capability information enables KATA to opt for several approaches and techniques of processing, giving the user the flexibility to select the process plan best suited to any required situation
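The worst-case tolerancing equation mentioned above can be sketched as follows. The dimension values are illustrative, not taken from the thesis; the root-sum-square variant is included only as the common statistical alternative for contrast, not as a claim about KATA.

```python
def worst_case_stack(tolerances):
    """Worst-case assembly tolerance: the simple sum of the component
    tolerances in the stack (every feature at its limit simultaneously)."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Statistical (root-sum-square) stack-up, a common, less conservative
    alternative to the worst-case equation."""
    return sum(t ** 2 for t in tolerances) ** 0.5

# Hypothetical three-feature stack on a turned part, tolerances in mm.
component_tols = [0.05, 0.02, 0.03]
print(worst_case_stack(component_tols))  # worst case: 0.10 mm total
print(rss_stack(component_tols))         # statistical stack, smaller
```

An allocation routine in the spirit of KATA would compare such a stack against the assembly's required limit and against recorded machine capabilities, tightening or relaxing individual feature tolerances until both are satisfied.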
A Comparative Analysis of Design Techniques for the Construction of an Expert System for Aircraft Engine Diagnostics
A lack of knowledge and understanding in diagnosing aircraft propulsion systems leads to inappropriate problem diagnoses. Because of increasing complexity, technicians are incapable of performing the necessary tasks in accordance with standard regulations. More sophisticated systems are needed today to assist the user technician in decision-making. This work provided a study of rule-based and frame-based expert system techniques to determine the most appropriate solution in the domain of complex diagnosis using large amounts of deterministic data. The study produced a framework that facilitates the diagnosing of faults on aircraft engines, thus reducing the burden on the aircraft mechanic regardless of experience level.
An intelligent system, the Virtually Automated Maintenance Analysis System (VAMAS), was created as a test model. It was used to compare the relative efficiency of the different expert systems techniques and the effectiveness of expert systems. One aviation malfunction problem was identified. Information collected for the Main Ignition Malfunction was developed into question sets and coded. Six specific subsets of problems were addressed.
This research compared the rule-based and frame-based knowledge representation techniques using a set of evaluation factors: computational efficiency, correctness, expressiveness, and consistency. From the analysis it was concluded that the frame-based knowledge representation technique ranked higher than the rule-based representation, and is suitable for use with an expert system to represent an aircraft propulsion system's deterministic data
Advancing automation and robotics technology for the Space Station and for the US economy, volume 2
In response to Public Law 98-371, dated July 18, 1984, the NASA Advanced Technology Advisory Committee has studied automation and robotics for use in the Space Station. The Technical Report, Volume 2, provides background information on automation and robotics technologies and their potential, and documents: the relevant aspects of Space Station design; representative examples of automation and robotics applications; the state of the technology and advances needed; and considerations for technology transfer to U.S. industry and for space commercialization
Fifth Conference on Artificial Intelligence for Space Applications
The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration
The Effectiveness of t-Way Test Data Generation
Modern society is increasingly dependent on the correct functioning of software and increasingly so in areas that are considered safety related or safety critical. Therefore, there is an increasing need to be able to verify and validate that the software is in fact correct and will perform its intended function. Many approaches to this problem have been proposed; however, none seems likely to supplant the role of testing in the near future.
If we accept that there is, and will be, a continuing need to be able to test software then the question becomes one of how can this be done effectively, both in terms of ability to detect errors and in terms of cost. One avenue of research that offers prospects of improving both of these aspects is the automatic generation of test data.
There has recently been a large amount of work conducted in this area. One particularly promising direction has been the application of ideas from the field of experimental design and in particular, the field of t-way adequate factorial designs.
The area, however, is not without issues; there is evidence that the technique is capable of detecting errors, but that evidence is not unequivocal. Moreover, as with almost all work in the area of automatic test generation, there has been very little comparative work evaluating the technique against other test data generation techniques. Worse, there has been effectively no work done that compares any automatic test data generation technique with the effectiveness of tests generated by humans. Another major issue with the technique is the number of tests that applying it can produce. This implies that there is a need for an automated oracle if the technique is to be successfully applied. The flaw with this is of course that in most situations the oracle is the human conducting the tests, a point often ignored in testing research.
The work presented here addresses both of these points. To do this I have used a code base taken from an industrial engine control system that has an existing set of high-quality unit tests developed by hand. To complement this, several other techniques for automatically generating test data have been applied, namely random testing, random experimental designs and a technique for generating single factor experiments. To address the issue of being able to compare the error detection ability of all of the sets of test vectors, rather than the usual effectiveness surrogate of code coverage, I have used mutation analysis on the code base to directly measure the ability of each set of test vectors to discover common coding errors. The results presented here show that test data generation techniques based on t-way factorial designs are at least as effective as hand-generated tests and superior to random testing and the factor experimental technique.
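The idea behind t-way test generation can be sketched for the pairwise (t = 2) case. This is a textbook greedy construction, not the thesis's generator; it exhaustively scores candidate tests, so it is only practical for small parameter spaces, but it shows why a covering array is far smaller than the exhaustive cross product while still exercising every pair of parameter values.

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedy 2-way (pairwise) covering array for a list of factors,
    each given as a list of its possible values."""
    # Every (factor i = value a, factor j = value b) pair must appear
    # in at least one test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(params), 2):
        for va, vb in product(a, b):
            uncovered.add((i, va, j, vb))

    tests = []
    while uncovered:
        # Pick the full candidate test that covers the most remaining pairs.
        best, best_cov = None, -1
        for cand in product(*params):
            cov = sum(1 for (i, va, j, vb) in uncovered
                      if cand[i] == va and cand[j] == vb)
            if cov > best_cov:
                best, best_cov = cand, cov
        tests.append(best)
        uncovered = {(i, va, j, vb) for (i, va, j, vb) in uncovered
                     if not (best[i] == va and best[j] == vb)}
    return tests

# Three binary factors: 8 exhaustive tests, but far fewer cover all pairs.
suite = pairwise_tests([[0, 1], [0, 1], [0, 1]])
print(len(suite))  # smaller than the 8 exhaustive combinations
```

The saving grows quickly with the number of factors, which is exactly why the resulting suites still need an oracle: even the reduced suite can be too large to check by hand.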
The oracle problem associated with the factorial design techniques was addressed using a test set minimisation approach. The mutation tool monitored which vectors could “kill” which code mutants. After a subset of the test vectors had been run, the most effective vectors were retained and the rest discarded. Likewise, mutants that were killed were removed from further consideration and the process repeated. Experimental results show that this minimisation procedure is effective at reducing computational overhead and is capable of producing final sets of test vectors that are comparable in size with the sets of hand-generated tests and so amenable to final hand checking
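The minimisation loop described above — keep the test vectors that kill the most still-live mutants, discard the rest, repeat — is essentially a greedy set cover over the kill matrix. The sketch below is an assumed reconstruction of that procedure, with a made-up kill matrix; the thesis's tooling and data are not reproduced here.

```python
def minimise(kills):
    """Greedy test-set minimisation over a mutation kill matrix.
    kills maps each test id to the set of mutant ids it kills;
    returns a small subset of tests killing every killable mutant."""
    remaining = set().union(*kills.values())  # all killable mutants
    kept = []
    while remaining:
        # Retain the test that kills the most mutants still alive.
        best = max(kills, key=lambda t: len(kills[t] & remaining))
        if not kills[best] & remaining:
            break  # no test kills the leftovers (surviving mutants)
        kept.append(best)
        remaining -= kills[best]  # those mutants need no further tests
    return kept

# Hypothetical kill matrix: t1 and t2 together cover all three mutants,
# so t3 is redundant and gets discarded.
example = {"t1": {1, 2}, "t2": {2, 3}, "t3": {3}}
print(minimise(example))
```

Because each kept vector must earn its place by killing otherwise-unkilled mutants, the final suite stays small enough for the human oracle to check by hand, which is the point made above.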