
    Propagation of uncertainty in a knowledge-based system to assess energy management strategies for new technologies

    The goal of this project is to investigate the propagation of uncertainty in a knowledge-based system that assesses energy management strategies for new gas and electric technologies that can help reduce energy consumption and demand. The new technologies investigated include lighting, electric heat pumps, motors, refrigerators, microwave clothes dryers, freeze concentration, electric vehicles, gas furnaces, gas heat pumps, engine-driven chillers, absorption chillers, and natural gas vehicles, distributed throughout the residential, commercial, industrial, and transportation sectors. The description of a complex assessment system may be simplified by allowing some degree of uncertainty. A number of uncertainty-representation mechanisms, such as probability theory, certainty factors, Dempster-Shafer theory, fuzzy logic, rough sets, non-numerical methods, and belief networks, were reviewed and compared. The proper treatment of uncertainty provides an effective and efficient way to represent knowledge. A knowledge-based system has been developed to assess the impacts of rebate programs on customer adoption of new technologies and, hence, the reductions in energy and demand. Three modes have been programmed: (1) one in which uncertainty is not considered, (2) one in which fuzzy logic with linguistic variables is used to represent uncertainty, and (3) one in which uncertainty is represented using Dempster-Shafer theory with basic probability assignments. A correlation among rebate, expected (energy) savings, and customer adoption is employed in the knowledge base. Predictions for annual adoption of a new technology are made for a specified useful life, rebate, and expected savings; alternatively, a suggested rebate can be determined for a specified useful life, expected savings, and annual adoption. With input for energy use and demand for each technology, the impacts of rebate programs on energy use and power demand can be evaluated. This report and the knowledge-based system should help utilities determine which new technologies are most promising and which strategies should be emphasized in their energy management programs
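    The third mode combines basic probability assignments from independent evidence sources with Dempster's rule of combination. A minimal sketch of that rule follows; the frame of discernment ("adopt"/"reject") and the mass values are illustrative, not taken from the report's knowledge base.

```python
def combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset hypotheses to mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # compatible evidence reinforces the intersection
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # contradictory evidence accumulates as conflict
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict  # normalisation constant
    return {h: m / k for h, m in combined.items()}

# Two hypothetical evidence sources about customer adoption of a technology.
ADOPT, REJECT = frozenset({"adopt"}), frozenset({"reject"})
EITHER = ADOPT | REJECT  # mass assigned to the whole frame models ignorance

m_rebate = {ADOPT: 0.6, EITHER: 0.4}                 # evidence from rebate level
m_savings = {ADOPT: 0.5, REJECT: 0.2, EITHER: 0.3}   # evidence from expected savings

m = combine(m_rebate, m_savings)  # combined belief, normalised to sum to 1
```

    Note how mass left on the whole frame (EITHER) expresses ignorance rather than being forced onto a specific hypothesis, which is the feature that distinguishes this mode from the plain-probability and fuzzy-logic modes.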

    THE DEVELOPMENT OF A HOLISTIC EXPERT SYSTEM FOR INTEGRATED COASTAL ZONE MANAGEMENT

    Coastal data and information comprise a massive and complex resource, which is vital to the practice of Integrated Coastal Zone Management (ICZM), an increasingly important application. ICZM is just as complex, but uses the holistic paradigm to deal with this complexity. The application domain and its resource require a tool of matching characteristics, which is facilitated by the current wide availability of high-performance computing. An object-oriented expert system, COAMES, has been constructed to prove this concept. The application of expert systems to ICZM in particular has been flagged as a viable challenge, and yet very few have taken it up. COAMES uses the Dempster-Shafer theory of evidence to reason with uncertainty and, importantly, introduces the power of ignorance and integration to model the holistic approach. In addition, object orientation enables a modular approach, embodied in the separation of inference engine and knowledge base. Two case studies have been developed to test COAMES. In both case studies, knowledge has been successfully used to drive data and actions using metadata. Thus a holism of data, information and knowledge has been achieved. A technological holism has also been demonstrated through the effective classification of landforms on the rapidly eroding Holderness coast. A holism across disciplines and CZM institutions has been effected by intelligent metadata management of a Fal Estuary dataset. Finally, the differing spatial and temporal scales at which the two case studies operate implicitly demonstrate a holism of scale, though explicit means of managing scale were suggested. In all cases the same knowledge structure was used to effectively manage and disseminate coastal data, information and knowledge

    Improving the Evaluation of Network Anomaly Detection Using a Data Fusion Approach

    Any future extensions or updates will be published as part of WAND's ongoing research projects: http://research.wand.net.nz. Currently, the evaluation of network anomaly detection methods is often not repeatable. It is difficult to ascertain whether different implementations of the same method perform identically, or how different methods perform relative to one another. This is in part due to a lack of open implementations, an absence of recent datasets, and the lack of a common format for expressing results. A common approach to evaluating a method is to use the Defense Advanced Research Projects Agency (DARPA) 1999 datasets, or a derivative of them, in combination with a different dataset or network capture. The DARPA datasets are relatively old and bear little resemblance to modern-day traffic, and the other datasets are unlabelled and typically publicly unavailable, making it difficult to ascertain the validity of research evaluated in such a way. This thesis primarily contributes a new evaluation methodology that uses a data fusion based approach allowing reproducible evaluations with modern datasets. The new methodology incorporates three other contributions: a new way to capture network traces that are fully anonymised yet retain more information than current network traces; a new trace annotation format; and a method for verifying the correctness of the annotations. The DARPA 1999 dataset was used to demonstrate the validity of the approach, and an evaluation was performed on a new dataset captured using the methods introduced. In the evaluation we find that the methodology is a viable way forward, but that it comes with a different set of drawbacks than the current state of the art

    Sensor data validation and reconstruction. Phase 1: System architecture study

    The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer-aided engineering (CAE) package; and conceptually designed an expert-system-based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell and the C programming language
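    The core idea of correlation-based validation and reconstruction can be sketched as follows: fit a statistical relation between two redundant parameters, flag readings whose residual exceeds a tolerance, and replace them with the value predicted from the correlated channel. The data, parameter names, and tolerance below are illustrative, not actual SSME hot fire values.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit of y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def validate_and_reconstruct(a, b, x, y, tol):
    """Replace y-readings inconsistent with the fitted relation
    by the value reconstructed from the correlated x channel."""
    out = []
    for xi, yi in zip(x, y):
        pred = a * xi + b
        out.append(pred if abs(yi - pred) > tol else yi)
    return out

# Hypothetical correlated channels; the fourth temperature reading is a spike.
pressure = [10.0, 12.0, 14.0, 16.0, 18.0]
temp = [20.1, 24.0, 27.9, 95.0, 36.1]

# Fit on the readings believed good, then validate the full series.
a, b = fit_linear(pressure[:3] + pressure[4:], temp[:3] + temp[4:])
clean = validate_and_reconstruct(a, b, pressure, temp, tol=3.0)
```

    In practice the correlations would be derived from hot fire test archives and embedded in the knowledge fusion tool, but the validate-then-reconstruct flow is the same.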

    Second generation knowledge based systems in habitat evaluation.

    Many expert, or knowledge-based, systems have been constructed in the domain of ecology, several of which are concerned with habitat evaluation. However, these systems have been geared to solving particular problems, with little regard paid to the underlying relationships that exist within a biological system. The implementation of problem-solving methods with little regard to understanding the more primary knowledge of a problem area is referred to in the literature as 'shallow', whilst the representation and utilisation of knowledge of a more fundamental kind is termed 'deep'. This thesis contains the details of a body of research exploring issues that arise from the refinement of traditional expert systems methodologies and theory via the incorporation of depth, along with enhancements in the sophistication of the methods of reasoning (and subsequent effects on the mechanisms of communication between human and computer), and the handling of uncertainty. The approach used to address this research incorporates two distinct aspects. Firstly, the literature of 'depth', expert systems in ecology, uncertainty, and control of reasoning and related user interface issues are critically reviewed, and where inadequacies exist, proposals for improvements are made. Secondly, practical work has taken place involving the construction of two knowledge-based systems, one 'traditional', and the other a second generation system. Both systems are primarily geared to the problem of evaluating a pond site with respect to its suitability for the great crested newt (Triturus cristatus). This research indicates that it is possible to build a second-generation knowledge-based system in the domain of ecology, and that construction of the second generation system required a magnitude of effort similar to the first-generation system. 
In addition, it shows that, despite using different architectures and reasoning strategies, such systems may be judged equally acceptable by end-users, and of similar accuracy in their conclusions. The research also offers guidance concerning the organisation and utilisation of deep knowledge within an expert systems framework, both in ecology and in other domains that have a similar concept-rich nature

    Biomedical applications of belief networks

    Biomedicine is an area in which computers have long been expected to play a significant role. Although many of the early claims have proved unrealistic, computers are gradually becoming accepted in the biomedical, clinical and research environment. Within these application areas, expert systems appear to have met with the most resistance, especially when applied to image interpretation. In order to improve the acceptance of computerised decision support systems it is necessary to provide the information needed to make rational judgements concerning the inferences the system has made. This entails an explanation of what inferences were made, how the inferences were made and how the results of the inference are to be interpreted. Furthermore there must be a consistent approach to the combining of information from low level computational processes through to high level expert analyses. Until recently ad hoc formalisms were seen as the only tractable approach to reasoning under uncertainty. A review of some of these formalisms suggests that they are less than ideal for the purposes of decision making. Belief networks provide a tractable way of utilising probability theory as an inference formalism, combining the theoretical consistency of probability for inference and decision making with the ability to use the knowledge of domain experts. The potential of belief networks in biomedical applications has already been recognised and there has been substantial research into the use of belief networks for medical diagnosis and methods for handling large, interconnected networks. In this thesis the use of belief networks is extended to include detailed image model matching to show how, in principle, feature measurement can be undertaken in a fully probabilistic way. 
The belief networks employed are usually cyclic and have strong influences between adjacent nodes, so new techniques for probabilistic updating based on a model of the matching process have been developed. An object-orientated inference shell called FLAPNet has been implemented and used to apply the belief network formalism to two application domains. The first application is model-based matching in fetal ultrasound images. The imaging modality and biological variation in the subject make model matching a highly uncertain process. A dynamic, deformable model, similar to active contour models, is used. A belief network combines constraints derived from local evidence in the image with global constraints derived from trained models to control the iterative refinement of an initial model cue. In the second application a belief network is used for the incremental aggregation of evidence occurring during the classification of objects on a cervical smear slide as part of an automated pre-screening system. A belief network provides both an explicit domain model and a mechanism for the incremental aggregation of evidence, two attributes important in pre-screening systems. Overall it is argued that belief networks combine the necessary quantitative features required of a decision support system with desirable qualitative features that will lead to improved acceptability of expert systems in the biomedical domain
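    The incremental aggregation of evidence described above reduces, in the simplest case, to a Bayesian update of a class node from conditionally independent evidence nodes. The sketch below illustrates that update; the feature names, class labels, and probabilities are invented for illustration and are not the thesis's actual cervical-smear model.

```python
def posterior(prior, likelihoods, evidence):
    """Update P(class) given a set of observed, conditionally
    independent evidence nodes, via Bayes' rule."""
    unnorm = {}
    for c, p in prior.items():
        for e in evidence:          # multiply in each observed feature's likelihood
            p *= likelihoods[e][c]
        unnorm[c] = p
    z = sum(unnorm.values())        # normalise so the posterior sums to 1
    return {c: p / z for c, p in unnorm.items()}

# Hypothetical two-class model for an object on a slide.
prior = {"abnormal": 0.05, "normal": 0.95}
likelihoods = {
    "dark_nucleus":    {"abnormal": 0.8, "normal": 0.1},
    "irregular_shape": {"abnormal": 0.7, "normal": 0.2},
}

# Aggregating both pieces of evidence raises belief in "abnormal".
p = posterior(prior, likelihoods, ["dark_nucleus", "irregular_shape"])
```

    A real belief network relaxes the independence assumption by encoding dependencies in the graph structure, and the cyclic networks in the thesis require the specialised updating techniques it develops, but the evidence-in, revised-belief-out flow is the same.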

    An intelligent classification system for land use and land cover mapping using spaceborne remote sensing and GIS

    The objectives of this study were to experiment with and extend current methods of Synthetic Aperture Radar (SAR) image classification, and to design and implement a prototype intelligent remote sensing image processing and classification system for land use and land cover mapping in wet season conditions in Bangladesh, which incorporates SAR images and other geodata. To meet these objectives, the problem of classifying the spaceborne SAR images, and integrating Geographic Information System (GIS) data and ground truth data, was studied first. In this phase of the study, an extension to traditional techniques was made by applying a Self-Organizing feature Map (SOM) to include GIS data with the remote sensing data during image segmentation. The experimental results were compared with those of traditional statistical classifiers, such as Maximum Likelihood, Mahalanobis Distance, and Minimum Distance classifiers. The performances of the classifiers were evaluated in terms of classification accuracy with respect to the collected real-time ground truth data. The SOM neural network provided the highest overall accuracy when a GIS layer of land type classification (with respect to the period of inundation by regular flooding) was used in the network. Using this method, the overall accuracy was around 15% higher than the previously mentioned traditional classifiers. It also achieved higher accuracies for more classes in comparison to the other classifiers. However, it was also observed that different classifiers produced better accuracy for different classes. Therefore, the investigation was extended to consider Multiple Classifier Combination (MCC) techniques, a recently emerging research area in pattern recognition. The study tested some of these techniques to improve the classification accuracy by harnessing the strengths of the constituent classifiers. 
A Rule-based Contention Resolution method of combination was developed, which exhibited an improvement in overall accuracy of about 2% in comparison to its best constituent (SOM) classifier. The next phase of the study involved the design of an architecture for an intelligent image processing and classification system (named ISRIPaC) that could integrate the extended methodologies mentioned above. Finally, the architecture was implemented in a prototype and its viability was evaluated using a set of real data. The originality of the ISRIPaC architecture lies in the realisation of the concept of a complete system that can intelligently cover all the steps of image processing and classification, and utilise standardised metadata in addition to a knowledge base in determining the appropriate methods and course of action for the given task. The implemented prototype of the ISRIPaC architecture is a federated system that integrates the CLIPS expert system shell, the IDRISI Kilimanjaro image processing and GIS software, and the domain experts' knowledge via a control agent written in Visual C++. It starts with data assessment and pre-processing and ends with image classification and accuracy assessment. The system is designed to run automatically, where the user merely provides the initial information regarding the intended task and the source of available data. The system itself acquires the necessary information about the data from metadata files in order to make decisions and perform tasks. The test and evaluation of the prototype demonstrate the viability of the proposed architecture and the possibility of extending the system to perform other image processing tasks and to use different sources of data. The system design presented in this study thus suggests some directions for the development of the next generation of remote sensing image processing and classification systems
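    The general shape of a multiple-classifier-combination scheme can be sketched as a per-pixel vote in which contention (a tie) is resolved by a rule, here deferral to the strongest constituent classifier. This loosely echoes the contention-resolution idea but is not the thesis's actual rule set; the class labels and predictions are illustrative.

```python
from collections import Counter

def combine_labels(predictions, best_index):
    """Fuse per-pixel labels from several classifiers by majority vote;
    ties are resolved by trusting the classifier at best_index."""
    fused = []
    for labels in zip(*predictions):
        top = Counter(labels).most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            fused.append(labels[best_index])  # contention: defer to best classifier
        else:
            fused.append(top[0][0])           # clear majority wins
    return fused

# Hypothetical per-pixel outputs from three constituent classifiers.
som =   ["water", "crop",  "urban", "crop"]   # designated best classifier
maxl =  ["water", "crop",  "crop",  "urban"]  # Maximum Likelihood
mahal = ["water", "urban", "water", "crop"]   # Mahalanobis Distance

# Pixel 2 is a three-way tie, so the SOM's label is kept.
fused = combine_labels([som, maxl, mahal], best_index=0)
```

    The design point is that combination only helps when the constituent classifiers err on different classes, which is exactly the observation that motivated the MCC phase of the study.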

    Vehicle Integrated Prognostic Reasoner (VIPR) 2010 Annual Final Report

    Honeywell's Central Maintenance Computer Function (CMCF) and Aircraft Condition Monitoring Function (ACMF) represent the state of the art in integrated vehicle health management (IVHM). Underlying these technologies is a fault propagation modeling system that provides nose-to-tail coverage and root cause diagnostics. The Vehicle Integrated Prognostic Reasoner (VIPR) extends this technology to interpret evidence generated by advanced diagnostic and prognostic monitors provided by component suppliers to detect, isolate, and predict adverse events that affect flight safety. This report describes year one work, which included defining the architecture and communication protocols and establishing the user requirements for such a system. Based on these and a set of ConOps scenarios, we designed and implemented a demonstration of the communication pathways and the associated three-tiered health management architecture. A series of scripted scenarios showed how VIPR would detect adverse events before they escalate into safety incidents, through a combination of advanced reasoning and additional aircraft data collected from an aircraft condition monitoring system. Demonstrating VIPR capability for cases recorded in the ASIAS database and cross-linking them with historical aircraft data is planned for year two