5,558 research outputs found

    Improving performance through concept formation and conceptual clustering

    Get PDF
    Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning, with the aim of improving the efficiency of problem-solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, as well as publications in conference and workshop proceedings, several book chapters, and journals; a complete bibliography of NASA Ames-supported publications is included. The following topics are studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes
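
    One of the listed topics, clustering of problem-solving experiences, can be illustrated with a minimal Python sketch. This is an assumed, simplified scheme, not the system described above: past cases are grouped incrementally by attribute similarity so that similar experiences can be retrieved together. All attribute names and the threshold are illustrative.

        def similarity(case, centroid):
            """Fraction of attribute values the case shares with the cluster centroid."""
            shared = sum(1 for k, v in case.items() if centroid.get(k) == v)
            return shared / max(len(case), 1)

        def cluster_cases(cases, threshold=0.5):
            clusters = []  # each cluster: {"centroid": dict, "members": list}
            for case in cases:
                best, best_sim = None, 0.0
                for cluster in clusters:
                    s = similarity(case, cluster["centroid"])
                    if s > best_sim:
                        best, best_sim = cluster, s
                if best is not None and best_sim >= threshold:
                    best["members"].append(case)
                    # keep only the attribute values all members have agreed on so far
                    best["centroid"] = {k: v for k, v in best["centroid"].items()
                                        if case.get(k) == v}
                else:
                    clusters.append({"centroid": dict(case), "members": [case]})
            return clusters

        experiences = [
            {"goal": "diagnose", "domain": "power", "outcome": "success"},
            {"goal": "diagnose", "domain": "power", "outcome": "failure"},
            {"goal": "plan", "domain": "thermal", "outcome": "success"},
        ]
        for i, c in enumerate(cluster_cases(experiences)):
            print(i, c["centroid"], len(c["members"]))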

    Automatic event log abstraction to support forensic investigation

    Get PDF
    Abstraction of event logs is the creation of a template that contains the most common words representing all members of a group of event log entries. Abstraction helps forensic investigators obtain an overall view of the main events in a log file. Existing log abstraction methods require user input parameters, and this manual input is time-consuming because the best parameters have to be identified, especially when a log file is large. We propose an automatic method that facilitates event log abstraction and avoids the need for the user to manually identify suitable parameters. We model event logs as a graph and propose a new graph clustering approach to group log entries. The abstraction is then extracted from each cluster. Experimental results show that the proposed method achieves superior performance compared to existing approaches, with an F-measure of 95.35%
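
    The general idea of graph-based log abstraction can be sketched as follows. This Python example is a hedged illustration, not the paper's algorithm: log entries become nodes of a similarity graph, connected components serve as clusters, and a template per cluster keeps only tokens shared by all members ("*" marks variable parts). The similarity threshold is an illustrative assumption.

        from itertools import combinations

        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def abstract_logs(entries, threshold=0.6):
            tokens = [e.split() for e in entries]
            # similarity graph: edge between entries whose token overlap is high enough
            adj = {i: set() for i in range(len(entries))}
            for i, j in combinations(range(len(entries)), 2):
                if jaccard(tokens[i], tokens[j]) >= threshold:
                    adj[i].add(j)
                    adj[j].add(i)
            # connected components = clusters
            seen, clusters = set(), []
            for start in adj:
                if start in seen:
                    continue
                stack, comp = [start], []
                while stack:
                    n = stack.pop()
                    if n in seen:
                        continue
                    seen.add(n)
                    comp.append(n)
                    stack.extend(adj[n] - seen)
                clusters.append(comp)
            # template: keep positions where every member agrees, else a wildcard
            templates = []
            for comp in clusters:
                rows = [tokens[i] for i in comp]
                width = min(len(r) for r in rows)
                tmpl = [rows[0][k] if all(r[k] == rows[0][k] for r in rows) else "*"
                        for k in range(width)]
                templates.append(" ".join(tmpl))
            return templates

        print(abstract_logs([
            "sshd failed login for user alice from 10.0.0.5",
            "sshd failed login for user bob from 10.0.0.9",
            "kernel usb device 3 connected",
        ]))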

    A comprehensive theory of induction and abstraction, part I

    Get PDF
    I present a solution to the epistemological or characterisation problem of induction. In part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution, but one with a fundamental explanatory gap (along with other well-discussed problems): useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in its specifics from arguments of, for example, William James); the conceptions we use to guide our actions are, and should be, at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions--which can be non-dogmatic--and effective assumptions that can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT. In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined to not have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities--Goodman's `grue' problem and the uniqueness-of-priors problem--are dissolved due to the particular formulation of logic used. Other problems, such as the zero-prior problem, are also solved. A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses. Spaces of hypotheses, and the version of Bayes' theorem that utilises them, emerge from first principles. Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity
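
    For reference, the standard Bayesian updating rule on which BCT relies, and in which the priors that the abstract worries about appear explicitly, is the textbook form (not a formula taken from the thesis):

        \[
          P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{\sum_i P(E \mid H_i)\,P(H_i)}
        \]

    Without a substantive prior degree of belief P(H), the posterior P(H | E) is undetermined; this is where the explanatory gap mentioned above is located.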

    Sequential metastatic breast cancer chemotherapy: Should the median be the message?

    Get PDF
    Background: Counseling and anticipatory guidance about the expected course of treatment for women newly diagnosed with metastatic breast cancer (MBC) are difficult due to multiple factors influencing survival following MBC therapy. In order to better tailor counseling at the onset and through the duration of MBC, we used non-clinical-trial data to better characterize the real-life experience of sequential MBC treatment. We examined the following aims: (1) What demographic and tumor characteristics are predictive of survival in MBC? (2) What is the median duration of each sequential chemotherapy regimen, and what is the subsequent survival of women following each sequential chemotherapy regimen in MBC? Methods: This retrospective study included 792 women diagnosed from January 1999 through December 2009 at the University of Pittsburgh Cancer Institute Breast Cancer Program. Results: The median duration of each sequential chemotherapy regimen and the median survival from completion of each regimen were relatively short, with a wide range of treatment durations and survival. Characteristics associated with poor survival included hormone status, human epidermal growth factor receptor-2 (HER2/neu) status, and an increased number and type of metastatic sites. Women who went beyond the second sequential chemotherapy regimen had no more than a median of 3 months of treatment duration and 6 months of survival from treatment termination. Discussion: Median clinical response and survival shorten with each sequential chemotherapy regimen, but with wide ranges. The rare clinical response of the minority should not set the standard for treatment expectations. All cancer clinicians, including oncology nurses, must ensure that patients receive tailored counseling regarding their specific risks and benefits for sequential MBC chemotherapy

    Space station automation of common module power management and distribution, volume 2

    Get PDF
    The new Space Station Module Power Management and Distribution System (SSM/PMAD) testbed automation system is described. The subjects discussed include the testbed's 120-volt dc star bus configuration and operation, the SSM/PMAD automation system architecture, the English representation of the fault recovery and management expert system (FRAMES) rules, the SSM/PMAD user interface, and future directions for SSM/PMAD. Several appendices are presented, including the following: the SSM/PMAD interface user manual version 1.0, the SSM/PMAD lowest level processor (LLP) reference, the SSM/PMAD technical reference version 1.0, the SSM/PMAD LLP visual control logic representations (VCLRs), the SSM/PMAD LLP/FRAMES interface control document (ICD), and the SSM/PMAD LLP switchgear interface controller (SIC) ICD

    Automated IT Service Fault Diagnosis Based on Event Correlation Techniques

    Get PDF
    In recent years, a paradigm shift in the area of IT service management has been witnessed. IT management no longer deals only with networks, end systems, or applications, but is increasingly concerned with IT services. This shift is driven by the need of organizations to monitor the efficiency of internal IT departments and by the possibility of subscribing to IT services from external providers. This trend has raised new challenges in the area of IT service management, especially with respect to service level agreements laying down the quality of service to be guaranteed by a service provider. Fault management is also facing new challenges related to ensuring compliance with these service level agreements. For example, a high utilization of network links in the infrastructure can imply increased delays in the delivery of services with respect to agreed time constraints. Such relationships have to be detected and treated in a service-oriented fault diagnosis, which therefore does not deal with faults in a narrow sense, but with service quality degradations. This thesis aims at providing a concept for service fault diagnosis, which is an important part of IT service fault management. First, the need for further examination of this issue is motivated, based on an analysis of services offered by a large IT service provider. A generalization of the scenario forms the basis for the specification of requirements, which are used for a review of related research work and commercial products. Even though some solutions for particular challenges have already been provided, a general approach for service fault diagnosis is still missing. To address this issue, a framework is presented in the main part of this thesis, with an event correlation component as its central part. Event correlation techniques that have been successfully applied to fault management in the area of network and systems management are adapted and extended accordingly. Guidelines for the application of the framework to a given scenario are provided afterwards. To show their feasibility in a real-world scenario, they are applied to both example services referenced earlier
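
    The core idea of event correlation can be sketched briefly. The following Python example is a minimal, hypothetical illustration, not the thesis framework: events observed within a time window are grouped, and simple rules map co-occurring resource-level symptoms to a probable service-level root cause. Event fields, rules, and the window length are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Event:
            time: float          # seconds since start of observation
            source: str          # e.g. "link-7", "web-service"
            symptom: str         # e.g. "high_utilization", "slow_response"

        # illustrative correlation rules: required co-occurring symptoms -> diagnosis
        RULES = [
            ({"high_utilization", "slow_response"}, "service degradation caused by congested link"),
            ({"host_down", "slow_response"}, "service degradation caused by failed host"),
        ]

        def correlate(events, window=60.0):
            """Slide a time window over the event stream and fire the first matching rule."""
            events = sorted(events, key=lambda e: e.time)
            findings = []
            for i, anchor in enumerate(events):
                in_window = {e.symptom for e in events[i:] if e.time - anchor.time <= window}
                for required, diagnosis in RULES:
                    if required <= in_window:
                        findings.append((anchor.time, diagnosis))
                        break
            return findings

        print(correlate([
            Event(0.0, "link-7", "high_utilization"),
            Event(12.5, "web-service", "slow_response"),
        ]))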

    Validating Predictions of Unobserved Quantities

    Full text link
    The ultimate purpose of most computational models is to make predictions, commonly in support of some decision-making process (e.g., for design or operation of some system). The quantities that need to be predicted (the quantities of interest or QoIs) are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the validity of such extrapolative predictions, which is critical to informed decision-making, is challenging. In classical approaches to validation, model outputs for observed quantities are compared to observations to determine if they are consistent. By itself, this consistency only ensures that the model can predict the observed quantities under the conditions of the observations. This limitation dramatically reduces the utility of the validation effort for decision making because it implies nothing about predictions of unobserved QoIs or for scenarios outside of the range of observations. However, there is no agreement in the scientific community today regarding best practices for validation of extrapolative predictions made using computational models. The purpose of this paper is to propose and explore a validation and predictive assessment process that supports extrapolative predictions for models with known sources of error. The process includes stochastic modeling, calibration, validation, and predictive assessment phases where representations of known sources of uncertainty and error are built, informed, and tested. The proposed methodology is applied to an illustrative extrapolation problem involving a misspecified nonlinear oscillator
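
    The calibration, validation, and predictive assessment phases can be illustrated with a deliberately simple Python example. This is a hedged sketch, not the paper's methodology: a misspecified linear model is calibrated on part of the observations, checked for consistency on the held-back observations nearer the prediction regime, and only then used to extrapolate the quantity of interest (QoI) with a crude error band. All data, tolerances, and the error band are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "observations": the true response is mildly nonlinear, while the
        # model we calibrate is deliberately misspecified as linear (a known model error).
        x_obs = np.linspace(0.0, 1.0, 20)
        y_obs = 1.0 + 2.0 * x_obs + 0.3 * x_obs**2 + rng.normal(0.0, 0.05, x_obs.size)

        # Calibration phase: fit the linear model on the lower part of the observed range.
        calib = x_obs <= 0.7
        coef = np.polyfit(x_obs[calib], y_obs[calib], deg=1)
        calib_resid = y_obs[calib] - np.polyval(coef, x_obs[calib])

        # Validation phase: are predictions consistent with the held-back observations
        # that lie closer to the regime we want to extrapolate into?
        valid_resid = y_obs[~calib] - np.polyval(coef, x_obs[~calib])
        consistent = np.all(np.abs(valid_resid) < 3.0 * calib_resid.std())  # crude criterion

        # Predictive assessment phase: extrapolate the QoI outside the observed range,
        # with a naive error band taken from the largest validation residual.
        x_qoi = 1.5
        qoi = np.polyval(coef, x_qoi)
        band = 3.0 * np.abs(valid_resid).max()
        print(f"consistent on validation data: {consistent}")
        print(f"predicted QoI at x={x_qoi}: {qoi:.2f} +/- {band:.2f}")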

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    Get PDF
    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated
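
    The hybrid idea discussed above can be illustrated with a small Python sketch, which is a hedged assumption rather than any system from the report: explicit rules catch known misuses (risking false negatives), while a simple statistical profile of normal per-subscriber call volume flags unseen behaviour (risking false positives). Field names, thresholds, and data are illustrative.

        import statistics

        # rule component: encode known misuses directly
        def rule_based_misuse(calls_today):
            return any(c["destination"] == "premium_rate" and c["duration"] > 30
                       for c in calls_today)

        # anomaly component: learn normal daily call volume, flag large deviations
        def anomaly_misuse(history_daily_counts, todays_count, k=3.0):
            mean = statistics.mean(history_daily_counts)
            std = statistics.pstdev(history_daily_counts) or 1.0
            return abs(todays_count - mean) > k * std

        def hybrid_detect(calls_today, history_daily_counts):
            return rule_based_misuse(calls_today) or \
                   anomaly_misuse(history_daily_counts, len(calls_today))

        history = [12, 9, 11, 10, 13, 12, 11]
        today = [{"destination": "local", "duration": 3}] * 55   # sudden volume spike
        print(hybrid_detect(today, history))   # True: the anomaly component fires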

    A generic architecture style for self-adaptive cyber-physical systems

    Get PDF
    Current concepts for designing automatic control systems rely on dynamic behavioral modeling, using mathematical approaches such as differential equations to derive the corresponding functions, and are gradually reaching their limits due to increasing system complexity. Along with the development of these concepts, an architectural evolution of automatic control systems has emerged. This dissertation defines a taxonomy to illustrate this architectural evolution using a typical control application as an example: adaptive cruise control (ACC). Current control-theory-based ACC variants are analyzed with respect to their architectures. The analysis results indicate that the future automatic control system in ACC requires more substantial self-adaptation capability and scalability. For this purpose, more complicated algorithms requiring different computation mechanisms must be integrated into the system, further increasing system complexity. This makes the future automatic control system evolve into a self-adaptive cyber-physical system and constitutes significant challenges for the system's architecture design. Inspired by software engineering approaches for designing architectures of software-intensive systems, a generic architecture style is proposed. The proposed architecture style serves as a template for constructing networked architectures, following the developed design principles, not only for current automatic control systems but also for future self-adaptive cyber-physical systems. Different triggering mechanisms and communication paradigms for designing the dynamic behavior of components are applicable in the networked architecture. To evaluate the feasibility of the architecture style, current ACCs are revisited to derive corresponding logical architectures and to examine architectural consistency with the original control-theory-based architectures (e.g., in the form of block diagrams). By applying the proposed generic architecture style, an artificial cognitive cruise control (ACCC) is designed, implemented, and evaluated as a future ACC in this dissertation. The evaluation results show significant performance improvements of the ACCC compared to human drivers and current ACC variants
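
    The notion of components with different triggering mechanisms communicating in a networked architecture can be sketched as follows. This Python example is a speculative illustration, not the dissertation's architecture style: one component is event-triggered (reacting to incoming messages), another is time-triggered (sampled by a scheduler each cycle), and both communicate over a minimal publish/subscribe bus. Class names, topics, and parameters are illustrative assumptions.

        class Bus:
            """Minimal publish/subscribe bus."""
            def __init__(self):
                self.subscribers = {}              # topic -> list of callbacks

            def subscribe(self, topic, callback):
                self.subscribers.setdefault(topic, []).append(callback)

            def publish(self, topic, value):
                for cb in self.subscribers.get(topic, []):
                    cb(value)

        class DistanceController:
            """Event-triggered: recomputes acceleration whenever a new gap reading arrives."""
            def __init__(self, bus, desired_gap=30.0, gain=0.1):
                self.desired_gap, self.gain, self.bus = desired_gap, gain, bus
                bus.subscribe("gap", self.on_gap)

            def on_gap(self, gap):
                accel = self.gain * (gap - self.desired_gap)
                self.bus.publish("accel_cmd", accel)

        class Monitor:
            """Time-triggered: sampled once per control cycle by the scheduler."""
            def __init__(self, bus):
                self.last_cmd = 0.0
                bus.subscribe("accel_cmd", lambda a: setattr(self, "last_cmd", a))

            def step(self, t):
                print(f"t={t}s accel_cmd={self.last_cmd:+.2f} m/s^2")

        bus = Bus()
        controller = DistanceController(bus)
        monitor = Monitor(bus)
        for t, gap in enumerate([35.0, 28.0, 30.0]):   # simulated sensor readings
            bus.publish("gap", gap)                    # event-triggered path
            monitor.step(t)                            # time-triggered path (1 Hz here)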