
    Second CLIPS Conference Proceedings, volume 1

    Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are presented. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems.

    Modeling and Analysis of Software Product Line Variability in Clafer

    Both feature and class modeling are used in Software Product Line (SPL) engineering to model variability. Feature models are used primarily to represent user-visible characteristics (i.e., features) of products, whereas class models are often used to model types of components and connectors in a product-line architecture. Previous works have explored the approach of using a single language to express both configurations of features and components. Their goal was to simplify the definition and analysis of feature-to-component mappings and to allow modeling component options as features. A prominent example of this approach is cardinality-based feature modeling, which extends feature models with multiple instantiation and references to express component-like, replicated features. Another example is to support feature modeling in a class modeling language, such as UML or MOF, using their profiling mechanisms and a stylized use of composition. Both examples have notable drawbacks: cardinality-based feature modeling lacks a constraint language and a well-defined semantics, while encoding feature models as class models, and evolving those encodings, brings extra complexity. This dissertation presents Clafer (class, feature, reference), a class modeling language with first-class support for feature modeling. Clafer can express rich structural models augmented with complex constraints, i.e., domain, variability, and component models, and meta-models. Clafer supports: (i) class-based meta-models, (ii) object models (with uncertainty, if needed), (iii) feature models with attributes and multiple instantiation, (iv) configurations of feature models, (v) mixtures of meta- and feature models and model templates, and (vi) first-order logic constraints. Clafer also makes it possible to arrange models into multiple specialization and extension layers via constraints and inheritance. At the same time, in designing Clafer we wanted to create a language that builds upon as few concepts as possible and is easy to learn. The language is supported by tools for SPL verification and optimization. We propose to unify basic modeling constructs into a single concept, called a clafer; in other words, Clafer is not a hybrid language. We identify several key mechanisms that allow a class modeling language to express feature models concisely. We provide Clafer with a formal semantics built in a novel, structurally explicit way. As Clafer subsumes cardinality-based feature modeling with attributes, references, and constraints, we are the first to precisely define the semantics of such models. We also explore the notion of partial instantiation, which allows for modeling with uncertainty and variability. We show that Object-Oriented Modeling (OOM) languages with no direct support for partial instances can support them via class modeling, using subclassing and strengthened multiplicity constraints. We make the encoding of partial instances via subclassing precise and general. Clafer uses this encoding and pushes the idea even further: it provides a syntactic unification of types and (partial) instances via subclassing and redefinition. We evaluate Clafer analytically and experimentally. The analytical evaluation shows that Clafer can concisely express feature and meta-models via a uniform syntax and unified semantics. The experimental evaluation shows that 1) Clafer can express a variety of realistic rich structural models with complex constraints, such as variability models, meta-models, model templates, and domain models; and 2) useful analyses can be performed within seconds.
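
    The partial-instance encoding mentioned above can be pictured outside Clafer itself. Below is a minimal Python sketch, not taken from the dissertation, of the idea that an optional feature is a multiplicity interval and that configuration proceeds by refinements that may only strengthen (narrow) that interval; the names Feature, strengthen, and is_full are hypothetical.

```python
# Sketch (hypothetical names): optional features as multiplicity
# intervals, configured by strengthening those intervals, in the
# spirit of the partial-instance encoding described in the abstract.

class Feature:
    def __init__(self, name, lo=0, hi=1, children=()):
        self.name, self.lo, self.hi = name, lo, hi  # multiplicity interval
        self.children = list(children)

    def strengthen(self, lo, hi):
        # A legal refinement may only narrow the interval: subclassing
        # strengthens multiplicity constraints, never relaxes them.
        if lo < self.lo or hi > self.hi:
            raise ValueError(f"{self.name}: [{lo},{hi}] widens [{self.lo},{self.hi}]")
        self.lo, self.hi = lo, hi

    def is_full(self):
        # A full (non-partial) instance has no remaining variability.
        return self.lo == self.hi and all(c.is_full() for c in self.children)

# A partial model: 'car' is mandatory, 'sunroof' is optional (0..1).
sunroof = Feature("sunroof", 0, 1)
car = Feature("car", 1, 1, [sunroof])
print(car.is_full())      # False: 'sunroof' is still variable
sunroof.strengthen(1, 1)  # configuration step: select the sunroof
print(car.is_full())      # True: a full configuration
```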

    Empirical models, rules, and optimization

    This paper considers supply decisions by firms in a dynamic setting with adjustment costs and compares the behavior of an optimal control model to that of a rule-based system that relaxes the assumption that agents are explicit optimizers. In our approach, the economic agent uses believably simple rules in coping with complex situations. We estimate rules using an artificially generated sample obtained by running repeated simulations of a dynamic optimal control model of a firm's hiring/firing decisions. We show that (i) agents using heuristics can behave as if they were seeking rationally to maximize their dynamic returns; (ii) the approach requires fewer behavioral assumptions relative to dynamic optimization, and the assumptions made are based on economically intuitive theoretical results linking rule adoption to uncertainty; and (iii) the approach delineates the domain of applicability of maximization hypotheses and describes the behavior of agents in situations of economic disequilibrium. The approach adopted uses concepts from fuzzy control theory. An agent, instead of optimizing, follows Fuzzy Associative Memory (FAM) rules which, given input and output data, can be estimated and used to approximate any non-linear dynamic process. Empirical results indicate that the fuzzy rule-based system performs extremely well in approximating optimal dynamic behavior in situations with limited noise.
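
    To make the FAM mechanism concrete, here is a minimal Python sketch under assumptions of our own: the membership functions and rule consequents below are invented for illustration, not the paper's estimated rules. Each rule maps a fuzzy region of the input to an output value, and the defuzzified output is the firing-strength-weighted average.

```python
# Sketch with invented membership functions and consequents; not the
# paper's estimated FAM rules.

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Input: excess labour demand (arbitrary units); output: hiring rate.
rules = [
    (lambda x: tri(x, -2, -1, 0), -0.5),  # demand LOW  -> fire slowly
    (lambda x: tri(x, -1,  0, 1),  0.0),  # demand ZERO -> hold
    (lambda x: tri(x,  0,  1, 2),  0.5),  # demand HIGH -> hire slowly
]

def fam_output(x):
    # Defuzzification: firing-strength-weighted average of consequents.
    weights = [mu(x) for mu, _ in rules]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no rule fires
    return sum(w * y for w, (_, y) in zip(weights, rules)) / total

print(fam_output(0.5))  # 0.25: halfway between "hold" and "hire slowly"
```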

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches have been discussed in creating the RAs. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still some improvements which can be made through the application of Machine Learning. By implanting a 'learning element', the MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude our thesis, we propose an architecture for La MORA.
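
    As a rough illustration of the 'learning element' idea, the following Python sketch pairs a toy model-based diagnosis function with a knowledge-acquisition step that rejects inconsistent evidence. All class names, failure modes, and symptoms are hypothetical; the thesis's actual MORA design is not reproduced here.

```python
# Sketch (hypothetical names and toy knowledge): a model-based
# reliability analyser plus a learning element that acquires failure
# knowledge and checks it for consistency before acceptance.

class LearningRA:
    def __init__(self):
        # Knowledge base: failure mode -> set of observable symptoms.
        self.kb = {"pump_failure": {"low_flow", "high_vibration"}}

    def diagnose(self, symptoms):
        """Fault location: the failure mode overlapping the most symptoms."""
        return max(self.kb, key=lambda m: len(self.kb[m] & symptoms), default=None)

    def learn(self, mode, symptoms):
        """Learning element: acquire knowledge, rejecting inconsistent evidence."""
        if mode in self.kb and self.kb[mode].isdisjoint(symptoms):
            raise ValueError(f"evidence for {mode} shares no symptom with prior knowledge")
        self.kb.setdefault(mode, set()).update(symptoms)

ra = LearningRA()
ra.learn("valve_stuck", {"low_flow", "pressure_spike"})
print(ra.diagnose({"low_flow", "pressure_spike"}))  # valve_stuck
```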

    Semantic Similarity of Spatial Scenes

    The formalization of similarity in spatial information systems can unleash their functionality and contribute technology that is not only useful but also desirable to broad groups of users. As a paradigm for information retrieval, similarity supersedes tedious querying techniques and unveils novel ways for user-system interaction by naturally supporting modalities such as speech and sketching. As a tool within the scope of a broader objective, it can facilitate such diverse tasks as data integration, landmark determination, and prediction making. This potential motivated the development of several similarity models within the geospatial and computer science communities. Despite the merit of these studies, their cognitive plausibility can be limited due to neglect of well-established psychological principles about the properties and behaviors of similarity. Moreover, such approaches are typically guided by experience, intuition, and observation, thereby often relying on narrower perspectives or restrictive assumptions that produce inflexible and incompatible measures. This thesis consolidates such fragmentary efforts and integrates them, along with novel formalisms, into a scalable, comprehensive, and cognitively sensitive framework for similarity queries in spatial information systems. Three conceptually different similarity queries at the levels of attributes, objects, and scenes are distinguished. An analysis of the relationship between similarity and change provides a unifying basis for the approach and a theoretical foundation for measures satisfying important similarity properties such as asymmetry and context dependence. The classification of attributes into categories with common structural and cognitive characteristics drives the implementation of a small core of generic functions able to perform any type of attribute value assessment. Appropriate techniques combine such atomic assessments to compute similarities at the object level and to handle more complex inquiries with multiple constraints. These techniques, along with a solid graph-theoretical methodology adapted to the particularities of the geospatial domain, provide the foundation for reasoning about scene similarity queries. Provisions are made so that all methods comply with major psychological findings about people’s perceptions of similarity. An experimental evaluation supplies the main result of this thesis, which separates psychological findings with a major impact on the results from those that can be safely incorporated into the framework through computationally simpler alternatives.
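
    The asymmetry property invoked above has a classical feature-based formalization in the psychological literature, Tversky's ratio model. The Python sketch below illustrates that property only; it is an assumption for illustration, not the thesis's actual attribute or scene measures, and the weights alpha and beta are arbitrary.

```python
# Sketch: Tversky's ratio model over feature sets, shown only to
# illustrate asymmetry; alpha and beta are arbitrary here.

def tversky(a, b, alpha=0.8, beta=0.2):
    """Similarity of description a TO referent b (deliberately asymmetric)."""
    common = len(a & b)
    only_a, only_b = len(a - b), len(b - a)
    return common / (common + alpha * only_a + beta * only_b)

query = {"lake", "forest"}
scene = {"lake", "forest", "road", "building"}
# Comparing a sparse query to a rich scene differs from the reverse
# comparison, in line with psychological findings on asymmetry.
print(tversky(query, scene))  # 2 / (2 + 0.0 + 0.4) ~= 0.83
print(tversky(scene, query))  # 2 / (2 + 1.6 + 0.0) ~= 0.56
```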

    Semi-Structured Decision Processes: A Conceptual Framework for Understanding Human-Automation Decision Systems

    The purpose of this work is to improve understanding of existing and proposed decision systems, ideally to improve the design of future systems. A "decision system" is defined as a collection of information-processing components -- often involving humans and automation (e.g., computers) -- that interact towards a common set of objectives. Since a key issue in the design of decision systems is the division of work between humans and machines (a task known as "function allocation"), this report is primarily intended to help designers incorporate automation more appropriately within these systems. This report does not provide a design methodology, but introduces a way to qualitatively analyze potential designs early in the system design process. A novel analytical framework is presented, based on the concept of "semi-Structured" decision processes. It is believed that many decisions involve both well-defined "Structured" parts (e.g., formal procedures, traditional algorithms) and ill-defined "Unstructured" parts (e.g., intuition, judgement, neural networks) that interact in a known manner. While Structured processes are often desired because they fully prescribe how a future decision (during "operation") will be made, they are limited by what is explicitly understood prior to operation. A system designer who incorporates Unstructured processes into a decision system understands which parts are not understood sufficiently, and relinquishes control by deferring decision-making from design to operation. Among other things, this design choice tends to add flexibility and robustness. The value of the semi-Structured framework is that it forces people to consider system design concepts as operational decision processes in which both well-defined and ill-defined components are made explicit. This may provide more insight into decision systems and improve understanding of the implications of design choices. The first part of this report defines the semi-Structured process and introduces a diagrammatic notation for decision process models. In the second part, the semi-Structured framework is used to understand and explain highly evolved decision system designs (these are assumed to be representative of "good" designs) whose components include feedback controllers, alerts, decision aids, and displays. Lastly, the semi-Structured framework is applied to a decision system design for a mobile robot. (Charles Stark Draper Laboratory, Inc., under IR&D effort 101.)
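
    One way to picture a semi-Structured decision process in code: the Structured part is fixed at design time, while the Unstructured part is a slot whose behavior is deferred to operation. The Python sketch below is our own illustration, not the report's notation; all names and thresholds are hypothetical.

```python
# Sketch (hypothetical names): a Structured skeleton fixed at design
# time with one Unstructured slot deferred to operation time.

from typing import Callable

def make_decision_system(unstructured: Callable[[dict], float]):
    """Design time: prescribe the Structured procedure, defer one judgement."""
    def decide(state: dict) -> str:
        # Structured part: explicit, auditable rule with no discretion.
        if state["fuel"] < 0.1:
            return "abort"
        # Unstructured part: a human judgement, a trained network, etc.
        risk = unstructured(state)
        return "proceed" if risk < 0.5 else "hold"
    return decide

# Operation time: plug in the ill-defined component (a stub here).
decide = make_decision_system(lambda s: 0.3 if s["visibility"] > 0.7 else 0.8)
print(decide({"fuel": 0.9, "visibility": 0.9}))   # proceed
print(decide({"fuel": 0.05, "visibility": 0.9}))  # abort (Structured rule wins)
```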

    Functional object-types as a foundation of complex knowledge-based systems

    Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs

    We consider the problem of computing numerical invariants of programs, for instance bounds on the values of numerical program variables. More specifically, we study the problem of performing static analysis by abstract interpretation using template linear constraint domains. Such invariants can be obtained by Kleene iterations that are, in order to guarantee termination, accelerated by widening operators. In many cases, however, applying this form of extrapolation leads to invariants that are weaker than the strongest inductive invariant that can be expressed within the abstract domain in use. Another well-known source of imprecision in traditional abstract interpretation techniques stems from their use of join operators at merge nodes in the control flow graph. The mentioned weaknesses may prevent these methods from proving safety properties. The technique we develop in this article addresses both of these issues: contrary to Kleene iterations accelerated by widening operators, it is guaranteed to yield the strongest inductive invariant that can be expressed within the template linear constraint domain in use. It also eschews join operators by distinguishing all paths of loop-free code segments. Formally speaking, our technique computes the least fixpoint within a given template linear constraint domain of a transition relation that is succinctly expressed as an existentially quantified linear real arithmetic formula. In contrast to previously published techniques that rely on quantifier elimination, our algorithm is proved to have optimal complexity: we prove that the decision problem associated with our fixpoint problem is in the second level of the polynomial-time hierarchy.
    Comment: 35 pages; conference version published at ESOP 2011; this version is a CoRR version of our submission to Logical Methods in Computer Science.
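
    The imprecision caused by widening can be seen on a toy loop. The Python sketch below is our own illustration rather than the paper's algorithm: it runs a Kleene iteration with a standard interval widening on x = 0; while x < 100: x = x + 1 and overshoots to an infinite upper bound, whereas the strongest inductive interval invariant, which the paper's strategy-iteration technique is guaranteed to find in template domains, is 0 <= x <= 100.

```python
# Sketch: Kleene iteration with a standard interval widening on
#   x = 0; while x < 100: x = x + 1
# The widened result is (0, inf), weaker than the strongest inductive
# interval invariant (0, 100). This illustrates the imprecision the
# paper eliminates; it does not implement the paper's algorithm.

INF = float("inf")

def widen(old, new):
    """Interval widening: any unstable bound jumps to infinity."""
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def post(iv):
    """One step of the loop at its head: entry state joined with the body."""
    lo, hi = iv
    if lo >= 100:  # guard x < 100 is unsatisfiable
        return (0, 0)
    body = (lo + 1, min(hi, 99) + 1)  # intersect with x < 100, then x += 1
    return (min(0, body[0]), max(0, body[1]))  # join with entry state x = 0

iv = (0, 0)
while True:
    new = widen(iv, post(iv))
    if new == iv:
        break
    iv = new
print(iv)  # (0, inf): widening overshoots the best invariant (0, 100)
```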