2,760 research outputs found

    COLAB: a hybrid knowledge representation and compilation laboratory

    Knowledge bases for real-world domains such as mechanical engineering require expressive and efficient representation and processing tools. We pursue a declarative-compilative approach to knowledge engineering. While Horn logic (as implemented in PROLOG) is well-suited for representing relational clauses, other kinds of declarative knowledge call for hybrid extensions: functional dependencies and higher-order knowledge should be modeled directly. Forward (bottom-up) reasoning should be integrated with backward (top-down) reasoning. Constraint propagation should be used wherever possible instead of search-intensive resolution. Taxonomic knowledge should be classified into an intuitive subsumption hierarchy. Our LISP-based tools provide direct translators of these declarative representations into abstract machines such as an extended Warren Abstract Machine (WAM) and specialized inference engines that are interfaced to each other. More importantly, we provide source-to-source transformers between various knowledge types, both for user convenience and machine efficiency. These formalisms with their translators and transformers have been developed as part of COLAB, a compilation laboratory for studying what we call, respectively, "vertical" and "horizontal" compilation of knowledge, as well as for exploring the synergetic collaboration of the knowledge representation formalisms. A case study in the realm of mechanical engineering has been an important driving force behind the development of COLAB. It will be used as the source of examples throughout the paper when discussing the enhanced formalisms, the hybrid representation architecture, and the compilers.
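
    To make the integration of forward (bottom-up) and backward (top-down) reasoning concrete, here is a minimal Python sketch of both directions operating over a handful of Horn clauses. The facts, the single rule, and the helper names are invented for illustration; this is not COLAB's LISP/WAM machinery.

```python
# Toy Horn-clause knowledge base: facts plus one rule
# (invented for illustration; not COLAB code).
FACTS = {("part", "gear"), ("part", "shaft"), ("mates", "gear", "shaft")}
RULES = [  # (body, head): if every body atom holds, conclude the head
    ([("part", "?x"), ("part", "?y"), ("mates", "?x", "?y")],
     ("assembly", "?x", "?y")),
]

def substitute(atom, bindings):
    """Replace bound variables (terms starting with '?') in an atom."""
    return tuple(bindings.get(t, t) for t in atom)

def match(atom, fact, bindings):
    """Unify an atom with a ground fact, extending the bindings (or return None)."""
    if len(atom) != len(fact):
        return None
    b = dict(bindings)
    for a, f in zip(atom, fact):
        if a.startswith("?"):
            if b.get(a, f) != f:
                return None
            b[a] = f
        elif a != f:
            return None
    return b

def prove_all(goals, facts, bindings):
    """Backward (top-down): enumerate bindings that satisfy all goals."""
    if not goals:
        yield bindings
        return
    first, rest = goals[0], goals[1:]
    for fact in facts:
        b = match(substitute(first, bindings), fact, bindings)
        if b is not None:
            yield from prove_all(rest, facts, b)

def forward_chain(facts, rules):
    """Forward (bottom-up): derive new facts until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            new = [substitute(head, b) for b in prove_all(body, derived, {})]
            for h in new:
                if h not in derived:
                    derived.add(h)
                    changed = True
    return derived

if __name__ == "__main__":
    saturated = forward_chain(FACTS, RULES)        # adds ("assembly", "gear", "shaft")
    query = [("assembly", "?x", "?y")]
    print(list(prove_all(query, saturated, {})))   # backward query over the saturated base
```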

    An object query language for multimedia federations

    The Fischlar system provides a large centralised repository of multimedia files. As expansion is difficult in centralised systems and as different user groups have a requirement to define their own schemas, the EGTV (Efficient Global Transactions for Video) project was established to examine how the distribution of this database could be managed. The federated database approach is advocated, where the global schema is designed in a top-down approach, while all multimedia and textual data is stored in object-oriented (O-O) and object-relational (O-R) compliant databases. This thesis investigates queries and updates on large multimedia collections organised in the database federation. The goal of this research is to provide a generic query language capable of interrogating global and local multimedia database schemas. Therefore, a new query language, EQL, is defined to facilitate the querying of object-oriented and object-relational database schemas in a database- and platform-independent manner, and acts as a canonical language for database federations. A new canonical language was required as the existing query language standards (SQL:1999 and OQL) are generally incompatible and translation between them is not trivial. EQL is supported with a formally defined object algebra and specified semantics for query evaluation. The ability to capture and store metadata of multiple database schemas is essential when constructing and querying a federated schema. Therefore we also present a new platform-independent metamodel for specifying multimedia schemas stored in both object-oriented and object-relational databases. This metadata information is later used for the construction of global schemas, and during the evaluation of local and global queries. Another important feature of any federated system is the ability to unambiguously define database schemas. The schema definition language for an EGTV database federation must be capable of specifying both object-oriented and object-relational schemas in a database-independent format. As XML represents a standard for encoding and distributing data across various platforms, a language based upon XML has been developed as a part of our research. The ODLx (Object Definition Language XML) language specifies a set of XML-based structures for defining complex database schemas capable of representing different multimedia types. The language is fully integrated with the EGTV metamodel, through which ODLx schemas can be mapped to O-O and O-R databases.
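
    As a rough illustration of the canonical-language idea, the sketch below renders one abstract query both for an object-oriented (OQL-style) member database and for an object-relational (SQL:1999-style) one. The tiny query structure and the two emitters are invented for this example; they are not the EQL grammar, object algebra, or translators from the thesis.

```python
# Minimal sketch of a "canonical query" rendered for two federation members.
from dataclasses import dataclass

@dataclass
class Select:
    attrs: list        # projected attributes
    source: str        # class / table name
    where: str         # predicate in a neutral infix form, e.g. "duration > 60"

def to_oql(q: Select) -> str:
    """Render for an object-oriented (OQL-style) member database."""
    return f"select x.{', x.'.join(q.attrs)} from {q.source} x where x.{q.where}"

def to_sql99(q: Select) -> str:
    """Render for an object-relational (SQL:1999-style) member database."""
    return f"SELECT {', '.join(q.attrs)} FROM {q.source} WHERE {q.where}"

if __name__ == "__main__":
    q = Select(attrs=["title", "duration"], source="VideoClip",
               where="duration > 60")
    print(to_oql(q))    # select x.title, x.duration from VideoClip x where x.duration > 60
    print(to_sql99(q))  # SELECT title, duration FROM VideoClip WHERE duration > 60
```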

    Active Learning for Decision Making

    This paper addresses focused information acquisition for predictive data mining. As businesses strive to cater to the preferences of individual consumers, they often employ predictive models to customize marketing efforts. Building accurate models requires information about consumer preferences that often is costly to acquire. Prior research has introduced many "active learning" policies for identifying information that is particularly useful for model induction, the goal being to reduce the acquisition cost necessary to induce a model with a given accuracy. However, predictive models often are used as part of a decision-making process, and costly improvements in model accuracy do not always result in better decisions. This paper develops a new approach for active information acquisition that targets decision-making specifically. The method we introduce departs from the traditional error-reducing paradigm and places emphasis on acquisitions that are more likely to affect decision-making. Empirical evaluations with direct marketing data demonstrate that, for a fixed information acquisition cost, the method significantly improves the targeting decisions. The method is designed to be generic (not based on a single model or induction algorithm), and we show that it can be applied effectively to various predictive modeling techniques. Information Systems Working Papers Series.
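
    A hedged sketch of the underlying idea (not the paper's actual policy): rank candidate label acquisitions by how likely they are to change the targeting decision, approximated here by how close a prospect's predicted response probability sits to the decision threshold. The synthetic data, the logistic-regression model, and the threshold are assumptions made for illustration only.

```python
# Decision-centric acquisition scoring: prefer prospects near the targeting
# threshold, since new information is most likely to flip their decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

def decision_centric_scores(model, X_pool, threshold=0.5):
    """Higher score = predicted probability closer to the decision threshold."""
    proba = model.predict_proba(X_pool)[:, 1]
    return -np.abs(proba - threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(40, 3))                 # labeled consumers (synthetic)
    y_lab = (X_lab[:, 0] > 0).astype(int)            # synthetic response labels
    X_pool = rng.normal(size=(200, 3))               # prospects with unknown responses

    model = LogisticRegression().fit(X_lab, y_lab)
    scores = decision_centric_scores(model, X_pool)
    next_to_acquire = np.argsort(scores)[-5:]        # top-5 candidates to buy information for
    print(next_to_acquire)
```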

    Transforming OCL to PVS: Using Theorem Proving Support for Analysing Model Constraints

    The Unified Modelling Language (UML) is a de facto standard language for describing software systems. UML models are often supplemented with Object Constraint Language (OCL) constraints, to capture detailed properties of components and systems. Sophisticated tools exist for analysing UML models, e.g., to check that well-formedness rules have been satisfied. In addition, tools are becoming available to analyse and reason about OCL constraints. Previous work has been done on analysing OCL constraints by translating them to formal languages and then analysing the translated constraints with tools such as theorem provers. This project contributes a transformation from OCL to the specification language of the Prototype Verification System (PVS). PVS can be used to analyse and reason about translated OCL constraints. A particular novelty of this project is that it carries out the transformation of OCL to PVS by using model transformation, as exemplified by the OMG's Model-Driven Architecture. The project implements and automates model transformations from OCL to PVS using the Epsilon Transformation Language (ETL) and tests the results using the Epsilon Comparison Language (ECL).
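
    The flavour of such an OCL-to-PVS mapping can be sketched as follows. The OCL pattern handled, the generated record type, and the shape of the emitted theory are simplifications invented for illustration, and the actual project performs the transformation as a model transformation written in ETL, not as Python string processing.

```python
# Schematic translation of a simple OCL invariant into PVS-style text.
import re

def ocl_invariant_to_pvs(ocl: str) -> str:
    """Translate 'context C inv: self.attr >= N' into a small PVS-like theory."""
    m = re.match(r"context\s+(\w+)\s+inv:\s+self\.(\w+)\s*(>=|<=|>|<|=)\s*(\S+)", ocl)
    if not m:
        raise ValueError("unsupported OCL pattern")
    cls, attr, op, value = m.groups()
    return "\n".join([
        f"{cls}_inv: THEORY",
        "BEGIN",
        f"  {cls}: TYPE = [# {attr}: int #]",                  # record type standing in for the class
        f"  inv(self: {cls}): bool = {attr}(self) {op} {value}",
        f"END {cls}_inv",
    ])

if __name__ == "__main__":
    print(ocl_invariant_to_pvs("context Account inv: self.balance >= 0"))
```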

    A method and application of machine learning in design

    This thesis addresses the issue of developing machine learning techniques for the acquisition and organization of design knowledge to be used in knowledge-based design systems. It presents a general method of developing machine learning tools in the design domain. An identification tree is introduced to distinguish different approaches and strategies of machine learning in design. Three existing approaches are identified: the knowledge-oriented, the learner-oriented, and the design-oriented approach. The learner-oriented approach, which focuses on the development of new machine learning tools for design knowledge acquisition, is the critical one. Four strategies that are suitable for this approach are: specialization, generalization, integration and exploration. A general method, called MLDS (Machine Learning in Design with 5 steps), of developing machine learning techniques in the design domain is presented. It consists of the following steps: 1) identify source data and target knowledge; 2) determine source representation and target representation; 3) identify the background knowledge available; 4) identify the features of data, knowledge and domain; and 5) develop (specialize, generalize, integrate or explore) a machine learning tool. The method is elaborated step by step and the dependencies between the components are illustrated with a corresponding framework. To assist in characterising the data, knowledge and domain, a set of formal measures is introduced. They include density of dataset, size of description space, homogeneity of dataset, complexity of domain, difficulty of domain, stability of domain, and usage of knowledge. Design knowledge is partitioned into two main types: empirical and causal. Empirical knowledge is modelled as empirical associations in categories of design attributes or empirical mappings between these meaningful categories. Eight types of empirical mappings are distinguished. Among them, the mappings from one multi-dimensional space to another are recognized as the most important for both knowledge-based design systems and machine learning in design. The MLDS method is applied to the preliminary design of a learning model for the integration of design cases and design prototypes. Both source and target representations use the framework of design prototypes. The function-behaviour-structure categorization of design prototypes is used as background knowledge to improve both supervised and unsupervised learning in this task. Many-to-many mappings and time- or order-dependent data are discovered as the most important characteristics of the design domain for machine learning. Multiple attribute prediction and the capture of design concept ‘drift’ are identified as challenging tasks for machine learning in design. After the possibilities and limitations of solving the problem by modifying existing learning methods (both supervised and unsupervised) are considered, a learning model is created by integrating several learning techniques. The basic scheme of this model is that of goal-driven concept formation, which consists of flexible categorization, extensive generalization, temporary suspension, and cognitively-based sequence prediction in design.
The learning process is described as follows: each time, one category of attributes is treated as the predictive feature set and the remaining as the predicted feature set; a conceptual hierarchy or decision tree is constructed incrementally according to the predictive features of design cases (but statistical information is generalized with both feature sets); whenever the predictive or the predicted feature set of a node becomes homogeneous, the construction process at that branch will temporarily suspend until a new case arrives and breaks this homogeneity; frequency-based prediction at indeterminate nodes is replaced with a cognitively-based sequence prediction, which allows the more recent cases to have stronger influence on the determination of the default or predicted values. An advantage of this scheme is that, with the single learning algorithm, all the types of empirical mappings between function, behaviour and structure or between design problem specification and design solution description can be generalized from design cases. To enrich the indexing facilities in a conceptual hierarchy and improve its case retrieval ability, extensive-generalization-based memory organizations are investigated as alternatives for concept formation. An integration of the above learning techniques reduces the memory requirement of some existing extensive generalization models to a level applicable to practical problems in the design domain. The MLDS method is particularly useful in the preliminary design of a learning system for the identification of a learning problem and of suitable strategies for solving the problem in the domain. Although the MLDS method is developed and demonstrated in the context of design, it is independent of any particular design problems and is applicable to some other domains as well. The cognitive model of sequence-based prediction developed with this method can be integrated with general concept formation methods to improve their performance in those domains where concepts drift or knowledge changes quickly, and where the degree of indeterminacy is high.
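
    The contrast between frequency-based defaults and the recency-weighted ("cognitively-based") sequence prediction described above can be sketched in a few lines. The decay constant and the attribute values below are invented for illustration and are not the thesis's actual learning model.

```python
# Prediction at an "indeterminate" node: most-frequent value vs. recency weighting.
from collections import defaultdict

def frequency_prediction(values):
    """Classic default: the most frequent value seen so far wins."""
    counts = defaultdict(int)
    for v in values:
        counts[v] += 1
    return max(counts, key=counts.get)

def recency_weighted_prediction(values, decay=0.7):
    """Recent cases dominate: the value seen 'age' cases ago gets weight decay**age."""
    weights = defaultdict(float)
    for age, v in enumerate(reversed(values)):   # age 0 = most recent case
        weights[v] += decay ** age
    return max(weights, key=weights.get)

if __name__ == "__main__":
    # Values of a predicted attribute observed at one node, oldest first.
    history = ["steel", "steel", "steel", "alloy", "alloy"]
    print(frequency_prediction(history))         # 'steel' (3 vs 2 occurrences)
    print(recency_weighted_prediction(history))  # 'alloy' (recent drift wins)
```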

    ROS based inventory methodology for Nordic skiing opportunities


    Mathematical models of games of chance: Epistemological taxonomy and potential in problem-gambling research

    Games of chance are developed in their physical, consumer-ready form on the basis of mathematical models, which stand as the premises of their existence and represent their physical processes. Statistical and probabilistic models predominate and are of interest to all parties involved in the study of gambling (researchers, game producers and operators, and players), while functional models are of more interest to math-inclined players than to problem-gambling researchers. In this paper I present a structural analysis of the knowledge attached to mathematical models of games of chance and the act of modeling, arguing that such knowledge holds potential in the prevention and cognitive treatment of excessive gambling, and I propose further research in this direction.
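
    As a generic illustration of the kind of probabilistic model the paper has in mind (not an example taken from the paper), the sketch below computes the win probability and expected value of a straight-up bet in European roulette, a standard textbook case.

```python
# Probability and expected value of a straight-up bet in European roulette:
# one winning pocket out of 37, paying 35:1 on a hit.
from fractions import Fraction

p_win = Fraction(1, 37)                               # single winning pocket out of 37
payout = 35                                           # net units won on a hit
expected_value = p_win * payout + (1 - p_win) * (-1)  # lose the unit stake otherwise

print(f"P(win) = {p_win} ≈ {float(p_win):.4f}")
print(f"EV per unit staked = {expected_value} ≈ {float(expected_value):.4f}")  # -1/37 ≈ -0.027
```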

    Adaptable software reuse: binding time aware modelling language to support variations of feature binding time in software product line engineering

    Software product line engineering (SPLE) is a paradigm for developing a family of software products from the same reusable assets rather than developing individual products from scratch. In many SPLE approaches, a feature is often used as the key abstraction to distinguish between the members of the product family. Thus, the sets of products in the product line are said to have 'common' features and differ in 'variable' features. Consequently, reusable assets are developed with variation points where variant features may be bound for each of the diverse products. Emerging deployment environments and market segments have been fuelling demands for adaptable reusable assets to support additional variations that may be required to increase the usage-context of the products of a product line. Similarly, feature binding time (when a feature is included in a product and made available for use) may vary between the products because of uncertain market conditions or diverse deployment environments. Hence, variations of feature binding time should also be supported to cover the wide range of usage-contexts. Through the execution of action research, this thesis has established the following: language-based implementation techniques that are specifically proposed to implement variations in the form of features have better modularity, but are not better than the existing classical technique in terms of modifiability, and do not support variations in feature binding time. Similarly, through a systematic literature review, this thesis has established the following: the different engineering approaches that are proposed to support variations of feature binding time are limited in one of the following ways: a feature may have to be represented/implemented multiple times, each for a specific binding time; the support applies only to the execution context and is therefore limited in scope; or the support focuses on model elements that are too fine-grained, or on too low a level of abstraction in source code. Given the limitations of the existing approaches, this thesis presents a binding-time-aware modelling language that supports variations of feature binding time by design and improves the modifiability of the reusable assets of a product line.
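
    A minimal sketch of what varying feature binding time means in code, assuming a hypothetical optional feature and two binding times (configuration time vs. runtime). This is an illustration only, not the binding-time-aware modelling language proposed in the thesis.

```python
# Same optional feature, two binding times: frozen at configuration vs. decided at runtime.
from enum import Enum
from typing import Callable, Optional

class BindingTime(Enum):
    CONFIGURATION = "configuration"   # decided when the product variant is built
    RUNTIME = "runtime"               # decided while the product is executing

def watermark(frame: str) -> str:
    return f"{frame}+watermark"       # hypothetical optional feature

class VideoProduct:
    def __init__(self, binding: BindingTime,
                 feature: Optional[Callable[[str], str]] = None):
        self.binding = binding
        # Configuration-time binding: the feature decision is frozen here.
        self.feature = feature if binding is BindingTime.CONFIGURATION else None

    def render(self, frame: str,
               runtime_feature: Optional[Callable[[str], str]] = None) -> str:
        # Runtime binding: the decision may still change on every call.
        active = self.feature if self.binding is BindingTime.CONFIGURATION else runtime_feature
        return active(frame) if active else frame

if __name__ == "__main__":
    early = VideoProduct(BindingTime.CONFIGURATION, feature=watermark)
    late = VideoProduct(BindingTime.RUNTIME)
    print(early.render("frame1"))                             # frame1+watermark (bound at configuration)
    print(late.render("frame1"))                              # frame1 (feature not bound yet)
    print(late.render("frame1", runtime_feature=watermark))   # frame1+watermark (bound at runtime)
```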