    Intersection schemas as a dataspace integration technique

    This paper introduces the concept of Intersection Schemas in the field of heterogeneous data integration and dataspaces. We introduce a technique for incrementally integrating heterogeneous data sources by specifying semantic overlaps between sets of extensional schemas using bidirectional schema transformations, and automatically combining them into a global schema at each iteration of the integration process. We propose an incremental data integration methodology that uses this technique and that aims to reduce the amount of up-front effort required; such approaches to data integration are often described as pay-as-you-go. A demonstrator of our technique is described, which utilizes a new graphical user tool implemented using the AutoMed heterogeneous data integration system. A case study is also described, and our technique and integration methodology are compared with a classical data integration strategy.
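
    As a rough illustration of the idea (a hypothetical sketch, not the AutoMed API or the paper's transformation language), the snippet below folds source schemas into a global schema one at a time, where the only up-front input is the declared semantic overlap between attributes:

```python
# Minimal sketch of intersection-schema-style merging (hypothetical,
# not the AutoMed API): each source schema is a set of attributes, an
# "intersection schema" declares which attributes overlap semantically,
# and the global schema is rebuilt incrementally as sources arrive.

def integrate(global_schema, new_schema, overlap):
    """Fold a new source schema into the global schema.

    overlap maps attribute names in new_schema to the global attribute
    they correspond to (the declared semantic overlap).
    """
    merged = set(global_schema)
    for attr in new_schema:
        # Rename overlapping attributes to their global counterpart;
        # everything else is added as-is (pay-as-you-go: only the
        # overlap has to be specified up front).
        merged.add(overlap.get(attr, attr))
    return merged

# Two heterogeneous sources with a declared overlap on customer ids.
s1 = {"cust_id", "name", "city"}
s2 = {"customer_no", "name", "balance"}
g = integrate(set(), s1, {})
g = integrate(g, s2, {"customer_no": "cust_id"})
print(sorted(g))  # ['balance', 'city', 'cust_id', 'name']
```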

    OWL-Miner: Concept Induction in OWL Knowledge Bases

    The Resource Description Framework (RDF) and Web Ontology Language (OWL) have been widely used in recent years, and automated methods for the analysis of data and knowledge directly within these formalisms are of current interest. Concept induction is a technique for discovering descriptions of data, such as inducing OWL class expressions to describe RDF data. These class expressions capture patterns in the data which can be used to characterise interesting clusters or to act as classification rules over unseen data. The semantics of OWL is underpinned by Description Logics (DLs), a family of expressive and decidable fragments of first-order logic. Recently, methods of concept induction which are well studied in the field of Inductive Logic Programming have been applied to the related formalism of DLs. These methods have been developed for a number of purposes including unsupervised clustering and supervised classification. Refinement-based search is a concept induction technique which structures the search space of DL concept/OWL class expressions and progressively generalises or specialises candidate concepts to cover example data as guided by quality criteria such as accuracy. However, the current state-of-the-art in this area is limited in that such methods: were not primarily designed to scale over large RDF/OWL knowledge bases; do not support class languages as expressive as OWL2-DL; or, are limited to one purpose, such as learning OWL classes for integration into ontologies. Our work addresses these limitations by increasing the efficiency of these learning methods whilst permitting a concept language up to the expressivity of OWL2-DL classes. We describe methods which support both classification (predictive induction) and subgroup discovery (descriptive induction), which, in this context, are fundamentally related. We have implemented our methods as the system called OWL-Miner and show by evaluation that our methods outperform state-of-the-art systems for DL learning in both the quality of solutions found and the speed in which they are computed. Furthermore, we achieve the best ever ten-fold cross validation accuracy results on the long-standing benchmark problem of carcinogenesis. Finally, we present a case study on ongoing work in the application of OWL-Miner to a real-world problem directed at improving the efficiency of biological macromolecular crystallisation.
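
    To make refinement-based search concrete, here is a toy analogue in Python (illustrative only; OWL-Miner itself works over DL class expressions up to OWL2-DL with far richer refinement operators, and the class names below are invented): candidate concepts are conjunctions of atomic classes, specialised step by step and ranked by accuracy over labelled examples:

```python
# Toy refinement-based concept induction: a downward refinement
# operator adds one conjunct at a time, and a greedy beam search keeps
# the most accurate candidates. Hypothetical data, not a real ontology.

ATOMS = ["Aromatic", "Mutagenic", "HasRing"]

# Each individual is the set of atomic classes it belongs to, with a
# positive/negative label.
EXAMPLES = [
    ({"Aromatic", "HasRing"}, True),
    ({"Aromatic", "Mutagenic", "HasRing"}, True),
    ({"Mutagenic"}, False),
    ({"HasRing"}, False),
]

def covers(concept, individual):
    # A conjunction covers an individual that has every conjunct.
    return concept <= individual

def accuracy(concept):
    correct = sum((covers(concept, ind) == label) for ind, label in EXAMPLES)
    return correct / len(EXAMPLES)

def refine(concept):
    # Downward refinement: specialise by adding one more conjunct.
    return [concept | {a} for a in ATOMS if a not in concept]

# Greedy beam search from the top concept (the empty conjunction).
best = frozenset()
frontier = [best]
while frontier:
    frontier = [r for c in frontier for r in refine(c)]
    frontier.sort(key=accuracy, reverse=True)
    frontier = frontier[:2]  # beam of width 2
    if frontier and accuracy(frontier[0]) > accuracy(best):
        best = frontier[0]

print(sorted(best), accuracy(best))  # ['Aromatic'] 1.0
```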

    Management control systems across different modes of innovation: Implications for firm performance

    This study examines the use of management control systems (MCS) across different modes of innovation and the effects on firm performance. Specifically, it draws on Simons' levers of control framework to investigate how top managers attempt to simultaneously balance exploration and exploitation, which place contradictory requirements on firms. Using data collected from a survey of top managers in 400 firms, this study demonstrates that the patterns of use and interdependencies among control levers associated with enhanced performance differ depending on the mode of innovation. The findings show that control levers are independently associated with enhanced performance in firms that specialize in either exploration or exploitation, suggesting that the levers operate as supplementary rather than complementary controls in these contexts. In ambidextrous firms, however, diagnostic and interactive levers are shown to have interdependent effects on performance. Furthermore, some evidence suggests that both the combined and balanced use of these levers contributes to generating the dynamic tension necessary for managing contradictory innovation modes.

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the strategies for integrating Case Based Reasoning (CBR) and Model Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions; and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case Based Reasoning and Model Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project and, although it was written in close contact with the requirements of the project, it provides an overview broad enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
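
    For flavour, a minimal sketch of the CBR retrieve step (the general technique only, not the RIMSAT system; the case representation, features, and similarity measure below are all hypothetical):

```python
# Minimal case-based reasoning "retrieve" step: past incidents are
# stored as feature vectors, and the most similar case's response plan
# is reused for a new emergency. Invented data for illustration.

import math

# Hypothetical case base: (features, response plan).
CASES = [
    ({"severity": 0.9, "chemical": 1.0, "urban": 1.0}, "evacuate + hazmat team"),
    ({"severity": 0.4, "chemical": 0.0, "urban": 1.0}, "local fire response"),
    ({"severity": 0.7, "chemical": 1.0, "urban": 0.0}, "containment perimeter"),
]

def similarity(a, b):
    # Inverse Euclidean distance over the shared features.
    d = math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + d)

def retrieve(query):
    # Return the stored case whose features best match the query.
    return max(CASES, key=lambda case: similarity(query, case[0]))

incident = {"severity": 0.8, "chemical": 1.0, "urban": 0.9}
features, plan = retrieve(incident)
print(plan)  # evacuate + hazmat team
```

    In a full CBR cycle this retrieve step would be followed by reuse, revision, and retention of the adapted case; an MBR component, by contrast, would reason from a model of the domain rather than from stored episodes, which is precisely the integration question the document surveys.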

    How Business Intelligence Creates Value

    Assessing IT business value has long been recognized as a major challenge, stemming largely from the considerable variability in the role and contribution of IT. This study examines the business value associated with business intelligence (BI) systems, suggesting that business value assessment is largely contingent on system type and should consider its unique contribution. The study adopts a process-oriented approach to evaluating the value contribution of BI, arguing that it stems from the improvement of business processes. The study develops and tests a research model that explains the unique mechanisms through which BI creates business value. The model draws on the resource-based view to identify key resources and capabilities that determine the impact of BI on business processes and, consequently, on organizational performance. Furthermore, the research model analyses the manner in which the organizational approach to innovation moderates the business value of BI. Analysis of data collected from 159 managers and IT/BI experts, using Structural Equation Modelling (SEM) techniques, shows that BI contributes to business value largely by improving both operational and strategic business processes. Further, it highlights the effect of the organizational approach toward exploration on transforming BI resources into capabilities, and further into business value.

    Efficient Learning and Evaluation of Complex Concepts in Inductive Logic Programming

    Inductive Logic Programming (ILP) is a subfield of Machine Learning with foundations in logic programming. In ILP, logic programming, a subset of first-order logic, is used as a uniform representation language for the problem specification and the induced theories. ILP has been successfully applied to many real-world problems, especially in the biological domain (e.g. drug design, protein structure prediction), where relational information is of particular importance. The expressiveness of logic programs grants flexibility in specifying the learning task and understandability to the induced theories. However, this flexibility comes at a high computational cost, constraining the applicability of ILP systems. Constructing and evaluating complex concepts remain two of the main issues that prevent ILP systems from tackling many learning problems. These learning problems are interesting both from a research perspective, as they raise the standards for ILP systems, and from an application perspective, where such target concepts naturally occur in many real-world applications. Such complex concepts cannot be constructed or evaluated by parallelizing existing top-down ILP systems or improving the underlying Prolog engine. Novel search strategies and cover algorithms are needed. The main focus of this thesis is on how to efficiently construct and evaluate complex hypotheses in an ILP setting. In order to construct such hypotheses we investigate two approaches. The first, the Top Directed Hypothesis Derivation framework, implemented in the ILP system TopLog, involves the use of a top theory to constrain the hypothesis space. In the second approach we revisit the bottom-up search strategy of Golem, lifting its restriction to determinate clauses, which had rendered Golem inapplicable to many key areas. These developments led to the bottom-up ILP system ProGolem. A challenge that arises with a bottom-up approach is the coverage computation of long, non-determinate clauses, for which Prolog's SLD-resolution is no longer adequate. We developed a new, Prolog-based, theta-subsumption engine which is significantly more efficient than SLD-resolution in computing the coverage of such complex clauses. We provide evidence that ProGolem achieves the goal of learning complex concepts by presenting a protein-hexose binding prediction application. The theory ProGolem induced has a statistically significantly better predictive accuracy than that of other learners. More importantly, the biological insights ProGolem's theory provided were judged by domain experts to be relevant and, in some cases, novel.
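
    The coverage test at issue is theta-subsumption: clause C covers clause D if some substitution of C's variables maps every literal of C into D. A naive, function-free illustration in Python follows (deliberately brute-force; nothing like the efficient engine described in the thesis, whose design is the point of that work):

```python
# Naive theta-subsumption for function-free clauses: clause C subsumes
# clause D if some substitution of C's variables makes every literal
# of C appear in D. Brute-force over all bindings, for illustration.

from itertools import product

def is_var(t):
    # Prolog convention: variables start with an uppercase letter.
    return isinstance(t, str) and t[0].isupper()

def subsumes(c, d):
    """c, d: lists of literals; each literal is a tuple (pred, term, ...)."""
    terms_d = {t for lit in d for t in lit[1:]}
    vars_c = list(dict.fromkeys(
        t for lit in c for t in lit[1:] if is_var(t)))  # dedupe, keep order
    for binding in product(terms_d, repeat=len(vars_c)):
        theta = dict(zip(vars_c, binding))
        applied = [(lit[0], *(theta.get(t, t) for t in lit[1:])) for lit in c]
        if all(lit in d for lit in applied):
            return True
    return False

# parent(X, Y) theta-subsumes a clause containing parent(ann, bob).
c = [("parent", "X", "Y")]
d = [("parent", "ann", "bob"), ("parent", "bob", "carl")]
print(subsumes(c, d))  # True
```

    The exponential blow-up in the number of candidate bindings is exactly why coverage testing of long, non-determinate clauses dominates the cost of bottom-up search, and why a smarter subsumption engine matters.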

    Strategic project management approach to the study of large-scale software development using system dynamics

    S.M. thesis by Firas Glaiel, Massachusetts Institute of Technology, Engineering Systems Division, 2012. Large-scale software engineering organizations have traditionally used plan-driven, heavyweight, waterfall-style approaches for the planning, execution, and monitoring of software development efforts. This approach often results in relatively long development schedules that are susceptible to failure, especially in a rapidly changing environment: schedule pressure, defects, and requirements changes can drive endless redesign, delay the project, and incur extra cost. Many in the commercial software world have dealt with these pressures by adopting Agile Software Development, an approach designed to be flexible and responsive to high-change environments. Software development teams that are said to employ "Agile development" in effect practice a variety of "agile methods". These practices are advertised to reduce coordination costs, to focus teams, and to produce stable product iterations that can be released incrementally. Agile software development has become a de-facto approach to the engineering of software systems in the commercial world, and is now entering the aerospace and defense sectors. The Agile Project Dynamics (APD) model developed in this research aids in understanding the impact that alternative combinations of Agile practices, combined with different management policies, have on project performance, compared to a waterfall approach. This research culminates in a formulation of insights and recommendations for how to integrate Agile practices into a large-scale software engineering organization.
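
    As a hint of what a system dynamics treatment looks like (a toy stock-and-flow sketch, not the APD model; all parameters are invented), the snippet below shows how an undiscovered-rework loop stretches a project's completion time as the error rate rises:

```python
# Toy system dynamics model of a software project with rework:
# flawed work accumulates as undiscovered errors that flow back into
# the backlog once found, so a higher error rate delays completion.

def simulate(total_tasks=100.0, productivity=5.0, error_rate=0.3,
             discovery_rate=0.5, dt=1.0, horizon=200):
    backlog, done, undiscovered = total_tasks, 0.0, 0.0
    for week in range(int(horizon / dt)):
        work = min(productivity, backlog / dt)  # tasks worked this step
        flawed = work * error_rate              # fraction done wrong
        rework = undiscovered * discovery_rate  # errors found, redone
        backlog += (rework - work) * dt
        done += (work - flawed) * dt
        undiscovered += (flawed - rework) * dt
        if done >= total_tasks - 0.5:
            return (week + 1) * dt
    return None  # did not finish within the horizon

# Doubling the error rate (e.g. under schedule pressure) delays completion.
print(simulate(error_rate=0.15), simulate(error_rate=0.3))
```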

    The CDIO Syllabus 3.0 - An Updated Statement of Goals

    The CDIO Initiative is going through a process of reconsidering and updating the CDIO approach to engineering education development. Previous work resulted in substantial updates to the twelve CDIO standards and the introduction of “optional” CDIO standards. This paper reports on a similar review and update of the CDIO Syllabus to version 3.0. The update has been developed by a working group consisting of four sub-groups, and iterated and refined guided by feedback from the whole CDIO community. Three main external drivers motivate the changes: sustainability, digitalization, and acceleration. There is also an internal driver in the form of lessons learned within the CDIO community from using the Syllabus in curriculum and course development. Approximately 70 updates are proposed, amongst them three additions at the X.X level, namely 1.4 Knowledge of Social Sciences and Humanities, 3.1 Teamwork and Collaboration, and 5.3 Research.
