
    Converting Instance Checking to Subsumption: A Rethink for Object Queries over Practical Ontologies

    Efficiently querying Description Logic (DL) ontologies is becoming a vital task in various data-intensive DL applications. Instance checking, a basic service for answering object queries over DL ontologies, can be realized with the most specific concept (MSC) method, which converts instance checking into subsumption problems. This method, however, loses its simplicity and efficiency when applied to large and complex ontologies, as it tends to generate very large MSCs that can lead to intractable reasoning. In this paper, we propose a revision of the MSC method for the DL SHI that generates much simpler and smaller concepts that are still specific enough to answer a given query. Because the computed MSCs are independent of one another, query answering can also be scaled by distributing and parallelizing the computations. An empirical evaluation shows the efficacy of our revised MSC method and the significant efficiency gains achieved when using it to answer object queries.
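    For context, the core reduction behind the MSC method can be stated as follows. This is a generic textbook-style sketch using assumed notation (a knowledge base K, an individual a, and a query concept C), not a formula taken from the paper; for expressive DLs such as SHI the exact MSC may exist only as an approximation.

    ```latex
    % Instance checking reduced to subsumption via the most specific concept (MSC).
    % Notation assumed for illustration: \mathcal{K} a knowledge base, a an individual, C a query concept.
    \mathcal{K} \models C(a)
      \quad\Longleftrightarrow\quad
      \mathcal{K} \models \mathrm{MSC}_{\mathcal{K}}(a) \sqsubseteq C
    ```

    The revision described in the abstract aims to keep the left-hand concept small enough for the subsumption test to remain tractable on large ontologies.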

    Users as inventors and developers of radical innovation: An explorative case study analysis in the field of medical technology

    Our study addresses the question of whether, from the manufacturer's perspective, users should be intensively involved in the innovation process for radical product innovations. Radical innovations incorporate new technologies, shift market structures, require intensive user learning, and induce significant behavior changes. Given these specifics, the question arises whether users play a productive role in the innovation process of radical innovations at all, or whether their contributions might even be counterproductive. To gain a better understanding of the users' role in radical innovation and to develop a differentiated view of their contributions, we studied three dimensions of user involvement: (1) Which characteristics enable users to contribute to the innovation process? (2) How do manufacturers need to interact with users to benefit from their contributions? (3) How does user involvement affect the manufacturer? We focused our study on the early phases of the innovation process and distinguished two phases for the analysis of these questions: idea generation and development. This distinction allows us to analyse the role of users within separate phases of the innovation process. Based on relevant theories and empirical work, a set of propositions was formulated for each dimension. To study these research questions, an explorative case study analysis was conducted in the field of medical technology. Five radical innovation projects were selected, including medical robots, navigation systems, and biocompatible implants. In-depth interviews were conducted with marketing staff, R&D staff, project leaders, CEOs, and users. A content analysis framework was applied to systematically analyse the collected data. The case studies reveal that users with a unique set of characteristics (motivation, competencies, contextual factors) were able to deliver major contributions in all three phases of the radical innovation projects. In four cases, users turned out to be the original inventors of the radical innovations. Users who work under extreme conditions (e.g. neurosurgeons) proved to be a particularly valuable source of radically new ideas. Furthermore, the cases show that the innovative users took over classical functions of manufacturers in the development process. For example, the innovative users identified the relevant experts and manufacturers required to transform their ideas into prototypes and products, thereby taking over the networking function. Some users were also able to actively contribute to the development of first prototypes; a unique set of characteristics enabled them to do so. With regard to appropriate patterns of interaction between users and manufacturers, the analysis reveals that face-to-face interactions are required: the information exchanged by users and manufacturers is highly complex, so explanations and visualisations are needed to build understanding on either side. In addition, the analysis suggests that it is appropriate to interact with a small, well-selected number of users in the early phases and to increase the number of involved users as the project gets closer to market introduction. In four cases, specific users contributed significantly to NPD success. Based on the results of the study, the recommendation for manufacturers is to leverage the knowledge of users with certain characteristics in radical innovation projects.
The results of our study form the basis of a market research concept for radical innovations.
Keywords: innovation process, product innovation

    The Formation of Novel Social Category Conjunctions in Working Memory: A Possible Role for the Episodic Buffer?

    Recent research (e.g., Hutter, Crisp, Humphreys, Waters, & Moffit; Siebler) has confirmed that combining novel social categories involves two stages (e.g., Hampton; Hastie, Schroeder, & Weber). Furthermore, it is also evident that following stage 1 (constituent additivity), the second stage in these models involves cognitively effortful complex reasoning. However, while current theory and research have addressed to some degree how category conjunctions are initially represented, it is not clear precisely where we first combine or bind existing social constituent categories. For example, how and where do we compose and temporarily store a coherent representation of an individual who shares membership of the "female" and "blacksmith" categories? In this article, we consider how the revised multi-component model of working memory (Baddeley) can assist in resolving the representational limitations in the extant two-stage theoretical models. This is a new approach to understanding how novel conjunctions form new bound "composite" representations.

    Generalizing GAMETH: Inference Rule Procedure

    In this paper, we present a generalisation of the GAMETH framework, which plays an important role in identifying crucial knowledge. To this end, we have developed a method based on three phases. In the first phase, we use GAMETH to identify the set of "reference knowledge". During the second phase, decision rules are inferred, through rough set theory, from decision assignments provided by the decision maker(s). In the third phase, a multicriteria classification of "potential crucial knowledge" is performed on the basis of the decision rules that have been collectively identified by the decision maker(s).
    Keywords: knowledge management; knowledge capitalizing; managing knowledge; crucial knowledge
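    To make the second phase more concrete, here is a minimal, self-contained sketch of how certain decision rules can be inferred from a decision table using rough-set lower approximations. The attribute names, values, and toy table are illustrative assumptions and do not reproduce the authors' data or the full GAMETH procedure.

    ```python
    from collections import defaultdict

    # Toy decision table for phase two: condition attributes -> a decision label.
    # Attribute names, values, and rows are illustrative assumptions, not the authors' data.
    objects = [
        {"usage": "high", "rarity": "high", "decision": "crucial"},
        {"usage": "high", "rarity": "low",  "decision": "crucial"},
        {"usage": "low",  "rarity": "high", "decision": "crucial"},
        {"usage": "low",  "rarity": "high", "decision": "not crucial"},  # conflicts with the row above
        {"usage": "low",  "rarity": "low",  "decision": "not crucial"},
    ]
    conditions = ["usage", "rarity"]

    # Indiscernibility classes: objects that agree on every condition attribute.
    classes = defaultdict(list)
    for obj in objects:
        classes[tuple(obj[a] for a in conditions)].append(obj)

    # Certain rules come from classes lying in the lower approximation of a single decision;
    # inconsistent classes only support "possible" rules and are flagged here.
    for values, members in classes.items():
        decisions = {m["decision"] for m in members}
        premise = " AND ".join(f"{a} = {v}" for a, v in zip(conditions, values))
        if len(decisions) == 1:
            print(f"IF {premise} THEN decision = {decisions.pop()}   (certain rule)")
        else:
            print(f"IF {premise} THEN decision in {sorted(decisions)}   (possible rule only)")
    ```

    In the full method, rules inferred in this way would then feed the multicriteria classification of "potential crucial knowledge" in phase three.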

    Towards a Multi-Subject Analysis of Neural Connectivity

    Directed acyclic graphs (DAGs) and associated probability models are widely used to model neural connectivity and communication channels. In many experiments, data are collected from multiple subjects whose connectivities may differ but are likely to share many features. In such circumstances, it is natural to leverage similarity between subjects to improve statistical efficiency. The first exact algorithm for estimation of multiple related DAGs was recently proposed by Oates et al. (2014); in this letter we present examples and discuss implications of the methodology as applied to the analysis of fMRI data from a multi-subject experiment. Elicitation of tuning parameters requires care, and we illustrate how this may proceed retrospectively based on technical replicate data. In addition to joint learning of subject-specific connectivity, we allow for heterogeneous collections of subjects and simultaneously estimate relationships between the subjects themselves. This letter aims to highlight the potential for exact estimation in the multi-subject setting.
    Comment: to appear in Neural Computation 27:1-2
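    As a rough illustration of the idea of borrowing strength across subjects, the sketch below exhaustively scores all DAGs on three nodes for two synthetic subjects and penalizes structural disagreement between the two graphs. The BIC-style score, the penalty weight lam, and the simulated data are assumptions made for illustration; this is not the exact algorithm of Oates et al.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, n_obs, lam = 3, 200, 2.0  # lam: assumed penalty on edges that differ between subjects

    def all_dags(n):
        """Enumerate all DAGs on n nodes as frozensets of directed edges (i, j) meaning i -> j."""
        arcs = [(i, j) for i in range(n) for j in range(n) if i != j]
        for k in range(len(arcs) + 1):
            for subset in itertools.combinations(arcs, k):
                adj = np.zeros((n, n))
                for i, j in subset:
                    adj[i, j] = 1.0
                # A nonnegative adjacency matrix is acyclic iff trace((I + A)^n) == n.
                if np.trace(np.linalg.matrix_power(np.eye(n) + adj, n)) == n:
                    yield frozenset(subset)

    def bic(data, dag):
        """BIC-style score (smaller is better): n*log(RSS/n) per node plus log(n) per parent."""
        n = data.shape[0]
        total = 0.0
        for j in range(data.shape[1]):
            parents = sorted(i for (i, k) in dag if k == j)
            y = data[:, j]
            if parents:
                X = data[:, parents]
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                y = y - X @ beta
            total += n * np.log(np.sum(y ** 2) / n) + np.log(n) * len(parents)
        return total

    def simulate(extra_edge):
        """Toy subject data: edge 0 -> 1 always present; 1 -> 2 only when extra_edge is True."""
        x0 = rng.normal(size=n_obs)
        x1 = 0.9 * x0 + rng.normal(size=n_obs)
        x2 = (0.9 * x1 if extra_edge else 0.0) + rng.normal(size=n_obs)
        return np.column_stack([x0, x1, x2])

    data_a, data_b = simulate(False), simulate(True)
    dags = list(all_dags(n_nodes))

    # Joint brute-force estimate: individual fit plus a penalty on structural disagreement.
    best = min(((bic(data_a, ga) + bic(data_b, gb) + lam * len(ga ^ gb), ga, gb)
                for ga in dags for gb in dags), key=lambda t: t[0])
    print("subject A edges:", sorted(best[1]))
    print("subject B edges:", sorted(best[2]))
    ```

    With lam = 0 the two subjects are estimated independently; larger values pull the two estimated graphs towards a shared structure, which is the sense in which similarity between subjects is leveraged.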

    New Work For Certainty

    This paper argues that we should assign certainty a central place in epistemology. While epistemic certainty played an important role in the history of epistemology, recent epistemology has tended to dismiss certainty as an unattainable ideal, focusing its attention on knowledge instead. I argue that this is a mistake. Attending to certainty attributions in the wild suggests that much of our everyday knowledge qualifies, in appropriate contexts, as certain. After developing a semantics for certainty ascriptions, I put certainty to explanatory work. Specifically, I argue that by taking certainty as our central epistemic notion, we can shed light on a variety of important topics, including evidence and evidential probability, epistemic modals, and the normative constraints on credence and assertion.

    Critical Thinking Development Program in EFL learning

    This experimental and quantitative study in the field of Linguistics applied to education attempts to determine the influence of the Critical Thinking Development Program, delivered over one semester through a conference-like course aided by computers, on learning styles, linguistic competences, types of thinking, and the activation of intelligence. Specifically, this study tries to prove that learning styles exert a certain influence on critical thinking, as well as on linguistic competences, emotional intelligences, and leadership abilities. The methodology is therefore based on the cognitive paradigm, which helps university learners develop Constructivist and Interactionist strategies for obtaining information in the computer lab in order to generate and construct their own learning and knowledge. The sample of 20 university students studying English as an L2 used computers to obtain information about a specific topic to be analyzed and presented orally to the group and in writing. Students had to engage in collaborative learning with their classmates to eventually construct knowledge. In addition, values and attitudes were internalized and reinforced, and the CHAEA Questionnaire was used to establish the types of learning styles students used at the beginning and at the end of the semester. Results from the statistical data obtained in pre-tests and post-tests were presented in tables, which helped to draw conclusions related to types of thinking; linguistic, cognitive, and metacognitive strategies; emotional intelligences; and leadership and learning in general.

    On Cognitive Preferences and the Plausibility of Rule-based Models

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question the latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likelihood that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias towards shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowd-sourcing study based on about 3,000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, and the recognition heuristic, and investigate their relation to rule length and plausibility.
    Comment: V4: Another rewrite of the section on interpretability to clarify the focus on plausibility and its relation to interpretability, comprehensibility, and justifiability

    Inductive Biases for Deep Learning of Higher-Level Cognition

    A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis were correct, we could more easily both understand our own intelligence and build intelligent machines. Just as in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kinds of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those that concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.
    Comment: This document contains a review of the authors' research as part of the requirement of AG's predoctoral exam, an overview of the main contributions of the authors' few recent papers (co-authored with several other co-authors), as well as a vision of proposed future research