    Interaction Analysis in Smart Work Environments through Fuzzy Temporal Logic

    Interaction analysis is defined as the generation of situation descriptions from machine perception. World models created through machine perception are used by a reasoning engine based on fuzzy metric temporal logic and situation graph trees, with optional parameter learning and clustering as preprocessing, to deduce knowledge about the observed scene. The system is evaluated in a case study on automatic behavior report generation for staff training purposes in crisis response control rooms.
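
    The sketch below is a minimal illustration, not the system described in the abstract, of how a fuzzy metric temporal predicate could be evaluated over a sequence of world-model states. All names (WorldState, close_to, holds_throughout) and the linear membership function are assumptions made purely for illustration.

    # Illustrative sketch only: evaluating a fuzzy spatio-temporal predicate over
    # a toy world model of tracked agents. All names and the membership function
    # are hypothetical; they are not taken from the system described above.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class WorldState:
        time: float
        positions: Dict[str, Tuple[float, float]]   # agent id -> (x, y)

    def close_to(state: WorldState, a: str, b: str,
                 near: float = 1.0, far: float = 4.0) -> float:
        """Fuzzy degree in [0, 1] to which agents a and b are close at this instant."""
        (ax, ay), (bx, by) = state.positions[a], state.positions[b]
        d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        if d <= near:
            return 1.0
        if d >= far:
            return 0.0
        return (far - d) / (far - near)              # linear membership ramp

    def holds_throughout(history: List[WorldState], a: str, b: str) -> float:
        """Fuzzy 'always' over an interval: conjunction taken as the minimum."""
        return min(close_to(s, a, b) for s in history)

    # Degree to which two control-room operators stayed close over five time steps.
    history = [WorldState(t, {"op1": (0.0, 0.0), "op2": (0.5 + 0.1 * t, 0.0)})
               for t in range(5)]
    print(holds_throughout(history, "op1", "op2"))   # -> 1.0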

    Identification of Design Principles

    This report identifies those design principles for a (possibly new) query and transformation language for the Web supporting inference that are considered essential. Based upon these design principles an initial strawman is selected. Scenarios for querying the Semantic Web illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of the query language to be designed and implemented by the REWERSE working group I4.

    The Case for Dynamic Models of Learners' Ontologies in Physics

    In a series of well-known papers, Chi and Slotta (Chi, 1992; Chi & Slotta, 1993; Chi, Slotta & de Leeuw, 1994; Slotta, Chi & Joram, 1995; Chi, 2005; Slotta & Chi, 2006) have contended that a reason for students' difficulties in learning physics is that they think about concepts as things rather than as processes, and that there is a significant barrier between these two ontological categories. We contest this view, arguing that expert and novice reasoning often and productively traverses ontological categories. We cite examples from everyday, classroom, and professional contexts to illustrate this. We agree with Chi and Slotta that instruction should attend to learners' ontologies, but we find these ontologies are better understood as dynamic and context-dependent rather than as static constraints. To promote one ontological description in physics instruction, as suggested by Slotta and Chi, could undermine novices' access to productive cognitive resources they bring to their studies and inhibit their transition to the dynamic ontological flexibility required of experts. Comment: The Journal of the Learning Sciences (in press).

    Textual Economy through Close Coupling of Syntax and Semantics

    We focus on the production of efficient descriptions of objects, actions and events. We define a type of efficiency, textual economy, that exploits the hearer's recognition of inferential links to material elsewhere within a sentence. Textual economy leads to efficient descriptions because the material that supports such inferences has been included to satisfy independent communicative goals, and is therefore overloaded in Pollack's sense. We argue that achieving textual economy imposes strong requirements on the representation and reasoning used in generating sentences. The representation must support the generator's simultaneous consideration of syntax and semantics. Reasoning must enable the generator to assess quickly and reliably at any stage how the hearer will interpret the current sentence, with its (incomplete) syntax and semantics. We show that these representational and reasoning requirements are met in the SPUD system for sentence planning and realization. Comment: 10 pages, uses QobiTree.tex.

    An unsupervised approach to disjointness learning based on terminological cluster trees

    In the context of the Semantic Web regarded as a Web of Data, research efforts have been devoted to improving the quality of the ontologies that are used as vocabularies to enable complex services based on automated reasoning. Various surveys suggest that many domains would require better ontologies that include non-negligible constraints for properly conveying the intended semantics. Disjointness axioms are representative of this general problem: these axioms are essential for making the negative knowledge about the domain of interest explicit, yet they are often overlooked during the modeling process (thus affecting the efficacy of the reasoning services). To tackle this problem, automated methods for discovering these axioms can be used as a tool for supporting knowledge engineers in modeling new ontologies or evolving existing ones. The current solutions, either based on statistical correlations or relying on external corpora, often do not fully exploit the terminology. Stemming from this consideration, we have been investigating alternative methods to elicit disjointness axioms from existing ontologies based on the induction of terminological cluster trees, which are logic trees in which each node stands for a cluster of individuals that emerges as a sub-concept. The growth of such trees relies on a divide-and-conquer procedure that assigns to the cluster representing the root node one of the concept descriptions generated via a refinement operator, selected according to a heuristic based on minimizing the risk of overlap between the candidate sub-clusters (quantified in terms of the distance between two prototypical individuals). Preliminary work has shown some shortcomings that are tackled in this paper. To address the task of disjointness axiom discovery we have extended the terminological cluster tree induction framework with several contributions: 1) the adoption of different distance measures for clustering the individuals of a knowledge base; 2) the adoption of different heuristics for selecting the most promising concept descriptions; 3) a modified version of the refinement operator to prevent the introduction of inconsistency during the elicitation of the new axioms. A wide empirical evaluation showed the feasibility of the proposed extensions and the improvement with respect to alternative approaches.
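
    A purely illustrative sketch of the divide-and-conquer idea described in this abstract, under simplifying assumptions (hypothetical names, not the authors' implementation): individuals are reduced to toy feature dictionaries, candidate concept descriptions to boolean tests, and the split kept is the one whose two sub-clusters have the most separated prototypical individuals (medoids), i.e. the lowest risk of overlap. Leaves of the resulting tree stand for emerging sub-concepts; sibling leaves would be the natural candidates for disjointness axioms.

    # Illustrative sketch only (hypothetical names, not the authors' code): growing
    # a cluster-tree-like structure by recursive splitting, where the heuristic
    # prefers the candidate concept whose sub-clusters have the most separated medoids.
    from typing import Callable, Dict, List, Optional, Tuple

    Individual = Dict[str, float]            # toy stand-in for a described individual
    Concept = Callable[[Individual], bool]   # toy stand-in for a concept description

    def distance(a: Individual, b: Individual) -> float:
        keys = set(a) | set(b)
        return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys) ** 0.5

    def medoid(cluster: List[Individual]) -> Individual:
        # prototypical individual: minimises the total distance to the rest of the cluster
        return min(cluster, key=lambda x: sum(distance(x, y) for y in cluster))

    Split = Tuple[Concept, List[Individual], List[Individual]]

    def best_split(cluster: List[Individual], concepts: List[Concept]) -> Optional[Split]:
        best, best_sep = None, -1.0
        for c in concepts:
            pos = [i for i in cluster if c(i)]
            neg = [i for i in cluster if not c(i)]
            if not pos or not neg:
                continue
            sep = distance(medoid(pos), medoid(neg))   # larger separation = lower overlap risk
            if sep > best_sep:
                best, best_sep = (c, pos, neg), sep
        return best

    class Node:
        def __init__(self, cluster, concept=None, left=None, right=None):
            self.cluster, self.concept, self.left, self.right = cluster, concept, left, right

    def grow_tree(cluster: List[Individual], concepts: List[Concept], min_size: int = 2) -> Node:
        split = best_split(cluster, concepts) if len(cluster) >= min_size else None
        if split is None:
            return Node(cluster)                       # leaf: an emerging sub-concept
        concept, pos, neg = split
        return Node(cluster, concept,
                    grow_tree(pos, concepts, min_size),
                    grow_tree(neg, concepts, min_size))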