113 research outputs found

    Visualising the effects of ontology changes and studying their understanding with ChImp

    Due to the Semantic Web's decentralised nature, ontology engineers rarely know all applications that leverage their ontology. Consequently, they are unaware of the full extent of the consequences their changes might cause. Our goal is to narrow the gap between ontology engineers and users by investigating ontology engineers' understanding of the impact of ontology changes at editing time. To this end, this paper introduces ChImp, a Protégé plugin. We elicited requirements for ChImp through a questionnaire with ontology engineers and developed the plugin accordingly: it displays all changes of a given session together with selected information on those changes and their effects. For each change, it computes a number of metrics on both the ontology and its materialisation, reporting them for the originally loaded ontology at the beginning of the editing session and for the current state, to help ontology engineers understand the impact of their changes. We investigated the informativeness of the materialisation impact measures, the meaning of severe impact, and the usefulness of ChImp in an online user study with 36 ontology engineers. We asked the participants to solve two ontology engineering tasks – with and without ChImp (assigned in random order) – and to answer in-depth questions about the applied changes as well as the materialisation impact measures. We found that ChImp increased the participants' understanding of change effects and that they felt better informed. Answers also suggest that the proposed measures were useful and informative. We also learned that the participants consider different outcomes of changes severe, but most would define severity by the amount of change to the materialisation relative to its size. The participants also acknowledged the importance of quantifying the impact of changes and said that the study will affect their approach to editing ontologies.
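
    As a rough illustration of the kind of materialisation impact measure discussed above, the sketch below compares the materialisation of the originally loaded ontology with that of the edited version and reports the relative amount of change. It assumes an rdflib/owlrl workflow and hypothetical file names; it is not ChImp's actual implementation.

    # Minimal sketch: measure how an edit changes an ontology's materialisation.
    # Assumes rdflib + owlrl; the file names are hypothetical placeholders.
    from rdflib import Graph
    import owlrl

    def materialise(path: str) -> set:
        """Load an ontology and return its OWL RL closure as a set of triples."""
        g = Graph().parse(path)
        owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)
        return set(g)

    before = materialise("ontology_original.ttl")  # state when the session started
    after = materialise("ontology_edited.ttl")     # current state after the changes

    added, removed = after - before, before - after
    # One plausible severity measure: changed triples relative to the original closure size.
    impact = (len(added) + len(removed)) / max(len(before), 1)
    print(f"+{len(added)} / -{len(removed)} inferred triples, relative impact {impact:.1%}")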

    Integrating ontologies and argumentation for decision-making in breast cancer

    This thesis describes some of the problems in providing care for patients with breast cancer. These are then used to motivate the development of an extension to an existing theory of argumentation, which I call the Ontology-based Argumentation Formalism (OAF). The work is assessed in both theoretical and empirical ways. From a clinical perspective, there is a problem with the provision of care. Numerous reports have noted the failure to provide uniformly high-quality care, as well as the number of deaths caused by medical care. The medical profession has responded in various ways, one of which has been the development of Decision Support Systems (DSS). The evidence for the effectiveness of such systems is mixed, and their technical basis remains open to debate. However, one basis that has been used is argumentation. An important aspect of clinical practice is the use of evidence from clinical trials, but these trials are based on results in defined groups of patients. Thus, when we use the results of clinical trials to reason about treatments, we are interested in two forms of information: the evidence from the trials and the relationships between groups of patients and treatments. The relational information can be captured in an ontology about the groups of patients and treatments, and the information from the trials can be captured as a set of defeasible rules. OAF is an extension of an existing argumentation system, and provides the basis for an argumentation-based Knowledge Representation system which could serve as the basis for future DSS. In OAF, the ontology provides a repository of facts, both asserted and inferred on the basis of formulae in the ontology, as well as defining the language of the defeasible rules. The defeasible rules are used in a process of defeasible reasoning, where monotonic consistent chains of reasoning are used to draw plausible conclusions. This defeasible reasoning is used to generate arguments and counter-arguments. Conflict between arguments is defined in terms of inconsistent formulae in the ontology, and by building on existing proposals for ontology languages we are able to make use of existing technologies for ontological reasoning. There are three substantial areas of novel work: I develop an extension to an existing argumentation formalism, and prove some simple properties of the formalism. I also provide a novel formalisation of the practical syllogism and related hypothetical reasoning, and compare my approach to two other proposals in the literature. I conclude with a substantial case study based on a breast cancer guideline, and in order to do so I describe a methodology for comparing formal and informal arguments, and use the results of this to discuss the strengths and weaknesses of OAF. In order to develop the case study, I provide a prototype implementation. The prototype uses a novel incremental algorithm to construct arguments, and I give soundness, completeness and time-complexity results. The final chapter of the thesis discusses some general lessons from the development of OAF and gives ideas for future work.
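
    Purely by way of illustration, the toy sketch below shows how plausible conclusions might be chained from defeasible rules over facts supplied by an ontology. The facts, rules and naive fixpoint loop are hypothetical simplifications; they do not reproduce OAF's formalism or the incremental argument-construction algorithm described in the thesis.

    # Toy forward chaining over defeasible rules; facts stand in for ontology-derived assertions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        premises: frozenset   # literals that must already be supported
        conclusion: str       # literal that may plausibly be concluded

    # Hypothetical facts and defeasible rules for a toy treatment decision.
    facts = {"in_group_G", "treatment_T_trialled_on_G"}
    rules = [
        Rule(frozenset({"in_group_G", "treatment_T_trialled_on_G"}), "T_beneficial"),
        Rule(frozenset({"has_contraindication_C"}), "not_T_beneficial"),
    ]

    def build_arguments(facts, rules):
        """Map each supported conclusion to the premises that support it (naive fixpoint)."""
        supported = dict.fromkeys(facts, frozenset())
        changed = True
        while changed:   # the thesis uses an incremental algorithm instead of this naive loop
            changed = False
            for rule in rules:
                if rule.premises.issubset(supported) and rule.conclusion not in supported:
                    supported[rule.conclusion] = rule.premises
                    changed = True
        return supported

    print(build_arguments(facts, rules))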

    An intelligent system for facility management

    A software system has been developed that monitors and interprets temporally changing (internal) building environments and generates related knowledge that can assist in facility management (FM) decision making. The use of the multi-agent paradigm renders a system that delivers demonstrable rationality and is robust within the dynamic environment in which it operates. Agent behaviour directed at working toward goals is rendered intelligent with semantic web technologies. The capture of semantics through formal expression to model the environment adds a richness that the agents exploit to intelligently determine behaviours to satisfy goals that are flexible and adaptable. The agent goals are to generate knowledge about building space usage as well as environmental conditions by elaborating and combining near real-time sensor data and information from conventional building models. Additionally, further inferences are facilitated, including those about wasted resources such as unnecessary lighting and heating. In contrast, current FM tools, lacking automatic synchronisation with the domain and rich semantic modelling, are limited to the simpler querying of manually maintained models.
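
    As a sketch of the kind of wasted-resource inference described above, the snippet below flags rooms whose lighting is on while unoccupied, using rdflib and a hypothetical example vocabulary (ex:) standing in for the thesis's richer building model.

    # Sketch: infer wasted lighting by querying an RDF model of sensor-derived room state.
    # The ex: vocabulary and the sample data are hypothetical stand-ins for the building model.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/building#> .
        ex:room101 ex:occupied false ; ex:lightingOn true .
        ex:room102 ex:occupied true  ; ex:lightingOn true .
    """, format="turtle")

    wasted = g.query("""
        PREFIX ex: <http://example.org/building#>
        SELECT ?room WHERE { ?room ex:occupied false ; ex:lightingOn true . }
    """)
    for row in wasted:
        print(f"Possible wasted lighting in {row.room}")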

    Linked Research on the Decentralised Web

    This thesis is about research communication in the context of the Web. I analyse literature which reveals how researchers are making use of Web technologies for knowledge dissemination, as well as how individuals are disempowered by the centralisation of certain systems, such as academic publishing platforms and social media. I share my findings on the feasibility of a decentralised and interoperable information space where researchers can control their identifiers whilst fulfilling the core functions of scientific communication: registration, awareness, certification, and archiving. The contemporary research communication paradigm operates under a diverse set of sociotechnical constraints, which influence how units of research information and personal data are created and exchanged. Economic forces and non-interoperable system designs mean that researcher identifiers and research contributions are largely shaped and controlled by third-party entities; participation requires the use of proprietary systems. From a technical standpoint, this thesis takes a deep look at the semantic structure of research artifacts, and how they can be stored, linked and shared in a way that is controlled by individual researchers, or delegated to trusted parties. Further, I find that the ecosystem was lacking a technical Web standard able to fulfil the awareness function of research communication. Thus, I contribute a new communication protocol, Linked Data Notifications (published as a W3C Recommendation), which enables decentralised notifications on the Web, and provide implementations pertinent to the academic publishing use case. So far we have seen decentralised notifications applied in research dissemination or collaboration scenarios, as well as for archival activities and scientific experiments. Another core contribution of this work is a Web standards-based implementation of a client-side tool, dokieli, for decentralised article publishing, annotations and social interactions. dokieli can be used to fulfil the scholarly functions of registration, awareness, certification, and archiving, all in a decentralised manner, returning control of research contributions and discourse to individual researchers. The overarching conclusion of the thesis is that Web technologies can be used to create a fully functioning ecosystem for research communication. Using the framework of Web architecture, and loosely coupling the four functions, an accessible and inclusive ecosystem can be realised whereby users are able to use and switch between interoperable applications without interfering with existing data. Technical solutions alone do not suffice, of course, so this thesis also takes into account the need for a change in the traditional mode of thinking amongst scholars, and presents the Linked Research initiative as an ongoing effort toward researcher autonomy in a social system, and universal access to human- and machine-readable information. Outcomes of this outreach work so far include an increase in the number of individuals self-hosting their research artifacts, workshops publishing accessible proceedings on the Web, in-the-wild experiments with open and public peer review, and semantic graphs of contributions to conference proceedings and journals (the Linked Open Research Cloud).
Some of the future challenges include: addressing the social implications of decentralised Web publishing, as well as the design of ethically grounded interoperable mechanisms; cultivating privacy-aware information spaces; personal or community-controlled on-demand archiving services; and further design of decentralised applications that are aware of the core functions of scientific communication.
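
    Because Linked Data Notifications is a published W3C Recommendation, the basic sender flow it defines can be sketched: discover the receiver's inbox (advertised via an ldp:inbox relation) and POST a JSON-LD notification to it. The sketch below uses Python's requests library; the target URL, actor and object are hypothetical, and a complete sender would also look for an ldp:inbox triple in the target's RDF body when no Link header is present.

    # Sketch of an LDN sender: discover the inbox from the HTTP Link header, then POST JSON-LD.
    import requests

    LDP_INBOX = "http://www.w3.org/ns/ldp#inbox"
    target = "https://example.org/articles/my-article"   # hypothetical resource advertising an inbox

    resp = requests.head(target)
    inbox = next((link["url"] for link in resp.links.values()
                  if link.get("rel") == LDP_INBOX), None)

    if inbox:
        notification = {
            "@context": "https://www.w3.org/ns/activitystreams",
            "type": "Announce",
            "actor": "https://example.org/profile#me",
            "object": "https://example.org/annotations/1",
            "target": target,
        }
        r = requests.post(inbox, json=notification,
                          headers={"Content-Type": "application/ld+json"})
        print(r.status_code)   # an LDN receiver replies 201 Created with a Location header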

    Augmented Conversation and Cognitive Apprenticeship Metamodel Based Intelligent Learning Activity Builder System

    This research focused on a formal (theory-based) approach to designing an Intelligent Tutoring System (ITS) authoring tool involving two specific conventional pedagogical theories: Conversation Theory (CT) and Cognitive Apprenticeship (CA). The research conceptualised an Augmented Conversation and Cognitive Apprenticeship Metamodel (ACCAM) based on a priori theoretical knowledge and the assumptions of its underlying theories. ACCAM was implemented in an Intelligent Learning Activity Builder System (ILABS), an ITS authoring tool. ACCAM's implementation aims to facilitate formally designed tutoring systems; hence ILABS, the practical implementation of ACCAM, constructs metamodels for Intelligent Learning Activity Tools (ILATs) in a numerical problem-solving context (focusing on the construction of procedural knowledge in applied numerical disciplines). Also, an Intelligent Learning Activity Management System (ILAMS), although not the focus of this research, was developed as a launchpad for the constructed ILATs and to administer learning activities. Hence, ACCAM and ILABS constitute, respectively, the conceptual and practical contributions of this research. ACCAM's implementation was tested through the evaluation of ILABS and ILATs within an applied numerical domain, namely accounting. The evaluation focused on the key constructs of ACCAM, cognitive visibility and conversation, implemented through a tutoring strategy employing Process Monitoring (PM). PM augments conversation within a cognitive apprenticeship framework; it aims to improve the visibility of a learner's cognitive process and to confer intelligence on tutoring systems. PM was implemented via an interface that attempts to bring the learner's thought process to the surface. This approach contrasts with previous studies that adopted standard Artificial Intelligence (AI) based inference techniques. The interface-based PM extends the existing CT and CA work. The strategy (i.e. interface-based PM) makes available a new tutoring approach aimed at fine-grained (or step-wise) feedback, unlike the goal-oriented feedback of model tracing. The impact of PM, as a preventive strategy (or intervention) and as an aid to diagnosing learners' cognitive processes, was investigated in relation to other constructs from the literature (such as detection of misconceptions, feedback generation and perceived learning effectiveness). Thus, the conceptualisation and implementation of PM via an interface also contribute to knowledge and practice. The evaluation of the ACCAM-based design approach and the investigation of the above-mentioned constructs were undertaken through users' reactions to and perceptions of ILABS and ILATs. This involved, principally, a quantitative approach; however, a qualitative approach was also utilised to gain deeper insight. Findings from the evaluation support the formal (theory-based) design approach: the design of ILABS through interaction with ACCAM. Empirical data revealed the presence of the conversation and cognitive visibility constructs in ILATs, determined through their behaviour during the learning process. This research identified some other theoretical elements (e.g. motivation, reflection, remediation, evaluation) that possibly play out in a learning process. This clarifies key conceptual variables that should be considered when constructing tutoring systems for applied numerical disciplines (e.g. accounting, engineering).
Also, the research revealed that PM enhances the detection of a learner's misconceptions and the generation of feedback. Nevertheless, qualitative data revealed that the frequent feedback resulting from PM could obstruct the thought process at an advanced stage of learning. Thus, PM implementations should also include delayed diagnosis, especially for advanced learners who prefer to have it on request. Even so, the current implementation allows users to turn PM off and thereby use an alternative learning route. Overall, the research revealed that the implementation of interface-based PM (i.e. conversation and cognitive visibility) improved the visibility of the learner's cognitive process, and this in turn enhanced learning as perceived by the participants.
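
    A minimal illustration of the step-wise (fine-grained) feedback idea behind Process Monitoring follows, assuming a toy accounting exercise in which each intermediate step a learner enters is checked against an expected value, with diagnosis optionally deferred for advanced learners. The exercise, values and messages are hypothetical; this is not the ILABS implementation.

    # Toy step-wise feedback loop: check each intermediate step rather than only the final answer.
    expected_steps = {
        "gross_profit": 40_000,       # e.g. sales 100_000 minus cost of sales 60_000
        "operating_profit": 25_000,   # gross profit minus expenses 15_000
    }

    def monitor(step_name: str, learner_value: float, advanced: bool = False) -> str:
        """Return immediate feedback for one step, or defer diagnosis for advanced learners."""
        if advanced:
            return "Diagnosis deferred; available on request."   # delayed-feedback option
        expected = expected_steps[step_name]
        if learner_value == expected:
            return f"{step_name}: correct."
        return f"{step_name}: expected {expected}, got {learner_value}; check the previous figure."

    print(monitor("gross_profit", 40_000))
    print(monitor("operating_profit", 20_000))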