    TARGET: Rapid Capture of Process Knowledge

    TARGET (Task Analysis/Rule Generation Tool) represents a new breed of tool that blends graphical process flow modeling capabilities with the function of a top-down reporting facility. Since NASA personnel frequently perform tasks that are primarily procedural in nature, TARGET models mission or task procedures and generates hierarchical reports as part of the process capture and analysis effort. Historically, capturing knowledge has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent that knowledge in a useful form. Although much research has been devoted to methodologies and software to aid the capture and representation of some types of knowledge, procedural knowledge has received relatively little attention. In essence, TARGET is one of the first tools of its kind, commercial or institutional, designed to support this type of knowledge capture. This paper describes the design and development of TARGET for the acquisition and representation of procedural knowledge. The strategies TARGET employs to support use by knowledge engineers, subject matter experts, programmers and managers are discussed, including the method by which the tool uses its graphical user interface to generate a task hierarchy report. Next, the approach to generating production rules for incorporation in a CLIPS-based expert system is elaborated. TARGET also permits experts to describe procedural tasks visually, providing a common medium for knowledge refinement by the expert community and the knowledge engineer and making consensus on the captured knowledge possible. The paper briefly touches on the verification and validation issues facing the CLIPS rule generation aspects of TARGET. A description of efforts to address TARGET's interoperability across PCs, Macintoshes and UNIX workstations concludes the paper.
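As a rough illustration of the rule-generation step described above, the following Python sketch walks a nested task hierarchy and emits one CLIPS-style production rule per task. The task model, rule template and all names are assumptions made for this example; TARGET's actual internal representation and generated rule syntax are not given in the abstract.

```python
# Hypothetical sketch: turning a hierarchical task model into CLIPS-style
# production rules, in the spirit of TARGET's rule generation. The nested
# dict structure and the defrule shape are illustrative assumptions.

def rules_from_hierarchy(task, parent=None):
    """Emit one CLIPS-style defrule per task: when the parent step is
    complete, assert that this step is ready to run."""
    rules = []
    trigger = f"(completed {parent})" if parent else "(start)"
    rules.append(
        f"(defrule ready-{task['name']}\n"
        f"   {trigger}\n"
        f"   =>\n"
        f"   (assert (ready {task['name']})))"
    )
    for sub in task.get("subtasks", []):
        rules.extend(rules_from_hierarchy(sub, parent=task["name"]))
    return rules

# Invented mission procedure for demonstration purposes only.
mission = {
    "name": "pre-launch-check",
    "subtasks": [
        {"name": "verify-power"},
        {"name": "verify-comms", "subtasks": [{"name": "ping-ground-station"}]},
    ],
}

for rule in rules_from_hierarchy(mission):
    print(rule)
```

A depth-first walk like this preserves the report's task hierarchy ordering, so the generated rule file reads in the same top-down order as the task hierarchy report.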

    General cost analysis for scholarly communication in Germany : results of the "Houghton Report" for Germany

    Management Summary: Conducted within the project “Economic Implications of New Models for Information Supply for Science and Research in Germany”, the Houghton Report for Germany provides a general cost and benefit analysis for scientific communication in Germany, comparing different scenarios according to their specific costs and explicitly including the German National License Program (NLP). Based on the scholarly lifecycle process model outlined by Björk (2007), the study compared the following scenarios according to their accounted costs:
    - Traditional subscription publishing;
    - Open Access publishing (‘Gold Open Access’: refers primarily to journal publishing where access is free of charge to readers, while the authors or funding organisations pay for publication);
    - Open Access self-archiving (authors deposit their work in online open access institutional or subject-based repositories, making it freely available to anyone with Internet access), further divided into (i) ‘Green Open Access’ self-archiving operating in parallel with subscription publishing, and (ii) the ‘overlay services’ model, in which self-archiving provides the foundation for overlay services (e.g. peer review, branding and quality control services);
    - the NLP.
    Within all scenarios, five core activity elements (fund research and research communication; perform research and communicate the results; publish scientific and scholarly works; facilitate dissemination, retrieval and preservation; study publications and apply the knowledge) were modeled and priced together with all their constituent activities. Modelling the impacts of an increase in accessibility and efficiency resulting from more open access on returns to R&D over a 20-year period and then comparing costs and benefits, we find that the benefits of open access publishing models are likely to substantially outweigh the costs and that, while smaller, the benefits of the German NLP also exceed the costs.
This analysis of the potential benefits of more open access to research findings suggests that different publishing models can make a material difference to the benefits realised, as well as the costs faced. It seems likely that more Open Access would have substantial net benefits in the longer term and, while net benefits may be lower during a transitional period, they are likely to be positive both for ‘author-pays’ Open Access publishing and the ‘overlay journals’ alternatives (‘Gold Open Access’), and for parallel subscription publishing and self-archiving (‘Green Open Access’). The NLP returns substantial benefits and savings at a modest cost, yielding one of the highest benefit/cost ratios available from unilateral national policies during a transitional period (second to that of ‘Green Open Access’ self-archiving). Whether ‘Green Open Access’ self-archiving in parallel with subscriptions is a sustainable model over the longer term is debatable, and what impact the NLP may have on the take-up of Open Access alternatives is also an important consideration. So too is the potential for developments in Open Access or other scholarly publishing business models to significantly change the relative cost-benefit of the NLP over time. The results are comparable to those of previous studies from the UK and the Netherlands. Green Open Access in parallel with the traditional model yields the best benefit/cost ratio. Beside its benefit/cost ratio, the case for the NLP rests on its enforceability. The true cost of toll access publishing, beside the ‘buyback’ of information, is that it bars society from access to research and knowledge.
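The shape of such a cost/benefit comparison over a 20-year horizon can be sketched as a simple discounted calculation. All figures below (yearly costs, the benefit ramp-up during a transitional period, the discount rate) are invented placeholders, not numbers from the study; the point is only the mechanics of comparing scenarios by their discounted benefit/cost ratio.

```python
# Toy sketch of a discounted benefit/cost comparison over 20 years.
# Every number here is an invented placeholder for illustration.

def npv(stream, rate=0.035):
    """Net present value of a yearly stream, discounted from year 1."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream, start=1))

YEARS = 20
costs = [100.0] * YEARS                # steady yearly cost of a publishing model
benefits = [50.0] * 5 + [150.0] * 15   # lower net benefits during transition,
                                       # higher once the model is established

bc_ratio = npv(benefits) / npv(costs)
print(f"benefit/cost ratio: {bc_ratio:.2f}")  # ratio > 1 means net benefits
```

Ranking scenarios by such a ratio is what allows statements like "second to that of ‘Green Open Access’ self-archiving" above: each scenario gets its own cost and benefit streams, and the ratios are compared.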

    Integrating e-commerce standards and initiatives in a multi-layered ontology

    The proliferation of different standards and joint initiatives for the classification of products and services (UNSPSC, e-cl@ss, RosettaNet, NAICS, SCTG, etc.) reveals that B2B markets have not reached a consensus on coding systems, on the level of detail of their descriptions, on their granularity, etc. This paper shows how these standards and initiatives, which are built to cover different needs and functionalities, can be integrated into an ontology using a common multi-layered knowledge architecture. This multi-layered ontology provides a shared understanding of the domain for e-commerce applications, allowing information sharing between heterogeneous systems. We present a method for designing ontologies from these information sources by automatically transforming, integrating and enriching the existing vocabularies with the WebODE platform. As an illustration, we present an example from the computer domain, showing the relationships between UNSPSC, e-cl@ss, RosettaNet and an electronic catalogue from an e-commerce platform.
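The core idea of a multi-layered alignment can be sketched minimally: an upper-layer, standard-neutral concept carries mappings to the native codes of each coding system, so heterogeneous catalogues can translate through the shared layer. The dataclass layout and the eCl@ss code below are assumptions for the example, not the WebODE representation or verified codes.

```python
# Illustrative sketch only: one shared upper-layer concept aligned to
# category codes from two coding systems. The structure is an assumption
# for this example, not the paper's actual ontology model.

from dataclasses import dataclass, field

@dataclass
class UnifiedConcept:
    name: str                                     # standard-neutral label
    mappings: dict = field(default_factory=dict)  # standard -> native code

    def align(self, standard: str, code: str):
        self.mappings[standard] = code

# A shared concept for notebook computers, aligned to two vocabularies.
# The eCl@ss code is hypothetical; hedged accordingly.
laptop = UnifiedConcept("NotebookComputer")
laptop.align("UNSPSC", "43211503")
laptop.align("eCl@ss", "19-01-02-02")

def translate(concept, src, dst):
    """Translate a native code from one standard to another via the
    shared upper-layer concept."""
    if concept.mappings.get(src) is None:
        raise KeyError(f"{concept.name} has no {src} mapping")
    return concept.mappings[dst]
```

Routing every translation through the shared concept keeps the number of mappings linear in the number of standards, instead of quadratic pairwise mappings between all coding systems.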

    Conceptual graph-based knowledge representation for supporting reasoning in African traditional medicine

    Although African patients use conventional (modern) and traditional healthcare simultaneously, it is estimated that 80% of people rely on African traditional medicine (ATM). ATM includes medical activities stemming from practices, customs and traditions that were integral to the distinctive African cultures. It is based mainly on the oral transfer of knowledge, with the attendant risk of losing critical knowledge. Moreover, practices differ across regions and with the availability of medicinal plants. Therefore, it is necessary to compile the tacit, disseminated and complex knowledge of various Tradi-Practitioners (TP) in order to identify effective patterns for treating a given disease. Knowledge engineering methods for traditional medicine are useful to model suitably complex information needs, formalize the knowledge of domain experts and highlight the effective practices for their integration into conventional medicine. The work described in this paper presents an approach that addresses two issues. First, it proposes a formal representation model of ATM knowledge and practices to facilitate their sharing and reuse. Second, it provides a visual reasoning mechanism for selecting the best available procedures and medicinal plants to treat diseases. The approach is based on the Delphi method for capturing knowledge from various experts, which necessitates reaching a consensus. Conceptual graph formalism is used to model ATM knowledge with visual reasoning capabilities and processes. Nested conceptual graphs are used to visually express the semantic meaning of Computational Tree Logic (CTL) constructs that are useful for the formal specification of temporal properties of ATM domain knowledge. Our approach has the advantage of mitigating knowledge loss and providing conceptual development assistance, improving both the quality of ATM care (medical diagnosis and therapeutics) and patient safety (drug monitoring).
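As an illustration of the kind of temporal property such a CTL formalization can express, the following Python sketch checks the CTL operator EF ("some reachable state satisfies p") over a toy graph of treatment states. The states, transitions and labels are invented for this example and do not come from the paper.

```python
# Minimal sketch of checking the CTL operator EF over a small state graph.
# The treatment-protocol states below are invented for illustration.

def ef(transitions, labels, start, prop):
    """Return True if some state satisfying `prop` is reachable from
    `start` (the CTL operator EF), via a simple depth-first search."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if prop in labels.get(s, set()):
            return True
        stack.extend(transitions.get(s, []))
    return False

# Toy treatment protocol: diagnose -> select plant -> prepare -> administer.
transitions = {
    "diagnose": ["select-plant"],
    "select-plant": ["prepare-remedy"],
    "prepare-remedy": ["administer"],
}
labels = {"administer": {"patient-treated"}}
```

A nested conceptual graph would render the same reachability question visually, which is what makes the formal property accessible to Tradi-Practitioners who are not trained in temporal logic.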

    Epistemic and social scripts in computer-supported collaborative learning

    Collaborative learning in computer-supported learning environments typically means that learners work on tasks together, discussing their individual perspectives via text-based media or videoconferencing, and consequently acquire knowledge. Collaborative learning, however, is often sub-optimal with respect to how learners work on the concepts that are supposed to be learned and how learners interact with each other. One possibility to improve collaborative learning environments is to conceptualize epistemic scripts, which specify how learners work on a given task, and social scripts, which structure how learners interact with each other. In this contribution, two studies will be reported that investigated the effects of epistemic and social scripts in a text-based computer-supported learning environment and in a videoconferencing learning environment in order to foster the individual acquisition of knowledge. In each study the factors ‘epistemic script’ and ‘social script’ have been independently varied in a 2×2-factorial design. 182 university students of Educational Science participated in these two studies. Results of both studies show that social scripts can be substantially beneficial with respect to the individual acquisition of knowledge, whereas epistemic scripts apparently do not lead to the expected effects.

    Modeling an ontology on accessible evacuation routes for emergencies

    Providing alert communication in emergency situations is vital to reduce the number of victims. However, this is a challenging goal for researchers and professionals due to the diverse pool of prospective users, e.g. people with disabilities as well as other vulnerable groups. Moreover, in the event of an emergency situation, many people could become vulnerable because of exceptional circumstances such as stress, an unknown environment or even visual impairment (e.g. fire causing smoke). Within this scope, a crucial activity is to notify affected people about safe places and available evacuation routes. In order to address this need, we propose to extend an ontology, called SEMA4A (Simple EMergency Alert 4 [for] All), developed in a previous work for managing knowledge about accessibility guidelines, emergency situations and communication technologies. In this paper, we introduce a semi-automatic technique for knowledge acquisition and modeling on accessible evacuation routes. We introduce a use case to show applications of the ontology and conclude with an evaluation involving several experts in evacuation procedures. © 2014 Elsevier Ltd. All rights reserved.
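The kind of inference such an ontology enables can be sketched minimally: match the accessibility features a user profile requires against the features each evacuation route provides. The route data, feature names and filtering logic below are invented for illustration and are not taken from SEMA4A.

```python
# Hypothetical sketch of accessibility-aware route selection, in the
# spirit of an evacuation ontology like SEMA4A. Routes and feature
# names are invented placeholders.

routes = [
    {"name": "stairwell-A", "features": set()},
    {"name": "ramp-B", "features": {"step-free"}},
    {"name": "elevator-C", "features": {"step-free", "audio-guidance"}},
]

def accessible_routes(routes, required):
    """Keep only routes that provide every feature the user requires
    (set-subset test: required <= provided)."""
    return [r["name"] for r in routes if required <= r["features"]]
```

For example, a wheelchair user's profile might require `{"step-free"}`, while a user with a visual impairment (or anyone in heavy smoke) might additionally require `{"audio-guidance"}`; an ontology adds to this sketch the shared vocabulary that lets guidelines, sensors and alert systems agree on what those feature names mean.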

    Use of Research Evidence: Social Services Portfolio

    The William T. Grant Foundation intends that the emerging research evidence from its Use of Research Evidence (URE) portfolio be useful to those engaged in these (and other) diverse efforts. But broad and meaningful use of research evidence will require conversations that extend beyond researchers and expert forums. Indeed, URE findings suggest that policymakers and practitioners should not be viewed simply as "end users" of research evidence. To provide insight into how URE studies and the resulting evidence could be most relevant and useful to them, policymakers and practitioners at all levels in the social services system must have a voice in these conversations. This paper is intended to foster and inform dialogue among researchers, policymakers, and practitioners by reflecting on the Foundation's social services URE portfolio from the perspective of policy and practice and by identifying potential opportunities for the next generation of studies and considerations for those undertaking that work.

    Epistemic and Social Scripts in Computer-Supported Collaborative Learning

    Collaborative learning in computer-supported learning environments typically means that learners work on tasks together, discussing their individual perspectives via text-based media or videoconferencing, and consequently acquire knowledge. Collaborative learning, however, is often sub-optimal with respect to how learners work on the concepts that are supposed to be learned and how learners interact with each other. Therefore, instructional support needs to be implemented into computer-supported collaborative learning environments. One possibility to improve collaborative learning environments is to conceptualize scripts that structure epistemic activities and social interactions of learners. In this contribution, two studies will be reported that investigated the effects of epistemic and social scripts in a text-based computer-supported learning environment and in a videoconferencing learning environment in order to foster the individual acquisition of knowledge. In each study the factors "epistemic script" and "social script" have been independently varied in a 2×2-factorial design. 182 university students of Educational Science participated in these two studies. Results of both studies show that social scripts can be substantially beneficial with respect to the individual acquisition of knowledge, whereas epistemic scripts apparently do not lead to the expected effects.

    Cooperative learning in computer-supported learning environments typically means that learners acquire knowledge by working on tasks together and discussing their individual perspectives via text-based media or in videoconferences. Cooperative learning, however, often appears to be suboptimal with regard to how the concepts to be learned are worked on and with regard to the learners' social interactions. One way to improve cooperative learning environments is to conceptualize scripts that support learners' epistemic activities and social interactions. This contribution reports two studies that investigated the effects of epistemic and social scripts on individual knowledge acquisition in a text-based and a video-based computer-supported learning environment, respectively. In both studies the factors "epistemic script" and "social script" were varied independently of one another in a 2×2-factorial design. 182 students of Educational Science at LMU MĂŒnchen took part in the two studies. The results of both studies suggest that social scripts can substantially foster individual knowledge acquisition, whereas epistemic scripts apparently do not lead to the expected effects.
    • 

    corecore