
    Design of a performance-oriented workplace e-learning system using ontology

    E-learning is emerging as a popular learning approach in many organizations. Despite the ever-increasing practice of e-learning in the workplace, most e-learning applications fail to meet learners' needs or to serve the organization's quest for success. When it comes to e-learning, significant gaps exist between organizational interests and individual needs, which make e-learning applications less goal-effective. To address this problem, a performance-oriented approach is presented in this study. Key performance indicators (KPIs) are set up to clarify organizational training needs and to help learners establish rational learning objectives. In addition, an ontology is used to construct a formal, machine-understandable conceptualization of the performance-oriented learning environment. Using this approach, a KPI-oriented learning ontology and a prototype system have been developed and evaluated to demonstrate the effectiveness of the approach. © 2010 Elsevier Ltd. All rights reserved.
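The abstract does not reproduce the ontology itself; as a rough illustrative sketch only (all class names, properties, and figures below are hypothetical, not from the paper), the core idea of letting KPI gaps drive learning objectives might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class KPI:
    """A key performance indicator derived from an organizational goal."""
    name: str
    target: float   # desired performance level
    current: float  # measured performance level

    def gap(self) -> float:
        """Performance gap that training should help close."""
        return max(0.0, self.target - self.current)

@dataclass
class LearningObjective:
    """A learner-level objective aligned with one or more KPIs."""
    description: str
    kpis: list = field(default_factory=list)

    def priority(self) -> float:
        """Prioritize objectives by the total KPI gap they address."""
        return sum(k.gap() for k in self.kpis)

# Hypothetical example: a sales-conversion KPI motivating a training objective.
sales = KPI("quarterly_sales_conversion", target=0.30, current=0.22)
objective = LearningObjective("Consultative selling techniques", [sales])
print(round(objective.priority(), 2))  # 0.08
```

In the paper's approach this conceptualization is expressed as a formal ontology rather than plain classes, so that it is machine-understandable and shareable across the learning environment.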

    Veterans engineering resource center: the DREAM project

    Due to technological advances, the data collected from direct healthcare delivery is growing by the day. This constantly growing data, collected from various resources including patient visits, images, laboratory results, and physician notes, though important, has little significance beyond satisfying reporting and documentation requirements and its potential application to specific clinical situations, mainly due to its voluminous and heterogeneous nature. With this tremendous amount of data, manual extraction of information is expensive, time-consuming, and subject to human error. Fortunately, information technologies have enabled not only the generation and collection of this data but also the efficient extraction of useful information. Currently, there is a broad spectrum of secondary uses of clinical data, including clinical and translational research, public health and policy analysis, and quality measurement and improvement. The following case study examines a pilot project undertaken by the Veterans Engineering Resource Center (VERC) to design a data mining software utility called the Data Resource Engine & Analytical Model (DREAM). This software should be operable within the VA IT infrastructure and will allow providers to view aggregate patient data rapidly and accurately using electronic health records.

    Enhancing the Digital Backchannel Backstage on the Basis of a Formative User Study

    Contemporary higher education, with its large audiences, suffers from student passivity. Enhancing the classroom with a digital backchannel can contribute to establishing and fostering active participation of, and collaboration among, students in the lecture. We therefore conceived the digital backchannel Backstage, tailored specifically for use in large classes. At an early phase of development we tested its core functionalities in a small-scale user study. The aim of the study was to gain first impressions of its adoption and to form a basis for further steps in the conception of Backstage. Regarding adoption, we focused particularly on how Backstage influences the participants' questioning behavior, a salient aspect of learning. We observed that many more questions were raised on Backstage during the study than were asked without backchannel support. Regarding the further development of Backstage, we capitalized on the participants' usability feedback. The key refinement is the integration of presentation slides into Backstage, which leads to an interesting reconsideration of Backstage's user interactions.

    Future internet enablers for VGI applications

    This paper presents the authors' experiences with the development of mobile Volunteered Geographic Information (VGI) applications in the context of the ENVIROFI project and the Future Internet Public Private Partnership (FI-PPP) FP7 research programme. FI-PPP has the ambitious goal of developing a set of Generic FI Enablers (GEs): software and hardware tools that will simplify the development of thematic future internet applications. Our role in the programme was to provide requirements and assess the usability of the GEs from the point of view of the environmental usage area. In addition, we specified and developed three proof-of-concept implementations of environmental FI applications, and a set of specific environmental enablers (SEs) complementing the functionality offered by the GEs. Rather than trying to rebuild the whole infrastructure of the Environmental Information Space (EIS), we concentrated on two aspects: (1) how to ensure that existing and future EIS services and applications can be integrated and reused in the FI context; and (2) how to profit from the GEs in future environmental applications. This paper concentrates on the GEs and SEs used in two of the ENVIROFI pilots, which are representative of the emerging class of VGI use cases: one pertains to biodiversity and the other to the influence of weather and airborne pollution on users' wellbeing. In VGI applications, the EIS and Sensor Web overlap with the Social Web, and potentially huge amounts of information from mobile citizens need to be assessed and fused with observations from official sources. On the whole, the authors are confident that the FI-PPP programme will greatly influence the EIS, but the paper also warns of shortcomings in the current GE implementations and provides recommendations for further development.

    Ontology-Based Intelligent Agents in Workplace eLearning

    Despite the ever-increasing practice of e-learning, most workplace e-learning applications fail to meet learners' needs and ultimately fail to serve the organization's quest for success. The dominance of technology-oriented approaches makes e-learning applications less goal-effective and leads them to be perceived as of poor quality and design. To solve this problem, a performance-oriented approach is presented in this study. This approach aims to align individual learning needs with organizational goals and to connect learning with work performance. Based on this approach, a prototype system has been developed that uses intelligent agent and ontology technologies. A set of experiments has been conducted to demonstrate the effectiveness of the approach.

    Designing Clinical Data Presentation Using Cognitive Task Analysis Methods

    Despite many decades of research on the effective use of clinical systems in medicine, the adoption of health information technology to improve patient care continues to be slow, especially in ambulatory settings. This applies to dentistry as well, a primary care discipline with approximately 137,000 practicing dentists in the United States. One critical reason is the poor usability of clinical systems, which makes it difficult for providers to navigate the system and obtain an integrated view of patient data during patient care. Cognitive science methods have shown significant promise in meaningfully informing the design, development, and assessment of clinical information systems. Most of these methods have been applied to evaluate the design of systems after they have been developed; very few studies, on the other hand, have used cognitive engineering methods to inform the design process itself. It is this gap in knowledge – how cognitive engineering methods can be optimally applied to inform the system design process – that this research seeks to address. This project examined the cognitive processes and information management strategies used by dentists during a typical patient exam and used the results to inform the design of an electronic dental record interface. The resulting proof of concept was evaluated to determine the effectiveness and efficiency of such a cognitively engineered design and application flow. The results of this study contribute to designing clinical systems that provide clinicians with better cognitive support during patient care. Such systems will contribute to enhancing the quality and safety of patient care, and potentially to reducing healthcare costs.

    Recruitment and selection processes through an effective GDSS

    This study proposes a group decision support system (GDSS) with multiple criteria to assist in the recruitment and selection (R&S) processes of human resources. A two-phase decision-making procedure is first suggested; various techniques involving multiple criteria and group participation are then defined for each step of the procedure. A wide scope of personnel characteristics is evaluated, and the concept of consensus is enhanced. The procedure recommended herein is expected to be more effective than traditional approaches. In addition, the procedure is implemented on a network-based PC system with web interfaces to support R&S activities. In the final stage, key personnel at the human resources department of a chemical company in southern Taiwan confirmed the feasibility of the illustrated example.
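The abstract does not spell out the multi-criteria aggregation, so the following is only a minimal sketch of one common scheme it could resemble: each evaluator scores each candidate on weighted criteria, and the group score is the mean of the evaluators' weighted totals (all scores, weights, and candidate names below are hypothetical):

```python
# Hypothetical ratings: for each candidate, rows are evaluators,
# columns are criteria (e.g. experience, skills, interview), on a 1-5 scale.
candidates = {
    "A": [[4, 5, 3], [4, 4, 4], [5, 4, 3]],
    "B": [[3, 3, 4], [3, 4, 4], [4, 3, 5]],
}
weights = [0.5, 0.3, 0.2]  # criterion weights agreed on by the group

def group_score(matrix, weights):
    """Average each evaluator's weighted score into a single group score."""
    per_evaluator = [sum(w * s for w, s in zip(weights, row)) for row in matrix]
    return sum(per_evaluator) / len(per_evaluator)

ranked = sorted(candidates, key=lambda c: group_score(candidates[c], weights),
                reverse=True)
print(ranked[0])  # A
```

A consensus check could extend this by flagging candidates whose per-evaluator scores spread widely, prompting further discussion before the ranking is accepted.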

    Defining and Assessing Critical Thinking: toward an automatic analysis of HiEd students’ written texts

    The main goal of this PhD thesis is to test, through two empirical studies, the reliability of a method aimed at automatically assessing Critical Thinking (CT) manifestations in Higher Education students' written texts. The empirical studies were based on a critical literature review aimed at proposing a new classification for systematising different CT definitions and their related theoretical approaches. The review also investigates the relationship between the different CT definitions and the corresponding assessment methods. It highlights the need to focus on open-ended measures for CT assessment and to develop automatic tools based on Natural Language Processing (NLP) techniques to overcome the current limitations of open-ended measures, such as reliability and scoring costs. Based on a rubric developed and implemented by the Center for Museum Studies – Roma Tre University (CDM) research group for the evaluation and analysis of CT levels within open-ended answers (Poce, 2017), an NLP prototype for the automatic measurement of CT indicators was designed. The first empirical study, carried out on a group of 66 university teachers, showed satisfactory reliability levels for the CT evaluation rubric, while the evaluation carried out by the prototype was not yet sufficiently reliable. The results were used to understand how and under what conditions the model works best. The second empirical investigation aimed to understand which NLP features are most strongly associated with six CT sub-dimensions, as assessed by human raters in essays written in Italian.
    The study used a corpus of 103 pre-post essays by students attending a Master's Degree module in "Experimental Education and School Assessment". Within the module, two activities were proposed to stimulate students' CT: Open Educational Resources (OER) assessment (mandatory and online) and OER design (optional and blended). The essays were assessed both by expert evaluators, considering six CT sub-dimensions, and by an algorithm that automatically calculates different kinds of NLP features. The study shows positive internal reliability and medium-to-high inter-coder agreement in the expert evaluation. Students' CT levels improved significantly in the post-test. Three NLP indicators correlate significantly with the total CT score: corpus length, syntax complexity, and an adapted term frequency–inverse document frequency (tf-idf) measure. The results collected during this PhD have both theoretical and practical implications for CT research and assessment. From a theoretical perspective, the thesis shows unexplored similarities among different CT traditions, perspectives, and study methods. These similarities could be exploited to open up an interdisciplinary dialogue among experts and to build a shared understanding of CT. Automatic assessment methods can enhance the use of open-ended measures for CT assessment, especially in online teaching. Indeed, they can support teachers and researchers in dealing with the growing amount of linguistic data produced within educational platforms (e.g. Learning Management Systems). To this end, it is pivotal to develop automatic methods for evaluating large amounts of data that would be impossible to analyse manually, providing teachers and evaluators with support for monitoring and assessing the competences students demonstrate online.
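Of the indicators mentioned, tf-idf has a standard formula; the following is a minimal sketch of computing tf-idf weights over tokenized essays (the toy texts are illustrative, not from the thesis corpus, and the thesis uses an adapted variant rather than this plain form):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute tf-idf weights per document from tokenized texts.

    tf = term count / document length; idf = log(N / document frequency).
    """
    df = Counter()
    for doc in docs:
        df.update(set(doc))           # count each term once per document
    n = len(docs)
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

# Toy example: two tokenized "essays".
essays = [
    "critical thinking requires evaluating evidence".split(),
    "students practise evaluating open resources".split(),
]
weights = tf_idf(essays)
# "critical" occurs in only one essay, so it gets a positive weight there,
# while "evaluating" occurs in both, so its idf (and weight) is zero.
print(weights[0]["critical"] > 0)  # True
```

Terms shared by all documents carry no discriminative information under this weighting, which is why it can serve as an indicator of lexical specificity in student writing.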