188 research outputs found

    Towards using a physio-cognitive model in tutoring for psychomotor tasks.

    We report our exploratory research on psychomotor task training in intelligent tutoring systems (ITSs), which are generally limited to tutoring in the desktop learning environment, where the learner acquires cognitively oriented knowledge and skills. It is necessary to support computer-guided training in psychomotor task domains beyond the desktop environment. In this study, we seek to extend the current capability of GIFT (Generalized Intelligent Framework for Tutoring) to address these psychomotor task training needs. Our approach is to utilize heterogeneous sensor data, identifying physical motions through acceleration data from a smartphone and monitoring respiratory activity through a BioHarness, while the learner interacts with GIFT simultaneously. We also utilize a computational model to better understand the learner and domain. We focus on a precision-required psychomotor task (i.e., golf putting) and create a series of courses in GIFT that teach putting with tactical breathing. We report our implementation of a physio-cognitive model that can account for the process of psychomotor skill development, the GIFT extension, and a pilot study that uses the extension. The physio-cognitive model is based on the ACT-R/Φ architecture to model and predict the process of learning, and we discuss how it can be used to improve the fundamental understanding of the domain and learner model. Our study contributes to the use of cognitive modeling with physiological constraints to support adaptive training of psychomotor tasks in ITSs.
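The abstract above describes identifying physical motions from smartphone acceleration data. The sketch below is not the authors' implementation: the sample format, the rest-gravity baseline, and the deviation threshold are illustrative assumptions about how a putting stroke might be flagged in a raw accelerometer stream.

```python
import math

def detect_stroke(samples, rest_g=9.81, threshold=2.0):
    """Flag samples whose acceleration magnitude deviates from the
    resting value (gravity) by more than `threshold` m/s^2 -- a crude
    proxy for a putting stroke in a smartphone accelerometer trace."""
    strokes = []
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - rest_g) > threshold:
            strokes.append(i)
    return strokes

# A phone at rest reads roughly (0, 0, 9.81); a swing spikes the magnitude.
samples = [(0.0, 0.0, 9.81)] * 5 + [(3.0, 1.0, 12.0)] + [(0.0, 0.0, 9.81)] * 4
print(detect_stroke(samples))  # [5]
```

A real system would smooth the signal and classify windows rather than single samples, but the thresholding idea is the same.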

    Distributed Computing in a Pandemic: A Review of Technologies Available for Tackling COVID-19

    Full text link
    The current COVID-19 global pandemic caused by the SARS-CoV-2 betacoronavirus has resulted in over a million deaths and is having a grave socio-economic impact, hence there is an urgency to find solutions to key research challenges. Much of this COVID-19 research depends on distributed computing. In this article, I review distributed architectures -- various types of clusters, grids and clouds -- that can be leveraged to perform these tasks at scale, at high throughput, with a high degree of parallelism, and which can also be used to work collaboratively. High-performance computing (HPC) clusters will be used to carry out much of this work. Several big-data processing tasks used in reducing the spread of SARS-CoV-2 require high-throughput approaches and a variety of tools, which Hadoop and Spark offer, even on commodity hardware. Extremely large-scale COVID-19 research has also utilised some of the world's fastest supercomputers, such as IBM's SUMMIT -- for ensemble-docking high-throughput screening against SARS-CoV-2 targets for drug repurposing, and for high-throughput gene analysis -- and Sentinel, an XPE-Cray based system used to explore natural products. Grid computing has facilitated the formation of the world's first Exascale grid computer. This has accelerated COVID-19 research in molecular dynamics simulations of SARS-CoV-2 spike protein interactions through massively parallel computation, performed with over 1 million volunteer computing devices using the Folding@home platform. Both grids and clouds can also be used for international collaboration by enabling access to important datasets and providing services that allow researchers to focus on research rather than on time-consuming data-management tasks.
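The ensemble-docking screens mentioned above are embarrassingly parallel: each candidate compound is scored independently, so the workload maps directly onto a pool of workers. The sketch below illustrates that shape with Python's standard `multiprocessing` module; the `score_ligand` function is a stand-in, not a real docking engine, and the ligand names are invented.

```python
from multiprocessing import Pool

def score_ligand(ligand):
    # Stand-in for a docking score; a real pipeline would invoke a
    # docking engine here, once per ligand, with no shared state.
    return (ligand, sum(ord(c) for c in ligand) % 100)

def screen(ligands, workers=2):
    """Embarrassingly parallel screening: each ligand is scored
    independently, so the library is simply mapped over a worker pool."""
    with Pool(workers) as pool:
        return dict(pool.map(score_ligand, ligands))

if __name__ == "__main__":
    library = ["LIG-%04d" % i for i in range(8)]
    results = screen(library)
    print(len(results))  # 8
```

At supercomputer or volunteer-grid scale the same pattern holds; only the scheduler (SLURM, Folding@home work units) and the per-task cost change.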

    Abstracts of Papers, 86th Annual Meeting of the Virginia Academy of Science

    Get PDF
    Abstracts for the 86th Annual Meeting of the Virginia Academy of Science, May 20-23, 2008, Hampton University, Hampton, VA

    CogTool+: Modeling human performance at large scale

    Cognitive modeling tools have been widely used by researchers and practitioners to help design, evaluate and study computer user interfaces (UIs). Despite their usefulness, large-scale modeling tasks can still be very challenging due to the amount of manual work needed. To address this scalability challenge, we propose CogTool+, a new cognitive modeling software framework developed on top of the well-known software tool CogTool. CogTool+ addresses the scalability problem by supporting the following key features: 1) a higher level of parameterization and automation; 2) algorithmic components; 3) interfaces for using external data; 4) a clear separation of tasks, which allows programmers and psychologists to define reusable components (e.g., algorithmic modules and behavioral templates) that can be used by UI/UX researchers and designers without the need to understand the low-level implementation details of such components. CogTool+ also supports mixed cognitive models required for many large-scale modeling tasks and provides an offline analyzer of simulation results. To show how CogTool+ can reduce the human effort required for large-scale modeling, we illustrate how it works using a pedagogical example, and demonstrate its actual performance by applying it to large-scale modeling tasks for two real-world user-authentication systems.
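The "higher level of parameterization" feature above amounts to expanding one task template into many concrete model configurations. The sketch below is an illustrative assumption about that expansion step, not CogTool+'s actual format; the template fields and parameter names are invented.

```python
from itertools import product

def expand(template, parameters):
    """Expand one parameterized task template into concrete model
    configurations, one per combination of parameter values."""
    keys = list(parameters)
    configs = []
    for values in product(*(parameters[k] for k in keys)):
        config = dict(template)
        config.update(zip(keys, values))
        configs.append(config)
    return configs

template = {"task": "login"}
parameters = {"password_length": [6, 8], "keyboard": ["qwerty", "numeric"]}
print(len(expand(template, parameters)))  # 4 configurations
```

Sweeping parameters this way is what turns one hand-built CogTool model into the hundreds of variants a large-scale study needs.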

    How to implement HyGene into ACT-R

    We investigate whether and how HyGene, a model of hypothesis generation and probability judgment, can be implemented in ACT-R. We ground our endeavour in a formal comparison of the memory theories behind ACT-R and HyGene, contrasting the predictions of the two as a function of prior history and current context. After demonstrating the convergence of the two memory theories, we provide a three-step guide for translating a memory representation from HyGene into ACT-R. We also outline how HyGene’s processing steps can be translated into ACT-R. We finish with a discussion of points of divergence between the two theories.
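Central to the memory-theory comparison above is ACT-R's base-level learning equation, B_i = ln(Σ_j t_j^(-d)), where t_j is the time since the j-th use of chunk i and d is the decay parameter (0.5 by default). A minimal sketch of that equation, with illustrative rehearsal times:

```python
import math

def base_level_activation(times_since_use, decay=0.5):
    """ACT-R base-level learning: B_i = ln(sum_j t_j^(-d)).
    Each past use contributes activation that decays as a power
    function of the time elapsed since that use."""
    return math.log(sum(t ** -decay for t in times_since_use))

# A chunk rehearsed recently and often is more active than one
# used only once, long ago.
recent = base_level_activation([1.0, 2.0, 4.0])
stale = base_level_activation([100.0])
print(recent > stale)  # True
```

It is this frequency-and-recency dependence that the paper aligns with HyGene's trace-activation mechanism.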

    An Energy-Efficient and Reliable Data Transmission Scheme for Transmitter-based Energy Harvesting Networks

    Energy harvesting technology has been studied to overcome the limited power resources of sensor networks. This paper proposes a new data-transmission period control and reliable data transmission algorithm for energy-harvesting sensor networks. Although previous studies have proposed communication protocols for energy-harvesting sensor networks, these still leave room for improvement. The proposed algorithm dynamically controls the data transmission period and the number of data transmissions based on environmental information. Through this, energy consumption is reduced and transmission reliability is improved. Simulation results show that the proposed algorithm is more efficient than EnOcean, the previous energy-harvesting-based communication standard, in terms of transmission success rate and residual energy.
    This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012R1A1A3012227).
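The paper's dynamic period control can be illustrated with a minimal sketch. The thresholds, scaling factors, and clamping bounds below are illustrative assumptions, not the algorithm from the paper: the idea shown is simply that a node transmits more often when its harvested-energy buffer is full and backs off when it runs low.

```python
def next_period(current_period, energy_level, low=0.3, high=0.7,
                min_period=1.0, max_period=60.0):
    """Hypothetical period controller for an energy-harvesting node:
    energy_level is the buffer fill ratio in [0, 1]; the transmission
    period (seconds) doubles when energy is scarce and halves when
    there is a surplus, clamped to [min_period, max_period]."""
    if energy_level < low:
        period = current_period * 2      # conserve: transmit less often
    elif energy_level > high:
        period = current_period / 2      # surplus: transmit more often
    else:
        period = current_period          # steady state: keep the period
    return max(min_period, min(max_period, period))

print(next_period(10.0, 0.9))  # 5.0: plenty of energy, transmit faster
print(next_period(10.0, 0.1))  # 20.0: low energy, back off
```

Tying the period to harvested energy in this way is what lets the node trade transmission reliability against residual energy, the two metrics the paper evaluates.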