
    Computer-based role playing for interpersonal skills training

    This study examines the design and evaluation of computer-based role-playing. For novices, a conventional role-play is a very complex learning situation. Computer-based role-playing is designed to simplify role-playing so that students can more effectively develop interpersonal skills. It is a gradual lead-in to, not a replacement of, conventional role-playing. An experiment is reported in which 41 students participated. The students were randomly distributed over two groups. Two instructional programs were compared, one with and one without computer-based role-playing. One major finding is that computer-based role-playing enhances interpersonal skills development by (a) practicing the use of a conversational model, (b) offering opportunities for reflection, (c) performing four protagonist roles, and (d) capturing individual contribution and learning.

    Quantifying the public's view on social value judgments in vaccine decision-making: A discrete choice experiment

    Vaccination programs generate direct protection, herd protection and, occasionally, side effects, distributed over different age groups. This study elicits the general public's view on how to balance these outcomes in funding decisions for vaccines. We performed an optimal design discrete choice experiment with partial profiles in a representative sample (N = 1499) of the population in the United Kingdom in November 2016. Using a panel mixed logit model, we quantified, for four different types of infectious disease, the importance of a person's age during disease, how the disease was prevented (via direct vaccine protection or herd protection), and whether the vaccine induced side effects. Our study shows clear patterns in how the public values vaccination programs. These diverge from the assumptions made in public health and cost-effectiveness models that inform decision-making. We found that side effects and infections in newborns and children were of primary importance to the perceived value of a vaccination program. Averting a side effect was, in any age group, weighted three times as important as preventing an identical natural infection in a child, whereas the latter was weighted six times as important as preventing the same infection in the elderly aged 65-75 years. These findings were independent of the length or severity of the disease, and were robust across respondents' backgrounds. We summarize these patterns in a set of preference weights that can be incorporated into future models. Although the normative significance of these weights remains a matter open for debate, our study can, hopefully, contribute to the evaluation of vaccination programs beyond cost-effectiveness.
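
    The relative weights reported above lend themselves to a simple calculation. The sketch below is a toy illustration, not the authors' panel mixed logit model: it applies the ratios stated in the abstract to a hypothetical programme, and the outcome categories, counts, and the normalisation to an elderly-infection baseline are assumptions made for the example.

```python
# Toy sketch: applying the relative preference weights implied by the abstract.
# Ratios from the abstract: averting a side effect ~ 3x preventing an infection
# in a child, and a child infection ~ 6x one prevented in the elderly (65-75).
# Normalising to the elderly baseline is an assumption made for illustration.
WEIGHTS = {
    "infection_prevented_child": 6.0,    # 6x the elderly baseline
    "infection_prevented_elderly": 1.0,  # baseline
    "side_effect_caused": -18.0,         # 3 x 6, counted as a loss
}

def weighted_value(outcomes: dict) -> float:
    """Sum outcome counts weighted by the public's relative preferences."""
    return sum(WEIGHTS[k] * n for k, n in outcomes.items())

# Hypothetical programme: 1000 child and 5000 elderly infections prevented,
# at the cost of 50 vaccine-induced side effects.
programme = {
    "infection_prevented_child": 1000,
    "infection_prevented_elderly": 5000,
    "side_effect_caused": 50,
}
print(weighted_value(programme))  # 6000 + 5000 - 900 = 10100
```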

    Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by its respective domain experts.
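
    The chained web-service pattern described above can be illustrated in a few lines of Python. This is a sketch of the general idea only, assuming hypothetical endpoint URLs and JSON payload keys; it is not the actual HyspIRI OSSE interface or workflow engine.

```python
# Minimal sketch of the chained web-service pattern: each modeling component
# sits behind an HTTP endpoint, and the output of one stage (e.g. a reflectance
# or radiance spectrum) is posted to the next stage downstream. The endpoint
# URLs and payload keys are hypothetical placeholders.
import json
import urllib.request

def call_component(url: str, payload: dict) -> dict:
    """POST a JSON parameterization to one workflow component and return its JSON result."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical end-to-end chain: surface model -> radiative transfer ->
# instrument model -> retrieval. Each service can live on a different host.
workflow = [
    "http://host-a.example/surface_albedo",
    "http://host-b.example/radiative_transfer",
    "http://host-c.example/instrument_model",
    "http://host-d.example/retrieval",
]

state = {"scene": "ground_truth_scene_001"}   # initial ground-truth parameterization
for endpoint in workflow:
    state = call_component(endpoint, state)    # output of one stage feeds the next
print(state)
```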

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and are presented along with introductory material. Comment: 72 pages.

    Holistic debugging - enabling instruction set simulation for software quality assurance

    We present holistic debugging, a novel method for observing the execution of complex and distributed software. It builds on an instruction set simulator, which provides reproducible experiments and non-intrusive probing of state in a distributed system. Instruction set simulators, however, only provide low-level information, so a holistic debugger contains a translation framework that maps this information to observation tools at higher abstraction levels, such as source code debuggers. We have created Nornir, a proof-of-concept holistic debugger, built on the simulator Simics. For each observed process in the simulated system, Nornir creates an abstraction translation stack, with virtual machine translators that map machine-level storage contents (e.g. physical memory, registers) provided by Simics to application-level data (e.g. virtual memory contents) by parsing the data structures of operating systems and virtual machines. Nornir includes a modified version of the GNU debugger (GDB), which supports non-intrusive symbolic debugging of distributed applications. Nornir's main interface is a debugger shepherd, a programmable interface that controls multiple debuggers and allows users to coherently inspect the entire state of heterogeneous, distributed applications. It provides a robust observation platform for the construction of new observation tools.
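
    The debugger-shepherd idea, a single programmable controller driving several debugger sessions, can be sketched as follows. This is an illustration of the concept only, not Nornir's API: it drives ordinary command-line GDB in machine-interface (MI) mode rather than Simics-backed non-intrusive probing, and the target binaries are hypothetical.

```python
# Concept sketch of a "debugger shepherd": one controller that issues the same
# inspection commands to several debugger sessions, so a distributed application
# can be examined as a whole. Not Nornir's actual interface.
import subprocess

class DebuggerSession:
    """Wraps one GDB process running in machine-interface (MI) mode."""
    def __init__(self, program: str):
        self.gdb = subprocess.Popen(
            ["gdb", "--interpreter=mi2", "--quiet", program],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
        )

    def command(self, cmd: str) -> str:
        """Send one MI command and return its result record (a line starting with '^')."""
        self.gdb.stdin.write(cmd + "\n")
        self.gdb.stdin.flush()
        while True:
            line = self.gdb.stdout.readline()
            if line.startswith("^") or line == "":
                return line.strip()

class Shepherd:
    """Coordinates multiple debugger sessions from a single script."""
    def __init__(self, programs):
        self.sessions = {p: DebuggerSession(p) for p in programs}

    def broadcast(self, cmd: str) -> dict:
        # Issue the same inspection command to every observed process.
        return {p: s.command(cmd) for p, s in self.sessions.items()}

if __name__ == "__main__":
    shepherd = Shepherd(["./node_a", "./node_b"])      # hypothetical binaries
    print(shepherd.broadcast("-stack-list-frames"))    # MI command: backtrace per process
```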

    Using Negotiation to Reduce Redundant Autonomous Mobile Program Movements

    Distributed load managers exhibit thrashing, where tasks are repeatedly moved between locations due to incomplete global load information. This paper shows that systems of Autonomous Mobile Programs (AMPs) exhibit the same behaviour, identifying two types of redundant movement and terming them greedy effects. AMPs are unusual in that, in place of some external load management system, each AMP periodically recalculates network and program parameters and may independently move to a better execution environment. Load management emerges from the behaviour of collections of AMPs. The paper explores the extent of greedy effects by simulation and then proposes negotiating AMPs (NAMPs) to ameliorate the problem. We present the design of AMPs with a competitive negotiation scheme (cNAMPs), and compare their performance with AMPs by simulation.
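
    The behaviour described above can be illustrated with a small decision routine. This is a sketch of the general idea, not the paper's cNAMP protocol: the completion-time estimate, the movement cost, and the "reservation" step that stands in for negotiation are all illustrative assumptions.

```python
# Sketch: each AMP periodically re-estimates where it would finish soonest and
# only moves if the gain outweighs the movement cost. A simple reservation step
# stands in for negotiation, so two AMPs do not both chase the same lightly
# loaded host (one kind of greedy effect). Loads and cost model are hypothetical.

def predicted_finish(work_remaining: float, host_speed: float, host_load: int) -> float:
    """Rough completion-time estimate: remaining work / effective share of the CPU."""
    return work_remaining * (host_load + 1) / host_speed

def choose_host(amp, hosts, reservations, move_cost=5.0):
    """Return the host this AMP should run on next, reserving it if it decides to move."""
    here = amp["host"]
    best_host = here
    best_time = predicted_finish(amp["work"], hosts[here]["speed"], hosts[here]["load"])
    for name, h in hosts.items():
        if name == here:
            continue
        # Count AMPs that already reserved this host this round (negotiation stand-in).
        t = predicted_finish(amp["work"], h["speed"], h["load"] + reservations.get(name, 0)) + move_cost
        if t < best_time:
            best_host, best_time = name, t
    if best_host != here:
        reservations[best_host] = reservations.get(best_host, 0) + 1
    return best_host

hosts = {"a": {"speed": 1.0, "load": 1}, "b": {"speed": 1.0, "load": 0}}
reservations = {}
amp1 = {"host": "a", "work": 100.0}
amp2 = {"host": "a", "work": 100.0}
print(choose_host(amp1, hosts, reservations))  # moves to "b"
print(choose_host(amp2, hosts, reservations))  # sees the reservation and stays on "a"
```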

    Rumba: a Python framework for automating large-scale recursive internet experiments on GENI and FIRE+

    It is not easy to design and run Convolutional Neural Networks (CNNs) because: 1) finding the optimal number of filters (i.e., the width) at each layer of a given architecture is tricky; and 2) the computational intensity of CNNs impedes their deployment on computationally limited devices. Oracle Pruning is designed to remove the unimportant filters from a well-trained CNN; it estimates each filter's importance by ablating the filters in turn and evaluating the model, which delivers high accuracy but suffers from intolerable time complexity, and it requires the resulting width to be given rather than finding it automatically. To address these problems, we propose Approximated Oracle Filter Pruning (AOFP), which keeps searching for the least important filters in a binary search manner, makes pruning attempts by masking out filters randomly, accumulates the resulting errors, and finetunes the model via a multi-path framework. As AOFP enables simultaneous pruning on multiple layers, we can prune an existing very deep CNN with acceptable time cost, negligible accuracy drop, and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
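
    The masking-and-scoring idea in the abstract can be sketched without any deep learning framework. This is a rough simplification, not the AOFP implementation: there is no multi-path finetuning, and evaluate() is a hypothetical stand-in for a validation pass with the given filters zeroed out.

```python
# Rough sketch: estimate how much a layer depends on each filter by randomly
# masking small subsets of filters, accumulating the observed error per masked
# filter, then keeping only the more important half (one binary-search-style step).
import random

def score_filters(n_filters, evaluate, trials=200, mask_size=4):
    """Accumulate per-filter error from random masking trials."""
    scores = [0.0] * n_filters
    counts = [0] * n_filters
    for _ in range(trials):
        masked = random.sample(range(n_filters), mask_size)
        err = evaluate(masked)          # error of the model with these filters zeroed out
        for f in masked:
            scores[f] += err
            counts[f] += 1
    return [s / max(c, 1) for s, c in zip(scores, counts)]

def halve_least_important(n_filters, evaluate):
    """Keep the more important half of the filters; the rest become pruning candidates."""
    scores = score_filters(n_filters, evaluate)
    ranked = sorted(range(n_filters), key=lambda f: scores[f], reverse=True)
    return ranked[: n_filters // 2]

# Toy usage: pretend only even-numbered filters matter to the model.
keep = halve_least_important(16, lambda masked: sum(1 for f in masked if f % 2 == 0))
print(sorted(keep))  # most likely the even-numbered filters, which the toy evaluate() penalises masking
```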

    Goal-based structuring in recommender systems

    Recommender systems help people to find information that is interesting to them. However, current recommendation techniques only address users' short-term and long-term interests, not their immediate interests. This paper describes a method to structure information (with or without using recommendations) that takes the users' immediate interests into account: a goal-based structuring method. Goal-based structuring is based on the fact that people experience certain gratifications from using information, which should match their goals. An experiment using an electronic TV guide shows that structuring information using a goal-based structure makes it easier for users to find interesting information, especially if the goals are used explicitly; this is independent of whether recommendations are used or not. It also shows that goal-based structuring has more influence than recommendations on how easy it is for users to find interesting information.
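
    A goal-based structure of this kind is easy to picture in code. The sketch below is a made-up example (the goals, gratification tags, and programmes are not the paper's TV-guide data) and only shows the grouping step, not how recommendations would be layered on top.

```python
# Toy sketch of goal-based structuring: items are tagged with the gratifications
# they offer, and the interface groups them under the user's goals instead of
# (or on top of) a plain recommendation ranking.
from collections import defaultdict

# Hypothetical mapping from user goals to the gratifications that satisfy them.
GOAL_GRATIFICATIONS = {
    "relax": {"entertainment", "escape"},
    "stay informed": {"information", "surveillance"},
}

programmes = [
    {"title": "Evening News", "gratifications": {"information"}},
    {"title": "Quiz Night", "gratifications": {"entertainment"}},
    {"title": "Nature Documentary", "gratifications": {"information", "escape"}},
]

def structure_by_goal(items, goals=GOAL_GRATIFICATIONS):
    """Group items under each goal whose gratifications they provide."""
    grouped = defaultdict(list)
    for item in items:
        for goal, wanted in goals.items():
            if item["gratifications"] & wanted:
                grouped[goal].append(item["title"])
    return dict(grouped)

print(structure_by_goal(programmes))
```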

    Genetic Programming for Smart Phone Personalisation

    Personalisation in smart phones requires adaptability to dynamic context based on user mobility, application usage and sensor inputs. Current personalisation approaches, which rely on static logic that is developed a priori, do not provide sufficient adaptability to dynamic and unexpected context. This paper proposes genetic programming (GP), which can evolve program logic in real time, as an online learning method to deal with the highly dynamic context in smart phone personalisation. We introduce the concept of collaborative smart phone personalisation through the GP Island Model, in order to exploit shared context among co-located phone users and reduce convergence time. We implement these concepts on real smartphones to demonstrate the capability of personalisation through GP and to explore the benefits of the Island Model. Our empirical evaluations on two example applications confirm that the Island Model can reduce convergence time by up to two-thirds over standalone GP personalisation. Comment: 43 pages, 11 figures.
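
    The island-model idea can be sketched in a few lines. This is a deliberately simplified illustration: real GP evolves program trees, whereas here a bit string stands in for an evolved personalisation program, and the fitness function, population sizes, migration interval and ring topology are arbitrary choices made for the example.

```python
# Simplified island-model sketch: each "phone" runs its own evolutionary
# population and periodically migrates its best individual to a neighbouring
# island, which can speed up convergence when islands share context.
import random

TARGET = [1] * 16                                  # stand-in for the ideal personalisation logic
fitness = lambda ind: sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop):
    """One generation: keep the better half, refill with mutated copies."""
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: len(pop) // 2]
    children = [[g ^ (random.random() < 0.1) for g in random.choice(survivors)] for _ in survivors]
    return survivors + children

# Three islands (e.g. three co-located phones), each with 20 random individuals.
islands = [[[random.randint(0, 1) for _ in range(16)] for _ in range(20)] for _ in range(3)]

for gen in range(30):
    islands = [evolve(pop) for pop in islands]
    if gen % 5 == 4:                               # every 5 generations, migrate
        best = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):          # ring topology: best of island i-1 joins island i
            pop[-1] = list(best[i - 1])

print([fitness(max(pop, key=fitness)) for pop in islands])  # best fitness per island
```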