17,715 research outputs found

    Solving the Linda multiple rd problem using the copy-collect primitive

    Linda is a mature co-ordination language that has been in use for several years. However, as a result of recent work on the model, we have found a simple class of operation, widely used in many different algorithms, which the Linda model is unable to express in a viable fashion. An example algorithm that uses this operation is the composition of two binary relations. By examining how to implement this in parallel using Linda, we demonstrate that the approaches possible with the current Linda primitives are unsatisfactory. This paper demonstrates how this “multiple rd problem” can be overcome by the addition of a primitive, copy-collect, to the Linda model. This builds on previous work on another primitive called collect (Butcher et al., 1994). The parallel composition of two binary relations using the copy-collect primitive can be achieved with maximal parallelism.
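The operation at issue, reading every matching tuple without consuming any, can be pictured with a toy tuple space. The sketch below is illustrative only: the `TupleSpace` class and its method names are hypothetical, not the paper's API or any real Linda implementation. Here `copy_collect` copies every tuple matching a template into a second space, the non-destructive bulk read that makes composing two relations in parallel straightforward:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space (a sketch; a None field is a wildcard)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tuples = []

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    @staticmethod
    def _matches(tup, template):
        return len(tup) == len(template) and all(
            t is None or t == f for f, t in zip(tup, template))

    def copy_collect(self, dest, template):
        """Copy (without removing) every tuple matching `template` into
        `dest` and return how many were copied: the bulk non-destructive
        read that repeated rd/in calls cannot express without races."""
        with self._lock:
            found = [t for t in self._tuples if self._matches(t, template)]
        for t in found:
            dest.out(t)
        return len(found)

# Composing binary relations R and S: join (a, b) in R with (b, c) in S.
space = TupleSpace()
for tup in [("R", 1, 2), ("R", 1, 3), ("S", 2, 4), ("S", 2, 5), ("S", 3, 6)]:
    space.out(tup)

result = []
for a, b in [(1, 2), (1, 3)]:        # the R-tuples, one per worker
    local = TupleSpace()             # private space for this worker
    space.copy_collect(local, ("S", b, None))
    result += [(a, c) for _, _, c in local._tuples]
```

Each worker copy-collects the S-tuples it needs into a private space, so no tuple is destructively removed and workers never contend over shared reads.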

    Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa

    This paper presents a generic Bayesian framework that enables any deep learning model to actively learn from targeted crowds. Our framework inherits from recent advances in Bayesian deep learning, and extends existing work by considering the targeted crowdsourcing approach, where multiple annotators with unknown expertise contribute an uncontrolled amount (often limited) of annotations. Our framework leverages the low-rank structure in annotations to learn individual annotator expertise, which then helps to infer the true labels from noisy and sparse annotations. It provides a unified Bayesian model to simultaneously infer the true labels and train the deep learning model in order to reach an optimal learning efficacy. Finally, our framework exploits the uncertainty of the deep learning model during prediction as well as the annotators' estimated expertise to minimize the number of required annotations and annotators for optimally training the deep learning model. We evaluate the effectiveness of our framework for intent classification in Alexa (Amazon's personal assistant), using both synthetic and real-world datasets. Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the amount of annotations in model training as compared to state-of-the-art approaches. We further discuss the potential of our proposed framework in bridging machine learning and crowdsourcing towards improved human-in-the-loop systems.
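The core aggregation idea, inferring true labels while estimating annotator expertise, can be illustrated with a much simpler stand-in than the paper's Bayesian low-rank model. The sketch below is a hypothetical one-pass scheme (majority vote, then accuracy estimation, then accuracy-weighted re-voting), not the authors' method:

```python
import math
from collections import Counter, defaultdict

def infer_labels(annotations):
    """One pass of expertise-weighted label aggregation.
    annotations: {item: {annotator: label}}.
    Step 1: majority vote gives an initial label estimate.
    Step 2: score each annotator's accuracy against those estimates.
    Step 3: re-vote, weighting each annotator by log-odds of accuracy."""
    labels = {item: Counter(votes.values()).most_common(1)[0][0]
              for item, votes in annotations.items()}
    hits, total = defaultdict(int), defaultdict(int)
    for item, votes in annotations.items():
        for annotator, label in votes.items():
            total[annotator] += 1
            hits[annotator] += (label == labels[item])
    # Laplace smoothing keeps accuracies strictly inside (0, 1).
    accuracy = {a: (hits[a] + 1) / (total[a] + 2) for a in total}
    for item, votes in annotations.items():
        score = defaultdict(float)
        for annotator, label in votes.items():
            score[label] += math.log(accuracy[annotator]
                                     / (1 - accuracy[annotator]))
        labels[item] = max(score, key=score.get)
    return labels, accuracy

# Hypothetical toy data: one consistent annotator, two noisier ones.
annotations = {
    "u1": {"expert": "cat", "noisy1": "cat", "noisy2": "cat"},
    "u2": {"expert": "dog", "noisy1": "dog", "noisy2": "cat"},
    "u3": {"expert": "cat", "noisy1": "dog", "noisy2": "dog"},
}
labels, accuracy = infer_labels(annotations)
```

The full framework additionally trains the downstream model jointly and uses its predictive uncertainty to pick which items to annotate next; this sketch covers only the label-and-expertise inference step.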

    Evaluation of the Wellspring Model for Improving Nursing Home Quality

    Examines how successfully the Wellspring model improved the quality of care for residents of eleven nonprofit nursing homes in Wisconsin. Looks at staff turnover and evaluates the impact on facilities, employees, residents, and cost.

    Impact of California's Transitional Kindergarten Program, 2013-14

    Transitional kindergarten (TK)—the first year of a two-year kindergarten program for California children who turn 5 between September 2 and December 2—is intended to better prepare young five-year-olds for kindergarten and ensure a strong start to their educational career. To determine whether this goal is being achieved, American Institutes for Research (AIR) is conducting an evaluation of the impact of TK in California. The goal of this study is to measure the success of the program by determining the impact of TK on students' readiness for kindergarten in several areas. Using a rigorous regression discontinuity (RD) research design, we compared language, literacy, mathematics, executive function, and social-emotional skills at kindergarten entry for students who attended TK and for students who did not attend TK. Overall, we found that TK had a positive impact on students' kindergarten readiness in several domains, controlling for students' age differences. These effects are over and above the experiences children in the comparison group had the year before kindergarten, which for more than 80 percent was some type of preschool program.
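A regression discontinuity design of this kind compares children just on either side of the birthdate cutoff. A textbook sharp-RD estimator, fitting a separate line on each side of the cutoff and differencing the fitted values at the cutoff, can be sketched as follows (synthetic data; not AIR's actual specification or outcome measures):

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def rd_estimate(running, outcome, cutoff=0.0):
    """Sharp RD sketch: one line per side of the cutoff, then the
    difference of the two fitted values at the cutoff itself."""
    left = [(x, y) for x, y in zip(running, outcome) if x < cutoff]
    right = [(x, y) for x, y in zip(running, outcome) if x >= cutoff]
    al, bl = fit_line([x for x, _ in left], [y for _, y in left])
    ar, br = fit_line([x for x, _ in right], [y for _, y in right])
    return (ar + br * cutoff) - (al + bl * cutoff)

# Synthetic data: a linear trend, noise, and a true jump of 2.0 at 0.
random.seed(1)
xs = [random.uniform(-1.0, 1.0) for _ in range(400)]
ys = [1.0 + 0.5 * x + (2.0 if x >= 0 else 0.0) + random.gauss(0.0, 0.1)
      for x in xs]
jump = rd_estimate(xs, ys)   # recovers roughly 2.0
```

Because assignment flips discontinuously at the cutoff while everything else varies smoothly, the jump in the fitted lines isolates the program's effect for children near the cutoff.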

    Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics

    We show how to extend a recently proposed multi-level Monte Carlo approach to the continuous-time Markov chain setting, thereby greatly lowering the computational complexity needed to compute expected values of functions of the state of the system to a specified accuracy. The extension is non-trivial, exploiting a coupling of the requisite processes that is easy to simulate while providing a small variance for the estimator. Further, and in a stark departure from other implementations of multi-level Monte Carlo, we show how to produce an unbiased estimator that is significantly less computationally expensive than the usual unbiased estimator arising from exact algorithms in conjunction with crude Monte Carlo. We thereby dramatically improve, in a quantifiable manner, the basic computational complexity of current approaches that have many names and variants across the scientific literature, including the Bortz-Kalos-Lebowitz algorithm, discrete event simulation, dynamic Monte Carlo, kinetic Monte Carlo, the n-fold way, the next reaction method, the residence-time algorithm, the stochastic simulation algorithm, Gillespie's algorithm, and tau-leaping. The new algorithm applies generically, but we also give an example where the coupling idea alone, even without a multi-level discretization, can be used to improve efficiency by exploiting system structure. Stochastically modeled chemical reaction networks provide a very important application for this work. Hence, we use this context for our notation, terminology, natural scalings, and computational examples.
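Gillespie's algorithm, one of the exact methods the abstract lists, is the base simulator that such multi-level schemes refine. A minimal version for a birth-death chain (hypothetical toy rates; the multi-level coupling itself is not shown) looks like this:

```python
import random

def gillespie_birth_death(birth, death, x0, t_end, rng):
    """Gillespie's stochastic simulation algorithm for a birth-death
    chain: birth at constant rate `birth`, death at rate `death` * x.
    Returns the state at time t_end."""
    t, x = 0.0, x0
    while True:
        a_birth, a_death = birth, death * x
        total = a_birth + a_death
        if total == 0.0:
            return x                         # absorbed; nothing can fire
        t += rng.expovariate(total)          # exponential waiting time
        if t > t_end:
            return x
        if rng.random() * total < a_birth:   # pick a reaction proportionally
            x += 1
        else:
            x -= 1

rng = random.Random(42)
samples = [gillespie_birth_death(10.0, 1.0, 0, 5.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)   # stationary mean is birth/death = 10
```

Crude Monte Carlo averages many such exact paths; the multi-level idea replaces most of them with cheap coupled approximate paths, keeping only a few exact ones to correct the bias.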

    Experiencing simulated outcomes

    Whereas much literature has documented difficulties in making probabilistic inferences, it has also emphasized the importance of task characteristics in determining judgmental accuracy. Noting that people exhibit remarkable efficiency in encoding frequency information sequentially, we construct tasks that exploit this ability by requiring people to experience the outcomes of sequentially simulated data. We report two experiments. The first involved seven well-known probabilistic inference tasks. Participants differed in statistical sophistication and answered with and without experience obtained through sequentially simulated outcomes in a design that permitted both between- and within-subject analyses. The second experiment involved interpreting the outcomes of a regression analysis when making inferences for investment decisions. In both experiments, even the statistically naïve make accurate probabilistic inferences after experiencing sequentially simulated outcomes, and many prefer this presentation format. We conclude by discussing theoretical and practical implications.

    Keywords: probabilistic reasoning; natural frequencies; experiential sampling; simulation.
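The "sequentially simulated outcomes" format can itself be mimicked computationally: instead of applying Bayes' rule analytically, one simulates cases one at a time and counts, as in natural-frequency presentations. The sketch below uses hypothetical parameter values for a generic base-rate task, not the paper's actual stimuli:

```python
import random

def simulate_posterior(prior, sensitivity, false_pos, n, rng):
    """Estimate P(condition | positive test) by 'experiencing' n simulated
    cases and counting, rather than computing the posterior analytically."""
    positives, true_positives = 0, 0
    for _ in range(n):
        has_condition = rng.random() < prior
        p_pos = sensitivity if has_condition else false_pos
        tests_positive = rng.random() < p_pos
        positives += tests_positive
        true_positives += tests_positive and has_condition
    return true_positives / positives if positives else float("nan")

rng = random.Random(7)
estimate = simulate_posterior(0.01, 0.9, 0.05, 200_000, rng)
# analytic answer: 0.01 * 0.9 / (0.01 * 0.9 + 0.99 * 0.05) ≈ 0.154
```

Counting "positives that were truly sick" out of "all positives" is exactly the frequency encoding the experiments exploit, which is why experienced outcomes make the base-rate structure easy to see.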