
    Risk, Unexpected Uncertainty, and Estimation Uncertainty: Bayesian Learning in Unstable Settings

    Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian learner perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
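    To make the three notions concrete, the following is a minimal, hypothetical sketch of a Beta-Bernoulli learner for one restless arm; the hazard rate, the partial reset toward the prior, and all names are illustrative assumptions rather than the authors' actual model.

```python
class RestlessArmLearner:
    """Illustrative Bayesian learner for one arm of a restless bandit (sketch only)."""

    def __init__(self, hazard=0.05):
        self.a, self.b = 1.0, 1.0  # Beta(1, 1) prior on the payoff probability
        self.hazard = hazard       # assumed per-trial probability of a jump

    def uncertainties(self):
        p = self.a / (self.a + self.b)                    # posterior mean payoff probability
        risk = p * (1 - p)                                # irreducible outcome uncertainty (risk)
        estimation = p * (1 - p) / (self.a + self.b + 1)  # posterior variance (estimation uncertainty)
        unexpected = self.hazard                          # chance the payoff probability just jumped
        return risk, estimation, unexpected

    def update(self, reward):
        # Partially reset the posterior toward the prior in proportion to the jump
        # probability (a crude account of unexpected uncertainty), then apply the
        # usual conjugate update for a Bernoulli outcome in {0, 1}.
        self.a = (1 - self.hazard) * self.a + self.hazard * 1.0
        self.b = (1 - self.hazard) * self.b + self.hazard * 1.0
        self.a += reward
        self.b += 1 - reward
```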

    A Rapid Assessment of the Quality of Neonatal Healthcare in Kilimanjaro Region, Northeast Tanzania.

    While child mortality is declining in Africa, there has been no evidence of a comparable reduction in neonatal mortality. The quality of inpatient neonatal care is likely a contributing factor, but data from resource-limited settings are few. The objective of this study was to assess the quality of neonatal care in the district hospitals of the Kilimanjaro region of Tanzania. Clinical records were reviewed for ill or premature neonates admitted to 13 inpatient health facilities in the Kilimanjaro region; staffing and equipment levels were also assessed. Among the 82 neonates reviewed, key health information was missing from a substantial proportion of records: on maternal antenatal cards, blood group was recorded for 52 (63.4%) mothers, Rhesus (Rh) factor for 39 (47.6%), VDRL for 59 (71.9%) and HIV status for 77 (93.1%). From neonatal clinical records, heart rate was recorded for 3 (3.7%) neonates, respiratory rate in 14 (17.1%) and temperature in 33 (40.2%). None of the 13 facilities had a functioning premature unit despite a calculated gestational age <36 weeks in 45.6% of evaluated neonates. Intravenous fluids and oxygen were available in 9 of 13 facilities, while antibiotics and essential basic equipment were available in more than two thirds. Medication dosing errors were common; under-dosage of ampicillin, gentamicin and cloxacillin was found in 44.0%, 37.9% and 50% of cases, respectively, while over-dosage was found in 20.0%, 24.2% and 19.9%, respectively. Physician and assistant-physician staffing levels, as measured by the WHO Workload Indicators of Staffing Need (WISN), were generally low. Key aspects of neonatal care were found to be poorly documented or incorrectly implemented in this appraisal of neonatal care in Kilimanjaro. Efforts towards quality assurance and enhanced motivation of staff may improve outcomes for this vulnerable group.

    The impact of poor adult health on labor supply in the Russian Federation

    We examine the labor supply consequences of poor health in the Russian Federation, a country with exceptionally adverse adult health outcomes. In both baseline OLS models and in models with individual fixed effects, more serious ill-health events, somewhat surprisingly, generally have only weak effects on hours worked. At the same time, their effect on the extensive margin of labor supply is substantial. Moreover, when combining the effects on both the intensive and extensive margins, the effect of illness on hours worked increases considerably for a range of conditions. In addition, for most of the age distribution, people with poor self-assessed health living in rural areas are less likely to stop working than people living in cities. While there is no conclusive explanation for this finding, it could be related to the existence of certain barriers that prevent people with poor health from withdrawing from the labor force in order to take care of their health.

    The cost of large numbers of hypothesis tests on power, effect size and sample size

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands, which can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
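    The figures quoted above can be approximated with a Bonferroni correction and the standard normal-approximation sample-size formula; the sketch below is an illustration under those assumptions, not the paper's own calculator.

```python
from scipy.stats import norm

def sample_size_factor(m_tests, alpha=0.05, power=0.80):
    """Factor by which per-group sample size must grow, relative to a single
    two-sided test at level alpha, when Bonferroni-correcting for m_tests
    (normal approximation, fixed effect size)."""
    z_beta = norm.ppf(power)
    z_one = norm.ppf(1 - alpha / 2)
    z_m = norm.ppf(1 - alpha / (2 * m_tests))
    return ((z_m + z_beta) / (z_one + z_beta)) ** 2

# Roughly reproduces the numbers quoted in the abstract:
print(sample_size_factor(10))                             # ~1.70 -> ~70% larger sample
print(sample_size_factor(1e7) / sample_size_factor(1e6))  # ~1.13 -> ~13% larger sample
```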

    Reciprocity as a foundation of financial economics

    This paper argues that the subsistence of the fundamental theorem of contemporary financial mathematics is the ethical concept ‘reciprocity’. The argument is based on identifying an equivalence between the contemporary, and ostensibly ‘value neutral’, Fundamental Theorem of Asset Pricing and theories of mathematical probability that emerged in the seventeenth century in the context of the ethical assessment of commercial contracts in a framework of Aristotelian ethics. This observation, the main claim of the paper, is justified on the basis of results from the Ultimatum Game and is analysed within a framework of Pragmatic philosophy. The analysis leads to the explanatory hypothesis that markets are centres of communicative action with reciprocity as a rule of discourse. The purpose of the paper is to reorientate financial economics to emphasise the objectives of cooperation and social cohesion, and to this end we offer specific policy advice.

    Quality Evaluation of the Weekly Vertical Loading Effects Induced from Continental Water Storage Models

    To remove continental water storage (CWS) signals from the GPS data, CWS mass models are needed to obtain predicted surface displacements. We compared weekly GPS height time series with five CWS models: (1) the monthly and (2) three-hourly Global Land Data Assimilation System (GLDAS); (3) the monthly and (4) one-hourly Modern-Era Retrospective Analysis for Research and Applications (MERRA); (5) the six-hourly National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) global reanalysis products (NCEP-R-2). We find that, of the 344 selected global IGS stations, more than 77% have their weighted root mean square (WRMS) reduced in the weekly GPS height when both the GLDAS and MERRA CWS products are used to model the surface displacement, and the largest improvements are concentrated mainly in North America and Eurasia. We find that the one-hourly MERRA-Land dataset is the most appropriate product for modeling weekly vertical surface displacement caused by CWS variations. The three-hourly GLDAS data ranks second, while the GLDAS and MERRA monthly products rank third. The higher spatial resolution of the MERRA product improves the performance of the CWS model in reducing the scatter of the GPS height by about 2–6% compared with the GLDAS. At the same spatial resolution, higher temporal resolution improves the performance by almost the same magnitude. We also confirm that removing the ATML and NTOL effects from the weekly GPS height markedly improves the performance of the CWS model in correcting the GPS height, by at least 10%, especially for coastal and island stations. Since the GLDAS product has a much greater latency than the MERRA product, MERRA would be a better choice to model surface displacements from CWS. Finally, we find that the NCEP-R-2 data is not sufficiently precise to be used for this application. Further work is still required to determine the reason.
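    For reference, the WRMS-reduction statistic used above can be computed as in the following sketch; the inverse-variance weighting and the function names are assumptions for illustration rather than the authors' exact processing.

```python
import numpy as np

def wrms(residuals, sigmas):
    """Weighted root mean square of a residual series, weighting each weekly
    solution by the inverse of its formal variance."""
    w = 1.0 / np.asarray(sigmas) ** 2
    r = np.asarray(residuals)
    return np.sqrt(np.sum(w * r ** 2) / np.sum(w))

def wrms_reduction(gps_height, cws_displacement, sigmas):
    """Percentage reduction in WRMS of the GPS height series after subtracting
    the CWS-model predicted vertical displacement."""
    before = wrms(gps_height, sigmas)
    after = wrms(np.asarray(gps_height) - np.asarray(cws_displacement), sigmas)
    return 100.0 * (before - after) / before
```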

    The computational therapeutic: exploring Weizenbaum's ELIZA as a history of the present

    This paper explores the history of ELIZA, a computer programme approximating a Rogerian therapist, developed by Joseph Weizenbaum at MIT in the 1960s, as an early AI experiment. ELIZA’s reception provoked Weizenbaum to re-appraise the relationship between ‘computer power and human reason’ and to attack the ‘powerful delusional thinking’ about computers and their intelligence that he understood to be widespread in the general public and also amongst experts. The root issue for Weizenbaum was whether human thought could be ‘entirely computable’ (reducible to logical formalism). This also provoked him to re-consider the nature of machine intelligence and to question the instantiation of its logics in the social world, which would come to operate, he said, as a ‘slow acting poison’. Exploring Weizenbaum’s 20th Century apostasy, in the light of ELIZA, illustrates ways in which contemporary anxieties and debates over machine smartness connect to earlier formations. In particular, this article argues that it is in its designation as a computational therapist that ELIZA is most significant today. ELIZA points towards a form of human–machine relationship now pervasive, a precursor of the ‘machinic therapeutic’ condition we find ourselves in, and thus speaks very directly to questions concerning modulation, autonomy, and the new behaviorism that are currently arising.

    Reproductive Phase Locking of Mosquito Populations in Response to Rainfall Frequency

    The frequency of moderate to heavy rainfall events is projected to change in response to global warming. Here we show that these hydrologic changes may have a profound effect on mosquito population dynamics and rates of mosquito-borne disease transmission. We develop a simple model, which treats the mosquito reproductive cycle as a phase oscillator that responds to rainfall frequency forcing. This model reproduces observed mosquito population dynamics and indicates that mosquito-borne disease transmission can be sensitive to rainfall frequency. These findings indicate that changes to the hydrologic cycle, in particular the frequency of moderate to heavy rainfall events, could have a profound effect on the transmission rates of some mosquito-borne diseases.
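    The phase-oscillator idea can be illustrated with a toy forced oscillator, as below; the natural frequency, the form of the rainfall coupling, and the emergence proxy are all assumptions for illustration, not the paper's published model.

```python
import numpy as np

def simulate_reproductive_phase(rain_days, days=200, cycle_days=10.0, kick=0.5):
    """Toy forced phase oscillator: the reproductive phase theta advances with an
    assumed natural cycle of cycle_days and is nudged toward theta = 0 (oviposition)
    on days with rainfall, so the cycle can lock to the rainfall frequency."""
    omega = 2.0 * np.pi / cycle_days
    theta = 0.0
    emergence = []
    for day in range(days):
        theta += omega
        if day in rain_days:
            theta -= kick * np.sin(theta)   # rainfall pulls the phase toward oviposition
        theta %= 2.0 * np.pi
        emergence.append(1.0 + np.cos(theta))  # crude proxy for adult emergence
    return np.array(emergence)

# Example: rainfall every 7 days; the reproductive cycle tends to lock to this forcing.
series = simulate_reproductive_phase(rain_days=set(range(0, 200, 7)))
```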