8,823 research outputs found

    Quantifying Systemic Risk


    Metacognition for spelling in higher education students with dyslexia: is there evidence for the dual burden hypothesis?

    We examined whether academic and professional bachelor students with dyslexia are able to compensate for their spelling deficits with metacognitive experience. Previous research suggested that students with dyslexia may suffer from a dual burden: not only do they perform worse on spelling, but they are also less aware of their difficulties than their peers without dyslexia. According to some authors, this is the result of a poorer feeling of confidence, which can be considered a form of metacognition (metacognitive experience). We tried to isolate this metacognitive experience by asking 100 students with dyslexia and 100 matched control students to rate their feeling of confidence in a word spelling task and a proofreading task. Next, we used Signal Detection Analysis to disentangle the effects of proficiency and criterion setting. We found that students with dyslexia showed lower proficiencies but not suboptimal response biases: they were as good as their peers without dyslexia at deciding when they could be confident; they simply had more cases in which their spelling was wrong. We conclude that the feeling of confidence in our students with dyslexia is as good as in their peers without dyslexia. These findings go against the Dual Burden theory (Kruger & Dunning, 1999), which assumes that people with a skills problem suffer twice as a result of insufficiently developed metacognitive competence. As a result, no gain is to be expected from extra training of this metacognitive experience in higher education students with dyslexia.
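    The analysis separates spelling proficiency (sensitivity, d′) from criterion setting (response bias, c), both computed from hit and false-alarm rates. Below is a minimal Python sketch of that standard Signal Detection computation; the counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch: separating proficiency (d') from response bias (criterion c)
# with standard Signal Detection formulas, as in the analysis described above.
# The counts are illustrative placeholders, not the study's data.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity d' and criterion c from a 2x2 decision table."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # proficiency: separation of correct from incorrect spellings
    criterion = -0.5 * (z_hit + z_fa)   # response bias: overall tendency to report confidence
    return d_prime, criterion

# Hypothetical participant judging whether spellings are correct.
print(dprime_and_criterion(hits=70, misses=30, false_alarms=20, correct_rejections=80))
```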

    Machine learning for automatic prediction of the quality of electrophysiological recordings

    The quality of electrophysiological recordings varies considerably due to technical and biological variability, and neuroscientists inevitably have to select “good” recordings for further analyses. This procedure is time-consuming and prone to selection biases. Here, we investigate replacing human decisions with a machine learning approach. We define 16 features, such as spike height and width, select the most informative ones using a wrapper method, and train a classifier to reproduce the judgement of one of our expert electrophysiologists. Generalisation performance is then assessed on unseen data, classified by the same or by another expert. We observe that the learning machine can be at least as consistent in its judgements as individual experts are with each other. Best performance is achieved with a limited number of informative features, with the optimal feature set differing from one data set to another. With 80–90% of judgements correct, the performance of the system is very promising within the data sets of each expert, but judgements are less reliable when it is used across sets of recordings from different experts. We conclude that the proposed approach is relevant to the selection of electrophysiological recordings, provided parameters are adjusted to different types of experiments and to individual experimenters.
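    As an illustration of the general recipe (wrapper feature selection followed by supervised classification of expert labels), the sketch below combines scikit-learn's SequentialFeatureSelector with a random forest on synthetic placeholder data; it is an assumption-laden stand-in, not the authors' pipeline.

```python
# Sketch: select the most informative of 16 waveform features with a wrapper
# method, then train a classifier to reproduce an expert's "good"/"bad"
# labels. Data, labels, and model choice are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))   # 16 features per recording (spike height, width, ...)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic expert labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward", cv=5)
pipeline = make_pipeline(selector, clf)

# Generalisation estimate: fraction of recordings on which the machine agrees with the expert.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"agreement with expert labels: {scores.mean():.2f}")
```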

    Challenges and solutions for Latin named entity recognition

    Although spanning thousands of years and genres as diverse as liturgy, historiography, lyric and other forms of prose and poetry, the body of Latin texts is still relatively sparse compared to English. Data sparsity in Latin presents a number of challenges for traditional Named Entity Recognition techniques. Solving these challenges and enabling reliable Named Entity Recognition in Latin texts can facilitate many downstream applications, from machine translation to digital historiography, enabling Classicists, historians, and archaeologists, for instance, to track the relationships of historical persons, places, and groups on a large scale. This paper presents the first annotated corpus for evaluating Named Entity Recognition in Latin, as well as a fully supervised model that achieves over 90% F-score on a held-out test set, significantly outperforming a competitive baseline. We also present a novel active learning strategy that predicts how many and which sentences need to be annotated for named entities in order to attain a specified degree of accuracy when recognizing named entities automatically in a given text. This maximizes the productivity of annotators while simultaneously controlling quality.
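    The active learning component can be pictured as an uncertainty-driven annotation loop: label the sentences the current model is least sure about first, and stop once the target accuracy is predicted to be reached. The sketch below shows a generic least-confidence loop on toy sentence-level data; it is not the paper's specific selection or stopping strategy.

```python
# Generic uncertainty-sampling sketch: rank unlabelled sentences by how
# unsure the current model is and annotate the most uncertain ones first.
# Features, model, and data are toy placeholders, not the paper's NER system.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def next_sentences_to_annotate(labelled, labels, unlabelled, batch_size=2):
    """Return indices of the unlabelled sentences the model is least confident about."""
    vec = TfidfVectorizer().fit(labelled + unlabelled)
    model = LogisticRegression(max_iter=1000).fit(vec.transform(labelled), labels)
    probs = model.predict_proba(vec.transform(unlabelled))
    uncertainty = 1.0 - probs.max(axis=1)        # least-confidence score
    return np.argsort(uncertainty)[::-1][:batch_size]

# Toy usage: 1 = sentence contains a named entity, 0 = it does not.
seed = ["Caesar Galliam vicit", "arma virumque cano"]
seed_labels = [1, 0]
pool = ["Cicero orationem habuit", "amor vincit omnia", "Roma aeterna est"]
print(next_sentences_to_annotate(seed, seed_labels, pool))
```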

    Deep Object-Centric Representations for Generalizable Robot Learning

    Robotic manipulation in complex open-world scenarios requires both reliable physical manipulation skills and effective, generalizable perception. In this paper, we propose a method in which general-purpose pretrained visual models serve as an object-centric prior for the perception system of a learned policy. We devise an object-level attentional mechanism that can be used to determine relevant objects from a few trajectories or demonstrations, and then immediately incorporate those objects into a learned policy. A task-independent meta-attention locates possible objects in the scene, and a task-specific attention identifies which objects are predictive of the trajectories. The scope of the task-specific attention is easily adjusted by showing demonstrations with distractor objects or with diverse relevant objects. Our results indicate that this approach exhibits good generalization across object instances using very few samples, and can be used to learn a variety of manipulation tasks with reinforcement learning.
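    The two-stage attention can be pictured as scoring candidate object features (from a task-independent detector) against a learned task-specific query and feeding the weighted result to the policy. The NumPy sketch below is a hedged illustration under assumed shapes and names, not the paper's implementation.

```python
# Hedged sketch of the two-stage idea: a task-independent stage proposes
# candidate object features; a learned task-specific query scores which of
# them matter for the policy. Shapes and names are illustrative assumptions.
import numpy as np

def task_specific_attention(object_features, attention_query):
    """Softmax-weight candidate object features by their relevance to the task."""
    scores = object_features @ attention_query        # one relevance score per object
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The weighted sum is an object-centric observation for the learned policy.
    return weights, weights @ object_features

# Toy example: 5 candidate objects, each with an 8-dim pretrained feature vector.
rng = np.random.default_rng(0)
objects = rng.normal(size=(5, 8))   # e.g. features from a general-purpose pretrained vision model
query = rng.normal(size=8)          # in practice fit from a few demonstrations
weights, observation = task_specific_attention(objects, query)
print(weights.round(2), observation.shape)
```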

    Forecast quality and simple instrument rules: a real-time data approach

    We start from the assertion that useful monetary policy design should be founded on more realistic assumptions about what policymakers can know at the time policy decisions have to be made. Since the Taylor rule, if used as an operational device, implies forward-looking behaviour, we analyze the reliability of the input information. We investigate the forecasting performance of OECD projections for GDP growth rates and inflation. We diagnose a much better forecasting record for inflation rates than for GDP growth rates, which for most countries are almost uninformative at the time a Taylor rule could sensibly be applied. Using this data set, we find significant differences between Taylor rules estimated over revised data and those estimated over real-time data. There is evidence that monetary policy reacts more actively in real time than rules estimated over revised data suggest. Given the evidence of systematic errors in OECD forecasts, we next attempt to correct for these forecast biases and check to what extent this can lower the errors in interest rate policy setting. An ex-ante simulation for the years 1991 to 2001 supports the proposal that correcting for forecast errors and biases based on an error model can lower the resulting policy error in interest rate setting for most countries under consideration. In addition, we investigate to what extent structural changes in policy reaction behaviour can be handled with moving rather than expanding samples. Our results point out that the available information set needs careful examination when applied to instrument rules of the Taylor type. Limited forecast quality and significant data revisions recommend a more sophisticated handling of the dated information, for which we present an operational procedure with the potential to reduce the risk of severe policy errors.
    Keywords: monetary policy rules, economic forecasting, OECD, real-time data
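    For concreteness, the sketch below evaluates a standard Taylor-type rule on forecast inputs and shows where a correction for systematic forecast errors would enter; the coefficients are the usual Taylor (1993) weights, and the simple subtraction stands in for the paper's error model rather than reproducing it.

```python
# Hedged sketch: a forward-looking Taylor-type rule evaluated on forecasts,
# with an optional correction for a known systematic forecast bias. The
# bias correction is a stand-in for the error model discussed above.
def taylor_rate(inflation_forecast, output_gap_forecast,
                neutral_real_rate=2.0, inflation_target=2.0,
                inflation_bias=0.0, output_gap_bias=0.0):
    """Nominal policy rate implied by a simple Taylor rule on (bias-corrected) forecasts."""
    pi = inflation_forecast - inflation_bias      # remove the estimated systematic forecast error
    gap = output_gap_forecast - output_gap_bias
    return neutral_real_rate + pi + 0.5 * (pi - inflation_target) + 0.5 * gap

# Example: a 2.8% inflation forecast that historically overshoots by 0.3 points.
print(taylor_rate(inflation_forecast=2.8, output_gap_forecast=-1.0, inflation_bias=0.3))
```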