
    Quantile forecast discrimination ability and value

    While probabilistic forecast verification for categorical forecasts is well established, some of the existing concepts and methods have not found their equivalent for the case of continuous variables. New tools dedicated to the assessment of forecast discrimination ability and forecast value are introduced here, taking quantile forecasts as the base product for the continuous case (hence in a nonparametric framework). The relative user characteristic (RUC) curve and the quantile value plot allow analysing the performance of a forecast for a specific user in a decision-making framework. The RUC curve is designed as a user-based discrimination tool, and the quantile value plot translates forecast discrimination ability into economic value. The relationship between the overall value of a quantile forecast and the respective quantile skill score is also discussed. The application of these new verification approaches and tools is illustrated with synthetic datasets, as well as for global radiation forecasts from COSMO-DE-EPS, the high-resolution ensemble of the German Weather Service.
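    The quantile skill score mentioned in the abstract is built on the pinball (quantile) loss, the standard scoring rule for quantile forecasts. A minimal sketch of both quantities (function names are our own; the skill score here follows the usual "1 minus score ratio against a reference" convention, not necessarily the exact formulation in the paper):

    ```python
    import numpy as np

    def quantile_score(forecasts, observations, tau):
        """Pinball (quantile) loss at level tau, averaged over cases.

        Penalizes under-forecasts by tau and over-forecasts by (1 - tau),
        so the loss is minimized by the true tau-quantile.
        """
        forecasts = np.asarray(forecasts, dtype=float)
        observations = np.asarray(observations, dtype=float)
        diff = observations - forecasts
        return float(np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff)))

    def quantile_skill_score(forecasts, observations, reference, tau):
        """Skill relative to a reference forecast (e.g. climatology):
        QSS = 1 - QS_forecast / QS_reference (1 = perfect, 0 = no skill)."""
        qs_f = quantile_score(forecasts, observations, tau)
        qs_r = quantile_score(reference, observations, tau)
        return 1.0 - qs_f / qs_r
    ```

    For tau = 0.5 the pinball loss reduces to half the mean absolute error, which is a quick sanity check on any implementation.
    
    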

    Learning reference biases from language input: A cognitive modelling approach

    In order to gain insight into how people acquire certain reference biases in language and how those biases influence online language processing, we constructed a cognitive model and presented it with a dataset containing reference asymmetries. Via prediction and reinforcement learning, the model was able to pick up on the asymmetries in the input. The model predictions have implications for various accounts of reference processing and demonstrate that seemingly complex behavior can be explained by simple learning mechanisms.

    A cognitive modeling approach to learning and using reference biases in language

    During real-time language processing, people rely on linguistic and non-linguistic biases to anticipate upcoming linguistic input. One of these linguistic biases is known as the implicit causality bias, wherein language users anticipate that certain entities will be rementioned in the discourse based on the entity's particular role in an expressed causal event. For example, when language users encounter a sentence like “Elizabeth congratulated Tina …” during real-time language processing, they seemingly anticipate that the discourse will continue about Tina, the object referent, rather than Elizabeth, the subject referent. However, it is often unclear how these reference biases are acquired and how exactly they get used during real-time language processing. In order to investigate these questions, we developed a reference learning model within the PRIMs cognitive architecture that simulated the process of predicting upcoming discourse referents and their linguistic forms. Crucially, across the linguistic input the model was presented with, there were asymmetries with respect to how the discourse continued. By utilizing the learning mechanisms of the PRIMs architecture, the model was able to optimize its predictions, ultimately leading to biased model behavior. More specifically, following subject-biased implicit causality verbs the model was more likely to predict that the discourse would continue about the subject referent, whereas following object-biased implicit causality verbs the model was more likely to predict that the discourse would continue about the object referent. In a similar fashion, the model was more likely to predict that subject referent continuations would be in the form of a pronoun, whereas object referent continuations would be in the form of a proper name. These learned biases were also shown to generalize to novel contexts in which either the verb or the subject and object referents were new. The results of the present study demonstrate that seemingly complex linguistic behavior can be explained by cognitively plausible domain-general learning mechanisms. This study has implications for psycholinguistic accounts of predictive language processing and language learning, as well as for theories of implicit causality and reference processing.
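    The prediction-and-reinforcement mechanism described above can be illustrated with a toy utility-learning loop. This is a deliberately simplified stand-in, not the PRIMs architecture itself; the continuation rates, learning rate, exploration rate, and reward values are all assumptions chosen only to show how asymmetric input induces a prediction bias:

    ```python
    import random

    def train_bias(continuation_rate, trials=2000, alpha=0.2, seed=1):
        """Toy sketch (not PRIMs): learn a referent-prediction bias by
        reinforcement, given asymmetric discourse continuations.

        continuation_rate: probability that the discourse continues about
        the subject referent (e.g. high after subject-biased verbs).
        """
        rng = random.Random(seed)
        utility = {"predict_subject": 0.0, "predict_object": 0.0}
        for _ in range(trials):
            # epsilon-greedy choice: mostly exploit the higher-utility
            # prediction, occasionally explore
            if rng.random() < 0.1:
                choice = rng.choice(list(utility))
            else:
                choice = max(utility, key=utility.get)
            actual = ("predict_subject"
                      if rng.random() < continuation_rate
                      else "predict_object")
            reward = 1.0 if choice == actual else -1.0
            # delta-rule update: move the chosen action's utility
            # toward the received reward
            utility[choice] += alpha * (reward - utility[choice])
        return utility

    # subject-biased input yields a subject-prediction bias, and vice versa
    subject_biased = train_bias(continuation_rate=0.8)
    object_biased = train_bias(continuation_rate=0.2)
    ```

    After training on input where 80% of continuations are about the subject, the learned utility of predicting the subject exceeds that of predicting the object; flipping the input asymmetry flips the bias, mirroring the subject- vs. object-biased verb contrast in the abstract.
    
    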

    Exocytosis and Endocytosis in Neuroendocrine Cells: Inseparable Membranes!

    Although much has been learned concerning the mechanisms of secretory vesicle formation and fusion at donor and acceptor membrane compartments, relatively little attention has been paid toward understanding how cells maintain a homeostatic membrane balance through vesicular trafficking. In neurons and neuroendocrine cells, release of neurotransmitters, neuropeptides, and hormones occurs through calcium-regulated exocytosis at the plasma membrane. To allow recycling of secretory vesicle components and to preserve organelle integrity, cells must initiate and regulate compensatory membrane uptake. This review relates the fate of secretory granule membranes after full fusion exocytosis in neuroendocrine cells. In particular, we focus on the potential role of lipids in preserving and sorting secretory granule membranes after exocytosis, and we discuss the potential mechanisms of membrane retrieval.

    Maze solving using temperature-induced Marangoni flow

    A temperature gradient can be utilized for maze solving using a temperature-induced Marangoni flow. The induced liquid flow drags passive tracers, such as small dye particles, which dissolve in the water phase, thus visualizing the shortest path.
