Likelihood-free inference of experimental Neutrino Oscillations using Neural Spline Flows
In machine learning, likelihood-free inference refers to the task of
performing an analysis driven by data instead of an analytical expression. We
discuss the application of Neural Spline Flows, a neural density estimation
algorithm, to the likelihood-free inference problem of the measurement of
neutrino oscillation parameters in Long Baseline neutrino experiments. A method
adapted to physics parameter inference is developed and applied to the case of
the muon neutrino disappearance analysis at the T2K experiment.
Comment: 10 pages, 3 figures
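The workflow the abstract describes can be sketched at a high level: draw oscillation parameters from a prior, run a simulator to produce pseudo-data, and train a conditional neural density estimator on the (parameter, data) pairs so it approximates the posterior without an analytical likelihood. The Python sketch below is a hypothetical illustration of that idea only: the toy "disappearance" simulator, the use of a small mixture density network as a stand-in for a Neural Spline Flow, and every name and number in it are assumptions, not the method or code of the paper.

```python
# Hypothetical sketch of likelihood-free parameter inference (NOT the paper's code).
# A small mixture density network stands in for the Neural Spline Flow; the toy
# "disappearance" simulator and all parameter names are illustrative assumptions.
import torch
import torch.nn as nn

def simulate(theta, n_events=500):
    """Toy simulator: survival fraction observed in a pseudo-experiment,
    with theta playing the role of a single oscillation parameter in [0, 1]."""
    p_survive = 1.0 - 0.5 * theta  # crude stand-in for an oscillation probability
    counts = torch.distributions.Binomial(total_count=n_events, probs=p_survive).sample()
    return (counts / n_events).unsqueeze(-1)  # 1-D summary statistic x

class MDN(nn.Module):
    """Conditional density estimator q(theta | x) with k Gaussian components."""
    def __init__(self, k=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 3 * k))

    def log_prob(self, theta, x):
        logits, mu, log_sigma = self.net(x).chunk(3, dim=-1)
        comp = torch.distributions.Normal(mu, log_sigma.exp().clamp(min=1e-3))
        log_w = torch.log_softmax(logits, dim=-1)
        return torch.logsumexp(log_w + comp.log_prob(theta), dim=-1)

# Training set: parameters drawn from a flat prior, paired with simulated summaries.
theta_train = torch.rand(20_000)
x_train = simulate(theta_train)

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # minimise the negative log-density of theta given x
    opt.zero_grad()
    loss = -model.log_prob(theta_train.unsqueeze(-1), x_train).mean()
    loss.backward()
    opt.step()

# Approximate posterior for one "observed" pseudo-experiment, on a parameter grid.
x_obs = simulate(torch.tensor([0.95]))
grid = torch.linspace(0.0, 1.0, 101).unsqueeze(-1)
with torch.no_grad():
    posterior = model.log_prob(grid, x_obs.expand(101, 1)).exp()
print("approximate posterior mode:", grid[posterior.argmax()].item())
```

A real analysis of the kind described would replace the stand-in density estimator with a conditional Neural Spline Flow and the toy simulator with the experiment's event generation and detector simulation.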
Validation of the face-name pairs task in major depression: impaired recall but not recognition
Major depression can be associated with neurocognitive deficits which are believed, in part, to be related to medial temporal lobe pathology. The purpose of this study was to investigate this impairment using a hippocampal-dependent neuropsychological task. The face-name pairs task was used to assess associative memory functioning in 19 patients with major depression. When compared to age-, sex-, and education-matched controls, patients with depression showed impaired learning, delayed cued-recall, and delayed free-recall. However, they also showed preserved recognition of the verbal and nonverbal components of this task. Results indicate that the face-name pairs task is sensitive to neurocognitive deficits in major depression. This research was funded by a 4-year Health Research Board grant.
A matter of time: Implicit acquisition of recursive sequence structures
A dominant hypothesis in empirical research on the evolution of language is that the fundamental difference between animal and human communication systems is captured by the distinction between regular and more complex non-regular grammars. Studies reporting successful artificial grammar learning of nested recursive structures, and imaging studies of the same, have methodological shortcomings: they typically allow explicit problem-solving strategies, which subsequent behavioral studies have shown to account for the learning effect. The present study overcomes these shortcomings by using subtle violations of agreement structure in a preference classification task. In contrast to the studies conducted so far, we use an implicit learning paradigm, allowing the time needed for both abstraction processes and consolidation to take place. Our results demonstrate robust implicit learning of recursively embedded structures (context-free grammar) and recursive structures with cross-dependencies (context-sensitive grammar) in an artificial grammar learning task spanning 9 days. Keywords: implicit artificial grammar learning; centre embedded; cross-dependency; implicit learning; context-sensitive grammar; context-free grammar; regular grammar; non-regular grammar
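The formal contrast at the heart of this abstract, centre-embedded (nested) dependencies versus cross-serial dependencies, can be made concrete with a short sketch. The Python example below is purely illustrative: the category symbols (A1/B1, ...), the pairing rule, and the way a violation is introduced are assumptions standing in for the study's actual stimulus materials.

```python
# Illustrative sketch of the two dependency types (NOT the study's stimuli).
# Symbols A1..A3 must agree with their paired B1..B3; the pairing and the
# violation procedure below are assumptions for demonstration only.
import random

PAIRS = {"A1": "B1", "A2": "B2", "A3": "B3"}

def nested(n):
    """Centre-embedded dependencies (context-free), e.g. A1 A2 A3 B3 B2 B1."""
    left = [random.choice(list(PAIRS)) for _ in range(n)]
    return left + [PAIRS[a] for a in reversed(left)]

def crossed(n):
    """Cross-serial dependencies (context-sensitive), e.g. A1 A2 A3 B1 B2 B3."""
    left = [random.choice(list(PAIRS)) for _ in range(n)]
    return left + [PAIRS[a] for a in left]

def violate(seq):
    """Introduce a subtle agreement violation: one B element is replaced by a
    B from the wrong category, analogous to the violations used in the task."""
    seq = list(seq)
    i = random.randrange(len(seq) // 2, len(seq))
    seq[i] = random.choice([b for b in PAIRS.values() if b != seq[i]])
    return seq

print("nested  :", " ".join(nested(3)))
print("crossed :", " ".join(crossed(3)))
print("violated:", " ".join(violate(crossed(3))))
```

In an implicit learning design of the kind described, participants would classify such sequences by preference rather than by explicit grammaticality judgments.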
