137 research outputs found
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate feedback on responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies.
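As a rough illustration of the criterion-guided feedback loop described above, the sketch below assembles a grading prompt from instructor-defined criteria and hands it to a generic `complete(prompt)` callable standing in for whichever LLM backend is used; the function names and prompt wording are illustrative, not the tool's actual interface.

```python
# Minimal sketch, assuming a generic `complete(prompt) -> str` wrapper around
# some LLM backend; names and prompt wording are illustrative only.

def build_feedback_prompt(question: str, criteria: list[str], answer: str) -> str:
    """Assemble a prompt asking the model for criterion-by-criterion feedback."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    return (
        "You are a teaching assistant giving concise, constructive feedback.\n"
        f"Question: {question}\n"
        f"Instructor criteria:\n{rubric}\n"
        f"Student response: {answer}\n"
        "For each criterion, say whether it is met and suggest one improvement."
    )

def give_feedback(question, criteria, answer, complete):
    """`complete` is any callable mapping a prompt string to the model's reply."""
    return complete(build_feedback_prompt(question, criteria, answer))
```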
A smartphone-based online system for fall detection with alert notifications and contextual information of real-life falls
This article presents the results of a prospective study investigating a proof-of-concept, smartphone-based, online system for fall detection and notification. Apart from functioning as a practical fall monitoring instrument, this system may serve as a valuable research tool, enable future studies to scale their ability to capture fall-related data, and help researchers and clinicians to investigate real-life falls.
Comparing families of dynamic causal models
Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data
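As a rough sketch of the two steps described above (family-level inference followed by Bayesian model averaging within a family), the code below works from per-model log evidences under a fixed-effects, flat-prior assumption; it illustrates the idea and is not the authors' implementation.

```python
import numpy as np

# Illustrative only: family comparison and within-family BMA from log evidences.

def family_posteriors(log_evidence, families):
    """log_evidence: 1-D array, one log evidence per model.
    families: dict mapping family name -> list of model indices.
    Every model is assumed to belong to exactly one family."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    # Share prior mass equally across families, then equally across the models
    # inside each family, so that larger families are not favoured by default.
    prior = np.zeros_like(log_evidence)
    for members in families.values():
        prior[members] = (1.0 / len(families)) / len(members)
    log_post = log_evidence + np.log(prior)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                       # posterior over individual models
    fam_post = {name: float(post[members].sum()) for name, members in families.items()}
    return fam_post, post

def bma_within_family(post, members, param_estimates):
    """Average parameter estimates over the models of one family, weighted by
    each model's posterior probability renormalised within that family."""
    members = list(members)
    w = post[members] / post[members].sum()
    return np.average(np.asarray(param_estimates, dtype=float)[members], axis=0, weights=w)
```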
Reducing bias in auditory duration reproduction by integrating the reproduced signal
Duration estimation is known to be far from veridical and to differ for sensory estimates and motor reproduction. To investigate how these differential estimates are integrated for estimating or reproducing a duration and to examine sensorimotor biases in duration comparison and reproduction tasks, we compared estimation biases and variances among three different duration estimation tasks: perceptual comparison, motor reproduction, and auditory reproduction (i.e. a combined perceptual-motor task). We found consistent overestimation in both motor and perceptual-motor auditory reproduction tasks, and the least overestimation in the comparison task. More interestingly, compared to pure motor reproduction, the overestimation bias was reduced in the auditory reproduction task, due to the additional reproduced auditory signal. We further manipulated the signal-to-noise ratio (SNR) in the feedback/comparison tones to examine the changes in estimation biases and variances. Considering perceptual and motor biases as two independent components, we applied the reliability-based model, which successfully predicted the biases in auditory reproduction. Our findings thus provide behavioral evidence of how the brain combines motor and perceptual information together to reduce duration estimation biases and improve estimation reliability
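A minimal sketch of the reliability-based account referred to above, assuming the standard inverse-variance weighting of a motor and a perceptual duration estimate; the numerical values in the example are placeholders, not the study's fitted parameters.

```python
# Reliability-weighted (inverse-variance) combination of two duration estimates.

def combine_estimates(motor_est, motor_var, percept_est, percept_var):
    """Combine a motor and a perceptual duration estimate, each weighted by its
    reliability (the inverse of its variance)."""
    w_motor = 1.0 / motor_var
    w_percept = 1.0 / percept_var
    combined = (w_motor * motor_est + w_percept * percept_est) / (w_motor + w_percept)
    combined_var = 1.0 / (w_motor + w_percept)
    return combined, combined_var

# Example (placeholder numbers): a motor estimate that overshoots, combined with
# a less biased but noisier auditory estimate, yields a reproduction that is
# pulled back toward the veridical duration.
print(combine_estimates(1.3, 0.04, 1.05, 0.09))
```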
Dissociating Variability and Effort as Determinants of Coordination
When coordinating movements, the nervous system often has to decide how to distribute work across a number of redundant effectors. Here, we show that humans solve this problem by trying to minimize both the variability of motor output and the effort involved. In previous studies that investigated the temporal shape of movements, these two selective pressures, despite having very different theoretical implications, could not be distinguished; because noise in the motor system increases with the motor commands, minimization of effort or variability leads to very similar predictions. When multiple effectors with different noise and effort characteristics have to be combined, however, these two cost terms can be dissociated. Here, we measure the importance of variability and effort in coordination by studying how humans share force production between two fingers. To capture variability, we identified the coefficient of variation of the index and little fingers. For effort, we used the sum of squared forces and the sum of squared forces normalized by the maximum strength of each effector. These terms were then used to predict the optimal force distribution for a task in which participants had to produce a target total force of 4–16 N, by pressing onto two isometric transducers using different combinations of fingers. By comparing the predicted distribution across fingers to the actual distribution chosen by participants, we were able to estimate the relative importance of variability and effort as 1:7, with the unnormalized effort being most important. Our results indicate that the nervous system uses multi-effector redundancy to minimize both the variability of the produced output and effort, although effort costs clearly outweighed variability costs.
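The sketch below illustrates the trade-off described above: a target total force is split between two fingers so as to minimise a weighted sum of signal-dependent variability and (unnormalised) effort. The exact cost formulation and the coefficients of variation are placeholder assumptions; the default 1:7 weighting simply echoes the estimate reported above.

```python
import numpy as np

# Illustrative cost trade-off for sharing a target force between two fingers.

def optimal_force_sharing(target, cv, w_var=1.0, w_eff=7.0):
    """cv: coefficient of variation of each finger (noise scales with force)."""
    f1 = np.linspace(0.0, target, 1001)                    # candidate forces, finger 1
    f2 = target - f1                                       # finger 2 supplies the rest
    variability = (cv[0] * f1) ** 2 + (cv[1] * f2) ** 2    # variance of total output
    effort = f1 ** 2 + f2 ** 2                             # unnormalised squared forces
    cost = w_var * variability + w_eff * effort
    best = np.argmin(cost)
    return f1[best], f2[best]

# Example: a noisier little finger shifts the optimum toward the index finger.
print(optimal_force_sharing(8.0, cv=(0.05, 0.10)))
```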
Distortions of Subjective Time Perception Within and Across Senses
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.
Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.
Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.
Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects
Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can explain neither the relatively fast re-adaptation after a long washout period nor savings on a subsequent day. Here we examined whether these effects can be explained by assuming that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model, adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies, on the other hand, should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters.
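The following is a toy sketch, not the paper's model, of the relevance idea described above: a slow, always-relevant body estimate, a faster world-disturbance estimate, and a relevance weight that is updated according to how much the world estimate helps explain the observed error. All parameter values and the update rules are invented for illustration.

```python
import numpy as np

# Toy illustration of relevance-gated adaptation (not the published model).

def simulate(disturbance, eta_body=0.02, eta_world=0.2, eta_rel=0.1, noise=0.05):
    """disturbance: sequence of imposed perturbations, one per trial."""
    body, world, relevance = 0.0, 0.0, 0.5
    compensation_history = []
    for d in disturbance:
        compensation = body + relevance * world    # world estimate used only if relevant
        observed = d + np.random.randn() * noise   # what the movement error reveals
        error = observed - compensation
        # Compare how well the error is explained with and without the world term:
        # if including it would reduce the error, it is probably relevant right now.
        mismatch_without = abs(observed - body)
        mismatch_with = abs(observed - (body + world))
        relevance = float(np.clip(relevance + eta_rel * np.sign(mismatch_without - mismatch_with), 0.0, 1.0))
        body += eta_body * error                   # slow, always-relevant body estimate
        world += eta_world * error * relevance     # fast world estimate, gated by relevance
        compensation_history.append(compensation)
    return np.array(compensation_history)

# Adaptation, washout, re-adaptation: the retained world estimate is re-engaged
# once judged relevant again, producing faster relearning (savings).
curve = simulate(np.r_[np.ones(150), np.zeros(150), np.ones(150)])
```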
Neuromatch Academy: Teaching Computational Neuroscience with Global Accessibility
Neuromatch Academy (NMA) designed and ran a fully online 3-week Computational Neuroscience Summer School for 1757 students with 191 teaching assistants (TAs) working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and universal accessibility.
An effect of serotonergic stimulation on learning rates for rewards apparent after long intertrial intervals
Serotonin has widespread, but computationally obscure, modulatory effects on learning and cognition. Here, we studied the impact of optogenetic stimulation of dorsal raphe serotonin neurons in mice performing a non-stationary, reward-driven decision-making task. Animals showed two distinct choice strategies. Choices after short inter-trial-intervals (ITIs) depended only on the last trial outcome and followed a win-stay-lose-switch pattern. In contrast, choices after long ITIs reflected outcome history over multiple trials, as described by reinforcement learning models. We found that optogenetic stimulation during a trial significantly boosted the rate of learning that occurred due to the outcome of that trial, but these effects were only exhibited on choices after long ITIs. This suggests that serotonin neurons modulate reinforcement learning rates, and that this influence is masked by alternate, unaffected, decision mechanisms. These results provide insight into the role of serotonin in treating psychiatric disorders, particularly its modulation of neural plasticity and learning.
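A minimal sketch of the two choice strategies described above: after short ITIs, choices follow win-stay/lose-switch, while after long ITIs they follow an incremental reinforcement-learning rule whose learning rate is boosted on stimulated trials. All parameter values (threshold, learning rates, softmax temperature) are illustrative, not the fitted ones.

```python
import numpy as np

# Illustrative two-strategy choice model with a stimulation-boosted learning rate.

def next_choice(prev_choice, prev_reward, q_values, iti, beta=3.0, iti_threshold=7.0):
    if iti < iti_threshold:
        # Win-stay / lose-switch: repeat a rewarded choice, otherwise switch.
        return prev_choice if prev_reward else 1 - prev_choice
    # After long ITIs, choose via a softmax over learned action values.
    p_right = 1.0 / (1.0 + np.exp(-beta * (q_values[1] - q_values[0])))
    return int(np.random.rand() < p_right)

def update_values(q_values, choice, reward, stimulated, alpha=0.1, alpha_boost=2.0):
    """Rescorla-Wagner update; stimulation multiplies the learning rate."""
    lr = alpha * (alpha_boost if stimulated else 1.0)
    q_values[choice] += lr * (reward - q_values[choice])
    return q_values
```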
Self versus Environment Motion in Postural Control
To stabilize our position in space we use visual information as well as non-visual physical motion cues. However, visual cues can be ambiguous: visually perceived motion may be caused by self-movement, movement of the environment, or both. The nervous system must combine the ambiguous visual cues with noisy physical motion cues to resolve this ambiguity and control our body posture. Here we have developed a Bayesian model that formalizes how the nervous system could solve this problem. In this model, the nervous system combines the sensory cues to estimate the movement of the body. We analytically demonstrate that, as long as visual stimulation is fast in comparison to the uncertainty in our perception of body movement, the optimal strategy is to weight visually perceived movement velocities proportional to a power law. We find that this model accounts for the nonlinear influence of experimentally induced visual motion on human postural behavior both in our data and in previously published results
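As a loose illustration of the power-law weighting described above, the sketch below scales the contribution of the visually sensed velocity by a power law of its magnitude before combining it with a non-visual (physical) motion cue, so that fast scene motion, which is more likely to reflect movement of the environment, is discounted. The exponent, gain, and non-visual weight are placeholders rather than the paper's fitted values.

```python
# Power-law down-weighting of visual motion in a simple cue-combination sketch.

def self_motion_estimate(visual_velocity, physical_velocity,
                         gain=1.0, exponent=-0.5, w_physical=1.0):
    """Combine a visual and a non-visual motion cue; the visual weight falls off
    as a power law of the visual speed."""
    speed = abs(visual_velocity) + 1e-6      # small constant avoids 0 ** negative exponent
    w_visual = gain * speed ** exponent
    return (w_visual * visual_velocity + w_physical * physical_velocity) / (w_visual + w_physical)

# Example: a fast visual scene motion contributes proportionally less to the
# self-motion estimate than a slow one.
print(self_motion_estimate(0.1, 0.0), self_motion_estimate(2.0, 0.0))
```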