
    Task context determines whether common or separate inhibitory signals underlie the control of eye-hand movements

    Whereas inhibitory control of single-effector movements has been widely studied, the control of coordinated eye-hand movements has received less attention. Nevertheless, previous studies have contradictorily suggested that either a common signal or separate signals are responsible for the inhibition of coordinated eye-hand movements. In continuation of our previous study, we varied behavioral contexts and used a stochastic accumulation-to-threshold model, which predicts that the variability of the reaction time distribution scales with its mean, to study the inhibitory control of eye-hand movements. Participants performed eye-hand movements in different task conditions, and in each condition they had to redirect movements in a fraction of trials. Task contexts in which behavior was best explained by a common initiation signal had similar error responses for eye and hand, despite different mean reaction times, indicating a common inhibitory signal. In contrast, behavior that was best explained by separate initiation signals had dissimilar error responses for eye and hand, indicating separate inhibitory signals. These behavioral responses were further validated using electromyography and computational models implementing either a common inhibitory control signal or separate ones. Interestingly, in one particular context, a common initiation and inhibitory signal explained behavior in the majority of trials, whereas separate initiation and inhibitory signals predicted behavior better in a subset of trials. This highlights the flexibility that exists in the brain and, in effect, reconciles the heterogeneous results reported by previous studies.

    NEW & NOTEWORTHY Prior studies have contradictorily suggested that either a single inhibitory signal or separate inhibitory signals underlie the inhibition of coordinated eye-hand movements. Using different tasks, we observed that when eye-hand movements were initiated by a common signal, they were controlled by a common inhibitory signal. However, when the two effectors were initiated by separate signals, they were controlled by separate inhibitory signals. This highlights the flexible control of eye-hand movements and reconciles the heterogeneous results previously reported in the literature.
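
    The contrast between the two architectures can be made concrete with a small Monte Carlo sketch (illustrative only, not the authors' fitted model; all distributions and parameter values below are assumptions): when a single stop process races a single shared go process, eye and hand escape inhibition on the same trials and show matched error rates, whereas independent races per effector let the faster effector escape more often.

```python
# Minimal Monte Carlo sketch contrasting a common vs. separate inhibitory
# signal for coordinated eye-hand movements. All distributions and parameter
# values are illustrative assumptions, not fitted model parameters.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
SSD, SSRT = 0.20, 0.10                   # stop-signal delay, stopping latency (s)

# --- Common mode: one go process feeds both effectors --------------------
go = rng.normal(0.30, 0.05, N)           # shared go finish time
stop = SSD + rng.normal(SSRT, 0.02, N)   # single stop process
escaped = go < stop                      # stop acts on the shared process,
p_err_eye_common = escaped.mean()        # so eye and hand escape together;
p_err_hand_common = escaped.mean()       # motor delays would change mean RTs
                                         # but not cancellation outcomes

# --- Separate mode: independent go and stop races per effector -----------
go_eye = rng.normal(0.25, 0.05, N)       # eye initiated earlier on average
go_hand = rng.normal(0.38, 0.06, N)
stop_eye = SSD + rng.normal(SSRT, 0.02, N)
stop_hand = SSD + rng.normal(SSRT, 0.02, N)
p_err_eye_sep = (go_eye < stop_eye).mean()
p_err_hand_sep = (go_hand < stop_hand).mean()

print(f"common:   eye {p_err_eye_common:.2f}  hand {p_err_hand_common:.2f}")
print(f"separate: eye {p_err_eye_sep:.2f}  hand {p_err_hand_sep:.2f}")
```

    Under the common mode the two effectors' error rates coincide even though their mean reaction times would differ, mirroring the behavioral signature described above.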

    Mind-wandering impedes response inhibition by affecting the triggering of the inhibitory process

    Mind-wandering is a state in which our mental focus shifts towards task-unrelated thoughts. While it is known that mind-wandering has a detrimental effect on concurrent task performance, e.g., decreased accuracy, its effect on executive functions is poorly studied. Yet the latter question is relevant to many real-world situations, e.g., rapid stopping during driving. Here we studied how mind-wandering affects the requirement to subsequently stop an incipient motor response. We tested, first, whether mind-wandering affected stopping, and second, which component of stopping was affected: the triggering of the inhibitory brake or the implementation of the brake following triggering. We observed that during mind-wandering, stopping latency increased, as did the proportion of trials with failed triggering. Indeed, 67% of the variance of the increase in stopping latency was explained by the increased trigger failures. Thus, mind-wandering affects stopping, primarily by affecting the triggering of the brake.
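
    The mechanics of this result can be sketched with a standard race-model simulation (a minimal illustration under assumed parameters, not the study's data): trials on which the brake is never triggered count as failed stops, which inflates the stopping latency recovered by the usual integration method.

```python
# Sketch of how trigger failures inflate the estimated stopping latency
# (SSRT) under the standard integration method. Parameter values are
# illustrative assumptions, not those of the study.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
go = rng.normal(450, 80, N)       # go RTs (ms)
SSD, TRUE_SSRT = 200.0, 220.0     # stop-signal delay and true stopping latency

for p_trigger_failure in (0.0, 0.2):
    triggered = rng.random(N) >= p_trigger_failure    # did the brake start?
    # a response escapes if the brake never starts, or if the go process
    # finishes before the brake does
    responded = ~triggered | (go < SSD + TRUE_SSRT)
    p_respond = responded.mean()
    # integration method: SSRT = go-RT quantile at P(respond|signal) - SSD
    ssrt_hat = np.quantile(go, p_respond) - SSD
    print(f"P(trigger failure)={p_trigger_failure:.1f}  "
          f"estimated SSRT={ssrt_hat:.0f} ms (true {TRUE_SSRT:.0f} ms)")
```

    With no trigger failures the integration method recovers the true stopping latency; adding a 20% trigger-failure rate inflates the estimate, which is why separating the two components matters.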

    Testing how mind-wandering affects response inhibition

    In an online pilot study, we tested the hypothesis that during mind-wandering (MW) episodes participants would have poorer response inhibition. Based on a statistical power analysis of the pilot study, we now aim to launch a replication study.
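
    As a rough illustration of the kind of sample-size calculation such a power analysis involves, the snippet below uses statsmodels; the effect size is a hypothetical placeholder, not the pilot study's estimate.

```python
# Illustrative sample-size calculation for a replication study.
# The effect size is a placeholder, not the pilot study's estimate.
from statsmodels.stats.power import TTestPower

d = 0.4                # hypothetical within-subject effect size (Cohen's d)
n = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative='two-sided')
print(f"participants needed: {n:.0f}")
```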

    Computational Mechanisms Mediating Inhibitory Control of Coordinated Eye-Hand Movements

    Significant progress has been made in understanding the computational and neural mechanisms that mediate eye and hand movements made in isolation. However, less is known about the mechanisms that control these movements when they are coordinated. Here, we outline our computational approaches using accumulation-to-threshold and race-to-threshold models to elucidate the mechanisms that initiate and inhibit these movements. We suggest that, depending on the behavioral context, the initiation and inhibition of coordinated eye-hand movements can operate in two modes: coupled and decoupled. The coupled mode operates when the task context requires a tight coupling between the effectors; a common command initiates both effectors, and a unitary inhibitory process is responsible for stopping them. Conversely, the decoupled mode operates when the task context demands weaker coupling between the effectors; separate commands initiate the eye and hand, and separate inhibitory processes are responsible for stopping them. We hypothesize that higher-order control processes assess the behavioral context and choose the most appropriate mode. This computational mechanism can explain the heterogeneous results observed across many studies that have investigated the control of coordinated eye-hand movements and may also serve as a general framework for understanding the control of complex multi-effector movements.
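
    The building block shared by both modes is a noisy accumulator that rises to a threshold, with the crossing time giving the decision latency. A minimal sketch of this primitive (with illustrative, assumed parameters) is given below.

```python
# Minimal accumulation-to-threshold sketch: noisy evidence drifts toward a
# bound, and the crossing time is the decision latency. Parameters are
# illustrative only.
import numpy as np

def first_passage_time(drift, noise_sd, threshold, dt=0.001, rng=None,
                       max_t=5.0):
    """Simulate one noisy accumulator; return the threshold-crossing time."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(2)
rts = [first_passage_time(drift=1.0, noise_sd=0.2, threshold=0.3, rng=rng)
       for _ in range(2000)]
print(f"mean RT = {np.mean(rts)*1000:.0f} ms, SD = {np.std(rts)*1000:.0f} ms")
```

    The coupled mode would draw one such crossing time shared by both effectors, while the decoupled mode would run one accumulator per effector.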

    Contrasting speed-accuracy tradeoffs for eye and hand movements reveal the optimal nature of saccade kinematics

    In contrast to hand movements, the existence of a neural representation of saccade kinematics is unclear. Saccade kinematics is typically thought to be specified by motor error/desired displacement and generated by brain stem circuits that are not penetrable to voluntary control. We studied the influence of instructed hand movement velocity on the kinematics of saccades executed without explicit instructions. When the hand movement was slow, saccade velocity decreased, independent of saccade amplitude. We leveraged this modulation of saccade velocity to study the optimality of saccades (in terms of velocity and endpoint accuracy) in relation to the well-known speed-accuracy tradeoff that governs voluntary movements (Fitts' law). In contrast to hand movements, which obeyed Fitts' law, normometric saccades exhibited the greatest endpoint accuracy and lower reaction times relative to saccades accompanying slow and fast hand movements. In the slow condition, where saccade endpoint accuracy suffered, we observed that targets were more likely to be foveated by two saccades, resulting in step-saccades. Interestingly, endpoint accuracy was higher in two-saccade trials than in one-saccade trials in both the slow and fast conditions. This indicates that step-saccades are a part of the kinematic plan for optimal control of endpoint accuracy. Taken together, these findings suggest that normometric saccades are already optimized to maximize endpoint accuracy and that the modulation of saccade velocity by hand velocity likely reflects the sharing of kinematic plans between the two effectors.

    NEW & NOTEWORTHY The optimality of saccade kinematics has been suggested by modeling studies, but experimental evidence is lacking. We observed that, when subjects voluntarily modulated their hand velocity, the velocity of saccades accompanying these hand movements was also modulated, suggesting a shared kinematic plan for eye and hand movements. We leveraged this modulation to show that saccades had less endpoint accuracy when their velocity decreased, illustrating that normometric saccades have optimal speed and accuracy.
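
    For reference, the speed-accuracy tradeoff invoked here is Fitts' law in its original formulation, in which movement time MT grows with an index of difficulty set by target distance D and target width W (a and b are empirically fitted constants):

```latex
% Fitts' law (original formulation): movement time grows with the
% index of difficulty ID; a and b are empirically fitted constants.
MT = a + b \, \mathrm{ID}, \qquad
\mathrm{ID} = \log_2\!\left(\frac{2D}{W}\right)
```

    Narrower or more distant targets raise ID and hence movement time for hand movements; the finding above is that normometric saccades sidestep this tradeoff.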

    A Computational Framework for Understanding Eye-Hand Coordination

    Although many studies have documented the robustness of eye-hand coordination, the computational mechanisms underlying such coordinated movements remain elusive. Here, we review the literature, highlighting the differences among mostly phenomenological studies while emphasizing the need to develop a computational architecture that can explain eye-hand coordination across different tasks. We outline a recent computational approach that uses the accumulator model framework to elucidate the mechanisms involved in the coordination of the two effectors. We suggest that, depending on the behavioral context, one of two independent mechanisms can be flexibly used for the generation of eye and hand movements. When the context requires a tight coupling between the effectors, a common command is instantiated to drive both effectors (common mode). Conversely, when the behavioral context demands flexibility, separate commands are sent to the eye and hand effectors to initiate them flexibly (separate mode). We hypothesize that a higher-order executive controller assesses the behavioral context, allowing switching between the two modes. Such a computational architecture can provide a conceptual framework that explains the observed heterogeneity in eye-hand coordination.
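
    A minimal simulation (under assumed, illustrative distributions) shows how the two modes separate empirically: a common command with effector-specific motor delays yields strongly correlated eye and hand reaction times, whereas separate commands yield nearly uncorrelated reaction times even when the marginal distributions are matched.

```python
# Sketch of the two initiation modes: a common command predicts strongly
# correlated eye and hand RTs; separate commands predict weak correlation.
# Distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# common mode: one decision time plus effector-specific motor delays (ms)
decision = rng.normal(250, 50, N)
eye_common = decision + rng.normal(50, 10, N)
hand_common = decision + rng.normal(120, 25, N)

# separate mode: independent decision times per effector, with SDs chosen
# so the marginal RT distributions roughly match the common mode
eye_sep = rng.normal(300, 51, N)
hand_sep = rng.normal(370, 56, N)

r_common = np.corrcoef(eye_common, hand_common)[0, 1]
r_sep = np.corrcoef(eye_sep, hand_sep)[0, 1]
print(f"eye-hand RT correlation: common mode {r_common:.2f}, "
      f"separate mode {r_sep:.2f}")
```

    Because the marginals match, only the trial-by-trial correlation distinguishes the modes, which is why correlational analyses are central to this framework.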

    Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

    Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift-diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, whereas an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could be distinguished only on the basis of the variability, and not the means, of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures.

    NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements.
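
    The mean-SD scaling that this analysis leverages can be reproduced with a LATER-style accumulator, in which RT is a threshold divided by a normally distributed rise rate; scaling the threshold then scales the mean and standard deviation of the RT distribution together. The sketch below (illustrative parameters, not the fitted model) shows the resulting constant coefficient of variation.

```python
# Sketch of the linear mean-SD scaling of RT distributions: in a LATER-style
# accumulator (RT = threshold / rate, rate ~ Normal), scaling the threshold
# scales mean and SD together, so the coefficient of variation (CV) stays
# constant. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
rate = rng.normal(5.0, 1.0, 500_000)
rate = rate[rate > 1.0]          # discard implausibly slow or negative rates

for threshold in (1.0, 1.5, 2.0):
    rt = threshold / rate
    print(f"threshold={threshold:.1f}: mean={rt.mean()*1000:.0f} ms, "
          f"SD={rt.std()*1000:.0f} ms, CV={rt.std()/rt.mean():.2f}")
```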