18 research outputs found

    Predicting the Probability of Hitting a Target.

<p>We illustrate how we predicted the probability of hitting the targets using data from one subject (AI). <b>A.</b> Suppose that the subject chooses the speed of the first movement marked in blue (710 mm/sec). Then, because of the time constraint, the speed of the second movement cannot be less than that marked in green (1008 mm/sec). The speed of each movement can then be mapped onto accuracy using the SAT estimated for each subject and condition. <b>B.</b> Based on the accuracy profile described in panel A, we simulated 10,000 endpoints for each of the two movements and plotted them as shown. As a consequence of the speed-accuracy tradeoff, the time constraint, and the size of the targets, the probability of hitting the first target was .72 and that of hitting the second was .57. The arrows represented the direction of movement. <b>C.</b> The probability of hitting the targets estimated from the fitted speed-accuracy tradeoff was plotted, together with the actual proportion of targets hit (with 95% confidence intervals), as a function of average speed. Different colors coded for different direction conditions.</p>
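The Monte Carlo step described in panel B can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the authors' code: it assumes isotropic Gaussian endpoint scatter whose standard deviation follows a single linear SAT, and the intercept, slope, and target radius used below are invented values.

```python
import numpy as np

def hit_probability(speed, intercept, slope, target_radius, n_sim=10_000, rng=None):
    """Monte Carlo estimate of the probability of landing inside a circular target.

    Assumes endpoint scatter is isotropic Gaussian with standard deviation given
    by a linear speed-accuracy tradeoff: sigma(speed) = intercept + slope * speed.
    (The paper fits separate SATs parallel and perpendicular to the movement.)
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = intercept + slope * speed
    endpoints = rng.normal(0.0, sigma, size=(n_sim, 2))   # simulated endpoints around the target center
    distances = np.linalg.norm(endpoints, axis=1)
    return np.mean(distances <= target_radius)

# Illustrative parameter values only (not taken from the paper):
p_first = hit_probability(speed=710, intercept=1.0, slope=0.01, target_radius=12.0)
p_second = hit_probability(speed=1008, intercept=1.0, slope=0.01, target_radius=12.0)
print(f"P(hit first) = {p_first:.2f}, P(hit second) = {p_second:.2f}")
```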

    Statistical independence of movements.

<p>The probability of hitting the second target given that the first target was missed was plotted against the probability of hitting the second target given that the first target was hit. Each point represented a combination of reward and distance conditions from a subject. Because most points were distributed symmetrically about the diagonal line, the outcomes of the two movements could be treated as statistically independent. See text.</p>
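The independence check amounts to comparing two conditional hit rates estimated from the trial outcomes. A minimal sketch, assuming outcomes are stored as boolean arrays (the function name and toy data below are illustrative, not from the paper):

```python
import numpy as np

def conditional_hit_probabilities(hit_first, hit_second):
    """Return P(hit 2nd | hit 1st) and P(hit 2nd | miss 1st) from boolean trial outcomes."""
    hit_first = np.asarray(hit_first, dtype=bool)
    hit_second = np.asarray(hit_second, dtype=bool)
    return hit_second[hit_first].mean(), hit_second[~hit_first].mean()

# Toy outcomes generated independently, so the two conditional probabilities should be similar:
rng = np.random.default_rng(0)
first = rng.random(500) < 0.7
second = rng.random(500) < 0.6
print(conditional_hit_probabilities(first, second))
```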

    Movement times and dwell time.

<p>For each subject and condition, we plotted the mean movement times and dwell time of the unequal-reward conditions against those of the equal-reward conditions. Different colors were used to represent the distance conditions: green represented the equal-distance condition and orange the unequal-distance condition. <b>A.</b> Mean movement time to the first target. <b>B.</b> Mean movement time to the second target. <b>C.</b> Mean dwell time. <b>D.</b> Mean total time. Error bars represented 2 standard errors of the mean.</p>

    Sequential movement task.

<p>In a visually guided sequential pointing task, subjects started every trial by placing their index finger on the starting position (red dot). The subject's task was to hit the blue target (referred to as target A in the main text) and the green target (target B) in sequence within 400 ms. The green target was located in one of the eight possible locations shown. These eight locations were determined by the four possible angle changes between the first and second movements and the two possible distances between the first and second targets. We emphasize that only one green target was present on each trial.</p>

    Speed-accuracy tradeoff (SAT).

<p><b>A.</b> Spatial variability parallel to the direction of movement was plotted as a function of the average speed of the movement (mm/sec) for subject AI. Different colors coded for different direction conditions. Each data point represented a single condition. The lines represented the best-fitting linear SAT functions (Eq. 4). <b>B.</b> Spatial variability perpendicular to the direction of movement was plotted against average speed for the same subject.</p>
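Fitting a linear SAT of the form sigma = a + b * speed reduces to an ordinary least-squares fit per subject and condition. The sketch below assumes that form (consistent with the linear SAT of Eq. 4, though not necessarily its exact parameterization); the example speeds and standard deviations are invented.

```python
import numpy as np

def fit_linear_sat(mean_speeds, spatial_sds):
    """Least-squares fit of sigma = a + b * speed to per-condition endpoint variability."""
    X = np.column_stack([np.ones_like(mean_speeds), mean_speeds])
    (a, b), *_ = np.linalg.lstsq(X, spatial_sds, rcond=None)
    return a, b

# Illustrative data: per-condition mean speed (mm/sec) and endpoint SD (mm)
speeds = np.array([400.0, 600.0, 800.0, 1000.0])
sds = np.array([3.1, 4.0, 5.2, 6.1])
a, b = fit_linear_sat(speeds, sds)
print(f"sigma = {a:.2f} + {b:.4f} * speed")
```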

    The coordinate system.

<p>We used a two-dimensional coordinate system to represent each movement. The coordinate system was embedded in the stimulus array. One axis was parallel to the line connecting the start point and the end point of the movement, and the other was perpendicular to the first. The origin was centered on the end point of the distribution.</p>
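The coordinate change described here is a projection of each endpoint onto axes parallel and perpendicular to the start-to-end direction of the movement. A minimal sketch (function name and example points are assumptions), with the origin placed at the movement end point:

```python
import numpy as np

def movement_aligned_coordinates(endpoints, start, end):
    """Express 2-D endpoints in axes parallel/perpendicular to the start-to-end direction,
    with the origin at `end`."""
    endpoints = np.atleast_2d(endpoints).astype(float)
    direction = np.asarray(end, float) - np.asarray(start, float)
    parallel_axis = direction / np.linalg.norm(direction)
    perpendicular_axis = np.array([-parallel_axis[1], parallel_axis[0]])
    centered = endpoints - np.asarray(end, float)
    return np.column_stack([centered @ parallel_axis, centered @ perpendicular_axis])

# Example: endpoints scattered around a target at (100, 50) for a movement starting at (0, 0)
pts = np.array([[101.0, 52.0], [98.5, 49.0]])
print(movement_aligned_coordinates(pts, start=(0.0, 0.0), end=(100.0, 50.0)))
```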

    Detailed comparisons.

<p><b>A.</b> A comparison of time allocation in the training session with that in the experimental session. We plotted the proportion of time subjects allocated to the first movement in the experimental session against that in the training session. If time allocation were similar between the experimental and training sessions, most points would fall symmetrically about the diagonal line. Colors were used to code the reward conditions (blue: equal-reward condition; red: unequal-reward condition), and different symbols were used to code the distance conditions (dot: equal-distance condition; cross: unequal-distance condition). <b>B.</b> The estimated probability of hitting target B (the second movement) was plotted against the mean movement time (ms) to target A (the first movement), separately for the fastest 25% of first movements (in red) and the slowest 25% (in green). Each data point in the graph represented a combination of reward and distance conditions from a subject. The duration of the first movement had little effect on the probability of success of the second movement.</p>
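The split used in panel B can be sketched as a simple quartile comparison: second-target hit rate for the fastest versus slowest 25% of first movements. This is an illustrative reconstruction (function name and toy data are assumptions, not the authors' analysis code):

```python
import numpy as np

def hit_rate_by_first_movement_time(mt_first, hit_second):
    """Second-target hit rate for trials in the fastest vs. slowest quartile of first-movement times."""
    mt_first = np.asarray(mt_first, float)
    hit_second = np.asarray(hit_second, bool)
    q25, q75 = np.percentile(mt_first, [25, 75])
    fast = mt_first <= q25    # shortest movement times = fastest movements
    slow = mt_first >= q75
    return hit_second[fast].mean(), hit_second[slow].mean()

# Toy data: second-target outcomes generated independently of first-movement time
rng = np.random.default_rng(1)
mt = rng.normal(180, 20, 400)       # first-movement times (ms)
hits = rng.random(400) < 0.6
print(hit_rate_by_first_movement_time(mt, hits))   # the two rates should be similar
```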

    Maximizing expected gain.

<p><b>A.</b> The probabilities of hitting the first target A (blue) and the second target B (green) for subject YCC were plotted as functions of the proportion of time allocated to the first movement. As more time was spent on the first movement, the probability of hitting the first target increased while the probability of hitting the second target decreased. <b>B.</b> The sum of the expected gains for the two targets was plotted as a function of the same time-allocation proportion. Here, the reward for hitting the first target and that for hitting the second target were both $10. The maximum on the orange curve ($15.90) corresponded to the optimal time allocation. <b>C.</b> The same format as <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0008228#pone-0008228-g002" target="_blank">Figure 2B</a>, but now hitting the first target earns a reward of $10 while hitting the second earns a reward of $50. Compared with the condition in which the target rewards were equal, the maximum on the orange curve ($51.29) has shifted to the left, indicating that subject YCC should allocate more time to the more rewarding target in order to maximize expected gain. The vertical arrows represent the loss to the subject that results from allocating time non-optimally, expressed as a percentage of maximum expected gain.</p>
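Finding the allocation that maximizes expected gain amounts to evaluating reward-weighted hit probabilities over a grid of allocations and taking the argmax. The sketch below uses assumed sigmoid hit-probability curves, not subject YCC's fitted functions; only the qualitative leftward shift under unequal rewards is meant to carry over.

```python
import numpy as np

def expected_gain(alloc, p_hit_first, p_hit_second, reward_first, reward_second):
    """Expected gain as a function of the proportion of time allocated to the first movement.
    `p_hit_first`/`p_hit_second` map that proportion to hit probabilities."""
    return reward_first * p_hit_first(alloc) + reward_second * p_hit_second(alloc)

# Illustrative hit-probability curves (assumptions, not fitted to data):
p_first = lambda a: 1.0 / (1.0 + np.exp(-12 * (a - 0.35)))   # rises with more time on movement 1
p_second = lambda a: 1.0 / (1.0 + np.exp(12 * (a - 0.65)))   # falls as less time remains for movement 2

alloc = np.linspace(0.05, 0.95, 181)
gain_equal = expected_gain(alloc, p_first, p_second, 10, 10)
gain_unequal = expected_gain(alloc, p_first, p_second, 10, 50)
print("optimal allocation, equal rewards:  ", alloc[np.argmax(gain_equal)])
print("optimal allocation, unequal rewards:", alloc[np.argmax(gain_unequal)])  # shifted left
```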

    Dynamic combination of sensory and reward information under time pressure

<div><p>When making choices, collecting more information is beneficial but comes at the cost of sacrificing time that could be allocated to making other potentially rewarding decisions. To investigate how the brain balances these costs and benefits, we conducted a series of novel experiments in humans and simulated various computational models. Under six levels of time pressure, subjects made decisions either by integrating sensory information over time or by dynamically combining sensory and reward information over time. We found that during sensory integration, time pressure reduced performance as the deadline approached, and choice was more strongly influenced by the most recent sensory evidence. By fitting performance and reaction time with various models, we found that our experimental results are more compatible with either leaky integration of sensory information with an urgency signal or a decision process based on stochastic transitions between discrete states modulated by an urgency signal. When combining sensory and reward information, subjects spent less time on integration than optimally prescribed when reward decreased slowly over time, and the most recent evidence did not have the maximal influence on choice. This suboptimal pattern of reaction times was partially mitigated in an equivalent control experiment in which sensory integration over time was not required, indicating that the suboptimal response time was influenced by the perception of imperfect sensory integration. Meanwhile, during combination of sensory and reward information, performance did not drop as the deadline approached, and response time did not differ between correct and incorrect trials. These results indicate a decision process different from the one involved in the integration of sensory information over time. Together, our results not only reveal limitations in sensory integration over time but also illustrate how these limitations influence the dynamic combination of sensory and reward information.</p></div>
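As a rough illustration of one model class named in the abstract, the sketch below simulates a leaky accumulator whose momentary input is scaled by a linearly growing urgency signal. All parameter values and the specific functional form are assumptions for illustration; this is not the authors' fitted model.

```python
import numpy as np

def leaky_urgency_decision(evidence, leak=0.2, urgency_slope=0.5,
                           threshold=1.0, dt=0.01, noise_sd=1.0, rng=None):
    """Simulate one trial: leaky integration of urgency-scaled evidence to a bound.
    Returns (choice, decision time); a forced choice is made at the deadline."""
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    for step, e in enumerate(evidence):
        t = step * dt
        urgency = 1.0 + urgency_slope * t                       # gain grows as the deadline nears
        x += dt * (-leak * x + urgency * e) \
             + noise_sd * np.sqrt(dt) * rng.standard_normal()   # leaky, noisy accumulation
        if abs(x) >= threshold:
            return (1 if x > 0 else -1), t
    return (1 if x > 0 else -1), len(evidence) * dt             # deadline reached

# Example: a weak, constant evidence stream favoring choice +1 over a 2-second deadline
rng = np.random.default_rng(2)
stream = np.full(200, 0.8)
print(leaky_urgency_decision(stream, rng=rng))
```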

    Response time was shorter on correct than incorrect trials in Experiment 1 but not in Experiment 2.

<p>(<b>A</b>) Plotted is the difference in normalized RT between correct and incorrect trials across all subjects in Experiment 1. The solid gray line shows the average difference over all schedules, and the gray asterisk next to it indicates that this difference was significantly different from 0 (two-sided sign test, <i>p</i> < 0.05). Each black asterisk shows that the difference in RT between correct and incorrect trials for a given schedule was significantly different from 0 (two-sided sign test, <i>p</i> < 0.05), and the outliers are indicated by circles with red crosses. (<b>B</b>) The same as in (A) but for Experiment 2. In Experiment 2, the difference in the normalized RT between correct and incorrect trials was not different from 0 in any schedule or in the average data.</p>
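The test reported in this figure is a two-sided sign test on per-subject differences in normalized RT. A generic sketch using SciPy's exact binomial test (not the authors' analysis code; the example differences are invented):

```python
import numpy as np
from scipy.stats import binomtest

def sign_test(differences):
    """Two-sided sign test on paired differences (zeros are discarded)."""
    differences = np.asarray(differences, float)
    nonzero = differences[differences != 0]
    n_positive = int(np.sum(nonzero > 0))
    return binomtest(n_positive, n=len(nonzero), p=0.5, alternative='two-sided').pvalue

# Toy per-subject differences in normalized RT (correct minus incorrect), illustrative only:
diffs = np.array([-0.04, -0.06, -0.02, 0.01, -0.05, -0.03, -0.07, -0.01])
print(f"sign-test p = {sign_test(diffs):.3f}")
```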