
Maximizing decision rate in multisensory integration

Abstract

Effective decision-making in an uncertain world requires making use of all available information, even when it is distributed across different sensory modalities, as well as trading off the speed of a decision against its accuracy. In tasks with a fixed stimulus presentation time, animal and human subjects have previously been shown to combine information from several modalities in a statistically optimal manner. Furthermore, for easily discriminable stimuli, multimodal reaction times are typically faster than predicted from the unimodal conditions under a race-to-threshold mechanism with an independent (parallel) race for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects perform optimal cue combination when they are free to choose their response times.
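For concreteness, the statistically optimal cue combination referenced here is usually stated as follows (a common formulation assuming independent Gaussian noise in each modality; the notation is ours, not taken from the paper):

```latex
% Optimal (maximum-likelihood) cue combination: the combined estimate's
% inverse variance is the sum of the unimodal inverse variances,
\sigma_{\mathrm{comb}}^{-2} \;=\; \sigma_{\mathrm{vis}}^{-2} + \sigma_{\mathrm{vest}}^{-2},
% or, equivalently, in terms of discrimination sensitivity d',
d'_{\mathrm{comb}} \;=\; \sqrt{\,d'^{2}_{\mathrm{vis}} + d'^{2}_{\mathrm{vest}}\,}.
```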
Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria, which relate multimodal discrimination performance to sensitivity in the unimodal conditions. Furthermore, multimodal reaction times are slower than those predicted by a parallel race model, the opposite of what is commonly observed for easily discriminable stimuli.
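The parallel race model benchmark against which the multimodal reaction times are compared can be written as follows (a standard formulation for statistically independent races; again, the notation is illustrative):

```latex
% Independent (parallel) race model: with unimodal reaction-time
% distributions F_{vis}(t) and F_{vest}(t), whichever race finishes
% first triggers the response, so the predicted multimodal distribution is
F_{\mathrm{pred}}(t) \;=\; 1 - \bigl(1 - F_{\mathrm{vis}}(t)\bigr)\bigl(1 - F_{\mathrm{vest}}(t)\bigr).
% "Slower than predicted" means the observed multimodal distribution
% satisfies F_{\mathrm{multi}}(t) < F_{\mathrm{pred}}(t).
```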
Despite these violations of the standard criteria for optimal cue combination, we show that subjects still accumulate evidence optimally across time and cues, even when the strength of the evidence varies over time. Additionally, subjects adjust their decision bounds, which control the trade-off between the speed and accuracy of a decision, such that they achieve correct decision rates close to the maximum achievable value.
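To illustrate the kind of mechanism described above, the sketch below simulates a reliability-weighted evidence accumulator with a decision bound. This is a minimal illustration under our own assumptions (Gaussian momentary evidence, hypothetical reliability profiles, illustrative parameter values), not the model fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(heading=1.0, bound=1.5, dt=0.01, t_max=2.0):
    """Minimal sketch of reliability-weighted evidence accumulation.

    Each cue provides noisy momentary evidence about heading direction.
    Weighting each sample by its current reliability makes more
    informative cues and time points count more, which is the optimal
    way to accumulate evidence whose strength varies over time.
    A response is triggered when the accumulated evidence crosses
    +/- bound; the bound sets the speed/accuracy trade-off.
    """
    n = int(t_max / dt)
    t = np.arange(n) * dt
    # Hypothetical time-varying reliability profiles: vestibular evidence
    # roughly tracks acceleration, visual evidence roughly tracks velocity.
    rel_vis = np.sin(np.pi * t / t_max) ** 2
    rel_vest = np.abs(np.cos(np.pi * t / t_max))
    x = 0.0  # accumulated evidence (log-likelihood-ratio-like quantity)
    for i in range(n):
        # Drift scales with reliability; noise variance scales with it too,
        # so each increment's signal-to-noise reflects its reliability.
        x += rel_vis[i] * heading * dt + np.sqrt(rel_vis[i] * dt) * rng.normal()
        x += rel_vest[i] * heading * dt + np.sqrt(rel_vest[i] * dt) * rng.normal()
        if abs(x) >= bound:
            return np.sign(x), t[i]  # choice (+1/-1) and reaction time
    return np.sign(x), t_max  # bound never reached: decide at stimulus end

# Example: estimate accuracy and mean reaction time for one bound setting.
results = [simulate_trial(heading=0.5) for _ in range(1000)]
accuracy = np.mean([choice > 0 for choice, _ in results])
mean_rt = np.mean([rt for _, rt in results])
print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.2f} s")
```

Sweeping `bound` while measuring the fraction of correct choices per unit time shows how a decision bound controls the speed/accuracy trade-off and where the correct-decision rate peaks.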
