
    Comparing eye tracking with electrooculography for measuring individual sentence comprehension duration

    The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. To reduce practical challenges, we first shortened the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different group of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found with both the eye-tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye-tracking data. The results of the growth curve analysis were compared with those of the bootstrap procedure; both analysis methods yielded similar processing durations.
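
    As an illustration of the core logic only (not the authors' exact implementation): the bootstrap procedure resamples trials with replacement, recomputes the target-detection curve, and reads off a disambiguation-to-decision delay per resample. The sketch below assumes fixations are pre-coded per trial as +1 (target fixated), -1 (competitor fixated), or 0 (elsewhere); all names (`stda`, `bootstrap_ddd`, `rel_frac`) are hypothetical.

```python
import numpy as np

def stda(trials):
    """Single target detection amplitude: mean target-minus-competitor
    fixation signal across trials. trials: (n_trials, n_samples) array
    with +1 = target fixated, -1 = competitor fixated, 0 = elsewhere."""
    return trials.mean(axis=0)

def first_crossing(curve, threshold):
    """Index of the first sample at which curve exceeds threshold,
    or None if it never does."""
    above = np.flatnonzero(curve >= threshold)
    return int(above[0]) if above.size else None

def bootstrap_ddd(trials, t, t_disambig, rel_frac=0.5, n_boot=1000, seed=0):
    """Resample trials with replacement, recompute the sTDA, and derive
    one disambiguation-to-decision delay (DDD) per resample: the time at
    which the sTDA first exceeds a relative threshold (a fraction of its
    own maximum), minus the disambiguation point t_disambig."""
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    ddds = []
    for _ in range(n_boot):
        curve = stda(trials[rng.integers(0, n, n)])
        idx = first_crossing(curve, rel_frac * curve.max())
        if idx is not None:
            ddds.append(t[idx] - t_disambig)
    return np.asarray(ddds)
```

    The mean and the 2.5/97.5 percentiles of the returned distribution would then correspond to a per-participant processing-duration estimate and its 95% confidence interval.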

    Effect of Speech Rate on Neural Tracking of Speech

    Speech comprehension requires effort in demanding listening situations. Selective attention, which may be required to focus on a specific talker in a multi-talker environment, may increase effort by demanding additional cognitive resources, and is known to enhance the neural representation of the attended talker in the listener’s neural response. The aim of the study was to investigate the relation between listening effort, as quantified by subjective effort ratings and pupil dilation, and neural speech tracking during sentence recognition. Task demands were varied using sentences with varying levels of linguistic complexity and two different speech rates in a picture-matching paradigm with 20 normal-hearing listeners. The participants’ task was to match the acoustically presented sentence with a picture presented before the acoustic stimulus. Afterwards, they rated their perceived effort on a categorical effort scale. During each trial, pupil dilation (as an indicator of listening effort) and the electroencephalogram (as an indicator of neural speech tracking) were recorded. Neither measure was significantly affected by linguistic complexity. However, speech rate strongly influenced subjectively rated effort, pupil dilation, and neural tracking. The neural tracking analysis revealed a shorter latency for faster sentences, which may reflect a neural adaptation to the rate of the input. No relation was found between neural tracking and listening effort, even though both measures were clearly influenced by speech rate. This is probably due to factors that influence the two measures differently. Consequently, the amount of listening effort is not clearly represented in the neural tracking.
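
    For intuition only: neural speech tracking is commonly quantified by relating the speech envelope to the EEG, and the latency effect reported above corresponds to the lag at which that relation peaks. The minimal sketch below uses plain envelope-EEG cross-correlation; the study's actual analysis may well differ (e.g., temporal response functions), and `envelope`, `eeg`, and `fs` are hypothetical inputs assumed to be equal-length 1-D arrays and a sampling rate.

```python
import numpy as np

def tracking_latency(envelope, eeg, fs, max_lag_s=0.5):
    """Correlate the speech envelope with one EEG channel at a range of
    non-negative lags (the neural response follows the audio) and return
    the lag of the peak correlation in seconds, plus the full curve."""
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    lags = np.arange(int(max_lag_s * fs) + 1)
    corr = np.array([np.mean((env[:-lag] if lag else env) * sig[lag:])
                     for lag in lags])
    return lags[np.argmax(corr)] / fs, corr
```

    Under this simplification, a "shorter latency for faster sentences" would show up as the correlation peak shifting toward smaller lags.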

    Segmentation of the stimulus presentation for statistical analysis.


    Single target detection amplitudes (sTDAs) recorded with EOG and ET.

    Three exemplary participants recorded simultaneously with EOG (left panel) and ET (right panel) are shown. The colored areas represent the 95% confidence intervals. The plus signs denote the DM where the sTDA first exceeds a relative threshold (indicated beside the dashed line) and the circles mark the PTD for each sentence structure. The line starting from the PTD indicates the DDD, which represents the sentence comprehension processing duration.
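
    A note on how EOG can stand in for the eye tracker here: with only two lateral regions of interest, the horizontal EOG voltage, calibrated on the fixation points, can be mapped to coarse left/right fixation labels before computing the sTDA. The sketch below is one plausible mapping under that assumption; `v_left`, `v_right`, and `dead_frac` are hypothetical calibration values, not the authors' procedure.

```python
import numpy as np

def classify_fixations_eog(heog, v_left, v_right, dead_frac=0.25):
    """Coarse fixation labels from a calibrated horizontal EOG trace.
    v_left / v_right: mean voltages recorded while the participant
    fixated the left and right calibration points. Returns -1 (left
    ROI), +1 (right ROI), or 0 (center/elsewhere) per sample."""
    center = 0.5 * (v_left + v_right)
    dead = dead_frac * abs(v_right - v_left)   # dead zone around center
    sign = 1 if v_right > v_left else -1       # polarity of the montage
    labels = np.zeros(heog.shape, dtype=int)
    labels[(heog - center) > dead] = sign
    labels[(heog - center) < -dead] = -sign
    return labels
```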

    Visual stimulus.

    Picture set for the sentence: Die nasse Ente tadelt der treue Hund. In English: The wet duck (accusative, but indicated ambiguously by the article “die”; the nominative would also be possible) reprimands the loyal dog (nominative, indicated by the unambiguous article “der”). The target is depicted on the left side and the competitor on the right side in this example. The three black circles are the fixation points for the EOG calibration; they disappear when the picture set is displayed. ROI 1 and ROI 2 are the regions of interest where the pictures are displayed. ROI 3 is the background. The figure is adapted, in modified form, from Wendt et al. [5].

    Disambiguation to decision delays (DDDs) across all participants.

    DDDs in quiet and modulated noise recorded with EOG and analyzed with the bootstrap procedure (EOG_BS; upper panel), recorded with ET and analyzed with the bootstrap procedure (ET_BS; middle panel), and recorded with EOG and analyzed with the growth curve analysis (EOG_GCA; lower panel) across all participants. Significant differences between sentence structures in the EOG_BS condition are denoted by an asterisk (*) above the sentence structure that differs from the other sentence structures (upper panel). Significant differences between sentence structures of different recording techniques (between EOG_BS and ET_BS) or analysis methods (between EOG_BS and EOG_GCA) are denoted by an asterisk (*) below the respective sentence structure in the ET_BS and EOG_GCA panels.

    Single target detection amplitudes (sTDAs) of the BS and GCA.

    sTDAs determined with the bootstrap procedure (solid lines) and sTDAs modeled with the GCA procedure (dashed lines) of two exemplary participants (left and right panels). The three sentence structures are displayed separately (SVO: upper panels; OVS: middle panels; ambOVS: lower panels). The colored areas represent the 95% confidence interval of the bootstrap procedure. The plus signs (black: sTDA BS; grey: sTDA GCA) denote the DM where the sTDA first exceeded a relative threshold and the circles indicate the PTD for each sentence structure. The line starting from the PTD shows the DDD; this represents the processing duration during sentence comprehension (solid: DDD BS; dashed: DDD GCA).
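
    For orientation, the core of a growth curve analysis is fitting the fixation curve with an intercept plus orthogonal polynomial time terms; published GCAs typically embed this in a (generalized) mixed-effects model across participants, which is omitted here. A minimal sketch of the shape fit only; `orth_poly`, `fit_gca`, and `degree=4` are hypothetical choices, not the paper's specification.

```python
import numpy as np

def orth_poly(t, degree):
    """Orthogonal polynomial time terms (as in R's poly()), obtained
    via QR decomposition of a Vandermonde matrix; the returned columns
    are mutually orthogonal predictors."""
    V = np.vander(np.asarray(t, dtype=float), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q[:, 1:]  # drop the constant column

def fit_gca(t, stda_curve, degree=4):
    """Least-squares fit of an sTDA curve with an intercept plus
    orthogonal linear-to-quartic time terms; returns the smoothed
    model curve used in place of the raw bootstrap curve."""
    X = np.column_stack([np.ones(len(t)), orth_poly(t, degree)])
    beta, *_ = np.linalg.lstsq(X, stda_curve, rcond=None)
    return X @ beta
```

    Threshold crossing and DDD would then be read off the fitted curve exactly as for the bootstrap sTDA, which is what makes the two analysis methods directly comparable.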

    Disambiguation to decision delay (DDD) differences between recording techniques and analysis methods.

    Differences with standard errors between disambiguation to decision delays (DDDs) of data recorded with EOG and analyzed with the bootstrap procedure (EOG_BS) and data recorded with ET and analyzed with the bootstrap procedure (ET_BS) are indicated in black. Differences between DDDs of EOG_BS and data recorded with EOG and analyzed with the growth curve analysis (EOG_GCA) are indicated in grey. Statistical differences from zero are denoted by an asterisk (*) above the respective sentence structure.

    Relative threshold and fixed threshold.

    DDDs of the EOG_BS data determined based on the individual relative threshold compared to DDDs determined based on a fixed threshold of 15%. The results of all participants in both listening conditions are shown. Outliers are highlighted by red circles.