64 research outputs found

    Contrast matching and discrimination in human vision

    Get PDF
    The aim of this work was to investigate human contrast perception at various contrast levels ranging from detection threshold to suprathreshold levels by using psychophysical techniques. The work consists of two major parts. The first part deals with contrast matching, and the second part deals with contrast discrimination. The contrast matching technique was used to determine when the perceived contrasts of different stimuli were equal. The effects of spatial frequency, stimulus area, image complexity and chromatic contrast on contrast detection thresholds and matches were studied. These factors influenced detection thresholds and perceived contrast at low contrast levels. However, at suprathreshold contrast levels perceived contrast became directly proportional to the physical contrast of the stimulus and almost independent of the factors affecting detection thresholds. Contrast discrimination was studied by measuring contrast increment thresholds, which indicate the smallest detectable contrast difference. The effects of stimulus area, external spatial image noise and retinal illuminance were studied. These factors affected contrast detection thresholds and increment thresholds measured at low contrast levels. At high contrast levels, contrast increment thresholds became very similar, so that the effect of these factors decreased. Human contrast perception was modelled by regarding the visual system as a simple image processing system. A visual signal is first low-pass filtered by the ocular optics. This is followed by spatial high-pass filtering by the neural visual pathways, and the addition of internal neural noise. Detection is mediated by a local matched filter, a weighted replica of the stimulus, whose sampling efficiency decreases with increasing stimulus area and complexity. According to the model, the signals to be compared in a contrast matching task are first transferred through the early image processing stages mentioned above.
Then they are filtered by a restoring transfer function which compensates for the low-level filtering and limited spatial integration at high contrast levels. Perceived contrasts of the stimuli are equal when the restored responses to the stimuli are equal. According to the model, the signals to be discriminated in a contrast discrimination task first go through the early image processing stages, after which signal-dependent noise is added to the matched filter responses. The decision made by the human brain is based on a comparison between the responses of the matched filters to the stimuli, and the accuracy of the decision is limited by pre- and post-filter noise. The model for human contrast perception could accurately describe the results of contrast matching and discrimination in various conditions.
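The early stages of the model described above (optical low-pass filtering, neural high-pass filtering, matched-filter detection) can be sketched as a simple one-dimensional pipeline. This is an illustrative reconstruction only: the filter shapes, parameter values (`sigma_optics`, `sigma_surround`) and function names are assumptions, not the model's actual parameterisation.

```python
import numpy as np

def gaussian_kernel(sigma, size=61):
    """Normalised 1-D Gaussian blur kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def early_response(stimulus, sigma_optics=2.0, sigma_surround=8.0):
    """Early stages of the model: optical low-pass filtering,
    followed by neural high-pass filtering (signal minus local mean)."""
    low = np.convolve(stimulus, gaussian_kernel(sigma_optics), mode="same")
    local_mean = np.convolve(low, gaussian_kernel(sigma_surround), mode="same")
    return low - local_mean

def matched_filter_response(stimulus, template, sampling_efficiency=1.0):
    """Detection stage: correlate the filtered stimulus with a weighted
    replica (matched filter) of the expected signal."""
    r = early_response(stimulus)
    t = early_response(template)
    return sampling_efficiency * float(np.dot(r, t))

# A sinusoidal grating at two contrasts: because the early stages are
# linear, the matched-filter response grows with physical contrast.
x = np.linspace(0, 8 * np.pi, 512)
grating = np.sin(x)
r_low = matched_filter_response(0.1 * grating, grating)
r_high = matched_filter_response(0.5 * grating, grating)
```

In the full model, decreasing `sampling_efficiency` with stimulus area and complexity is what makes large or complex stimuli harder to detect than the linear filtering alone would predict.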

    Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes

    Get PDF
    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, owing to contralateral innervation, this functional lateralization is reflected as a hand advantage in certain tasks. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d′. The d′ values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand/right hemisphere was sustained up to 15 s, while the performance of the right hand/left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand/right hemisphere. Peer reviewed.
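The discriminability index d′ used in the study comes from signal detection theory. A minimal sketch of the standard formula is below; note this is the textbook definition, and the paper may apply a correction for extreme hit or false-alarm rates that is not shown here.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability index: d' = z(H) - z(F),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A hit rate of 0.84 against a false-alarm rate of 0.16 yields d' of
# roughly 2; equal hit and false-alarm rates yield d' = 0 (chance).
d_good = d_prime(0.84, 0.16)
d_chance = d_prime(0.5, 0.5)
```

Higher d′ for letters and geometric shapes than for nonsense shapes means those stimuli were discriminated with a better hit/false-alarm trade-off, independent of response bias.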

    Kielelliset ja keholliset viestit vuorovaikutuksessa [Linguistic and bodily messages in interaction]

    Get PDF
    Review of Haddington & Kääntä (eds.), Kieli, keho ja vuorovaikutus. Multimodaalinen näkökulma sosiaaliseen toimintaan [Language, body and interaction: a multimodal perspective on social action]. 2011.

    Stimulus duration has little effect on auditory, visual and audiovisual temporal order judgement

    Get PDF
    Some classical studies on temporal order judgments (TOJs) suggested a single central process comparing stimulus onsets across modalities. The prevalent current view suggests that there is modality-specific timing estimation followed by a cross-modal stage. If the latter view is correct, TOJs may vary depending on stimulus modality. Further, if TOJ is based only on onsets, stimulus duration should be irrelevant. To address these issues, we used both unisensory and multisensory stimuli to test whether unisensory duration processing influences cross-modal TOJs. The stimuli were auditory noise bursts, visual squares, and their cross-modal combinations presented at 10, 40 and 500 ms durations, and various stimulus onset asynchronies. Psychometric functions were measured with an identical task in all conditions: on each trial, two stimuli were presented, one to the left, the other to the right of fixation. The participants judged which one started first. TOJs were little affected by stimulus duration, implying that they are mainly determined by stimulus onsets. Throughout, the cross-modal just noticeable differences were larger than the unisensory ones. In accordance with the current view, our results suggest that cross-modal TOJs require a comparison of timing after modality-specific estimations. Peer reviewed.
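The psychometric functions and just noticeable differences (JNDs) in a TOJ task are conventionally modelled with a cumulative Gaussian over stimulus onset asynchrony (SOA). The sketch below illustrates that convention; the parameter values (`sigma_ms = 40`, criterion 0.75) are illustrative assumptions, not the study's fitted values.

```python
from statistics import NormalDist

def p_judged_first(soa_ms, pss_ms=0.0, sigma_ms=40.0):
    """Cumulative-Gaussian psychometric function: probability of judging
    the left stimulus as first, as a function of SOA in milliseconds.
    pss_ms is the point of subjective simultaneity (50% point)."""
    return NormalDist(mu=pss_ms, sigma=sigma_ms).cdf(soa_ms)

def jnd_ms(sigma_ms, criterion=0.75):
    """Just noticeable difference: the SOA shift from the PSS needed to
    raise the proportion of 'first' judgments from 0.5 to the criterion."""
    return sigma_ms * NormalDist().inv_cdf(criterion)

# At the PSS the two orders are judged equally often; a flatter
# psychometric function (larger sigma) means a larger JND, which is the
# pattern reported for cross-modal versus unisensory conditions.
p_at_pss = p_judged_first(0.0)
jnd_unisensory = jnd_ms(40.0)
jnd_crossmodal = jnd_ms(80.0)
```
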

    Semantically Congruent Visual Information Can Improve Auditory Recognition Memory in Older Adults

    Get PDF
    In the course of normal aging, memory functions show signs of impairment. Studies of memory in the elderly have previously focused on a single sensory modality, although multisensory encoding has been shown to improve memory performance in children and young adults. In this study, we investigated how audiovisual encoding affects auditory recognition memory in older (mean age 71 years) and younger (mean age 23 years) adults. Participants memorized auditory stimuli (sounds, spoken words) presented either alone or with semantically congruent visual stimuli (pictures, text) during encoding. Subsequent recognition of auditory stimuli was better for those initially presented together with visual stimuli than for those presented alone during encoding. This facilitation was observed in both older and younger participants, while overall memory performance was poorer in older participants. However, the pattern of facilitation was influenced by age: when encoding spoken words, the gain was greater for older adults; when encoding sounds, the gain was greater for younger adults. These findings show that semantically congruent audiovisual encoding improves memory performance in late adulthood, particularly for auditory verbal material. Peer reviewed.

    Disentangling unisensory from fusion effects in the attentional modulation of McGurk effects: a Bayesian modeling study suggests that fusion is attention-dependent

    No full text
    The McGurk effect has been shown to be modulated by attention. However, it remains unclear whether attentional effects are due to changes in unisensory processing or in the fusion mechanism. In this paper, we used published experimental data showing that distraction of visual attention weakens the McGurk effect to fit either the Fuzzy Logical Model of Perception (FLMP), in which the fusion mechanism is fixed, or a variant of it in which the fusion mechanism could vary depending on attention. The latter model was associated with a larger likelihood when assessed with a Bayesian model selection criterion. Our findings suggest that distraction of visual attention affects fusion by decreasing the weight of the visual input.
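The FLMP's fusion rule is a standard multiplicative combination of unisensory supports, normalised across response alternatives. The sketch below shows that rule for a two-alternative case, plus a weighted variant in which an attention weight down-modulates the visual support; the weighted form is an illustrative assumption, and the paper's actual attention-dependent parameterisation may differ.

```python
def flmp_fusion(a, v):
    """FLMP fusion for two alternatives: auditory support a and visual
    support v (each in [0, 1]) are multiplied and normalised."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def weighted_fusion(a, v, w=1.0):
    """Illustrative attention-weighted variant: both the visual support
    and its complement are raised to a weight w in [0, 1]. At w = 1 this
    reduces to standard FLMP; at w = 0 vision is ignored entirely."""
    vw, cw = v ** w, (1 - v) ** w
    return (a * vw) / (a * vw + (1 - a) * cw)

# McGurk-like case: strong auditory support for one category (a = 0.8)
# but conflicting visual support (v = 0.1). Full fusion pulls the
# response away from the auditory percept; removing visual attention
# (w = 0) restores the purely auditory response.
full = flmp_fusion(0.8, 0.1)
no_vision = weighted_fusion(0.8, 0.1, w=0.0)
```
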

    Validated Interpersonal Confidence Questionnaire to Measure the Impact of Improvisation Training

    Get PDF
    Theatre-based improvisation includes a model of constructive communication, which has been applied in education and in fields requiring interpersonal competencies. Here, we present a validation study of the Interpersonal Confidence Questionnaire (ICQ), developed to measure self-reported interpersonal confidence, that is, beliefs regarding one's capability in effective social interactions. Confirmatory factor analysis (n = 208) confirmed the 18-item measurement model of the ICQ as satisfactory, with six factors contributing to interpersonal confidence: performance confidence, flexibility, listening skills, tolerance of failure, collaboration motivation, and presence. The questionnaire showed discriminatory power, acceptable composite reliability, and strong test–retest reliability. The immediate and long-term impact of six improvisation interventions (n = 161) was measured using the ICQ. Improvisation interventions resulted in improvements in interpersonal confidence, performance confidence, and tolerance of failure relative to controls, and the improvement in performance confidence persisted over time. This study provides initial evidence on the validity and reliability of the 18-item, 6-factor ICQ as a self-report measure of interpersonal confidence, which may increase following improvisation training. Keywords: improvisation, interpersonal confidence, performance confidence, tolerance of failure, questionnaire validation. Peer reviewed.

    Connecting directional limb movements to vowel fronting and backing

    Get PDF
    It has been shown recently that when participants are required to pronounce a vowel at the same time as a hand movement, the vocal and manual responses are facilitated when a front vowel is produced with forward-directed hand movements and a back vowel is produced with backward-directed hand movements. This finding suggests a coupling between the spatial programming of articulatory tongue movements and hand movements. The present study revealed that the same effect can also be observed with directional leg movements. The study suggests that the effect operates within common directional processes of movement planning involving at least the tongue, hands and legs, and that these processes might contribute to sound-to-meaning mappings for the semantic concepts of 'forward' and 'backward'. Peer reviewed.