2,952 research outputs found

    The temporal pattern of impulses in primary afferents analogously encodes touch and hearing information

    An open question in neuroscience is the contribution of temporal relations between individual impulses in primary afferents in conveying sensory information. We investigated this question in touch and hearing, while looking for any shared coding scheme. In both systems, we artificially induced temporally diverse afferent impulse trains and probed the evoked perceptions in human subjects using psychophysical techniques. First, we investigated whether the temporal structure of a fixed number of impulses conveys information about the magnitude of tactile intensity. We found that clustering the impulses into periodic bursts elicited graded increases of intensity as a function of burst impulse count, even though fewer afferents were recruited throughout the longer bursts. The interval between successive bursts of peripheral neural activity (the burst-gap) has been demonstrated in our lab to be the most prominent temporal feature for coding skin vibration frequency, as opposed to either spike rate or periodicity. Second, given the similarities between the tactile and auditory systems, we explored the auditory system for an equivalent neural coding strategy. By using brief acoustic pulses, we showed that the burst-gap is a shared temporal code for pitch perception between the modalities. Following this evidence of parallels in temporal frequency processing, we next assessed the perceptual frequency equivalence between the two modalities using auditory and tactile pulse stimuli of simple and complex temporal features in cross-sensory frequency discrimination experiments. Identical temporal stimulation patterns in tactile and auditory afferents produced equivalent perceived frequencies, suggesting an analogous temporal frequency computation mechanism.
The new insights into encoding tactile intensity through clustering of fixed-charge electric pulses into bursts suggest a novel approach to convey varying contact forces to neural interface users, requiring no modulation of either stimulation current or base pulse frequency. Increasing control of the temporal patterning of pulses in cochlear implant users might improve pitch perception and speech comprehension. The perceptual correspondence between touch and hearing not only suggests the possibility of establishing cross-modal comparison standards for robust psychophysical investigations, but also supports the plausibility of cross-sensory substitution devices.
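The burst-clustering scheme described above can be sketched as a simple pulse-train generator. This is a minimal illustration only: the function name and all timing values are hypothetical, not taken from the study.

```python
# Illustrative sketch of the burst-gap stimulation scheme described above.
# All parameter values (intra-burst interval, gap, pulse counts) are
# hypothetical, not the ones used in the study.

def burst_train(n_bursts, pulses_per_burst, intra_interval_ms, burst_gap_ms):
    """Return pulse onset times (ms) for a train of periodic bursts.

    Per the coding scheme above: perceived frequency tracks the burst-gap
    (the silent interval between the last pulse of one burst and the first
    pulse of the next), while perceived intensity grows with the number of
    pulses per burst, with no change in pulse amplitude.
    """
    times, t = [], 0.0
    for _ in range(n_bursts):
        for p in range(pulses_per_burst):
            times.append(t + p * intra_interval_ms)
        # Advance to the next burst onset: end of this burst plus the gap.
        t = times[-1] + burst_gap_ms
    return times

# Same burst-gap (hence same perceived frequency), more pulses per burst
# (hence higher perceived intensity):
weak   = burst_train(n_bursts=3, pulses_per_burst=1,
                     intra_interval_ms=2.0, burst_gap_ms=20.0)
strong = burst_train(n_bursts=3, pulses_per_burst=4,
                     intra_interval_ms=2.0, burst_gap_ms=20.0)
print(weak)  # [0.0, 20.0, 40.0]
```

In both trains the burst-gap is identical, so on this scheme the two stimuli should be matched in perceived frequency while differing in perceived intensity.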

    Development of multisensory spatial integration and perception in humans

    Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A‐V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants’ head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25° or 45° to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal response. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25° eccentricity. In addition to this main finding, we found age‐dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life, and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
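The Race Model comparison used above can be sketched as follows. This is a minimal illustration of the standard inequality test (bimodal reaction-time distribution against the summed unimodal distributions); all reaction-time values are invented for the example, not data from the study.

```python
# Sketch of the Race Model inequality test described above.
# The reaction-time samples below are illustrative, not data from the study.

def ecdf(rts, t):
    """Empirical CDF: fraction of reaction times <= t (ms)."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violated(rt_av, rt_a, rt_v, t_grid):
    """True if, at any probe time t, the bimodal CDF exceeds the sum of the
    unimodal CDFs (capped at 1) -- i.e. the bimodal speed-up cannot be
    explained by probability summation between two independent unimodal
    'racers' and suggests genuine multisensory integration."""
    for t in t_grid:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        if ecdf(rt_av, t) > bound:
            return True
    return False

# Illustrative samples (ms): bimodal responses faster than either unimodal set.
rt_a  = [420, 450, 480, 510, 540]
rt_v  = [430, 460, 500, 520, 560]
rt_av = [300, 310, 320, 330, 340]
print(race_model_violated(rt_av, rt_a, rt_v, range(250, 600, 10)))  # True
```

A bimodal advantage without such a violation, as in the younger infants above, is consistent with probability summation alone.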

    The knowledge domain of affective computing: a scientometric review

    Purpose – The aim of this study is to investigate the bibliographical information about Affective Computing, identifying advances, trends, major papers, connections, and areas of research. Design/methodology/approach – A scientometric analysis was applied using CiteSpace to 5,078 references about Affective Computing imported from the Web-of-Science Core Collection, covering the period 1991-2016. Findings – The most cited, creative, burst, and central references are displayed by area of research, using metrics and visualizations over time. Research limitations/implications – Interpretation is limited to references retrieved from the Web-of-Science Core Collection in the fields of management, psychology and marketing. Nevertheless, the richness of the bibliographical data obtained largely compensates for this limitation. Practical implications – The study provides managers with a sound body of knowledge on Affective Computing, with which they can capture general public emotion in respect of their products and services, and on which they can base their marketing intelligence gathering and strategic planning. Originality/value – The paper provides new opportunities for companies to enhance their capabilities in terms of customer relationships.

    Changes in the McGurk Effect Across Phonetic Contexts

    To investigate the process underlying audiovisual speech perception, the McGurk illusion was examined across a range of phonetic contexts. Two major changes were found. First, the frequency of illusory /g/ fusion percepts increased relative to the frequency of illusory /d/ fusion percepts as the vowel context was shifted from /i/ to /a/ to /u/. This trend could not be explained by biases present in perception of the unimodal visual stimuli. However, the change found in the McGurk fusion effect across vowel environments did correspond systematically with changes in second formant frequency patterns across contexts. Second, the order of consonants in illusory combination percepts was found to depend on syllable type. This may be due to differences occurring across syllable contexts in the time courses of inputs from the two modalities, as delaying the auditory track of a vowel-consonant stimulus resulted in a change in the order of consonants perceived. Taken together, these results suggest that the speech perception system either fuses audiovisual inputs into a visually compatible percept with a second formant pattern similar to that of the acoustic stimulus, or interleaves the information from the different modalities, at a phonemic or subphonemic level, based on their relative arrival times. (National Institutes of Health R01 DC02852)

    Recent and upcoming BCI progress: overview, analysis, and recommendations

    Brain–computer interfaces (BCIs) are finally moving out of the laboratory and beginning to gain acceptance in real-world situations. As BCIs gain attention with broader groups of users, including persons with different disabilities and healthy users, numerous practical questions gain importance. What are the most practical ways to detect and analyze brain activity in field settings? Which devices and applications are most useful for different people? How can we make BCIs more natural and sensitive, and how can BCI technologies improve usability? What are some general trends and issues, such as combining different BCIs or assessing and comparing performance? This book chapter provides an overview of the different sections of this book, summarizing how the authors address these and other questions. We also present some predictions and recommendations that ensue from our experience discussing these and other issues with our authors and other researchers and developers within the BCI community. We conclude that, although some directions are hard to predict, the field is definitely growing and changing rapidly, and will continue doing so in the next several years.

    Towards a generalized theory of low-frequency sound source localization

    Low-frequency sound source localization generates a considerable amount of disagreement among audio/acoustics researchers, with some arguing that below a certain frequency humans cannot localize a source, and others insisting that in certain cases localization is possible, even down to the lowest audible frequencies. Nearly all previous work in this area depends on subjective evaluations to formulate theorems for low-frequency localization. This, of course, opens the argument of data reliability, a critical factor that may go some way toward explaining the reported ambiguities with regard to low-frequency localization. The resulting proposal stipulates that low-frequency source localization is highly dependent on room dimensions, source/listener location and absorptive properties. In some cases a source can be accurately localized down to the lowest audible frequencies, while in other situations it cannot. This is relevant because the standard procedure in live sound reinforcement, cinema sound and home-theater surround sound is to have a single mono channel for the low-frequency content, based on the assumption that humans cannot determine direction in this band. This work takes the first steps towards showing that this may not be a universally valid simplification, and that certain sound reproduction systems may actually benefit from directional low-frequency content.

    Temporal auditory capture does not affect the time course of saccadic mislocalization of visual stimuli

    Irrelevant sounds can "capture" visual stimuli to change their apparent timing, a phenomenon sometimes termed "temporal ventriloquism". Here we ask whether this auditory capture can alter the time course of spatial mislocalization of visual stimuli during saccades. We first show that during saccades, sounds affect the apparent timing of visual flashes, even more strongly than during fixation. However, this capture does not affect the dynamics of perisaccadic visual distortions. Sounds presented 50 ms before or after a visual bar (which changed the perceived timing of the bars by more than 40 ms) had no measurable effect on the time courses of spatial mislocalization of the bars, in four subjects. Control studies showed that with barely visible, low-contrast stimuli, leading, but not trailing, sounds can have a small effect on mislocalization, most likely attributable to attentional effects rather than auditory capture. These findings support previous studies showing that integration of multisensory information occurs at a relatively late stage of sensory processing, after visual representations have undergone the distortions induced by saccades.

    The effect of emotions on brand recall by gender using voice emotion response with optimal data analysis

    Purpose—To analyse the effect on brand recall, by gender, of emotions obtained from the oral reproduction of advertising slogans and measured via Voice Emotion Response software; and to show the relevance for marketing communication of combining "human–computer interaction (HCI)" with "affective computing (AC)". Design/methodology/approach—A qualitative analysis reviewed the scientific literature retrieved from the Web-of-Science Core Collection (WoSCC) using CiteSpace's scientometric technique; a quantitative analysis examined brand recall in a sample of Taiwanese participants using "optimal data analysis". Findings—Advertising effectiveness is positively associated with emotions; brand recall varies with gender; and "HCI" combined with "AC" is an emerging area of research. Research limitations/implications—The selection of articles depends on the search terms used in the WoSCC, and this study used only five emotions; still, the richness of the data provides some compensation. Practical implications—Marketers involved with brands need a body of knowledge on which to base their marketing communication intelligence gathering and strategic planning. Originality/value—It provides exploratory research findings on the use of automatic tools capable of mining emotions by gender in real time, which could enhance the feedback of customers toward their brands.