The Perception of Integrated Events in Autism Spectrum Disorders: The Role of Semantic Relatedness and Timing
Autism Spectrum Disorders (ASD) have been associated with impaired multisensory processing; however, research on the topic has been inconclusive. For instance, research on the synchrony perception of complex stimuli has shown that children with ASD have impaired integration of audiovisual speech (e.g., a woman counting or telling a story) but normal non-speech integration (e.g., a ball moving through a series of plastic ramps and cliffs; Bebko et al., 2006), while research on adults with ASD has shown impaired integration for both speech (e.g., syllables) and non-speech events (e.g., flash-beeps, handclaps; de Boer-Schellekens et al., 2013). Studies utilizing simple stimuli and illusory paradigms such as the double-flash illusion have suggested that individuals with ASD exhibit an extended temporal integration window as compared to healthy participants (Foss-Feig et al., 2010; Kwakye et al., 2011), while others have shown no such effect (e.g., van der Smagt et al., 2007). It is as yet unclear, therefore, whether individuals with ASD have impaired integration mechanisms and whether any such impairment is due to the stimuli presented (simple vs. complex; social vs. non-social), the population tested (adults vs. children, severity of symptoms), and/or the tasks utilized (e.g., preferential looking paradigms vs. temporal order judgments). It is also unclear whether individuals with ASD are impaired in terms of timing or binding per se (e.g., Freeman et al., 2013). To elucidate this issue, we aim to further examine the nature of these deficits in a well-defined group of children with similar symptom severity, using two types of tasks. First, using a reaction time (RT) task, we will assess the audiovisual integration capabilities of children with ASD as compared to typically developing (TD) children without the involvement of timing (i.e., no timing differences will be introduced). According to the ‘unity effect’, a multisensory event is perceived as an integrated multisensory event (rather than multiple unimodal events) when the signals are presented close in time and space, and due to other factors such as informational relatedness (e.g., Vatakis & Spence, 2007). In the RT experiment, therefore, we will modulate not space and time but informational relatedness. Specifically, participants will be asked to complete speeded detection of two targets. The targets will be audiovisual, visual, or auditory, and for the audiovisual cases the streams will be presented in congruent and incongruent formats.

Subsequently, the same group of individuals will be tested in a simultaneity judgment (SJ) task in which the temporal window of integration will be assessed; the SJ task thus targets the evaluation of temporal processing. For both tasks, three types of stimuli will be used: a) simple stimuli, in order to minimize meaningful context (Bien et al., 2013); b) stimuli with emotional context depicted through human faces, in order to assess ASD processing of facial social stimuli; and c) stimuli with emotional context conveyed through body expressions (instead of faces). The use of the RT and SJ tasks will allow the evaluation of the interaction between multisensory integration and synchrony perception in ASD, as well as of the role of stimulus type (semantic vs. non-semantic, social vs. non-social) in multisensory processing.
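For readers unfamiliar with how a temporal window of integration is typically estimated from an SJ task, here is a minimal sketch in Python with synthetic data (the SOAs, response proportions, and criterion below are hypothetical, not the project's): a Gaussian is fitted to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs), and the point of subjective simultaneity (PSS) and window width are read off the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SOAs (ms; negative = auditory leading) and the observed
# proportion of "simultaneous" responses at each SOA (synthetic values).
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_simultaneous = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.12])

def sj_gaussian(soa, amplitude, pss, sigma):
    """Gaussian model of the SJ curve: it peaks near the point of subjective
    simultaneity (PSS), and sigma indexes the width of the window."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

(amplitude, pss, sigma), _ = curve_fit(sj_gaussian, soas, p_simultaneous,
                                       p0=[1.0, 0.0, 100.0])

# One common summary of the temporal window: the SOA range over which the
# fitted curve stays above a criterion, here 75% of its peak.
half_width = sigma * np.sqrt(2.0 * np.log(1.0 / 0.75))
print(f"PSS = {pss:.1f} ms; window = +/-{half_width:.1f} ms around the PSS")
```

A wider fitted window for the ASD group than for TD controls would correspond to the "extended temporal integration window" findings cited above.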
Crossmodal Binding Rivalry: An Alternative Hypothesis for the Double Flash Illusion
Extensive research on multisensory processing has established that temporal and/or spatial proximity of sensory information leads to the percept of a unified multisensory event, at both the behavioral and neuronal levels (e.g., Stein, Huneycutt, & Meredith, 1988). Binding of multiple sensory inputs has also been demonstrated for stimuli presented with a certain degree of temporal disparity (e.g., Vatakis & Spence, 2010). A classical example of crossmodal interaction is the well-known sound-induced flash illusion (SIFI), whereby a brief flash paired with two auditory beeps is actually perceived as two distinct flashes (Shams, Kamitani, & Shimojo, 2000). The SIFI is considered an example of auditory dominance, where auditory stimulation modulates visual perception for audiovisual presentations that fall within the temporal window of integration. Studies on the SIFI (Andersen, Tiippana, & Sams, 2004; Shams, Kamitani, & Shimojo, 2002) have shown diminished performance in 1 flash-2 beeps (the SIFI illusion) and 2 flashes-1 beep presentations (considered different from the SIFI but as yet not elucidated), while performance in 1 flash-1 beep presentations is excellent. That is, the presence, close in time and space, of two versus one input from different sensory modalities affects participant performance, while this is not the case for presentations with an equal number of sensory inputs. We claim that the diminished performance in the 1 flash-2 beeps and 2 flashes-1 beep conditions does not reflect two different illusions; rather, both represent examples of crossmodal binding rivalry between the unequal numbers of sensory inputs presented. That is, presentations of multiple sensory inputs in close spatial and temporal proximity lead to a rivalry between the sensory inputs that are to be integrated. This rivalry will be weaker or stronger depending on a number of factors established in the multisensory integration literature.

As has previously been shown, a unified multisensory percept is more robust if the visual input is presented slightly before, or in synchrony with, the auditory input (Keetels & Vroomen, 2012; van Wassenhove, Grant, & Poeppel, 2007; Vatakis & Spence, 2007, 2008). In cases where the auditory input precedes the visual, binding is weaker, leading to a less integrated percept. Moreover, binding is highly dependent on timing, with temporally proximal presentations taking precedence over distal ones (e.g., Vatakis & Spence, 2010). Thus, asynchronous presentations, even those falling within the temporal window of integration, yield bindings of different strengths, with synchronous presentations leading to the strongest binding. These findings motivate our crossmodal binding rivalry hypothesis: the rivalry between the unequal numbers of sensory inputs will vary according to their binding robustness. When the visual input is in synchrony with, or leads, the auditory input, the binding is robust, leading to a stronger rivalry with the spare stimulus; this rivalry results in a lower percentage of illusory percepts and slower reaction times. Conversely, if the binding between the auditory and visual inputs is weak, the rivalry between them and the spare stimulus is less intense, resulting in quicker responding and more frequent illusory experiences.
Generally, we expect slower reaction times in illusion conditions than in conditions with an equal number of visual and auditory inputs, and more accurate responses in bimodal conditions (equal number of inputs) than in unimodal conditions. We tested the rivalry hypothesis directly by utilizing the classical SIFI with multiple timing presentations (never before tested within a single experimental set-up). More specifically, we used onset asynchronies of 0, 25, 50, and 100 ms between the auditory beep and the visual flash, with the beep presented either before or after the flash. Illusion conditions and test conditions were intermixed in order to avoid biased responding (in terms of the number of flashes) and to ensure that the task was not too difficult for participants to carry out. The proposed project will allow us to evaluate the rivalry hypothesis for multiple audiovisual inputs, providing a common explanation for both 1 flash-2 beeps and 2 flashes-1 beep presentations, while at the same time allowing us to revisit the role of auditory dominance in the double-flash illusion.
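As an illustration of the predicted pattern, here is a minimal analysis sketch in Python with invented trial-level data (not the project's actual data or code): trials are split by whether the beep led or lagged the flash, and illusion rates and reaction times are compared across the two binding regimes.

```python
import numpy as np
import pandas as pd

# Invented trials from a 1-flash/2-beeps design: soa_ms is the beep onset
# relative to the flash (negative = beep first), "illusion" codes whether
# two flashes were reported, and rt_ms is the response time.
trials = pd.DataFrame({
    "soa_ms":   [-100, -50, -25, 0, 25, 50, 100] * 2,
    "illusion": [1, 1, 1, 0, 0, 1, 0,  1, 0, 1, 0, 1, 0, 0],
    "rt_ms":    [455, 440, 430, 520, 515, 505, 470,
                 465, 450, 445, 525, 500, 495, 480],
})

# Rivalry hypothesis: auditory-leading presentations bind weakly (weaker
# rivalry -> more illusions, faster responses), whereas synchronous or
# visual-leading presentations bind robustly (stronger rivalry -> fewer
# illusions, slower responses).
trials["binding"] = np.where(trials["soa_ms"] < 0,
                             "auditory-leading (weak)",
                             "synchronous/visual-leading (robust)")
summary = trials.groupby("binding").agg(illusion_rate=("illusion", "mean"),
                                        mean_rt_ms=("rt_ms", "mean"))
print(summary)
```

Under the hypothesis, the weak-binding rows should show a higher illusion rate and a lower mean RT than the robust-binding rows.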
Assisted spatial navigation: new directions
Blockchain technology brings new possibilities to assisted spatial navigation. Decentralized map building enables collaboration between users around the world, while providing researchers with a common reference map for extending the capabilities of navigational systems towards more intuitive and accurate landmark navigation assistance. Research on landmark navigation has mainly focused on the visual characteristics of landmarks. Human behavior, however, has systematically been shown to be enhanced in the presence of unified multisensory events. We therefore propose enhancing assisted spatial navigation by utilizing landmarks that are multisensory and semantically congruent. Further, our research will provide insights into the auditory parameters that could be combined with a given visual landmark, so as to improve landmark retrieval algorithms and user satisfaction during assisted spatial navigation.
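As a toy illustration of what a congruence-based landmark retrieval step might look like, consider the sketch below (all landmark names and scores are hypothetical and not part of the proposal; in a real system the scores would come from a learned or normed semantic model).

```python
# Hypothetical semantic-congruence scores between one visual landmark
# (e.g., a church) and candidate auditory stimuli.
candidate_sounds = {
    "church bells":   0.92,
    "traffic noise":  0.35,
    "fountain water": 0.58,
}

def most_congruent_sound(candidates: dict[str, float]) -> str:
    """Pick the auditory stimulus with the highest semantic-congruence
    score, for use as the auditory half of a multisensory landmark cue."""
    return max(candidates, key=candidates.get)

print(most_congruent_sound(candidate_sounds))  # -> "church bells"
```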
Timing and Time Perception: Procedures, Measures, and Applications
Timing and Time Perception: Procedures, Measures, and Applications is a one-of-a-kind collective effort to present the most widely used and best-known methods in timing and time perception research. Specifically, it covers methods and analyses for circadian timing, synchrony perception, reaction/response time, time estimation, and alternative methods for clinical/developmental research. The book includes experimental protocols, programming code, and sample results, and its content ranges from introductory to advanced, so as to cover the needs of both junior and senior researchers. We hope that this will be the first step in future efforts to document experimental methods and analyses in both a theoretical and a practical manner.
Opportunities and Limitations of Mobile Neuroimaging Technologies in Educational Neuroscience.
Funders: European Association for Research on Learning and Instruction; Jacobs Foundation (http://dx.doi.org/10.13039/501100003986)

As the field of educational neuroscience continues to grow, questions have emerged regarding the ecological validity and applicability of this research to educational practice. Recent advances in mobile neuroimaging technologies have made it possible to conduct neuroscientific studies directly in naturalistic learning environments. We propose that embedding mobile neuroimaging research in a cycle (Matusz, Dikker, Huth, & Perrodin, 2019) involving lab-based, semi-naturalistic, and fully naturalistic experiments is well suited to addressing educational questions. In this review, we take a cautious approach, discussing the valuable insights that can be gained from mobile neuroimaging technology, including electroencephalography and functional near-infrared spectroscopy, as well as the challenges posed by bringing neuroscientific methods into the classroom. Research paradigms used alongside mobile neuroimaging technology vary considerably; to illustrate this point, we discuss studies with increasingly naturalistic designs. We conclude with several ethical considerations that should be taken into account in this unique area of research.
The Blursday database as a resource to study subjective temporalities during COVID-19
The COVID-19 pandemic and associated lockdowns triggered worldwide changes in the daily routines of human experience. The Blursday database provides repeated measures of subjective time and related processes from participants in nine countries, tested on 14 questionnaires and 15 behavioural tasks during the COVID-19 pandemic. A total of 2,840 participants completed at least one task, and 439 participants completed all tasks in the first session. The database and all data collection tools are accessible to researchers for studying the effects of social isolation on temporal information processing, time perspective, decision-making, sleep, metacognition, attention, memory, self-perception, and mindfulness. Blursday also includes quantitative measures such as sleep patterns, personality traits, psychological well-being, and lockdown indices. The database provides quantitative insights into the effects of lockdown (stringency and mobility) and subjective confinement on time perception (duration, passage of time, and temporal distances). Perceived isolation affects time perception, and we report an inter-individual central tendency effect in retrospective duration estimation.
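For illustration, the central tendency effect mentioned above is commonly quantified as a regression slope below 1 when reported durations are regressed on true durations: long intervals are underestimated and short ones overestimated, pulling estimates toward the mean. A minimal sketch with synthetic data (not drawn from Blursday) follows.

```python
import numpy as np

# Synthetic data: estimates are pulled toward the mean of the tested
# durations, so regressing estimates on true durations yields slope < 1.
rng = np.random.default_rng(42)
true_s = rng.uniform(2.0, 30.0, 200)  # target durations (seconds)
estimates = 0.6 * true_s + 0.4 * true_s.mean() + rng.normal(0.0, 2.0, 200)

slope, intercept = np.polyfit(true_s, estimates, 1)
print(f"slope = {slope:.2f} (slope < 1 indicates a central tendency effect)")
```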
Tears evoke the intention to offer social support: A systematic investigation of the interpersonal effects of emotional crying across 41 countries
Tearful crying is a ubiquitous and likely uniquely human phenomenon. Scholars have argued that emotional tears serve an attachment function: tears are thought to act as a social glue by evoking social support intentions. Initial experimental studies supported this proposition across several methodologies, but they were conducted almost exclusively on participants from North America and Europe, resulting in limited generalizability. This project examined the tears-social support intentions effect and possible mediating and moderating variables in a fully pre-registered study across 7,007 participants (24,886 ratings) and 41 countries spanning all populated continents. Participants were presented with four pictures out of 100 possible targets, with or without digitally added tears. We confirmed the main prediction that seeing a tearful individual elicits the intention to support, d = 0.49 [0.43, 0.55]. Our data suggest that this effect could be mediated by perceiving the crying target as warmer and more helpless, feeling more connected, and feeling more empathic concern for the crier, but not by an increase in the personal distress of the observer. The effect was moderated by situational valence, identification of the target as part of one's group, and trait empathic concern: a neutral situation, high trait empathic concern, and low identification increased the effect. We observed high heterogeneity across countries that was, via split-half validation, best explained by country-level GDP per capita and subjective well-being, with stronger effects for higher-scoring countries. These findings suggest that tears can function as social glue, providing one possible explanation for why emotional crying persists into adulthood.
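As a reference point for the reported effect size, here is a minimal sketch of how Cohen's d is computed for such a two-condition comparison (the ratings below are synthetic, not the study's data).

```python
import numpy as np

# Synthetic support-intention ratings for targets with vs. without
# digitally added tears (a 1-7 rating scale is assumed here).
rng = np.random.default_rng(7)
tears = rng.normal(5.0, 1.5, 500)
no_tears = rng.normal(4.25, 1.5, 500)

# Cohen's d using the pooled standard deviation (equal group sizes).
pooled_sd = np.sqrt((tears.var(ddof=1) + no_tears.var(ddof=1)) / 2.0)
d = (tears.mean() - no_tears.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # lands near the simulated effect of 0.5
```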