Driving Rhythm Method for Driving Comfort Analysis on Rural Highways
Driving comfort is of great significance on rural highways, since the variation characteristics of driving speed there are comparatively complex. Earlier studies of driving comfort were usually based on the actual geometric road alignments and vehicles, without considering the driver's visual perception. However, some scholars have shown that there is a discrepancy between actual and perceived geometric alignments, especially on rural highways. Moreover, few studies focus on rural highways. Therefore, in this paper the driver's visual lane model was established based on the Catmull-Rom spline, in order to describe the driver's visual perception of rural highways. A real-vehicle experiment was conducted on 100 km of rural highways in Tibet. The driving rhythm was introduced to signify the information acquired during the driving process. Shape parameters of the driver's visual lane model were chosen as input variables to predict the driving rhythm with a BP neural network. The wavelet transform was used to explore which part of the driving rhythm is related to driving comfort. The probabilities of good, fair and bad driving comfort can then be calculated from wavelets of the driving rhythm. This work not only provides a new perspective on driving comfort analysis and quantifies the driver's visual perception, but also attends to the unique characteristics of rural highways.
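The driver's visual lane model rests on Catmull-Rom interpolation, which threads a smooth curve through sampled lane points. A minimal sketch of a uniform Catmull-Rom segment (the control points are hypothetical, not data from the Tibet experiment):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment at t in [0, 1].

    The curve passes through p1 (t=0) and p2 (t=1); p0 and p3 shape
    the tangents at the endpoints.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four control points roughly tracing a lane edge (hypothetical data)
pts = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.4), (3.0, 1.0)]
curve = [catmull_rom(*pts, t) for t in np.linspace(0.0, 1.0, 5)]
```

Chaining such segments over successive quadruples of lane points yields the continuous visual lane curve whose shape parameters could then feed a predictor.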
Developmental coordination disorder: a focus on handwriting
Background. Developmental coordination disorder (DCD) is the term used to refer to children who present with motor coordination difficulties unexplained by a general medical condition, intellectual disability or known neurological impairment. Difficulties with handwriting are often included in descriptions of DCD, including that provided in DSM-5 (APA, 2013). However, surprisingly few studies have examined handwriting in DCD in a systematic way. Those that are available have been conducted outside of the UK, in alphabets other than the Latin-based alphabet. In order to gain a better understanding of the nature of the 'slowness' so commonly reported in children with DCD, this thesis aimed to examine the handwriting of children with DCD in detail by considering the handwriting product, the process, the child's perspective, the teacher's perspective and some popular clinical measures including strength, visual perception and force variability. Compositional quality was also evaluated to examine the impact of poor handwriting on the wider task of writing.
Method. Twenty-eight 8-14-year-old children with a diagnosis of DCD participated in the study, alongside 28 typically developing age- and gender-matched controls. Participants completed the four handwriting tasks from the Detailed Assessment of Speed of Handwriting (DASH) and wrote their own name, all on a digitising writing tablet. The number of words written, the speed of pen movements and the time spent pausing during the tasks were calculated. Participants were also assessed on spelling, reading, receptive vocabulary, visual perception, visual motor integration, grip strength and the quality of their composition.
Results. The findings confirmed what many professionals report: children with DCD produce less text than their peers. However, this was not due to slow movement execution, but rather to a higher percentage of time spent pausing, in particular in pauses over 10 seconds. The location of the pauses within words indicated a lack of automaticity in the handwriting of children with DCD. The DCD group scored below their peers on legibility, grip strength and measures of visual perception, and had poorer compositional quality. Individual data highlighted heterogeneous performance profiles in children with DCD, and there was little agreement and no significant association between teachers' and therapists' measures of handwriting.
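The pause measures reported above (percentage of time spent pausing, count of long pauses) can be sketched from raw tablet samples. The thresholds below are illustrative, not those used in the DASH analysis:

```python
import numpy as np

def pause_metrics(t, x, y, speed_thresh=0.1, long_pause=10.0):
    """Percentage of writing time spent pausing, and count of long pauses.

    t: sample times in seconds; x, y: pen coordinates (arbitrary units).
    A sample counts as pausing when pen speed falls below speed_thresh;
    consecutive pausing samples are merged into pause intervals.
    """
    t, x, y = map(np.asarray, (t, x, y))
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt
    pausing = speed < speed_thresh
    pct_pausing = 100.0 * dt[pausing].sum() / (t[-1] - t[0])
    # count runs of consecutive pausing samples longer than long_pause
    n_long = 0
    run = 0.0
    for is_pause, d in zip(pausing, dt):
        if is_pause:
            run += d
        else:
            n_long += run > long_pause
            run = 0.0
    n_long += run > long_pause
    return pct_pausing, n_long
```

For example, a pen trace that moves for 10 s and then rests for 19 s yields one long pause and roughly 65% of time spent pausing.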
Conclusions. A new model incorporating handwriting within the broader context of writing was proposed as a lens through which therapists can consider handwriting in children with DCD. The model incorporates the findings from this thesis and discusses avenues for future research in this area.
Auditory-Visual Integration during the Perception of Spoken Arabic
This thesis aimed to investigate the effect of visual speech cues on auditory-visual integration during speech perception in Arabic. Four experiments were conducted, two of which were cross-linguistic studies using Arabic and English listeners. To compare the influence of visual speech in Arabic and English listeners, Chapter 3 investigated the use of visual components of auditory-visual stimuli in native versus non-native speech using the McGurk effect. The experiment suggested that Arabic listeners' speech perception was influenced by visual components of speech to a lesser degree than English listeners'. Furthermore, auditory and visual assimilation was observed for non-native speech cues. Additionally, when the visual cue was an emphatic phoneme, the Arabic listeners incorporated the emphatic visual cue in their McGurk response.
Chapter 4 investigated whether the lower McGurk effect response in Arabic listeners found in Chapter 3 was due to a bottom-up mechanism of visual processing speed. Using auditory-visual temporal asynchrony conditions, Chapter 4 concluded that the differences in McGurk response percentage were not due to a bottom-up mechanism of visual processing speed. This led to the question of whether the difference in auditory-visual integration of speech could be due to more ambiguous visual cues in Arabic compared to English. To explore this question, it was first necessary to identify visemes in Arabic. Chapter 5 identified 13 viseme categories in Arabic; some emphatic visemes were visually distinct from their non-emphatic counterparts, and a greater number of phonemes fell within the guttural viseme category compared to English.
Chapter 6 evaluated the visual speech influence across the 13 viseme categories in Arabic, measured by the McGurk effect. It was concluded that the predictive power of visual cues and the contrast between visual and auditory speech components lead to an increase in the McGurk response percentage in Arabic.
Neural representation of complex motion in the primate cortex
This dissertation is concerned with how information about the environment is represented by neural activity in the primate brain. More specifically, it contains several studies that explore the representation of visual motion in the brains of humans and nonhuman primates through behavioral and physiological measures.
The majority of this work is focused on the activity of individual neurons in the medial superior temporal area (MST), a high-level, extrastriate area of the primate visual cortex.
The first two studies provide an extensive review of the scientific literature on area MST. The area's prominent role at the intersection of low-level, bottom-up sensory processing and high-level, top-down mechanisms is highlighted. Furthermore, a specific article on how information about self-motion and object motion can be decoded from a population of MSTd neurons is reviewed in more detail.
The third study describes a published and annotated dataset of MST neurons' responses to a series of different motion stimuli.
This dataset is analyzed using a variety of different analysis approaches in the fourth study. Classical tuning curve approaches confirm that MST neurons have large, but well-defined spatial receptive fields and are independently tuned for linear and spiral motion, as well as speed. We also confirm that the tuning for spiral motion is position invariant in a majority of MST neurons. A bias-free characterization of receptive field profiles based on a new stimulus that generates smooth, complex motion patterns turned out to be predictive of some of the tuning properties of MST neurons, but was generally less informative than similar approaches have been in earlier visual areas.
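A common first step in tuning-curve analyses of the kind mentioned above is summarizing a direction tuning curve by its rate-weighted vector average. A minimal sketch with hypothetical data (not the dataset described in this dissertation):

```python
import numpy as np

def preferred_direction(dirs_deg, rates):
    """Estimate a neuron's preferred motion direction from a tuning curve.

    dirs_deg: stimulus directions in degrees; rates: mean firing rates.
    Uses the rate-weighted circular mean (vector average), a standard
    first-pass summary before any parametric (e.g. von Mises) fit.
    """
    theta = np.deg2rad(np.asarray(dirs_deg))
    w = np.asarray(rates, dtype=float)
    vec = np.sum(w * np.exp(1j * theta))
    return np.rad2deg(np.angle(vec)) % 360.0

# Hypothetical tuning curve peaking at 90 degrees
dirs = np.arange(0, 360, 45)
rates = 5 + 20 * np.exp(np.cos(np.deg2rad(dirs - 90)) / 0.5)
```

Because the sample directions cover the circle evenly, the flat baseline cancels out and the estimate recovers the peak direction.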
The fifth study introduces a new motion stimulus that consists of hexagonal segments and presents an optimization algorithm for an adaptive online analysis of neurophysiological recordings. Preliminary physiological data and simulations show these tools to have strong potential for characterizing the response functions of MST neurons.
The final study describes a behavioral experiment with human subjects that explores how different stimulus features, such as size and contrast, affect motion perception and discusses what conclusions can be drawn from that about the representation of visual motion in the human brain.
Together these studies highlight the visual motion processing pathway of the primate brain as an excellent model system for studying more complex relations of neural activity and external stimuli. Area MST in particular emerges as a gateway between perception, cognition, and action planning.
Orientation dependent modulation of apparent speed: a model based on the dynamics of feed-forward and horizontal connectivity in V1 cortex
Psychophysical and physiological studies suggest that long-range horizontal connections in primary visual cortex participate in spatial integration and contour processing. Until recently, little attention has been paid to their intrinsic temporal properties. Recent physiological studies indicate, however, that the propagation of activity through long-range horizontal connections is slow, with time scales comparable to the perceptual scales involved in motion processing. Using a simple model of V1 connectivity, we explore some of the implications of this slow dynamics. The model predicts that V1 responses to a stimulus in the receptive field can be modulated by a previous stimulation, a few milliseconds to a few tens of milliseconds before, in the surround. We analyze this phenomenon and its possible consequences for speed perception, as a function of the spatio-temporal configuration of the visual inputs (relative orientation, spatial separation, temporal interval between the elements, sequence speed). We show that the dynamical interactions between feed-forward and horizontal signals in V1 can explain why the perceived speed of fast apparent motion sequences strongly depends on the orientation of their elements relative to the motion axis, and can account for the range of speeds for which this perceptual effect occurs (Georges, Seriès, Frégnac and Lorenceau, this issue).
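The core mechanism described above, feed-forward drive combined with horizontal input that arrives only after a slow conduction delay, can be caricatured in a toy rate model. All parameters below are illustrative stand-ins, not values from the model in this paper:

```python
import numpy as np

def v1_response(center_onset, surround_onset, delay=0.02, w_h=0.5,
                dt=0.001, t_max=0.2):
    """Toy rate model: feed-forward drive plus slow horizontal input.

    A surround stimulus contributes through horizontal connections only
    after a conduction delay (tens of ms, per the physiology reviewed
    above). Returns time axis and firing rate of a leaky integrator.
    """
    t = np.arange(0.0, t_max, dt)
    ff = (t >= center_onset).astype(float)        # feed-forward step input
    horiz = w_h * (t >= surround_onset + delay)   # delayed horizontal input
    tau = 0.01
    r = np.zeros_like(t)
    for i in range(1, len(t)):                    # leaky integration
        r[i] = r[i - 1] + dt / tau * (ff[i] + horiz[i] - r[i - 1])
    return t, r

# Surround shown 20 ms before the center: its horizontal signal arrives
# together with the feed-forward drive and boosts the response.
t, r_primed = v1_response(center_onset=0.05, surround_onset=0.03)
t, r_alone = v1_response(center_onset=0.05, surround_onset=10.0)  # never shown
```

Comparing the two runs shows how the timing of surround stimulation, via the horizontal delay, modulates the response to the center stimulus.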
Chinese Tones: Can You Listen With Your Eyes? The Influence of Visual Information on Auditory Perception of Chinese Tones
Summary (Yueqiao Han). Considering the fact that more than half of the languages spoken in the world (60%-70%) are so-called tone languages (Yip, 2002), and that tone is notoriously difficult to learn for Westerners, this dissertation focused on tone perception in Mandarin Chinese by tone-naïve speakers. Moreover, it has been shown that speech perception is more than just an auditory phenomenon, especially in situations when the speaker's face is visible. Therefore, the aim of this dissertation is also to study the value of visual information (over and above that of acoustic information) in Mandarin tone perception for tone-naïve perceivers, in combination with other contextual (such as speaking style) and individual factors (such as musical background). Consequently, this dissertation assesses the relative strength of acoustic and visual information in tone perception and tone classification. In the first two empirical and exploratory studies, in Chapters 2 and 3, we set out to investigate to what extent tone-naïve perceivers are able to identify Mandarin Chinese tones in isolated words, whether or not they can benefit from (seeing) the speakers' face, and what the contribution is of a hyperarticulated speaking style and/or their own musical experience. Respectively, in Chapter 2 we investigated the effect of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by tone-naïve listeners, looking both at the relative strength of these two factors and their possible interactions; Chapter 3 was concerned with the effects of the musicality of the participants (combined with modality) on Mandarin tone perception.
In both of these studies, a Mandarin Chinese tone identification experiment was conducted: native speakers of a non-tonal language were asked to distinguish Mandarin Chinese tones based on audio-only or audio-visual materials. In order to include variation, the experimental stimuli were recorded using four different speakers in imagined natural and teaching speaking scenarios. The proportion of correct responses (and average reaction times) of the participants were reported. The tone identification experiment presented in Chapter 2 showed that the video conditions (audio-visual natural and audio-visual teaching) resulted in an overall higher accuracy in tone perception than the audio-only conditions (audio-only natural and audio-only teaching), but no better performance was observed in the audio-visual conditions in terms of reaction time, compared to the audio-only conditions. Teaching style turned out to make no difference to the speed or accuracy of Mandarin tone perception (as compared to a natural speaking style). We then presented the same experimental materials and procedure in Chapter 3, but now with musicians and non-musicians as participants. The Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess the musical aptitude of the participants. The data showed that overall, musicians outperformed non-musicians in the tone identification task in both audio-visual and audio-only conditions. Both groups identified tones more accurately in the audio-visual conditions than in the audio-only conditions. These results provided further evidence for the view that the availability of visual cues along with auditory information is useful for people who have no knowledge of Mandarin Chinese tones when they need to learn to identify these tones. Of all the musical skills measured by the Gold-MSI, the amount of musical training was the only predictor that had an impact on the accuracy of Mandarin tone perception.
These findings suggest that learning to perceive Mandarin tones benefits from musical expertise, and that visual information can facilitate Mandarin tone identification, but mainly for tone-naïve non-musicians. In addition, performance differed by tone: musicality improved accuracy for every tone, and some tones were easier to identify than others: in particular, the identification of tone 3 (the low-falling-rising tone) proved to be the easiest, while tone 4 (the high-falling tone) was the most difficult to identify for all participants. The results of the first two experiments, presented in Chapters 2 and 3, showed that adding visual cues to clear auditory information facilitated tone identification for tone-naïve perceivers (there was a significantly higher accuracy in the audio-visual condition(s) than in the audio-only condition(s)). This visual facilitation was unaffected by the presence of a (hyperarticulated) speaking style or the musical skill of the participants. Moreover, variations in speakers and tones had effects on the accurate identification of Mandarin tones by tone-naïve perceivers. In Chapter 4, we compared the relative contribution of auditory and visual information during Mandarin Chinese tone perception. More specifically, we aimed to answer two questions: firstly, whether or not there is audio-visual integration at the tone level (i.e., we explored perceptual fusion between auditory and visual information); secondly, how visual information affects tone perception for native speakers and non-native (tone-naïve) speakers. To do this, we constructed various tone combinations of congruent (e.g., an auditory tone 1 paired with a visual tone 1, written as AxVx) and incongruent (e.g., an auditory tone 1 paired with a visual tone 2, written as AxVy) auditory-visual materials and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages.
Accuracy, defined as the percentage correct identification of a tone based on its auditory realization, was reported. When comparing the relative contribution of auditory and visual information during Mandarin Chinese tone perception with congruent and incongruent auditory and visual Chinese material for native speakers of Chinese and of non-tonal languages, we found that visual information did not significantly contribute to tone identification for native speakers of Mandarin Chinese. When there is a discrepancy between visual cues and acoustic information, (native and tone-naïve) participants tend to rely more on the auditory input than on the visual cues. Unlike the native speakers of Mandarin Chinese, tone-naïve participants were significantly influenced by the visual information during their auditory-visual integration, and they identified tones more accurately in congruent stimuli than in incongruent stimuli. In line with our previous work, the tone confusion matrix showed that tone identification varies with individual tones, with tone 3 (the low-dipping tone) being the easiest one to identify, whereas tone 4 (the high-falling tone) was the most difficult one. The results did not show evidence for auditory-visual integration among native participants, while visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased the accuracy in the tone identification task, and this increase depended on the tone in question. Chapter 5 also zooms in on the relative strength of auditory and visual information for tone-naïve perceivers, but from the aspect of tone classification. In this chapter, we studied the acoustic and visual features of the tones produced by native speakers of Mandarin Chinese. Computational models based on acoustic features, visual features and acoustic-visual features were constructed to automatically classify Mandarin tones.
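A tone confusion matrix of the kind mentioned above can be built directly from trial data; the produced/responded tones below are hypothetical examples, not data from these experiments:

```python
import numpy as np

def confusion_matrix(produced, responded, n_tones=4):
    """Tone confusion matrix: rows = produced tone, cols = responded tone."""
    cm = np.zeros((n_tones, n_tones), dtype=int)
    for p, r in zip(produced, responded):
        cm[p - 1, r - 1] += 1
    return cm

# Hypothetical trials (Mandarin tones 1-4)
produced = [1, 1, 2, 3, 3, 4, 4, 4]
responded = [1, 2, 2, 3, 3, 4, 1, 1]
cm = confusion_matrix(produced, responded)
accuracy = np.trace(cm) / cm.sum()
```

The diagonal holds correct identifications; off-diagonal cells show which tones are confused with which, e.g. tone 4 misheard as tone 1 in this toy data.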
Moreover, this study examined what perceivers pick up (perception) from what a speaker does (production, facial expression) by studying both production and perception. To be more specific, this chapter set out to answer: (1) which acoustic and visual features of tones produced by native speakers could be used to automatically classify Mandarin tones; (2) whether or not the features used in tone production are similar to or different from the ones that have cue value for tone-naïve perceivers when they categorize tones; and (3) whether and how visual information (i.e., facial expression and facial pose) contributes to the classification of Mandarin tones over and above the information provided by the acoustic signal. To address these questions, the stimuli that had been recorded (and described in Chapter 2) and the response data that had been collected (and reported on in Chapter 3) were used. Basic acoustic and visual features were extracted. Based on them, we used Random Forest classification to identify the most important acoustic and visual features for classifying the tones. The classifiers were trained on produced tone classification (given a set of auditory and visual features, predict the produced tone) and on perceived/responded tone classification (given a set of features, predict the corresponding tone as identified by the participant). The results showed that acoustic features outperformed visual features for tone classification, both for the classification of the produced and the perceived tone. However, tone-naïve perceivers did revert to the use of visual information in certain cases (when they gave wrong responses). So, visual information does not seem to play a significant role in native speakers' tone production, but tone-naïve perceivers do sometimes consider visual information in their tone identification.
These findings provided additional evidence that auditory information is more important than visual information in Mandarin tone perception and tone classification. Notably, visual features contributed to the participants' erroneous performance. This suggests that visual information actually misled tone-naïve perceivers in their task of tone identification. To some extent, this is consistent with our claim that visual cues do influence tone perception. In addition, the ranking of the auditory features and visual features in tone perception showed that the factor perceiver (i.e., the participant) was responsible for the largest amount of variance explained in the responses by our tone-naïve participants, indicating the importance of individual differences in tone perception. To sum up, perceivers who do not have tone in their language background tend to make use of visual cues from the speakers' faces for their perception of unknown tones (Mandarin Chinese in this dissertation), in addition to the auditory information they clearly also use. However, auditory cues are still the primary source they rely on. There is a consistent finding across the studies that the variations between tones, speakers and participants have an effect on the accuracy of tone identification for tone-naïve speakers.
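The Random Forest feature-ranking analysis described above can be sketched with scikit-learn on synthetic data; the feature names and labels below are hypothetical stand-ins for the acoustic and visual features actually extracted in Chapter 5:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-syllable features: an F0 slope (informative for tone
# in this toy setup), plus mean intensity and a lip-aperture summary
# that are pure noise here.
f0_slope = rng.normal(0.0, 1.0, n)
intensity = rng.normal(0.0, 1.0, n)
lip_aperture = rng.normal(0.0, 1.0, n)
tone = (f0_slope > 0).astype(int)  # toy labels: rising vs falling
X = np.column_stack([f0_slope, intensity, lip_aperture])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, tone)
ranking = dict(zip(["f0_slope", "intensity", "lip_aperture"],
                   clf.feature_importances_))
```

Because only `f0_slope` determines the toy labels, the forest's impurity-based importances should rank it far above the noise features, mirroring the acoustic-over-visual ranking reported in the dissertation.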
Optimal measurement of visual motion across spatial and temporal scales
Sensory systems use limited resources to mediate the perception of a great variety of objects and events. Here a normative framework is presented for exploring how the problem of efficient allocation of resources can be solved in visual perception. Starting with a basic property of every measurement, captured by Gabor's uncertainty relation about the location and frequency content of signals, prescriptions are developed for optimal allocation of sensors for reliable perception of visual motion. This study reveals that a large-scale characteristic of human vision (the spatiotemporal contrast sensitivity function) is similar to the optimal prescription, and it suggests that some previously puzzling phenomena of visual sensitivity, adaptation, and perceptual organization have simple principled explanations.
Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional and Intelligent Paradigms, Intelligent Systems Reference Library, Springer-Verlag, Berlin
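Gabor's uncertainty relation, the starting point above, bounds how sharply a signal can be localized simultaneously in time and frequency. With the spreads measured as standard deviations of the signal and of its Fourier transform, it reads:

```latex
% Gabor's uncertainty relation for a signal's temporal spread
% \Delta t and spectral spread \Delta f:
\Delta t \,\Delta f \;\ge\; \frac{1}{4\pi}
```

The spatial analogue (position versus spatial frequency) takes the same form, which is what makes the relation a natural constraint on how motion sensors can tile space and time.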
Head Stabilization and Cortical Activation in Contact Sport Athletes During Walking Under Different Visual Task Constraints
Contact sport participation exposes athletes to repetitive sub-concussive head impacts, which have been shown to elicit cortical neurophysiologic, cognitive, and motor performance alterations that have the potential to disrupt visual perception. Despite the growing concern regarding sub-concussive impacts, our understanding of their implications for motor performance and risk for further injury is limited. A stable head provides a consistent perceptual platform for the visual and vestibular sensory systems, but the effects of contact sport participation on head stability and visual perception remain poorly understood. The goal of this dissertation was to understand whether contact sport participation modifies athletes' ability to stabilize their head in space and the cortical mechanisms associated with these modifications. To address this goal, we asked the following questions: 1) how does contact sport exposure impact an athlete's perception-action capabilities during visually demanding locomotor tasks; and 2) do changes in cortical activity relate to these motor performance changes? To address our questions, a stepwise approach was taken in three studies to understand how repetitive head impact exposure affects movement control, and how this is related to changes in visual perception and cortical activity. First, athletes completed a series of treadmill walking tasks with varying levels of visual task constraints; these constraints increased walking task difficulty to gain insights into how contact sport exposure disrupts the underlying dynamics associated with locomotor tasks and visual perception. By examining coordination, coordination variability, local dynamic stability, and dynamic visual acuity, a more complete understanding of the effects of cumulative sub-concussive impact exposure on locomotor dynamics, and how they relate to perceptual awareness, was achieved.
Next, to establish whether cortical alterations were associated with changes in motor performance, cortical neurophysiology was assessed while athletes completed a series of two postural and two locomotor tasks, some of which (one postural and one locomotor) included a visually demanding cognitive challenge. We aimed to assess cortical activity in athletes during cognitive and motor challenges to further our understanding of the cortical mechanisms associated with behavioral performance in ecologically relevant environments; this was done through lab-based protocols that attempted to mimic real-world environments. Results from the first study indicated that contact sport exposure modifies head control. Contact athletes reduced mediolateral head displacement while increasing vertical head and trunk displacement during locomotor tasks. This may be reflective of the reduced independent head control in the transverse plane revealed through a coordination assessment. The findings from study two highlighted group differences during more demanding fast baseline walking, as indicated by reduced vertical head local dynamic stability in contact sport athletes compared to noncontact athletes during walking at a higher speed than preferred. In addition, contact athletes significantly reduced both upper and lower body coordination variability during locomotor tasks. This lower variability in the contact group for trunk-head coordination was observed while the visual Landolt-C task was imposed, whereas group differences in lower body variability were present across all conditions. Also, contact athletes exhibited more frequent reductions in lower body variability during visual tasks compared to noncontact athletes.
While the beneficial aspects of variability may be task dependent, in the context of sport performance and visual perception higher variability may be indicative of exploiting abundant degrees of freedom, while reductions in variability are suggestive of sub-optimal performance and/or potential for injury. These findings highlight consistent reductions in movement flexibility and adaptability that may result from contact sport participation. In study three, no statistically significant changes in dorsolateral prefrontal cortex activity were present between groups, but moderate differences were observed during postural tasks, where, on average, noncontact athletes increased cortical activity in response to a visual working memory task while contact athletes did not show a response. Similarly, while no statistically significant differences in motor performance were observed, moderate effects were observed for both postural and locomotor performance. Specifically, contact athletes displayed greater average mediolateral center-of-mass velocities during postural tasks, and reduced mediolateral head and trunk local dynamic stability during baseline gait, compared to noncontact athletes. Collectively, across all three studies, no differences in dynamic visual acuity or visual working memory performance were observed. The present studies utilized treadmill walking tasks with varying levels of visual task constraints to assess the consequences of contact sport participation on visual perception, motor performance and cortical neurophysiology. Contact sport athletes exhibited distinct movement dynamics compared to noncontact athletes, including changes in whole-body coordination, variability, and local dynamic stability. Contact athletes reduced independent head control and whole-body coordination variability, which may suggest a modified ability to control joint and segmental degrees of freedom independently.
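Coordination variability of the kind reported above is often quantified as the circular standard deviation of the relative phase between two segments. A minimal numpy sketch (the phase series are hypothetical, and this is only one of several conventions for the measure):

```python
import numpy as np

def coordination_variability(phase_a, phase_b):
    """Circular standard deviation of the relative phase (radians)
    between two segments, e.g. trunk and head.

    phase_a, phase_b: instantaneous phase angles in radians, such as
    those obtained from a Hilbert transform of each segment's motion.
    """
    rel = np.asarray(phase_a) - np.asarray(phase_b)
    # mean resultant length of the relative-phase vectors;
    # clipped at 1.0 to guard against floating-point rounding
    R = min(np.abs(np.mean(np.exp(1j * rel))), 1.0)
    return np.sqrt(-2.0 * np.log(R))  # circular SD (Fisher convention)
```

A perfectly locked coordination pattern (constant relative phase) yields a variability of zero; uncoordinated segments yield a large value.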
Similarly, while limited within-task differences were observed, how athletes responded to the changing constraints differed based on presumed repetitive sub-concussive head impact exposure. Despite prior reports identifying cortical neurophysiologic alterations associated with increased head impact exposure, we observed no statistically significant group differences in dorsolateral prefrontal cortex oxyhemoglobin concentration changes in response to the visual working memory and locomotor tasks imposed in the present study. These findings collectively underscore the intricate nature of the effects of sub-concussive head impact exposure on cortical neurophysiology, motor performance, cognition, and visual perception. While visual task performance did not differ, contact athletes demonstrated reductions in independent segment control and movement adaptability during visually demanding motor tasks, which could potentially heighten the risk of injury during sport-specific tasks, though further study is needed.
The influence of external and internal motor processes on human auditory rhythm perception
Musical rhythm is composed of organized temporal patterns, and the processes underlying rhythm perception are found to engage both auditory and motor systems. Despite behavioral and neuroscience evidence converging on this audio-motor interaction, relatively little is known about the effect of specific motor processes on auditory rhythm perception. This doctoral thesis was devoted to investigating the influence of both external and internal motor processes on the way we perceive an auditory rhythm. The first half of the thesis intended to establish whether overt body movement had a facilitatory effect on our ability to perceive auditory rhythmic structure, and whether this effect was modulated by musical training. To this end, musicians and non-musicians performed a pulse-finding task either using natural body movement or through listening only, and produced their identified pulse by finger tapping. The results showed that overt movement benefited rhythm (pulse) perception, especially for non-musicians, confirming the facilitatory role of external motor activities in hearing the rhythm, as well as its interaction with musical training. The second half of the thesis tested the idea that indirect, covert motor input, such as that transformed from visual stimuli, could influence the perceived structure of an auditory rhythm. Three experiments examined the subjectively perceived tempo of an auditory sequence under different visual motion stimulations, while the auditory and visual streams were presented independently of each other. The results revealed that the perceived auditory tempo was influenced by the concurrent visual motion conditions, and the effect was related to the increment or decrement of visual motion speed. This supported the hypothesis that internal motor information extracted from visuomotor stimulation can be incorporated into the percept of an auditory rhythm.
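The pulse-finding task's tapped responses can be reduced to a produced tempo from the inter-tap intervals; a minimal sketch with hypothetical tap times (the median-based estimator is an assumption, not necessarily the one used in the thesis):

```python
import numpy as np

def tapped_tempo_bpm(tap_times):
    """Estimate the tempo a participant produced from finger-tap onsets.

    tap_times: tap onsets in seconds. Tempo is derived from the median
    inter-tap interval, which is robust to occasional missed or extra taps.
    """
    iti = np.diff(np.sort(np.asarray(tap_times, dtype=float)))
    return 60.0 / np.median(iti)

# Taps roughly every 0.5 s, i.e. a 120 BPM pulse (hypothetical data)
taps = [0.00, 0.50, 1.01, 1.49, 2.00, 2.50]
```

Comparing such produced tempi across movement and listening-only conditions is one way to quantify how well the pulse was identified.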
Taken together, the present thesis concludes that, rather than merely reacting to the given auditory input, our motor system plays an important role in the perceptual processing of auditory rhythm. This can occur via both external and internal motor activities, and may not only influence how we hear a rhythm but also, under some circumstances, improve our ability to hear it.