Relativistic Stellar Pulsations With Near-Zone Boundary Conditions
A new method is presented here for evaluating approximately the pulsation
modes of relativistic stellar models. This approximation relies on the fact
that gravitational radiation influences these modes only on timescales that are
much longer than the basic hydrodynamic timescale of the system. This makes it
possible to impose the boundary conditions on the gravitational potentials at
the surface of the star rather than in the asymptotic wave zone of the
gravitational field. This approximation is tested here by predicting the
frequencies of the outgoing non-radial hydrodynamic modes of non-rotating
stars. The real parts of the frequencies are determined with an accuracy that
is better than our knowledge of the exact frequencies (about 0.01%) except in
the most relativistic models where it decreases to about 0.1%. The imaginary
parts of the frequencies are determined with an accuracy of approximately M/R,
where M is the mass and R is the radius of the star in question.
Comment: 10 pages (REVTeX 3.1), 5 figs., 1 table, fixed minor typos, published in Phys. Rev. D 56, 2118 (1997)
Inducing a Concurrent Motor Load Reduces Categorization Precision for Facial Expressions
Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.
Individual differences in multisensory integration and timing
The senses have traditionally been studied separately, but it is now recognised that the brain is just as richly multisensory as is our natural environment. This creates fresh challenges for understanding how complex multisensory information is organised and coordinated around the brain. Take timing for example: the sight and sound of a person speaking or a ball bouncing may seem simultaneous, but the neural signals from each modality arrive at different multisensory areas in the brain at different times. How do we nevertheless perceive the synchrony of the original events correctly? It is popularly assumed that this is achieved via some mechanism of multisensory temporal recalibration. But recent work from my lab on normal and pathological individual differences shows that sight and sound are nevertheless markedly out of synch by different amounts for each individual, and even for different tasks performed by the same individual. Indeed, the same individual may perceive the same multisensory event as having an auditory lead and an auditory lag at the same time. This evidence of apparent temporal disunity sheds new light on the deep problem of understanding how neural timing relates to perceptual timing of multisensory events. It also leads to concrete therapeutic applications: for example, we may now be able to improve an individual's speech comprehension by simply delaying sound or vision to compensate for their individual perceptual asynchrony.
On the Geometry of Planar Domain Walls
The geometry of planar domain walls is studied. It is argued that the planar
walls indeed have plane symmetry. In the Minkowski coordinates the walls are
mapped into revolution paraboloids.
Comment: 11 pages, LaTeX
Correlation of individual differences in audiovisual asynchrony across stimuli and tasks: new constraints on Temporal Renormalization theory
Sight and sound are out of synch in different people by different amounts for different tasks. But surprisingly, different concurrent measures of perceptual asynchrony correlate negatively (Freeman, Ipser et al., 2013. Cortex 49, 2875–2887): thus if vision subjectively leads audition in one individual, the same individual might show a visual lag in other measures of audiovisual integration (e.g. McGurk illusion, Stream-Bounce illusion).
This curious negative correlation was first observed between explicit temporal order judgements and implicit phoneme identification tasks, performed concurrently as a dual task, using incongruent McGurk stimuli. Here we used a new set of different explicit and implicit tasks and congruent stimuli, to test whether this negative correlation persists across testing sessions, and whether it might be an artefact of using specific incongruent stimuli. None of these manipulations eliminated the negative correlation between explicit and implicit measures. This supports the generalisability and validity of the phenomenon, and offers new theoretical insights into its explanation.
Our previously proposed 'temporal renormalization' theory assumes that the timings of sensory events registered within the brain's different multimodal sub-networks are each perceived relative to a representation of the typical average timing of such events across the wider network. Our new data suggest that this representation is stable and generic, rather than dependent on specific stimuli or task contexts, and that it may be acquired through experience with a variety of simultaneous stimuli. Our results also add further evidence that speech comprehension may be improved in some individuals by artificially delaying voices relative to lip-movements.
Cosmic balloons
Cosmic balloons, consisting of relativistic particles trapped inside a
spherical domain wall, may be created in the early universe. We calculate the
balloon mass as a function of the radius and the energy density
profile, including the effects of gravity. At the maximum balloon
mass for any value of the mass density of the wall.
Comment: 9 pages, LaTeX, 2 figures in separate file, UPTP-93-1
Exemplar variance supports robust learning of facial identity
Differences in the visual processing of familiar and unfamiliar faces have prompted considerable interest in face learning, the process by which unfamiliar faces become familiar. Previous work indicates that face learning is determined in part by exposure duration; unsurprisingly, viewing faces for longer affords superior performance on subsequent recognition tests. However, there has been further speculation that exemplar variation, experience of different exemplars of the same facial identity, contributes to face learning independently of viewing time. Several leading accounts of face learning, including the averaging and pictorial coding models, predict an exemplar variation advantage. Nevertheless, the exemplar variation hypothesis currently lacks empirical support. The present study therefore sought to test this prediction by comparing the effects of unique exemplar face learning - a condition rich in exemplar variation - and repeated exemplar face learning - a condition that equates viewing time, but constrains exemplar variation. Crucially, observers who received unique exemplar learning displayed better recognition of novel exemplars of the learned identities at test, than observers in the repeated exemplar condition. These results have important theoretical and substantive implications for models of face learning and for approaches to face training in applied contexts.
Individual differences in audiovisual integration and timing
Sight and sound are processed in different parts of the brain and at different times, creating discrepancies between the relative arrival time of auditory and visual information at primary and multisensory cortices. Despite this, a commonly accepted view is that the brain strives for and achieves temporal unity across different sensory modalities. Using individual differences in subjective synchrony and audiovisual temporal processing, this thesis examines whether audiovisual synchronisation across different audiovisual processes is ever actually achieved and whether the timing of multisensory events is supported by unified or disparate mechanisms. Chapter 2 examines whether estimates of subjective synchrony across audiovisual integration and explicit temporal judgements are consistent within and between individuals. This chapter finds remarkable disunity in subjective audiovisual timing within individuals, characterised by negatively correlated estimates of perceptual asynchrony across tasks, which challenge existing accounts of how the nervous system maintains temporal coherence. Instead, a new theory of temporal renormalisation is proposed, whereby the relative timing of audiovisual signals within different mechanisms is perceived relative to the average timing across mechanisms. Chapter 3 reveals that individual differences in audiovisual synchronisation across different tasks are reflected in the structural variability of distinct brain clusters, suggesting that audiovisual relative timing is processed by multiple task-specific temporal mechanisms, whose performance is supported by distinct neural substrates. Chapter 4 explores the possibility that these perceptual mechanisms might contribute to reading ability, which is audiovisual in nature. Aspects of audiovisual temporal processing are found to be impaired in dyslexia and linearly related to reading ability. 
Altogether, this thesis provides novel contributions to the understanding of the underlying mechanisms of audiovisual temporal processing, as well as its relationship to higher cognitive functions.
Possible types of the evolution of vacuum shells around the de Sitter space
All possible evolution scenarios of a thin vacuum shell surrounding the
spherically symmetric de Sitter space have been determined and the
corresponding global geometries have been constructed. Such configurations can
appear at the final stage of the cosmological phase transition, when isolated
regions (islands) of the old vacuum remain. The islands of the old vacuum are
absorbed by the new vacuum, expand unlimitedly, or form black holes and
wormholes depending on the sizes of the islands as well as on the density and
velocity of the shells surrounding the islands.
Comment: 3 pages, 1 figure