218 research outputs found
Time Distortions in Mind
Time Distortions in Mind brings together current research on temporal processing in clinical populations to elucidate the interdependence between perturbations in timing and disturbances in the mind and brain. For the student and the scientist alike, it serves as a stepping-stone for further research.
Antitumor activity from antigen-specific CD8 T cells generated in vivo from genetically engineered human hematopoietic stem cells
The goal of cancer immunotherapy is the generation of an effective, stable, and self-renewing antitumor T-cell population. One such approach involves the use of high-affinity cancer-specific T-cell receptors in gene-therapy protocols. Here, we present the generation of functional tumor-specific human T cells in vivo from genetically modified human hematopoietic stem cells (hHSC) using a human/mouse chimera model. Transduced hHSC expressing an HLA-A*0201–restricted melanoma-specific T-cell receptor were introduced into humanized mice, resulting in the generation of a sizeable melanoma-specific naïve CD8+ T-cell population. Following tumor challenge, these transgenic CD8+ T cells, in the absence of additional manipulation, limited and cleared human melanoma tumors in vivo. Furthermore, the genetically enhanced T cells underwent proper thymic selection, because we did not observe any responses against non–HLA-matched tumors, and no killing of any kind occurred in the absence of a human thymus. Finally, the transduced hHSC established long-term bone marrow engraftment. These studies present a potential therapeutic approach and an important tool to better understand and optimize the human immune response to melanoma and, potentially, to other types of cancer.
Grouping by feature of cross-modal flankers in temporal ventriloquism
Signals in one sensory modality can influence perception in another; for example, audition can bias visual timing, a phenomenon known as temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing shifts to that of the nearby sound. Alternatively, the underlying sensory representations do not change; rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with temporal information-based grouping in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants’ reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants’ performance improved with asynchronous relative to synchronous presentation (temporal ventriloquism); however, unlike in the long-range apparent-motion paradigm, this improvement was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences at short time scales.
Altered multisensory temporal integration in obesity
Eating is a multisensory behavior. The act of placing food in the mouth provides us with a variety of sensory information, including gustatory, olfactory, somatosensory, visual, and auditory inputs. Evidence suggests altered eating behavior in obesity. Nonetheless, multisensory integration in obesity has so far been scarcely investigated. Starting from this gap in the literature, we sought to provide the first comprehensive investigation of multisensory integration in obesity. Twenty obese male participants and twenty healthy-weight male participants took part in a study aimed at describing the multisensory temporal binding window (TBW). The TBW is defined as the range of stimulus onset asynchronies within which multiple sensory inputs have a high probability of being integrated. To investigate possible multisensory temporal processing deficits in obesity, we examined performance in two audiovisual temporal tasks, namely simultaneity judgment and temporal order judgment. Results showed a wider TBW in obese participants than in healthy-weight controls. This held true for both the simultaneity judgment and the temporal order judgment tasks. An explanatory hypothesis concerns the effect of the metabolic alterations and low-grade inflammatory state clinically observed in obesity on the temporal organization of ongoing brain activity, which is one of the neural mechanisms enabling multisensory integration.
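The TBW described in this abstract is typically estimated by fitting a bell-shaped curve to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs). The sketch below is a minimal, stdlib-only illustration with made-up data and a coarse grid search; the SOA values, proportions, and the full-width-at-half-maximum definition of the TBW are all assumptions for illustration, and published analyses instead use maximum-likelihood fits of proper psychometric functions.

```python
import math

def gaussian(soa, amp, mu, sigma):
    # Predicted proportion of "simultaneous" responses at a given SOA (ms)
    return amp * math.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def fit_sj_curve(soas, p_simult):
    # Coarse grid search minimising squared error; a real analysis would
    # fit a psychometric function by maximum likelihood instead.
    amp = max(p_simult)
    best = None
    for mu in range(-100, 101, 5):
        for sigma in range(20, 401, 5):
            err = sum((gaussian(s, amp, mu, sigma) - p) ** 2
                      for s, p in zip(soas, p_simult))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    # TBW taken here as the full width at half maximum of the fitted curve
    tbw = 2 * math.sqrt(2 * math.log(2)) * sigma
    return mu, sigma, tbw

# Hypothetical SJ data: SOAs in ms (negative = sound first) and the
# observed proportion of "simultaneous" responses at each SOA.
soas = [-300, -200, -100, 0, 100, 200, 300]
p    = [0.05, 0.30, 0.80, 0.95, 0.85, 0.35, 0.10]
mu, sigma, tbw = fit_sj_curve(soas, p)
```

Under this sketch, a "wider TBW" in one group than another simply means a larger fitted width (here, a larger `tbw`) for that group's simultaneity curve.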
The COGs (context, object, and goals) in multisensory processing
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review, we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Viral complementation allows HIV-1 replication without integration
Background: The integration of HIV-1 DNA into cellular chromatin is required for high levels of viral gene expression and for the production of new virions. However, the majority of HIV-1 DNA remains unintegrated and is generally considered a replicative dead-end. A limited amount of early gene expression from unintegrated DNA has been reported, but viral replication does not proceed further in cells which contain only unintegrated DNA. Multiple infection of cells is common, and cells that are productively infected with an integrated provirus frequently also contain unintegrated HIV-1 DNA. Here we examine the influence of an integrated provirus on unintegrated HIV-1 DNA (uDNA). Results: We employed reporter viruses and quantitative real-time PCR to examine gene expression and virus replication during coinfection with integrating and non-integrating HIV-1. Most cells which contained only uDNA displayed no detected expression from fluorescent reporter genes inserted into early (Rev-independent) and late (Rev-dependent) locations in the HIV-1 genome. Coinfection with an integrated provirus resulted in a severalfold increase in the number of cells displaying uDNA early gene expression and efficiently drove uDNA into late gene expression. We found that coinfection generates virions which package and deliver uDNA-derived genomes into cells; in this way uDNA completes its replication cycle by viral complementation. uDNA-derived genomes undergo recombination with the integrated provirus-derived genomes during second-round infection. Conclusion: This novel mode of retroviral replication allows survival of viruses which would otherwise be lost because of a failure to integrate, amplifies the effective amount of cellular coinfection, increases the replicating HIV-1 gene pool, and enhances the opportunity for diversification through errors of polymerization and recombination.
Do the colors of your letters depend on your language? Language-dependent and universal influences on grapheme-color synesthesia in seven languages
Grapheme-color synesthetes experience graphemes as having a consistent color (e.g., “N is turquoise”). Synesthetes’ specific associations (which letter is which color) are often influenced by linguistic properties such as phonetic similarity, color terms (“Y is yellow”), and semantic associations (“D is for dog and dogs are brown”). However, most studies of synesthesia use only English-speaking synesthetes. Here, we measure the effect of color terms, semantic associations, and non-linguistic shape-color associations on synesthetic associations in Dutch, English, Greek, Japanese, Korean, Russian, and Spanish. The effect size of linguistic influences (color terms, semantic associations) differed significantly between languages. In contrast, the effect size of nonlinguistic influences (shape-color associations), which we predicted to be universal, indeed did not differ between languages. We conclude that language matters (outcomes are influenced by the synesthete’s language) and that synesthesia offers an exceptional opportunity to study influences on letter representations in different languages.
No effect of synesthetic congruency on temporal ventriloquism
A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). Recent studies have suggested that pitch–size synesthetic congruency (i.e., a natural association between the relative pitch of a sound and the relative size of a visual stimulus) might affect this phenomenon. To reexamine this, participants made temporal order judgements about small- and large-sized visual stimuli while high- or low-pitched tones were presented before the first and after the second light. We replicated a previous study showing that, at large sound–light intervals, sensitivity for visual temporal order was better for synesthetically congruent than for incongruent pairs. However, this congruency effect could not be attributed to temporal ventriloquism, since it disappeared at short sound–light intervals when compared with a synchronous audiovisual baseline condition that excluded response biases. In addition, synesthetic congruency did not affect temporal ventriloquism even when participants were made explicitly aware of congruency before testing. Our results thus challenge the view that synesthetic congruency affects temporal ventriloquism.
Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions
Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.
Exposure to delayed visual feedback of the hand changes motor-sensory synchrony perception
We examined whether the brain can adapt to temporal delays between a self-initiated action and the naturalistic visual feedback of that action. During an exposure phase, participants tapped with their index finger while seeing their own hand in real time (~0 ms delay) or delayed at 40, 80, or 120 ms. Following exposure, participants were tested with a simultaneity judgment (SJ) task in which they judged whether the video of their hand was synchronous or asynchronous with respect to their finger taps. The locations of the seen and the real hand were either different (Experiment 1) or aligned (Experiment 2). In both cases, the point of subjective simultaneity (PSS) was uniformly shifted in the direction of the exposure lags while sensitivity to visual-motor asynchrony decreased with longer exposure delays. These findings demonstrate that the brain is quite flexible in adjusting the timing relation between a motor action and the otherwise naturalistic visual feedback that this action engenders.
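The PSS shift reported above is read off the fitted simultaneity curve: the PSS is the SOA at which observers are most likely to report synchrony, so adaptation to a delayed feedback lag shows up as a displacement of the curve's centre toward that lag. Below is a minimal sketch with invented response proportions, using a crude probability-weighted mean SOA as the PSS estimate; an actual analysis would fit a psychometric function instead, and every number here is hypothetical.

```python
def pss_estimate(soas, p_sync):
    # Crude PSS estimate: probability-weighted mean SOA (the "centre of
    # mass" of the simultaneity curve); real studies fit a psychometric
    # function and take its peak or midpoint.
    return sum(s * p for s, p in zip(soas, p_sync)) / sum(p_sync)

# Hypothetical SJ data: SOAs in ms (positive = video lags the tap) and
# the proportion of "synchronous" reports before and after exposure.
soas     = [-240, -160, -80, 0, 80, 160, 240]
baseline = [0.10, 0.35, 0.80, 0.95, 0.75, 0.30, 0.08]
adapted  = [0.05, 0.20, 0.55, 0.90, 0.92, 0.55, 0.20]  # after delayed feedback

# A positive shift means the PSS moved toward the exposed delay,
# mirroring the adaptation effect described in the abstract.
shift = pss_estimate(soas, adapted) - pss_estimate(soas, baseline)
```

With these invented numbers the shift comes out at roughly 40 ms, i.e. a video lag of that size would now feel synchronous with the tap.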