22 research outputs found

    Research and Education in Parallel: Scientific Outreach through On-site Experiments at the Museum of Science Boston Living Laboratory

    Our world is multisensory, as is our perception of it: we see lips move while we listen to what is being said, we smell food as we taste it, and we feel a surface's texture as we touch and look at it. Cross-modal sensory processing, the way information from our different senses interacts and influences our conscious perceptual experience, is ubiquitous in everyday life; yet it is not well understood compared to unimodal sensory processing, the processing of information from a single sensory system. One of our research goals is to standardize a set of computer-generated stimuli and procedures to study sound-shape correspondences across different ages, ranging from young children and adolescents to younger adults and elderly participants.

    Cross-Modal Attention Influences Auditory Contrast Sensitivity: Decreasing Visual Load Improves Auditory Thresholds for Amplitude- and Frequency-Modulated Sounds

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation—two consecutive intervals of streams of visual letters—and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower—that is, auditory sensitivity was improved—for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
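The two-interval detection thresholds described above are typically estimated with an adaptive procedure. The sketch below shows one common approach, a 2-down/1-up staircase that converges near 70.7% correct; it is illustrative only and not the authors' method — the staircase rule, step size, and toy observer model are all assumptions for demonstration.

```python
import random

def two_down_one_up_staircase(true_threshold, start=1.0, step=0.1,
                              n_reversals=8, seed=0):
    """Estimate a detection threshold with a 2-down/1-up staircase,
    which converges near 70.7% correct.

    A toy simulated observer detects the modulated interval whenever the
    modulation depth is at or above `true_threshold`; below it, the
    observer guesses between the two intervals (chance = 50%).
    """
    rng = random.Random(seed)
    level = start
    correct_streak = 0
    reversals = []
    last_direction = None
    while len(reversals) < n_reversals:
        # Simulated two-interval forced-choice trial.
        if level >= true_threshold:
            correct = True
        else:
            correct = rng.random() < 0.5  # guessing between two intervals
        if correct:
            correct_streak += 1
            if correct_streak < 2:
                continue                   # need two in a row to move down
            correct_streak = 0
            direction = "down"             # two correct -> make task harder
            level = max(level - step, 0.0)
        else:
            correct_streak = 0
            direction = "up"               # one error -> make task easier
            level += step
        if last_direction is not None and direction != last_direction:
            reversals.append(level)        # record level at each reversal
        last_direction = direction
    # Threshold estimate: mean level over the last reversals.
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

With `true_threshold=0.35`, the staircase descends quickly while the level is clearly detectable and then oscillates around the threshold, so the mean of the final reversal levels lands near 0.35.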

    Look where you go: characterizing eye movements toward optic flow


    An Examination of the Interaction of Attention and Sound-Shape Correspondences with Developmental and Neurophysiological Approaches

    Sound-shape correspondence, or the ‘bouba-kiki’ effect, refers to the non-arbitrary association of spiky (or rounded) abstract shapes with nonsense words like /kiki/ (or /bouba/), reported across cultures, languages, and ages. Current models explaining the ‘bouba-kiki’ effect have, for the most part, omitted the role of selective attention — the selection of information — and how it might impact the processing of features across the senses. To address this research gap, I conducted three series of experiments that examined how attentional factors modulated explicit and implicit measures of the ‘bouba-kiki’ effect. In the first two series of experiments (Chapters 2 & 3), I explored which features were associated, and how strongly, in children and adults at the Living Laboratory®, Museum of Science Boston. Participants were asked to match seen abstract shapes, or haptically explored unseen abstract shapes, with auditorily presented nonsense words — an explicit task. When not given information about which shape feature to attend to, 6-8 year-olds either exhibited associations based on different visual features than older children and young adults, or exhibited at-chance associations. When children’s attention was directed towards the relevant shape features, either via contextual cues or via visual experience with abstract shapes they would later explore haptically, 6-8 year-olds exhibited adult-like associations, highlighting the role of attention in explicit representations of sound-shape correspondences. In the third series of experiments (Chapter 4), I examined how neural processing of visual shapes could be boosted in adults by sounds in accord with sound-shape correspondence, using steady-state visual evoked potentials measured by electroencephalography — an implicit measure not requiring conscious report of matching features. I found that voluntary attention towards shapes and sounds is not needed to establish an implicit representation of sound-shape correspondence in adults. Nonetheless, this implicit response could be impaired if participants’ attentional resources were depleted. Together these findings converge to suggest that selective attention plays a critical role in the representation of sound-shape correspondence, measured both explicitly and implicitly, with implications for a unified model integrating selective attention and general multisensory processing.

    The effect of perceptual grouping on selective attention

    Perceptual grouping plays an indispensable role in the distribution of attention. An example of this interaction is impaired visual search performance when the target overlaps with a task-irrelevant salient distractor organized into a snake-like configuration of collinear bars, provided the collinear distractor is long enough (Jingling & Tseng, 2013). This phenomenon is puzzling because it runs counter to our understanding of attentional capture, which predicts search facilitation rather than impairment. To better understand the interaction between perceptual grouping and attention, the current research probed the possible neural stage of this collinear search-impairment effect. In Study 1, the distractor column of the search display was split between the two eyes: one eye saw a distractor of varied length (1, 5, or 9 bars) while the other eye saw the rest of the distractor column. When both eyes were properly fused, observers saw a search display containing a 9-bar distractor. Observers were asked to identify the orientation of a target gap that could be overlapping or non-overlapping with the distractor. Search impairment was found to be dominated by the monocular collinear distractor length. In Study 2, a 9-bar distractor was shown to one eye while strongly flashing color patches were shown to the other eye (continuous flash suppression), such that part of the distractor was suppressed from observers’ awareness. Invisible collinear distractor parts enhanced search impairment, suggesting that awareness of the distractor is not necessary for the effect. Results from both studies converge to suggest that the effect of collinear grouping on attention likely arises at early visual sites such as V1, where monocular information, but not awareness, is processed. This highlights the need to incorporate perceptual grouping into current salience-based attention models.

    Noise Exclusion Ability in Infants

    An important perceptual ability is filtering out background distractions from relevant information, yet prior research has not identified when this ability first emerges in humans. Our study investigated whether noise exclusion occurs in infancy. Infants' contrast sensitivity function (CSF) was measured with a Bayesian adaptive inference method. Infants' attention was first directed to the middle of a monitor, after which an 8.72-degree static Gabor grating was presented on the left or right side of the monitor. In half the trials, the grating was presented against a gray background; in the other half, against a 16% contrast random-dot noise background. The experimenter and two independent coders judged which side the infants gazed at (forced-choice preferential looking paradigm). One hundred infants aged 4 to 10 months satisfied the 70% interrater consistency criterion for inclusion. Four parameters defined the best-fitting CSF for each infant. Of these, the peak spatial frequency, bandwidth, and truncation of the CSF were similar in conditions with and without noise. The peak-gain estimate was the most significantly impaired by external noise, but a marked 31% improvement was observed in 7- to 10-month-olds. This may be the first sign of the development of humans' noise exclusion ability and warrants further study.
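The four CSF parameters named above (peak gain, peak spatial frequency, bandwidth, truncation) correspond to the truncated log-parabola commonly used in Bayesian adaptive CSF methods such as the quick CSF (Lesmes et al., 2010). The sketch below shows that functional form; the abstract does not specify the exact model or parameterization used in this study, so the formula and parameter names here are illustrative assumptions.

```python
import math

LOG2 = math.log10(2.0)  # one octave in log10 units

def truncated_log_parabola_csf(f, peak_gain, peak_freq, bandwidth_oct, truncation):
    """Contrast sensitivity at spatial frequency f (cycles/degree) under a
    four-parameter truncated log-parabola (illustrative parameterization):

    peak_gain     : maximum sensitivity (1 / contrast threshold)
    peak_freq     : spatial frequency of peak sensitivity (c/deg)
    bandwidth_oct : full bandwidth at half maximum, in octaves
    truncation    : low-frequency plateau, in log10 units below the peak
    """
    log_f, log_f0 = math.log10(f), math.log10(peak_freq)
    half_bw = (bandwidth_oct * LOG2) / 2.0        # half bandwidth in log10 units
    # Log-parabola: sensitivity falls by a factor of 2 at +/- half_bw.
    log_s = math.log10(peak_gain) - LOG2 * ((log_f - log_f0) / half_bw) ** 2
    if f < peak_freq:
        # Truncate the low-frequency limb at a fixed plateau below the peak.
        log_s = max(log_s, math.log10(peak_gain) - truncation)
    return 10 ** log_s
```

Under this form, a selective loss of peak gain in noise (as reported above) lowers the whole curve vertically while leaving its shape — peak frequency, bandwidth, and truncation — unchanged.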

    The Effects of Emotional Target and Mood State of Participants on Attentional Blink

    Previous studies have found that the attentional blink (AB), a failure to report targets presented temporally close to each other, can be attenuated separately by (1) emotionally significant test stimuli (T2) and (2) the emotional state of the observer. In the present study, we asked whether and how (1) and (2) interact. Participants were induced into either a positive or a negative mood with music and asked to complete an AB task in which low-arousal positive, neutral, and negative words served as T2. We found that low-arousal negative words reduced the AB significantly more than other words did, while no main effect of or interaction with mood was observed. However, when we repeated the experiment with high-arousal words in place of low-arousal ones, we not only replicated the advantage of negative words over others but also detected an effect of the observer's mood: participants who were induced to become happier with music detected T2 better across lags and word categories than did participants who became sadder. Our findings suggest an interaction between the arousal level of the emotional target and the induced mood of participants, although the underlying mechanisms responsible for this effect require further investigation.

    Arginine Is a Novel Drug Target for Arginine Decarboxylase in Human Colorectal Cancer Cells

    Colorectal cancer (CRC) has been shown to be highly reliant on arginine availability. Limiting arginine-rich foods or treating patients with the arginine-depleting enzymes arginine deiminase (ADI) or arginase can suppress colon cancer. However, arginase and ADI are not the best drug candidates for CRC. Ornithine, the product of arginase, can enhance the supply of polyamines, which favors CRC cell growth, while citrulline, the product of ADI, faces the problem of arginine recycling due to the overexpression of argininosuccinate synthetase (ASS). Biosynthetic arginine decarboxylase (ADC), an enzyme that catalyzes the conversion of arginine to agmatine and carbon dioxide, may be a better choice, as it combines arginine depletion with suppression of intracellular polyamine synthesis via its product agmatine. ADC has anti-tumor potential yet has received much less attention than the other two arginine-depleting enzymes. To gain a better understanding of ADC, this study explored the preparation and anti-cancer properties of the enzyme. When tested in vitro, ADC inhibited the proliferation of three colorectal cancer cell lines regardless of their cellular ASS expression. In contrast, ADC had a lesser cytotoxic effect on human foreskin fibroblasts and rat primary hepatocytes. Further in vitro studies revealed that ADC induced S and G2/M phase cell-cycle arrest and apoptosis in HCT116 and LoVo cells. ADC-induced apoptosis in HCT116 cells followed the mitochondrial apoptotic pathway and was caspase-3-dependent. Taken together, these results suggest that arginine is a potential target for treating colorectal cancer with ADC, and that the anti-cancer properties of ADC warrant deeper investigation in the future.

    Overexpression of the ASN1
