
    Sequencing of Tuta absoluta genome to develop SNP genotyping assays for species identification

    Tuta absoluta is one of the most devastating pests of fresh-market and processing tomatoes. Native to South America, the pest was confined to that continent until 2006, when it was detected in Spain. It has since spread to almost every continent, threatening countries whose economies rely heavily on tomatoes. The insect damages its host plant at all developmental stages, causing crop losses as high as 80–100%. Although T. absoluta has yet to be found in the USA or China, which together account for a large share of global tomato production, computer models project a high likelihood of invasion. To halt the continued spread of T. absoluta and limit economic losses along the tomato supply chain, accurate and efficient identification methods and stronger surveillance programs are needed. Current identification of T. absoluta relies on examining morphology and assessing host plant damage, both of which are difficult to distinguish from those of native tomato pests. To address this need, we sequenced the genomes of T. absoluta and two closely related Gelechiidae, Keiferia lycopersicella and Phthorimaea operculella, and developed a bioinformatic pipeline to design a panel of 21 SNP markers for species identification. The accuracy of the SNP panel was validated in a multiplex format using the iPLEX chemistry of the Agena MassARRAY system. Finally, the new T. absoluta genomic resources we generated can be leveraged to study T. absoluta biology and to develop species-specific management strategies.
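
    In outline, such a diagnostic panel assigns a specimen to a species by comparing its called alleles against species-specific allele profiles. The sketch below illustrates this matching logic in Python; the loci, alleles, and profiles are invented for illustration and are not the published 21-SNP panel.

```python
# Hypothetical illustration of SNP-panel species assignment; the loci,
# alleles, and species profiles below are invented for this sketch and
# are not the published 21-SNP assay.

# Diagnostic allele expected at each panel locus, per species.
PANEL = {
    "Tuta absoluta":           {"snp_01": "A", "snp_02": "G", "snp_03": "T"},
    "Keiferia lycopersicella": {"snp_01": "G", "snp_02": "G", "snp_03": "C"},
    "Phthorimaea operculella": {"snp_01": "A", "snp_02": "C", "snp_03": "C"},
}

def assign_species(genotype: dict) -> tuple[str, float]:
    """Return the species whose diagnostic profile best matches the
    specimen's called alleles, with the fraction of matching loci."""
    best, best_score = "unresolved", 0.0
    for species, profile in PANEL.items():
        called = [locus for locus in profile if locus in genotype]
        if not called:
            continue
        matches = sum(genotype[locus] == profile[locus] for locus in called)
        score = matches / len(called)
        if score > best_score:
            best, best_score = species, score
    return best, best_score

specimen = {"snp_01": "A", "snp_02": "G", "snp_03": "T"}
print(assign_species(specimen))  # ('Tuta absoluta', 1.0)
```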

    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal-processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five 50-ms subranges of stimulus onset asynchrony (SOA; e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with each other and with the ERPs to the unisensory visual control stimuli, separately for attention directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in processing the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies, an occipital selection negativity for attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
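
    One common way to perform such an extraction, sketched below under an additivity assumption (the audiovisual ERP is modeled as the sum of its unisensory constituents), is to subtract the unisensory auditory ERP, time-shifted by the stimulus onset asynchrony, from the audiovisual ERP. The array layout, sampling rate, and function name are assumptions for illustration, not details from the paper.

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed)

def extract_visual_erp(erp_av: np.ndarray, erp_a: np.ndarray,
                       soa_ms: float) -> np.ndarray:
    """Estimate the visual contribution to an audiovisual ERP.

    erp_av : mean ERP to the audiovisual stimulus, time-locked to visual
             onset (channels x samples).
    erp_a  : mean ERP to the unisensory auditory stimulus, time-locked to
             auditory onset (same shape).
    soa_ms : auditory onset relative to visual onset in ms (positive
             means the sound lags the visual stimulus).
    """
    n = erp_a.shape[1]
    shift = int(round(soa_ms * FS / 1000))
    shifted = np.zeros_like(erp_a)
    if shift >= 0:
        shifted[:, shift:] = erp_a[:, :n - shift]
    else:
        shifted[:, :n + shift] = erp_a[:, -shift:]
    # Additivity assumption: AV = A + V, so V = AV - (time-aligned) A.
    return erp_av - shifted
```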

    Multilab EcoFAB study shows highly reproducible physiology and depletion of soil metabolites by a model grass

    There is a dynamic reciprocity between plants and their environment: soil physicochemical properties influence plant morphology and metabolism, and root morphology and exudates shape the environment surrounding the roots. Here, we investigate the reproducibility of plant trait changes in response to three growth environments. We used fabricated ecosystem (EcoFAB) devices to grow the model grass Brachypodium distachyon in three distinct media across four laboratories: phosphate-sufficient and phosphate-deficient mineral media allowed assessment of the effects of phosphate starvation, and a complex, sterile soil extract represented a more natural environment with as yet uncharacterized effects on plant growth and metabolism. Tissue weight and phosphate content, total root length, and root tissue and exudate metabolic profiles were consistent across laboratories and distinct between experimental treatments. Plants grown in soil extract were morphologically and metabolically distinct, with root hairs four times longer than under the other growth conditions. Further, the plants depleted half of the metabolites investigated from the soil extract. To interact with their environment, plants not only adapt their morphology and release complex metabolite mixtures, but also selectively deplete a range of soil-derived metabolites. The EcoFABs used here yielded high interlaboratory reproducibility, demonstrating their value for standardized investigations of plant traits.
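
    The depletion comparison implied above can be made concrete as follows. This is a minimal sketch assuming per-metabolite abundances measured in planted versus plant-free soil extract; the metabolite names, values, and the 50% threshold are invented for illustration.

```python
# Per-metabolite abundances (arbitrary units) in a plant-free control
# extract and in extract incubated with a plant; values are invented.
control    = {"sucrose": 100.0, "citrate": 80.0, "valine": 50.0, "uridine": 40.0}
with_plant = {"sucrose":  12.0, "citrate": 70.0, "valine":  5.0, "uridine": 38.0}

# Fraction of each metabolite removed from the extract by the plant.
depletion = {m: round(1.0 - with_plant[m] / control[m], 3) for m in control}
depleted = [m for m, frac in depletion.items() if frac >= 0.5]

print(depletion)  # e.g. {'sucrose': 0.88, 'citrate': 0.125, ...}
print(depleted)   # metabolites more than half depleted: sucrose, valine
```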

    Depth cues and perceived audiovisual synchrony of biological motion

    Because light and sound propagate at very different speeds, visual and auditory signals from the same external event arrive at the human sensory receptors with a delay that varies consistently with distance; despite this variability, most events are perceived as synchronous. There are, however, contradictory data and claims regarding the existence of mechanisms that compensate for distance in simultaneity judgments. Principal Findings: We used familiar audiovisual events (a visual walker and footstep sounds) and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance on the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies and vision always leading. This effect was stronger when both visual and auditory cues were present but, interestingly, was absent when depth cues were impoverished. Significance: These findings point to an internal mechanism that compensates for audiovisual delays and depends critically on the available depth information.
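
    The physical lag that such a compensatory mechanism would have to absorb follows directly from propagation speeds: light arrives effectively instantaneously, while sound travels at roughly 343 m/s in air, so the natural audio lag grows linearly with distance. A small worked example (the sampled distances are illustrative):

```python
# Expected arrival delay of sound relative to light as a function of
# source distance; light's travel time is negligible at these scales.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def natural_audio_lag_ms(distance_m: float) -> float:
    """Delay of the sound relative to the visual signal, in ms."""
    return distance_m / SPEED_OF_SOUND * 1000.0

for d in (5, 15, 25, 35):
    print(f"{d:>2} m -> sound lags by {natural_audio_lag_ms(d):5.1f} ms")
# At the 35 m maximum used here the natural lag is ~102 ms, on the
# order of the stimulus onset asynchronies presented in the task.
```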

    Audio-Visual Speech Timing Sensitivity Is Enhanced in Cluttered Conditions

    Events encoded in separate sensory modalities, such as audition and vision, can seem synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech with that for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, the two events tend to group perceptually and, as a consequence, to seem synchronous. We revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

    Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. Because this capability is acquired over developmental timescales, the question arises of how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. We show that a model-free reinforcement learning algorithm can indeed learn to perform cue integration, i.e., to weight uncertain cues according to their respective reliabilities, and can do so even when the reliabilities are changing. We also consider the case of causal inference, in which multimodal signals can originate from one object or from multiple separate objects and should not always be integrated. In this case, the learner develops a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
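
    For reference, the Bayesian benchmark against which such a learner is compared weights each cue by its inverse variance: with independent Gaussian-noise cues, the optimal combined estimate is the reliability-weighted average, and its variance is smaller than that of either cue alone. A minimal sketch, with illustrative numbers:

```python
import numpy as np

def integrate_cues(estimates: np.ndarray, sigmas: np.ndarray) -> tuple[float, float]:
    """Combine unimodal estimates whose Gaussian noise has s.d. `sigmas`.

    Returns the integrated estimate and its (reduced) standard deviation.
    """
    reliabilities = 1.0 / sigmas**2                  # inverse variances
    weights = reliabilities / reliabilities.sum()    # normalized weights
    combined = float(np.dot(weights, estimates))
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined, combined_sigma

# A visual cue at 10.0 (sigma 1.0) and an auditory cue at 12.0 (sigma 2.0):
est, sd = integrate_cues(np.array([10.0, 12.0]), np.array([1.0, 2.0]))
print(est, sd)  # 10.4, ~0.894 -- pulled toward the more reliable cue
```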

    Neural Correlates of Visual Motion Prediction

    Predicting the trajectories of moving objects in our surroundings is important in many situations, such as driving, walking, reaching, hunting, and combat. We determined human subjects' performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal, and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance than their activation. The differential activity during motion prediction versus direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those underlying visual motion memory and imagery. Our results set the stage for examining how deficiencies in these networks, such as those caused by aging and mental disorders, affect visual motion prediction and, in turn, mobility-related daily activities.
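
    At its core, the task demands the kind of extrapolation sketched below: given an object's last observed position and velocity, predict where it will be after it is occluded. Constant-velocity motion and the units are assumptions made for illustration; the paper's stimulus parameters are not reproduced here.

```python
import numpy as np

def predict_position(pos: np.ndarray, vel: np.ndarray, dt: float) -> np.ndarray:
    """Linearly extrapolate position `dt` seconds into the future,
    assuming constant velocity."""
    return pos + vel * dt

last_seen = np.array([0.0, 0.0])   # degrees of visual angle (illustrative)
velocity = np.array([8.0, -2.0])   # deg/s (illustrative)
print(predict_position(last_seen, velocity, 0.5))  # [ 4. -1.]
```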

    Effect of Audiovisual Training on Monaural Spatial Hearing in Horizontal Plane

    This article tests the hypothesis that audiovisual integration can improve spatial hearing in monaural conditions, when interaural difference cues are unavailable. We trained one group of subjects on an audiovisual task, in which a flash was presented together with the sound, and another group on an auditory task, in which only sounds from different spatial locations were presented. To check whether the observed audiovisual effect resembled simple feedback, a third group was trained using a visual-feedback paradigm. Training sessions were administered once per day for five days. Performance in each group was compared for auditory-only stimulation on the first and last days of practice. Improvement after audiovisual training was several times greater than after auditory-only practice. The group trained with visual feedback showed a different training effect, with smaller improvement than the audiovisual group. We conclude that cross-modal facilitation is highly effective at improving spatial hearing in monaural conditions and may be applicable to the rehabilitation of patients with unilateral deafness and after unilateral cochlear implantation.

    Incidental sounds of locomotion in animal cognition

    The highly synchronized formations that characterize schooling in fish and the flight of certain bird groups have frequently been explained as reducing energy expenditure. I present an alternative, or complementary, hypothesis: synchronization of group movements may improve hearing perception. Although incidental sounds produced as a by-product of locomotion (ISOL) are an almost constant presence for most animals, their impact on perception and cognition has been little discussed. One consequence of ISOL may be the masking of critical sound signals in the surroundings. Birds in flight may generate significant noise; some produce wing beats that are readily heard on the ground at some distance from the source. Synchronization of group movements might reduce auditory masking through periods of relative silence and facilitate auditory grouping processes. Respiratory locomotor coupling and intermittent flight may be other means of reducing masking and improving hearing perception. A distinct border between ISOL and communicative signals is difficult to delineate. ISOL seems to be used by schooling fish as an aid to staying in formation and avoiding collisions. Bird and bat flocks may use ISOL in an analogous way. ISOL and its interaction with animal perception, cognition, and synchronized behavior provide an interesting area for future study.