25 research outputs found

    UV-B Pre-treatment Alters Phenolics Response to Monilinia fructicola Infection in a Structure-Dependent Way in Peach Skin

    Phenolic compounds represent a large class of secondary metabolites involved in multiple functions, not only in the plant life cycle but also in fruit during post-harvest. Phenolics play a key role in the response to biotic and abiotic stresses; their accumulation is therefore regulated by environmental stimuli. The present work aimed to investigate how different UV-B pre-exposures can modulate the phenolic response of peach fruit infected with Monilinia fructicola. Through HPLC-DAD-MSn, several procyanidins, phenolic acids, flavonols, and anthocyanins were detected. Both UV-B radiation and fungal infection stimulated the accumulation of phenolics in a structure-dependent way. Regarding UV-B exposure alone (fruit inoculated with sterile water), the highest concentration of phenolics, especially flavonols and cyanidin-3-glucoside, was found after 3 h of UV-B radiation far from the wound; wounding, however, decreased the phenolics in the nearby region. When peaches were pre-treated with 1 h of UV-B radiation, the fungus had an additive effect on phenolic accumulation far from the infection, while it had a subtractive effect with 3 h of UV-B radiation, especially for flavonols. Canonical discriminant analysis and Pearson correlation revealed that all phenolic compounds, except the procyanidin dimer, were highly regulated by UV-B radiation, with particularly strong correlations for quercetin and kaempferol glycosides, while the phenolics correlated with fungal infection were quercetin-3-galactoside, quercetin-3-glucoside, kaempferol-3-galactoside, and isorhamnetin-3-glucoside. The modulation of pathogen-induced phenolics even far from the inoculation site might suggest a migration of signaling molecules from the infected area to healthy tissues.
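    The Pearson correlation analysis mentioned in the abstract can be sketched as follows; the exposure durations and concentration values below are invented for illustration and are not data from the study:

    ```python
    # Hypothetical sketch: correlating UV-B exposure duration with the
    # concentration of one phenolic compound, as in the Pearson correlation
    # analysis described above. All numbers are invented.
    from scipy.stats import pearsonr

    # UV-B exposure in hours (0, 1, 3 h treatments, three replicates each)
    uvb_hours = [0, 0, 0, 1, 1, 1, 3, 3, 3]
    # quercetin-3-glucoside concentration (arbitrary units) -- invented values
    quercetin = [1.0, 1.1, 0.9, 1.6, 1.5, 1.7, 2.4, 2.6, 2.5]

    r, p = pearsonr(uvb_hours, quercetin)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")
    ```

    A compound "highly regulated by UV-B" in the abstract's sense would show a strong positive r with a small p-value across treatments.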

    CPP_NARPS

    Participation of the CPPL in the NARPS project (https://www.narps.info/).

    pre-registration

    Pre-registration.

    The role of vision in the development of sound symbolism.

    Sound symbolism is the universal capacity to readily match sounds with non-acoustic domains, like shape or size. There is debate on whether these associations are innate or are established during ontogenesis on the basis of experience. In line with the ontogenetic hypothesis, a recent study suggested that the lack of visual experience may prevent the development of sound-symbolic associations. Indeed, other studies have shown a deficit in multisensory (auditory-tactile) integration in early blind people, and the capacity to integrate information across senses might be at the basis of sound symbolism. In this study we tested this hypothesis in two experiments with sighted and early blind participants. Experiment 1 was a 3-D version of the classic Bouba-Kiki experiment, in which spiky and round shapes are spontaneously associated with “spiky” or “round” sounds. Experiment 2 was an Implicit Association Test (IAT) in which participants had to rapidly classify object names (e.g., coin, vase) as “big” or “small” together with the two pseudo-words Mil and Mal (which convey sound-symbolic reference to size). Sound-symbolic effects emerged in both experiments in both sighted and blind participants, and no significant difference was found between the groups. Our results suggest that sound symbolism develops despite the early lack of vision and cast doubt on theories attributing a role to vision in the development of (non-visual) cross-modal correspondences.
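    An IAT congruency effect like the one in Experiment 2 is typically summarized with a D-score. A minimal sketch, assuming the standard Greenwald-style scoring (latency difference between incongruent and congruent pairings divided by the pooled SD); all response times are invented:

    ```python
    # Hypothetical IAT D-score computation. "Congruent" pairs Mil with
    # small objects and Mal with big ones; "incongruent" reverses the keys.
    # The RTs below are invented for illustration.
    import statistics

    congruent_rts   = [620, 650, 600, 640, 610, 630]   # ms, Mil+small / Mal+big
    incongruent_rts = [720, 760, 700, 740, 710, 750]   # ms, Mil+big / Mal+small

    diff = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    d_score = diff / pooled_sd  # positive D = sound-symbolic congruency effect
    print(f"IAT D = {d_score:.2f}")
    ```

    Comparing D-scores between the sighted and blind groups (e.g., with a t-test) would then test for the group difference the abstract reports as non-significant.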

    Sound symbolism in sighted and blind. The role of vision and orthography in sound-shape correspondences

    Non-arbitrary sound-shape correspondences (SSC), such as the "bouba-kiki" effect, have been consistently observed across languages and, together with other sound-symbolic phenomena, challenge the classic linguistic dictum of the arbitrariness of the sign. Yet, it is unclear what makes a sound "round" or "spiky" to the human mind. Here we tested the hypothesis that visual experience is necessary for the emergence of SSC, supported by empirical evidence showing reduced SSC in visually impaired people. Results of two experiments comparing early blind and sighted individuals showed that SSC emerged strongly in both groups. Experiment 2, however, showed a partially different pattern of SSC in the sighted and the blind, which was mostly explained by a different effect of orthographic letter shape: the shape of written letters (spontaneously activated by spoken words) influenced SSC in the sighted, but not in the blind, who are exposed to an orthography (Braille) in which letters do not have spiky or round outlines. In sum, early blindness does not prevent the emergence of SSC, and differences between sighted and visually impaired people may be due to the indirect influence (or lack thereof) of orthographic letter shape.

    Sound symbolism in sighted and blind. The role of orthography and vision in sound-shape correspondences.

    Introduction: Previous studies have suggested that non-arbitrary and seemingly universal sound-shape correspondences (sound symbolism; e.g. the bouba-kiki effect) are based on crossmodal mappings between the sound and the motor program needed to articulate that sound (Ramachandran et al., 2001). Yet, previous studies showed patterns that differ from the predictions of a pure articulatory account, calling for the effect of other factors. We suggest that deviations from articulation-based predictions may be due to the effect of letter shape: when listening to words, people spontaneously activate their orthographic representation, and the shape of written letters influences cross-modal mappings between shape and sound. We call this the ‘Blending Orthography and Articulation Hypothesis’ (BOAH).
    Methods: We tested 39 early blind (EB) and 39 sighted controls (SC). EB are exposed to an orthography (Braille) in which letters do not have spiky or round outlines. Exp. 1 assessed whether vision is necessary to develop sound-symbolic associations. Blind and blindfolded sighted participants (30 per group) were asked to associate the words ‘maluma’ and ‘takete’ with various 3D shapes. Exp. 2 was a direct test of BOAH. Participants (18 per group) rated 240 verbal sounds as ‘round’ or ‘pointy’. The sounds were composed of different consonant classes associated with various degrees of articulatory and orthographic spikiness. Mixed-effect models were used to assess the relative weight of articulatory and orthographic spikiness in predicting sound-shape associations in both the sighted and the blind.
    Results: In Experiment 1 both groups showed a sound-symbolic effect, mapping shapes to words in the expected manner, demonstrating that early loss of vision does not prevent the development of sound symbolism. Our results contrast with previous data which showed a significantly reduced effect in a heterogeneous population of visually impaired people (Fryer et al., 2014). In Experiment 2, as predicted by BOAH, sound-shape associations were better explained in the blind by a pure articulatory model and in the sighted by a model that integrates articulatory and orthographic factors.
    Discussion: We suggest that previously reported patterns of shape-phoneme association can be better explained by an account that blends articulatory and orthographic information (BOAH). Phonological processing of words activates their orthographic representations, which can influence the mapping between phonology and shape. Our results also demonstrate that early loss of vision does not prevent the development of sound symbolism; on the contrary, early blind individuals may be highly sensitive to some iconic features of language. Indeed, sound symbolism may help blind children to solve referential-ambiguity problems during language learning which, in the sighted, are largely solved on the basis of visual statistics (Smith et al., 2010).
    References: Ramachandran VS et al. J Consc Studies (2001) 8(12), 3; Fryer L et al. Cogn (2014) 132(2), 164; Smith LB et al. Cogn Science (2010) 34(7), 128
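    The model comparison in Exp. 2 can be sketched with a mixed-effects regression; the data, variable names, and effect sizes below are all invented, and the study's actual model specification is not given in the abstract:

    ```python
    # Hypothetical sketch of the mixed-effects comparison described above:
    # predicting "pointiness" ratings from articulatory spikiness alone vs.
    # articulatory + orthographic spikiness, with a random intercept per
    # participant. All data and coefficients are invented for illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_sounds = 10, 24
    artic = rng.uniform(0, 1, n_sounds)   # articulatory spikiness per sound
    ortho = rng.uniform(0, 1, n_sounds)   # orthographic spikiness per sound

    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_sounds),
        "artic": np.tile(artic, n_subj),
        "ortho": np.tile(ortho, n_subj),
    })
    # Simulated ratings: both predictors matter, plus per-subject intercepts
    df["rating"] = (0.8 * df["artic"] + 0.5 * df["ortho"]
                    + np.repeat(rng.normal(0, 0.3, n_subj), n_sounds)
                    + rng.normal(0, 0.2, len(df)))

    # Articulation-only vs. blended (BOAH-style) model, fit by ML so the
    # log-likelihoods of the nested models are comparable
    m1 = smf.mixedlm("rating ~ artic", df, groups=df["subject"]).fit(reml=False)
    m2 = smf.mixedlm("rating ~ artic + ortho", df, groups=df["subject"]).fit(reml=False)
    print(f"artic-only logLik: {m1.llf:.1f}   artic+ortho logLik: {m2.llf:.1f}")
    ```

    A likelihood-ratio test (twice the log-likelihood difference against a chi-square with 1 df) would then quantify whether the orthographic term improves the fit, mirroring the blind-vs-sighted contrast reported in the Results.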

    Auditory and visual motion processing in hMT+/V5: preliminary results at 7T fMRI.

    The ability of the brain to integrate motion information originating from separate sensory modalities is fundamental to interacting efficiently with our dynamic environment. The human occipito-temporal region hMT+/V5 is known to be highly specialized for processing visual motion directions. In addition to its role in processing the dominant visual information, it was recently suggested that this region may also engage in crossmodal motion processing from the auditory modality. How multisensory information is represented in this region, however, remains poorly understood. To further investigate the multisensory nature of hMT+/V5, we characterized single-subject activity with ultra-high-field (UHF) fMRI while participants processed horizontal and vertical motion stimuli delivered through vision, audition, or both modalities simultaneously. Our preliminary results confirmed that, in addition to a robust selectivity for visual motion, a portion of hMT+/V5 selectively responds to moving sounds. We are now further characterizing brain activity across the cortical depths using UHF fMRI combined with vascular space occupancy (VASO) recording at high spatial resolution (0.75 mm isotropic). We hypothesize that hMT+/V5 might encode auditory and visual motion information in separate cortical layers, reflecting the feed-forward versus feed-back nature of how sensory information flows into this region. This project will shed new light on how crossmodal information is represented across the depth of the cortical layers of motion-selective human brain areas.