
    Defect Engineering into Metal–Organic Frameworks for the Rapid and Sequential Installation of Functionalities

    Postsynthetic treatments are well-known and important functionalization tools for metal–organic frameworks (MOFs). Herein, we have developed a practical and rapid postsynthetic ligand exchange (PSE) strategy using a defect-controlled MOF. Increasing the number of defects yields MOFs with faster ligand exchange, and near-quantitative exchange was achieved with the most defective MOFs. This PSE strategy is a straightforward method for introducing functionalities into MOFs, including bulky or catalytically relevant moieties. Furthermore, mechanistic insights into PSE were revealed, enabling sequential ligand exchange and the development of multifunctional MOFs with controlled ligand ratios.

    Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex

    Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3–7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior—i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.
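    To make the redundancy/synergy distinction concrete, here is a minimal toy sketch in Python. It is not the paper's estimator (the study uses a continuous, Gaussian-copula measure on MEG data); it only illustrates the two extreme cases with discrete bits: a target that copies a bit carried by both inputs is fully redundant, while the XOR of two independent bits is purely synergistic (each input alone carries zero information, but jointly they determine the target).

```python
import numpy as np

def mi_bits(x, y):
    """Discrete mutual information I(X;Y) in bits, from paired samples."""
    xv, xi = np.unique(x, return_inverse=True)
    yv, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xv), len(yv)))
    np.add.at(joint, (xi, yi), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px * py)[nz])))

rng = np.random.default_rng(0)
n = 100_000

# Redundancy: both "inputs" carry the same bit, which the target copies.
shared = rng.integers(0, 2, n)
a_red, v_red = shared.copy(), shared.copy()
print(mi_bits(a_red, shared), mi_bits(v_red, shared))  # ~1.0 bit each alone
print(mi_bits(a_red * 2 + v_red, shared))              # still ~1.0 jointly: shared, not additive

# Synergy: target is the XOR of two independent bits.
a, v = rng.integers(0, 2, n), rng.integers(0, 2, n)
t_syn = a ^ v
print(mi_bits(a, t_syn), mi_bits(v, t_syn))            # ~0.0 bits each alone
print(mi_bits(a * 2 + v, t_syn))                       # ~1.0 bit jointly
```

    In the redundant case the joint information equals what either input already provides; in the synergistic case it exists only in the pair, which mirrors the pSTG/S-versus-motor-cortex contrast reported in the abstract.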

    Comparison of symptomatic and objective parameters before and after peroral endoscopic myotomy.


    Multivariate analysis of the clinical factors related to recovery of esophageal body peristalsis after peroral endoscopic myotomy.


    MI between auditory and visual speech signals.

    (A) To investigate PID in the “AV congruent” condition, MI between auditory speech and visual speech signals was first computed separately for matching and nonmatching signals. MI for matching auditory-visual speech signals shows a peak around 5 Hz (red line), whereas MI for nonmatching signals is flat (blue line). The underlying data for this figure are available from the Open Science Framework (https://osf.io/hpcj8/). (B) The PID analysis is shown for the “AV congruent” condition, in which both matching and nonmatching auditory-visual speech signals are present in the same brain response (MEG data). Two external speech signals (auditory speech envelope and lip movement signal) and brain signals were used in the PID computation. Each signal was band-pass filtered, followed by a Hilbert transform. MEG, magnetoencephalography; MI, mutual information; PID, partial information decomposition.
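    The filter-then-Hilbert pipeline in the caption can be sketched as follows. This is a simplified, hypothetical reconstruction: `fs`, `audio_env`, and `lip` are assumed inputs (sampling rate, auditory speech envelope, lip-movement signal), and MI is estimated with a basic one-dimensional Gaussian-copula estimator rather than the full complex-valued (real + imaginary parts) estimator used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.special import ndtri  # inverse standard-normal CDF

def copnorm(x):
    """Copula normalization: map ranks onto standard-normal quantiles."""
    ranks = np.argsort(np.argsort(x))
    return ndtri((ranks + 1) / (len(x) + 1))

def gc_mi(x, y):
    """Gaussian-copula MI estimate (bits) between two 1-D signals."""
    r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

def band_analytic(sig, fs, band=(3.0, 7.0)):
    """Band-pass to the theta band, then return the Hilbert analytic signal."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return hilbert(filtfilt(b, a, sig))

# Hypothetical usage, with audio_env and lip of equal length sampled at fs:
# mi_match = gc_mi(np.real(band_analytic(audio_env, fs)),
#                  np.real(band_analytic(lip, fs)))
# mi_nonmatch repeats this with a lip signal from a different trial.
```

    Sweeping `band` across center frequencies and plotting the matching and nonmatching estimates against frequency would produce a spectrum of the kind shown in panel A, with the matching curve peaking near 5 Hz.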

    PID of audiovisual speech processing in the brain.

    (A) Information structure of multisensory audio and visual inputs (sound envelope and lip movement signal) predicting the brain response (MEG signal). Ellipses indicate the total mutual information I(MEG;A,V), the mutual information I(MEG;A), and the mutual information I(MEG;V); the four distinct regions indicate the unique information of auditory speech I_uni(MEG;A), the unique information of visual speech I_uni(MEG;V), redundancy I_red(MEG;A,V), and synergy I_syn(MEG;A,V). See Materials and methods for details. See also Ince [15], Barrett [21], and Wibral and colleagues [22] for general aspects of the PID analysis. (B) Unique information of visual speech and auditory speech was compared to determine the dominant modality in different areas (see S1 Fig for more details). Stronger unique information for auditory speech was found in bilateral auditory, temporal, and inferior frontal areas, and stronger unique information for visual speech was found in bilateral visual cortex (P < 0.05, FDR corrected). The underlying data for this figure are available from the Open Science Framework (https://osf.io/hpcj8/). Figure modified from [15, 21, 22] to illustrate the relationship between stimuli in the present study. FDR, false discovery rate; MEG, magnetoencephalography; PID, partial information decomposition.
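    For reference, the four quantities in panel A satisfy the standard partial-information-decomposition identities (Williams–Beer bookkeeping, notation as in the caption), which make explicit why redundancy is counted once in the total and why each single-input MI splits into a unique and a redundant part:

```latex
\begin{aligned}
I(\mathrm{MEG};A,V) &= I_{\mathrm{uni}}(\mathrm{MEG};A) + I_{\mathrm{uni}}(\mathrm{MEG};V)
                     + I_{\mathrm{red}}(\mathrm{MEG};A,V) + I_{\mathrm{syn}}(\mathrm{MEG};A,V) \\
I(\mathrm{MEG};A)   &= I_{\mathrm{uni}}(\mathrm{MEG};A) + I_{\mathrm{red}}(\mathrm{MEG};A,V) \\
I(\mathrm{MEG};V)   &= I_{\mathrm{uni}}(\mathrm{MEG};V) + I_{\mathrm{red}}(\mathrm{MEG};A,V)
\end{aligned}
```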

    Redundancy and synergy in attention effect.

    Redundancy and synergy in the attention effect (“AV congruent” > “All congruent”) are analyzed. To test whether this effect is specific to the “AV congruent” condition (rather than reflecting decreased information in the “All congruent” condition), we extracted raw values of each information map at the local-maximum voxel and correlated them with speech comprehension accuracy across subjects. (A) Redundancy for the attention effect was observed in left auditory and temporal areas (superior and middle temporal cortices and pSTG/S) and in right inferior frontal and superior temporal cortex (Z-difference map at P < 0.005). (B) Synergistic information for the attention effect was localized to left motor cortex, inferior temporal cortex, and parieto-occipital areas (Z-difference map at P < 0.005). (C) Redundancy at the left posterior superior temporal region in the “AV congruent” condition was positively correlated with speech comprehension accuracy (R = 0.43, P = 0.003); no such correlation was found for the left motor cortex, where synergistic information was represented (R = 0.21, P = 0.18). (D) Synergy at the left motor cortex in the “AV congruent” condition was also positively correlated with speech comprehension accuracy across subjects (R = 0.34, P = 0.02); likewise, no correlation with synergy was found in the left posterior superior temporal region, where redundant information was represented (R = 0.04, P = 0.81). These findings suggest that redundant information in the left posterior superior temporal region and synergistic information in the left motor cortex support better speech comprehension in a challenging audiovisual speech condition. The underlying data for this figure are available from the Open Science Framework (https://osf.io/hpcj8/). N.S., not significant; pSTG/S, posterior superior temporal gyrus/sulcus.
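    The brain–behavior test in panels C and D is an across-subject Pearson correlation. A minimal sketch, with hypothetical placeholder arrays (`redundancy_pstg`, the information value at the peak voxel per subject, and `accuracy`, each subject's comprehension score; the subject count and values are assumptions, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject data, one entry per subject (placeholder values).
rng = np.random.default_rng(1)
n_subjects = 40                        # assumed; not the paper's exact N
redundancy_pstg = rng.normal(size=n_subjects)
accuracy = 0.4 * redundancy_pstg + rng.normal(size=n_subjects)

r, p = pearsonr(redundancy_pstg, accuracy)
print(f"R = {r:.2f}, P = {p:.3f}")     # panel C reports R = 0.43, P = 0.003
```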

    New diagnoses of post-peroral endoscopic myotomy (POEM) motility patterns according to pre-POEM achalasia subtype.


    High resolution manometry showing post-POEM recovery of peristalsis.

    A, Patient with type II achalasia. B, Patient with type III achalasia.