
    The relation of H2CO, 12CO, and 13CO in molecular clouds

    Aims. We seek to understand how the 4.8 GHz formaldehyde absorption line is distributed in the MON R2, S156, DR17/L906, and M17/M18 regions. More specifically, we look for the relationship among the H2CO, 12CO, and 13CO spectral lines. Methods. The four regions of MON R2 (60'x90'), S156 (50'x70'), DR17/L906 (40'x60'), and M17/M18 (70'x80') were observed in H2CO (beam 10'), the H110α recombination line (beam 10'), the 6 cm continuum (beam 10'), 12CO (beam 1'), and 13CO (beam 1'). We compared the H2CO, 12CO, 13CO, and continuum distributions, as well as the spectral line parameters of H2CO, 12CO, and 13CO. Column densities of H2CO, 13CO, and H2 were also estimated. Results. We found that the H2CO distribution is similar to the 12CO and 13CO distributions on large scales. The correlation between the 13CO and H2CO distributions is better than that between the 12CO and H2CO distributions. The H2CO and 13CO tracers systematically provide consistent views of the dense regions. Their maps have similar shapes, sizes, and peak positions, and their molecular spectra present similar central velocities and line widths. Such good agreement indicates that the H2CO and 13CO lines arise from similar regions. Comment: 21 pages, 12 figures, published, 201
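    The abstract states that column densities of H2CO, 13CO, and H2 were estimated but does not give the relations used. A minimal LTE sketch for the 13CO(1-0) line, assuming an optically thick 12CO(1-0) line, a 2.7 K background, and a commonly adopted abundance ratio (all assumptions, not taken from the paper), would read:

    \[ T_{\mathrm{ex}} = \frac{5.53}{\ln\!\left[1 + 5.53/\left(T_{\mathrm{mb}}^{12} + 0.819\right)\right]}\ \mathrm{K} \]
    \[ \tau_{13} = -\ln\!\left[1 - \frac{T_{\mathrm{mb}}^{13}}{5.29}\left(\frac{1}{e^{5.29/T_{\mathrm{ex}}} - 1} - 0.164\right)^{-1}\right] \]
    \[ N(^{13}\mathrm{CO}) \approx 2.42\times10^{14}\,\frac{T_{\mathrm{ex}}}{1 - e^{-5.29/T_{\mathrm{ex}}}}\int \tau_{13}\,dv\ \ \mathrm{cm^{-2}}, \qquad N(\mathrm{H_2}) \approx 7\times10^{5}\,N(^{13}\mathrm{CO}) \]

    Here T_mb^12 and T_mb^13 are peak main-beam temperatures in K and the integral is over velocity in km/s; the numerical coefficients and the H2-to-13CO abundance ratio are typical literature values and may differ from those adopted in the paper.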

    MVP: Multi-task Supervised Pre-training for Natural Language Generation

    Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. Up to now, most NLG-oriented PLMs have been pre-trained in an unsupervised manner on large-scale general corpora. Meanwhile, an increasing number of models pre-trained with labeled data (i.e., "supervised pre-training") show superior performance compared to unsupervised pre-trained models. Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation. We collect a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks. Then we unify these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner. For each task, we further pre-train task-specific soft prompts to stimulate the model's capacity to perform that task. Our MVP model can be viewed as applying recent instruction-tuning ideas to relatively small PLMs. Extensive experiments demonstrate the effectiveness and generality of our MVP model on a number of NLG tasks: it achieves state-of-the-art performance on 13 out of 17 datasets, outperforming BART by 9.3% and Flan-T5 by 5.8%. Comment: Accepted by ACL 202
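    The abstract describes unifying examples from many NLG tasks into a single text-to-text format before supervised multi-task pre-training. The exact formatting scheme is not given in the abstract, so the following is only a minimal sketch; the field names, task-prefix convention, and helper names are hypothetical and not taken from the MVP paper or its released code.

    # Minimal sketch: cast heterogeneous NLG examples into one text-to-text format
    # so a single seq2seq model can be pre-trained on all tasks at once.
    # Field names and the task-prefix convention below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Example:
        task: str      # e.g. "summarization", "data-to-text", "dialogue"
        source: str    # raw input (document, linearized table, dialogue history, ...)
        target: str    # reference output text

    def to_text_to_text(example: Example) -> tuple[str, str]:
        """Prepend a task descriptor so every task shares the same input/output form."""
        return f"{example.task}: {example.source}", example.target

    # Usage: convert every dataset the same way, then mix the (input, output)
    # pairs for supervised seq2seq pre-training.
    pairs = [to_text_to_text(Example("summarization", "Long article ...", "Short summary ..."))]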

    Far-field Super-resolution Chemical Microscopy

    Far-field chemical microscopy, which provides molecular electronic or vibrational fingerprint information, opens a new window for the study of three-dimensional biological, material, and chemical systems. Chemical microscopy offers a nondestructive way of chemical identification without exterior labels. However, the optical diffraction limit prevents it from resolving details finer than the resolution limit. Recent developments in super-resolution techniques point the way to overcoming this limit in far-field chemical microscopy. Here, we review recent advances that have pushed the boundary of far-field chemical microscopy in terms of spatial resolution. We further highlight applications in biomedical research, material characterization, environmental study, cultural heritage conservation, and integrated chip inspection. Comment: 34 pages, 8 figures, 1 table