
    Highly sensitive transient absorption imaging of graphene and graphene oxide in living cells and circulating blood

    We report a transient absorption (TA) imaging method for fast visualization and quantitative layer analysis of graphene and graphene oxide (GO). Graphene on various substrates was imaged in both forward and backward detection modes under ambient conditions at a speed of 2 μs per pixel. The TA intensity increased linearly with the number of graphene layers. Real-time TA imaging of GO in vitro, with quantitative analysis of intracellular concentration, and ex vivo in circulating blood was demonstrated. These results suggest that TA microscopy is a valid tool for the study of graphene-based materials.
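    A minimal sketch (not the authors' code) of how the reported linear relation between TA intensity and layer number could be used for quantitative layer analysis; the calibration values below are hypothetical:

```python
# Hypothetical calibration of graphene layer number against TA intensity,
# assuming the linear relation reported in the abstract (I = a * N + b).
import numpy as np

# Hypothetical calibration samples of known layer number (arbitrary intensity units)
layers = np.array([1, 2, 3, 4, 5])
ta_intensity = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Fit the linear model I = a * N + b
a, b = np.polyfit(layers, ta_intensity, deg=1)

def estimate_layers(intensity):
    """Invert the linear calibration to estimate the layer number."""
    return (intensity - b) / a

print(f"Estimated layer number for I = 3.5: {estimate_layers(3.5):.1f}")
```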

    Prospects of Searching for Type Ia Supernovae with 2.5-m Wide Field Survey Telescope

    Type Ia supernovae (SNe Ia) are the thermonuclear explosions of carbon-oxygen white dwarfs (WDs) and are well known as distance indicators. However, it is still unclear how WDs grow their mass toward the Chandrasekhar limit and how the thermonuclear runaway is triggered. Observational clues to these open questions, such as photometric data within hours to days of the explosion, are scarce. An essential step is therefore to discover SNe Ia at specific epochs with optimized surveys. The 2.5-m Wide Field Survey Telescope (WFST) is an upcoming survey facility deployed in western China. In this paper, we assess the detectability of SNe Ia with mock WFST observations. Based on the volumetric rate, we generate spectral series of SNe Ia from a data-based model and apply line-of-sight extinction to calculate the brightness seen by the observer. By comparing with the WFST detection limit, which depends on the observing conditions, we count the number of SNe Ia discovered in the mock observations. We expect that WFST can find more than 3.0×10^4 pre-maximum SNe Ia within one year of operation. In particular, WFST could discover about 45 bright SNe Ia, 99 early-phase SNe Ia, or 1.1×10^4 well-observed SNe Ia with the hypothesized Wide, Deep, or Medium mode, respectively, suggesting that WFST will be an influential facility in time-domain astronomy. Comment: Accepted by Universe
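    A minimal sketch of the detection-counting step described above: extincted apparent magnitudes of a mock SN Ia population are compared against a survey limiting magnitude. The absolute magnitude, limit, and extinction distribution are illustrative assumptions, not values from the paper:

```python
# Counting mock SN Ia detections by comparing extincted apparent magnitudes
# with an assumed single-visit limiting magnitude (not the WFST pipeline).
import numpy as np

rng = np.random.default_rng(0)

M_ABS = -19.3          # assumed fiducial SN Ia absolute magnitude at peak
LIMITING_MAG = 22.5    # assumed single-visit detection limit

# Mock population: luminosity distances (Mpc) and line-of-sight extinctions (mag)
d_l = rng.uniform(50, 2000, size=10_000)
a_v = rng.exponential(0.3, size=10_000)

# Distance modulus mu = 5 log10(d_L / 10 pc), with d_L converted from Mpc to pc
mu = 5 * np.log10(d_l * 1e6 / 10)
m_app = M_ABS + mu + a_v

n_detected = np.sum(m_app < LIMITING_MAG)
print(f"Detected {n_detected} of {d_l.size} mock SNe Ia")
```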

    Label-free quantitative imaging of cholesterol in intact tissues by hyperspectral stimulated Raman scattering microscopy

    A finger on the pulse: Current molecular analysis of cells and tissues routinely relies on separation, enrichment, and subsequent measurements by various assays. Now, a platform of hyperspectral stimulated Raman scattering microscopy has been developed for the fast, quantitative, and label-free imaging of biomolecules in intact tissues using spectroscopic fingerprints as the contrast mechanism.

    MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model

    Multimodal semantic understanding often has to deal with uncertainty: the obtained messages tend to refer to multiple targets. Such uncertainty, which includes both inter- and intra-modal uncertainty, is problematic for interpretation. Little effort has been devoted to modeling this uncertainty, particularly in pre-training on unlabeled datasets and fine-tuning on task-specific downstream datasets. In this paper, we project the representations of all modalities as probabilistic distributions via a Probability Distribution Encoder (PDE) that exploits sequence-level interactions. Compared with existing deterministic methods, such uncertainty modeling can convey richer multimodal semantic information and more complex relationships. Furthermore, we integrate uncertainty modeling with popular pre-training frameworks and propose suitable pre-training tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging downstream tasks, including image-text retrieval, visual question answering, visual reasoning, and visual entailment, and achieve state-of-the-art results. Comment: Accepted by CVPR 2023
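    A minimal sketch (an assumption, not the paper's PDE) of the core idea of projecting deterministic features to a Gaussian distribution and drawing a reparameterized sample; the paper's encoder additionally uses sequence-level interactions:

```python
# Mapping a deterministic feature vector to a Gaussian (mean, log-variance)
# and sampling via the reparameterization trick, as in common probabilistic
# embedding setups. This is a simplified illustration, not the MAP model.
import torch
import torch.nn as nn

class ProbabilisticEncoder(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.to_mean = nn.Linear(dim, dim)     # predicts the distribution mean
        self.to_logvar = nn.Linear(dim, dim)   # predicts the log-variance

    def forward(self, features: torch.Tensor):
        mean = self.to_mean(features)
        logvar = self.to_logvar(features)
        # Reparameterization: sample = mean + sigma * epsilon
        sample = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        return mean, logvar, sample

# Usage: encode a batch of pooled image or text features
feats = torch.randn(8, 512)
mean, logvar, z = ProbabilisticEncoder()(feats)
print(mean.shape, z.shape)
```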