
    Exciton Binding Energy of Monolayer WS2

    The optical properties of monolayer transition metal dichalcogenides (TMDCs) feature prominent excitonic character. Here we report an experimental approach toward measuring the exciton binding energy of monolayer WS2 with linear differential transmission spectroscopy and two-photon photoluminescence excitation spectroscopy (TP-PLE). TP-PLE measurements yield an exciton binding energy of 0.71 eV around the K valley in the Brillouin zone. A trion binding energy of 34 meV, a two-photon absorption cross section of 4×10^4 cm^2 W^-2 s^-1 at 780 nm, and an exciton-exciton annihilation rate of around 0.5 cm^2/s are also obtained experimentally.
    Comment: 5 pages, 3 figures
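    For context (this relation is not stated in the abstract itself), the exciton binding energy quoted from such measurements is by definition the difference between the quasiparticle band gap and the lowest (1s) exciton resonance energy; in LaTeX:

    \[
        E_b \;=\; E_{\mathrm{gap}}^{\mathrm{qp}} - E_{1s},
        \qquad E_b \approx 0.71~\mathrm{eV}\ \text{for the K-valley exciton in monolayer WS}_2 .
    \]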

    Assessing the Option Value of Retrofitting a 200MW Power Plant to Oxyfuel CO2 Capture

    An advantage of oxyfuel capture technology is the flexibility of being able to retrofit existing conventional coal-fired power plants. This analysis investigates the option value of retrofitting a 200 MW coal-fired power plant to an oxyfuel CO2 capture power plant. The initial retrofit option value is the theoretical financial value of pre-investment (Oxyfuel CO2 Capture Ready) that keeps the oxyfuel CO2 capture retrofit option open. The study assumes that the carbon price (either a carbon tax or a carbon allowance market price) is the only driver of the oxyfuel CO2 capture retrofit decision and that there are no other operational or investment options in the decision-making process.
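    To make the notion of a retrofit option value concrete, here is a minimal, purely illustrative real-options sketch: a Monte Carlo simulation of a stochastic carbon price with a simple threshold exercise rule. This is not the paper's model; the GBM price process, the perpetuity-style payoff, and every parameter value below are hypothetical placeholders.

    # Illustrative only: a minimal real-options sketch (not the paper's model).
    # Assumption: the carbon price follows geometric Brownian motion and the
    # retrofit is exercised in the first year its expected payoff turns positive.
    import numpy as np

    def retrofit_option_value(p0=25.0,        # initial carbon price, EUR/tCO2 (hypothetical)
                              mu=0.03,         # carbon price drift per year
                              sigma=0.25,      # carbon price volatility per year
                              years=30,        # evaluation horizon
                              r=0.08,          # discount rate
                              capex=250e6,     # retrofit capital cost, EUR (hypothetical)
                              abated_t=1.2e6,  # tonnes CO2 abated per year (hypothetical)
                              opex_extra=20e6, # extra annual operating cost, EUR (hypothetical)
                              n_paths=20000,
                              seed=0):
        rng = np.random.default_rng(seed)
        # Simulate annual carbon price paths under GBM.
        shocks = rng.standard_normal((n_paths, years))
        log_steps = (mu - 0.5 * sigma**2) + sigma * shocks
        prices = p0 * np.exp(np.cumsum(log_steps, axis=1))

        value = np.zeros(n_paths)
        for i in range(n_paths):
            for t in range(years):
                # Perpetuity-style payoff of retrofitting at year t (simplistic).
                annual_saving = prices[i, t] * abated_t - opex_extra
                payoff = annual_saving / r - capex
                if payoff > 0:
                    value[i] = payoff / (1 + r) ** t
                    break
        return value.mean()

    if __name__ == "__main__":
        print(f"Estimated retrofit option value: {retrofit_option_value() / 1e6:.1f} MEUR")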

    Explicit Visual Prompting for Universal Foreground Segmentations

    Foreground segmentation is a fundamental problem in computer vision that includes salient object detection, forgery detection, defocus blur detection, shadow detection, and camouflaged object detection. Previous works have typically relied on domain-specific solutions to address accuracy and robustness issues in these applications. In this paper, we present a unified framework for a number of foreground segmentation tasks without any task-specific designs. We take inspiration from the widely used pre-training and prompt-tuning protocols in NLP and propose a new visual prompting model, named Explicit Visual Prompting (EVP). Unlike previous visual prompting, which is typically a dataset-level implicit embedding, our key insight is to make the tunable parameters focus on the explicit visual content of each individual image, i.e., the features from frozen patch embeddings and high-frequency components. Our method freezes a pre-trained model and then learns task-specific knowledge using a few extra parameters. Despite introducing only a small number of tunable parameters, EVP achieves superior performance to full fine-tuning and other parameter-efficient fine-tuning methods. Experiments on fourteen datasets across five tasks show that the proposed method outperforms other task-specific methods while being considerably simpler. The proposed method also demonstrates scalability across different architectures, pre-trained weights, and tasks. The code is available at: https://github.com/NiFangBaAGe/Explicit-Visual-Prompt.
    Comment: arXiv admin note: substantial text overlap with arXiv:2303.1088
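    As a rough illustration of the prompting idea described above (a frozen backbone with a few tunable parameters driven by patch embeddings and high-frequency image content), here is a minimal PyTorch sketch. It is not the official EVP implementation; the adapter design, layer sizes, and the FFT-based high-frequency extraction are assumptions made for the example.

    # A minimal sketch in the spirit of explicit visual prompting (not the official code).
    import torch
    import torch.nn as nn

    def high_frequency_component(x, mask_ratio=0.25):
        """Keep only high spatial frequencies of an image batch (B, C, H, W)."""
        freq = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
        B, C, H, W = x.shape
        cy, cx = H // 2, W // 2
        ry, rx = int(H * mask_ratio / 2), int(W * mask_ratio / 2)
        mask = torch.ones(H, W, device=x.device)
        mask[cy - ry:cy + ry, cx - rx:cx + rx] = 0  # zero out low frequencies
        freq = freq * mask
        return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1)), norm="ortho").real

    class PromptAdapter(nn.Module):
        """Small tunable module combining frozen patch embeddings with HFC features."""
        def __init__(self, embed_dim=768, patch=16, hidden=32):
            super().__init__()
            self.hfc_embed = nn.Conv2d(3, hidden, kernel_size=patch, stride=patch)
            self.down = nn.Linear(embed_dim, hidden)
            self.up = nn.Linear(hidden, embed_dim)

        def forward(self, patch_tokens, image):
            # patch_tokens: (B, N, D) from a frozen backbone; image: (B, 3, H, W)
            hfc = self.hfc_embed(high_frequency_component(image))        # (B, h, H/p, W/p)
            hfc = hfc.flatten(2).transpose(1, 2)                         # (B, N, h)
            prompt = self.up(torch.relu(self.down(patch_tokens) + hfc))  # (B, N, D)
            return patch_tokens + prompt  # prompted tokens fed back into the frozen model

    # Usage: only the adapter (plus a task head) would be trained; the backbone stays frozen.
    tokens = torch.randn(2, 196, 768)          # e.g. ViT-B/16 tokens for a 224x224 input
    image = torch.randn(2, 3, 224, 224)
    print(PromptAdapter()(tokens, image).shape)  # torch.Size([2, 196, 768])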

    Ambient gold-catalyzed O-vinylation of cyclic 1,3-diketone: A vinyl ether synthesis

    Gold-catalyzed O-vinylation of cyclic 1,3-diketones has been achieved for the first time, providing direct access to various vinyl ethers. A catalytic amount of copper triflate was identified as the key additive in promoting this transformation. Both aromatic and aliphatic alkynes are suitable substrates, giving good to excellent yields.

    LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation

    Gestures are non-verbal but important behaviors that accompany people's speech. While previous methods are able to generate gestures synchronized with the speech rhythm, the semantic context of the speech is generally missing from the gesticulations. Although semantic gestures do not occur very frequently in human speech, they are key to helping the audience understand the speech context in a more immersive way. Hence, we introduce LivelySpeaker, a framework that realizes semantics-aware co-speech gesture generation and offers several control handles. In particular, our method decouples the task into two stages: script-based gesture generation and audio-guided rhythm refinement. The script-based gesture generation stage leverages pre-trained CLIP text embeddings as guidance for generating gestures that are highly semantically aligned with the script. We then devise a simple but effective diffusion-based gesture generation backbone built from pure MLPs, which is conditioned only on audio signals and learns to gesticulate with realistic motions. We utilize this powerful prior to rhyme the script-guided gestures with the audio signals, notably in a zero-shot setting. Our novel two-stage generation framework also enables several applications, such as changing the gesticulation style, editing the co-speech gestures via textual prompting, and controlling the semantic awareness and rhythm alignment with guided diffusion. Extensive experiments demonstrate the advantages of the proposed framework over competing methods. In addition, our core diffusion-based generative model achieves state-of-the-art performance on two benchmarks. The code and model will be released to facilitate future research.
    Comment: Accepted by ICCV 202
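    As a rough illustration of the audio-conditioned, MLP-only diffusion backbone described above, here is a minimal PyTorch sketch. It is not the released LivelySpeaker model; the pose and audio feature dimensions, the time/channel-mixing design, and all names are assumptions made for the example.

    # A minimal sketch of an MLP-only diffusion denoiser conditioned on audio features.
    import torch
    import torch.nn as nn

    class AudioConditionedMLPDenoiser(nn.Module):
        def __init__(self, pose_dim=135, seq_len=64, audio_dim=128, hidden=512):
            super().__init__()
            self.in_proj = nn.Linear(pose_dim, hidden)
            self.audio_proj = nn.Linear(audio_dim, hidden)
            self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
            # Mix across the temporal axis and across channels with plain MLPs.
            self.temporal_mlp = nn.Sequential(nn.Linear(seq_len, seq_len), nn.SiLU(), nn.Linear(seq_len, seq_len))
            self.channel_mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
            self.out_proj = nn.Linear(hidden, pose_dim)

        def forward(self, noisy_pose, audio_feat, t):
            # noisy_pose: (B, T, pose_dim); audio_feat: (B, T, audio_dim); t: (B,) diffusion step
            h = self.in_proj(noisy_pose) + self.audio_proj(audio_feat)
            h = h + self.time_embed(t.float().unsqueeze(-1)).unsqueeze(1)
            h = h + self.temporal_mlp(h.transpose(1, 2)).transpose(1, 2)  # mix over time
            h = h + self.channel_mlp(h)                                   # mix over channels
            return self.out_proj(h)  # predicted noise (or clean pose), depending on the objective

    # Usage sketch: one denoising call of a DDPM-style sampler.
    model = AudioConditionedMLPDenoiser()
    x_t = torch.randn(2, 64, 135)
    audio = torch.randn(2, 64, 128)
    print(model(x_t, audio, torch.tensor([10, 10])).shape)  # torch.Size([2, 64, 135])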

    Depth-aware Test-Time Training for Zero-shot Video Object Segmentation

    Zero-shot Video Object Segmentation (ZSVOS) aims at segmenting the primary moving object without any human annotations. Mainstream solutions mainly focus on learning a single model on large-scale video datasets, but such models struggle to generalize to unseen videos. In this work, we introduce a test-time training (TTT) strategy to address this problem. Our key insight is to enforce the model to predict consistent depth during the TTT process. Specifically, we first train a single network to perform both segmentation and depth prediction, which can be learned effectively with our specifically designed depth modulation layer. Then, during the TTT process, the model is updated by predicting consistent depth maps for the same frame under different data augmentations. In addition, we explore different TTT weight-updating strategies; our empirical results suggest that momentum-based weight initialization and a looping-based training scheme lead to more stable improvements. Experiments show that the proposed method achieves clear improvements on ZSVOS, and our video TTT strategy significantly outperforms state-of-the-art TTT methods. Our code is available at: https://nifangbaage.github.io/DATTT.
    Comment: Accepted by CVPR 202
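    As a rough illustration of the depth-consistency test-time training loop described above, here is a minimal PyTorch sketch. It is not the authors' code; the model interface, the augmentations, the number of update steps, and the toy network are assumptions, and the momentum-based initialization and looping scheme mentioned in the abstract are not shown.

    # A minimal sketch of depth-consistency test-time training (not the authors' code).
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def test_time_train(model, frame, n_steps=5, lr=1e-4, n_aug=2):
        """Adapt a joint segmentation+depth model on one test frame, then predict the mask.

        Assumption: `model(frame)` returns a (seg_logits, depth_map) tuple.
        """
        adapted = copy.deepcopy(model)        # keep the source weights untouched
        adapted.train()
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)

        for _ in range(n_steps):
            # Photometric augmentations only, so the predicted depths should agree pixel-wise.
            views = [frame + 0.05 * torch.randn_like(frame) for _ in range(n_aug)]
            depths = [adapted(v)[1] for v in views]
            mean_depth = torch.stack(depths).mean(dim=0).detach()
            loss = sum(F.l1_loss(d, mean_depth) for d in depths) / n_aug
            opt.zero_grad()
            loss.backward()
            opt.step()

        adapted.eval()
        with torch.no_grad():
            seg_logits, _ = adapted(frame)
        return seg_logits

    # Tiny stand-in network just to show the interface (real ZSVOS models are much larger).
    class ToyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Conv2d(3, 8, 3, padding=1)
            self.seg_head = nn.Conv2d(8, 1, 1)
            self.depth_head = nn.Conv2d(8, 1, 1)
        def forward(self, x):
            f = torch.relu(self.backbone(x))
            return self.seg_head(f), self.depth_head(f)

    mask = test_time_train(ToyNet(), torch.randn(1, 3, 64, 64))
    print(mask.shape)  # torch.Size([1, 1, 64, 64])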