    Nanoscale organization of luminescent materials and their polarization properties investigated by two-dimensional polarization imaging

    Semiconductor materials (e.g., conjugated polymers and metal halide perovskites) are widely used in solar cells, light-emitting diodes, and photodetectors. Organic conjugated systems offer high mechanical flexibility and low production costs. Metal halide perovskites have the advantages of strong light absorption, long charge-carrier diffusion lengths, and low intrinsic surface recombination.

    Polarization-sensitive single-molecule methods have been used extensively to study chromophore organization and the excitation energy transfer (EET) process. Our polarization technique, two-dimensional polarization imaging (2D POLIM), is designed to simultaneously measure and control both the excitation and emission polarization characteristics of an individual object. A model based on the single funnel approximation (SFA) is fitted to the 2D polarization portrait obtained from 2D POLIM measurements; 2D POLIM in combination with the SFA model allows a quantitative characterization of the EET efficiency. Overall, a large number of polarization parameters, e.g., modulation depths, phases, luminescence shift, fluorescence anisotropy, energy funneling efficiency, and properties of the EET emitter, can be extracted from 2D polarization portraits. They give a full picture of the chromophores' organization and a quantitative measure of the EET process.

    In this thesis, we applied the 2D POLIM technique to investigate fundamental optoelectronic processes in different types of luminescent materials. H-aggregates formed in spin-cast conjugated polymer films are visualized through modulation-depth and phase imaging contrast. The light-harvesting efficiency reveals efficient EET within the amorphous phase but poor EET between H-aggregates, owing to the smaller overlap between their absorption and emission spectra. Together with single-molecule spectroscopy and scanning electron microscopy, we studied the polarization properties of individual MAPbBr3 aggregates, which show that the well-known dielectric screening effect cannot fully explain the absorption polarization of weakly elongated objects (even those with irregular shapes). We propose that a power-dependent quantum yield can further increase the excitation modulation depth. 2D POLIM was also applied to explore the aggregation state of proteins in biological systems. Furthermore, we performed a series of computational experiments to examine and improve the SFA model: we break the limit of the energy funneling efficiency and propose an asymmetric three-dipole model, which is more applicable to multi-chromophore systems. In the future, quantitative phase-contrast imaging and time-resolved 2D POLIM might be further developed.
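    As a rough, self-contained illustration of how two of these polarization parameters can be obtained, the Python sketch below (not taken from the thesis; the angles, synthetic intensities, and the cosine modulation model I(θ) ≈ I0[1 + M·cos 2(θ − φ)] are assumptions) estimates the modulation depth and phase of a single modulation curve from its first Fourier component, and includes the standard steady-state fluorescence anisotropy formula.

```python
import numpy as np

# Hypothetical modulation curve for one pixel: intensity vs. excitation
# polarization angle, following I(theta) = I0 * (1 + M*cos(2*(theta - phi))).
theta = np.linspace(0.0, np.pi, 18, endpoint=False)      # polarization angles (rad)
intensity = 1.0 * (1 + 0.4 * np.cos(2 * (theta - 0.3)))  # synthetic data, M = 0.4, phi = 0.3

# Modulation depth and phase from the first Fourier component of the curve.
i0 = intensity.mean()
c = (intensity * np.cos(2 * theta)).mean()
s = (intensity * np.sin(2 * theta)).mean()
mod_depth = 2 * np.hypot(c, s) / i0
phase = 0.5 * np.arctan2(s, c)

# Steady-state fluorescence anisotropy from parallel/perpendicular channels.
def anisotropy(i_par, i_perp):
    return (i_par - i_perp) / (i_par + 2 * i_perp)

print(f"M = {mod_depth:.2f}, phi = {phase:.2f} rad, r = {anisotropy(2.0, 1.0):.2f}")
```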

    Advances in Fertility Options of Azoospermic Men

    Seasonal variability does not impact in vitro fertilization success

    Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering

    Explainable question answering (XQA) aims to answer a given question and to provide an explanation of why that answer is selected. Existing XQA methods focus on reasoning over a single knowledge source, e.g., structured knowledge bases or unstructured corpora. However, integrating information from heterogeneous knowledge sources is essential for answering complex questions. In this paper, we propose to leverage question decomposition for heterogeneous knowledge integration, breaking a complex question down into simpler ones and selecting the appropriate knowledge source for each sub-question. To facilitate reasoning, we propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT). First, we build the Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of a complex question; then, we conduct probabilistic reasoning over the HQDT recursively from root to leaves, aggregating heterogeneous knowledge at different tree levels and searching for the best solution in light of the decomposition and answering probabilities. Experiments on the complex QA datasets KQA Pro and Musique show that our framework significantly outperforms SOTA methods, demonstrating the effectiveness of leveraging question decomposition for knowledge integration and of our RoHT framework.
    Comment: has been accepted by ACL202
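    As a rough illustration of the kind of recursive, probability-weighted aggregation described above, the toy Python sketch below (the node fields, probabilities, and the naive answer-composition step are assumptions for illustration, not the paper's implementation) scores, for each node, whether to answer it directly or to compose the answers of its sub-questions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    question: str
    children: List["Node"] = field(default_factory=list)
    decompose_prob: float = 1.0   # confidence that this decomposition is valid (assumed)
    answer: Optional[str] = None  # best direct answer from a KB or corpus, if any
    answer_prob: float = 0.0      # confidence of that direct answer

def solve(node: Node) -> Tuple[Optional[str], float]:
    """Pick the higher-scoring of answering the node directly or composing the
    answers of its sub-questions (a toy stand-in for RoHT's probabilistic
    aggregation over the HQDT)."""
    best_answer, best_score = node.answer, node.answer_prob
    if node.children:
        parts, score = [], node.decompose_prob
        for child in node.children:
            ans, p = solve(child)
            parts.append(ans or "")
            score *= p
        composed = " ; ".join(p for p in parts if p)  # placeholder composition step
        if score > best_score:
            best_answer, best_score = composed, score
    return best_answer, best_score

root = Node("Which city hosted the first Olympics after WWII?",
            children=[Node("When did WWII end?", answer="1945", answer_prob=0.9),
                      Node("Which city hosted the first Olympics after 1945?",
                           answer="London", answer_prob=0.8)],
            decompose_prob=0.9)
print(solve(root))  # ('1945 ; London', 0.648)
```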

    Effectiveness of atosiban in women with previous single implantation failure undergoing frozen-thawed blastocyst transfer : study protocol for a randomised controlled trial

    © Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

    KQA Pro: A Large-Scale Dataset with Interpretable Programs and Accurate SPARQLs for Complex Question Answering over Knowledge Base

    Complex question answering over knowledge bases (Complex KBQA) is challenging because it requires various compositional reasoning capabilities, such as multi-hop inference, attribute comparison, and set operations. Existing benchmarks have shortcomings that limit the development of Complex KBQA: 1) they only provide QA pairs without explicit reasoning processes; 2) questions are either generated from templates, leading to poor diversity, or collected only at a small scale. To this end, we introduce KQA Pro, a large-scale dataset for Complex KBQA. We define a compositional and highly interpretable formal format, named Program, to represent the reasoning process of complex questions. We propose compositional strategies to generate questions, corresponding SPARQLs, and Programs from a small number of templates, and then paraphrase the generated questions into natural language questions (NLQs) via crowdsourcing, giving rise to around 120K diverse instances. SPARQL and Program represent two complementary ways of answering complex questions, which can benefit a broad spectrum of QA methods. Besides the QA task, KQA Pro can also serve the semantic parsing task. As far as we know, it is currently the largest corpus of NLQ-to-SPARQL and NLQ-to-Program pairs. We conduct extensive experiments to evaluate whether machines can learn to answer our complex questions in different settings, i.e., with only QA supervision or with intermediate SPARQL/Program supervision. We find that state-of-the-art KBQA methods trained from only QA pairs perform poorly on our dataset, implying that our questions are more challenging than those in previous datasets. However, pretrained models trained on our NLQ-to-SPARQL and NLQ-to-Program annotations surprisingly achieve about 90% answering accuracy, which is close to human expert performance.
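    To make the notion of a Program more concrete, here is a toy Python sketch of a step-by-step program executed against a miniature knowledge base; the function names and entries are illustrative assumptions, not KQA Pro's actual function inventory or schema.

```python
# A toy "Program": an ordered list of (function, inputs) steps whose output
# chains into the next step. Function names and the mini knowledge base are
# illustrative only.
toy_kb = {
    "LeBron James": {"place_of_birth": "Akron"},
    "Akron": {"located_in": "Ohio"},
}

program = [
    ("Find", ["LeBron James"]),         # locate a starting entity
    ("QueryAttr", ["place_of_birth"]),  # follow an attribute/relation
    ("QueryAttr", ["located_in"]),
]

def execute(program, kb):
    result = None
    for func, args in program:
        if func == "Find":
            result = args[0]
        elif func == "QueryAttr":
            result = kb[result][args[0]]
    return result

print(execute(program, toy_kb))  # -> "Ohio"
```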

    Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing

    The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task. Despite the promising performance, previous PLM-based approaches often suffer from hallucination problems because they neglect the structural information contained in the sentence, which essentially constitutes the key semantics of the logical forms. Furthermore, most works treat the PLM as a black box in which the generation process of the target logical form is hidden beneath the decoder modules, which greatly hinders the model's intrinsic interpretability. To address these two issues, we propose to combine current PLMs with a hierarchical decoder network. Taking the first-principle structures as semantic anchors, we propose two novel intermediate supervision tasks, namely Semantic Anchor Extraction and Semantic Anchor Alignment, for training the hierarchical decoders and probing the model's intermediate representations in a self-adaptive manner alongside the fine-tuning process. We conduct extensive experiments on several semantic parsing benchmarks and demonstrate that our approach consistently outperforms the baselines. More importantly, by analyzing the intermediate representations of the hierarchical decoders, our approach also takes a substantial step toward the intrinsic interpretability of PLMs in the domain of semantic parsing.
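    A minimal Python sketch of how such intermediate supervision could be folded into the fine-tuning objective is given below; the decoder heads, tensor shapes, alignment objective, and loss weights are assumptions for illustration, not the paper's actual training code.

```python
import torch
import torch.nn.functional as F

# Random stand-ins for the per-token logits of two decoder heads: the final
# logical-form head and a hypothetical semantic-anchor extraction head.
batch, seq_len, vocab = 4, 16, 1000
gen_logits = torch.randn(batch, seq_len, vocab)
anchor_logits = torch.randn(batch, seq_len, vocab)
gen_targets = torch.randint(0, vocab, (batch, seq_len))
anchor_targets = torch.randint(0, vocab, (batch, seq_len))

gen_loss = F.cross_entropy(gen_logits.reshape(-1, vocab), gen_targets.reshape(-1))
extract_loss = F.cross_entropy(anchor_logits.reshape(-1, vocab), anchor_targets.reshape(-1))

# Anchor alignment modelled here as a simple cosine-similarity objective
# between pooled encoder states and anchor embeddings (illustrative only).
encoder_pooled = torch.randn(batch, 256)
anchor_embed = torch.randn(batch, 256)
align_loss = 1.0 - F.cosine_similarity(encoder_pooled, anchor_embed).mean()

# Total objective: generation loss plus weighted auxiliary losses.
total = gen_loss + 0.5 * extract_loss + 0.5 * align_loss
print(float(total))
```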