
    Stability and Mechanical Properties of W1-xMoxB4.2 (x=0.0-1.0) From First Principles

    Heavy transition-metal tetraborides (e.g., tungsten tetraboride, molybdenum tetraboride, and molybdenum-doped tungsten tetraboride) exhibit superior mechanical properties, but solving their complex crystal structures has been a long-standing challenge. Recent experimental x-ray and neutron diffraction measurements combined with first-principles structural searches have identified a complex structure model for tungsten tetraboride that contains a boron trimer as an unusual structural unit with a stoichiometry of 1:4.2. In this paper, we expand the study to binary MoB4.2 and ternary W1-xMoxB4.2 (x=0.0-1.0) compounds to assess their thermodynamic stability and mechanical properties using a tailor-designed crystal structure search method in conjunction with first-principles energetic calculations. Our results reveal that an orthorhombic MoB4.2 structure in Cmcm symmetry matches the experimental x-ray diffraction patterns well. For the synthesized ternary Mo-doped tungsten tetraborides, a series of W1-xMoxB4.2 structures are theoretically designed using a random substitution approach that replaces W with Mo atoms in the Cmcm binary crystal structure. This approach leads to the discovery of several W1-xMoxB4.2 structures that are energetically favorable and stable against decomposition into binary WB4.2 and MoB4.2. The structural and mechanical properties of these low-energy W1-xMoxB4.2 structures largely follow Vegard's law. As the composition parameter x varies from 0.0 to 1.0, the superior mechanical properties of W1-xMoxB4.2 remain within a narrow range. This unusual phenomenon stems from the strong covalent network with directional bonding configurations formed by boron atoms to resist elastic deformation.
The findings offer insights into the fundamental structural and physical properties of ternary W1-xMoxB4.2 in relation to the binary WB4.2/MoB4.2 compounds, which opens a promising avenue for further rational optimization of the functional performance of transition-metal borides that can be synthesized under favorable experimental conditions for wide applications.
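The Vegard's-law behavior noted above amounts to linear interpolation of a property between the two binary end members as a function of the composition parameter x. A minimal sketch (the end-member lattice parameters below are hypothetical placeholders, not values from the paper):

```python
def vegard(x, prop_wb42, prop_mob42):
    """Linearly interpolate a property of W(1-x)Mo(x)B4.2 between the
    binary end members WB4.2 (x=0) and MoB4.2 (x=1), per Vegard's law."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("composition parameter x must lie in [0, 1]")
    return (1.0 - x) * prop_wb42 + x * prop_mob42

# Illustrative end-member lattice parameters in angstrom (hypothetical values)
a_wb42, a_mob42 = 5.20, 5.15
a_mid = vegard(0.5, a_wb42, a_mob42)  # property at the midpoint composition
```

The same interpolation applies to any scalar property that tracks composition linearly; the paper's point is that the mechanical properties deviate little from this trend across the whole composition range.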

    Clinical observation on fibrin glue technique in pterygium surgery performed with limbal autograft transplantation

    AIM: To compare the efficacy and safety of fibrin glue versus the suture technique in pterygium surgery performed with limbal autograft. METHODS: A prospective randomized clinical trial was carried out in 60 eyes of 48 patients operated on for primary nasal pterygium. An autologous limbal graft taken from the superotemporal limbus was used to cover the sclera after pterygium excision under local anesthesia with 2% lidocaine. In 22 cases (30 eyes), the transplant was attached to the sclera with a fibrin tissue adhesive (group 1), and in 26 cases (30 eyes) with 10-0 Virgin silk sutures (group 2). Patients were followed up for at least 3 months. Operation time, matching degree of the graft, and visual analogue scale (VAS) score were the main outcomes observed and recorded. RESULTS: Patient symptoms were significantly less and biomicroscopic findings were better in group 1. Pterygium recurrence was seen in 1 case in group 1 and 1 case in group 2. Average surgery time was shorter (P<0.01) in the fibrin group. CONCLUSION: Using fibrin glue for graft fixation in pterygium surgery causes significantly less postoperative pain and significantly shortens surgery time.

    LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge

    Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability to perceive and understand multi-modal signals. However, most existing MLLMs mainly adopt vision encoders pretrained on coarsely aligned image-text pairs, leading to insufficient extraction and reasoning of visual knowledge. To address this issue, we devise a dual-Level vIsual knOwledge eNhanced Multimodal Large Language Model (LION), which empowers the MLLM by injecting visual knowledge at two levels. 1) Progressive incorporation of fine-grained spatial-aware visual knowledge. We design a vision aggregator that cooperates with region-level vision-language (VL) tasks to incorporate fine-grained spatial-aware visual knowledge into the MLLM. To alleviate the conflict between image-level and region-level VL tasks during incorporation, we devise a dedicated stage-wise instruction-tuning strategy with a mixture of adapters. This progressive incorporation scheme contributes to the mutual promotion between these two kinds of VL tasks. 2) Soft prompting of high-level semantic visual evidence. We provide the MLLM with high-level semantic visual evidence by leveraging diverse image tags. To mitigate the potential influence of imperfect predicted tags, we propose a soft prompting method that embeds a learnable token into the tailored text instruction. Comprehensive experiments on several multi-modal benchmarks demonstrate the superiority of our model (e.g., improvement of 5% accuracy on VSR and 3% CIDEr on TextCaps over InstructBLIP, 5% accuracy on RefCOCOg over Kosmos-2). Comment: Technical Report. Project page: https://rshaojimmy.github.io/Projects/JiuTian-LION Code: https://github.com/rshaojimmy/JiuTia
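The soft-prompting idea described in the abstract can be illustrated with a small sketch: a single learnable token embedding is inserted into the embedded text instruction alongside the (possibly noisy) tag embeddings, so the model can learn how much to trust the tag evidence. All dimensions, names, and values below are illustrative assumptions, not LION's actual implementation; in the real model the soft token would be a trainable parameter updated by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8

# Hypothetical embedded instruction tokens and predicted-tag embeddings
instruction_embeds = rng.normal(size=(5, embed_dim))  # 5 instruction tokens
tag_embeds = rng.normal(size=(3, embed_dim))          # 3 predicted image tags

# The learnable soft token (a randomly initialised placeholder here;
# trained end-to-end in the actual model)
soft_token = rng.normal(size=(1, embed_dim))

# Insert the soft token between the instruction and the tag evidence,
# forming the prompted input sequence fed to the language model
prompted = np.concatenate([instruction_embeds, soft_token, tag_embeds], axis=0)
```

The resulting sequence has 5 + 1 + 3 = 9 token embeddings; because the soft token sits in the input rather than in the discrete vocabulary, its embedding can be optimized directly without changing the tokenizer.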