    Toroidal and Coiled Carbon Nanotubes

    FERM domain-containing unconventional myosin VIIA interacts with integrin β5 subunit and regulates αvβ5-mediated cell adhesion and migration

    Unconventional myosin VIIA (Myo7a) is known to be associated with hereditary deafness. Here we present a novel function of Myo7a: it directly interacts with the integrin β5 subunit and regulates cell adhesion and motility in an integrin-dependent manner. We found that Myo7a binds the cytoplasmic tail of integrin β5 and pinpointed integrin-binding regions at F3 of the first FERM domain and F1 of the second FERM domain. Functionally, Myo7a-induced cell adhesion and migration were mediated by integrin αvβ5. These findings indicate that Myo7a interacts with integrin β5 and selectively promotes integrin αvβ5-mediated cell migration.

    Pricing Options and Convertible Bonds Based on an Actuarial Approach

    This paper discusses the pricing of European options and convertible bonds using an actuarial approach. We derive the pricing formula for European options, extend the results to the case of a continuous dividend, and then obtain the call-put parity relation. Furthermore, we give a general expression for the convertible bond price. Finally, we carry out numerical simulations and an empirical comparison of the Black-Scholes (B-S) model and the actuarial model on actual data from the Chinese stock market. The empirical results show that the actuarial model outperforms the B-S model.
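
    The core idea of the actuarial approach (in the Bladt-Rydberg sense) is to price an option as the real-world expected payoff, discounting the stock at its expected return and the strike at the risk-free rate. The following is a minimal Python sketch under assumed geometric-Brownian-motion dynamics; the symbols S0, K, r, mu, sigma, T and the Monte-Carlo setup are illustrative and not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def actuarial_call(S0, K, r, mu, sigma, T, n_paths=200_000, seed=0):
    """Actuarial call price: E[(e^{-mu*T} * S_T - e^{-r*T} * K)^+] under
    real-world GBM dynamics dS = mu*S*dt + sigma*S*dW (Monte-Carlo estimate)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(np.exp(-mu * T) * s_T - np.exp(-r * T) * K, 0.0)
    return payoff.mean()

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price, used as the comparison baseline."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(actuarial_call(100.0, 100.0, 0.03, 0.08, 0.2, 1.0))
print(bs_call(100.0, 100.0, 0.03, 0.2, 1.0))
```

    Under exact GBM dynamics the actuarial and B-S prices are known to coincide; in practice, differences arise from how the drift and volatility are estimated from market data and from the dividend and convertible-bond extensions the paper develops, which are not shown here.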

    Contrastive Vision-Language Alignment Makes Efficient Instruction Learner

    We study the task of extending a large language model (LLM) into a vision-language instruction-following model. This task is crucial but challenging, since the LLM is trained on the text modality only, which makes it hard to digest the visual modality effectively. To address this, existing methods typically train a visual adapter that aligns the representations of a pre-trained vision transformer (ViT) and the LLM with a generative image-captioning loss. However, we find that the generative objective produces only weak vision-language alignment, leaving the aligned model heavily dependent on instruction fine-tuning data. In this paper, we propose CG-VLM, which applies both Contrastive and Generative alignment objectives to effectively align the representations of the ViT and the LLM. Unlike the image-level and sentence-level alignment of common contrastive learning settings, CG-VLM aligns image-patch-level features with text-token-level embeddings, which is hard to achieve because standard image-captioning datasets provide no explicit patch-token grounding. To address this issue, we propose to maximize the averaged similarity between pooled image-patch features and text-token embeddings. Extensive experiments demonstrate that the proposed CG-VLM produces strong vision-language alignment and is an efficient instruction learner. For example, using only 10% of the instruction-tuning data, we reach 95% of the performance of the state-of-the-art method LLaVA [29] on the zero-shot ScienceQA-Image benchmark. Comment: 17 pages (10 pages main paper, 7 pages supplementary).
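
    The contrastive part of the alignment can be read as an InfoNCE-style loss between pooled image-patch features and pooled caption-token embeddings. Below is a hedged PyTorch sketch of that reading; the mean pooling, tensor shapes, and temperature are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(patch_feats, token_embs, temperature=0.07):
    """CLIP-style contrastive loss between pooled ViT patch features and
    pooled LLM token embeddings for a batch of image-caption pairs.
    patch_feats: (B, P, D) projected image-patch features
    token_embs:  (B, T, D) caption token embeddings
    """
    # Pool over patches / tokens, then L2-normalize (illustrative choice).
    img = F.normalize(patch_feats.mean(dim=1), dim=-1)  # (B, D)
    txt = F.normalize(token_embs.mean(dim=1), dim=-1)   # (B, D)

    logits = img @ txt.t() / temperature                 # (B, B) similarities
    targets = torch.arange(img.size(0), device=img.device)

    # Symmetric InfoNCE: each image should match its own caption and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

    In the paper this contrastive term is used alongside the usual generative captioning objective; the sketch covers only the contrastive half.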

    CPCM: Contextual Point Cloud Modeling for Weakly-supervised Point Cloud Semantic Segmentation

    We study the task of weakly-supervised point cloud semantic segmentation with sparse annotations (e.g., fewer than 0.1% of points are labeled), aiming to reduce the expensive cost of dense annotation. Unfortunately, with extremely sparse annotated points, it is very difficult to extract both contextual and object information for scene understanding tasks such as semantic segmentation. Motivated by masked modeling (e.g., MAE) in image and video representation learning, we seek to harness masked modeling to learn contextual information from sparsely annotated points. However, directly applying MAE to 3D point clouds with sparse annotations may fail to work. First, it is nontrivial to effectively mask out the informative visual context from 3D point clouds. Second, how to fully exploit the sparse annotations for context modeling remains an open question. In this paper, we propose a simple yet effective Contextual Point Cloud Modeling (CPCM) method that consists of two parts: a region-wise masking (RegionMask) strategy and a contextual masked training (CMT) method. Specifically, RegionMask masks contiguous regions of the point cloud in geometric space to construct a meaningful masked-prediction task for subsequent context learning. CMT disentangles supervised segmentation from unsupervised masked context prediction, so that the very limited labeled points and the massive unlabeled points are each learned from effectively. Extensive experiments on the widely used ScanNet V2 and S3DIS benchmarks demonstrate the superiority of CPCM over the state of the art. Comment: Accepted by ICCV 2023.
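
    Region-wise masking can be pictured as dropping whole spatial blocks of points rather than individual points, so the masked-prediction task must recover coherent context. Below is a hedged NumPy sketch; the voxel-block granularity, the mask ratio, and the function name region_mask are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def region_mask(points, block_size=1.0, mask_ratio=0.5, seed=0):
    """Mask contiguous spatial regions of a point cloud.
    points: (N, 3) xyz coordinates.
    Returns a boolean array, True where the point is masked out."""
    rng = np.random.default_rng(seed)
    # Assign every point to a coarse block in geometric space.
    blocks = np.floor(points[:, :3] / block_size).astype(np.int64)
    _, block_ids = np.unique(blocks, axis=0, return_inverse=True)

    # Drop a random subset of whole blocks so the mask is spatially contiguous.
    n_blocks = int(block_ids.max()) + 1
    n_masked = int(round(mask_ratio * n_blocks))
    masked_blocks = rng.choice(n_blocks, size=n_masked, replace=False)
    return np.isin(block_ids, masked_blocks)
```

    Following the abstract, the CMT part would then train the network to predict context for the masked points while the sparse labels supervise segmentation on the visible ones; that training split is not shown here.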

    Theoretical study of the influence of doped niobium on the electronic properties of CsPbBr3

    In the family of inorganic perovskite solar cells (PSCs), CsPbBr3 has attracted widespread attention due to its excellent stability under high-humidity and high-temperature conditions. However, improvement of the power conversion efficiency (PCE) of CsPbBr3-based PSCs is markedly limited by the large optical absorption loss caused by the wide band gap and by serious charge recombination at interfaces and/or within the perovskite film. In this work, using density functional theory calculations, we systematically studied the electronic properties of niobium (Nb)-doped CsPbBr3 at different doping concentrations. We find that the doped CsPbBr3 compounds are metallic at high Nb doping concentrations but semiconducting at low Nb doping concentrations. The calculated electronic density of states shows that the conduction band is predominantly derived from the Nb dopant states. These characteristics make the doped compounds well suited for solar cell and energy storage applications.