12 research outputs found

    3DAxiesPrompts: Unleashing the 3D Spatial Task Capabilities of GPT-4V

    In this work, we present a new visual prompting method called 3DAxiesPrompts (3DAP) to unleash the capabilities of GPT-4V in performing 3D spatial tasks. Our investigation reveals that while GPT-4V exhibits proficiency in discerning the position and interrelations of 2D entities through current visual prompting techniques, its abilities in handling 3D spatial tasks have yet to be explored. In our approach, we create a 3D coordinate system tailored to 3D imagery, complete with annotated scale information. By presenting images infused with the 3DAP visual prompt as inputs, we enable GPT-4V to ascertain the spatial position of the given 3D target with a high degree of precision. Through experiments, we identified three tasks that can be stably completed using the 3DAP method: 2D-to-3D point reconstruction, 2D-to-3D point matching, and 3D object detection. We perform experiments on our proposed dataset, 3DAP-Data; the results validate the efficacy of 3DAP-enhanced GPT-4V inputs, marking a significant stride in 3D spatial task execution.
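    As an illustration of the kind of annotation the abstract describes, the sketch below overlays a labeled 3D coordinate frame, with a scale note per axis, onto an image before it is sent to a vision-language model. The camera intrinsics, pose, axis lengths, and file names are hypothetical; the paper's actual 3DAP annotation procedure may differ.

```python
# Illustrative sketch only: draw a labeled 3D coordinate frame onto a 2D image.
# Camera parameters and file names below are assumed, not taken from the paper.
import numpy as np
from PIL import Image, ImageDraw

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed pinhole intrinsics
R = np.eye(3)                                                 # assumed camera rotation
t = np.array([0.1, 0.1, 1.5])                                 # assumed camera translation (m)

def project(p_world):
    """Project a 3D point (world frame, metres) to pixel coordinates."""
    p_cam = R @ p_world + t
    u, v, w = K @ p_cam
    return (u / w, v / w)

img = Image.open("scene.png").convert("RGB")                  # hypothetical input image
draw = ImageDraw.Draw(img)
origin = np.zeros(3)
axes = {"X": (np.array([0.2, 0.0, 0.0]), "red"),
        "Y": (np.array([0.0, 0.2, 0.0]), "green"),
        "Z": (np.array([0.0, 0.0, 0.2]), "blue")}
for name, (end, color) in axes.items():
    draw.line([project(origin), project(end)], fill=color, width=3)
    draw.text(project(end), f"{name} (0.2 m)", fill=color)    # annotate the axis scale
img.save("scene_3dap.png")                                    # image to prompt GPT-4V with
```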

    LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark

    Large language models have become a potential pathway toward achieving artificial general intelligence. Recent works on multi-modal large language models (MLLMs) have demonstrated their effectiveness in handling visual modalities. In this work, we extend the research of MLLMs to point clouds and present the LAMM-Dataset and LAMM-Benchmark for 2D image and 3D point cloud understanding. We also establish an extensible framework to facilitate the extension of MLLMs to additional modalities. Our main contribution is three-fold: 1) We present the LAMM-Dataset and LAMM-Benchmark, which cover almost all high-level 2D and 3D vision tasks; extensive experiments validate the effectiveness of our dataset and benchmark. 2) We detail the methods for constructing instruction-tuning datasets and benchmarks for MLLMs, which will enable future research on MLLMs to scale up and extend to other domains, tasks, and modalities faster. 3) We provide a preliminary but extensible MLLM training framework optimized for adding new modalities. We also provide baseline models, comprehensive experimental observations, and analysis to accelerate future research. Code and datasets are available at https://github.com/OpenLAMM/LAMM. Comment: 37 pages, 33 figures; project page: https://openlamm.github.io
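    For readers unfamiliar with instruction-tuning data, the records below sketch what a 2D-image sample and a 3D-point-cloud sample might look like. The field names and paths are hypothetical; the actual LAMM-Dataset schema is defined in the OpenLAMM repository and may differ.

```python
# Hypothetical multi-modal instruction-tuning records (illustrative schema only).
import json

record_2d = {
    "id": "lamm_2d_000001",
    "image": "images/coco/000000391895.jpg",        # hypothetical image path
    "task": "detection",
    "conversations": [
        {"from": "human", "value": "List every object in the image with its bounding box."},
        {"from": "gpt",   "value": "person [120, 45, 310, 400]; bicycle [40, 210, 280, 460]"},
    ],
}
record_3d = {
    "id": "lamm_3d_000001",
    "point_cloud": "pcl/scannet/scene0000_00.npy",   # hypothetical point-cloud path
    "task": "3d_detection",
    "conversations": [
        {"from": "human", "value": "Which objects are present in this point cloud?"},
        {"from": "gpt",   "value": "A chair near the window and a table at the room centre."},
    ],
}
print(json.dumps([record_2d, record_3d], indent=2))
```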

    New Insight of Maximum Transferred Power by Matching Capacitance of a Wireless Power Transfer System

    Most research on wireless power transfer (WPT) has focused on achieving high-efficiency power transfer. Our study found that, under the impedance matching that maximizes WPT efficiency, the power transferred to the load does not reach its maximum when the WPT system is supplied by an AC voltage source of constant amplitude. However, the load power, or the voltage across the load, is what matters for low-power electric devices such as implanted medical devices, where transfer efficiency is not the primary concern. This paper presents a method for achieving maximum load power by matching capacitance in a WPT system with two given coupled coils. Three sets of matching capacitances for extremal load power were derived from a circuit model that accounts for the coils' resistance, and all three make the WPT system operate at resonance. Two sets drive the system to the global maximum of load power; the third achieves a local maximum of load power while reaching a power transfer efficiency close to 1. Experimental results verified the theoretical calculations. These results can inform the compensation design of practical WPT systems for transferring maximum power to the load.
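    The sketch below illustrates the kind of calculation the abstract describes: for a two-coil, series-series compensated WPT link driven by a constant-amplitude AC source, it computes the load power from the mesh equations (including coil resistance) and searches for the compensation capacitances that maximize it. All component values are assumed for illustration, and the paper's closed-form matching conditions are not reproduced here.

```python
# Illustrative numerical sketch (not from the paper): load power of a two-coil,
# series-series compensated WPT link versus the two compensation capacitances.
import numpy as np

f = 100e3                      # operating frequency (Hz), assumed
w = 2 * np.pi * f
L1, L2 = 100e-6, 100e-6        # coil self-inductances (H), assumed
M = 20e-6                      # mutual inductance (H), assumed
R1, R2 = 0.5, 0.5              # coil resistances (ohm), assumed
RL = 10.0                      # load resistance (ohm), assumed
Vs = 10.0                      # source voltage amplitude (V), constant, assumed

def load_power(C1, C2):
    """Solve the two-mesh phasor equations and return the average load power."""
    Z1 = R1 + 1j * w * L1 + 1 / (1j * w * C1)        # primary loop impedance
    Z2 = R2 + RL + 1j * w * L2 + 1 / (1j * w * C2)   # secondary loop impedance
    Zm = 1j * w * M
    # Vs = Z1*I1 + Zm*I2 ; 0 = Zm*I1 + Z2*I2  =>  I2 = -Zm*Vs / (Z1*Z2 - Zm**2)
    I2 = -Zm * Vs / (Z1 * Z2 - Zm ** 2)
    return 0.5 * abs(I2) ** 2 * RL                   # Vs treated as a peak amplitude

# Coarse grid search around the resonant value 1/(w^2 * L) for the pair that
# maximizes the load power (the paper derives the matching analytically).
C_grid = np.linspace(0.5, 2.0, 200) / (w ** 2 * L1)
best = max((load_power(c1, c2), c1, c2) for c1 in C_grid for c2 in C_grid)
print(f"max load power ~ {best[0]:.2f} W at C1={best[1]*1e9:.1f} nF, C2={best[2]*1e9:.1f} nF")
```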

    Labyrinthin Expression Is Associated with Poor Prognosis in Patients with Non-Small-Cell Lung Cancer

    To determine Labyrinthin (LAB) expression in non-small-cell lung cancer (NSCLC), we immunostained and scored LAB immunohistochemistry (IHC) expression on sections of tissue microarrays (TMAs) prepared from 256 archival NSCLC tissue blocks. Propensity-score-weighted Kaplan–Meier curves and weighted Cox models were used to associate LAB expression with overall survival. LAB mRNA expression was assessed in The Cancer Genome Atlas (TCGA) and correlated with clinical phenotype and outcome. Positive LAB IHC expression (>5% of tumor cells) was detected in 208/256 (81.3%) of NSCLC samples and was found in both lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). LAB positivity was associated with poor overall survival (HR = 3.56, 95% CI: 2.3–5.4). These findings support a first-in-human phase I trial evaluating LAB vaccines (UCDCC#296, NCT051013560).
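    The snippet below sketches a propensity-score-weighted survival analysis of the kind described above (inverse-probability-of-treatment weights from a logistic propensity model, weighted Kaplan–Meier curves, and a weighted Cox model) using the lifelines library. The input file and column names are hypothetical and are not the study's actual data.

```python
# Sketch of a propensity-score-weighted survival analysis; all data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("nsclc_lab_ihc.csv")            # hypothetical cohort table
covars = ["age", "stage", "smoking", "histology"]
X = pd.get_dummies(df[covars], drop_first=True)  # baseline covariates

# Propensity of being LAB-positive (0/1 indicator) given covariates -> IPTW weights.
ps = LogisticRegression(max_iter=1000).fit(X, df["lab_positive"]).predict_proba(X)[:, 1]
df["iptw"] = df["lab_positive"] / ps + (1 - df["lab_positive"]) / (1 - ps)

# Weighted Kaplan-Meier curves by LAB status.
for label, grp in df.groupby("lab_positive"):
    KaplanMeierFitter().fit(grp["os_months"], grp["death"], weights=grp["iptw"],
                            label=f"LAB positive = {label}").plot_survival_function()

# Weighted Cox model for the hazard ratio of LAB positivity.
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "lab_positive", "iptw"]],
        duration_col="os_months", event_col="death", weights_col="iptw",
        robust=True)                             # robust SEs are advisable with weights
cph.print_summary()
```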