14 research outputs found

    OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding

    We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than 10% for existing methods. OpenShape also achieves an accuracy of 85.3% on ModelNet40, outperforming previous zero-shot baseline methods by 20% and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation. Project website: https://colin97.github.io/OpenShape
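The multi-modal contrastive alignment the abstract refers to can be sketched as a symmetric InfoNCE-style objective over matched shape/text embedding pairs. This is a minimal NumPy illustration of the general technique, not OpenShape's actual code; the function name, batch layout, and temperature value are assumptions:

```python
import numpy as np

def contrastive_alignment_loss(shape_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss aligning two embedding batches.

    shape_emb, text_emb: (N, D) arrays where row i of each is a matched pair.
    Returns the mean of the shape-to-text and text-to-shape cross-entropies.
    """
    # L2-normalize so dot products are cosine similarities, as in CLIP.
    shape_emb = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = shape_emb @ text_emb.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (xent(logits) + xent(logits.T))
```

Hard negative mining, as mentioned in the abstract, would modify which off-diagonal entries contribute most to this loss; the sketch above shows only the plain symmetric objective.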

    Rapid Turnover of 2-LTR HIV-1 DNA during Early Stage of Highly Active Antiretroviral Therapy

    BACKGROUND: Despite prolonged treatment with highly active antiretroviral therapy (HAART), infectious HIV-1 continues to replicate and resides latently in resting memory CD4+ T lymphocytes, blocking the eradication of HIV-1. The persistence of HIV-1 is mainly attributed to its proviral DNA, which exists in linear nonintegrated, circular nonintegrated, or integrated forms. Previous reports have largely focused on the dynamics of HIV-1 DNA in samples collected at relatively long time intervals during disease progression and HAART, which may have missed intricate changes occurring between sampling points early in treatment. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we investigated the dynamics of HIV-1 DNA in patients during the early phase of HAART. Using optimized real-time PCR, we observed significant changes in 2-LTR during the first 12 weeks of treatment, while total and integrated HIV-1 DNA remained stable. The doubling time and half-life of 2-LTR were not correlated with the baseline values or the rates of change in plasma viral load and various CD4+ T-cell populations. Longitudinal analyses of 2-LTR sequences and plasma lipopolysaccharide (LPS) levels did not reveal any significant changes over the same treatment period. CONCLUSIONS/SIGNIFICANCE: Our study revealed rapid changes in 2-LTR concentration in a relatively large number of patients during early HAART. The rapid changes indicate rapid infusion and clearance of cells bearing 2-LTR in the peripheral blood. Those changes are not expected to be caused by the blocking of viral integration, as our study did not include the integrase inhibitor raltegravir. Our study helps better understand the dynamics of HIV-1 DNA and its potential role as a biomarker for disease progression and for the treatment efficacy of HAART.

    Genomic Analyses Reveal Mutational Signatures and Frequently Altered Genes in Esophageal Squamous Cell Carcinoma

    Esophageal squamous cell carcinoma (ESCC) is one of the most common cancers worldwide and the fourth most lethal cancer in China. However, although genomic studies have identified some mutations associated with ESCC, we know little of the mutational processes responsible. To identify genome-wide mutational signatures, we performed either whole-genome sequencing (WGS) or whole-exome sequencing (WES) on 104 individuals with ESCC and combined our data with those of 88 previously reported samples. An APOBEC-mediated mutational signature in 47% of 192 tumors suggests that APOBEC-catalyzed deamination provides a source of DNA damage in ESCC. Moreover, PIK3CA hotspot mutations (c.1624G>A [p.Glu542Lys] and c.1633G>A [p.Glu545Lys]) were enriched in APOBEC-signature tumors, and no smoking-associated signature was observed in ESCC. In the samples analyzed by WGS, we identified focal (<100 kb) amplifications of CBX4 and CBX8. In our combined cohort, we identified frequent inactivating mutations in AJUBA, ZNF750, and PTCH1 and the chromatin-remodeling genes CREBBP and BAP1, in addition to known mutations. Functional analyses suggest roles for several genes (CBX4, CBX8, AJUBA, and ZNF750) in ESCC. Notably, high activity of hedgehog signaling and the PI3K pathway in approximately 60% of 104 ESCC tumors indicates that therapies targeting these pathways might be particularly promising strategies for ESCC. Collectively, our data provide comprehensive insights into the mutational signatures of ESCC and identify markers for early diagnosis and potential therapeutic targets.

    Open X-Embodiment: Robotic Learning Datasets and RT-X Models

    Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a "generalist" X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. The project website is robotics-transformer-x.github.io.

    Sequence analysis of 2-LTR circle junction from selected patients in Group A and B.

    The circle junction, 3′U5, and 5′U3 were aligned and numbered against HXB2. PCR products amplified from samples at weeks 0 and 4 were cloned and sequenced. Sequences for patients from Group A are above the line, while those from Group B are below the line.

    Linear regression analysis on the decay rate of the total HIV DNA.

    During the first 12 weeks of treatment, each patient in Group A (A) and Group B (B) was analyzed, as well as the averages for the two groups (C). Three patients (p5, p7, and p9) in Group A and two (p17 and p18) in Group B did not have applicable decay rates (t1/2 na) and were therefore not included in the calculation of the averages shown in panel C.
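As a rough illustration of how a decay half-life is obtained from such a regression: under simple exponential decay, ln(copy number) is linear in time, and the half-life is ln(2) divided by the magnitude of the negative slope. This sketch is my own (function and variable names are hypothetical, not from the study):

```python
import numpy as np

def half_life_from_decay(weeks, copies):
    """Estimate a decay half-life by linear regression of ln(copies) on time.

    weeks: sampling times; copies: measured HIV DNA copy numbers at those times.
    Returns the half-life in the same time units, or NaN when there is no
    decay (corresponding to the "t1/2 na" patients in the legend above).
    """
    slope, intercept = np.polyfit(weeks, np.log(copies), 1)
    if slope >= 0:
        return float("nan")  # level or rising curve: half-life not applicable
    return np.log(2) / -slope
```

For example, measurements that halve every 4 weeks (800, 400, 200, 100 copies at weeks 0, 4, 8, 12) yield a half-life of 4 weeks.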

    The patients' HIV-1 RNA loads, CD4 cell counts, and ART.

    a. Viral load was measured by the Amplicor HIV-1 Monitor ultrasensitive method (Roche), with a detection limit of 40 copies/ml of plasma.