182 research outputs found
Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer?
Vision Transformers (ViTs) have proven effective in solving 2D image
understanding tasks by training over large-scale image datasets, and, as a
somewhat separate track, in modeling the 3D visual world, such as voxels
or point clouds. However, with the growing hope that transformers can become
the "universal" modeling tool for heterogeneous data, ViTs for 2D and 3D tasks
have so far adopted vastly different architecture designs that are hardly
transferable. That invites an (over-)ambitious question: can we close the gap
between the 2D and 3D ViT architectures? As a pilot study, this paper
demonstrates the appealing promise of understanding the 3D visual world using a
standard 2D ViT architecture, with only minimal customization at the input and
output levels and without redesigning the pipeline. To build a 3D ViT from its
2D sibling, we "inflate" the patch embedding and token sequence, accompanied by
new positional encoding mechanisms designed to match the 3D data geometry. The
resultant "minimalist" 3D ViT, named Simple3D-Former, performs surprisingly
robustly on popular 3D tasks such as object classification, point cloud
segmentation and indoor scene detection, compared to highly customized
3D-specific designs. It can hence act as a strong baseline for new 3D ViTs.
Moreover, we note that pursuing a unified 2D-3D ViT design has practical
relevance beyond mere scientific curiosity. Specifically, we demonstrate that
Simple3D-Former naturally exploits the wealth of pre-trained weights from
large-scale realistic 2D images (e.g., ImageNet), which can be plugged in to
enhance 3D task performance "for free".
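The "inflation" step is described only at a high level above; the following is a minimal sketch (not the authors' released code) of how a 2D ViT patch embedding could be inflated for voxel input, with the function name, layer shapes, and depth-rescaling heuristic all being assumptions:

```python
# Hypothetical sketch of inflating a 2D ViT patch embedding to 3D voxels.
import torch
import torch.nn as nn

def inflate_patch_embed(conv2d: nn.Conv2d, depth: int = 16) -> nn.Conv3d:
    """Turn a Conv2d patch projection into a Conv3d by replicating the
    2D kernel along a new depth axis."""
    k_h, k_w = conv2d.kernel_size
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(depth, k_h, k_w), stride=(depth, *conv2d.stride),
    )
    with torch.no_grad():
        # (out, in, kH, kW) -> (out, in, depth, kH, kW), divided by depth
        # so activations keep roughly the same magnitude as in 2D.
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Usage: reuse an ImageNet-pretrained 16x16 patch embedding for 16^3 voxel patches.
embed2d = nn.Conv2d(3, 768, kernel_size=16, stride=16)
embed3d = inflate_patch_embed(embed2d, depth=16)
tokens = embed3d(torch.randn(1, 3, 64, 64, 64)).flatten(2).transpose(1, 2)
print(tokens.shape)  # (1, 64, 768): a token sequence a standard ViT can consume
```

Dividing the replicated kernel by the depth keeps activation magnitudes close to the 2D case, which is the usual trick when reusing 2D pre-trained weights in a 3D convolution.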
ProtChatGPT: Towards Understanding Proteins with Large Language Models
Protein research is crucial in various fundamental disciplines, but
understanding their intricate structure-function relationships remains
challenging. Recent Large Language Models (LLMs) have made significant strides
in comprehending task-specific knowledge, suggesting the potential for
ChatGPT-like systems specialized in proteins to facilitate basic research. In
this work, we introduce ProtChatGPT, which aims at learning and understanding
protein structures via natural languages. ProtChatGPT enables users to upload
proteins, ask questions, and engage in interactive conversations to produce
comprehensive answers. The system comprises protein encoders, a
Protein-Language Pretraining Transformer (PLP-former), a projection adapter, and
an LLM. A protein first passes through the protein encoders and the PLP-former
to produce
protein embeddings, which are then projected by the adapter to conform with the
LLM. The LLM finally combines user questions with projected embeddings to
generate informative answers. Experiments show that ProtChatGPT can produce
promising responses to proteins and their corresponding questions. We hope that
ProtChatGPT could form the basis for further exploration and application in
protein research. Code and our pre-trained model will be publicly available.
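As a rough illustration of the described pipeline (protein encoder output, then PLP-former, then projection adapter, then LLM), here is a hedged sketch; the module structure, dimensions, and the use of learned query tokens with cross-attention are assumptions, not the released ProtChatGPT implementation:

```python
# Hypothetical sketch of the encoder -> PLP-former -> adapter -> LLM flow.
import torch
import torch.nn as nn

class ProteinChatPipeline(nn.Module):
    def __init__(self, protein_dim=1280, plp_dim=768, llm_dim=4096, n_query=32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, n_query, plp_dim))
        # PLP-former stand-in: learned queries cross-attend to protein embeddings.
        self.cross_attn = nn.MultiheadAttention(plp_dim, 8, kdim=protein_dim,
                                                vdim=protein_dim, batch_first=True)
        self.adapter = nn.Linear(plp_dim, llm_dim)  # projection into LLM space

    def forward(self, protein_emb, question_emb):
        # protein_emb: (B, L, protein_dim) from a pretrained protein encoder
        q = self.queries.expand(protein_emb.size(0), -1, -1)
        plp_out, _ = self.cross_attn(q, protein_emb, protein_emb)
        prefix = self.adapter(plp_out)            # (B, n_query, llm_dim)
        # Prepend protein tokens to the question tokens; a (frozen) LLM would
        # then decode an answer conditioned on both.
        return torch.cat([prefix, question_emb], dim=1)

pipe = ProteinChatPipeline()
llm_input = pipe(torch.randn(2, 100, 1280), torch.randn(2, 20, 4096))
print(llm_input.shape)  # (2, 52, 4096)
```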
Taking a Respite from Representation Learning for Molecular Property Prediction
Artificial intelligence (AI) has been widely applied in drug discovery, with
molecular property prediction as a major task. Despite the boom of AI techniques
in molecular representation learning, some key aspects underlying molecular
property prediction have not been carefully examined yet. In this study, we
conducted a systematic comparison on three representative models, random
forest, MolBERT and GROVER, which utilize three major molecular
representations, extended-connectivity fingerprints, SMILES strings and
molecular graphs, respectively. Notably, MolBERT and GROVER are pretrained on
large-scale unlabelled molecule corpora in a self-supervised manner. In
addition to the commonly used MoleculeNet benchmark datasets, we also assembled
a suite of opioids-related datasets for downstream prediction evaluation. We
first conducted dataset profiling on label distribution and structural
analyses; we also examined the activity cliffs issue in the opioids-related
datasets. Then, we trained 4,320 predictive models and evaluated the usefulness
of the learned representations. Furthermore, we explored model
evaluation by studying the effect of statistical tests, evaluation metrics and
task settings. Finally, we dissected the chemical space generalization into
inter-scaffold and intra-scaffold generalization and measured prediction
performance to evaluate model generalizability under both settings. By taking
this respite, we reflected on the key aspects underlying molecular property
prediction, awareness of which can, hopefully, bring better AI techniques to
this field.
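One concrete aspect mentioned above, the inter- versus intra-scaffold split, can be illustrated with Bemis-Murcko scaffolds. The sketch below uses RDKit's MurckoScaffold utilities; the greedy grouping heuristic is an assumption for illustration, not the paper's exact protocol:

```python
# Hedged sketch of an inter-scaffold split via Bemis-Murcko scaffolds.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, test_frac=0.5):
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)
        groups[scaffold].append(i)
    # Inter-scaffold: whole scaffold groups go to train or test, so test
    # scaffolds are unseen during training (the harder generalization setting).
    train, test = [], []
    n_test = int(test_frac * len(smiles_list))
    for scaffold, idxs in sorted(groups.items(), key=lambda kv: len(kv[1])):
        (test if len(test) < n_test else train).extend(idxs)
    return train, test

train_idx, test_idx = scaffold_split(["CCO", "c1ccccc1O", "c1ccccc1CC", "CCN"])
print(train_idx, test_idx)
```

Intra-scaffold generalization would instead split randomly within each scaffold group, so train and test share scaffolds.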
Clustering for Protein Representation Learning
Protein representation learning is a challenging task that aims to capture
the structure and function of proteins from their amino acid sequences.
Previous methods largely ignored the fact that not all amino acids are equally
important for protein folding and activity. In this article, we propose a
neural clustering framework that can automatically discover the critical
components of a protein by considering both its primary and tertiary structure
information. Our framework treats a protein as a graph, where each node
represents an amino acid and each edge represents a spatial or sequential
connection between amino acids. We then apply an iterative clustering strategy
to group the nodes into clusters based on their 1D and 3D positions and assign
scores to each cluster. We select the highest-scoring clusters and use their
medoid nodes for the next iteration of clustering, until we obtain a
hierarchical and informative representation of the protein. We evaluate our method on four
protein-related tasks: protein fold classification, enzyme reaction
classification, gene ontology term prediction, and enzyme commission number
prediction. Experimental results demonstrate that our method achieves
state-of-the-art performance. Comment: Accepted to CVPR202
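To make the iterative cluster-score-keep-medoids loop concrete, here is a simplified single-iteration sketch; the cluster initialization, the toy scoring rule, and all names are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of one cluster/score/select-medoids iteration.
import numpy as np

def cluster_and_select(coords, feats, n_clusters=10, keep=5, seed=0):
    rng = np.random.default_rng(seed)
    centers = coords[rng.choice(len(coords), n_clusters, replace=False)]
    assign = np.argmin(((coords[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    medoids, scores = [], []
    for c in range(n_clusters):
        idx = np.where(assign == c)[0]
        if len(idx) == 0:
            continue
        # Medoid: the member minimizing total distance to its cluster mates.
        d = ((coords[idx][:, None] - coords[idx][None]) ** 2).sum(-1).sum(1)
        medoids.append(idx[d.argmin()])
        scores.append(feats[idx].mean())  # toy cluster score from features
    order = np.argsort(scores)[::-1][:keep]
    return np.array(medoids)[order]  # medoids of the top-scoring clusters

coords = np.random.rand(200, 3)  # 3D positions of amino acids
feats = np.random.rand(200)      # per-residue feature used for scoring
print(cluster_and_select(coords, feats))
```

The selected medoids would seed the next clustering round, yielding a progressively coarser, hierarchical representation.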
Point Contrastive Prediction with Semantic Clustering for Self-Supervised Learning on Point Cloud Videos
We propose a unified point cloud video self-supervised learning framework for
object-centric and scene-centric data. Previous methods commonly conduct
representation learning at the clip or frame level and cannot capture
fine-grained semantics well. Instead of contrasting the representations of clips or
frames, in this paper, we propose a unified self-supervised framework by
conducting contrastive learning at the point level. Moreover, we introduce a
new pretext task that enforces semantic alignment of superpoints, which further
encourages the representations to capture semantic cues at multiple scales. In
addition, due to the high redundancy in the temporal dimension of dynamic point
clouds, directly conducting contrastive learning at the point level usually
leads to massive undesired negatives and insufficient modeling of positive
representations. To remedy this, we propose a selection strategy to retain
proper negatives and make use of high-similarity samples from other instances
as positive supplements. Extensive experiments show that our method outperforms
supervised counterparts on a wide range of downstream tasks and demonstrates
the superior transferability of the learned representations. Comment: Accepted by ICCV 202
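A simplified sketch of a point-level contrastive objective with negative filtering follows; masking high-similarity off-diagonal pairs out of the negative set is a simplification of the positive-supplement strategy described above, and the threshold and shapes are assumptions:

```python
# Hedged sketch of point-level InfoNCE with false-negative filtering.
import torch
import torch.nn.functional as F

def point_infonce(feat_a, feat_b, tau=0.07, pos_thresh=0.9):
    # feat_a, feat_b: (N, C) features of the same N points under two views.
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / tau          # (N, N) point-to-point similarities
    sim = (a @ b.t()).detach()
    # Off-diagonal pairs that are highly similar likely belong to the same
    # semantic part; mask them so they are not pushed apart as false negatives.
    mask = (sim > pos_thresh) & ~torch.eye(len(a), dtype=torch.bool)
    logits = logits.masked_fill(mask, float('-inf'))
    target = torch.arange(len(a))     # the matching point index is the positive
    return F.cross_entropy(logits, target)

loss = point_infonce(torch.randn(1024, 128), torch.randn(1024, 128))
print(loss.item())
```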
Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos
Recently, the community has made tremendous progress in developing effective
methods for point cloud video understanding that learn from massive amounts of
labeled data. However, annotating point cloud videos is notoriously
expensive. Moreover, training via one or only a few traditional tasks (e.g.,
classification) may be insufficient to learn subtle details of the
spatio-temporal structure existing in point cloud videos. In this paper, we
propose a Masked Spatio-Temporal Structure Prediction (MaST-Pre) method to
capture the structure of point cloud videos without human annotations. MaST-Pre
is based on spatio-temporal point-tube masking and consists of two
self-supervised learning tasks. First, by reconstructing masked point tubes,
our method is able to capture the appearance information of point cloud videos.
Second, to learn motion, we propose a temporal cardinality difference
prediction task that estimates the change in the number of points within a
point tube. In this way, MaST-Pre is forced to model the spatial and temporal
structure in point cloud videos. Extensive experiments on MSRAction-3D,
NTU-RGBD, NvGesture, and SHREC'17 demonstrate the effectiveness of the proposed
method. Comment: Accepted by ICCV 202
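The temporal cardinality difference target can be computed directly from raw points. Below is a hedged sketch under the assumption of a simple box-shaped tube; the method's actual tube construction may differ:

```python
# Hedged sketch of the temporal cardinality difference target.
import torch

def cardinality_difference(points, center, radius=0.2):
    # points: (T, N, 3) point cloud video; center: (3,) tube axis location.
    inside = (points - center).abs().max(dim=-1).values < radius  # (T, N)
    counts = inside.sum(dim=1).float()  # points per frame inside the tube
    return counts[1:] - counts[:-1]     # (T-1,) change in point count over time

video = torch.rand(4, 1024, 3)          # 4 frames, 1024 points each
target = cardinality_difference(video, center=torch.tensor([0.5, 0.5, 0.5]))
print(target)  # regression target for a MaST-Pre-style motion pretext task
```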
A Novel Statistical Method for Interpreting the Pathogenicity of Rare Variants
PURPOSE: To achieve the ultimate goal of personalized treatment of patients, accurate molecular diagnosis and precise interpretation of the impact of genetic variants on gene function are essential. With sequencing costs becoming increasingly affordable, accurately distinguishing benign from pathogenic variants becomes the major bottleneck. Although large normal-population sequence databases have become a key resource for filtering benign variants, they are not effective at filtering extremely rare variants.
METHODS: To address this challenge, we developed a novel statistical test that combines sequencing data from a patient cohort with a normal control population database. By comparing the expected and observed allele frequencies in the patient cohort, the test identifies variants that are likely benign.
RESULTS: The performance of this new method is evaluated on both simulated and real data sets, coupled with experimental validation. We demonstrate that this new test is well powered to identify benign variants and is particularly effective for variants with low frequency in the normal population.
CONCLUSION: Overall, as a general test that can be applied to any type of variant in the context of all Mendelian diseases, our work provides a general framework for filtering benign variants with very low population allele frequency.
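The core comparison can be illustrated with a generic binomial test: given a variant's population allele frequency, check whether its observed allele count in the patient cohort is consistent with that expectation. This formulation and the numbers below are illustrative assumptions, not necessarily the paper's exact statistic:

```python
# Hedged illustration of comparing observed vs. expected allele counts.
from scipy.stats import binomtest

def enrichment_pvalue(observed_alleles, cohort_alleles, population_af):
    # H0: the patient cohort draws alleles at the population frequency.
    # A non-significant enrichment p-value is evidence the variant is benign.
    return binomtest(observed_alleles, cohort_alleles, population_af,
                     alternative='greater').pvalue

# A rare variant (population AF 1e-4) seen 3 times in 2,000 patient alleles:
print(enrichment_pvalue(3, 2000, 1e-4))
```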
Molecular cloning and in silico analysis of the duck (Anas platyrhynchos) MEF2A gene cDNA and its expression profile in muscle tissues during fetal development
The role of myogenic enhancer transcription factor 2a (MEF2A) in avian muscle during fetal development is unknown. In this work, we cloned the duck MEF2A cDNA sequence (GenBank accession no. HM460752) and examined its developmental expression profiles in cardiac muscle, non-vascular smooth muscle and skeletal muscle. Duck MEF2A cDNA comprised 1479 bp encoding 492 amino acid residues. In silico analysis showed that MEF2A contained MADS (MCM1, AGAMOUS, DEFICIENS and SRF - serum response factor), MEF2 and mitogen-activated protein kinase (MAPK) transcription domains with high homology to related proteins in other species. Modified sites in these domains were conserved among species and several variants were found. Quantitative PCR showed that MEF2A was expressed in all three muscles at each developmental stage examined, with the expression in smooth muscle being higher than in the other muscles. These results indicate that the conserved domains of duck MEF2A, including the MADS and MEF2 domains, are important for MEF2A transcription factor function. The expression of MEF2A in duck smooth muscle and cardiac muscle suggests that MEF2A plays a role in these two tissues.
Recent advances in copper-based catalysts for electrocatalytic CO2 reduction toward multi-carbon products
The electrocatalytic carbon dioxide reduction reaction (CO2RR) holds the promise of both mitigating the greenhouse effect and synthesizing a wealth of chemicals. Electrocatalytic CO2 reduction toward carbon-containing products, including C1 products (carbon monoxide, formic acid, etc.), C2 products (ethylene, ethanol, etc.) and multi-carbon products (e.g., n-propanol), provides useful fuels and chemicals for industrial production. The complexity of the multi-proton transfer processes and the difficulty of C-C coupling in electrochemical CO2 reduction toward multi-carbon (C2+) products have drawn increasing attention to catalyst design, compared with that for C1 products. In this paper, we review the main advances of recent years in synthesizing multi-carbon products through electrocatalytic carbon dioxide reduction, introduce the basic principles of electrocatalytic CO2RR, and elucidate in detail two widely accepted mechanisms of C-C coupling reactions. Among the abundant nanomaterials, copper-based catalysts stand out for the preparation of multi-carbon chemicals in electrochemical CO2RR owing to their effective C-C coupling. Given the different selectivities of the extensively applied copper-based catalysts toward multi-carbon chemicals, we classify and summarize various Cu-based catalysts by the multi-carbon products they yield, where modifying spatial and electronic structures helps to increase CO coverage or lower the activation energy barrier for C-C bond formation, thereby forming the key intermediates and increasing the production of multi-carbon products. Challenges and prospects regarding the fundamentals and development of copper-based catalysts in the electrochemical CO2 reduction reaction are also discussed.