Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advances in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability.
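One of the five challenges listed, class imbalance, is commonly addressed by reweighting the loss in favor of minority classes. The sketch below shows the inverse-frequency weighting heuristic; it is a minimal illustration of the general idea, not a method from the review, and all names are illustrative.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class weights w_c = N / (K * n_c), where N is the number of
    samples, K the number of classes, and n_c the count of class c.
    Minority classes receive proportionally larger weights."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n, k = labels.size, classes.size
    return {int(c): n / (k * cnt) for c, cnt in zip(classes, counts)}

# Toy imbalanced labels: 4 controls (0) vs. 1 case (1)
weights = inverse_frequency_weights([0, 0, 0, 0, 1])
# The minority class gets the larger weight, so a weighted loss is not
# dominated by the majority class.
```

Such weights can be passed to most ML libraries' loss functions (e.g., as per-class weights in a weighted cross-entropy).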
Graph Representation Learning in Biomedicine
Biomedical networks are universal descriptors of systems of interacting
elements, from protein interactions to disease networks, all the way to
healthcare systems and scientific knowledge. With the remarkable success of
representation learning in providing powerful predictions and insights, we have
witnessed a rapid expansion of representation learning techniques into
modeling, analyzing, and learning with such networks. In this review, we put
forward an observation that long-standing principles of networks in biology and
medicine -- while often unspoken in machine learning research -- can provide
the conceptual grounding for representation learning, explain its current
successes and limitations, and inform future advances. We synthesize a spectrum
of algorithmic approaches that, at their core, leverage graph topology to embed
networks into compact vector spaces, and capture the breadth of ways in which
representation learning is proving useful. Areas of profound impact include
identifying variants underlying complex traits, disentangling behaviors of
single cells and their effects on health, assisting in diagnosis and treatment
of patients, and developing safe and effective medicines.
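The core operation this abstract describes, leveraging graph topology to embed networks into compact vector spaces, can be illustrated with a spectral (Laplacian eigenmap) embedding. This is a minimal, generic sketch of one such technique, not an algorithm from the review; the toy graph and all names are illustrative.

```python
import numpy as np

def laplacian_eigenmap(adj, dim=2):
    """Embed nodes via the smallest nontrivial eigenvectors of the
    normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~0); keep the next `dim`.
    return vecs[:, 1:dim + 1]

# Toy interaction network: two triangles {0,1,2} and {3,4,5} joined by
# a single bridge edge (2, 3).
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
emb = laplacian_eigenmap(adj, dim=2)
# The first embedding coordinate (Fiedler vector) separates the two
# densely connected communities.
```

Neural approaches (e.g., GNNs or random-walk methods) generalize this idea, learning embeddings from topology plus node features rather than from the Laplacian spectrum alone.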
Single-Cell Multimodal Prediction via Transformers
The recent development of multimodal single-cell technology has made it
possible to acquire multiple types of omics data from individual cells,
thereby enabling a deeper understanding of cellular states and dynamics. Nevertheless,
the proliferation of multimodal single-cell data also introduces tremendous
challenges in modeling the complex interactions among different modalities. The
recently advanced methods focus on constructing static interaction graphs and
applying graph neural networks (GNNs) to learn from multimodal data. However,
such static graphs can be suboptimal because they do not take advantage of
downstream task information; meanwhile, GNNs also have inherent limitations
when many layers are stacked. To tackle these issues, in this work, we
investigate how to leverage transformers for multimodal single-cell data in an
end-to-end manner while exploiting downstream task information. In particular,
we propose scMoFormer, a framework that can readily incorporate external
domain knowledge and model interactions both within and across modalities.
Extensive experiments demonstrate that scMoFormer achieves superior performance
on various benchmark datasets. Remarkably, scMoFormer won a Kaggle silver medal
with the rank of 24/1221 (Top 2%) without ensemble in a NeurIPS 2022
competition. Our implementation is publicly available at GitHub.
Comment: CIKM 202
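The cross-modality interaction modeling that this abstract attributes to transformers can be illustrated with scaled dot-product cross-attention, where cell embeddings from one omics modality attend to another. This is a minimal NumPy sketch of the generic mechanism, not scMoFormer itself; the modality names and dimensions are hypothetical.

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: each query row attends to all
    rows of `keys_values` and returns a convex combination of them."""
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ keys_values

rng = np.random.default_rng(0)
rna = rng.normal(size=(5, 8))      # hypothetical RNA embeddings, 5 cells
protein = rng.normal(size=(5, 8))  # hypothetical protein embeddings
fused = cross_attention(rna, protein)  # RNA queries attend to protein
```

In a full transformer, learned query/key/value projections, multiple heads, and feed-forward layers would wrap this operation; the sketch shows only the attention core.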
- …