Keypoint-Augmented Self-Supervised Learning for Medical Image Segmentation with Limited Annotation
Pretraining CNN models (e.g., UNet) through self-supervision has become a
powerful approach to facilitate medical image segmentation under low annotation
regimes. Recent contrastive learning methods encourage similar global
representations when the same image undergoes different transformations, or
enforce invariance across different image/patch features that are intrinsically
correlated. However, CNN-extracted global and local features are limited in
capturing long-range spatial dependencies that are essential in biological
anatomy. To this end, we present a keypoint-augmented fusion layer that
extracts representations preserving both short- and long-range self-attention.
In particular, we augment the CNN feature map at multiple scales by
incorporating an additional input that learns long-range spatial self-attention
among localized keypoint features. Further, we introduce both global and local
self-supervised pretraining for the framework. At the global scale, we obtain
global representations both from the bottleneck of the UNet and by aggregating
multiscale keypoint features. These global features are subsequently
regularized through image-level contrastive objectives. At the local scale, we
define a distance-based criterion to first establish correspondences among
keypoints and then encourage similarity between their features. Through extensive
experiments on both MRI and CT segmentation tasks, we demonstrate the
architectural advantages of our proposed method in comparison to both CNN and
Transformer-based UNets, when all architectures are trained with randomly
initialized weights. With our proposed pretraining strategy, our method further
outperforms existing SSL methods by producing more robust self-attention and
achieving state-of-the-art segmentation results. The code is available at
https://github.com/zshyang/kaf.git
Comment: Camera ready for NeurIPS 2023. Code available at https://github.com/zshyang/kaf.git
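To make the local objective concrete, here is a minimal PyTorch sketch of a distance-based keypoint correspondence followed by a contrastive feature loss. The tensor shapes, the distance threshold, and the InfoNCE-style formulation are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def local_keypoint_loss(kp_xyz_a, feat_a, kp_xyz_b, feat_b,
                        dist_thresh=2.0, temperature=0.1):
    """Toy local contrastive objective: keypoints from two augmented views
    are matched by spatial distance, and matched pairs are pulled together
    relative to all other pairs (InfoNCE-style).

    kp_xyz_*: (N, 3) keypoint coordinates in a shared reference frame.
    feat_*:   (N, C) keypoint features.
    """
    # Pairwise distances between keypoints of the two views.
    d = torch.cdist(kp_xyz_a, kp_xyz_b)                 # (N, N)
    match = d.argmin(dim=1)                             # nearest neighbor in view b
    valid = d.gather(1, match[:, None]).squeeze(1) < dist_thresh

    # Cosine-similarity logits between all cross-view feature pairs.
    logits = F.normalize(feat_a, dim=1) @ F.normalize(feat_b, dim=1).T
    logits = logits / temperature

    # Cross-entropy pulls each keypoint toward its spatial correspondence.
    # (Assumes at least one pair falls within the distance threshold.)
    return F.cross_entropy(logits[valid], match[valid])
```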
TetCNN: Convolutional Neural Networks on Tetrahedral Meshes
Convolutional neural networks (CNNs) have been broadly studied on images,
videos, graphs, and triangular meshes, but have seldom been applied to
tetrahedral meshes. Given the merits of using volumetric meshes in applications
like brain image analysis, we introduce a novel interpretable graph CNN
framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model
exploits the volumetric Laplace-Beltrami operator (LBO) to define filters in
place of the commonly used graph Laplacian, which lacks the Riemannian metric information of
3D manifolds. For pooling adaptation, we introduce new objective functions for
localized minimum cuts in the Graclus algorithm based on the LBO. We employ a
piece-wise constant approximation scheme that uses the clustering assignment
matrix to estimate the LBO on sampled meshes after each pooling. Finally,
adapting the Gradient-weighted Class Activation Mapping algorithm for
tetrahedral meshes, we use the obtained heatmaps to visualize discovered
regions-of-interest as biomarkers. We demonstrate the effectiveness of our
model on cortical tetrahedral meshes from patients with Alzheimer's disease, as
there is scientific evidence showing the correlation of cortical thickness to
neurodegenerative disease progression. Our results show the superiority of our
LBO-based convolution layer and adapted pooling over the conventionally used
unitary cortical thickness, graph Laplacian, and point cloud representation.
Comment: Accepted as a conference paper to the Information Processing in Medical Imaging (IPMI 2023) conference
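As a rough illustration of the ChebyNet-style filtering the model builds on, the sketch below applies a Chebyshev polynomial filter to a precomputed, rescaled operator. Assembling the actual volumetric LBO from a tetrahedral mesh is omitted, and all names and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ChebConvLBO(nn.Module):
    """Chebyshev spectral convolution, as in ChebyNet, applied to a
    Laplace-Beltrami operator instead of the plain graph Laplacian.
    Expects L_hat already rescaled to [-1, 1]: L_hat = 2L/lambda_max - I.
    """
    def __init__(self, in_ch, out_ch, K=3):
        super().__init__()
        assert K >= 2, "sketch assumes at least two Chebyshev terms"
        self.weight = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.01)

    def forward(self, x, L_hat):
        # x: (num_vertices, in_ch); L_hat: (num_vertices, num_vertices),
        # kept dense here for simplicity.
        Tx_prev, Tx = x, L_hat @ x            # T_0(L)x and T_1(L)x
        out = Tx_prev @ self.weight[0] + Tx @ self.weight[1]
        for k in range(2, self.weight.shape[0]):
            # Chebyshev recurrence: T_k = 2 * L_hat * T_{k-1} - T_{k-2}
            Tx_prev, Tx = Tx, 2 * (L_hat @ Tx) - Tx_prev
            out = out + Tx @ self.weight[k]
        return out
```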
Envisioning a Next Generation Extended Reality Conferencing System with Efficient Photorealistic Human Rendering
Meeting online is becoming the new normal. Creating an immersive experience
for online meetings is a necessary step toward more diverse and seamless
environments. Efficient photorealistic rendering of human 3D dynamics is the
core of immersive meetings. Current popular applications achieve real-time
conferencing but fall short in delivering photorealistic human dynamics, either
due to limited 2D space or the use of avatars that lack realistic interactions
between participants. Recent advances in neural rendering, such as the Neural
Radiance Field (NeRF), offer the potential for greater realism in metaverse
meetings. However, the slow rendering speed of NeRF poses challenges for
real-time conferencing. We envision a pipeline for a future extended reality
metaverse conferencing system that leverages monocular video acquisition and
free-viewpoint synthesis to enhance data and hardware efficiency. Towards an
immersive conferencing experience, we explore an accelerated NeRF-based
free-viewpoint synthesis algorithm for rendering photorealistic human dynamics
more efficiently. We show that our algorithm achieves comparable rendering
quality while performing training and inference 44.5% and 213% faster than
state-of-the-art methods, respectively. Our exploration provides a design basis
for constructing metaverse conferencing systems that can handle complex
application scenarios, including dynamic scene relighting with customized
themes and multi-user conferencing that harmonizes real-world people into an
extended world.
Comment: Accepted to the CVPR 2023 ECV Workshop
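For context, the per-ray volume-rendering step that dominates NeRF's cost, and that acceleration work like this targets, looks roughly as follows. This is the standard NeRF compositing computation, not the paper's accelerated algorithm:

```python
import torch

def render_ray(sigmas, rgbs, deltas):
    """Standard NeRF volume rendering along a single ray.

    sigmas: (S,)   densities at S samples along the ray
    rgbs:   (S, 3) colors at those samples
    deltas: (S,)   distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigmas * deltas)           # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                              # (S,)
    return (weights[:, None] * rgbs).sum(dim=0)          # composited color
```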
OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing
Non-mydriatic retinal color fundus photography (CFP) is widely available
because it does not require pupillary dilation; however, it is prone to poor
quality due to operator error, systemic imperfections, or patient-related
causes. Optimal retinal image quality is essential for accurate medical diagnoses and
automated analyses. Herein, we leveraged the Optimal Transport (OT) theory to
propose an unpaired image-to-image translation scheme for mapping low-quality
retinal CFPs to high-quality counterparts. Furthermore, to improve the
flexibility, robustness, and applicability of our image enhancement pipeline in
clinical practice, we generalized a state-of-the-art model-based image
reconstruction method, regularization by denoising, by plugging in priors
learned by our OT-guided image-to-image translation network; we call this
regularization by enhancing (RE). We validated the integrated framework, OTRE,
on three publicly available retinal image datasets by assessing the quality
after enhancement and their performance on various downstream tasks, including
diabetic retinopathy grading, vessel segmentation, and diabetic lesion
segmentation. The experimental results demonstrated the superiority of our
proposed framework over some state-of-the-art unsupervised competitors and a
state-of-the-art supervised method.
Comment: Accepted as a conference paper to the 28th biennial international conference on Information Processing in Medical Imaging (IPMI 2023)
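A minimal sketch of the RE idea, assuming the standard regularization-by-denoising gradient form with the learned enhancer substituted for the denoiser; `forward_op`, `enhancer`, and all hyperparameters are placeholders, not the paper's implementation:

```python
import torch

def re_reconstruction(y, forward_op, enhancer, lam=0.1, step=0.5, iters=50):
    """Gradient-descent sketch of RED with an enhancer as the prior
    ('regularization by enhancing'): approximately minimize
        0.5 * ||A(x) - y||^2 + 0.5 * lam * x^T (x - E(x)),
    where, under the usual RED assumptions, the prior gradient is
    lam * (x - E(x)).
    """
    x = y.clone()
    for _ in range(iters):
        # Data-fidelity gradient via autograd (forward_op assumed differentiable).
        xg = x.detach().requires_grad_(True)
        data_fit = 0.5 * (forward_op(xg) - y).pow(2).sum()
        grad_data = torch.autograd.grad(data_fit, xg)[0]
        with torch.no_grad():
            grad_prior = lam * (x - enhancer(x))   # RED-style prior gradient
            x = x - step * (grad_data + grad_prior)
    return x
```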
OmniMotionGPT: Animal Motion Generation with Limited Data
Our paper aims to generate diverse and realistic animal motion sequences from
textual descriptions, without a large-scale animal text-motion dataset. While
the task of text-driven human motion synthesis is already extensively studied
and benchmarked, it remains challenging to transfer this success to other
skeleton structures with limited data. In this work, we design a model
architecture that imitates the Generative Pretraining Transformer (GPT), transferring
prior knowledge learned from human data to the animal domain. We jointly train
motion autoencoders for both animal and human motions while simultaneously
optimizing similarity scores among the human motion encoding, animal motion
encoding, and CLIP text embedding. Presenting the first solution to this
problem, we are able to generate animal motions with high diversity and
fidelity, quantitatively and qualitatively outperforming the results of
training human motion generation baselines on animal data. Additionally, we
introduce AnimalML3D, the first text-animal motion dataset with 1240 animation
sequences spanning 36 different animal identities. We hope this dataset will
help mitigate the data scarcity problem in text-driven animal motion generation,
providing a new playground for the research community.
Comment: The project page is at https://zshyang.github.io/omgpt-website
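A toy sketch of the joint objective described above: per-domain motion reconstruction plus cosine-similarity alignment among the two motion encodings and the CLIP text embedding. Module interfaces, return conventions, and loss weights here are hypothetical:

```python
import torch
import torch.nn.functional as F

def joint_alignment_loss(human_ae, animal_ae, human_mo, animal_mo, text_emb):
    """Toy joint objective: reconstruct each domain's motion and align
    human/animal motion encodings with the CLIP text embedding.
    Each autoencoder is assumed to return (latent, reconstruction).
    """
    z_h, rec_h = human_ae(human_mo)
    z_a, rec_a = animal_ae(animal_mo)

    # Per-domain reconstruction terms.
    recon = F.mse_loss(rec_h, human_mo) + F.mse_loss(rec_a, animal_mo)

    sim = lambda a, b: F.cosine_similarity(a, b, dim=-1).mean()
    # Encourage both motion latents to agree with the text embedding
    # and with each other (higher similarity -> lower loss).
    align = 3.0 - sim(z_h, text_emb) - sim(z_a, text_emb) - sim(z_h, z_a)
    return recon + align
```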