Preserving Specificity in Federated Graph Learning for fMRI-based Neurological Disorder Identification
Resting-state functional magnetic resonance imaging (rs-fMRI) offers a
non-invasive approach to examining abnormal brain connectivity associated with
brain disorders. Graph neural networks (GNNs) have gained popularity in fMRI
representation learning and brain disorder analysis thanks to their powerful
graph representation capabilities. Training a general GNN often necessitates a
large-scale dataset from multiple imaging centers/sites, but centralizing
multi-site data generally faces inherent challenges related to data privacy,
security, and storage burden. Federated Learning (FL) enables collaborative
model training without centralized multi-site fMRI data. Unfortunately,
previous FL approaches for fMRI analysis often ignore site-specificity,
including demographic factors such as age, gender, and education level. To this
end, we propose a specificity-aware federated graph learning (SFGL) framework
for rs-fMRI analysis and automated brain disorder identification, with a server
and multiple clients/sites for federated model aggregation and prediction. At
each client, our model consists of a shared and a personalized branch, where
parameters of the shared branch are sent to the server while those of the
personalized branch remain local. This can facilitate knowledge sharing among
sites and also helps preserve site specificity. In the shared branch, we employ
a spatio-temporal attention graph isomorphism network to learn dynamic fMRI
representations. In the personalized branch, we integrate vectorized
demographic information (i.e., age, gender, and education years) and functional
connectivity networks to preserve site-specific characteristics.
Representations generated by the two branches are then fused for
classification. Experimental results on two fMRI datasets with a total of 1,218
subjects suggest that SFGL outperforms several state-of-the-art approaches
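The federated scheme described above, where only shared-branch parameters are aggregated at the server while personalized-branch parameters stay local, can be illustrated with a minimal sketch. The function names and the `"shared."` key prefix are illustrative assumptions, not from the paper:

```python
import copy

def fedavg_shared(client_states, shared_prefix="shared."):
    # Average only shared-branch parameters across clients (plain FedAvg);
    # personalized-branch entries are simply skipped.
    n = len(client_states)
    return {
        key: sum(state[key] for state in client_states) / n
        for key in client_states[0]
        if key.startswith(shared_prefix)
    }

def apply_server_update(client_state, shared_avg):
    # Each client overwrites its shared-branch weights with the server
    # average; its personalized branch is left untouched.
    new_state = copy.deepcopy(client_state)
    new_state.update(shared_avg)
    return new_state
```

With two toy clients holding scalar "weights", the server average touches only the shared entry, so site-specific parameters are preserved at each client.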
XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera
We present a real-time approach for multi-person 3D motion capture at over 30
fps using a single RGB camera. It operates successfully in generic scenes which
may contain occlusions by objects and by other people. Our method operates in
subsequent stages. The first stage is a convolutional neural network (CNN) that
estimates 2D and 3D pose features along with identity assignments for all
visible joints of all individuals. We contribute a new architecture for this
CNN, called SelecSLS Net, that uses novel selective long and short range skip
connections to improve the information flow, allowing for a drastically faster
network without compromising accuracy. In the second stage, a fully connected
neural network turns the possibly partial (on account of occlusion) 2D pose and
3D pose features for each subject into a complete 3D pose estimate per
individual. The third stage applies space-time skeletal model fitting to the
predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose,
and enforce temporal coherence. Our method returns the full skeletal pose in
joint angles for each subject. This is a further key distinction from previous
work that does not produce joint angle results of a coherent skeleton in real
time for multi-person scenes. The proposed system runs on consumer hardware at
a previously unseen speed of more than 30 fps given 512x320 images as input
while achieving state-of-the-art accuracy, which we will demonstrate on a range
of challenging real-world scenes.
Comment: To appear in ACM Transactions on Graphics (SIGGRAPH) 2020
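The three-stage pipeline in the abstract (per-joint feature extraction, lifting of possibly partial poses to complete 3D poses, and temporal model fitting) can be sketched as a toy data flow. The real stages are learned networks and a space-time optimizer; the functions below are simplified stand-ins with hypothetical names:

```python
def stage2_complete(partial_pose, prior_pose):
    # Toy stand-in for the fully connected lifting network: occluded joints
    # (None) are filled from a prior pose so every subject gets a complete
    # 3D pose estimate.
    return [p if p is not None else q
            for p, q in zip(partial_pose, prior_pose)]

def stage3_temporal(prev_pose, cur_pose, alpha=0.8):
    # Toy proxy for space-time skeletal model fitting: exponential smoothing
    # enforces temporal coherence between consecutive frames.
    return [alpha * c + (1.0 - alpha) * p
            for p, c in zip(prev_pose, cur_pose)]
```

In the actual system, stage 2 also reconciles 2D and 3D evidence per subject, and stage 3 fits a full skeletal model in joint-angle space rather than smoothing joint positions directly.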