TADA: Taxonomy Adaptive Domain Adaptation
Traditional domain adaptation addresses the task of adapting a model to a novel target domain under limited or no additional supervision. While tackling the input domain gap, the standard domain adaptation settings assume no domain change in the output space. In semantic prediction tasks, however, different datasets are often labeled according to different semantic taxonomies. In many real-world settings, the target domain task requires a different taxonomy than the one imposed by the source domain. We therefore introduce the more general taxonomy adaptive domain adaptation (TADA) problem, allowing for inconsistent taxonomies between the two domains. We further propose an approach that jointly addresses image-level and label-level domain adaptation. At the label level, we employ a bilateral mixed sampling strategy to augment the target domain, and a relabelling method to unify and align the label spaces. We address the image-level domain gap by proposing an uncertainty-rectified contrastive learning method, leading to more domain-invariant and class-discriminative features. We extensively evaluate the effectiveness of our framework under different TADA settings: open taxonomy, coarse-to-fine taxonomy, and partially-overlapping taxonomy. Our framework outperforms the previous state of the art by a large margin, while being able to adapt to the target taxonomies.
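The bilateral mixed sampling strategy mentioned above can be illustrated with a minimal class-mix-style augmentation sketch, in which pixels belonging to selected source-domain classes are pasted onto a target-domain sample. The function name, the direction of mixing, and the class selection are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def class_mix(src_img, src_lbl, tgt_img, tgt_lbl, classes):
    """Paste pixels of the chosen source classes onto the target sample.

    Hypothetical sketch of class-based mixed sampling: wherever the source
    label map contains one of `classes`, the mixed sample takes the source
    pixel and label; elsewhere it keeps the target pixel and label.
    """
    mask = np.isin(src_lbl, classes)              # where the chosen classes appear
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_lbl)
    return mixed_img, mixed_lbl
```

In practice such mixing is typically paired with pseudo-labels on the target side, since ground-truth target labels are unavailable during adaptation.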
Incremental Non-Rigid Structure-from-Motion with Unknown Focal Length
The perspective camera and the isometric surface prior have recently gathered
increased attention for Non-Rigid Structure-from-Motion (NRSfM). Despite the
recent progress, several challenges remain, particularly the computational
complexity and the unknown camera focal length. In this paper we present a
method for incremental Non-Rigid Structure-from-Motion (NRSfM) with the
perspective camera model and the isometric surface prior with unknown focal
length. In the template-based case, we provide a method to estimate four
parameters of the camera intrinsics. For the template-less scenario of NRSfM,
we propose a method to upgrade reconstructions obtained for one focal length to
another based on local rigidity and the so-called Maximum Depth Heuristic
(MDH). Building on this, we propose a method to simultaneously recover the
focal length and the non-rigid shapes. We further address the problems of
incorporating a large number of points and of adding more views in MDH-based
NRSfM, and solve them efficiently with Second-Order Cone Programming (SOCP).
Our approach requires no shape initialization and produces results orders of
magnitude faster
than many methods. We provide evaluations on standard sequences with
ground-truth and qualitative reconstructions on challenging YouTube videos.
These evaluations show that our method outperforms the state of the art in
both speed and accuracy.
Comment: ECCV 201
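The Maximum Depth Heuristic that the reconstruction upgrade builds on is commonly written as a second-order cone program. A generic formulation (the notation here is assumed, not taken from the paper) is:

```latex
\max_{d_1,\dots,d_n} \; \sum_{i} d_i
\quad \text{s.t.} \quad
\left\| d_i \mathbf{q}_i - d_j \mathbf{q}_j \right\|_2 \le l_{ij}
\;\; \forall (i,j) \in \mathcal{N}, \qquad d_i \ge 0,
```

where \(d_i\) is the unknown depth of point \(i\), \(\mathbf{q}_i\) is its normalized sightline (which depends on the focal length), and \(l_{ij}\) upper-bounds the distance between neighboring points \((i,j) \in \mathcal{N}\). Each inextensibility constraint is a second-order cone, so maximizing the total depth is an SOCP.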
Model-free Consensus Maximization for Non-Rigid Shapes
Many computer vision methods use consensus maximization to relate
measurements containing outliers with the correct transformation model. In the
context of rigid shapes, this is typically done with Random Sample Consensus
(RANSAC), by estimating an analytical model that agrees with the largest
number of measurements (inliers). However, small-parameter models may not
always be available. In this paper, we formulate model-free consensus
maximization as an Integer Program in a graph using 'rules' on measurements. We
then provide a method to solve it optimally using the Branch and Bound (BnB)
paradigm. We focus its application on non-rigid shapes, where we apply the
method to remove outlier 3D correspondences and achieve performance superior to
the state of the art. Our method works with outlier ratios as high as 80%. We
further derive a similar formulation for 3D template-to-image matching,
achieving similar or better performance compared to the state of the art.
Comment: ECCV1
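The idea of model-free consensus maximization with pairwise 'rules' can be sketched with a toy brute-force search: find the largest subset of 3D point correspondences in which every pair satisfies a distance-preservation rule up to a tolerance. This exhaustive version is exponential and only illustrative; the paper instead solves the integer program optimally with Branch and Bound:

```python
from itertools import combinations

import numpy as np

def consistent(i, j, P, Q, tol=0.1):
    """'Rule': the distance between points i and j is preserved across shapes."""
    return abs(np.linalg.norm(P[i] - P[j]) - np.linalg.norm(Q[i] - Q[j])) <= tol

def max_consensus(P, Q, tol=0.1):
    """Return the largest subset of indices whose pairs all satisfy the rule.

    Brute force from the largest subsets down (exponential time); a BnB
    solver prunes this search while keeping optimality.
    """
    n = len(P)
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if all(consistent(i, j, P, Q, tol) for i, j in combinations(subset, 2)):
                return list(subset)
    return []
```

Note that the rule only needs the measurements themselves, not an analytical transformation model, which is what makes the formulation applicable to non-rigid shapes.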
White matter diffusion estimates in obsessive-compulsive disorder across 1653 individuals: machine learning findings from the ENIGMA OCD Working Group
White matter pathways, typically studied with diffusion tensor imaging (DTI), have been implicated in the neurobiology of obsessive-compulsive disorder (OCD). However, due to limited sample sizes and the predominance of single-site studies, the generalizability of OCD classification based on diffusion white matter estimates remains unclear. Here, we tested classification accuracy using the largest OCD DTI dataset to date, involving 1336 adult participants (690 OCD patients and 646 healthy controls) and 317 pediatric participants (175 OCD patients and 142 healthy controls) from 18 international sites within the ENIGMA OCD Working Group. We used an automatic machine learning pipeline (with feature engineering and selection, and model optimization) and examined the cross-site generalizability of the OCD classification models using leave-one-site-out cross-validation. Our models showed low-to-moderate accuracy in classifying (1) “OCD vs. healthy controls” (Adults, receiver operating characteristic area under the curve (ROC AUC) = 57.19 ± 3.47 in the replication set; Children, 59.8 ± 7.39), (2) “unmedicated OCD vs. healthy controls” (Adults, 62.67 ± 3.84; Children, 48.51 ± 10.14), and (3) “medicated OCD vs. unmedicated OCD” (Adults, 76.72 ± 3.97; Children, 72.45 ± 8.87). There was significant site variability in model performance (cross-validated ROC AUC ranges 51.6–79.1 in adults; 35.9–63.2 in children). Machine learning interpretation showed that diffusivity measures of the corpus callosum, internal capsule, and posterior thalamic radiation contributed to the classification of OCD from healthy controls. The classification performance appeared greater than that of the model trained on grey matter morphometry in the prior ENIGMA OCD study (our study includes subsamples from the morphometry study). Taken together, this study points to meaningful multivariate patterns of white matter features relevant to the neurobiology of OCD, but with low-to-moderate classification accuracy.
The OCD classification performance may be constrained by site variability and by medication effects on white matter integrity, indicating room for improvement in future research.
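The leave-one-site-out cross-validation scheme described above can be sketched in a few lines: each site is held out in turn, a model is fit on the remaining sites, and accuracy is measured on the held-out site. A simple nearest-centroid classifier stands in for the paper's automated ML pipeline, whose internals are not detailed here:

```python
import numpy as np

def leave_one_site_out(X, y, sites):
    """Leave-one-site-out CV: per-site accuracy of a nearest-centroid classifier.

    X: (n_subjects, n_features) feature matrix; y: class labels;
    sites: site identifier per subject. The classifier choice is an
    illustrative stand-in, not the pipeline used in the study.
    """
    accs = {}
    for site in np.unique(sites):
        train, test = sites != site, sites == site
        # Fit: one centroid per class, using only subjects from training sites.
        centroids = {c: X[train & (y == c)].mean(axis=0) for c in np.unique(y[train])}
        # Predict: assign each held-out subject to its nearest class centroid.
        preds = np.array([
            min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in X[test]
        ])
        accs[site] = float((preds == y[test]).mean())
    return accs
```

Reporting per-site accuracies, rather than a single pooled score, is what exposes the kind of site variability the abstract highlights.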