Score-PA: Score-based 3D Part Assembly
Autonomous 3D part assembly is a challenging task in the areas of robotics
and 3D computer vision. This task aims to assemble individual components into a
complete shape without relying on predefined instructions. In this paper, we
formulate this task from a novel generative perspective, introducing the
Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly.
Score-based methods, however, are typically time-consuming during the inference
stage. To address this issue, we introduce a novel algorithm, the Fast
Predictor-Corrector Sampler (FPC) that accelerates the sampling process within
the framework. We employ various metrics to assess assembly quality and
diversity, and our evaluation results demonstrate that our algorithm
outperforms existing state-of-the-art approaches. We release our code at
https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly.
Comment: BMVC 202
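The predictor-corrector family of samplers that FPC accelerates can be sketched as follows. This is a generic illustration of score-based predictor-corrector sampling (reverse-diffusion predictor plus Langevin corrector), not the paper's FPC algorithm; the noise schedule, `score_fn`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def predictor_corrector_sample(score_fn, x, n_steps=100, n_corrector=1, snr=0.16):
    """Generic predictor-corrector sampler for a score-based model.

    score_fn(x, sigma) is assumed to approximate grad_x log p_sigma(x).
    FPC's contribution (per the abstract) is reducing the number of
    such sampling steps; this sketch shows only the vanilla scheme.
    """
    sigmas = np.geomspace(10.0, 0.01, n_steps)  # decreasing noise schedule (assumed)
    rng = np.random.default_rng(0)
    for i, sigma in enumerate(sigmas):
        # Predictor: one Euler-Maruyama step of the reverse diffusion
        sigma_next = sigmas[i + 1] if i + 1 < n_steps else 0.0
        step = sigma**2 - sigma_next**2
        x = x + step * score_fn(x, sigma) + np.sqrt(step) * rng.standard_normal(x.shape)
        # Corrector: Langevin MCMC refinement at the current noise level
        for _ in range(n_corrector):
            grad = score_fn(x, sigma)
            noise = rng.standard_normal(x.shape)
            eps = 2 * (snr * np.linalg.norm(noise) / (np.linalg.norm(grad) + 1e-12))**2
            x = x + eps * grad + np.sqrt(2 * eps) * noise
    return x
```

In the assembly setting, `x` would hold the per-part poses being denoised; here it is just an array.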
Netrin-1 attenuates the progression of renal dysfunction by blocking endothelial-to-mesenchymal transition in the 5/6 nephrectomy rat model
Robust Perception through Equivariance
Deep networks for computer vision are not reliable when they encounter
adversarial examples. In this paper, we introduce a framework that uses the
dense intrinsic constraints in natural images to robustify inference. By
introducing constraints at inference time, we can shift the burden of
robustness from training to the inference algorithm, thereby allowing the model
to adjust dynamically to each individual image's unique and potentially novel
characteristics at inference time. Among different constraints, we find that
equivariance-based constraints are most effective, because they allow dense
constraints in the feature space without overly constraining the representation
at a fine-grained level. Our theoretical results validate the importance of
having such dense constraints at inference time. Our empirical experiments show
that restoring feature equivariance at inference time defends against
worst-case adversarial perturbations. The method obtains improved adversarial
robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO)
across image recognition, semantic segmentation, and instance segmentation tasks.
Project page is available at equi4robust.cs.columbia.edu
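The core idea, restoring feature equivariance at inference time, can be illustrated with a toy example. This is a minimal sketch, not the paper's method: `feat_fn` stands in for a network's dense features, `transform` for a spatial transformation (here a flip), and the finite-difference gradient is a toy stand-in for backpropagation.

```python
import numpy as np

def equivariance_loss(feat_fn, x, transform):
    """Squared deviation from equivariance: || feat(T(x)) - T(feat(x)) ||^2."""
    return float(np.sum((feat_fn(transform(x)) - transform(feat_fn(x)))**2))

def restore_equivariance(feat_fn, x, transform, steps=30, lr=0.05, h=1e-4):
    """Inference-time adaptation: nudge the input to reduce the equivariance
    loss before making a prediction (finite-difference gradient descent)."""
    x = x.astype(float).copy()
    for _ in range(steps):
        base = equivariance_loss(feat_fn, x, transform)
        grad = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            xp = x.copy()
            xp[idx] += h
            grad[idx] = (equivariance_loss(feat_fn, xp, transform) - base) / h
        x -= lr * grad
    return x
```

With a position-dependent (hence non-equivariant) feature map, a few descent steps measurably lower the equivariance loss, which is the quantity the paper's inference algorithm drives down.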
Repeating Ultraluminous X-ray Bursts and Repeating Fast Radio Bursts: A Possible Association?
Ultraluminous X-ray bursts (hereafter ULXBs) are ultraluminous X-ray flares
with a fast rise (~ one minute) and a slow decay (~ an hour), which
are commonly observed in extragalactic globular clusters. Most ULXBs are
observed as one-off bursts, whereas five flares from the same source in NGC
5128 were discovered by Irwin et al. (2016). In this Letter, we propose a
neutron star (NS)-white dwarf (WD) binary model with super-Eddington accretion
rates to explain the repeating behavior of the ULXB source in NGC 5128. With an
eccentric orbit, the mass transfer occurs at the periastron where the WD fills
its Roche lobe. The ultraluminous X-ray flares can be produced by the accretion
column around the NS magnetic poles. On the other hand, some repeating fast
radio bursts (hereafter FRBs) were also found in extragalactic globular
clusters. Repeating ULXBs and repeating FRBs are the most violent bursts in the
X-ray and radio bands, respectively. We propose a possible association between
the repeating ULXBs and the repeating FRBs. Such an association is worth
further investigation by follow-up observations on nearby extragalactic
globular clusters.
Comment: 8 pages, 3 figures, accepted for publication in Ap
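For context, "super-Eddington" accretion means the implied luminosity exceeds the Eddington limit, at which radiation pressure on electrons balances gravity. This is the standard textbook formula, not a result of this paper:

```latex
L_{\rm Edd} \;=\; \frac{4\pi G M m_p c}{\sigma_T}
\;\approx\; 1.8\times 10^{38}\,\left(\frac{M}{1.4\,M_\odot}\right)\ \mathrm{erg\,s^{-1}},
```

where $m_p$ is the proton mass and $\sigma_T$ the Thomson cross-section; ultraluminous X-ray flares from an NS accretor therefore require accretion well above this rate.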
Distributed bundle adjustment with block-based sparse matrix compression for super large scale datasets
We propose a distributed bundle adjustment (DBA) method using the exact
Levenberg-Marquardt (LM) algorithm for super large-scale datasets. Most of the
existing methods partition the global map into submaps and conduct bundle
adjustment within each submap. To fit the parallel framework, they use
approximate solutions instead of the LM algorithm, which often yields
sub-optimal results. In contrast, we utilize the exact LM
algorithm to conduct global bundle adjustment where the formation of the
reduced camera system (RCS) is actually parallelized and executed in a
distributed way. To store the large RCS, we compress it with a block-based
sparse matrix compression format (BSMC), which fully exploits its block
feature. The BSMC format also enables the distributed storage and updating of
the global RCS. The proposed method is extensively evaluated and compared with
the state-of-the-art pipelines using both synthetic and real datasets.
Preliminary results demonstrate the efficient memory usage and high scalability
of the proposed method compared with the baselines. For the first time, we
conducted parallel bundle adjustment using the LM algorithm on a real dataset with
1.18 million images and a synthetic dataset with 10 million images (about 500
times that of the state-of-the-art LM-based BA) on a distributed computing
system.
Comment: camera ready version for ICCV202
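A block-based sparse layout of the kind the abstract describes can be sketched in a few lines. This is a simplified block-sparse-row store, not the paper's BSMC format; block size, function names, and the matvec are illustrative.

```python
import numpy as np

def to_bsr(dense, bs):
    """Keep only the nonzero bs x bs blocks of a matrix, with CSR-style
    block indexing (a simplified analogue of a block-compressed format)."""
    data, col_idx, row_ptr = [], [], [0]
    for bi in range(dense.shape[0] // bs):
        for bj in range(dense.shape[1] // bs):
            blk = dense[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs]
            if np.any(blk != 0):       # store only nonzero blocks
                data.append(blk.copy())
                col_idx.append(bj)
        row_ptr.append(len(data))      # end of this block-row's entries
    return np.array(data), col_idx, row_ptr

def bsr_matvec(data, col_idx, row_ptr, x, bs):
    """y = A @ x using only the stored blocks; each block multiplies
    the matching segment of x. This is the operation an iterative or
    distributed RCS solver would repeat."""
    y = np.zeros((len(row_ptr) - 1) * bs)
    for bi in range(len(row_ptr) - 1):
        for k in range(row_ptr[bi], row_ptr[bi + 1]):
            bj = col_idx[k]
            y[bi*bs:(bi+1)*bs] += data[k] @ x[bj*bs:(bj+1)*bs]
    return y
```

The reduced camera system is block-structured (one block per camera pair sharing observations), which is why a block-granular format both compresses it and parallelizes naturally: each block-row can live on a different node.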