ZoomNet: Part-Aware Adaptive Zooming Neural Network for 3D Object Detection
3D object detection is an essential task in autonomous driving and robotics.
Though great progress has been made, challenges remain in estimating 3D pose
for distant and occluded objects. In this paper, we present a novel framework
named ZoomNet for stereo imagery-based 3D detection. The pipeline of ZoomNet
begins with an ordinary 2D object detection model which is used to obtain pairs
of left-right bounding boxes. To further exploit the abundant texture cues in
RGB images for more accurate disparity estimation, we introduce a conceptually
straightforward module -- adaptive zooming, which simultaneously resizes 2D
instance bounding boxes to a unified resolution and adjusts the camera
intrinsic parameters accordingly. In this way, we are able to estimate
higher-quality disparity maps from the resized box images and then construct
dense point clouds for both nearby and distant objects. Moreover, we propose
learning part locations as complementary features to improve robustness to
occlusion, and we put forward a 3D fitting score to better estimate 3D
detection quality. Extensive experiments on the popular KITTI 3D detection
dataset indicate ZoomNet surpasses all previous state-of-the-art methods by
large margins (improved by 9.4% on APbv (IoU=0.7) over pseudo-LiDAR). An ablation
study also demonstrates that our adaptive zooming strategy brings an
improvement of over 10% on AP3d (IoU=0.7). In addition, since the official
KITTI benchmark lacks fine-grained annotations like pixel-wise part locations,
we also present our KFG dataset by augmenting KITTI with detailed instance-wise
annotations, including pixel-wise part locations, pixel-wise disparity, etc.
Both the KFG dataset and our code will be publicly available at
https://github.com/detectRecog/ZoomNet.
Comment: Accepted by AAAI 2020 as an oral presentation; the GitHub page will be
updated in March 2020.
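
The adaptive-zooming idea above can be illustrated with a short sketch: crop a detected 2D box, resize the crop to a unified resolution, and rescale the pinhole camera intrinsics consistently so that disparity estimated on the zoomed crop remains geometrically meaningful. This is a minimal illustration, not the authors' released code; the box format, target size, and function names are assumptions.

# Minimal sketch of adaptive zooming (illustrative only).
import numpy as np
import cv2

def adaptive_zoom(image, box, K, target_size=(256, 256)):
    """Resize a 2D instance crop to target_size and adjust intrinsics K.

    image : HxWx3 array, box : (x0, y0, x1, y1), K : 3x3 intrinsic matrix.
    Returns the zoomed crop and the intrinsics expressed in the crop frame.
    """
    x0, y0, x1, y1 = [int(v) for v in box]
    crop = image[y0:y1, x0:x1]
    tw, th = target_size
    sx = tw / float(x1 - x0)           # horizontal zoom factor
    sy = th / float(y1 - y0)           # vertical zoom factor
    zoomed = cv2.resize(crop, (tw, th), interpolation=cv2.INTER_LINEAR)

    K_new = K.astype(np.float64).copy()
    K_new[0, 0] *= sx                  # fx scales with the horizontal zoom
    K_new[1, 1] *= sy                  # fy scales with the vertical zoom
    K_new[0, 2] = (K[0, 2] - x0) * sx  # principal point shifts to the crop
    K_new[1, 2] = (K[1, 2] - y0) * sy  # origin, then scales like the image
    return zoomed, K_new

Because the intrinsics are rescaled together with the image, distant objects that occupy only a few pixels are effectively "zoomed in" before disparity estimation, which is what allows dense point clouds to be built for both nearby and far-away instances.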
One-shot neural band selection for spectral recovery
Band selection has a great impact on the spectral recovery quality. To solve
this ill-posed inverse problem, most band selection methods adopt hand-crafted
priors or exploit clustering or sparse regularization constraints to find the
most prominent bands. These methods are either very slow, owing to the
computational cost of repeatedly training for different selection frequencies
or band combinations, or rely on scene priors and thus do not transfer to
other scenarios. In this paper, we present a
novel one-shot Neural Band Selection (NBS) framework for spectral recovery.
Unlike conventional searching approaches with a discrete search space and a
non-differentiable search strategy, our NBS is based on the continuous
relaxation of the band selection process, thus allowing efficient band search
using gradient descent. To support selecting any number of bands in one shot,
we further exploit band-wise correlation matrices to
progressively suppress similar adjacent bands. Extensive evaluations on the
NTIRE 2022 Spectral Reconstruction Challenge demonstrate that our NBS achieves
consistent performance gains over competitive baselines when examined with four
different spectral recovery methods. Our code will be publicly available.
Comment: Accepted by ICASSP 2023; for any questions, contact
[email protected]
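
The continuous relaxation described above can be sketched as a differentiable selection layer: each of the k selected bands is a learnable softmax-weighted mixture of all input bands during the search, and the argmax indices are taken once the search converges. This is an illustrative sketch of the general technique, not the released NBS code; the class name, temperature, and tensor layout are assumptions, and the correlation-based suppression of adjacent bands is omitted.

# Illustrative differentiable band selection via continuous relaxation.
import torch
import torch.nn as nn

class SoftBandSelector(nn.Module):
    """Select k of num_bands spectral bands through learnable logits."""

    def __init__(self, num_bands, k, temperature=1.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(k, num_bands))
        self.temperature = temperature

    def forward(self, x):
        # x: (batch, num_bands, H, W) spectral cube.
        # Soft selection: each output band is a weighted sum over all
        # input bands, so gradients flow to the selection logits.
        weights = torch.softmax(self.logits / self.temperature, dim=-1)
        return torch.einsum('kc,bchw->bkhw', weights, x)

    def selected_indices(self):
        # Hard selection used after the gradient-based search converges.
        return self.logits.argmax(dim=-1)

Training this layer jointly with a downstream spectral-recovery network lets gradient descent search over band combinations directly, instead of retraining once per discrete candidate combination.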