Multiparty quantum secret sharing with pure entangled states and decoy photons
We present a scheme for multiparty quantum secret sharing of a private key
with pure entangled states and decoy photons. The boss, say Alice, uses
decoy photons, each prepared randomly in one of four nonorthogonal
single-photon states, to prevent a potentially dishonest agent from
eavesdropping freely. This scheme requires the communicating parties to have
neither an ideal single-photon quantum source nor a maximally entangled one,
which makes it more convenient than others in practical applications.
Moreover, it has the advantages of high intrinsic qubit efficiency and, in
principle, less classical information exchange.
Comment: 5 pages, no figure
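The security role of the decoy photons can be illustrated with a toy simulation. This is a minimal sketch, not the paper's protocol: it assumes the four nonorthogonal states are BB84-style states in the Z and X bases, and models an intercept-resend eavesdropper, whose basis mismatches corrupt roughly a quarter of the decoy checks.

```python
import random

def measure(bit, prep_basis, meas_basis):
    """Outcome of measuring a photon that encodes `bit` in `prep_basis`."""
    if prep_basis == meas_basis:
        return bit
    return random.randint(0, 1)  # wrong basis: uniformly random outcome

def decoy_error_rate(n=100_000, seed=0):
    """Fraction of decoy checks an intercept-resend attacker corrupts."""
    random.seed(seed)
    errors = 0
    for _ in range(n):
        bit, basis = random.randint(0, 1), random.choice("ZX")
        eve_basis = random.choice("ZX")
        eve_bit = measure(bit, basis, eve_basis)          # Eve intercepts
        check = measure(eve_bit, eve_basis, basis)        # resent photon checked
        errors += (check != bit)
    return errors / n

print(decoy_error_rate())  # ≈ 0.25
```

With honest parties the decoy checks always pass, so an error rate near 25% on the checked decoys reveals the eavesdropper with overwhelming probability as the number of decoys grows.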
Environment, morphology and stellar populations of bulgeless low surface brightness galaxies
Based on the Sloan Digital Sky Survey DR7, we investigate the environment,
morphology, and stellar populations of bulgeless low surface brightness (LSB)
galaxies in a volume-limited sample with redshifts ranging from 0.024 to 0.04.
The local density parameter is used to
trace their environments. We find that, for bulgeless galaxies, the surface
brightness does not depend on the environment. The stellar populations are
compared for bulgeless LSB galaxies in different environments and for bulgeless
LSB galaxies with different morphologies. The stellar populations of LSB
galaxies in low density regions are similar to those of LSB galaxies in high
density regions. Irregular LSB galaxies have more young stars and are more
metal-poor than regular LSB galaxies. These results suggest that the evolution
of LSB galaxies may be driven by their dynamics, including mergers, rather
than by their large-scale environment.
Comment: 12 pages, 13 figures, Accepted by A&
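The local density parameter mentioned above can be computed in several ways; the abstract does not say which estimator is used, so the following is only an illustrative sketch assuming one common choice, the projected density to the N-th nearest neighbour, Sigma_N = N / (pi * d_N**2).

```python
import math

def sigma_n(target, neighbours, n=5):
    """Projected local density (Mpc^-2) from the n-th nearest neighbour.

    `target` and `neighbours` are 2D projected positions in Mpc;
    the density is n over the area of the circle reaching neighbour n.
    """
    dists = sorted(math.dist(target, p) for p in neighbours)
    d_n = dists[n - 1]
    return n / (math.pi * d_n ** 2)

# Toy example: galaxy positions on a projected plane (Mpc).
gals = [(0.5, 0.0), (0.0, 1.0), (1.5, 1.5), (2.0, 0.0), (0.0, 3.0), (4.0, 4.0)]
print(round(sigma_n((0.0, 0.0), gals, n=5), 3))  # 5th neighbour at 3 Mpc
```

Galaxies in dense regions get large Sigma_N because their fifth neighbour is nearby; comparing stellar populations across Sigma_N bins is then a standard way to test for environmental dependence.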
DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection
Monocular 3D detection has drawn much attention from the community due to its
low cost and setup simplicity. It takes an RGB image as input and predicts 3D
boxes in the 3D space. The most challenging sub-task lies in the instance depth
estimation. Previous works usually use a direct estimation method. However, in
this paper we point out that the instance depth on the RGB image is
non-intuitive: it couples visual depth clues with instance attribute clues,
making it hard to learn directly in the network. Therefore, we
propose to reformulate the instance depth as the combination of the instance
propose to reformulate the instance depth to the combination of the instance
visual surface depth (visual depth) and the instance attribute depth (attribute
depth). The visual depth is related to objects' appearances and positions on
the image. By contrast, the attribute depth relies on objects' inherent
attributes, which are invariant to the object affine transformation on the
image. Correspondingly, we decouple the 3D location uncertainty into visual
depth uncertainty and attribute depth uncertainty. By combining different types
of depths and associated uncertainties, we can obtain the final instance depth.
Furthermore, data augmentation in monocular 3D detection is usually limited by
the physical nature of the task, hindering performance gains. Based on the
proposed instance depth disentanglement strategy, we can alleviate this
problem. Evaluated on KITTI, our method achieves new state-of-the-art results,
and extensive ablation studies validate the effectiveness of each component of
our method. The code is released at https://github.com/SPengLiang/DID-M3D.
Comment: ECCV 202
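One natural way to combine two depth estimates that each carry a predicted uncertainty is inverse-variance weighting. The sketch below is in that spirit; the exact fusion rule used in DID-M3D may differ, and the numbers are made up for illustration.

```python
import math

def fuse_depths(depths, sigmas):
    """Inverse-variance weighted fusion of depth estimates.

    Each estimate contributes weight 1/sigma^2, so the more certain
    source dominates; the fused sigma shrinks below both inputs.
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    depth = sum(w * d for w, d in zip(weights, depths)) / total
    sigma = math.sqrt(1.0 / total)
    return depth, sigma

visual_depth, visual_sigma = 21.0, 0.5        # appearance/position cues
attribute_depth, attribute_sigma = 19.0, 1.0  # object-intrinsic cues
d, s = fuse_depths([visual_depth, attribute_depth],
                   [visual_sigma, attribute_sigma])
print(round(d, 2), round(s, 3))  # 20.6 0.447
```

The fused depth lands closer to the low-uncertainty visual estimate, which matches the intuition that the network should trust whichever cue it is more confident about for each instance.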
General Rotation Invariance Learning for Point Clouds via Weight-Feature Alignment
Compared to 2D images, 3D point clouds are much more sensitive to rotations.
We expect the point features describing certain patterns to keep invariant to
the rotation transformation. There are many recent SOTA works dedicated to
rotation-invariant learning for 3D point clouds. However, current
rotation-invariant methods lack generalizability to point clouds of open
scenes due to their reliance on the global distribution, i.e., the global
scene and backgrounds. Considering that the output activation is a function of
the pattern and its orientation, we need to eliminate the effect of the
orientation. In this paper, inspired by the idea that the network weights can be
considered a set of points distributed in the same 3D space as the input
points, we propose Weight-Feature Alignment (WFA) to construct a local
Invariant Reference Frame (IRF) via aligning the features with the principal
axes of the network weights. Our WFA algorithm provides a general solution for
the point clouds of all scenes. WFA ensures that the response activation is a
necessary and sufficient indicator of the degree of pattern matching.
Practically, we perform experiments on the point clouds of
both single objects and open large-range scenes. The results suggest that our
method almost closes the gap between rotation-invariant learning and normal
methods.
Comment: 4 figures
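The core idea of an invariant reference frame can be demonstrated with plain PCA alignment. This is only an illustrative sketch, not the paper's WFA algorithm (which aligns features with the principal axes of the network weights): here a point cloud is canonicalized by its own covariance eigenvectors, with a sign convention to resolve the per-axis ambiguity, so the result is unchanged under rotation of the input.

```python
import numpy as np

def canonical_frame(points):
    """Rotation-invariant canonical form of a 3D point set via PCA axes."""
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)  # ascending eigenvalues
    aligned = centered @ vecs
    # Resolve each axis's sign ambiguity: make the coordinate with the
    # largest magnitude positive along every principal axis.
    for k in range(aligned.shape[1]):
        i = np.argmax(np.abs(aligned[:, k]))
        if aligned[i, k] < 0:
            aligned[:, k] = -aligned[:, k]
    return aligned

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(q) < 0:                      # force a proper rotation
    q[:, 0] = -q[:, 0]
rotated = pts @ q.T
print(np.allclose(canonical_frame(pts), canonical_frame(rotated)))  # True
```

Because rotation only conjugates the covariance matrix, the principal axes co-rotate with the input and the aligned coordinates are identical, which is exactly the invariance property the IRF construction is after.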
Lidar Point Cloud Guided Monocular 3D Object Detection
Monocular 3D object detection is a challenging task in the self-driving and
computer vision community. As a common practice, most previous works use
manually annotated 3D box labels, where the annotating process is expensive. In
this paper, we find that precisely and carefully annotated labels may be
unnecessary in monocular 3D detection, which is an interesting and
counterintuitive finding. Using rough labels that are randomly disturbed, the
detector can achieve accuracy very close to that of a detector trained with
ground-truth labels. We delve into this underlying mechanism and then
empirically find that, concerning label accuracy, the 3D location part of the
label matters far more than the other parts. Motivated by the
conclusions above and considering the precise LiDAR 3D measurement, we propose
a simple and effective framework, dubbed LiDAR point cloud guided monocular 3D
object detection (LPCG). This framework is capable of either reducing the
annotation costs or considerably boosting the detection accuracy without
introducing extra annotation costs. Specifically, it generates pseudo labels
from unlabeled LiDAR point clouds. Thanks to accurate LiDAR 3D measurements,
such pseudo labels can replace manually annotated labels in the
training of monocular 3D detectors, since their 3D location information is
precise. LPCG can be applied to any monocular 3D detector to fully use
massive unlabeled data in a self-driving system. As a result, on the KITTI
benchmark, we take first place in both monocular 3D and BEV
(bird's-eye-view) detection with a significant margin. On the Waymo benchmark,
our method using 10% labeled data achieves comparable accuracy to the baseline
detector using 100% labeled data. The code is released at
https://github.com/SPengLiang/LPCG.
Comment: ECCV 202
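The pseudo-labeling idea can be sketched in a few lines. This is a hypothetical simplification, not LPCG's actual pipeline: it assumes the LiDAR points belonging to one object have already been isolated, and fits an axis-aligned 3D box whose center supplies the precise 3D location that, per the finding above, is the part of the label that matters most.

```python
import numpy as np

def points_to_box(points):
    """Axis-aligned 3D pseudo box (cx, cy, cz, l, w, h) from object points.

    The center comes from the extremes of the LiDAR points, so its
    3D location inherits LiDAR's metric accuracy.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    center = (mins + maxs) / 2.0
    size = maxs - mins
    return np.concatenate([center, size])

rng = np.random.default_rng(1)
# Toy car-sized cluster of LiDAR returns centered near (10, 0, 1) meters.
obj_pts = rng.uniform([8.0, -1.0, 0.0], [12.0, 1.0, 2.0], size=(200, 3))
box = points_to_box(obj_pts)
print(np.round(box, 1))
```

A real system would first segment or detect objects in the point cloud (e.g., with an off-the-shelf LiDAR detector) and estimate orientation as well, but even this crude box shows why unlabeled LiDAR sweeps can stand in for costly human 3D annotation.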