Spin-Orbit Interactions in Electronic Structure Quantum Monte Carlo
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians that depend explicitly on particle spins, such as those with spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations.
Tests on atomic and molecular systems show that it is very accurate, on par
with the fixed-node method. This opens electronic structure quantum Monte Carlo
methods to a vast research area of quantum phenomena in which spin-related
interactions play an important role.
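To make the idea concrete, here is a deliberately minimal toy sketch (my own illustration, not the authors' code) of fixed-phase diffusion Monte Carlo for a single 1D particle in a harmonic trap: assuming a trial phase Phi(x) = k*x, the walkers sample the wavefunction magnitude under the effective potential V + 0.5*|grad Phi|^2, and the growth estimator should land near the shifted ground-state energy 0.5 + k^2/2. All parameter choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed-phase DMC (illustrative sketch only): 1D harmonic oscillator with
# an assumed trial phase Phi(x) = k*x, so walkers feel
#   V_eff(x) = 0.5*x^2 + 0.5*|dPhi/dx|^2     (hbar = m = 1).
k = 0.3
def v_eff(x):
    return 0.5 * x**2 + 0.5 * k**2

n_target, dt, n_steps = 2000, 0.01, 5000
walkers = rng.normal(size=n_target)          # initial walker positions
e_ref = v_eff(walkers).mean()                # reference (trial) energy
energies = []

for step in range(n_steps):
    v_old = v_eff(walkers)
    walkers = walkers + np.sqrt(dt) * rng.normal(size=walkers.size)  # diffusion
    v_new = v_eff(walkers)
    # Branching weight from the local effective potential
    w = np.exp(-dt * (0.5 * (v_old + v_new) - e_ref))
    copies = np.minimum((w + rng.uniform(size=w.size)).astype(int), 3)
    walkers = np.repeat(walkers, copies)
    # Population control: nudge the reference energy toward n_target walkers
    e_ref += 0.1 * np.log(n_target / max(walkers.size, 1))
    if step > n_steps // 2:
        energies.append(e_ref)

# Growth estimator; expect roughly 0.5 + 0.5*k**2 = 0.545 for k = 0.3
print(np.mean(energies))
```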
Density-invariant Features for Distant Point Cloud Registration
Registration of distant outdoor LiDAR point clouds is crucial to extending
the 3D vision of collaborative autonomous vehicles, and yet is challenging due
to small overlapping area and a huge disparity between observed point
densities. In this paper, we propose a Group-wise Contrastive Learning (GCL) scheme to extract density-invariant geometric features for registering distant outdoor LiDAR point clouds. We show through theoretical analysis and experiments that contrastive positives should be independent and identically distributed (i.i.d.) in order to train density-invariant feature extractors. Building on this conclusion, we propose a simple yet effective training scheme that forces the features of multiple point clouds at the same spatial location (referred to as positive groups) to be similar, which conforms with the i.i.d. principle by naturally avoiding the sampling bias introduced by using only a pair of point clouds. The
resulting fully-convolutional feature extractor is more powerful and
density-invariant than state-of-the-art methods, improving the registration
recall of distant scenarios on KITTI and nuScenes benchmarks by 40.9% and
26.9%, respectively. Code is available at https://github.com/liuQuan98/GCL.
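As a rough illustration of the group-wise idea (a sketch under my own assumptions, not the released GCL code, and assuming PyTorch), the loss below pulls the features that different point clouds produce for the same spatial location toward their group centroid and pushes the centroids of different locations apart:

```python
import torch
import torch.nn.functional as F

def groupwise_contrastive_loss(groups, temperature=0.07):
    """groups: list of (n_views_i, d) tensors, one per spatial location.

    Each tensor holds the features that several point clouds produced for the
    same location (a "positive group").  Members are pulled toward their group
    centroid; centroids of different locations are pushed apart.
    """
    centroids = torch.stack([F.normalize(g.mean(dim=0), dim=0) for g in groups])  # (G, d)

    # Pull term: cosine distance of every member to its own group centroid.
    pull = torch.stack([
        (1.0 - F.normalize(g, dim=1) @ c).mean() for g, c in zip(groups, centroids)
    ]).mean()

    # Push term: each centroid's own index is the "correct class", so the
    # softmax over centroid similarities suppresses cross-location similarity.
    logits = centroids @ centroids.t() / temperature
    labels = torch.arange(len(groups), device=logits.device)
    push = F.cross_entropy(logits, labels)
    return pull + push

# Tiny usage example with random features (3 locations, varying view counts).
groups = [torch.randn(n, 32) for n in (4, 6, 3)]
print(groupwise_contrastive_loss(groups))
```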
MoGDE: Boosting Mobile Monocular 3D Object Detection with Ground Depth Estimation
Monocular 3D object detection (Mono3D) in mobile settings (e.g., on a
vehicle, a drone, or a robot) is an important yet challenging task. Due to the
near-far disparity phenomenon of monocular vision and the ever-changing camera
pose, it is hard to achieve high detection accuracy, especially for far objects. Inspired by the insight that the depth of an object can be well determined from the depth of the ground on which it stands, in this paper,
we propose a novel Mono3D framework, called MoGDE, which constantly estimates
the corresponding ground depth of an image and then utilizes the estimated
ground depth information to guide Mono3D. To this end, we utilize a pose
detection network to estimate the pose of the camera and then construct a
feature map portraying pixel-level ground depth according to the 3D-to-2D
perspective geometry. Moreover, to improve Mono3D with the estimated ground
depth, we design an RGB-D feature fusion network based on the transformer
structure, where the long-range self-attention mechanism is utilized to
effectively identify ground-contacting points and pin the corresponding ground
depth to the image feature map. We conduct extensive experiments on the
real-world KITTI dataset. The results demonstrate that MoGDE can effectively
improve the Mono3D accuracy and robustness for both near and far objects. MoGDE
outperforms the state-of-the-art methods by a large margin and is ranked number one on the KITTI 3D benchmark.
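For intuition about the 3D-to-2D perspective step, here is a minimal sketch (my own, not the MoGDE code) of a pixel-level ground-depth map computed by intersecting each pixel's ray with a flat ground plane, given assumed camera intrinsics, height, and pitch; the pitch sign convention and the example numbers are assumptions.

```python
import numpy as np

def ground_depth_map(K, cam_height, pitch, H, W):
    """Per-pixel depth of a flat ground plane (illustrative sketch).

    K          : 3x3 camera intrinsics.
    cam_height : camera height above the ground in metres.
    pitch      : camera pitch about its x-axis in radians (sign convention
                 below is an assumption; adjust to match your pose estimator).
    Returns an (H, W) array of camera-frame depths, NaN where the pixel's ray
    never meets the ground.
    """
    # Back-project every pixel to a camera-frame ray r_c = K^{-1} [u, v, 1]^T.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)
    rays_c = np.linalg.inv(K) @ pix

    # Rotate rays into a gravity-aligned frame whose y-axis points straight down.
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])        # assumed camera-to-ground rotation
    rays_g = R @ rays_c

    # The ground plane sits at y = cam_height in the gravity-aligned frame.
    with np.errstate(divide="ignore", invalid="ignore"):
        t = cam_height / rays_g[1]         # ray scale at the intersection
    depth = t * rays_c[2]                  # camera-frame z of the hit point
    depth[(t <= 0) | ~np.isfinite(depth)] = np.nan   # rays above the horizon
    return depth.reshape(H, W)

# Example with hypothetical KITTI-like intrinsics, 1.65 m camera height, level pitch.
K = np.array([[721.5, 0.0, 609.6], [0.0, 721.5, 172.9], [0.0, 0.0, 1.0]])
print(ground_depth_map(K, 1.65, 0.0, 375, 1242)[300, 600])
```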
Efficient Adaptive Activation Rounding for Post-Training Quantization
Post-training quantization attracts increasing attention due to its
convenience in deploying quantized neural networks. Although
rounding-to-nearest remains the prevailing method for DNN quantization, prior
research has demonstrated its suboptimal nature when applied to weight
quantization. That work proposes optimizing weight rounding schemes by leveraging the output error rather than the traditional weight quantization error. Our study reveals that similar rounding challenges also extend to activation quantization. Although the idea generalizes easily, the difficulty lies in the dynamic nature of activations: the rounding scheme must adapt to varying activation values, and such adaptation incurs runtime overhead. To tackle this, we
propose the AQuant quantization framework, which takes a novel perspective and reduces output error by adjusting the rounding schemes of activations. Instead of using the constant rounding border 0.5 of the rounding-to-nearest operation, we make the border a function of the activation value, so that each activation is rounded according to its own adaptive border. To deal with the runtime overhead, we use a
coarse-grained version of the border function. Finally, we introduce our
framework to optimize the border function. Extensive experiments show that
AQuant achieves notable improvements compared to state-of-the-art works and
pushes the accuracy of ResNet-18 up to 60.31% under 2-bit weight and activation quantization.
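To show what moving the rounding border means in practice, here is a tiny sketch (my own illustration, not the AQuant implementation): with a border of 0.5 it behaves like ordinary rounding-to-nearest, and any other border, possibly chosen per activation or per coarse-grained bin, shifts which values round up.

```python
import numpy as np

def quantize_with_border(x, scale, border):
    """Uniform quantization with an adjustable rounding border.

    `border` may be a scalar or an array broadcastable to x, with values in
    (0, 1); border = 0.5 reproduces rounding-to-nearest, while a per-value
    border (e.g. a function of the activation) changes which values round up.
    """
    scaled = x / scale
    base = np.floor(scaled)
    frac = scaled - base
    q = base + (frac >= border)            # round up only past the chosen border
    return q * scale

x = np.array([0.26, 0.36, -0.07, 0.56], dtype=np.float32)
print(quantize_with_border(x, 0.1, 0.5))   # ordinary round-to-nearest grid values
# A hypothetical coarse-grained border: one value per activation-magnitude bin,
# which keeps the runtime cost of evaluating the border function small.
borders = np.where(np.abs(x) > 0.3, 0.7, 0.5)
print(quantize_with_border(x, 0.1, borders))
```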
Exploring automated formant analysis for comparative variationist study of Heritage Cantonese and English
We consider the possibility of Cantonese and English reciprocally influencing vowel space in Toronto’s Heritage Cantonese community by comparing Generation 1 (Gen1) and Generation 2 (Gen2) speakers in both languages. We predict more English-like patterns in Gen2 Cantonese (vs. Gen1) and more Cantonese-like patterns in Gen1 English (vs. Gen2). Methodological innovations include automated forced alignment and formant extraction for Cantonese -- methods increasingly used for English data but not frequently applied to other languages in sociolinguistics. Extension to additional languages provides testing grounds for sociolinguistic generalizations that have been based primarily on English, French, and Spanish. FAVE (Rosenfelder et al. 2011) was used to force-align English transcripts to the corresponding .wav files. Cantonese transcripts were force-aligned in ProsodyLab (Gorman et al. 2011), which uses unsupervised machine learning to train acoustic models and is customizable for non-English data (unlike FAVE). FAVE was used to extract and normalize English formant measurements (F1, F2) at each vowel midpoint; a custom Praat script did the same for Cantonese. The data consist of ~40,000 measured vowels per language: all stressed vowels produced by 10 speakers per language during a 1-hour interview. This paper focuses on ~9,000 tokens of /i/. Preliminary results from mixed-effects modeling:
- Generation and sex are main effects in Cantonese for both F1 and F2, but only for F2 in English → as predicted; Gen1 speakers haven’t fully acquired social conditioning in English. Contra predictions, Gen2 sustains Gen1-like social conditioning in Cantonese.
- As in Homeland Cantonese (Yue-Hashimoto 1972:158), Heritage Cantonese /i/ shows a centralizing effect of following velars, stronger in Gen1 than Gen2 → supports our hypothesis. Neither generation transfers this effect to English.
- Without any human correction, the automatically extracted and measured data behave much as expected → a promising avenue for further investigation.
Comparisons to Toronto Anglo English (Boberg 2008, Roeder & Jarmasz 2010, Roeder 2012) will be reported.
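For readers who want to reproduce the midpoint measurements, the following is a rough sketch of the extraction step, assuming the praat-parselmouth Python wrapper rather than the custom Praat script described above; the file path, interval tuples, and formant-ceiling value are placeholders.

```python
import parselmouth  # assumes the praat-parselmouth package

def midpoint_formants(wav_path, vowel_intervals, formant_ceiling=5500.0):
    """Return (label, F1, F2) at each vowel's temporal midpoint.

    vowel_intervals: iterable of (label, start_s, end_s) tuples, e.g. read out
    of a forced-alignment TextGrid (reading the TextGrid is left to the caller).
    """
    sound = parselmouth.Sound(wav_path)
    formant = sound.to_formant_burg(maximum_formant=formant_ceiling)
    rows = []
    for label, start, end in vowel_intervals:
        mid = 0.5 * (start + end)
        f1 = formant.get_value_at_time(1, mid)   # F1 in Hz at the midpoint
        f2 = formant.get_value_at_time(2, mid)   # F2 in Hz at the midpoint
        rows.append((label, f1, f2))
    return rows

# Hypothetical usage: one speaker file and two aligned /i/ intervals.
# print(midpoint_formants("speaker01.wav", [("i", 1.234, 1.310), ("i", 2.050, 2.118)]))
```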