108 research outputs found

    Density-dependent NN-interaction from subleading chiral 3N-forces: short-range terms and relativistic corrections

    We derive from the subleading contributions to the chiral three-nucleon force (short-range terms and relativistic corrections, published in Phys. Rev. C84, 054001 (2011)) a density-dependent two-nucleon interaction $V_\text{med}$ in isospin-symmetric nuclear matter. The momentum- and $k_f$-dependent potentials associated with the isospin operators ($\mathbb{1}$ and $\vec\tau_1\!\cdot\!\vec\tau_2$) and five independent spin structures are expressed in terms of loop functions, which are either given in closed analytical form or require at most one numerical integration. Our results for $V_\text{med}$ are most helpful for implementing subleading chiral 3N-forces in nuclear many-body calculations.

    Comment: 16 pages, 5 figures
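    As a schematic illustration only (not the paper's actual expressions, and with hypothetical loop-function names $f_i$, $g_i$), the decomposition described in the abstract — two isospin channels times five spin structures, with coefficients depending on the momentum $p$ and the Fermi momentum $k_f$ — has the form:

    ```latex
    V_\text{med}(p, k_f) \;=\; \sum_{i=1}^{5}
      \Big[\, f_i(p, k_f)\,\mathbb{1}
            + g_i(p, k_f)\, \vec\tau_1\!\cdot\!\vec\tau_2 \,\Big]\,
      \mathcal{O}_i ,
    \qquad
    \mathcal{O}_i \in \{\mathbb{1},\ \vec\sigma_1\!\cdot\!\vec\sigma_2,\ \dots\}
    ```

    Here the $f_i$ and $g_i$ stand in for the loop functions the abstract mentions, evaluated in closed form or by a single numerical integration.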

    Large-eddy simulation of flow and combustion dynamics in a lean partially premixed swirling combustor

    A lean partially premixed swirling combustor was studied by resolving the complete flow path from the swirl vanes to the chamber outlet with large-eddy simulation (LES). The flow and combustion dynamics for non-reacting and reacting situations were analysed, and the intrinsic effects of the swirl vanes and counter-flows on vortex formation and vorticity distribution in the non-reacting cases were examined. A modified flame index was introduced to identify the flame regime during partially premixed combustion. The combustion instability phenomenon was examined by applying Fourier spectral analysis. Several scalar variables were monitored to investigate the combustion dynamics at different operating conditions. The effects of swirl number, equivalence ratio and nitrogen dilution on combustion dynamics and NOx emissions were found to be significant.

    This work is supported by the UK EPSRC through Grant EP/K036750/1 and the National Natural Science Foundation of China through Grant No. 51376107. The computation is supported by the Tsinghua National Laboratory for Information Science and Technology.
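    The abstract does not give the form of its modified flame index, but indices of this family typically start from the Takeno flame index, the dot product of the fuel and oxidizer mass-fraction gradients (positive for premixed, negative for diffusion-like burning). The sketch below, a hypothetical normalized variant on a 1-D field, is an assumption for illustration, not the paper's formulation:

    ```python
    import numpy as np

    def flame_index(y_fuel, y_ox, dx=1.0):
        """Normalized Takeno-style flame index on a 1-D field (illustrative sketch).

        Returns values in [-1, 1]: positive where fuel and oxidizer gradients
        are aligned (premixed-like), negative where opposed (diffusion-like).
        """
        gf = np.gradient(y_fuel, dx)
        go = np.gradient(y_ox, dx)
        denom = np.abs(gf) * np.abs(go) + 1e-30  # guard against division by zero
        return gf * go / denom

    # Toy profile: a fuel pocket with oxidizer filling the remainder, so the
    # gradients oppose each other everywhere -> diffusion-like index (<= 0).
    x = np.linspace(0.0, 1.0, 101)
    y_fuel = np.exp(-((x - 0.3) ** 2) / 0.01)
    y_ox = 1.0 - y_fuel
    fi = flame_index(y_fuel, y_ox, dx=x[1] - x[0])
    ```

    In a partially premixed LES, a field like this (thresholded and often weighted by a reaction-rate marker) is what lets one label each cell's combustion regime.
    
    
    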

    UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation

    Jointly processing information from multiple sensors is crucial to achieving accurate and robust perception for reliable autonomous driving systems. However, current 3D perception research follows a modality-specific paradigm, leading to additional computation overheads and inefficient collaboration between different sensor data. In this paper, we present an efficient multi-modal backbone for outdoor 3D perception named UniTR, which processes a variety of modalities with unified modeling and shared parameters. Unlike previous works, UniTR introduces a modality-agnostic transformer encoder to handle these view-discrepant sensor data for parallel modal-wise representation learning and automatic cross-modal interaction without additional fusion steps. More importantly, to make full use of these complementary sensor types, we present a novel multi-modal integration strategy that considers both the semantically abundant 2D perspective and geometry-aware 3D sparse neighborhood relations. UniTR is also a fundamentally task-agnostic backbone that naturally supports different 3D perception tasks. It sets a new state-of-the-art performance on the nuScenes benchmark, achieving +1.1 NDS for 3D object detection and +12.0 mIoU for BEV map segmentation with lower inference latency. Code will be available at https://github.com/Haiyang-W/UniTR.

    Comment: Accepted by ICCV202
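    The core idea the abstract describes — one encoder with a single parameter set applied to tokens from every modality — can be sketched in a toy form. Everything below is a hypothetical stand-in (a shared linear+ReLU block and plain concatenation), not UniTR's actual transformer blocks or fusion strategy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    D = 8
    # One weight matrix shared by all modalities: the "shared parameters" idea.
    W = rng.standard_normal((D, D)) / np.sqrt(D)

    def encode(tokens):
        """Apply the same shared block to any modality's token set."""
        return np.maximum(tokens @ W, 0.0)  # linear + ReLU as a toy encoder

    cam_tokens = rng.standard_normal((16, D))    # stand-in for image-patch tokens
    lidar_tokens = rng.standard_normal((32, D))  # stand-in for voxel/pillar tokens

    # Parallel modal-wise representation learning with a single parameter set;
    # concatenation here is only a placeholder for cross-modal interaction.
    fused = np.concatenate([encode(cam_tokens), encode(lidar_tokens)], axis=0)
    ```

    The point of the sketch is the weight sharing: both token sets pass through the identical `encode`, so adding a modality adds no new encoder parameters.
    
    
    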

    CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds

    We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D. Our proposed method first generates some high-quality 3D proposals by leveraging a class-aware local grouping strategy on object surface voxels with the same semantic predictions, which considers the semantic consistency and diverse locality abandoned in previous bottom-up approaches. Then, to recover the features of voxels missed due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module to directly aggregate fine-grained spatial information from the backbone for further proposal refinement. It is memory- and computation-efficient and can better encode the geometry-specific features of each 3D proposal. Our model achieves state-of-the-art 3D detection performance with remarkable gains of +3.6% on ScanNet V2 and +2.6% on SUN RGB-D in terms of mAP@0.25. Code will be available at https://github.com/Haiyang-W/CAGroup3D.

    Comment: Accepted by NeurIPS202
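    Class-aware grouping as the abstract describes it means proposals are only formed from voxels sharing a predicted semantic class, clustered by locality within each class. The following toy sketch (a greedy distance-based clustering per class, hypothetical and far simpler than CAGroup3D's actual voxel grouping) illustrates just that bucketing-then-clustering structure:

    ```python
    from collections import defaultdict

    def class_aware_groups(voxels, labels, radius=1.5):
        """Greedy per-class grouping by distance (toy stand-in for the real method)."""
        # 1) Bucket voxels by predicted semantic class: never mix classes in a group.
        by_class = defaultdict(list)
        for v, c in zip(voxels, labels):
            by_class[c].append(v)
        # 2) Within each class, attach a voxel to the first cluster that has a
        #    member within `radius`; otherwise start a new cluster.
        groups = []
        for c, pts in by_class.items():
            clusters = []
            for p in pts:
                for cl in clusters:
                    if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
                           for q in cl):
                        cl.append(p)
                        break
                else:
                    clusters.append([p])
            groups.extend((c, cl) for cl in clusters)
        return groups

    voxels = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 5.0, 0.0), (5.2, 5.0, 0.0)]
    labels = ["chair", "chair", "chair", "table"]
    groups = class_aware_groups(voxels, labels)
    # Two spatially separated chair clusters plus one table cluster -> 3 groups.
    ```

    The design point is that the far-away chair voxel and the nearby table voxel end up in different groups, for different reasons: distance in one case, semantic class in the other.
    
    
    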