A complete catalogue of broad-line AGNs and double-peaked emission lines from MaNGA integral-field spectroscopy of 10K galaxies: stellar population of AGNs, supermassive black holes, and dual AGNs
We analyse the integral-field spectroscopy data for the
galaxies in the final data release of the MaNGA survey. We identify 188 galaxies
for which the emission lines cannot be described by single Gaussian components.
These galaxies can be classified into (1) 38 galaxies with both broad lines and
broad [O III] λ5007 lines, (2) 101 galaxies with broad lines but no
broad [O III] λ5007 lines, and (3) 49 galaxies with double-peaked narrow
emission lines. Most of the broad line galaxies are classified as Active
Galactic Nuclei (AGN) from their line ratios. The catalogue helps us further
understand the AGN-galaxy coevolution through the stellar population of
broad-line region host galaxies and the relation between broad lines'
properties and the host galaxies' dynamical properties. The stellar population
properties (including mass, age and metallicity) of broad-line host galaxies
suggest there is no significant difference between narrow-line Seyfert-2
galaxies and Type-1 AGN with broad lines. We use the broad-line widths and
luminosities to estimate the black hole masses in these galaxies, and test the
black hole-host galaxy scaling relation in Type-1 AGN host galaxies.
Furthermore, we find three dual AGN candidates supported by radio images from
the VLA FIRST survey. This sample may be useful for further studies of AGN
activity and feedback processes.
Comment: 21 pages, 17 figures, LaTeX. Accepted by MNRAS
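For context, single-epoch black hole mass estimates of the kind described above typically combine a broad line's width with a radius-luminosity relation under the virial assumption. A generic sketch of the estimator is given below; the exact calibration (virial factor, choice of line, and radius-luminosity slope) varies between studies and may differ from the one used in this paper:

```latex
% Single-epoch virial estimator: the broad-line region (BLR) radius is
% inferred from the line or continuum luminosity, and the virial velocity
% from the broad-line width (FWHM or sigma).
M_{\mathrm{BH}} = f \, \frac{R_{\mathrm{BLR}} \, (\Delta v)^{2}}{G},
\qquad
R_{\mathrm{BLR}} \propto L^{\alpha}, \quad \alpha \approx 0.5
```

Here \(f\) is a dimensionless virial factor absorbing the unknown BLR geometry; the approximately square-root luminosity scaling comes from reverberation-mapping calibrations.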
Towards Label-free Scene Understanding by Vision Foundation Models
Vision foundation models such as Contrastive Vision-Language Pre-training
(CLIP) and Segment Anything (SAM) have demonstrated impressive zero-shot
performance on image classification and segmentation tasks. However, the
incorporation of CLIP and SAM for label-free scene understanding has yet to be
explored. In this paper, we investigate the potential of vision foundation
models in enabling networks to comprehend 2D and 3D worlds without labelled
data. The primary challenge lies in effectively supervising networks under
extremely noisy pseudo labels, which are generated by CLIP and further
exacerbated during the propagation from the 2D to the 3D domain. To tackle
these challenges, we propose a novel Cross-modality Noisy Supervision (CNS)
method that leverages the strengths of CLIP and SAM to supervise 2D and 3D
networks simultaneously. In particular, we introduce a prediction consistency
regularization to co-train 2D and 3D networks, then further impose the
networks' latent space consistency using the SAM's robust feature
representation. Experiments conducted on diverse indoor and outdoor datasets
demonstrate the superior performance of our method in understanding 2D and 3D
open environments. Our 2D and 3D networks achieve label-free semantic
segmentation with 28.4% and 33.5% mIoU on ScanNet, improvements of 4.7% and
7.9%, respectively. On the nuScenes dataset, our method reaches 26.8% mIoU, an
improvement of 6%. Code will be released at
(https://github.com/runnanchen/Label-Free-Scene-Understanding)
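The prediction consistency regularization described above can be illustrated with a minimal sketch, assuming paired pixel/point logits from the 2D and 3D networks. This is a generic symmetric-KL formulation; the paper's exact loss (weighting, stop-gradients, and the SAM-guided handling of noisy pseudo labels) may differ:

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_2d: torch.Tensor, logits_3d: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between 2D-pixel and 3D-point predictions.

    A generic sketch of prediction-consistency regularization for co-training
    a 2D and a 3D network on corresponding pixel/point pairs.
    Inputs: (N, C) logits for N paired samples over C classes.
    """
    log_p2d = F.log_softmax(logits_2d, dim=1)
    log_p3d = F.log_softmax(logits_3d, dim=1)
    # KL(p2d || p3d) + KL(p3d || p2d), averaged over the batch
    kl_a = F.kl_div(log_p3d, log_p2d.exp(), reduction="batchmean")
    kl_b = F.kl_div(log_p2d, log_p3d.exp(), reduction="batchmean")
    return 0.5 * (kl_a + kl_b)
```

In practice such a term is added to each network's supervised (pseudo-label) loss so that the two modalities regularize each other.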
Rethinking Range View Representation for LiDAR Segmentation
LiDAR segmentation is crucial for autonomous driving perception. Recent
trends favor point- or voxel-based methods as they often yield better
performance than the traditional range view representation. In this work, we
unveil several key factors in building powerful range view models. We observe
that the "many-to-one" mapping, semantic incoherence, and shape deformation are
likely impediments to effective learning from range view projections. We
present RangeFormer -- a full-cycle framework comprising novel designs across
network architecture, data augmentation, and post-processing -- that better
handles the learning and processing of LiDAR point clouds from the range view.
We further introduce a Scalable Training from Range view (STR) strategy that
trains on arbitrary low-resolution 2D range images, while still maintaining
satisfactory 3D segmentation accuracy. We show that, for the first time, a
range view method is able to surpass the point, voxel, and multi-view fusion
counterparts on competitive LiDAR semantic and panoptic segmentation
benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI.
Comment: ICCV 2023; 24 pages, 10 figures, 14 tables; Webpage at
https://ldkong.com/RangeForme
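The range view representation discussed above is produced by a spherical projection of the LiDAR point cloud onto a 2D image; the "many-to-one" mapping arises when several points fall into the same pixel. A minimal sketch follows, with a typical 64-beam vertical field of view as an illustrative assumption (not RangeFormer's exact configuration):

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points (N, 3) onto an H x W range image.

    Standard spherical (range view) projection: azimuth maps to columns,
    elevation maps to rows, and each pixel stores the depth of a point.
    The FOV values are illustrative defaults for a 64-beam sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                    # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r
    u = 0.5 * (1.0 - yaw / np.pi) * W                         # column index
    v = (1.0 - (pitch - fov_down_r) / fov) * H                # row index
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
    image = np.full((H, W), -1.0, dtype=np.float32)
    # Assign farthest points first so nearer points overwrite them
    # (this is the "many-to-one" mapping the abstract refers to).
    order = np.argsort(depth)[::-1]
    image[v[order], u[order]] = depth[order]
    return image, v, u
```

Semantic incoherence and shape deformation follow directly from this mapping: neighbouring pixels can come from distant 3D structures, and object shapes are distorted by the angular parameterization.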
CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP
Contrastive Language-Image Pre-training (CLIP) achieves promising results in
2D zero-shot and few-shot learning. Despite this impressive 2D performance,
applying CLIP to aid learning in 3D scene understanding has yet to be
explored. In this paper, we make the first attempt to investigate how CLIP
knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet
effective framework that transfers CLIP knowledge from 2D image-text
pre-trained models to a 3D point cloud network. We show that the pre-trained 3D
network yields impressive performance on various downstream tasks, i.e.,
annotation-free semantic segmentation and fine-tuning with labelled data.
Specifically, built upon CLIP, we design a Semantic-driven Cross-modal
Contrastive Learning framework that pre-trains a 3D network via semantic and
spatial-temporal consistency regularization. For the former, we first leverage
CLIP's text semantics to select the positive and negative point samples and
then employ the contrastive loss to train the 3D network. In terms of the
latter, we force the consistency between the temporally coherent point cloud
features and their corresponding image features. We conduct experiments on
SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained
network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08%
mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100%
labelled data, our method significantly outperforms other self-supervised
methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we
demonstrate the generalizability for handling cross-domain datasets. Code is
publicly available at https://github.com/runnanchen/CLIP2Scene.
Comment: CVPR 2023
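The semantic-driven contrastive step described above, where CLIP's text semantics select positives and negatives, can be sketched as an InfoNCE-style loss over class text embeddings. This is a generic formulation with an assumed temperature; the paper's exact positive/negative sampling and spatial-temporal consistency terms are not reproduced here:

```python
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(point_feats, text_feats, labels, tau=0.07):
    """InfoNCE-style loss pulling each 3D point feature toward its class's
    CLIP text embedding and away from the other classes' embeddings.

    A generic sketch: `labels` stand for pseudo-labels obtained by matching
    pixels to CLIP text prompts and propagating them to points.
    point_feats: (N, D); text_feats: (C, D); labels: (N,) in [0, C).
    """
    p = F.normalize(point_feats, dim=1)
    t = F.normalize(text_feats, dim=1)
    logits = p @ t.t() / tau          # (N, C) cosine similarities / temperature
    return F.cross_entropy(logits, labels)
```

With perfectly aligned features the loss approaches zero, which is the training signal that distills CLIP's semantics into the 3D network.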
UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the OpenPCSeg Codebase
Point-, voxel-, and range-views are three representative forms of point
clouds. All of them have accurate 3D measurements but lack color and texture
information. RGB images are a natural complement to these point cloud views and
fully utilizing their combined information enables more robust
perception. In this paper, we present a unified multi-modal LiDAR segmentation
network, termed UniSeg, which leverages the information of RGB images and three
views of the point cloud, and accomplishes semantic segmentation and panoptic
segmentation simultaneously. Specifically, we first design the Learnable
cross-Modal Association (LMA) module to automatically fuse voxel-view and
range-view features with image features, which fully utilize the rich semantic
information of images and are robust to calibration errors. Then, the enhanced
voxel-view and range-view features are transformed to the point space, where
three views of point cloud features are further fused adaptively by the
Learnable cross-View Association module (LVA). Notably, UniSeg achieves
promising results in three public benchmarks, i.e., SemanticKITTI, nuScenes,
and Waymo Open Dataset (WOD); it ranks 1st in two public challenges:
the LiDAR semantic segmentation challenge of nuScenes and the panoptic
segmentation challenge of SemanticKITTI. Besides, we construct the OpenPCSeg
codebase, which is the largest and most comprehensive outdoor LiDAR
segmentation codebase. It contains most of the popular outdoor LiDAR
segmentation algorithms and provides reproducible implementations. The
OpenPCSeg codebase will be made publicly available at
https://github.com/PJLab-ADG/PCSeg.
Comment: ICCV 2023; 21 pages; 9 figures; 18 tables; Code at
https://github.com/PJLab-ADG/PCSe
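The learnable fusion of a LiDAR-view feature with an image feature can be illustrated with a minimal gated-fusion sketch. This is a simplification for intuition only: the actual LMA/LVA modules are attention-based associations designed to be robust to calibration errors, not a simple gate:

```python
import torch
import torch.nn as nn

class LearnableFusion(nn.Module):
    """Minimal gated fusion of a LiDAR-view feature with an image feature.

    A per-sample sigmoid gate decides how much of each modality to keep.
    This is an illustrative stand-in for learnable cross-modal association,
    not the paper's LMA module.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, lidar_feat: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) per channel: blend the two modalities adaptively
        g = self.gate(torch.cat([lidar_feat, img_feat], dim=-1))
        return g * lidar_feat + (1.0 - g) * img_feat
```

The same blending idea extends to fusing the three point cloud views in point space once the voxel- and range-view features have been projected there.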
Pd-Catalyzed Homo Cross-dehydrogenative Coupling of 2-Arylpyridines by using I2 as the Sole Oxidant
A palladium-catalyzed homo cross-dehydrogenative coupling (CDC) of 2-arylpyridines via C–H activation is described. The reaction employs I2 as the sole oxidant without any other additives, complementing hypervalent-iodine chemistry, such as that of phenyliodonium diacetate (PIDA) or IOAc, in the C–H activation research field. A tentative mechanism involving a Pd(II)–Pd(IV) catalytic cycle is proposed to rationalize this homo CDC reaction.
Simulation and Interaction of Fluid Dynamics
In fluid simulation, fluids and their surroundings may simultaneously undergo large changes in properties such as shape and temperature, and different surroundings give rise to different interactions, which alter the shape and motion of the fluids in different ways. Moreover, interactions among fluid mixtures of different kinds generate even richer behaviour. To investigate interaction behaviour in physically based fluid simulation, it is important to build physically correct models that represent the varying interactions between fluids and their environments, as well as the interactions within mixtures. In this paper, we briefly review these interactions, focus on those of greatest interest to us, and model them with various physical solutions. In particular, more detail is given on the simulation of miscible and immiscible binary mixtures. For some of the methods, it is advantageous to use the graphics processing unit (GPU) to achieve real-time computation for medium-scale simulations.
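A miscible binary mixture of the kind mentioned above is often modelled, at its simplest, by diffusion of a concentration field. The following sketch shows one explicit time step of the 2D diffusion equation on a periodic grid; it is a minimal illustration, not the paper's method, and the diffusivity and step sizes are assumed values:

```python
import numpy as np

def diffuse(c, kappa=0.1, dt=0.1, dx=1.0):
    """One explicit Euler step of dc/dt = kappa * laplacian(c).

    `c` is a 2D concentration field for one component of a miscible binary
    mixture, with periodic boundaries. The explicit scheme is stable for
    dt <= dx**2 / (4 * kappa) in 2D.
    """
    # Five-point stencil Laplacian with periodic (np.roll) boundaries
    lap = (np.roll(c, 1, axis=0) + np.roll(c, -1, axis=0) +
           np.roll(c, 1, axis=1) + np.roll(c, -1, axis=1) - 4.0 * c) / dx**2
    return c + dt * kappa * lap
```

Because the stencil only redistributes mass between neighbouring cells, the total concentration is conserved while sharp concentration peaks smooth out over successive steps, which is the qualitative behaviour of mixing.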
Preparation and Drug-Loading Properties of Amphoteric Cassava Starch Nanoparticles
Based on the charge reversal around the isoelectric point (pI) of amphoteric starch containing anionic and cationic groups, amphoteric cassava starch nanoparticles (CA-CANPs) are prepared by a W/O microemulsion crosslinking method using (3-chloro-2-hydroxypropyl) trimethyl ammonium chloride (CHPTAC) as a cationic reagent and POCl3 as an anionic reagent, and the effects of preparation conditions on the particle size of the CA-CANPs are studied in detail. CA-CANPs with a smooth surface and an average diameter of 252 nm are successfully prepared under the following optimised conditions: a crosslinking agent amount of 15 wt%, an aqueous starch concentration of 6.0 wt%, an oil–water ratio of 10:1, a total surfactant amount of 0.20 g·mL−1, and a CHPTAC amount of 4.05 wt%. The pH-responsive value of the CA-CANPs can be regulated by adjusting the nitrogen–phosphorus molar ratio in the CA-CANPs. Using CA-CANPs with a pI of 6.89 as drug carriers and paclitaxel (PTX) as a model drug, a maximum loading rate of 36.14 mg·g−1 is achieved; the loading process is consistent with the Langmuir adsorption isotherm, with calculated thermodynamic parameters of ΔH° = −37.91 kJ·mol−1, ΔS° = −10.96 J·mol−1·K−1 and ΔG° < 0. In vitro release tests show that the release rates of PTX in a neutral environment (37.6% after 96 h) and a slightly acidic environment (58.65% after 96 h) differ substantially, suggesting that the CA-CANPs are a candidate pH-responsive, targeted controlled-release carrier for antitumor drugs.
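The reported thermodynamic parameters can be checked against the Gibbs relation ΔG° = ΔH° − TΔS°. The short sketch below evaluates it at 310 K (body temperature, an assumed value since the abstract does not state the measurement temperature):

```python
def gibbs_free_energy(dH_kJ, dS_J, T):
    """Gibbs relation dG = dH - T*dS.

    dH_kJ in kJ/mol, dS_J in J/(mol*K), T in K; returns dG in kJ/mol.
    """
    return dH_kJ - T * dS_J / 1000.0

# Reported adsorption parameters for PTX loading on CA-CANPs
dH = -37.91    # kJ/mol (exothermic)
dS = -10.96    # J/(mol*K) (entropy-decreasing)
dG_310 = gibbs_free_energy(dH, dS, 310.0)   # about -34.5 kJ/mol, i.e. < 0
```

Since both ΔH° and ΔS° are negative, the adsorption is enthalpy-driven: ΔG° stays negative (spontaneous loading) only while |ΔH°| exceeds T|ΔS°|, so loading is thermodynamically favoured at lower temperatures.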