18 research outputs found

    Bidirectional Propagation for Cross-Modal 3D Object Detection

    Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between the 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixel-to-point feature propagation, we investigate the opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when the 2D and 3D streams are jointly optimized, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network operating on LiDAR point clouds. Combining the pixel-to-point and point-to-pixel information flow mechanisms, we construct a bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinate map estimation, a new 2D auxiliary task for training the 2D image branch, which facilitates learning local spatially aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank 1st on the highly competitive KITTI benchmark for the cyclist class at the time of submission. The source code is available at https://github.com/Eaphan/BiProDet. Comment: Accepted by ICLR 2023. Code is available at https://github.com/Eaphan/BiProDet.
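
    The sketch below is not the official BiProDet code; it is a minimal PyTorch illustration of how the two propagation directions described in the abstract could be wired together in one module. The class name BidirectionalPropagation, the single-layer fusion ops, and the tensor layout are assumptions for illustration only.

```python
# Minimal sketch (assumed names, not the released BiProDet code) of coupling
# point-to-pixel and pixel-to-point feature propagation in one module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalPropagation(nn.Module):
    def __init__(self, img_channels=64, point_channels=64):
        super().__init__()
        self.point_to_pixel = nn.Conv2d(img_channels + point_channels, img_channels, 1)
        self.pixel_to_point = nn.Linear(point_channels + img_channels, point_channels)

    def forward(self, img_feat, point_feat, proj_uv):
        """
        img_feat:   (B, C_img, H, W) 2D backbone features
        point_feat: (B, N, C_pt)     3D backbone features per LiDAR point
        proj_uv:    (B, N, 2)        point projections onto the image, in [-1, 1]
        """
        B, C_img, H, W = img_feat.shape
        _, N, C_pt = point_feat.shape

        # --- point-to-pixel: scatter point features onto their projected pixels ---
        u = ((proj_uv[..., 0] + 1) * 0.5 * (W - 1)).round().long().clamp(0, W - 1)
        v = ((proj_uv[..., 1] + 1) * 0.5 * (H - 1)).round().long().clamp(0, H - 1)
        flat_idx = v * W + u                                   # (B, N)
        canvas = img_feat.new_zeros(B, C_pt, H * W)
        # Note: if several points hit the same pixel, the last one wins
        # (a simplification versus a learned aggregation).
        canvas.scatter_(2, flat_idx.unsqueeze(1).expand(-1, C_pt, -1),
                        point_feat.transpose(1, 2))
        canvas = canvas.view(B, C_pt, H, W)
        img_out = self.point_to_pixel(torch.cat([img_feat, canvas], dim=1))

        # --- pixel-to-point: sample image features at the projected locations ---
        sampled = F.grid_sample(img_feat, proj_uv.unsqueeze(1), align_corners=True)
        sampled = sampled.squeeze(2).transpose(1, 2)           # (B, N, C_img)
        point_out = self.pixel_to_point(torch.cat([point_feat, sampled], dim=2))
        return img_out, point_out
```

    Because the 2D branch now consumes point-wise features, its loss back-propagates into the 3D backbone during joint training, which is the effect the abstract highlights.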

    GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation

    The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusions, missing signals, or manual annotation errors, can confuse deep 3D object detectors during training and thus degrade detection accuracy. However, existing methods largely overlook such issues and treat the labels as deterministic. In this paper, we formulate label uncertainty as the diversity of potentially plausible bounding boxes for an object, and propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables. The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and supervise the learning of localization uncertainty. In addition, we propose an uncertainty-aware quality estimator architecture for probabilistic detectors that guides the training of the IoU branch with the predicted localization uncertainty. We incorporate the proposed methods into various popular base 3D detectors and demonstrate significant and consistent performance gains on both the KITTI and Waymo benchmark datasets. In particular, the proposed GLENet-VR outperforms all published LiDAR-based approaches by a large margin and ranks 1st among single-modal methods on the challenging KITTI test set. We will make the source code and pre-trained models publicly available.
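
    As a rough illustration of the idea, and not the released GLENet code, the following PyTorch sketch shows a conditional VAE that maps an object feature to a distribution over plausible 7-DoF boxes; the per-dimension variance of sampled boxes could then serve as the label-uncertainty signal for a probabilistic detector. All module names, layer sizes, and the sampling count are assumptions.

```python
# Minimal CVAE sketch (assumed names) for modeling one-to-many object/box relations.
import torch
import torch.nn as nn

class BoxCVAE(nn.Module):
    def __init__(self, feat_dim=128, box_dim=7, latent_dim=32):
        super().__init__()
        # encoder q(z | feature, gt_box)
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim + box_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim))
        # prior p(z | feature)
        self.prior = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim))
        # decoder p(box | feature, z)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, box_dim))

    def forward(self, feat, gt_box):
        mu_q, logvar_q = self.encoder(torch.cat([feat, gt_box], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(feat).chunk(2, -1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterize
        recon = self.decoder(torch.cat([feat, z], -1))
        # KL(q || p) between two diagonal Gaussians
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        return recon, kl

    @torch.no_grad()
    def label_uncertainty(self, feat, n_samples=30):
        """Sample plausible boxes from the prior; their spread acts as uncertainty."""
        mu_p, logvar_p = self.prior(feat).chunk(2, -1)
        boxes = []
        for _ in range(n_samples):
            z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
            boxes.append(self.decoder(torch.cat([feat, z], -1)))
        return torch.stack(boxes).var(dim=0)  # per-dimension box variance
```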

    Generalized Category Discovery in Semantic Segmentation

    This paper explores a novel setting called Generalized Category Discovery in Semantic Segmentation (GCDSS), which aims to segment unlabeled images given prior knowledge from a labeled set of base classes. The unlabeled images may contain pixels of base classes or of novel classes. In contrast to Novel Category Discovery in Semantic Segmentation (NCDSS), GCDSS does not require that every unlabeled image contain at least one novel class. In addition, we broaden the segmentation scope beyond foreground objects to the entire image. Existing NCDSS methods rely on the aforementioned priors, which makes them difficult to apply in real-world situations. We propose a straightforward yet effective framework that reinterprets the GCDSS challenge as a mask classification task. Additionally, we construct a baseline method and introduce the Neighborhood Relations-Guided Mask Clustering Algorithm (NeRG-MaskCA) for mask categorization to address fragmentation in the semantic representation. A benchmark dataset, Cityscapes-GCD, derived from the Cityscapes dataset, is established to evaluate the GCDSS framework. Our method demonstrates the feasibility of the GCDSS problem and the potential for discovering and segmenting novel object classes in unlabeled images. We employ the pseudo-labels generated by our approach as ground truth to supervise the training of other models, thereby enabling them to segment novel classes. This paves the way for further research in generalized category discovery, broadening the horizons of semantic segmentation and its applications. For details, please visit https://github.com/JethroPeng/GCDS
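
    The abstract does not spell out NeRG-MaskCA, so the sketch below is only a generic stand-in in the same spirit of "segmentation as mask classification": masks whose embeddings are far from every base-class prototype are treated as novel and grouped by clustering. The function name, the cosine-similarity threshold, and the use of k-means are all assumptions, not the paper's algorithm.

```python
# Illustrative mask-classification baseline (not NeRG-MaskCA itself).
import numpy as np
from sklearn.cluster import KMeans

def classify_and_discover(mask_embs, base_protos, n_novel=5, tau=0.5):
    """
    mask_embs:   (M, D) one L2-normalized embedding per predicted mask
    base_protos: (K, D) mean embedding of each labeled base class
    Returns one integer label per mask: 0..K-1 for base classes,
    K..K+n_novel-1 for discovered novel classes.
    """
    sims = mask_embs @ base_protos.T            # cosine similarity to base prototypes
    best_base = sims.argmax(axis=1)
    is_base = sims.max(axis=1) >= tau           # confident base-class assignment
    labels = np.where(is_base, best_base, -1)

    novel_idx = np.where(~is_base)[0]
    if len(novel_idx) > 0:                      # group the remaining masks as novel
        km = KMeans(n_clusters=min(n_novel, len(novel_idx)), n_init=10)
        novel_labels = km.fit_predict(mask_embs[novel_idx])
        labels[novel_idx] = base_protos.shape[0] + novel_labels
    return labels
```

    The resulting per-mask labels can be painted back onto the image to form the kind of pseudo-labels the abstract describes using to supervise other models.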

    Phytophthora Root Rot Resistance in Soybean E00003

    Phytophthora root rot (PRR) is a devastating disease in soybean [Glycine max (L.) Merr.] production. The Michigan elite soybean line E00003 is resistant to Phytophthora sojae and has been used as a resistance source in breeding, but the genetic control of PRR resistance in this source is unknown. To facilitate marker-assisted selection (MAS), the PRR resistance loci in E00003 and their map locations need to be determined. In this study, a genetic mapping approach was used to identify major PRR-resistance loci in E00003. The mapping population consisted of 240 F4-derived lines developed by crossing E00003 with the P. sojae-susceptible line PI 567543C. In 2009 and 2010, the mapping population was evaluated in the greenhouse for PRR resistance against P. sojae races 1, 4, and 7 using a modified rice (Oryza sativa L.) grain inoculation method. The population was genotyped with seven simple sequence repeat (SSR) and three single nucleotide polymorphism (SNP) markers derived from bulk segregant analysis. The heritability of resistance in the population ranged from 83 to 94%. A major locus, contributing 50 to 76% of the phenotypic variation, was mapped within a 3 cM interval in the Rps1 region. The interval was further saturated with additional BARCSOY SSRs and with SNPs genotyped using TaqMan assays. Two SSRs and three SNPs within the Rps1k gene were highly associated with PRR resistance in the mapping population. The major resistance gene in E00003 is either allelic or tightly linked to Rps1k. The molecular markers located in the Rps1k gene can be used to improve MAS for PRR resistance.
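
    The quantity reported above as "50 to 76% of the phenotypic variation" is the R-squared of a single-marker genotype-class model. The toy Python example below shows how such a value is computed; it is not the authors' analysis, and the genotype/phenotype data are made up for illustration.

```python
# Toy single-marker association: fraction of phenotypic variance explained (R^2).
import numpy as np

def variance_explained(genotypes, phenotypes):
    """R^2 of a one-way genotype-class model (single-marker ANOVA)."""
    genotypes = np.asarray(genotypes)
    phenotypes = np.asarray(phenotypes, dtype=float)
    grand_mean = phenotypes.mean()
    ss_total = ((phenotypes - grand_mean) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(genotypes):          # sum of squares between marker classes
        grp = phenotypes[genotypes == g]
        ss_between += len(grp) * (grp.mean() - grand_mean) ** 2
    return ss_between / ss_total

# Hypothetical F4-line data: marker classes AA/BB and disease severity scores.
geno = ["AA"] * 6 + ["BB"] * 6
pheno = [1, 2, 1, 2, 1, 2, 7, 8, 7, 9, 8, 7]
print(f"Phenotypic variance explained: {variance_explained(geno, pheno):.2f}")
```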

    Protein and mRNA expression of estradiol receptors during estrus in yaks (Bos grunniens)

    The objective of this study was to investigate the mRNA expression (by real-time PCR) and protein expression (by immunofluorescence) of estradiol receptors (ER) in the pineal gland, hypothalamus, pituitary gland, and gonads of yaks (Bos grunniens). The analysis showed that ER mRNA expression was greater in pituitary gland tissue than in the other glands during estrus. Immunofluorescence analyses showed that ER proteins were located in the pineal cells, synaptic ribbons, and synaptic spherules of the pineal gland. In the hypothalamus, ER proteins were located in the magnocellular and parvocellular neurons. In the pituitary gland, ER proteins were located in acidophilic and basophilic cells. In the gonads, ER proteins were present in ovarian follicles, the corpus luteum, and Leydig cells. Estradiol exerts its main effects on the pituitary gland during estrus in yaks.