
    3D Model-free Visual Localization System from Essential Matrix under Local Planar Motion

    Visual localization plays a critical role in the functionality of low-cost autonomous mobile robots. Current state-of-the-art approaches to accurate visual localization are 3D scene-specific, requiring additional computational and storage resources to construct a 3D scene model for each new environment. An alternative approach, directly using a database of 2D images for visual localization, offers more flexibility; however, such methods currently suffer from limited localization accuracy. In this paper, we propose an accurate and robust multiple-checking-based 3D model-free visual localization system to address these issues. To ensure high accuracy, we focus on estimating the pose of a query image relative to the retrieved database images using 2D-2D feature matches. Theoretically, by incorporating the local planar motion constraint into both the essential matrix estimation and triangulation stages, we reduce the minimum number of feature matches required for absolute pose estimation, thereby enhancing the robustness of outlier rejection. Additionally, we introduce a multiple-checking mechanism to verify the correctness of the solution throughout the solving process. For validation, qualitative and quantitative experiments are performed on a simulation dataset and two real-world datasets, and the results demonstrate a significant improvement in both accuracy and robustness afforded by the proposed 3D model-free visual localization system.
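    The planar motion constraint mentioned above can be illustrated with a small numerical sketch. This is a hypothetical parameterization, not the paper's actual solver: we assume planar motion means rotation only about the vertical (y) axis and translation in the ground (x-z) plane, which reduces the relative pose from five to two degrees of freedom and is why fewer 2D-2D matches suffice. The example builds the essential matrix E = [t]_x R for such a motion and verifies the epipolar constraint on synthetic correspondences.

    ```python
    import numpy as np

    def planar_motion_essential(theta, tx, tz):
        """Essential matrix E = [t]_x R for an assumed planar motion:
        yaw rotation by theta about the y axis, translation (tx, 0, tz)."""
        R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
        t = np.array([tx, 0.0, tz])
        # Skew-symmetric cross-product matrix [t]_x
        t_x = np.array([[ 0.0,  -t[2],  t[1]],
                        [ t[2],  0.0,  -t[0]],
                        [-t[1],  t[0],  0.0 ]])
        return t_x @ R, R, t

    # Synthetic check: with X2 = R X1 + t, every correspondence must
    # satisfy the epipolar constraint x2^T E x1 = 0.
    rng = np.random.default_rng(0)
    E, R, t = planar_motion_essential(theta=0.1, tx=0.3, tz=1.0)
    X1 = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points, camera-1 frame
    X2 = (R @ X1.T).T + t                                   # same points, camera-2 frame
    x1 = X1 / X1[:, 2:3]                                    # normalized image coordinates
    x2 = X2 / X2[:, 2:3]
    residuals = np.einsum('ni,ij,nj->n', x2, E, x1)
    print(np.max(np.abs(residuals)))  # numerically zero
    ```

    In the general (non-planar) case the same constraint has five degrees of freedom, which is the source of the classical five-point minimal solver; restricting the motion model shrinks the minimal set and makes RANSAC-style outlier rejection cheaper and more reliable.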

    Three-Dimensional Culture of Hybridoma Cells Secreting Anti-Human Chorionic Gonadotropin by a New Rolling Culture System

    The growth rate and monoclonal antibody (MAb) production of hybridoma cells secreting anti-human chorionic gonadotropin (hCG) MAb were used as evaluation criteria for a double-mouthed rolling bottle (DMRB) culture system. Compared with T-flask culture, both cell number and MAb production increased by approximately 42.5% when the medium was supplemented with 5% fetal calf serum (FCS) and the DMRB was rotated at 2 turns per minute. MAb yield was experimentally related to the number of viable cells. Interestingly, the MAb yield in the first 24 hours was four times that of T-flask culture, and about 75% of the total MAb was secreted within the first 48 hours of culture. It appears that the enhanced cell growth and MAb yield result from the three-dimensional growth of hybridoma cells under suitable revolving conditions.

    Leveraging BEV Representation for 360-degree Visual Place Recognition

    This paper investigates the advantages of using the Bird's Eye View (BEV) representation in 360-degree visual place recognition (VPR). We propose a novel network architecture that uses the BEV representation in feature extraction, feature aggregation, and vision-LiDAR fusion, bridging visual cues and spatial awareness. Our method extracts image features using standard convolutional networks and combines them according to pre-defined 3D grid spatial points. To alleviate mechanical and temporal misalignments between cameras, we further introduce deformable attention to learn the compensation. On top of the BEV feature representation, we then employ the polar transform and the discrete Fourier transform for aggregation, which is shown to be rotation-invariant. In addition, image and point cloud cues can easily be expressed in the same coordinate frame, which benefits sensor fusion for place recognition. The proposed BEV-based method is evaluated in ablation and comparative studies on two datasets covering on-the-road and off-the-road scenarios. The experimental results verify the hypothesis that BEV can benefit VPR, as it outperforms the baseline methods. To the best of our knowledge, this is the first attempt to employ the BEV representation in this task.
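    The rotation-invariance claim for the polar transform plus DFT aggregation can be sketched as follows. This is a simplified illustration of the general idea, not the paper's implementation: once a BEV feature map is resampled into polar coordinates P[r, theta], a yaw rotation of the scene becomes a circular shift along the theta axis, and the magnitude of the DFT along that axis is invariant to circular shifts.

    ```python
    import numpy as np

    def rotation_invariant_descriptor(polar_feat):
        """|DFT| along the angular axis of a polar map (radial bins x angle bins).
        Circularly shifting the angle axis only changes the DFT's phase,
        so the magnitude is unchanged."""
        return np.abs(np.fft.fft(polar_feat, axis=1))

    rng = np.random.default_rng(1)
    P = rng.standard_normal((32, 64))       # 32 radial bins, 64 angular bins
    P_rot = np.roll(P, shift=17, axis=1)    # same scene under a yaw offset
    d1 = rotation_invariant_descriptor(P)
    d2 = rotation_invariant_descriptor(P_rot)
    print(np.allclose(d1, d2))  # → True
    ```

    This is the DFT shift theorem at work: a circular shift multiplies each frequency component by a unit-magnitude phase factor, which the magnitude discards, so two views of the same place that differ only by heading produce identical descriptors.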