37 research outputs found

    Vehicle global 6-DoF pose estimation under traffic surveillance camera

    Accurately sensing the global position and posture of vehicles in traffic surveillance videos is a challenging but valuable problem for future intelligent transportation systems. Although deep learning has brought major breakthroughs in recent years in six-degrees-of-freedom (6-DoF) pose estimation of objects from monocular images, accurately estimating the geographic 6-DoF poses of vehicles from traffic surveillance camera images remains difficult. We present an architecture that computes continuous global 6-DoF poses through joint 2D landmark estimation and 3D pose reconstruction. The architecture infers the 6-DoF pose of a vehicle from its image appearance together with 3D information. Because it does not rely on intrinsic camera parameters, a pre-trained model can be applied to any surveillance camera. In addition, with the help of 3D information from point clouds and the 3D model itself, the architecture can predict landmarks even in regions with few or blurred textures. Moreover, to address the lack of public training datasets, we release a large-scale dataset, ADFSC, containing 120K groups of data with random viewing angles. On both 2D and 3D metrics, our architecture outperforms existing state-of-the-art algorithms in vehicle 6-DoF pose estimation.
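    The final step of such a pipeline, recovering a rigid 6-DoF pose once 3D keypoints have been reconstructed, can be illustrated with a classic Kabsch/Procrustes alignment. This is only a minimal sketch under that assumption; the function name and synthetic landmarks below are invented for illustration and do not reproduce the paper's learned, intrinsics-free architecture.

```python
import numpy as np

def rigid_pose_from_keypoints(model_pts, scene_pts):
    """Recover rotation R and translation t with scene ≈ R @ model + t
    via Kabsch/Procrustes alignment. Illustrative only: it assumes the
    3D keypoints are already reconstructed in the global frame."""
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

# Usage: apply a known pose to synthetic landmarks and recover it.
rng = np.random.default_rng(0)
model = rng.standard_normal((12, 3))                # 12 synthetic 3D landmarks
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.5])
scene = model @ R_true.T + t_true
R_est, t_est = rigid_pose_from_keypoints(model, scene)
assert np.allclose(R_est, R_true, atol=1e-6)
```

    In practice a learned system replaces the exact correspondences with noisy landmark predictions, but the closed-form alignment above is the standard baseline for the rigid-pose step.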

    Analysis of Circulating Tumor Cells in Ovarian Cancer and Their Clinical Value as a Biomarker

    Background/Aims: Monitoring the appearance and progression of tumors is important for improving the survival rate of patients with ovarian cancer. This study examines circulating tumor cells (CTCs) in epithelial ovarian cancer (EOC) patients to evaluate their clinical significance in comparison with the existing biomarker CA125. Methods: Immunomagnetic bead screening targeting epithelial antigens on ovarian cancer cells, combined with multiplex reverse transcriptase-polymerase chain reaction (multiplex RT-PCR), was used to detect CTCs in 211 samples of peripheral blood (5 ml) from 109 EOC patients. CTCs and CA125 were measured serially in 153 blood and 153 serum samples from 51 patients, and correlations with treatment were analyzed. Immunohistochemistry was used to detect the expression of tumor-associated proteins in tumor tissues, which was compared with gene expression in CTCs from the same patients. Results: CTCs were detected in 90% (98/109) of newly diagnosed patients, in whom the number of CTCs correlated with stage (p=0.034). Patients with stage IA-IB disease had a CTC positive rate of 93% (13/14), much higher than the CA125 positive rate of only 64% (9/14) for the same patients. The number of CTCs changed with treatment, and the expression of EpCAM (p=0.003) and HER2 (p=0.035) in CTCs correlated with resistance to chemotherapy. Expression of EpCAM in CTCs before treatment also correlated with overall survival (OS) (p=0.041). Conclusion: Detection of CTCs allows early diagnosis, and expression of EpCAM in CTC-positive patients predicts prognosis and should be helpful for monitoring treatment.

    Automated Visual Recognizability Evaluation of Traffic Sign Based on 3D LiDAR Point Clouds

    Maintaining the high visual recognizability of traffic signs is a key matter for road network management and traffic safety. Mobile Laser Scanning (MLS) systems provide an efficient way to acquire 3D measurements over large-scale traffic environments. This paper presents a quantitative visual recognizability evaluation method for traffic signs in large-scale traffic environments based on traffic recognition theory and MLS 3D point clouds. We first propose the Visibility Evaluation Model (VEM) to quantitatively describe the visibility of a traffic sign from any given viewpoint, then propose the concept of a visual recognizability field and the Traffic Sign Visual Recognizability Evaluation Model (TSVREM) to measure the visual recognizability of a traffic sign. Finally, we present an automatic TSVREM calculation algorithm for MLS 3D point clouds. Experimental results on real MLS 3D point clouds show that the proposed method is feasible and efficient.
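    The idea of scoring a sign's visibility from a single viewpoint can be sketched with a toy model that combines apparent size, viewing angle, and occlusion. The scoring formula and all names below are illustrative assumptions, not the published VEM.

```python
import numpy as np

def visibility_score(sign_center, sign_normal, sign_area, viewpoint,
                     occlusion_ratio=0.0):
    """Toy visibility score from one viewpoint: foreshortened apparent
    size (solid-angle-like, falls off with distance squared) scaled by
    the unoccluded fraction. Purely illustrative."""
    sight = sign_center - viewpoint
    d = np.linalg.norm(sight)
    # Cosine between the sign's facing direction and the line of sight.
    cos_theta = max(0.0, float(np.dot(sign_normal, -sight / d)))
    apparent_size = sign_area * cos_theta / d**2
    return apparent_size * (1.0 - occlusion_ratio)

# Usage: a closer, unoccluded viewpoint scores higher.
sign_center = np.array([0.0, 0.0, 3.0])
sign_normal = np.array([0.0, -1.0, 0.0])   # sign faces oncoming traffic
near = visibility_score(sign_center, sign_normal, 0.5,
                        np.array([0.0, -20.0, 1.6]))
far = visibility_score(sign_center, sign_normal, 0.5,
                       np.array([0.0, -80.0, 1.6]))
assert near > far > 0.0
```

    A full evaluation over MLS point clouds would additionally estimate the occlusion ratio from the scanned geometry along each sight line, which is where the point cloud data enters.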

    A fast method for measuring the similarity between 3D model and 3D point cloud

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To avoid the heavy computational burden that calculating DistCM imposes on some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared with traditional SimMC measures that can only capture global similarity, our method measures partial similarity by employing a distance-weighted strategy. Moreover, it is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
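    The two quantities the abstract defines can be sketched directly: a brute-force DistMC (weighted average of nearest-neighbor distances from model samples to the cloud) and SimMC as the area-to-DistMC ratio. This is a hedged sketch; the paper's sampling density, weighting scheme, and normalization may differ, and the function names are invented.

```python
import numpy as np

def dist_model_to_cloud(model_pts, cloud_pts, weights=None):
    """DistMC sketch: weighted average of nearest-neighbor distances
    from points sampled on the model surface to the point cloud.
    Brute-force O(n*m); a k-d tree would be used at scale."""
    diffs = model_pts[:, None, :] - cloud_pts[None, :, :]
    nn = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    if weights is None:
        weights = np.ones(len(model_pts))
    return float(np.average(nn, weights=weights))

def sim_mc(model_pts, cloud_pts, model_area, eps=1e-9):
    """SimMC sketch: ratio of model surface area to DistMC, so a cloud
    lying on the model yields a large similarity score."""
    return model_area / (dist_model_to_cloud(model_pts, cloud_pts) + eps)

# Usage: a cloud near the model scores higher than a shifted copy.
rng = np.random.default_rng(2)
model = rng.uniform(size=(200, 3))         # points sampled on the model
cloud_on = model[::2] + rng.normal(scale=1e-3, size=(100, 3))
cloud_off = cloud_on + np.array([0.5, 0.0, 0.0])
area = 6.0                                  # illustrative surface area
assert sim_mc(model, cloud_on, area) > sim_mc(model, cloud_off, area)
```

    Note how the direction of the distance matters: sampling from the model toward the cloud (DistMC) avoids iterating over the full, typically much larger, point cloud as DistCM would require.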


    Improving Dynamic HDR Imaging with Fusion Transformer

    Reconstructing a High Dynamic Range (HDR) image from several Low Dynamic Range (LDR) images with different exposures is a challenging task, especially in the presence of camera and object motion. Although existing models based on convolutional neural networks (CNNs) have made great progress, challenges such as ghosting artifacts remain. Transformers, originating in natural language processing, have shown success in computer vision tasks thanks to their ability to attend over a large receptive field even within a single layer. In this paper, we propose a transformer model for HDR imaging. Our pipeline consists of three steps: alignment, fusion, and reconstruction; the key component is the HDR transformer module. Through experiments and ablation studies, we demonstrate that our model outperforms the state of the art by large margins on several popular public datasets.
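    The receptive-field property that motivates the transformer here is visible in plain scaled dot-product attention: every output token mixes information from every input token in a single layer. The NumPy sketch below shows only this generic mechanism, not the paper's HDR transformer module.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each output row is a softmax-weighted
    combination of ALL value rows, i.e. a global receptive field in
    one layer (what a small CNN kernel cannot provide)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

# Usage: 16 patch tokens with 8-dim features, self-attention.
rng = np.random.default_rng(1)
tokens = rng.standard_normal((16, 8))
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
assert attn.shape == (16, 16)            # each token attends to all 16
assert np.allclose(attn.sum(axis=1), 1.0)
```

    For dynamic HDR fusion, this global mixing is what lets features from a well-exposed region in one LDR frame inform a misaligned or saturated region elsewhere, which is the intuition behind reduced ghosting.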