
    Intelligent sampling for the measurement of structured surfaces

    Uniform sampling in metrology has known drawbacks, such as coherent spectral aliasing and inefficiency in terms of measuring time and data storage. The need for intelligent sampling strategies has been highlighted in recent years, particularly for the measurement of structured surfaces. Most of the research on intelligent sampling to date has focused on dimensional metrology using coordinate-measuring machines, with little reported in the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling, and several adaptive methods that originate from computer graphics, coordinate metrology, and previous research by the authors. By combining advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied through case studies. The advantages, stability, and feasibility of these techniques for practical measurements are discussed.
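    As a rough illustration of two of the sampling families named above, the Python sketch below generates a jittered-uniform pattern (one random point per grid cell) and a 2D low-discrepancy Halton pattern over a unit square. This is a minimal sketch of the generic techniques, not the authors' implementation or their adaptive methods; all parameters are arbitrary.

```python
import numpy as np

def uniform_grid(n):
    """n x n regular grid: efficient but prone to coherent aliasing on periodic structures."""
    side = (np.arange(n) + 0.5) / n
    xx, yy = np.meshgrid(side, side)
    return np.column_stack([xx.ravel(), yy.ravel()])

def jittered_uniform(n, rng):
    """One random point per grid cell: breaks the regularity behind aliasing."""
    base = np.arange(n) / n
    xx, yy = np.meshgrid(base, base)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    return pts + rng.random(pts.shape) / n

def halton(count, base):
    """1D radical-inverse (van der Corput) sequence, the building block of Halton patterns."""
    out = np.zeros(count)
    for i in range(count):
        f, inv, k = 0.0, 1.0 / base, i + 1
        while k > 0:
            f += inv * (k % base)
            k //= base
            inv /= base
        out[i] = f
    return out

rng = np.random.default_rng(0)
regular = uniform_grid(16)                                    # 256 regular samples
jittered = jittered_uniform(16, rng)                          # 256 jittered samples
low_disc = np.column_stack([halton(256, 2), halton(256, 3)])  # 2D Halton pattern
print(regular.shape, jittered.shape, low_disc.shape)
```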

    Fast and Accurate 3D Face Recognition Using Registration to an Intrinsic Coordinate System and Fusion of Multiple Region Classifiers

    In this paper we present a new robust approach for 3D face registration to an intrinsic coordinate system of the face. The intrinsic coordinate system is defined by the vertical symmetry plane through the nose, the tip of the nose, and the slope of the bridge of the nose. In addition, we propose a 3D face classifier based on the fusion of many dependent region classifiers for overlapping face regions. The region classifiers use PCA-LDA for feature extraction and the likelihood ratio as a matching score. Fusion is realised using straightforward majority voting for the identification scenario. For verification, a voting approach is used as well, and the decision is made by comparing the number of votes to a threshold. Using the proposed registration method combined with a classifier consisting of 60 fused region classifiers, we obtain a 99.0% identification rate on the all-vs-first identification test of the FRGC v2 data. A verification rate of 94.6% at FAR = 0.1% was obtained for the all-vs-all verification test on the FRGC v2 data using fusion of 120 region classifiers. The first is the highest reported performance, and the second is among the top five best-performing systems on these tests. In addition, our approach is much faster than other methods, taking only 2.5 seconds per image for registration and less than 0.1 ms per comparison. Because we apply feature extraction using PCA and LDA, the resulting template size is also very small: 6 kB for 60 region classifiers.
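    The fusion rule described above is simple enough to sketch. The Python fragment below shows hypothetical majority-vote fusion over per-region matching scores for both the identification and verification scenarios; the score matrices, thresholds, and gallery handling are placeholders, not the paper's PCA-LDA pipeline.

```python
import numpy as np

def identify(region_scores):
    """region_scores: (n_regions, n_gallery) likelihood-ratio matching scores.
    Each region classifier votes for its best-matching gallery identity;
    the identity with the most votes wins."""
    votes = np.argmax(region_scores, axis=1)  # one vote per region classifier
    return np.bincount(votes, minlength=region_scores.shape[1]).argmax()

def verify(scores_for_claimed_id, score_thr, vote_thr):
    """scores_for_claimed_id: (n_regions,) scores against the claimed identity.
    Accept when enough regions vote 'same person'."""
    n_votes = int(np.sum(scores_for_claimed_id > score_thr))
    return n_votes >= vote_thr

rng = np.random.default_rng(1)
scores = rng.normal(size=(60, 100))  # 60 region classifiers, 100 gallery faces
scores[:, 7] += 2.0                  # make identity 7 the true match
print(identify(scores))              # -> 7
print(verify(scores[:, 7], score_thr=1.0, vote_thr=30))
```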

    Coarse to fine: toward an intelligent 3D acquisition system

    The 3D acquisition-compression-processing chain is, most of the time, sequenced into independent stages. As a result, a large number of 3D points is acquired regardless of the geometry of the object and of the processing to be done in later steps. It appears, particularly in 3D modeling of mechanical parts and in CAD, that acquiring such an amount of data is not always necessary. We propose a method that minimizes the number of 3D points to be acquired according to the local geometry of the part, thereby compressing the cloud of points during the acquisition stage. The method is based on a new coarse-to-fine approach in which, from a coarse set of 2D points associated with local normals, the 3D object model is segmented into a combination of primitives. The obtained model is enriched with new points where needed, and a new primitive-extraction stage is performed in the refined regions. This is repeated until a given precision of the reconstructed object is attained. Notably, contrary to other studies, we do not work on a meshed model but directly on the data provided by the scanning device.
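    A schematic Python sketch of such a coarse-to-fine loop follows, under heavy assumptions: the scanner is a stub returning points from a toy part, the "primitive" is a least-squares plane, and the precision test is an RMS residual threshold. The authors' actual primitive segmentation is not reproduced.

```python
import numpy as np

def acquire(region, n, rng):
    """Stand-in for the scanning device: sample n points in a rectangular region."""
    lo, hi = region
    xy = lo + rng.random((n, 2)) * (hi - lo)
    z = np.where(xy[:, 0] < 0.5, 0.0, 0.3 * xy[:, 0])  # toy part: flat face meeting a slope
    return np.column_stack([xy, z])

def plane_residual(pts):
    """Fit z = a*x + b*y + c by least squares; return the RMS residual."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return np.sqrt(np.mean((A @ coef - pts[:, 2]) ** 2))

rng = np.random.default_rng(2)
regions = [(np.array([0.0, 0.0]), np.array([1.0, 1.0]))]
cloud, tol = [], 1e-3
while regions:
    lo, hi = regions.pop()
    pts = acquire((lo, hi), 40, rng)                    # coarse set for this region
    if plane_residual(pts) <= tol or np.all(hi - lo < 0.1):
        cloud.append(pts)                               # primitive explains the region: stop
    else:                                               # refine: split into quadrants and rescan
        mid = (lo + hi) / 2
        regions += [(lo, mid),
                    (np.array([mid[0], lo[1]]), np.array([hi[0], mid[1]])),
                    (np.array([lo[0], mid[1]]), np.array([mid[0], hi[1]])),
                    (mid, hi)]
print(sum(len(p) for p in cloud), "points acquired")
```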

    A Novel Point Cloud Compression Algorithm for Vehicle Recognition Using Boundary Extraction

    Recently, hardware systems for generating point cloud data through 3D LiDAR scanning have improved, with important applications in autonomous driving and 3D reconstruction. However, point cloud data may contain defects such as duplicate points, redundant points, and an unordered mass of points, which place higher demands on the performance of the hardware systems that process the data. Simplifying and compressing point cloud data can improve recognition speed in subsequent processes. This paper studies a novel algorithm for identifying vehicles in the environment using 3D LiDAR to obtain point cloud data. A point cloud compression method based on nearest-neighbor points and boundary extraction from octree voxel center points is applied to the point cloud data, followed by a vehicle point cloud identification algorithm based on image mapping for vehicle recognition. The proposed algorithm is tested on the KITTI dataset, and the results show improved accuracy compared to other methods.
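    One ingredient named above, keeping for each occupied voxel (one octree leaf level) the input point nearest the voxel center, can be sketched compactly in Python. Boundary extraction and the image-mapping recognition step are omitted, and the voxel size and scene are assumptions.

```python
import numpy as np

def voxel_nearest_center(points, voxel):
    """Keep one representative per occupied voxel: the point nearest the voxel center."""
    keys = np.floor(points / voxel).astype(np.int64)        # integer voxel indices
    centers = (keys + 0.5) * voxel                          # center of each point's voxel
    dist = np.linalg.norm(points - centers, axis=1)
    # sort by voxel key, then by distance, so each group's first entry is the keeper
    order = np.lexsort((dist, keys[:, 2], keys[:, 1], keys[:, 0]))
    keys_sorted = keys[order]
    first = np.ones(len(points), dtype=bool)
    first[1:] = np.any(keys_sorted[1:] != keys_sorted[:-1], axis=1)
    return points[order[first]]

rng = np.random.default_rng(3)
cloud = rng.random((100_000, 3)) * 50.0      # synthetic 50 m scene
compressed = voxel_nearest_center(cloud, voxel=0.5)
print(len(cloud), "->", len(compressed), "points")
```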

    A practical comparison between two powerful PCC codecs

    Recent advances in the consumption of 3D content create the need for efficient ways to visualize and transmit 3D content. As a result, methods to obtain that content have been evolving, leading to the development of new representations, namely point clouds and light fields. A point cloud represents a set of points, each with Cartesian coordinates (x, y, z) and possibly further per-point information (color, material, texture, etc.). This kind of representation changes the way 3D content is consumed and has a wide range of applications, from video games to medical ones. However, since this type of data carries so much information, point clouds are data-heavy, making the storage and transmission of content a daunting task. To resolve this issue, MPEG created a point cloud coding standardization project, giving birth to V-PCC (Video-based Point Cloud Coding) and G-PCC (Geometry-based Point Cloud Coding) for static content. Firstly, a general analysis of point clouds is made, spanning from their possible uses to their acquisition. Secondly, point cloud codecs are studied, namely MPEG's V-PCC and G-PCC. Then, the state of the art in quality evaluation, both subjective and objective, is reviewed. Finally, the JPEG Pleno Point Cloud activity, in which an active collaboration took place, is reported, with the comparative results of the two codecs and the metrics used.
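    As a rough illustration of the representation described above, geometry plus per-point attributes, here is a minimal Python sketch; the array layout and sizes are illustrative only and not tied to V-PCC or G-PCC.

```python
import numpy as np

rng = np.random.default_rng(4)
xyz = rng.random((1000, 3)).astype(np.float32)          # geometry: x, y, z per point
rgb = rng.integers(0, 256, (1000, 3), dtype=np.uint8)   # attribute: color per point

# the raw footprint hints at why compression matters at realistic scales
raw_bytes = xyz.nbytes + rgb.nbytes
print(f"{len(xyz)} points -> {raw_bytes} bytes uncompressed")
```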

    Point Density for Soil Specimen Volume Measurements in Image-Based Methods during Triaxial Testing

    Discrete measurement targets are frequently utilized on the specimen's surface in image-based methods to monitor the soil specimen during triaxial testing. However, the density of measurement targets required to achieve highly accurate volume measurement in triaxial testing has not been investigated. To overcome this limitation, this paper presents a parametric study to determine the optimum target/point densities to be used on the triaxial soil specimen surface to achieve the desired level of volume measurement accuracy in image-based methods. LiDAR scanning was applied to establish the ground-truth volume of the specimen. The effects of deformation and failure modes were investigated by calculating the volume measurement accuracy at different strain levels and for different undisturbed soil specimens of clay and of sand with silt. An interpolation method was proposed to increase the number of discrete targets representing the triaxial specimen's surface. The analysis results show that a higher target density is required at larger strains. Also, adding interpolation points can only increase the accuracy to a certain level. As the volume measurement accuracy differed between the clay and the sand-with-silt specimens, the non-uniform deformation and failure mode of the specimen can affect the required optimum density of discrete measurement targets. In conclusion, it is recommended to choose the optimum target density based on the accuracy requirement, the maximum soil deformation level, and the expected failure mode of the specimen.
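    To make the density question concrete, the hedged Python sketch below (assuming scipy is available) estimates the volume of an idealized cylindrical specimen from N random surface targets via a convex hull and reports how the error shrinks as N grows. The hull is a stand-in; the paper's interpolation method and LiDAR ground truth are not reproduced, and the specimen dimensions are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def cylinder_targets(n, rng, radius=0.035, height=0.14):
    """n pseudo-targets on the lateral surface of an idealized triaxial specimen."""
    theta = rng.random(n) * 2 * np.pi
    z = rng.random(n) * height
    return np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])

rng = np.random.default_rng(5)
true_vol = np.pi * 0.035**2 * 0.14          # analytic cylinder volume
for n in (20, 100, 500, 5000):
    pts = cylinder_targets(n, rng)
    err = abs(ConvexHull(pts).volume - true_vol) / true_vol
    print(f"{n:5d} targets -> volume error {err:.1%}")
```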

    A Timing-Aware Sampling Algorithm for Dual-Mirror LiDAR Imaging

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, February 2019 (advisor: Lee, Hyuk-Jae). In recent years, active sensor technologies such as light detection and ranging (LiDAR) have been intensively studied in theory and widely adopted in many applications, e.g., self-driving cars, robotics, and sensing. Generally, the spatial resolution of a depth-acquisition device, such as a LiDAR sensor, is limited because of a slow acquisition speed. To accurately reconstruct a depth image at a limited spatial resolution, a two-stage sampling process has been widely used. However, two-stage sampling uses an irregular sampling pattern, which requires a large amount of computation for reconstruction. A mathematical formulation of a LiDAR system demonstrates that the existing two-stage sampling does not satisfy its timing constraint for practical use. Therefore, designing a LiDAR system with an efficient sampling algorithm is a significant technological challenge. Firstly, this thesis addresses the problem of adopting the state-of-the-art laser marking system of a dual-mirror deflection scanner when creating a high-definition LiDAR system. Galvanometer scanners are modeled and parameterized based on concepts of their controllers and the well-known raster scanning method. The scanning strategy is then modeled and analyzed considering the physical scanning movement and the minimum spanning tree. From this analysis, the link between the quality of the captured image of a field of view (FOV) and the scanning speed is revealed. Furthermore, sufficient conditions are derived under which the acquired image fully covers the FOV and the captured objects are well aligned at a specific frame rate. Finally, a sample LiDAR system is developed to illustrate the proposed concepts. Secondly, to overcome the drawbacks of two-stage sampling, we propose a new sampling method that reduces the computational complexity and memory requirements by generating optimal representatives of a sampling pattern in downsampled data. The sampling pattern is derived by a k-NN expanding operation on the downsampled representatives. The proposed algorithm preserves object boundaries by restricting the expansion operation to object boundaries and complex textures. In addition, the proposed algorithm runs in linear time and reduces the memory requirements by the downsampling ratio. Experimental results on the Middlebury and Brown laser-range datasets are presented. Thirdly, state-of-the-art adaptive methods such as two-stage sampling are highly effective for indoor, less complex scenes at moderately low sampling rates. However, their performance is relatively low in complex on-road environments, particularly when the sampling rate of the measuring equipment is low. To address this problem, this thesis proposes a region-of-interest (ROI)-based sampling algorithm for on-road environments in autonomous driving. With the aid of fast and accurate road- and object-detection algorithms, particularly those based on convolutional neural networks (CNNs), the proposed sampling algorithm utilizes semantic information and effectively distributes samples among road, object, and background areas. Experimental results on the KITTI datasets are presented.
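    Of the three contributions, the boundary-aware sampling step is the most self-contained. Below is a minimal Python sketch of the idea as described in the abstract: coarse representatives on a regular grid, plus extra samples expanded only around depth discontinuities. The gradient threshold, the 3x3 neighborhood standing in for the k-NN expansion, and the toy depth image are all assumptions, not the thesis implementation.

```python
import numpy as np

def sample_pattern(depth, stride=8, edge_thr=0.5):
    """Return a boolean mask of pixels to sample in a depth image."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[::stride, ::stride] = True            # downsampled representatives
    # local depth gradient flags boundary / complex-texture regions
    gy, gx = np.gradient(depth.astype(float))
    edges = np.hypot(gx, gy) > edge_thr
    for y, x in zip(*np.nonzero(edges)):       # densify only near boundaries
        y0, x0 = max(y - 1, 0), max(x - 1, 0)
        mask[y0:y + 2, x0:x + 2] = True        # 3x3 neighborhood as expansion stand-in
    return mask

depth = np.ones((64, 64)) * 5.0
depth[20:40, 20:40] = 2.0                      # a box 3 m closer than the background
mask = sample_pattern(depth)
print(mask.sum(), "of", mask.size, "pixels sampled")
```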