
    COLOR MULTIPLEXED SINGLE PATTERN SLI

    Structured light pattern projection techniques are well-known methods for accurately capturing three-dimensional (3-D) information about a target surface. Traditional structured light methods require several different patterns to recover depth without ambiguity or albedo sensitivity, and they are corrupted by object movement during the projection/capture process. This thesis presents and discusses a color-multiplexed structured light technique that recovers object shape from a single image and is therefore insensitive to object motion. The method uses a single pattern whose RGB channels are each encoded with a unique subpattern. The pattern is projected onto the target, and the reflected image is captured with a high-resolution color digital camera. The image is then separated into its individual color channels and analyzed for 3-D depth reconstruction using phase decoding and unwrapping algorithms, thereby establishing the viability of the color-multiplexed single-pattern technique. Compared to traditional methods (such as PMP or laser scanning), only one image (a one-shot measurement) is required to obtain the 3-D depth information of the object; the technique also requires less expensive hardware and normalizes albedo sensitivity and surface color reflectance variations. A cosine manifold and a flat surface are measured with sufficient accuracy, demonstrating the feasibility of a real-time system.
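
    The abstract does not specify the exact subpatterns used, but a common way to realize such a colour-multiplexed single-shot pattern is to encode three phase-shifted sinusoids in the R, G and B channels. The sketch below (a hypothetical illustration, not the thesis implementation) shows the channel separation and the standard three-step wrapped-phase computation such a decoder would perform.

```python
import numpy as np

def wrapped_phase_from_rgb(image):
    """image: H x W x 3 float array; the R, G and B channels are assumed to carry
    sinusoidal subpatterns phase-shifted by -120, 0 and +120 degrees (an assumption,
    not stated in the abstract)."""
    i1, i2, i3 = image[..., 0], image[..., 1], image[..., 2]
    # Standard three-step phase-shifting formula; ambient light and albedo cancel out.
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_rows(wrapped):
    """Simple per-row 1-D unwrapping; real scenes with depth discontinuities
    would need a robust 2-D unwrapping algorithm."""
    return np.unwrap(wrapped, axis=1)

# Depth then follows from the deviation of the unwrapped phase from a reference
# (flat-plane) phase map, via the projector-camera calibration.
```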

    Acquisition of 3D shapes of moving objects using fringe projection profilometry

    Three-dimensional (3D) shape measurement for object surface reconstruction has potential applications in many areas, such as security, manufacturing and entertainment. As an effective non-contact technique for 3D shape measurement, fringe projection profilometry (FPP) has attracted significant research interest because of its high measurement speed, high measurement accuracy and ease of implementation. Conventional FPP analysis approaches are applicable to the calculation of phase differences for static objects. However, 3D shape measurement of dynamic objects remains a challenging task, although it is in high demand in many applications. This thesis aims to enhance the measurement accuracy of FPP techniques for the 3D shape of objects moving in 3D space. The 3D movement of an object changes not only its position but also its height with respect to the measurement system, resulting in motion-induced errors when existing FPP technology is used. The thesis presents the work conducted towards solving this challenging problem.
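
    As a point of reference for the motion-induced errors discussed above, the following sketch shows the generic N-step phase-shifting analysis on which conventional FPP relies. It assumes nominal phase shifts of 2πk/N and is a textbook illustration, not the motion-compensation algorithm developed in the thesis.

```python
import numpy as np

def n_step_phase(frames):
    """frames: sequence of N fringe images captured with nominal phase shifts 2*pi*k/N.
    Any object motion between the N captures violates the constant-shift assumption,
    which is the source of the motion-induced errors discussed above."""
    frames = np.asarray(frames, dtype=float)   # shape (N, H, W)
    n = frames.shape[0]
    k = np.arange(n).reshape(n, 1, 1)
    num = np.sum(frames * np.sin(2.0 * np.pi * k / n), axis=0)
    den = np.sum(frames * np.cos(2.0 * np.pi * k / n), axis=0)
    return np.arctan2(-num, den)               # wrapped phase in (-pi, pi]
```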

    Determination of measurement deviations of 3D optical scanner

    The goal of this bachelor thesis is to experimentally determine the measurement deviations of the ATOS Triple Scan optical system when matting chalk and titanium coatings are applied, using statistical data analysis. The theoretical part provides a short introduction to the 3D reconstruction of real objects, a classification of structured light systems according to the structure of the projected patterns, and existing knowledge about the accuracy of scanning systems. The practical part consists of experimental measurements of calibration elements coated with chalk and titanium matting powder and the evaluation of these measurements. From the results, the measurement uncertainty under repeated application of the chalk and titanium powder coatings and the thickness of the matting layers are determined.
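
    For illustration, a minimal sketch of how a Type-A (repeatability) uncertainty can be derived from repeated scans of a coated calibration element; the deviation values below are placeholders, and the exact statistical treatment used in the thesis may differ.

```python
import statistics

def type_a_uncertainty(deviations):
    """deviations: deviations (mm) measured on the same calibration element
    over repeated applications of the coating."""
    mean = statistics.mean(deviations)
    s = statistics.stdev(deviations)        # sample standard deviation (repeatability)
    u = s / len(deviations) ** 0.5          # standard uncertainty of the mean
    return mean, s, u

# Placeholder values, for illustration only:
print(type_a_uncertainty([0.012, 0.015, 0.011, 0.014, 0.013]))
```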

    Structured light-based 3D surface measurement using a multi-camera system

    The purpose of this master's thesis was to identify the structured light-based methods that provide the most robust results for 3D surface measurement with a multi-camera system. The theoretical part of the thesis compares structured light methods used for surface measurement. In the experimental part, a measurement system consisting of 16 cameras and one projector was built; a flat surface and a car door were measured to determine the accuracy, reliability and repeatability achievable with the selected methods. Lines and circles were used as projected features. Gray coding was used to find corresponding points between images, and an epipolar line-based method was also tested. The study was commissioned by Mapvision Oy Ltd, and the application area of the experimental part was quality control in the automotive industry. According to the theoretical part, the robustness of the different methods depends essentially on the constraints imposed by the application: for static objects, the most robust methods are based on projecting multiple different patterns onto the scene, while for moving objects the most robust results are achieved with one-shot methods. The experimental results indicate that one of the most robust multiple-shot methods for a multi-camera system is based on using vertical and horizontal lines as the structured light. This method achieved an accuracy of 0.02 mm when measuring the flat surface and, at best, a repeatability of 0.02–0.05 mm when measuring the car door; in practice, the repeatability was 0.02–0.2 mm on large, flat surfaces, and reliability was weakest near steep surface variations. Searching for corresponding points along epipolar lines proved to be a very useful alternative to Gray coding.
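
    Gray coding, mentioned above as the correspondence mechanism, assigns each projector column an index whose consecutive codewords differ in a single bit, so a stripe-boundary decoding error costs at most one column. A minimal encode/decode sketch, independent of Mapvision's actual implementation:

```python
def int_to_gray(n: int) -> int:
    """Binary-reflected Gray code of a projector column index."""
    return n ^ (n >> 1)

def gray_to_int(g: int) -> int:
    """Decode the Gray code read back from the stripe patterns at a camera pixel."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Ten stripe patterns are enough to label 1024 projector columns.
assert gray_to_int(int_to_gray(707)) == 707
```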

    A study of high-speed three-dimensional shape inspection based on a self-projection method (自己投影法に基づく高速三次元形状検査の研究)

    Hiroshima University (広島大学), Doctor of Engineering, doctoral thesis.

    Scene segmentation using similarity, motion and depth based cues

    Segmentation of complex scenes to aid surveillance is still considered an open research problem. In this thesis a computational model (CM) has been developed to classify a scene into foreground, moving-shadow and background regions. It has been demonstrated how the CM, with the optional use of a channel ratio test, can be applied to demarcate foreground shadow regions in indoor scenes illuminated by a fixed incandescent source of light. A combined approach, in which the CM works in tandem with a traditional motion-cue-based segmentation method, has also been constructed. In the combined approach, the CM is applied to segregate the foreground shaded regions in the current frame based on a binary mask generated using a standard background subtraction process (BSP). Various popular outlier detection strategies have been investigated to assess their suitability for automatically generating the threshold required to derive a binary mask from a difference frame, the outcome of the BSP. To evaluate the full scope of the pixel-labeling capabilities of the CM and to estimate the associated time constraints, the model is deployed for foreground scene segmentation in recorded real-life video streams. The observations made validate the satisfactory performance of the model in most cases. In the second part of the thesis, depth-based cues have been exploited to perform the task of foreground scene segmentation. An active structured light-based depth-estimating arrangement has been modeled; the choice of modeling an active system over a passive stereovision one was made to alleviate some of the difficulties associated with the classical correspondence problem. The model developed not only facilitates use of the set-up but also makes possible a method to increase the working volume of the system without explicitly encoding the projected structured pattern. Finally, it is explained how scene segmentation can be accomplished based solely on the structured-pattern disparity information, without generating explicit depth maps. To de-noise the difference frames generated using the developed method, two median filtering schemes have been implemented. The working of one of the schemes is advocated for practical use and is described in terms of discrete morphological operators, thus facilitating hardware realisation of the method to speed up the de-noising process.
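
    A minimal sketch of the background-subtraction step described above: the automatic threshold is stood in for by a simple median plus scaled-MAD outlier rule (the thesis compares several outlier detection strategies, not necessarily this one), and the resulting binary mask is de-noised with a 3×3 median filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def foreground_mask(frame, background, k=3.0):
    """Threshold a difference frame with a robust (median + k*MAD) rule and
    de-noise the resulting binary mask with a 3x3 median filter."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    med = np.median(diff)
    mad = np.median(np.abs(diff - med))       # median absolute deviation
    threshold = med + k * 1.4826 * mad        # 1.4826 scales MAD to a std-dev estimate
    mask = diff > threshold
    return median_filter(mask, size=3)
```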

    3D modeling by low-cost range cameras: methods and potentialities

    Nowadays the demand for 3D models for the documentation and visualization of objects and environments is continually increasing. However, the traditional 3D modeling techniques and systems (i.e. photogrammetry and laser scanners) can be very expensive and/or onerous, as they often need qualified technicians and specific post-processing phases. Thus, it is important to find new instruments able to provide low-cost 3D data in real time and in a user-friendly way. Range cameras seem to be one of the most promising tools to achieve this goal: they are low-cost 3D scanners, able to easily collect dense point clouds at a high frame rate, at short range (a few meters) from the imaged objects. Such sensors, though, still remain a relatively new 3D measurement technology, not yet exhaustively studied, so it is essential to assess the metric quality of the depth data they retrieve. This thesis is set precisely in this context: the aim is to evaluate the potentialities of range cameras for geomatic applications and to provide useful indications for their practical use. The three most popular and/or promising low-cost range cameras, namely the Microsoft Kinect v1, the Microsoft Kinect v2 and the Occipital Structure Sensor, were first characterized from a geomatic point of view in order to assess the metric quality of their depth data. These investigations showed that such sensors exhibit a depth precision and a depth accuracy in the range of a few millimeters to a few centimeters, depending both on the operating principle adopted by the device (Structured Light or Time of Flight) and on the depth itself. On this basis, two different models were identified for precision and accuracy versus depth: parabolic for the Structured Light sensors (the Kinect v1 and the Structure Sensor) and linear for the Time of Flight sensor (the Kinect v2). The accuracy models were then shown to be globally consistent with the precision models found for all three sensors. Furthermore, the proposed calibration model was validated for the Structure Sensor: with calibration, the overall RMSE decreased from 27 to 16 mm. Finally, four case studies were carried out in order to evaluate:
    • the performance of the Kinect v2 sensor for monitoring oscillatory motions (relevant for structural and/or industrial monitoring), demonstrating a good ability of the system to detect movements and displacements;
    • the feasibility of integrating the Kinect v2 with a classical stereo system, highlighting the need to integrate range cameras into classical 3D photogrammetric systems, especially to overcome limitations in acquisition completeness;
    • the potentialities of the Structure Sensor for the 3D surveying of indoor environments, showing more than sufficient accuracy for most applications;
    • the potentialities of the Structure Sensor for documenting small archaeological finds, where the metric accuracy appears rather good while the textured models show some misalignments.
    In conclusion, although the experimental results demonstrated that range cameras can give good and encouraging results, the performance of traditional 3D modeling techniques in terms of accuracy and precision is still superior and must be preferred when the accuracy requirements are strict. But for a very wide and continuously increasing range of applications, where the required accuracy ranges from a few millimeters (very close range) to a few centimeters, range cameras can be a valuable alternative, especially when non-expert users are involved. Furthermore, the technology on which these sensors are based is continually evolving, driven also by the new generation of AR/VR kits, and their geometric performance will certainly improve soon.
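
    The parabolic and linear precision-versus-depth models mentioned above can be recovered from test data by ordinary least squares; the sketch below uses numpy.polyfit with placeholder depth/precision values, since the measured data from the thesis are not reproduced here.

```python
import numpy as np

# Placeholder depths (m) and depth-precision values (mm); illustration only.
depth = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8])
sigma = np.array([2.0, 3.5, 5.5, 8.0, 11.0, 14.5])

parabolic = np.polyfit(depth, sigma, 2)  # Structured Light sensors (Kinect v1, Structure Sensor)
linear = np.polyfit(depth, sigma, 1)     # Time of Flight sensor (Kinect v2)

print("parabolic coefficients:", parabolic)
print("linear coefficients:", linear)
```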