
    Image-based 3D digitizing for plant architecture analysis and phenotyping.

    Abstract: Functional-structural plant modeling and plant phenotyping require the measurement of geometric features of specimens. This data acquisition is called plant digitizing. Currently, these measurements are performed manually, in invasive or even destructive ways, or using expensive laser scanning equipment. Computer-vision-based 3D reconstruction is an accurate and low-cost alternative for digitizing plants that do not present a dense canopy. Sparse canopies are found in several agriculturally important annual plants, such as soybean and maize, at least in their early stages of development. This paper shows that the state of the art in structure from motion and multiple-view stereo can produce accurate 3D models of specimens with sparse canopies. Three-dimensional triangular meshes are computed from a set of non-calibrated images, modeling a basil and an Ixora specimen and accurately representing their leaves and branches. (SIBGRAPI 2012)
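    A structure-from-motion pipeline like the one referenced above ends by triangulating 3D points from matched observations in two or more views. Below is a minimal sketch of linear (DLT) triangulation in NumPy with synthetic cameras and a synthetic point; it illustrates the principle only and is not the paper's implementation.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point seen in two views.

    P1, P2: 3x4 camera projection matrices (e.g. from SfM pose estimation).
    x1, x2: 2D image observations of the same point in each view.
    Returns the 3D point in the common world frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X; stack them and solve A X = 0 via SVD.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera P and dehomogenize."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic normalized cameras: identity and a laterally translated one.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

    With noise-free synthetic projections the DLT solution recovers the point exactly; real pipelines refine such linear estimates with bundle adjustment.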

    In situ monitoring of additive manufacturing using digital image correlation: A review

    This paper is a critical review of in situ full-field measurements provided by digital image correlation (DIC) for inspecting and enhancing additive manufacturing (AM) processes. The principle of DIC is first recalled and its applicability during different AM processes systematically addressed. Relevant customisations of DIC in AM processes are highlighted regarding the optical system, lighting and speckle-pattern procedures. A perspective is given on the impact of in situ monitoring of AM processes with respect to defect characterisation, evaluation of residual stresses, geometric distortions, strain measurements, numerical model validation and material characterisation. Finally, a case study on in situ measurements with DIC for wire and arc additive manufacturing (WAAM) is presented, emphasizing opportunities, challenges and solutions. (Funding: UIDB/00667/2020, POCI-01-0145-FEDER-016414)
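    The DIC principle the review recalls is, at its core, subset matching: a small speckled patch of the reference image is located in the deformed image by maximizing a correlation score. A minimal integer-pixel sketch using zero-normalized cross-correlation follows; the image size, subset size and search radius are illustrative, not the review's setup.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_subset(ref, cur, top_left, size, search=5):
    """Integer-pixel displacement of a reference subset, found by
    maximizing ZNCC over a small search window in the current image."""
    r, c = top_left
    subset = ref[r:r + size, c:c + size]
    best, best_d = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = cur[r + dr:r + dr + size, c + dc:c + dc + size]
            if cand.shape != subset.shape:  # skip window positions off-image
                continue
            score = zncc(subset, cand)
            if score > best:
                best, best_d = score, (dr, dc)
    return best_d

# Synthetic speckle image, rigidly shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(match_subset(ref, cur, top_left=(20, 20), size=15))  # (2, 3)
```

    Practical DIC refines this to sub-pixel accuracy with interpolation and shape functions, which is where the AM-specific customisations (optics, lighting, speckle pattern) matter most.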

    Facility for validating technologies for the autonomous space rendezvous and docking to uncooperative targets

    We present the latest advancements in the air-bearing facility installed at La Sapienza's GN Lab in the School of Aerospace Engineering. This facility has recently been used to validate robust control laws for simultaneous attitude control and active vibration damping. The instrumentation and testbed have been restructured and enhanced to enable simulations of close-proximity operations. Relative pose determination, accomplished through visual navigation as either an auxiliary or standalone system, is the first building block. Leveraging the acquired knowledge, optimal guidance and control algorithms can be tested for contactless operations (e.g. on-orbit inspection), as well as for berthing and docking tasks.

    Measuring the accuracy of digitization of contactless scanners

    The aim of the thesis is the practical verification and determination of the digitization accuracy of the contactless 3D scanners available at the TUL/KSA department, in accordance with the procedures used for the calibration (acceptance tests) of these devices. The thesis covers the laboratory equipment needed for the practical part of the work (the contactless 3D scanners Atos III Triple Scan, Metra-Scan, Ein-scan and REV scan, the Leica AT901-MR, and the GOM Inspect software), the principles of optical digitization, and the so-called acceptance tests. The recommended procedures for testing the accuracy of optical 3D scanners are implemented on a calibration standard (etalon) whose nominal dimensions are determined by measurement on a coordinate measuring machine (CMM). Using this standard, the digitization accuracy of each scanner is determined; the results are processed, analyzed and compared with the data provided by the device manufacturers.
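    The evaluation step described above reduces to comparing each scanner's measured dimensions against the CMM-certified nominal values and reporting error statistics. A minimal sketch, with hypothetical measurement values rather than the thesis's data:

```python
import numpy as np

# Hypothetical length measurements (mm) on a calibration artefact:
# nominal values certified on a CMM, one measured series per scanner.
nominal = np.array([100.000, 200.000, 300.000])
measured = {
    "scanner_A": np.array([100.012, 200.021, 299.985]),
    "scanner_B": np.array([99.950, 200.080, 300.110]),
}

for name, vals in measured.items():
    err = vals - nominal
    # Report the mean error (bias) and the maximum absolute deviation,
    # the kind of figures compared against manufacturer specifications
    # in an acceptance test.
    print(f"{name}: bias={err.mean():+.4f} mm, max|err|={np.abs(err).max():.4f} mm")
```

    Real acceptance tests (e.g. per the VDI/VDE 2634 guideline family) define specific artefacts and error metrics such as probing and sphere-spacing error; the sketch only shows the comparison principle.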

    An integrated python-based open-source Digital Image Correlation Software for in-plane and out-of-plane measurements (Part 2)

    This work presents an out-of-the-box Python-based open-source 3D Digital Image Correlation (3D-DIC) software for both in-plane and out-of-plane full-field measurements, denoted iCorrVision-3D. The software includes an integrated stereo grabber for image acquisition, stereo calibration, numerical stereo correlation and post-processing modules. The main objective is to provide users with a complete integrated 3D-DIC system. All important DIC setting parameters can be easily controlled by the user from an intuitive graphical interface; for instance, the interpolation strategy and correlation techniques that are usually closed to users are available for modification in iCorrVision-3D. The proposed software can be used in a great number of engineering applications. Results indicate that iCorrVision-3D is robust and accurate in reconstructing the 3D shape of objects and in evaluating the out-of-plane full-field displacement of specimens under test. (Funding: FCT-MCTES projects PTDC/EMD-EMD/1230/2021 (AneurysmTool) and UIDB/00667/2020 (UNIDEMI); Brazilian agencies CAPES, FAPERJ and CNPq. © 2022 The Authors)
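    The out-of-plane sensitivity of a stereo (3D-DIC) setup comes from triangulating matched points across the calibrated pair; for a rectified rig this reduces to the classic relation Z = f·B/d between disparity and depth. A one-function sketch with illustrative numbers (this is the general stereo relation, not iCorrVision-3D's API):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset of a matched point between views.
    focal_px:     focal length expressed in pixels (from stereo calibration).
    baseline_mm:  distance between the two camera centers.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: 1200 px focal length, 80 mm baseline,
# 24 px disparity -> 4000 mm depth.
print(depth_from_disparity(24.0, 1200.0, 80.0))  # 4000.0
```

    Tracking how each point's triangulated depth changes between deformed states is what yields the out-of-plane full-field displacement.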

    Review of machine-vision based methodologies for displacement measurement in civil structures

    [Author accepted manuscript; the final version is available from Springer Verlag via the DOI in this record.] Vision-based systems are promising tools for displacement measurement in civil structures, with advantages over traditional displacement sensors in instrumentation cost, installation effort, and measurement capacity in terms of frequency range and spatial resolution. Approximately one hundred papers to date have appeared on this subject, investigating topics such as system development and improvement, viability in field applications, and the potential for structural condition assessment. The main contribution of this paper is a literature review of vision-based displacement measurement from the perspectives of methodologies and applications. Video processing procedures are summarised as a three-component framework: camera calibration, target tracking and structural displacement calculation. Methods for each component are presented in principle, with discussion of their relative advantages and limitations. Applications in the two most active fields, bridge deformation and cable vibration measurement, are examined, followed by a summary of challenges observed in field monitoring tests. Important gaps requiring further investigation are identified, e.g. robust tracking methods, non-contact sensing, and measurement accuracy evaluation in field conditions.
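    In the simplest instance of the three-component framework above, the displacement-calculation step reduces to applying a calibration scale factor to tracked pixel motion, assuming a planar target viewed roughly fronto-parallel. A hedged sketch (target size, pixel span and track values are all illustrative):

```python
def pixel_to_displacement(track_px, scale_mm_per_px):
    """Convert tracked pixel motion of a target to structural displacement.

    track_px:        list of (u, v) pixel offsets from the reference frame,
                     produced by the target-tracking component.
    scale_mm_per_px: obtained in the calibration step, e.g. from a target
                     of known physical size imaged at the measurement distance.
    """
    return [(u * scale_mm_per_px, v * scale_mm_per_px) for u, v in track_px]

# Hypothetical calibration: a 100 mm target spans 200 px -> 0.5 mm/px.
scale = 100.0 / 200.0
track = [(0.0, 0.0), (1.5, -3.0), (2.5, -7.5)]
print(pixel_to_displacement(track, scale))
# [(0.0, 0.0), (0.75, -1.5), (1.25, -3.75)]
```

    Full camera calibration (intrinsics plus pose) replaces this scalar when the camera axis is not perpendicular to the motion plane, which is one of the field-condition accuracy issues the review flags.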

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside a hospital. A common problem in large hospital buildings today is that visitors cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust, foolproof, non-hierarchical sensor fusion approach combining active RFID, stereovision and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the system automatically adjusts the robot's speed to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in unstructured environments solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and thus the required indoor infrastructure. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer that handles all module computing and is powered up automatically once the box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor.
    To support standardized communication between the different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with a PGM. Finally, in several field trials in hospital environments, the person guidance module has demonstrated its suitability for a challenging real-world application as well as the necessary user acceptance.
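    The follower-pace speed adaptation described above can be sketched as a simple proportional controller on the robot-to-follower distance. The gain, target distance and speed limits below are illustrative assumptions, not the PGM's actual parameters:

```python
def adjust_speed(robot_speed, follower_distance, target_distance=1.2,
                 gain=0.5, v_min=0.0, v_max=1.0):
    """Proportional speed adjustment (all values illustrative, in m and m/s).

    Keeps the robot at a comfortable distance from the guided person:
    if the follower lags (distance grows), the robot slows down;
    if the follower closes in, it speeds up, clamped to [v_min, v_max].
    """
    error = target_distance - follower_distance  # > 0: follower is close
    new_speed = robot_speed + gain * error
    return max(v_min, min(v_max, new_speed))

# Follower has fallen 2.0 m behind -> the robot slows from 0.8 m/s.
print(adjust_speed(0.8, follower_distance=2.0))
```

    A deployed system would low-pass filter the fused distance estimate and add hysteresis so the speed does not oscillate with sensor noise.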

    LOW-COST DEPTH-CAMERA: OPEN-SOURCE 3D DISPLACEMENT MEASUREMENTS FOR 4D PRINTED HYGROSCOPIC COMPOSITES

    4D printing (4DP) is a growing branch of 3D printing technology that involves the design of composite material architectures capable of shape-change transformations, which occur post-printing in response to external stimuli. Among these, Wood Polymer Composites (WPCs) change their shape in reaction to changes in moisture content, shrinking or swelling like natural wood until equilibrium with the environment is reached. Such intrinsic material behavior can be particularly useful in the development of passive moisture airflow controllers that modulate humidity and airflow in indoor environments to improve air quality. Precise measurement of the time-dependent, stimulus-induced shape-change response of these composites is critical to assess the responsiveness, reaction velocity and overall deformation of the designed 4DP composite mechanisms. Until now, Digital Image Correlation (DIC) techniques have been widely used for this purpose; however, DIC methods require expensive equipment and costly commercial software. This paper presents a Low-Cost Depth-Camera (LCDC) method that uses a free custom algorithm to produce a 3D coloured displacement map with the corresponding meshes of the acquired object. The LCDC method does not require specialized equipment and allows for an overall understanding of the time-dependent deformation of 4DP actuators; it also facilitates the comparison of composites with different properties under the same external conditions. This new LCDC method has the potential to further 4DP research by providing an open-source, accessible and reliable tool for 3D displacement measurement.
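    A depth-camera displacement map of the kind described above can be sketched per pixel as the difference between a reference and a current depth frame, masking out pixels without a valid reading. The threshold, units and toy frames are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def out_of_plane_displacement(depth_ref, depth_cur, valid_min=1e-3):
    """Per-pixel out-of-plane displacement between two depth frames.

    depth_ref, depth_cur: HxW depth maps in the same units (e.g. mm);
    pixels with no depth reading are assumed <= valid_min.
    Positive values mean the surface moved toward the camera
    (its depth decreased). Invalid pixels are NaN.
    """
    valid = (depth_ref > valid_min) & (depth_cur > valid_min)
    return np.where(valid, depth_ref - depth_cur, np.nan)

ref = np.full((4, 4), 500.0)   # flat specimen at 500 mm
cur = ref.copy()
cur[1:3, 1:3] -= 2.5           # central patch swells 2.5 mm toward the camera
disp = out_of_plane_displacement(ref, cur)
print(np.nanmax(disp))  # 2.5
```

    Repeating this against a fixed reference frame over time gives the time-dependent deformation curves used to compare actuators under identical humidity conditions.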

    Point cloud hand-object segmentation using multimodal imaging with thermal and color data for safe robotic object handover

    This paper presents an application of neural networks operating on multimodal 3D data (3D point cloud, RGB, thermal) to effectively and precisely segment human hands and hand-held objects in order to realize a safe human–robot object handover. We discuss the problems encountered in building a multimodal sensor system, with a focus on the calibration and alignment of a set of cameras including RGB, thermal, and NIR cameras. We propose the use of a copper–plastic chessboard calibration target with an internal active light source (near-infrared and visible light). After brief heating, the calibration target can be captured simultaneously and legibly by all cameras. Based on the multimodal dataset captured by our sensor system, PointNet, PointNet++, and RandLA-Net are used to verify the effectiveness of multimodal point cloud data for hand–object segmentation. These networks were trained on various data modes (XYZ, XYZ-T, XYZ-RGB, and XYZ-RGB-T). The experimental results show a significant improvement in the segmentation performance of XYZ-RGB-T (mean Intersection over Union: 82.8% with RandLA-Net) compared with the other three modes (77.3% for XYZ-RGB, 35.7% for XYZ-T, 35.7% for XYZ); notably, the Intersection over Union for the hand class alone reaches 92.6%.