
    A profile measurement system for rail quality assessment during manufacturing

    Steel rails used in the transport sector and in industry are designed and manufactured to support the high stress levels generated by modern high-speed, heavily loaded trains. In the rail manufacturing process, one of the key stages is rolling, where fast, accurate and repeatable rail profile measurement is a major challenge. In this paper, a rail profile measurement system for rail rolling mills based on four conventional, inexpensive laser range finders is proposed. The range finders are calibrated using a common reference to properly express the point clouds generated by each range finder in the world coordinate system. The alignment of the point clouds to the rail model is performed by means of an efficient and robust registration method. Experiments carried out in a rail rolling mill demonstrate the accuracy and repeatability of the system; the maximum error is below 0.12%. All parallelizable tasks were designed and developed to be executed concurrently, achieving an acquisition rate of up to 210 fps.
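    The calibration described above boils down to applying each range finder's extrinsic pose (R, t) so that all four point clouds are expressed in one world coordinate system. A minimal sketch of that mapping, with made-up extrinsics for two sensors facing each other (none of these values come from the paper):

```python
import numpy as np

def to_world(points, R, t):
    """Map an Nx3 point cloud from a sensor's local frame into world coordinates."""
    return points @ R.T + t

# Hypothetical extrinsics: sensor A at the world origin, sensor B 2 m away,
# rotated 180 degrees about the y-axis so it faces A.
R_a, t_a = np.eye(3), np.zeros(3)
R_b, t_b = np.diag([-1.0, 1.0, -1.0]), np.array([0.0, 0.0, 2.0])

# The same physical point, 1 m in front of each sensor along its optical axis,
# lands on identical world coordinates once both clouds are transformed.
p_a = np.array([[0.0, 0.0, 1.0]])
p_b = np.array([[0.0, 0.0, 1.0]])
print(np.allclose(to_world(p_a, R_a, t_a), to_world(p_b, R_b, t_b)))  # True
```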

    Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous across a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not help users assess the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline, combined with a multi-scale camera network approach, provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods.
    Comment: 8 pages, 2015 IEEE International Conference on Robotics and Automation (ICRA '15), Seattle, WA, US
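    One of the quality parameters fed back to the user, Ground Sampling Distance, follows the standard relation GSD = H·p/f (flying height times pixel pitch over focal length); the abstract does not spell out its formula, so this is the textbook version with illustrative camera values:

```python
def ground_sampling_distance(height_m, focal_mm, pixel_pitch_um):
    """Textbook GSD in cm/pixel: ground distance covered by one image pixel."""
    # GSD = H * p / f, with height converted to cm and pixel pitch to mm
    return (height_m * 100.0) * (pixel_pitch_um * 1e-3) / focal_mm

# Illustrative values: 4.4 um pixels, a 15 mm lens, 30 m from the facade
print(round(ground_sampling_distance(30.0, 15.0, 4.4), 4))  # 0.88 cm/pixel
```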

    Robot guidance using machine vision techniques in industrial environments: A comparative review

    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, and to complement the information provided by other sensors to improve their positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends heavily on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work.

    A Laser-Based Vision System for Weld Quality Inspection

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence, positions, and sizes of weld defects can be accurately identified, and non-destructive weld quality inspection can thus be achieved.
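    Laser triangulation turns the stripe's image position into surface height, and the first step is locating the laser line in each camera column. A common choice (not necessarily the exact method used in this paper) is a thresholded intensity centroid per column:

```python
import numpy as np

def extract_laser_line(img, rel_threshold=0.5):
    """Per-column row position of a roughly horizontal laser stripe.

    Keeps only the bright core of each column, then takes the intensity
    centroid, which gives sub-pixel localisation on real images."""
    w = img.astype(float)
    w = np.where(w >= rel_threshold * w.max(axis=0, keepdims=True), w, 0.0)
    rows = np.arange(img.shape[0])[:, None]
    return (w * rows).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)

# Synthetic image: a Gaussian stripe centred on row 12 in every column
img = np.zeros((32, 8))
for c in range(8):
    img[:, c] = np.exp(-0.5 * ((np.arange(32) - 12.0) / 1.5) ** 2)
print(np.allclose(extract_laser_line(img), 12.0))  # True
```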

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Optical MEMS

    Optical microelectromechanical systems (MEMS), microoptoelectromechanical systems (MOEMS), or optical microsystems are devices or systems that interact with light through actuation or sensing at a micro- or millimeter scale. Optical MEMS have had enormous commercial success in projectors, displays, and fiberoptic communications. The best-known example is Texas Instruments' digital micromirror devices (DMDs). The development of optical MEMS was seriously impeded by the Telecom Bubble in 2000. Fortunately, DMDs grew their market size even in that economic downturn. Meanwhile, over the last decade and a half, the optical MEMS market has been slowly but steadily recovering. During this time, the major technological change was the shift from thin-film polysilicon microstructures to single-crystal-silicon microstructures. Especially in the last few years, cloud data centers have been demanding large-port optical cross-connects (OXCs), autonomous driving has been looking for miniature LiDAR, and virtual reality/augmented reality (VR/AR) demands tiny optical scanners. This is a new wave of opportunities for optical MEMS. Furthermore, several research institutes around the world have for many years been developing MOEMS devices for extreme applications (very fine tailoring of light beams in terms of phase, intensity, or wavelength) and/or extreme environments (vacuum, cryogenic temperatures). Accordingly, this Special Issue seeks to showcase research papers, short communications, and review articles that focus on (1) novel design, fabrication, control, and modeling of optical MEMS devices based on all kinds of actuation/sensing mechanisms; and (2) new developments in applying optical MEMS devices of any kind in consumer electronics, optical communications, industry, biology, medicine, agriculture, physics, astronomy, space, or defense.

    Virtual 3D Reconstruction of Archaeological Pottery Using Coarse Registration

    The 3D reconstruction of objects has not only improved the visualisation of digitised objects; it has also helped researchers to actively carry out archaeological pottery studies. Reconstructing pottery is significant in archaeology but is a challenging task among practitioners. For one, excavated pottery is rarely complete enough to provide exhaustive and useful information, so archaeologists attempt to reconstruct it with the available tools and methods. It is also challenging to apply existing reconstruction approaches in archaeological documentation. This limitation makes it difficult to carry out studies within a reasonable time. Hence, interest has shifted to developing new ways of reconstructing archaeological artefacts with new techniques and algorithms. Therefore, this study focuses on providing interventions that ease the challenges encountered in reconstructing archaeological pottery. It applies a data acquisition approach that uses a 3D laser scanner to acquire point cloud data that clearly show the geometric and radiometric properties of the object's surface. The acquired data are processed to remove noise and outliers before undergoing a coarse-to-fine registration strategy, which involves detecting and extracting keypoints from the point clouds and estimating descriptors for them. Correspondences are then estimated between point pairs, leading to a pairwise and global registration of the acquired point clouds. The strength of this approach lies in its flexibility, owing to the nature of the data acquired, which improves the efficiency, robustness and accuracy of the method. The findings show that real 3D datasets can yield good results when used with the right tools; high-resolution lenses and accurate calibration help to give accurate results. While the registration accuracy attained in the study lies between 0.08 and 0.14 mean squared error for the data used, further studies will validate this result. The results obtained are nonetheless useful for further studies in 3D pottery reassembly.
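    The pairwise stage of such a coarse-to-fine pipeline is typically finished with an ICP-style refinement: alternating nearest-neighbour matching with a closed-form (Kabsch) rigid fit. A self-contained sketch on synthetic data (the point sets, sizes, and thresholds below are illustrative, not the thesis dataset):

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form least-squares rigid transform mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Refine an alignment by alternating NN matching and Kabsch fits."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        R, t = kabsch(cur, nn)
        cur = cur @ R.T + t
    return cur

# A cloud and a slightly misaligned copy, as left over after coarse registration
rng = np.random.default_rng(1)
model = rng.uniform(-1.0, 1.0, size=(200, 3))
a = 0.05  # small residual rotation about z
R0 = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
scan = model @ R0.T + np.array([0.02, -0.01, 0.01])
aligned = icp(model, scan)
print(np.abs(aligned - scan).max() < 1e-3)
```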

    A Multi Camera and Multi Laser Calibration Method for 3D Reconstruction of Revolution Parts

    Get PDF
    This paper describes a method for calibrating multi-camera and multi-laser 3D triangulation systems, particularly those using Scheimpflug adapters. Under this configuration, the focus plane of the camera is located at the laser plane, making it difficult to use traditional calibration methods such as chessboard-pattern-based strategies. Our method uses a conical calibration object whose intersections with the laser planes generate stepped line patterns that can be used to calculate the camera-laser homographies. The calibration object has been designed to calibrate scanners for revolving surfaces, but the method can easily be extended to linear setups. The experiments carried out show that the proposed system has a precision of 0.1 mm.
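    Once correspondences between laser-plane coordinates and image pixels are available, a camera-laser homography can be estimated with the standard Direct Linear Transform. A sketch with a made-up homography and noise-free points (the paper's actual estimator is not specified in the abstract):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography mapping src -> dst (>= 4 points)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)   # null vector holds the stacked homography entries
    return H / H[2, 2]

def apply_h(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# Made-up ground-truth homography from laser-plane mm to image pixels
H_true = np.array([[1.2, 0.1, 30.0],
                   [-0.05, 1.1, 40.0],
                   [1e-4, 2e-4, 1.0]])
plane = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0], [50.0, 40.0]])
H_est = homography_dlt(plane, apply_h(H_true, plane))
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```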

    3D Laser Scanner Development and Analysis


    A Single Camera Unit-Based Three-Dimensional Surface Imaging Technique

    The main objective of this study is to develop a single-camera-unit-based three-dimensional surface imaging technique that could be used to reduce the disparity error in three-dimensional (3D) image reconstruction and simplify the calibration process of the imaging system. Current advanced stereoscopic 3D imaging systems use a pair of imaging devices (e.g., complementary metal-oxide semiconductor (CMOS) or charge-coupled device (CCD) sensors), imaging lenses, and other accessories (e.g., light sources, polarizing filters, and diffusers). To reconstruct the 3D scene, the system needs to calibrate the cameras and compute a disparity map. However, in most industrial cases, the two imaging devices are not ideally identical, so it is necessary to finely adjust and compensate for the camera orientation, lens focal length, and intrinsic parameters of each camera. More importantly, conventional stereoscopic systems may respond differently to incident light reflected from the target surface, so the pixel information in the left and right images can differ slightly. This increases the disparity error even when the stereo vision system is calibrated and compensated for rotation and vertical offsets between the two cameras. This thesis aims to solve the aforementioned challenges by proposing a new stereo vision scheme based on only one camera, which obtains the target's 3D data by 3D image reconstruction from two images captured at two different camera positions.
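    Moving one camera between two positions still reduces to standard stereo triangulation: for rectified views, depth follows the textbook relation Z = f·B/d, with f the focal length in pixels, B the baseline between the two capture positions, and d the disparity. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point from its disparity between two rectified views."""
    return f_px * baseline_m / disparity_px

# Illustrative: a 1000 px focal length, the camera shifted 0.1 m sideways,
# and a point whose image moves 50 px between the two captures.
print(depth_from_disparity(1000.0, 0.1, 50.0))  # 2.0
```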