403 research outputs found

    Sensor integration for robotic laser welding processes

    The use of robotic laser welding is increasing in industrial applications because of its ability to weld objects in three dimensions. Robotic laser welding involves three sub-processes: seam detection and tracking, welding process control, and weld seam inspection. Usually a separate sensory system is required for each sub-process. The use of separate sensory systems leads to heavy and bulky tools, whereas compact and light sensory systems are needed to reach sufficient accuracy and accessibility. In the solution presented in this paper, all three sub-processes are integrated in one compact multi-purpose welding head. This multi-purpose tool is under development and consists of a laser welding head with integrated sensors for seam detection and inspection, which also carries interfaces for process control. It can provide the relative position of the tool and the workpiece in three-dimensional space. Additionally, it can cope with sharp corners along a three-dimensional weld path, which are difficult to detect and weld with conventional equipment due to measurement errors and robot dynamics. In this paper the process of seam detection is elaborated in particular.

    Self-Attention Dense Depth Estimation Network for Unrectified Video Sequences

    The dense depth estimation of a 3D scene has numerous applications, mainly in robotics and surveillance. LiDAR and radar sensors are the hardware solution for real-time depth estimation, but these sensors produce sparse depth maps and are sometimes unreliable. In recent years, research aimed at tackling depth estimation from a single 2D image has received a lot of attention. Deep-learning-based self-supervised depth estimation methods using rectified stereo and monocular video frames have shown promising results. We propose a self-attention-based depth and ego-motion network for unrectified images. We also introduce the non-differentiable distortion of the camera into the training pipeline. Our approach performs competitively when compared to other established approaches that use rectified images for depth estimation.

    Deep Convolutional Neural Networks for Estimating Lens Distortion Parameters

    In this paper we present a convolutional neural network (CNN) to predict multiple lens distortion parameters from a single input image. Unlike other methods, our network is suitable for creating high-resolution output, as it directly estimates the parameters from the image, which can then be used to rectify even very high resolution input images. As our method is fully automatic, it is suitable for both casual creatives and professional artists. Our results show that our network accurately predicts the lens distortion parameters of high-resolution images and corrects the distortions satisfactorily.
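    The rectification step described above can be sketched as inverting a standard two-coefficient radial distortion model. This is a generic illustration, not the paper's implementation: the parameter names (k1, k2), the normalized-coordinate convention, and the fixed-point inversion are all assumptions.

    ```python
    import numpy as np

    def undistort_points(pts_d, k1, k2, iters=50):
        """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
        by fixed-point iteration, starting from the distorted coords.
        pts_d: (N, 2) distorted points in normalized image coordinates."""
        x = pts_d.astype(float).copy()
        for _ in range(iters):
            r2 = np.sum(x**2, axis=1, keepdims=True)
            factor = 1.0 + k1 * r2 + k2 * r2**2
            x = pts_d / factor  # refine the undistorted estimate
        return x
    ```

    In a pipeline like the one the abstract describes, k1 and k2 would come from the network's prediction; the fixed-point inversion converges quickly for moderate distortion.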

    INDOOR PHOTOGRAMMETRY USING UAVS WITH PROTECTIVE STRUCTURES: ISSUES AND PRECISION TESTS

    Abstract. Management of disaster scenarios requires applying emergency procedures that ensure maximum safety and protection for field operators. Actual conditions at disaster sites are labelled "Triple-D: Dull, Dusty, Dangerous" areas. It is well known that in such areas and situations remote surveying systems are most effective, and among these UAVs are currently an effective and high-performing field tool. Indoor spaces are a particularly complex scenario for this kind of survey. In this case, technological advances currently offer micro-UAV systems, featuring 360° protective cages, which are able to collect video streams while flying in very tight spaces. Such cases require manual control of the vehicle, with the operator piloting the aircraft without prior knowledge of the status quo of the survey object and therefore without prior planning of flight paths. A possible benefit in terms of knowledge of the survey object could lie in the creation of a 3D model based on images extracted from the video streams; to date, widely tested methods and techniques are available for processing UAV-borne video streams to obtain such models. However, the protective cage, and the need to use wide-angle lenses in these operating conditions, present some issues linked to ever-changing image framing due to the presence of the cage wires in the field of view. The present work focused on this issue. Using this type of UAV, video streams have been collected in different environments, both indoors and outdoors, testing several procedures for photogrammetric processing in order to assess the ability to create 3D models. These have been tested for reliability based on data collection conditions, also assessing the level of automation and speed attainable in post-processing. The present paper describes the different tests carried out and the related results.

    Impact of lens distortions on strain measurements obtained with digital image correlation

    The determination of strain fields based on displacements obtained via digital image correlation (DIC) at the micro-strain level is still a cumbersome task. In particular, when high strain gradients are involved, e.g. in composite materials with multidirectional fibre reinforcement, uncertainties in the experimental setup and errors in the derivation of the displacement fields can substantially hamper the strain identification process. In this contribution, the aim is to investigate the impact of lens distortions on strain measurements. To this purpose, we first perform pure rigid-body motion experiments, revealing the importance of precise correction of lens distortions. Next, a uni-axial tensile test on a textile composite with spatially varying high strain gradients is performed, resulting in very accurately determined strains along the fibres of the material.
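    Strains are obtained by numerically differentiating the DIC displacement field, which is why any uncorrected lens distortion shows up as spurious gradients. A minimal sketch of that differentiation step, on a synthetic uniform-strain field (the grid spacing and the 1% strain value are illustrative assumptions, not the paper's data):

    ```python
    import numpy as np

    # Hypothetical displacement field on a regular grid: a synthetic
    # uniform 1% tensile strain along x, no displacement along y.
    x = np.linspace(0.0, 10.0, 101)
    y = np.linspace(0.0, 10.0, 101)
    X, Y = np.meshgrid(x, y, indexing="ij")
    u_x = 0.01 * X          # displacement component in x
    u_y = np.zeros_like(X)  # displacement component in y

    # Small-strain components as gradients of the displacement field.
    exx = np.gradient(u_x, x, axis=0)  # du_x/dx
    eyy = np.gradient(u_y, y, axis=1)  # du_y/dy
    ```

    A residual lens distortion added to u_x and u_y would propagate directly into exx and eyy, which is the error source the rigid-body motion experiments in the abstract are designed to expose.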

    Cloud Radiative Effect Study Using Sky Camera

    The analysis of clouds in the earth's atmosphere is important for a variety of applications, viz. weather reporting, climate forecasting, and solar energy generation. In this paper, we focus our attention on the impact of clouds on the total solar irradiance reaching the earth's surface. We use a weather station to record the total solar irradiance. Moreover, we employ a collocated ground-based sky camera to automatically compute the instantaneous cloud coverage. We analyze the relationship between the measured solar irradiance and the computed cloud coverage value, and conclude that higher cloud coverage greatly impacts the total solar irradiance. Such studies will immensely help in solar energy generation and forecasting. Comment: Accepted in Proc. IEEE AP-S Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting, 201

    Depth-Dependent High Distortion Lens Calibration

    Accurate correction of highly distorted images is a very complex problem. Several lens distortion models exist that are adjusted using different techniques. Usually, regardless of the chosen model, a single distortion model is adjusted to undistort images, and the camera-to-calibration-template distance is not considered. Several authors have presented the depth dependency of lens distortion, but none of them have treated it with highly distorted images. This paper presents an analysis of the depth dependency of distortion in strongly distorted images. The division model, which is able to represent high distortion with only one parameter, is modified to yield a depth-dependent high-distortion lens model. The proposed calibration method obtains more accurate results when compared to existing calibration methods. The Instituto de Automatica e Informatica Industrial (ai2) of the Universitat Politecnica de Valencia has financed the open access fees of this paper. Ricolfe Viala, C.; Esparza Peidro, A. (2020). Depth-Dependent High Distortion Lens Calibration. Sensors. 20(13):1-12. https://doi.org/10.3390/s20133695
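    As a rough illustration of the one-parameter division model this abstract builds on (a generic sketch of the plain model, not the paper's depth-dependent variant; the function name and the coordinate convention, centered on the distortion center, are assumptions):

    ```python
    import numpy as np

    def division_undistort(pts_d, lam):
        """One-parameter division model: an undistorted point is
        recovered from a distorted one, in coordinates centered on the
        distortion center, as x_u = x_d / (1 + lam * r_d^2).
        pts_d: (N, 2) distorted points; lam: distortion parameter."""
        r2 = np.sum(pts_d**2, axis=1, keepdims=True)
        return pts_d / (1.0 + lam * r2)
    ```

    The depth-dependent extension described in the paper would, in effect, make the single parameter lam a function of the camera-to-target distance rather than a constant.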