61 research outputs found

    Near-Markerless position measurement using genetics and point-tracing algorithms


    In-Line Monitoring of Laser Welding Using a Smart Vision System

    This paper presents a vision system for the in-line monitoring of laser welding. The system is based on a coaxial optical setup purposely chosen to guarantee robust detection of the joints and optimal acquisition of the melt pool region. Two procedures have been developed: the former keeps the laser head locked onto the joint during welding; the latter monitors the appearance of the keyhole region. The system feeds the joint position back to the robot used to move the welding laser and monitors the penetration state of the laser. The goal is to achieve a continuous adaptation of the laser parameters (power, speed, and focusing) to guarantee the weld quality. The developed algorithms have been designed to optimize the system performance in terms of processing time and of the accuracy and robustness of the detection. The overall architecture follows the Industrial Internet of Things approach, where vision is embedded, edge-based analysis is carried out, actuators are driven directly by the vision system, and a latency-free transmission architecture allows interconnection as well as the possibility to remotely control multiple delocalized units.
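
    The abstract does not detail the joint-detection algorithm, so the following is only a minimal sketch, assuming a simple intensity-based approach in OpenCV: the joint groove is located in a coaxial frame and its lateral offset from the (assumed centred) laser spot is returned as the correction to feed back to the robot. Function names, the region of interest, and the robot interface are illustrative assumptions.

```python
import cv2
import numpy as np

def joint_offset(frame_gray, roi_rows=(0, 100)):
    """Estimate the lateral offset (pixels) of the joint line from the image centre.

    Hypothetical, simplified stand-in for the paper's joint-detection step:
    the joint appears as a dark groove ahead of the melt pool, so we look for
    the darkest column inside a region of interest above the keyhole.
    """
    roi = frame_gray[roi_rows[0]:roi_rows[1], :]
    column_intensity = roi.mean(axis=0)           # mean brightness per column
    joint_col = int(np.argmin(column_intensity))  # darkest column = joint groove
    centre_col = frame_gray.shape[1] // 2         # laser spot assumed centred
    return joint_col - centre_col                 # signed correction in pixels

# Usage sketch: convert the offset to millimetres with the optical scale factor
# and send it to the robot controller as a lateral correction (interface assumed).
# offset_mm = joint_offset(gray) * MM_PER_PIXEL
```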

    Deep Learning for Gesture Recognition in Gym Training Performed by a Vision-Based Augmented Reality Smart Mirror

    This paper illustrates the development and the validation of a smart mirror for sport training. The application is based on the MediaPipe skeletonization algorithm and runs on an Nvidia Jetson Nano embedded device equipped with two fisheye cameras. The software has been evaluated on the biceps curl exercise. The elbow angle has been measured both by MediaPipe and by the BTS motion capture system (ground truth), and the resulting values have been compared to determine angle uncertainty, residual errors, and intra-subject and inter-subject repeatability. The uncertainty of the joints’ estimation and the quality of the images captured by the cameras are reflected in the final uncertainty of the indicator over time, highlighting the areas of improvement for further developments.
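
    As a concrete illustration of the angle measurement described above, the sketch below computes the elbow angle from MediaPipe Pose landmarks (shoulder, elbow, wrist) on a generic camera stream; the mirror's fisheye handling, calibration, and indicator computation are omitted, and the landmark indices follow the public MediaPipe Pose API rather than the paper's exact pipeline.

```python
import cv2
import numpy as np
import mediapipe as mp

def angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c, each given as (x, y)."""
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

pose = mp.solutions.pose.Pose(static_image_mode=False)
lm_ids = mp.solutions.pose.PoseLandmark

cap = cv2.VideoCapture(0)  # placeholder for the mirror's fisheye camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        lm = result.pose_landmarks.landmark
        pts = [(lm[i].x, lm[i].y) for i in (lm_ids.RIGHT_SHOULDER,
                                            lm_ids.RIGHT_ELBOW,
                                            lm_ids.RIGHT_WRIST)]
        print(f"elbow angle: {angle(*pts):.1f} deg")
cap.release()
```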

    First Step Towards Embedded Vision System for Pruning Wood Estimation

    This paper focuses on the development and evaluation of a portable vision-based acquisition device for vineyards, equipped with a GPU-accelerated processing unit. The device is designed to perform in-field image acquisitions with high resolution and dense information. It includes three vision systems: the Intel® RealSense™ depth camera D435i, the Intel® RealSense™ tracking camera T265, and a Basler RGB DART camera. The device is powered by an Nvidia Jetson Nano processing board for both simultaneous data acquisition and real-time processing. The paper presents two specific tasks for which the acquisition device can be useful: wood volume estimation and early bud counting. Acquisition campaigns were conducted in a commercial vineyard in Italy, capturing images of vine shoots and buds using the prototype device. The wood volume estimation software is based on image processing techniques, achieving an RMSE of 2.1 cm³ and a mean deviation of 1.8 cm³. The bud detection task is addressed by fine-tuning the YOLOv8 model on a purposely acquired custom dataset, achieving a promising F1-score of 0.79.
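
    Since the bud-detection step is described as a YOLOv8 fine-tuning on a custom dataset, the snippet below gives a minimal sketch of such a run with the Ultralytics API; the dataset YAML name, image size, and epoch count are illustrative assumptions, not the paper's actual settings.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune it on the custom
# bud dataset (the YAML path and hyperparameters below are assumptions).
model = YOLO("yolov8n.pt")
model.train(data="vine_buds.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate on the validation split; precision/recall are reported, from which
# an F1 score comparable to the paper's 0.79 could be derived.
metrics = model.val()

# Run inference on a new in-field image and count the detected buds.
results = model("shoot_image.jpg")
print("buds detected:", len(results[0].boxes))
```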

    Deep learning-based hand gesture recognition for collaborative robots

    This paper is a first step towards a smart hand gesture recognition setup for collaborative robots, using a Faster R-CNN object detector to find the accurate position of the hands in RGB images. In this work, a gesture is defined as a combination of two hands, where one is an anchor and the other codes the command for the robot. Additional spatial requirements are used to improve the performance of the model and filter out the incorrect predictions made by the detector. As a first step, we used only four gestures.
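
    As a sketch of how such a two-hand scheme could be wired up, the code below runs a torchvision Faster R-CNN detector and pairs an "anchor" hand with a "command" hand using a simple spatial rule. The pretrained weights, score threshold, and pairing rule are illustrative assumptions: the paper's detector was trained on its own hand/gesture classes.

```python
import torch
import torchvision

# Generic COCO-pretrained Faster R-CNN used as a stand-in for the paper's
# hand/gesture detector (which was trained on custom classes).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_gesture(image_tensor, score_thr=0.8):
    """Return an (anchor_box, command_box) pair or None.

    Assumption: the two highest-scoring detections are the two hands, and the
    left-most one acts as the anchor while the other encodes the command.
    """
    with torch.no_grad():
        pred = model([image_tensor])[0]
    boxes = [b for b, s in zip(pred["boxes"], pred["scores"]) if s > score_thr]
    if len(boxes) < 2:
        return None  # spatial requirement: a valid gesture needs both hands
    anchor, command = sorted(boxes[:2], key=lambda b: b[0].item())  # sort by x
    return anchor, command
```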

    A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination

    Rolling contact wear/fatigue tests on wheel/rail specimens are important for producing wheels and rails of new materials with improved lifetime and performance, able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials on a laboratory test bench. The 3D macro-topography and the angular position of the specimen are measured simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time makes it possible to obtain blur-free images with excellent definition. The system is described with the aid of end-of-cycle specimens, as well as of in-test specimens.
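
    The abstract mentions synthetic indexes for classifying the surface micro-topography without defining them; as a purely illustrative stand-in, the sketch below computes two generic image-based indexes (RMS intensity deviation and mean gradient magnitude) over a specimen region of interest. These are not the paper's indexes, only an assumption about the kind of quantity involved.

```python
import cv2
import numpy as np

def surface_indexes(frame_gray, roi):
    """Compute two generic micro-topography indexes on a region of interest.

    Hypothetical stand-ins for the paper's synthetic indexes: RMS deviation of
    intensity (a rough proxy for texture amplitude) and mean Sobel gradient
    magnitude (a proxy for fine-scale surface damage).
    """
    x, y, w, h = roi
    patch = frame_gray[y:y + h, x:x + w].astype(np.float32)
    rms_dev = float(np.sqrt(np.mean((patch - patch.mean()) ** 2)))
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mean_grad = float(np.mean(np.hypot(gx, gy)))
    return rms_dev, mean_grad
```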

    Computer vision-based mapping of grapevine vigor variability for enhanced fertilization strategies through intelligent pruning estimation

    The objective of this study is to develop an affordable and non-invasive method using computer vision to estimate pruning weight in commercial vineyards. The study aims to enable controlled fertilization by leveraging pruning data as an indicator of plant vigor [1]. The methodology entails the analysis of RGB and depth images acquired through an embedded platform (Figure 1) in a vineyard cultivating Corvina grapes with the Guyot training method [2]. Initially, pruning weight was evaluated by processing pictures taken manually with a controlled background. Then, we developed an algorithm to estimate pruned wood weight based on these images. Subsequently, a mobile sensor platform was used to automatically capture grapevine images without a controlled background. The collected data will then be used to deploy a convolutional neural network (CNN) for intelligent pruning estimation capable of extracting meaningful data from real-world environments. Additionally, we integrated and validated a visual-odometry sensor (Intel RealSense T265) to map the spatial variability of the pruning estimation results.
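
    A minimal sketch of the kind of image-based wood estimation described above, assuming the manually acquired pictures with a controlled (uniform) background: the cane pixels are segmented by a simple global threshold and their count is converted to an area using depth and the pinhole camera model. The threshold value, scale handling, and function names are assumptions, not the study's actual pipeline.

```python
import cv2
import numpy as np

def wood_pixel_mask(rgb, background_is_light=True, thr=120):
    """Segment pruned canes against a controlled background.

    Hypothetical simplification of the study's processing: with a uniform light
    background, dark wood pixels can be isolated by a global threshold.
    Returns the pixel count and the binary mask.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    mode = cv2.THRESH_BINARY_INV if background_is_light else cv2.THRESH_BINARY
    _, mask = cv2.threshold(gray, thr, 255, mode)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    return int(np.count_nonzero(mask)), mask

def pixels_to_cm2(pixel_count, depth_m, fx, fy):
    """Convert a pixel count to an approximate area in cm^2 at a given depth,
    using the pinhole model with focal lengths fx, fy (in pixels)."""
    pixel_w = depth_m / fx * 100.0  # cm per pixel horizontally
    pixel_h = depth_m / fy * 100.0  # cm per pixel vertically
    return pixel_count * pixel_w * pixel_h
```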

    STEWIE: eSTimating grapE berries number and radius from images using a Weakly supervIsed nEural network

    Counting tasks with overlapping and occluded targets are often tackled by means of neural networks outputting density maps. While this approach has proven to be highly effective for crowd-counting tasks, it has not been exploited extensively in other fields (such as fruit counting). Furthermore, this approach has never been used to infer the shape or the size of the recognized objects. In this paper, we present a novel deep learning-based methodology to automatically estimate the number of grape berries present in an image and evaluate their average radius as a double output of the network. For the model training, we employ a public dataset consisting of 300 vine images, where each berry center has been dot-annotated. Since the dataset does not directly provide information about the berry radii, we first develop a numerical optimization methodology to calculate the radius of the berries by exploiting the dot annotations, some prior knowledge (maximum berry size), and a current state-of-the-art segmentation model. Then, we employ the combined information (berry center and radius) to train a custom neural network that outputs two density maps, from which we infer the number of berries in the image and their average size.
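
    The two-map output described above lends itself to a simple read-out: as in crowd counting, the berry count is the integral of the first density map, and the average radius can be recovered by normalizing the second map by that count. The sketch below shows only this inference step, assuming the trained network already returns the two maps; the names and the exact normalization are assumptions.

```python
import numpy as np

def infer_count_and_radius(density_map, radius_map):
    """Read out berry count and average radius from the two network outputs.

    Assumptions: density_map integrates to the number of berries (as in
    crowd counting), and radius_map is the same density weighted by each
    berry's radius in pixels, so its integral divided by the count gives
    the mean radius.
    """
    count = float(np.sum(density_map))
    if count <= 0:
        return 0.0, 0.0
    avg_radius = float(np.sum(radius_map)) / count
    return count, avg_radius

# Example with dummy maps standing in for the network outputs.
d = np.zeros((64, 64)); d[10:14, 10:14] = 1.0 / 16   # one "berry" worth of density
r = d * 7.5                                          # radius-weighted map (7.5 px berry)
print(infer_count_and_radius(d, r))                  # ~ (1.0, 7.5)
```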

    Preventing and monitoring work-related diseases in firefighters: a literature review on sensor-based systems and future perspectives in robotic devices.

    In recent years, the necessity to prevent work-related diseases has led to the use of sensor-based systems to measure important features during working activities. This topic has achieved great popularity, especially in hazardous and demanding activities such as those required of firefighters. Among feasible sensor systems, wearable sensors have revealed their advantages in terms of the possibility of conducting measurements in real conditions without influencing the movements of workers. In addition, the advent of robotics can also be exploited in order to reduce work-related disorders. The present literature review aims at providing an overview of sensor-based systems used to monitor physiological and physical parameters in firefighters during real activities, as well as to offer ideas for understanding the potentialities of exoskeletons and assistive devices.

    Meta-collaborative workstations based on vision systems and intelligent algorithms

    This contribution presents the characteristics of a meta-collaborative industrial workstation, conceived to guarantee human-robot collaboration regardless of the presence of physical barriers between the parties, from an Industry 4.0 perspective. To this end, the proposed system, implemented in ROS, relies on the visual communication channel and adopts communication by means of gesture commands. The development of the system is still in progress; therefore, partial results are presented concerning the recognition and translation of the gesture, carried out through the R-FCN object detector.
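
    As a minimal sketch of how a recognized and translated gesture could be turned into a robot command inside ROS, the node below publishes the detector's output as a string command. The topic name, message type, and the `detect_gesture()` placeholder are assumptions, since the system described above is still under development.

```python
import rospy
from std_msgs.msg import String

def detect_gesture(frame):
    """Placeholder for the R-FCN based detector: returns a gesture label or None."""
    return None  # the real detector would run the trained R-FCN on the camera frame

def main():
    rospy.init_node("gesture_commander")
    pub = rospy.Publisher("/gesture_cmd", String, queue_size=1)  # topic name assumed
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        frame = None  # placeholder for the camera acquisition
        gesture = detect_gesture(frame)
        if gesture is not None:
            pub.publish(String(data=gesture))  # e.g. "start", "stop", "pick"
        rate.sleep()

if __name__ == "__main__":
    main()
```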