
    Embedded Line Scan Image Sensors: The Low Cost Alternative for High Speed Imaging

    In this paper we propose a low-cost, high-speed line scan imaging system. We replace an expensive industrial line scan camera and its illumination with a custom-built set-up of cheap off-the-shelf components, yielding a measurement system of comparable quality at about 20 times lower cost. We use a low-cost linear (1D) image sensor, cheap optics including LED- or laser-based lighting, and an embedded platform to process the images. We propose a step-by-step method to design such a custom high-speed imaging system and to select the proper components. Simulations that predict the final image quality of the set-up have been developed. Finally, we applied our method in a lab set-up closely representing real-life cases. Our results show that our simulations are accurate and that our low-cost line scan set-up achieves image quality comparable to that of the high-end commercial vision system, for a fraction of the price.
    Comment: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA)
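    The component-selection step described above comes down to simple throughput arithmetic: the line rate must keep up with the object's motion, and the exposure must keep blur under one pixel. A minimal sketch with illustrative numbers (the speed and pixel footprint are assumptions, not values from the paper):

    ```python
    # Hypothetical design inputs: 2 m/s object speed, 0.1 mm pixel
    # footprint on the object (values are illustrative only).
    speed_m_s = 2.0
    pixel_footprint_m = 0.1e-3

    # One acquired line per pixel of object travel -> minimum line rate (Hz)
    min_line_rate_hz = speed_m_s / pixel_footprint_m

    # Keep motion blur under one pixel -> maximum exposure time (s)
    max_exposure_s = pixel_footprint_m / speed_m_s
    ```

    With these numbers the sensor must sustain a 20 kHz line rate with at most a 50 µs exposure, which in turn constrains how much light the LED or laser illumination must deliver.
    
    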

    CAD2Render: A Modular Toolkit for GPU-accelerated Photorealistic Synthetic Data Generation for the Manufacturing Industry

    The use of computer vision for product and assembly quality control is becoming ubiquitous in the manufacturing industry. Lately, it is apparent that machine-learning-based solutions outperform classical computer vision algorithms in terms of performance and robustness. However, a main drawback is that they require sufficiently large, labeled training datasets, which are often not available or too tedious and time-consuming to acquire. This is especially true for low-volume, high-variance manufacturing. Fortunately, in this industry, CAD models of the manufactured or assembled products are available. This paper introduces CAD2Render, a GPU-accelerated synthetic data generator based on the Unity High Definition Render Pipeline (HDRP). CAD2Render is designed to add variations in a modular fashion, enabling highly customizable data generation tailored to the needs of the industrial use case at hand. Although CAD2Render is specifically designed for manufacturing use cases, it can be used for other domains as well. We validate CAD2Render by demonstrating state-of-the-art performance in two industrially relevant set-ups. We demonstrate that the data generated by our approach can be used to train object detection and pose estimation models with a high enough accuracy to direct a robot. The code for CAD2Render is available at https://github.com/EDM-Research/CAD2Render.
    Comment: Accepted at the Workshop on Photorealistic Image and Environment Synthesis for Computer Vision (PIES-CV) at WACV2
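    The modular variation idea behind such a generator can be sketched as sampling independent randomization parameters per rendered scene. This is only an illustration of the concept; the parameter names and ranges are invented and are not CAD2Render's actual API:

    ```python
    import random

    # Sketch of modular domain randomization in the spirit of CAD2Render.
    # All parameter names and ranges here are hypothetical.
    def sample_scene_variation(rng):
        """Draw one randomized scene configuration for synthetic rendering."""
        return {
            "light_intensity": rng.uniform(0.5, 2.0),    # relative to nominal
            "camera_distance_m": rng.uniform(0.3, 1.0),
            "object_yaw_deg": rng.uniform(0.0, 360.0),
            "background_id": rng.randrange(20),          # pick 1 of 20 backdrops
        }

    rng = random.Random(42)  # fixed seed -> reproducible dataset
    scenes = [sample_scene_variation(rng) for _ in range(1000)]
    ```

    Each sampled dictionary would drive one render; because the variation modules are independent, new randomizers (texture, occluders, sensor noise) can be added without touching the others.
    
    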

    Sparse multi-sensor monitoring system design for vehicle application

    In today's fast-growing vehicle industry, the number of functionalities (comfort features, monitoring features, safety features, etc.) is steadily increasing. Each of these functionalities is developed independently of the others, so sensors are not shared among them. Although this design approach yields robust monitoring of the different functionalities, it requires a large number of sensors in different locations, resulting in a complex hardware and software architecture (e.g. complex wiring). This paper describes our approach, in which a multi-sensor design method is used to optimally select the locations of sensors shared by different functionalities. This reduces the number of sensors while monitoring the same set of functionalities. We demonstrate an optimization algorithm based on Multi-Objective Integer Programming (MOIP) for optimal sensor placement for Motion Sickness Dose Value (MSDV) estimation and Speed Bump Detection (SBD) as part of a driver-assistance system. The algorithm is further validated on a numerical data set captured from an IPG CarMaker vehicle model. The methodology can be extended to more functionalities, with a large number of applications in the vehicle industry.
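    The shared-sensor selection problem can be illustrated with a toy version of the placement optimization: pick the cheapest set of sensor locations such that every functionality is covered by at least one selected sensor. The exhaustive search below stands in for the paper's MOIP solver, and the coverage sets and costs are invented for illustration:

    ```python
    from itertools import combinations

    # Hypothetical coverage data: which candidate sensor locations can
    # serve each functionality (MSDV estimation, speed bump detection).
    coverage = {
        "MSDV": {"front_axle", "seat_rail", "chassis_center"},
        "SBD":  {"front_axle", "rear_axle"},
    }
    cost = {"front_axle": 3, "rear_axle": 2, "seat_rail": 1, "chassis_center": 2}

    def cheapest_placement(coverage, cost):
        """Exhaustive search stand-in for the MOIP solver: find the
        cheapest sensor subset that covers every functionality."""
        locations = sorted(cost)
        best = None
        for r in range(1, len(locations) + 1):
            for subset in combinations(locations, r):
                # feasible if every functionality sees at least one sensor
                if all(set(subset) & locs for locs in coverage.values()):
                    total = sum(cost[s] for s in subset)
                    if best is None or total < best[0]:
                        best = (total, subset)
        return best

    best_cost, best_set = cheapest_placement(coverage, cost)
    ```

    A real instance would add per-functionality accuracy objectives alongside cost, which is what makes the formulation multi-objective rather than a plain set-cover problem.
    
    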

    Motion blur characterization and compensation for line scan (1D) cameras

    Line (1D) sensors are often used for quality monitoring of moving objects in industrial environments. They are, for instance, used to derive dimensional and geometrical information of moving objects or products. Their high sampling rate makes them well suited for retrieving information from fast-moving objects. However, under fast motion the 1D sensor, like any other image sensor, introduces artefacts commonly referred to as motion blur. In this paper, we discuss 1D sensor motion blur and methods to compensate for it. An experimental set-up and a simulation tool have been developed to characterize the motion blur of 1D sensors. Once the blur is properly characterized, a deblurring algorithm (based on a non-blind deconvolution method) reconstructs a deblurred image. The results are validated using experimental data collected from a vibrating string. Dimensional feature measurements of the vibrating string, with and without deblurring, are compared. The analysis shows that the proposed deblurring method reduces the measurement variance by a factor of two.
    Oramas Mogrovejo J.A., Ompusunggu A.P., Tuytelaars T., Bey-Temsamani A., ''Motion blur characterization and compensation for line scan (1D) cameras'', SPIE Conference on Automated Visual Inspection and Machine Vision, 15 pp., June 29, 2017, Munich, Germany. status: published
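    Non-blind deconvolution assumes the blur kernel is known from the characterization step. A minimal 1D sketch using a Wiener filter (one common non-blind method; the paper's exact algorithm is not specified here) with a naive DFT so the example stays dependency-free:

    ```python
    import cmath

    def dft(x):
        """Naive O(n^2) discrete Fourier transform (fine for a demo)."""
        n = len(x)
        return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                    for k in range(n)) for i in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[k] * cmath.exp(2j * cmath.pi * i * k / n)
                    for k in range(n)).real / n for i in range(n)]

    def wiener_deblur(blurred, kernel, noise_power=1e-4):
        """Non-blind deconvolution: conj(H) / (|H|^2 + noise_power)."""
        n = len(blurred)
        h = list(kernel) + [0.0] * (n - len(kernel))  # zero-pad kernel
        B, H = dft(blurred), dft(h)
        X = [b * H[i].conjugate() / (abs(H[i]) ** 2 + noise_power)
             for i, b in enumerate(B)]
        return idft(X)

    # Simulate: a sharp 1D profile blurred by a 4-sample box kernel
    # (circular convolution), as if the object moved during exposure.
    n = 32
    sharp = [1.0 if 10 <= i < 14 else 0.0 for i in range(n)]
    kernel = [0.25] * 4
    blurred = [sum(kernel[j] * sharp[(i - j) % n] for j in range(4))
               for i in range(n)]

    deblurred = wiener_deblur(blurred, kernel)
    blur_err = max(abs(b - s) for b, s in zip(blurred, sharp))
    deblur_err = max(abs(d - s) for d, s in zip(deblurred, sharp))
    ```

    The `noise_power` term regularizes frequencies where the kernel response is near zero; on real sensor data it would be tuned to the measured noise level rather than fixed.
    
    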

    Encoding Stability into Laser Powder Bed Fusion Monitoring Using Temporal Features and Pore Density Modelling

    In laser powder bed fusion (LPBF), melt pool instability can lead to the development of pores in printed parts, reducing the part’s structural strength. While camera-based monitoring systems have been introduced to improve melt pool stability, these systems only measure melt pool stability in limited, indirect ways. We propose that melt pool stability can be improved by explicitly encoding stability into LPBF monitoring systems through the use of temporal features and pore density modelling. We introduce temporal features, in the form of temporal variances of common LPBF monitoring features (e.g., melt pool area, intensity), to explicitly quantify printing stability. Furthermore, we introduce a neural network model trained to link these video features directly to pore densities estimated from CT scans of previously printed parts. This model aims to reduce the number of online printer interventions to only those that are required to avoid porosity. These contributions are implemented in a full LPBF monitoring system and tested on prints using 316L stainless steel. Results showed that our explicit stability quantification improved the correlation between our predicted pore densities and true pore densities by up to 42%.
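    The temporal features described above are essentially sliding-window variances of per-frame monitoring signals. A minimal sketch (the melt pool area values and window length are invented for illustration):

    ```python
    from statistics import pvariance

    def temporal_variance(series, window):
        """Sliding-window variance of a per-frame monitoring feature
        (e.g. melt pool area); high values flag unstable printing."""
        return [pvariance(series[i:i + window])
                for i in range(len(series) - window + 1)]

    # Hypothetical melt pool areas: a stable stretch followed by an
    # unstable one where the area oscillates strongly frame to frame.
    areas = [100, 101, 99, 100, 100, 120, 80, 130, 70, 125]
    var = temporal_variance(areas, window=5)
    ```

    The variance trace stays near zero during the stable stretch and rises sharply once the oscillation starts, which is the signal a downstream pore-density model can consume.
    
    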

    Prognostics for Optimal Maintenance: Maintenance Cost Versus Product Quality Optimization for Industrial Cases

    Production engineers often establish a correlation between the quality degradation of a product and the maintenance of a machine. To assess this correlation, some assumptions are made. In most cases it is assumed that product quality degrades after a fixed number of operating cycles of the production machine, so maintenance is performed only after this number of cycles has been reached. This kind of assumption is often not valid in modern industry, since high product variability, machine/component tolerances, component reliability variations, extensive or smooth usage, etc., make the degradation dynamic in time. As a result, product quality may degrade quickly when this variability is high, or slowly when it is low. Both cases lead to low benefit: lost production in the former case, redundant maintenance in the latter. In this paper we propose a solution to this problem that maximizes the benefit using online monitoring of the product’s quality degradation and the evolution of the maintenance cost. A condition-based maintenance framework for industry, developed in the Prognostics for Optimal Maintenance (POM) project [1] and described in [2], is applied to two industrial use cases in order to deploy and validate the proposed technique. status: accepted
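    The benefit trade-off described above can be made concrete with a toy model: per-cycle profit declines as quality degrades with machine wear, and each maintenance action resets the wear at a fixed cost. The model and all numbers below are illustrative, not from the paper:

    ```python
    # Illustrative benefit model: maintaining too often wastes maintenance
    # cost, maintaining too rarely loses profit to quality degradation.
    MAINT_COST = 50.0        # cost of one maintenance action
    PROFIT_PER_CYCLE = 10.0  # profit at perfect quality
    DEGRADATION = 0.02       # fraction of profit lost per cycle of wear

    def avg_benefit(interval):
        """Average profit per cycle when maintaining every `interval` cycles."""
        profit = sum(PROFIT_PER_CYCLE * max(0.0, 1 - DEGRADATION * k)
                     for k in range(interval))
        return (profit - MAINT_COST) / interval

    # Sweep candidate intervals and keep the most profitable one.
    best_interval = max(range(1, 200), key=avg_benefit)
    ```

    Under these numbers the optimum lands between the two failure modes: short intervals are dominated by maintenance cost, long ones by degraded quality. Online monitoring, as proposed in the paper, replaces the fixed degradation model with measured values so the optimum can shift over time.
    
    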

    Directed real-world learned exploration

    Abstract: Automated Guided Vehicles (AGVs) are omnipresent and can carry out various kinds of preprogrammed tasks. Unfortunately, a lot of manual configuration is still required to make these systems operational, and the configuration must be redone whenever the environment or task changes. As an alternative to current inflexible methods, we employ a learning-based method to perform directed exploration of a previously unseen environment. Instead of relying on handcrafted heuristic representations, the agent learns its own environmental representation through its embodiment. Our method offers loose coupling between the Reinforcement Learning (RL) agent, which is trained in simulation, and a separate task module trained on real-world images. The uncertainty of the task module is used to direct the exploration behaviour. As an example, we use a warehouse inventory task and show how directed exploration can improve task performance through active data collection. We also propose a novel environment representation to efficiently tackle the sim2real gap in both sensing and actuation. We empirically evaluate the approach both in simulated environments and in a real-world warehouse.
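    Uncertainty-directed exploration of the kind described above can be sketched as steering toward the location where the task module is least confident, for example by ranking candidate goals by the entropy of the module's predicted class distribution. The waypoint names and probabilities below are invented, and entropy is just one plausible uncertainty measure:

    ```python
    from math import log

    def entropy(probs):
        """Shannon entropy (nats) of a predicted class distribution;
        higher entropy = less confident task module."""
        return -sum(p * log(p) for p in probs if p > 0)

    # Hypothetical per-waypoint class predictions from the task module
    # (e.g. an inventory-item classifier in the warehouse example).
    predictions = {
        "rack_A": [0.95, 0.03, 0.02],  # confident -> low exploration value
        "rack_B": [0.40, 0.35, 0.25],  # uncertain -> explore here
        "rack_C": [0.80, 0.15, 0.05],
    }

    # Direct the RL agent's exploration to the most uncertain waypoint.
    next_goal = max(predictions, key=lambda w: entropy(predictions[w]))
    ```

    Visiting the high-entropy waypoint collects exactly the images the task module needs most, which is the active-data-collection loop the abstract refers to.
    
    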