15 research outputs found

    A survey on generative adversarial networks for imbalance problems in computer vision tasks

    Any computer vision application starts with acquiring images and data, followed by preprocessing and pattern recognition steps to perform a task. When the acquired images are highly imbalanced or inadequate, the desired task may not be achievable. Unfortunately, the occurrence of imbalance problems in acquired image datasets for certain complex real-world problems such as anomaly detection, emotion recognition, medical image analysis, fraud detection, metallic surface defect detection, disaster prediction, etc., is inevitable. The performance of computer vision algorithms can deteriorate significantly when the training dataset is imbalanced. In recent years, Generative Adversarial Networks (GANs) have gained immense attention from researchers across a variety of application domains due to their capability to model complex real-world image data. Importantly, GANs can not only be used to generate synthetic images; their adversarial learning idea has also shown good potential for restoring balance in imbalanced datasets. In this paper, we examine the most recent developments in GAN-based techniques for addressing imbalance problems in image data. The real-world challenges and implementations of synthetic image generation based on GANs are extensively covered in this survey. Our survey first introduces various imbalance problems in computer vision tasks and their existing solutions, and then examines key concepts such as deep generative image models and GANs. After that, we propose a taxonomy that summarizes GAN-based techniques for addressing imbalance problems in computer vision tasks into three major categories: (1) image-level imbalances in classification, (2) object-level imbalances in object detection, and (3) pixel-level imbalances in segmentation tasks. We elaborate on the imbalance problems of each group and present the corresponding GAN-based solutions. Readers will understand how GAN-based techniques can handle imbalance problems and boost the performance of computer vision algorithms.
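    As a hedged illustration of the balancing idea summarized above, the minimal PyTorch sketch below samples a class-conditional generator to top up a minority class until it matches the majority class; the generator architecture, `latent_dim`, and `oversample_minority` are illustrative assumptions, not a method taken from the surveyed papers.

```python
# Hedged sketch: one common way GAN-based methods restore class balance is to
# train a class-conditional generator on the imbalanced dataset and then sample
# extra minority-class images until all classes have roughly equal counts.
import torch
import torch.nn as nn

latent_dim, n_classes, img_size = 100, 2, 64

class ConditionalGenerator(nn.Module):
    """Maps (noise, class label) -> synthetic image; architecture is a placeholder."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Fuse noise with a learned label embedding so sampling is class-controllable.
        h = z * self.label_emb(labels)
        return self.net(h).view(-1, 1, img_size, img_size)

def oversample_minority(generator, class_counts, minority_class):
    """Generate enough minority-class samples to match the largest class."""
    deficit = max(class_counts.values()) - class_counts[minority_class]
    z = torch.randn(deficit, latent_dim)
    labels = torch.full((deficit,), minority_class, dtype=torch.long)
    with torch.no_grad():
        return generator(z, labels)          # tensor of synthetic images

# Example: a defect class with 50 images vs. 1000 non-defect images.
gen = ConditionalGenerator()                  # assume already trained adversarially
synthetic = oversample_minority(gen, {0: 1000, 1: 50}, minority_class=1)
print(synthetic.shape)                        # -> torch.Size([950, 1, 64, 64])
```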

    Intraclass image augmentation for defect detection using generative adversarial neural networks

    Surface defect identification based on computer vision algorithms often suffers from inadequate generalization ability due to large intraclass variation. Diversity in lighting conditions, noise components, defect size, shape, and position makes the problem challenging. To address this, this paper develops a pixel-level image augmentation method based on image-to-image translation with generative adversarial neural networks (GANs) conditioned on fine-grained labels. The GAN model proposed in this work, referred to as Magna-Defect-GAN, is capable of controlling the image generation process and producing image samples that are highly realistic in terms of variations. Firstly, a surface defect dataset based on the magnetic particle inspection (MPI) method is acquired in a controlled environment. Then, the Magna-Defect-GAN model is trained, and new synthetic image samples with large intraclass variations are generated. These synthetic image samples artificially inflate the training dataset in terms of intraclass diversity. Finally, the enlarged dataset is used to train a defect identification model. Experimental results demonstrate that the Magna-Defect-GAN model can generate realistic, high-resolution surface defect images up to a resolution of 512 × 512 in a controlled manner. We also show that this augmentation method can boost accuracy and be easily adapted to any other surface defect identification model.
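    The sketch below outlines, under stated assumptions, how such GAN-generated samples can inflate the training set: a stand-in `generator` (taking the place of the trained Magna-Defect-GAN) maps source images and fine-grained condition codes to new samples, which are then concatenated with the real data; the helper names, label encoding, and data shapes are hypothetical.

```python
# Hedged sketch of the augmentation workflow described above: a trained
# image-to-image generator produces synthetic defect variants from real images
# plus fine-grained condition codes, and the synthetic samples are appended to
# the real training set before training the defect classifier.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def synthesize_variants(generator, source_images, condition_codes):
    """Produce one synthetic variant per (image, condition) pair."""
    with torch.no_grad():
        return generator(source_images, condition_codes)

# Real MPI images and integer-coded fine-grained conditions (illustrative shapes).
real_images = torch.rand(20, 1, 512, 512)
real_labels = torch.randint(0, 4, (20,))
conditions  = torch.randint(0, 4, (20,))

generator = lambda imgs, cond: imgs  # placeholder for the trained image-to-image GAN
fake_images = synthesize_variants(generator, real_images, conditions)

# The enlarged dataset (real + synthetic) then trains the defect identification model.
train_set = ConcatDataset([
    TensorDataset(real_images, real_labels),
    TensorDataset(fake_images, conditions),   # synthetic samples keep their condition as label
])
print(len(train_set))  # -> 40
```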

    Attention guided multi-task learning for surface defect identification

    Surface defect identification is an essential task in the industrial quality control process, in which visual checks are conducted on a manufactured product to ensure that it meets quality standards. Convolutional Neural Network (CNN) based surface defect identification methods have proven to outperform traditional image processing techniques. However, real-world surface defect datasets are limited in size due to the expensive data generation process and the rare occurrence of defects. To address this issue, this paper presents a method for exploiting auxiliary information beyond the primary labels to improve the generalization ability of surface defect identification models. Considering the correlation between pixel-level segmentation masks, object-level bounding boxes and global image-level classification labels, we argue that jointly learning features of the related tasks can improve the performance of surface defect identification. This paper proposes a framework named Defect-Aux-Net, based on multi-task learning with attention mechanisms, that exploits the rich additional information from related tasks with the goal of simultaneously improving the robustness and accuracy of CNN-based surface defect identification. We conducted a series of experiments with the proposed framework. The experimental results showed that the proposed method can significantly improve the performance of state-of-the-art models, achieving an overall accuracy of 97.1%, a Dice score of 0.926 and an mAP of 0.762 on the defect classification, segmentation and detection tasks, respectively.
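    A minimal sketch of the multi-task idea is given below, assuming a shared backbone with a simple attention gate feeding classification, segmentation and detection heads whose losses are summed with illustrative weights; the layer sizes, loss weights and attention block are placeholders rather than the Defect-Aux-Net design.

```python
# Hedged sketch of multi-task learning for defect identification: one shared CNN
# backbone feeds a classification head, a segmentation head, and a detection
# head, and their losses are combined so the related tasks regularize each other.
import torch
import torch.nn as nn

class MultiTaskDefectNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(             # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.attention = nn.Sequential(            # simple spatial attention gate
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, n_classes))
        self.seg_head = nn.Conv2d(32, 1, 1)        # per-pixel defect mask logits
        self.box_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 4))  # one box (x, y, w, h) per image

    def forward(self, x):
        feats = self.backbone(x)
        feats = feats * self.attention(feats)      # attention-weighted shared features
        return self.cls_head(feats), self.seg_head(feats), self.box_head(feats)

# Joint loss over the three related tasks (weights are illustrative).
model = MultiTaskDefectNet()
imgs   = torch.rand(8, 1, 128, 128)
labels = torch.randint(0, 4, (8,))
masks  = torch.randint(0, 2, (8, 1, 128, 128)).float()
boxes  = torch.rand(8, 4)

cls_out, seg_out, box_out = model(imgs)
loss = (nn.CrossEntropyLoss()(cls_out, labels)
        + 0.5 * nn.BCEWithLogitsLoss()(seg_out, masks)
        + 0.5 * nn.SmoothL1Loss()(box_out, boxes))
loss.backward()
```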

    Vision-Based Automation of Laser Cutting of Patterned Fabrics


    MAINBOT - Mobile robots for inspection and maintenance in extensive industrial plants

    The MAINBOT project is developing service robot applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (using climbing robots). MAINBOT aims at using already available robotic solutions to deploy innovative systems that fulfil the project's industrial objectives: to provide a means to help measure several physical parameters at multiple points by autonomous robots that are able to navigate and climb structures while handling sensors or special non-destructive testing equipment. MAINBOT will validate the proposed solutions in two solar plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g. a 50 MW thermal solar plant with seven hours of storage, 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m tower), the variability of conditions (outdoor, day-night), safety requirements, etc. The objective is to increase the efficiency of the installation by improving the inspection procedures and technologies. Robot capabilities are developed at different levels: (1) Simulation: realistic testing environments are created in order to validate the algorithms developed in the project using the available robots, sensors and application environments. (2) Autonomous navigation: hybrid (topological-metric) localization and planning algorithms are integrated in order to manage the huge extensions (see the sketch below). (3) Manipulation: robot arm motion planning and control algorithms are developed for positioning sensing equipment with accuracy and collision avoidance. (4) Interoperability: mechanisms to integrate the heterogeneous systems taking part in the robot operation, from third-party inspection equipment to end-user maintenance planning. (5) Non-destructive inspection: detection algorithms based on eddy current and thermography are developed in order to provide automatic inspection abilities to the robots.
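    The navigation sketch referenced above illustrates, under simplifying assumptions, one way a hybrid topological-metric planner can be organised: a coarse Dijkstra search over a waypoint graph of the plant, with each graph edge then refined into a metric path by a local planner (here a straight-line placeholder); the waypoints, edges and step size are invented for illustration and are not MAINBOT's actual implementation.

```python
# Hedged sketch of hybrid (topological-metric) navigation: plan globally over a
# sparse waypoint graph, then refine each traversed edge into metric poses.
import heapq

waypoints = {"dock": (0, 0), "row_A": (50, 0), "row_B": (50, 120), "tower": (200, 120)}
edges = {"dock": ["row_A"], "row_A": ["dock", "row_B"],
         "row_B": ["row_A", "tower"], "tower": ["row_B"]}

def dist(a, b):
    (ax, ay), (bx, by) = waypoints[a], waypoints[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def topological_plan(start, goal):
    """Dijkstra over the waypoint graph (global, coarse plan)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            heapq.heappush(queue, (cost + dist(node, nxt), nxt, path + [nxt]))
    return None

def metric_refine(a, b, step=10.0):
    """Placeholder local planner: straight-line samples between two waypoints."""
    (ax, ay), (bx, by) = waypoints[a], waypoints[b]
    n = max(1, int(dist(a, b) // step))
    return [(ax + (bx - ax) * i / n, ay + (by - ay) * i / n) for i in range(n + 1)]

route = topological_plan("dock", "tower")
trajectory = [p for a, b in zip(route, route[1:]) for p in metric_refine(a, b)]
print(route, len(trajectory))
```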

    Analyzing Human Robot Collaboration with the Help of 3D Cameras

    Part 10: The Operator 4.0 and the Internet of Things, Services and People. Recent developments in robotics allow the design of work systems with enhanced human-robot collaboration (HRC) for assembly tasks. Productivity improvements are a common aim for companies that look into the implementation of HRC. To harvest the full productivity potential of these work systems, an analysis of the HRC work processes is essential. However, a dedicated method for the analysis of productivity in HRC is missing. This paper presents an approach that uses 3D cameras to observe the employee in HRC and links this information to robot states. The resulting analysis aims at improving the productivity of the work system, e.g. by identifying and reducing balancing losses in HRC. The method tracks the movements of the employees in the HRC area and matches them to the corresponding robot activities.
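    A hedged sketch of such an analysis step is given below: timestamped operator positions from a 3D camera are matched against a logged sequence of robot states, and the time the robot spends waiting while the operator is present at the shared station is accumulated as a candidate balancing loss; the state names, zone test and sample data are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: align human tracking data with robot state logs and accumulate
# the time in which one partner waits for the other (a balancing loss candidate).
from bisect import bisect_right

# (timestamp_s, robot_state) as it might be exported from the robot controller.
robot_log = [(0.0, "working"), (4.0, "waiting_for_human"), (7.5, "working"), (12.0, "idle")]
# (timestamp_s, (x, y, z)) operator positions from the 3D camera, 0.5 s sampling.
human_track = [(t * 0.5, (1.2 + 0.01 * t, 0.4, 1.1)) for t in range(30)]

def robot_state_at(t):
    """Most recent robot state at time t (step-wise interpolation of the log)."""
    idx = bisect_right([ts for ts, _ in robot_log], t) - 1
    return robot_log[max(idx, 0)][1]

def in_assembly_zone(pos, x_range=(1.0, 1.5)):
    """Crude zone test: the operator is at the shared station if x lies in a band."""
    return x_range[0] <= pos[0] <= x_range[1]

# Accumulate time where the robot waits although the operator is present:
# a candidate balancing loss to redistribute between human and robot.
dt, loss = 0.5, 0.0
for t, pos in human_track:
    if robot_state_at(t) == "waiting_for_human" and in_assembly_zone(pos):
        loss += dt
print(f"balancing loss: {loss:.1f} s over {human_track[-1][0]:.1f} s observed")
```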

    Non-destructive inspection in industrial equipment using robotic mobile manipulation

    The MAINBOT project has developed service-robot-based applications to autonomously execute inspection tasks in extensive industrial plants, on equipment that is arranged horizontally (using ground robots) or vertically (using climbing robots). The industrial objective has been to provide a means to help measure several physical parameters at multiple points by autonomous robots that are able to navigate and climb structures while handling non-destructive testing sensors. MAINBOT has validated the solutions in two solar thermal plants (cylindrical-parabolic collectors and central tower), which are very demanding from a mobile manipulation point of view, mainly due to their extension (e.g. a 50 MW thermal solar plant with 400 hectares, 400,000 mirrors, 180 km of absorber tubes, and a 140 m tower), the variability of conditions (outdoor, day-night), safety requirements, etc. Once the technology was validated in simulation, the system was deployed in real setups and different validation tests were carried out. In this paper, two of the achievements related to the ground mobile inspection system are presented: (1) autonomous navigation, with localization and planning algorithms that manage navigation over huge extensions, and (2) non-destructive inspection operations, with thermography-based detection algorithms that provide automatic inspection abilities to the robots.
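    The sketch below gives a hedged example of a thermography-based detection step of the kind mentioned in item (2): a thermal image is compared against a robust baseline (median/MAD), and pixels deviating by more than a scaled threshold are flagged as candidate defects; the synthetic thermogram and threshold are illustrative, not the project's actual algorithm.

```python
# Hedged sketch of thermographic anomaly detection on an absorber-tube image:
# flag pixels whose temperature deviates from a robust baseline by more than
# a few scaled MADs.
import numpy as np

def detect_hot_spots(thermogram, k=4.0):
    """Return a boolean defect mask using a median/MAD threshold."""
    baseline = np.median(thermogram)
    mad = np.median(np.abs(thermogram - baseline)) + 1e-6
    return np.abs(thermogram - baseline) > k * 1.4826 * mad

# Synthetic 100x400 "absorber tube" thermogram with one injected hot spot.
rng = np.random.default_rng(0)
temps = 180.0 + rng.normal(0.0, 0.8, size=(100, 400))
temps[40:45, 200:210] += 15.0                 # simulated coating defect

mask = detect_hot_spots(temps)
ys, xs = np.nonzero(mask)
print(f"{mask.sum()} anomalous pixels, centred near ({ys.mean():.0f}, {xs.mean():.0f})")
```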

    PIROS: Cooperative, Safe and Reconfigurable Robotic Companion for CNC Pallets Load/Unload Stations

    Handling and assembly applications with small batch sizes and a high production mix require high adaptability, reconfigurability and flexibility. Thus, human-robot collaboration could be an effective solution to ensure production performance and operator satisfaction. This scenario requires human-awareness at different levels of the software framework, from robot control to task planning. The goal is to assign high added value activities to the human as much as possible, while the robot has to be able to substitute the human when needed. Team PIROS addresses this goal by designing an IEC 61499/ROS-based architecture which integrates safety assessment, advanced force control, human-aware motion planning, gesture recognition, and task scheduling.
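    As a hedged illustration of the allocation principle stated above (high added value work to the human, robot substitution when needed), the sketch below greedily assigns tasks given a human-availability flag; the task list, value scores and threshold are invented, and the real PIROS scheduler is built on IEC 61499/ROS components rather than this toy logic.

```python
# Hedged sketch of human-aware task allocation: prefer the human for high added
# value work, let the robot substitute when the human is unavailable.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    added_value: float        # relative value of human dexterity/judgement here
    robot_capable: bool       # can the robot substitute for the human?

tasks = [
    Task("fine deburring check", 0.9, robot_capable=True),
    Task("pallet load", 0.3, robot_capable=True),
    Task("pallet unload", 0.3, robot_capable=True),
    Task("visual final inspection", 0.8, robot_capable=False),
]

def allocate(tasks, human_available):
    """Greedy allocation: human takes the highest-value work, robot covers the rest."""
    plan = {}
    for task in sorted(tasks, key=lambda t: t.added_value, reverse=True):
        if human_available and (task.added_value >= 0.5 or not task.robot_capable):
            plan[task.name] = "human"
        elif task.robot_capable:
            plan[task.name] = "robot"
        else:
            plan[task.name] = "deferred"      # neither resource can take it now
    return plan

print(allocate(tasks, human_available=True))
print(allocate(tasks, human_available=False))   # robot substitutes where it can
```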