3 research outputs found
PLC Multi-robot Integration via Ethernet for Human Operated Quality Sampling
In automation, quality control inspection is a critical requirement for ensuring product standards. The goal of this work is to ensure product quality without interrupting the production line flow. The multi-robot system presented connects a programmable logic controller (PLC), acting as the main controller, to a conveyor belt and two FANUC industrial robotic arms via EtherNet/IP. Human interaction is implemented to pick a workpiece from the moving conveyor and return it with a quality label. This label is used by the PLC to execute the correct robot action: either returning the inspected part to the conveyor or discarding it into the rejection bin. The operator uses a custom control panel connected to the PLC, which controls the conveyor and robot actions. The results show the feasibility of the presented multi-robot automation line, controlled by a PLC, that allows human-machine interaction to enable manual quality inspection during production. This paper details a student project developed in the Advanced Programmable Logic Controllers class, part of the master's program in mechatronics. Students work in groups in a creative setting, where they learn to integrate various automation technologies and to write scientific publications.
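To make the label-driven routing concrete, the minimal Python sketch below illustrates the decision the PLC would make once the operator returns a part with a quality label. The label values, action names, and the `route_inspected_part` helper are hypothetical illustrations, not the authors' PLC program or FANUC interface.

```python
# Hypothetical sketch of the label-driven routing described in the abstract.
# Labels and action names are illustrative assumptions, not the authors' code.
from enum import Enum


class QualityLabel(Enum):
    PASS = "pass"      # operator accepted the inspected workpiece
    REJECT = "reject"  # operator flagged the workpiece as defective


def route_inspected_part(label: QualityLabel) -> str:
    """Return the robot action the PLC would request for the inspected part."""
    if label is QualityLabel.PASS:
        return "RETURN_TO_CONVEYOR"    # place the part back on the moving belt
    return "MOVE_TO_REJECTION_BIN"     # discard the defective part


if __name__ == "__main__":
    for label in (QualityLabel.PASS, QualityLabel.REJECT):
        print(label.value, "->", route_inspected_part(label))
```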
Augmented Reality and Artificial Intelligence in industry: Trends, tools, and future challenges
Augmented Reality (AR) is an augmented depiction of reality formed by overlaying digital information on an image of objects viewed through a device. Artificial Intelligence (AI) techniques have experienced unprecedented growth and are being applied in various industries. The combination of AR and AI is the next prominent direction in the coming years, with many industries and academic groups recognizing the importance of their adoption. With advancements in the silicon industry that push the boundaries of Moore's law, processors will become less expensive, more efficient, and power-optimized in the forthcoming years. These advances provide strong support for an AR boom, and with the help of AI there is excellent potential for smart industries to increase production speed and improve workforce training, manufacturing, error handling, assembly, and packaging. In this work, we provide a systematic review of recent advances, tools, techniques, and platforms of AI-empowered AR, along with the challenges of using AI in AR applications. This paper will serve as a guideline for future research in the domain of AI-assisted AR in industrial applications.
U-PEN++: Redesigning U-PEN Architecture with Multi-Head Attention for Retinal Image Segmentation
In the era of ever-increasing demand for computing power, deep learning (DL) algorithms are becoming critical for success in various domains, such as making accessible and processing the vast amounts of data present in the physical, digital, and biological realms. Medical image segmentation is one such application of DL in the healthcare sector. The segmentation of medical images, such as retinal images, enables an efficient analytical process for diagnostics and medical procedures. To segment regions of interest in medical images, U-Net has been primarily used as the baseline DL architecture, consisting of contracting and expanding paths for capturing semantic features and precise localization. Although several variants of U-Net have shown promise, limitations such as hardware memory requirements and inaccurate localization of nonstandard shapes still need to be addressed effectively. In this work, we propose U-PEN++, which reconfigures the previously developed U-PEN (U-Net with Progressively Expanded Neuron) architecture by introducing a new module named Progressively Expanded Neuron with Attention (PEN-A), consisting of a Maclaurin series expansion of a nonlinear function and a multi-head attention mechanism. The proposed PEN-A module enriches the feature representation by capturing more relevant contextual information than the U-PEN model. Moreover, the proposed model removes excessive hidden layers, resulting in fewer trainable parameters than U-PEN. Experimental analysis performed on the DRIVE and CHASE datasets demonstrated more effective segmentation and better parameter efficiency of the proposed U-PEN++ architecture for retinal image segmentation tasks when compared to the U-Net, U-PEN, and Residual U-Net architectures.
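As a rough illustration of the idea behind the PEN-A module, the PyTorch sketch below expands feature maps with a few Maclaurin series terms of a nonlinearity and then applies multi-head self-attention over spatial positions. The choice of tanh terms, the 1x1 fusion convolution, and the channel and head counts are assumptions for illustration, not the authors' U-PEN++ implementation.

```python
# Rough sketch (assumed tanh Maclaurin terms, assumed layer sizes) of a
# PEN-A-style block; not the authors' exact U-PEN++ code.
import torch
import torch.nn as nn


class PENAttention(nn.Module):
    """Progressively expanded features followed by multi-head self-attention."""

    def __init__(self, channels: int, num_terms: int = 3, num_heads: int = 4):
        super().__init__()
        assert 1 <= num_terms <= 3, "only the first three tanh terms are coded"
        self.num_terms = num_terms
        # Fuse the concatenated Maclaurin terms back to the original width.
        self.fuse = nn.Conv2d(channels * num_terms, channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First Maclaurin terms of tanh(x): x - x^3/3 + 2x^5/15 - ...
        terms = [x, -x.pow(3) / 3.0, 2.0 * x.pow(5) / 15.0][: self.num_terms]
        expanded = self.fuse(torch.cat(terms, dim=1))        # (B, C, H, W)

        b, c, h, w = expanded.shape
        seq = expanded.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attended, _ = self.attn(seq, seq, seq)               # self-attention
        return attended.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = PENAttention(channels=16)
    out = block(torch.randn(1, 16, 32, 32))
    print(out.shape)  # torch.Size([1, 16, 32, 32])
```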