
    On inferring intentions in shared tasks for industrial collaborative robots

    Inferring human operators' actions in shared collaborative tasks plays a crucial role in enhancing the cognitive capabilities of industrial robots. In these incipient collaborative robotic applications, humans and robots must share not only space but also forces and the execution of a task. In this article, we present a robotic system that is able to identify different human intentions and to adapt its behavior accordingly, using force data alone. To accomplish this aim, three major contributions are presented: (a) force-based recognition of the operator's intent, (b) a force-based dataset of physical human-robot interaction, and (c) validation of the whole system in a scenario inspired by a realistic industrial application. This work is an important step towards a more natural and user-friendly manner of physical human-robot interaction in scenarios where humans and robots collaborate in the accomplishment of a task.
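    The abstract does not spell out the recognition pipeline, so the sketch below only illustrates the general idea of classifying operator intent from windows of force/torque data; the window length, statistical features, classifier and intent classes are assumptions for illustration, not the paper's method.

```python
# Minimal, illustrative sketch of force-based intent classification.
# Assumes windows of 6-axis force/torque samples (Fx, Fy, Fz, Tx, Ty, Tz);
# all data and class labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(window: np.ndarray) -> np.ndarray:
    """Summarize a (T, 6) force/torque window with simple statistics."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.max(axis=0) - window.min(axis=0)])

# Hypothetical data: 500 windows of 100 samples each, 3 intent classes
# (e.g. "hand over", "hold still", "release").
rng = np.random.default_rng(0)
windows = rng.normal(size=(500, 100, 6))
labels = rng.integers(0, 3, size=500)

X = np.stack([window_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```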

    A Joint Learning Approach to Face Detection in Wavelet Compressed Domain

    Face detection has been an important and active research topic in computer vision and image processing. In recent years, learning-based face detection algorithms have prevailed, with successful applications. In this paper, we propose a new face detection algorithm that works directly in the wavelet compressed domain. To simplify the processes of image decompression and feature extraction, we modify the AdaBoost learning algorithm to select a set of complementary joint-coefficient classifiers and integrate them to achieve optimal face detection. Since face detection in the wavelet compressed domain is restricted by the limited discrimination power of the designated feature space, the proposed learning mechanism is developed to achieve the best discrimination from the restricted feature space. The major contributions of the proposed AdaBoost face detection learning algorithm include feature space warping, joint feature representation, ID3-like plane quantization, and weak probabilistic classifiers, which dramatically increase the discrimination power of the face classifier. Experimental results on the CBCL benchmark and the MIT+CMU real image dataset show that the proposed algorithm can detect faces in the wavelet compressed domain accurately and efficiently.
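    As background for the modified AdaBoost learning described above, the sketch below shows a standard discrete AdaBoost loop with decision stumps as weak classifiers; the paper's joint-coefficient classifiers, feature space warping and ID3-like quantization are not reproduced here, and the toy data is synthetic.

```python
# Standard discrete AdaBoost with decision stumps: each round re-weights the
# samples so the next weak classifier focuses on previous mistakes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=20):
    """Labels y must be in {-1, +1}. Returns (stumps, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # vote weight of this stump
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # toy labels, roughly separable
    stumps, alphas = adaboost_fit(X, y)
    print("training accuracy:", (adaboost_predict(X, stumps, alphas) == y).mean())
```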

    Artificial intelligence for superconducting transformers

    Artificial intelligence (AI) techniques are currently widely used across the electrical engineering sector, owing to the advantages they offer for smarter manufacturing and for the accurate and efficient operation of electric devices. Power transformers are a vital and expensive asset in the power network, where their consistent and fault-free operation greatly impacts the reliability of the whole system. The superconducting transformer has the potential to fully modernize the power network in the near future thanks to its unmatched advantages, including much lighter weight, more compact size, much lower loss, and higher efficiency compared with conventional oil-immersed counterparts. In this article, we look into the prospect of using AI to revolutionize superconducting transformer technology in many aspects of its design, operation, condition monitoring, maintenance, and asset management. We believe that this article offers a roadmap for what could be and needs to be done in the current decade (2020-2030) to integrate AI into superconducting transformer technology.

    Isolation forests and deep autoencoders for industrial screw tightening anomaly detection

    Within the context of Industry 4.0, quality assessment procedures using data-driven techniques are becoming more critical due to the generation of massive amounts of production data. In this paper, we address the detection of abnormal screw tightening processes, which is a key industrial task. Since labeling is costly, requiring a manual effort, we focus on unsupervised detection approaches. In particular, we assume a computationally light low-dimensional problem formulation based on angle–torque pairs. Our work focuses on two unsupervised machine learning (ML) algorithms: isolation forest (IForest) and a deep learning autoencoder (AE). Several computational experiments were conducted, assuming distinct datasets and a realistic rolling window evaluation procedure. First, we compared the two ML algorithms with two other methods, a local outlier factor method and a supervised Random Forest, on older data related to two production days collected in November 2020. Since competitive results were obtained, during a second stage we further compared the AE and IForest methods on a more recent and larger dataset (from February to March 2021, totaling 26.9 million observations related to three distinct assembled products). Both anomaly detection methods obtained excellent quality class discrimination (higher than 90%) under a realistic rolling window with several training and testing updates. Turning to the computational effort, the AE is much lighter than the IForest for training (around 2.7 times faster) and inference (requiring 3.0 times less computation). This AE property is valuable within this industrial domain, since it tends to generate big data. Finally, using the anomaly detection estimates, we developed an interactive visualization tool that provides explainable artificial intelligence (XAI) knowledge for the human operators, helping them to better identify the angle–torque regions associated with screw tightening failures. This work is supported by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project n 39479; Funding Reference: POCI-01-0247-FEDER-39479]. The work of Diogo Ribeiro is supported by the grant FCT PD/BDE/135105/2017.
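    A minimal sketch of the two unsupervised detectors on angle–torque pairs follows. It uses scikit-learn's IsolationForest and, as a lightweight stand-in for the paper's deep autoencoder, an MLPRegressor trained to reconstruct its input; the synthetic data, network size and score definitions are illustrative assumptions, and the rolling-window protocol is omitted.

```python
# Unsupervised anomaly scoring of angle-torque pairs: isolation forest plus a
# small reconstruction-based autoencoder stand-in. All data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal = rng.normal([45.0, 2.0], [5.0, 0.2], size=(5000, 2))   # angle (deg), torque (Nm)
faulty = rng.normal([45.0, 0.5], [5.0, 0.2], size=(50, 2))     # torque too low
X_train, X_test = normal[:4000], np.vstack([normal[4000:], faulty])

scaler = StandardScaler().fit(X_train)
Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

# Isolation forest: lower score_samples means more anomalous, so negate it.
iforest = IsolationForest(n_estimators=100, random_state=0).fit(Xtr)
if_scores = -iforest.score_samples(Xte)

# Autoencoder stand-in: reconstruction error as the anomaly score.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=2000,
                  random_state=0).fit(Xtr, Xtr)
ae_scores = np.mean((ae.predict(Xte) - Xte) ** 2, axis=1)

print("mean IForest score (normal vs faulty):",
      if_scores[:1000].mean(), if_scores[1000:].mean())
print("mean AE score (normal vs faulty):",
      ae_scores[:1000].mean(), ae_scores[1000:].mean())
```

    In practice, an anomaly threshold would then be chosen on these scores, for example from a quantile of the scores observed on normal training data.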

    Textile Taxonomy and Classification Using Pulling and Twisting

    Identification of textile properties is an important milestone toward advanced robotic manipulation tasks that consider interaction with clothing items, such as assisted dressing, laundry folding, automated sewing, textile recycling and reusing. Despite the abundance of work considering this class of deformable objects, many open problems remain. These relate to the choice and modelling of the sensory feedback as well as the control and planning of the interaction and manipulation strategies. Most importantly, there is no structured approach for studying and assessing different approaches that may bridge the gap between the robotics community and the textile production industry. To this end, we outline a textile taxonomy considering fiber types and production methods commonly used in the textile industry. We devise datasets according to the taxonomy, and study how robotic actions, such as pulling and twisting of the textile samples, can be used for classification. We also provide important insights from the perspective of visualization and interpretability of the gathered data.
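    The abstract does not describe a specific classifier, but the idea of combining the pulling and twisting interactions for classification can be sketched as below; the force-curve descriptors, class names and k-nearest-neighbour choice are illustrative assumptions, not the paper's pipeline.

```python
# Classify textile samples by concatenating simple descriptors of the force
# curves recorded during a pull and a twist action. All signals are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def curve_descriptor(force_curve: np.ndarray) -> np.ndarray:
    """Peak force, mean force, and a crude initial-slope estimate."""
    return np.array([force_curve.max(), force_curve.mean(),
                     force_curve[:10].mean() - force_curve[0]])

rng = np.random.default_rng(3)
n_samples, curve_len = 120, 200
pull_curves = rng.random((n_samples, curve_len))    # stand-in pull recordings
twist_curves = rng.random((n_samples, curve_len))   # stand-in twist recordings
labels = rng.integers(0, 4, size=n_samples)         # e.g. cotton/wool/polyester/blend

X = np.hstack([np.stack([curve_descriptor(c) for c in pull_curves]),
               np.stack([curve_descriptor(c) for c in twist_curves])])
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```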

    Rapid mapping of digital integrated circuit logic gates via multi-spectral backside imaging

    Modern semiconductor integrated circuits are increasingly fabricated at untrusted third-party foundries. There now exist myriad security threats of malicious tampering at the hardware level, and hence a clear and pressing need for new tools that enable rapid, robust and low-cost validation of circuit layouts. Optical backside imaging offers an attractive platform, but its limited resolution and throughput cannot cope with the nanoscale sizes of modern circuitry and the need to image over a large area. We propose and demonstrate a multi-spectral imaging approach to overcome these obstacles by identifying key circuit elements on the basis of their spectral response. This obviates the need to directly image the nanoscale components that define them, thereby relaxing resolution and spatial sampling requirements by 1 and 2-4 orders of magnitude, respectively. Our results directly address critical security needs in the integrated circuit supply chain and highlight the potential of spectroscopic techniques to address fundamental resolution obstacles caused by the need to image ever-shrinking feature sizes in semiconductor integrated circuits.
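    The idea of identifying circuit elements by their spectral response can be illustrated with a simple nearest-reference classification over a multi-spectral image cube; the gate classes, band count and reference spectra below are hypothetical and not taken from the paper.

```python
# Per-pixel matching of a multi-spectral image cube against reference spectra:
# each pixel is assigned the class whose spectrum it is closest to.
import numpy as np

bands = 8
references = {                      # hypothetical per-class reflectance spectra
    "NAND": np.linspace(0.2, 0.8, bands),
    "NOR":  np.linspace(0.8, 0.2, bands),
    "fill": np.full(bands, 0.5),
}
names = list(references)
ref = np.stack([references[n] for n in names])            # (classes, bands)

cube = np.random.default_rng(2).random((64, 64, bands))   # stand-in image cube

# Nearest-reference classification by Euclidean distance in spectral space.
dist = np.linalg.norm(cube[:, :, None, :] - ref[None, None, :, :], axis=-1)
label_map = dist.argmin(axis=-1)                          # (64, 64) class indices
print({n: int((label_map == i).sum()) for i, n in enumerate(names)})
```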

    Analysis of the hands in egocentric vision: A survey

    Egocentric vision (a.k.a. first-person vision - FPV) applications have thrived over the past few years, thanks to the availability of affordable wearable cameras and large annotated datasets. The position of the wearable camera (usually mounted on the head) allows recording exactly what the camera wearers have in front of them, in particular their hands and manipulated objects. This intrinsic advantage enables the study of the hands from multiple perspectives: localizing hands and their parts within the images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on the hands using egocentric vision, categorizing the existing approaches into: localization (where are the hands or parts of them?); interpretation (what are the hands doing?); and application (e.g., systems that use egocentric hand cues to solve a specific problem). Moreover, a list of the most prominent datasets with hand-based annotations is provided.

    Quality assessment of manufactured ceramic work using digital signal processing

    Master's thesis. Mechanical Engineering. Faculty of Engineering. University of Porto. 199