
    A Synergistic Framework Leveraging Autoencoders and Generative Adversarial Networks for the Synthesis of Computational Fluid Dynamics Results in Aerofoil Aerodynamics

    In the realm of computational fluid dynamics (CFD), accurate prediction of aerodynamic behaviour plays a pivotal role in aerofoil design and optimization. This study proposes a novel approach that synergistically combines autoencoders and Generative Adversarial Networks (GANs) to generate CFD results. Our framework harnesses the intrinsic capabilities of autoencoders to encode aerofoil geometries into a compressed, informative 20-element vector representation. A conditional GAN then translates this vector into pressure-distribution plots for fixed wind velocity, angle of attack, and turbulence level specifications. The training process uses a carefully curated dataset acquired from the JavaFoil software, covering a comprehensive range of aerofoil geometries. The proposed approach shows strong potential to reduce the time and cost associated with aerodynamic prediction, enabling efficient evaluation of aerofoil performance. The findings contribute to the advancement of computational techniques in fluid dynamics and pave the way for enhanced design and optimization processes in aerodynamics. Comment: 9 pages, 11 figures.
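
    A minimal sketch of the pipeline described above, assuming a PyTorch implementation: an encoder compresses aerofoil surface coordinates into a 20-dimensional code, and a conditional generator maps that code plus the flow conditions to a pressure distribution. Layer widths, input resolution, and the number of chordwise output stations are illustrative assumptions; the adversarial discriminator and training loop are omitted.

        # Hedged sketch only; sizes and names are placeholders, not the authors' configuration.
        import torch
        import torch.nn as nn

        class GeometryEncoder(nn.Module):
            """Compresses aerofoil surface coordinates into a 20-dimensional code."""
            def __init__(self, n_points=200, latent_dim=20):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_points * 2, 256), nn.ReLU(),
                    nn.Linear(256, 64), nn.ReLU(),
                    nn.Linear(64, latent_dim),
                )

            def forward(self, xy):                  # xy: (batch, n_points, 2)
                return self.net(xy.flatten(1))      # -> (batch, 20)

        class ConditionalGenerator(nn.Module):
            """Maps the 20-d geometry code plus flow conditions to a pressure distribution."""
            def __init__(self, latent_dim=20, n_cond=3, n_out=100):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(latent_dim + n_cond, 128), nn.ReLU(),
                    nn.Linear(128, 256), nn.ReLU(),
                    nn.Linear(256, n_out),          # Cp sampled at n_out chord stations
                )

            def forward(self, z, cond):             # cond: wind velocity, angle of attack, turbulence
                return self.net(torch.cat([z, cond], dim=1))

        # Example forward pass with random inputs
        enc, gen = GeometryEncoder(), ConditionalGenerator()
        geometry = torch.randn(8, 200, 2)
        conditions = torch.randn(8, 3)
        cp = gen(enc(geometry), conditions)         # (8, 100) predicted pressure coefficients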

    Comparative Evaluation and Implementation of State-of-the-Art Techniques for Anomaly Detection and Localization in the Continual Learning Framework

    The capability of anomaly detection (AD) to detect defects in industrial environments using only normal samples has attracted significant attention. However, traditional AD methods concentrate on the current set of examples, leading to the significant drawback of catastrophic forgetting when faced with new tasks. Given the limited flexibility of these methods and the challenges posed by real-world industrial scenarios, there is an urgent need to strengthen the adaptive capabilities of AD models. This thesis therefore introduces a unified framework that integrates continual learning (CL) and anomaly detection (AD) to achieve anomaly detection in continual learning (ADCL). To evaluate the effectiveness of the framework, a comparative analysis assesses three feature-based methods for the AD task: Coupled-Hypersphere-Based Feature Adaptation (CFA), the Student-Teacher approach, and PatchCore. Furthermore, the framework uses replay techniques to enable continual learning. A comprehensive evaluation is conducted using a range of metrics to compare the techniques and identify the one with superior results. To validate the proposed approach, the MVTec AD dataset, consisting of real-world images with pixel-level anomalies, is used. This dataset serves as a reliable benchmark for anomaly detection in the context of continual learning, providing a solid foundation for further advancements in the field.
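
    A minimal sketch of the replay mechanism that underlies the ADCL setting, under illustrative assumptions: a bounded buffer keeps a uniformly sampled memory of past normal images, and each new task is fitted on fresh data mixed with replayed data. The detector interface (a fit() method over normal samples) is a stand-in for CFA, Student-Teacher, or PatchCore, not the thesis API.

        import random

        class ReplayBuffer:
            """Bounded memory of past normal samples, filled by reservoir sampling."""
            def __init__(self, capacity=200):
                self.capacity = capacity
                self.samples = []
                self.n_seen = 0

            def add(self, batch):
                for x in batch:
                    self.n_seen += 1
                    if len(self.samples) < self.capacity:
                        self.samples.append(x)
                    else:
                        j = random.randrange(self.n_seen)
                        if j < self.capacity:
                            self.samples[j] = x

            def sample(self, k):
                return random.sample(self.samples, min(k, len(self.samples)))

        def train_adcl(detector, tasks, buffer):
            """Fit the detector on a sequence of tasks, mixing replayed past data into each one."""
            for task_name, normal_images in tasks:
                replay = buffer.sample(len(normal_images))
                detector.fit(list(normal_images) + replay)  # assumed: fit() consumes normal images only
                buffer.add(normal_images)
            return detector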

    Integrating State-of-the-Art Approaches for Anomaly Detection and Localization in the Continual Learning Setting

    The application of anomaly detection (AD) to identifying defects in industrial environments using only normal samples has attracted significant attention and prompted research and development in this area. However, traditional AD methods focus on the current set of examples, resulting in a limitation known as catastrophic forgetting when encountering new tasks. The inflexibility of these methods and the challenges posed by real-world industrial scenarios necessitate enhancing the adaptive capabilities of AD models. This thesis therefore presents an integrated framework that combines continual learning (CL) and anomaly detection (AD) to achieve anomaly detection in continual learning (ADCL). To evaluate the efficacy of the framework, a comparative analysis assesses three methods for the AD task: EfficientAD, the Patch Distribution Modeling Framework (PaDiM), and the Discriminatively Trained Reconstruction Anomaly Embedding Model (DRAEM). Moreover, the framework uses replay techniques to enable continual learning. To determine the superior technique, a comprehensive evaluation is carried out using diverse metrics that measure the relative performance of each method. To validate the proposed approach, the MVTec AD dataset, a robust real-world dataset of images with pixel-level anomalies, is employed. This dataset serves as a reliable benchmark for anomaly detection in the context of continual learning, offering a solid foundation for further advancements in this field of study.
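
    As an illustration of one of the compared methods, the following NumPy sketch captures the core PaDiM idea: patch embeddings from a frozen pretrained backbone are modelled with a per-position multivariate Gaussian, and test patches are scored by Mahalanobis distance. The feature-extraction step, embedding dimensionality, and regularisation constant are placeholders, not the thesis configuration.

        import numpy as np

        def fit_padim(embeddings):
            # embeddings: (n_images, h*w, d) patch features from a frozen backbone
            n_img, n_pos, d = embeddings.shape
            mean = embeddings.mean(axis=0)                      # (h*w, d)
            cov_inv = np.empty((n_pos, d, d))
            for p in range(n_pos):
                cov = np.cov(embeddings[:, p, :], rowvar=False) + 0.01 * np.eye(d)
                cov_inv[p] = np.linalg.inv(cov)
            return mean, cov_inv

        def score_padim(test_embedding, mean, cov_inv):
            # test_embedding: (h*w, d); returns one anomaly score per patch position
            diff = test_embedding - mean
            return np.sqrt(np.einsum('pd,pde,pe->p', diff, cov_inv, diff))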

    Detecting Invasive Insects Using Uncrewed Aerial Vehicles and Variational Autoencoders

    In this thesis, we use machine learning techniques to address limitations in our ability to monitor pest insect migrations. Invasive insect populations, such as the brown marmorated stink bug (BMSB), cause significant economic and environmental damage. Tracking BMSB migration is vital to mitigating this damage, but it also poses a challenge. The current state-of-the-art solution for tracking insect migrations is mark-release-recapture: a researcher marks insects with a fluorescent powder, releases them back into the wild, and searches for them with ultraviolet flashlights at suspected migration destinations. However, this involves a significant amount of labor and has a low recapture rate. Automating the insect search step can improve the recapture rate, reduce the labor required, and improve the quality of the data. We propose a solution to the BMSB migration tracking problem that uses an unmanned aerial vehicle (UAV) to collect video data of the area of interest. Our system uses an ultraviolet (UV) lighting array and digital cameras mounted on the bottom of the UAV, together with artificial intelligence algorithms such as convolutional neural networks (CNNs) and multiple hypothesis tracking (MHT). Specifically, we propose a novel computer vision method for insect detection using a convolutional variational autoencoder (CVAE). Our experimental results show that our system can detect BMSB with high precision and recall, outperforming the current state of the art. Additionally, we associate insect observations using MHT, improving detection results and accurately counting real-world insects.
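
    A hedged sketch of one plausible reading of the detection step: a convolutional variational autoencoder (CVAE) is trained on image patches, and per-patch reconstruction error is used to score candidate detections. The architecture, patch size, and the use of reconstruction error as the score are assumptions, not details confirmed by the abstract.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ConvVAE(nn.Module):
            def __init__(self, latent_dim=32):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                    nn.Flatten(),
                )
                self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
                self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
                self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
                self.dec = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
                )

            def forward(self, x):                    # x: (batch, 3, 64, 64) image patches
                h = self.enc(x)
                mu, logvar = self.fc_mu(h), self.fc_logvar(h)
                z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization
                recon = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
                return recon, mu, logvar

        def patch_scores(model, patches):
            """Per-patch reconstruction error; high error can flag candidate detections."""
            recon, _, _ = model(patches)
            return F.mse_loss(recon, patches, reduction='none').mean(dim=(1, 2, 3))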

    The State of Applying Artificial Intelligence to Tissue Imaging for Cancer Research and Early Detection

    Artificial intelligence represents a new frontier in human medicine that could save more lives and reduce costs, thereby increasing accessibility. As a consequence, the rate of advancement of AI in cancer medical imaging, and in tissue pathology in particular, has exploded, opening it to ethical and technical questions that could impede its adoption into existing systems. To chart the path of AI in its application to cancer tissue imaging, we review current work and identify how it can improve cancer pathology diagnostics and research. In this review, we identify five core tasks for which models are developed: regression, classification, segmentation, generation, and compression. We address the benefits and challenges that such methods face and how they can be adapted for use in cancer prevention and treatment. The studies examined in this paper represent the beginning of this field, and future experiments will build on the foundations that we highlight.

    How to Do Machine Learning with Small Data? -- A Review from an Industrial Perspective

    Artificial intelligence has experienced technological breakthroughs in science, industry, and everyday life over the recent few decades. These advancements can be credited to the ever-increasing availability and miniaturization of computational resources, which resulted in exponential data growth. However, because the amount of data is insufficient in some cases, employing machine learning to solve complex tasks is not straightforward or even possible. As a result, machine learning with small data is of rising importance in data science and in applications across several fields. The authors focus on interpreting the general term "small data" and its role in engineering and industrial applications. They give a brief overview of the most important industrial applications of machine learning and small data. Small data is defined in terms of various characteristics compared with big data, and a machine learning formalism is introduced. Five critical challenges of machine learning with small data in industrial applications are presented: unlabeled data, imbalanced data, missing data, insufficient data, and rare events. Based on these definitions, an overview of considerations in domain representation and data acquisition is given, along with a taxonomy of machine learning approaches in the context of small data.
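
    As a purely illustrative example (not from the paper) of handling one of the five challenges listed above, imbalanced data, the sketch below shows two common remedies: random oversampling of minority classes and class weighting. The dataset, class labels, and model choice are assumptions for demonstration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def oversample_minority(X, y, seed=0):
            """Duplicate minority-class samples until every class matches the largest one."""
            rng = np.random.default_rng(seed)
            classes, counts = np.unique(y, return_counts=True)
            n_max = counts.max()
            X_parts, y_parts = [], []
            for c in classes:
                idx = np.where(y == c)[0]
                take = rng.choice(idx, size=n_max, replace=True)
                X_parts.append(X[take])
                y_parts.append(y[take])
            return np.concatenate(X_parts), np.concatenate(y_parts)

        # Tiny synthetic example: 95 "normal" vs. 5 "faulty" samples.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (95, 4)), rng.normal(2, 1, (5, 4))])
        y = np.array([0] * 95 + [1] * 5)
        X_bal, y_bal = oversample_minority(X, y)

        # Two alternatives: reweight the loss, or train on the oversampled data.
        clf_weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
        clf_oversampled = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)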

    Learning a Low-Dimensional Affordance Representation and Exploiting It in Training a Robotic System

    The development of data-driven approaches, such as deep learning, has led to the emergence of systems that have achieved human-like performance in a wide variety of tasks. For robotic tasks, deep data-driven models are introduced to create adaptive systems without the need to program them explicitly. These adaptive systems are needed in situations where task and environment changes remain unforeseen. Convolutional neural networks (CNNs) have become the standard way to process visual data in robotics. End-to-end neural network models that handle the entire control task can perform various complex tasks with little feature engineering. However, the adaptivity of these systems goes hand in hand with the level of variation in the training data. Training end-to-end deep robotic systems requires a lot of domain-, task-, and hardware-specific data, which is often costly to provide. In this work, we propose to tackle this issue by employing a deep neural network with a modular architecture, consisting of separate perception, policy, and trajectory parts. Each part of the system is trained fully on synthetic data or in simulation. The data is exchanged between parts of the system as low-dimensional representations of affordances and trajectories. The performance is then evaluated in a zero-shot transfer scenario using the Franka Panda robotic arm. Results demonstrate that a low-dimensional representation of scene affordances extracted from an RGB image is sufficient to successfully train manipulator policies.
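
    A hedged sketch of the modular pipeline described above, assuming a PyTorch implementation: a perception network maps an RGB image to a low-dimensional affordance code, a policy maps that code to a compact trajectory representation, and a decoder expands it into a full joint-space trajectory. The dimensions, layers, and trajectory horizon are illustrative, not the thesis architecture.

        import torch
        import torch.nn as nn

        class Perception(nn.Module):
            """RGB image -> low-dimensional affordance code."""
            def __init__(self, affordance_dim=16):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, affordance_dim),
                )
            def forward(self, rgb):                 # rgb: (batch, 3, H, W)
                return self.backbone(rgb)

        class Policy(nn.Module):
            """Affordance code -> compact trajectory representation."""
            def __init__(self, affordance_dim=16, traj_latent=8):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(affordance_dim, 64), nn.ReLU(),
                                         nn.Linear(64, traj_latent))
            def forward(self, affordance):
                return self.net(affordance)

        class TrajectoryDecoder(nn.Module):
            """Compact representation -> full joint-space trajectory."""
            def __init__(self, traj_latent=8, horizon=50, n_joints=7):
                super().__init__()
                self.horizon, self.n_joints = horizon, n_joints
                self.net = nn.Linear(traj_latent, horizon * n_joints)
            def forward(self, z):
                return self.net(z).view(-1, self.horizon, self.n_joints)

        # Each module can be trained separately on synthetic/simulated data,
        # then chained for zero-shot execution on the real arm:
        rgb = torch.randn(1, 3, 128, 128)
        traj = TrajectoryDecoder()(Policy()(Perception()(rgb)))   # (1, 50, 7)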