132 research outputs found

    Machine Learning Based AFP Inspection: A Tool for Characterization and Integration

    Get PDF
    Automated Fiber Placement (AFP) has become a standard manufacturing technique for large-scale composite structures due to its high production rates. However, the rapid layup that accompanies AFP manufacturing tends to induce defects. We present an inspection system that uses machine learning (ML) algorithms to locate and characterize defects in profilometry scans, coupled with a data storage system and a user interface (UI) that enables informed manufacturing. A Keyence LJ-7080 blue light profilometer is used for fast 2D height profiling. After scans are collected, they are processed by ML algorithms, displayed to an operator through the UI, and stored in a database. The overall goal of the inspection system is to provide an additional tool for AFP manufacturing. Traditional AFP inspection is done manually, which adds to manufacturing time and is subject to inspector error and fatigue; for large parts, the inspection process can be cumbersome. The proposed inspection system can accelerate this process while keeping a human inspector integrated and in control, combining the speed of the automated inspection software with the robustness of a human checking for defects that the system missed or misclassified.

    Towards a more efficient and cost-sensitive extreme learning machine: A state-of-the-art review of recent trends

    Get PDF
    Despite the prominence of the extreme learning machine (ELM) model, and its attractive features such as minimal intervention for learning and model tuning, simplicity of implementation, and high learning speed, which make it a compelling method for Artificial Intelligence including Big Data Analytics, it is still limited in certain aspects. These aspects must be addressed to achieve an effective and cost-sensitive model. This review discusses the major drawbacks of ELM, which include difficulty in determining the hidden layer structure, prediction instability under imbalanced data distributions, poor capability of sample structure preserving (SSP), and difficulty in accommodating lateral inhibition by direct random feature mapping. Other drawbacks include multi-graph complexity, global memory size limitations when learning one-by-one or chunk-by-chunk (a block of data at a time), and challenges with big data. The recent trends proposed by experts for each drawback are discussed in detail towards achieving an effective and cost-sensitive model.
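For readers unfamiliar with the model under review, the core of a basic ELM can be sketched in a few lines: the hidden-layer weights are random and never trained, and only the output weights are solved analytically by least squares. The network size, activation, and toy dataset below are illustrative choices, not taken from the review.

```python
import numpy as np

def elm_train(X, y, n_hidden=32, rng=None):
    """Train a single-hidden-layer ELM: random input weights,
    output weights solved via the Moore-Penrose pseudoinverse."""
    r = np.random.default_rng(rng)
    W = r.standard_normal((X.shape[1], n_hidden))  # random, fixed
    b = r.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                         # hidden activations
    beta = np.linalg.pinv(H) @ y                   # analytic solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy sine curve
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).standard_normal(200)
W, b, beta = elm_train(X, y, n_hidden=32, rng=0)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # small training MSE
```

Because training is a single linear solve rather than iterative gradient descent, this is where the "high learning speed" the review mentions comes from.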

    High-performance ensembles of the Online Sequential Extreme Learning Machine algorithm for regression and time series forecasting

    Get PDF
    Advisor: André Leon Sampaio Gradvohl. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Tecnologia.
Abstract: Tools based on machine learning have been used for time series forecasting because of their ability to identify relationships in data sets without being explicitly programmed for it. Some time series can be characterized as data streams and can consequently present concept drifts, which brings additional challenges to traditional machine learning techniques. Online learning techniques, such as the algorithms and ensembles derived from the Online Sequential Extreme Learning Machine (OS-ELM), are suitable for forecasting data streams with concept drifts. Nevertheless, data stream forecasting often has a serious constraint on algorithm execution time, due to the high rate of incoming samples. The objective of this work was to verify the speedups in execution time obtained by applying high-performance computing techniques to the OS-ELM algorithm and to three ensembles that use it as a base, compared with the respective conventional approaches.
For this purpose, we proposed high-performance versions implemented in the C programming language with the Intel MKL library and the MPI standard. Intel MKL provides functions that exploit multithreading on multicore CPUs, which also extends the parallelism to multiprocessor architectures. MPI allows tasks to be parallelized with distributed memory across several processes, which can be allocated within a single computational node or distributed over several nodes. In summary, our proposal consists of a two-level parallelization, in which each ensemble model is allocated to an MPI process and the internal functions of each model are parallelized across a set of threads through the Intel MKL library. For the experiments, we used one synthetic and one real dataset with concept drifts. Each dataset has around 175,000 instances containing between 6 and 10 attributes, and an online data stream was simulated with about 170,000 instances. Experimental results showed that, in general, the high-performance ensembles improved execution time compared with their serial versions, running up to 10 times faster while maintaining prediction accuracy. The tests were performed in three distinct high-performance environments and also in a conventional environment simulating a desktop or a notebook. Master's degree, Information Systems and Communication.
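The sequential variant this dissertation parallelizes updates the output weights per chunk with a recursive least-squares step instead of refitting from scratch. A minimal serial NumPy sketch of OS-ELM for regression (not the C/MKL/MPI implementation described above; hidden-layer size and regularization are illustrative):

```python
import numpy as np

class OSELM:
    """Minimal Online Sequential ELM for regression: random hidden
    layer, recursive least-squares update of output weights per chunk."""
    def __init__(self, n_in, n_hidden=32, rng=0):
        r = np.random.default_rng(rng)
        self.W = r.standard_normal((n_in, n_hidden))
        self.b = r.standard_normal(n_hidden)
        self.P = None      # inverse covariance of hidden activations
        self.beta = None   # output weights

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_fit(self, X, y):
        """Batch initialization on the first block of data."""
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        """RLS update: no retraining on previously seen samples."""
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# Simulate a stream: noisy sine arriving in chunks of 20 samples
Xs = np.linspace(0, 2 * np.pi, 400).reshape(-1, 1)
ys = np.sin(Xs).ravel() + 0.05 * np.random.default_rng(1).standard_normal(400)
model = OSELM(n_in=1, n_hidden=32)
model.init_fit(Xs[:50], ys[:50])
for i in range(50, 400, 20):
    model.partial_fit(Xs[i:i + 20], ys[i:i + 20])
print(np.mean((model.predict(Xs) - ys) ** 2))
```

The matrix products inside `partial_fit` are exactly the kernels the dissertation offloads to MKL threads, while each ensemble member runs in its own MPI process.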

    Evaluation of Clustering Algorithms on HPC Platforms

    Full text link
    [EN] Clustering algorithms are one of the most widely used kernels to generate knowledge from large datasets. These algorithms group a set of data elements (i.e., images, points, patterns, etc.) into clusters to identify patterns or common features of a sample. However, these algorithms are very computationally expensive, as they often involve the computation of expensive fitness functions that must be evaluated for all points in the dataset. This computational cost is even higher for fuzzy methods, where each data point may belong to more than one cluster. In this paper, we evaluate different parallelisation strategies on different heterogeneous platforms for fuzzy clustering algorithms typically used in the state of the art, such as Fuzzy C-means (FCM), Gustafson-Kessel FCM (GK-FCM), and Fuzzy Minimals (FM). The experimental evaluation includes performance and energy trade-offs. Our results show that, depending on the computational pattern of each algorithm, its mathematical foundation, and the amount of data to be processed, each algorithm performs better on a different platform.
    This work has been partially supported by the Spanish Ministry of Science and Innovation under the Ramon y Cajal Program (Grant No. RYC2018-025580-I); by the Spanish "Agencia Estatal de Investigacion" under grant PID2020-112827GB-I00/AEI/10.13039/501100011033 and under grants RTI2018-096384-B-I00, RTC-2017-6389-5, and RTC2019-007159-5; by the Fundacion Seneca del Centro de Coordinacion de la Investigacion de la Region de Murcia under Project 20813/PI/18; and by the "Conselleria de Educacion, Investigacion, Cultura y Deporte, Direccio General de Ciencia i Investigacio, Proyectos AICO/2020", Spain, under Grant AICO/2020/302.
    Cebrian, JM.; Imbernon, B.; Soto, J.; Cecilia-Canales, JM. (2021). Evaluation of Clustering Algorithms on HPC Platforms. Mathematics 9(17):1-20. https://doi.org/10.3390/math9172156
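Of the three algorithms evaluated, Fuzzy C-means is the simplest to illustrate: it alternates between recomputing cluster centers from the fuzzy memberships and recomputing memberships from the distances to those centers. A plain serial NumPy sketch (the paper's contribution is parallelizing such kernels on heterogeneous platforms, not this reference form; cluster count and fuzzifier below are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, rng=0):
    """Serial Fuzzy C-means. Returns cluster centers and the
    membership matrix U, where U[i, j] is the degree to which
    point i belongs to cluster j (rows sum to 1)."""
    r = np.random.default_rng(rng)
    U = r.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centers: membership-weighted means of the points
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Memberships: inversely proportional to distance^(2/(m-1))
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated Gaussian blobs; hard labels = argmax of memberships
r = np.random.default_rng(1)
X = np.vstack([r.normal(0, 0.3, (100, 2)), r.normal(3, 0.3, (100, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

The distance matrix `d` is the all-points-to-all-centers kernel whose cost dominates at scale, which is why the evaluated GPU and multicore parallelizations target it.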

    Efficient Learning Machines

    Get PDF
    Computer science

    Novel deep cross-domain framework for fault diagnosis of rotary machinery in prognostics and health management

    Get PDF
    Improving the reliability of engineered systems is a crucial problem in many applications across engineering fields such as aerospace, nuclear energy, and water desalination. This requires efficient and effective system health monitoring methods, including processing and analyzing massive machinery data to detect anomalies and performing diagnosis and prognosis. In recent years, deep learning has been a fast-growing field and has shown promising results for Prognostics and Health Management (PHM) in interpreting condition monitoring signals such as vibration, acoustic emission, and pressure, due to its capacity to mine complex representations from raw data. This doctoral research provides a systematic review of state-of-the-art deep learning-based PHM frameworks, an empirical analysis of bearing fault diagnosis benchmarks, and a novel multi-source domain adaptation framework. It emphasizes the most recent trends within the field and presents the benefits and potential of state-of-the-art deep neural networks for system health management. The limitations and challenges of existing technologies are also discussed, pointing to opportunities for future research. The empirical study highlights the evaluation results of existing models on bearing fault diagnosis benchmark datasets in terms of performance metrics such as accuracy and training time; these results are important for comparing and testing new models. A novel multi-source domain adaptation framework for fault diagnosis of rotary machinery is also proposed, which aligns the domains at both the feature level and the task level. The proposed framework transfers knowledge from multiple labeled source domains to a single unlabeled target domain by reducing the feature distribution discrepancy between the target domain and each source domain. The model can easily be reduced to a single-source domain adaptation problem, and it can readily be adapted to unsupervised domain adaptation problems in other fields such as image classification and image segmentation. Further, the proposed model is extended with a novel conditional weighting mechanism that aligns the class-conditional probabilities of the domains and reduces the effect of irrelevant source domains, a critical issue in multi-source domain adaptation algorithms. Experimental verification shows the superiority of the proposed framework over state-of-the-art multi-source domain adaptation models.
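A common way to quantify the "feature distribution discrepancy" that such frameworks minimize is the Maximum Mean Discrepancy (MMD) between source and target feature sets. The abstract does not state which statistic this thesis uses, so the following is an illustrative sketch only:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an
    RBF kernel: near zero when X and Y come from the same
    distribution, larger as the distributions diverge."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Matched vs. shifted feature distributions
r = np.random.default_rng(0)
A = r.standard_normal((100, 2))   # "source" features
B = r.standard_normal((100, 2))   # "target", same distribution
C = r.standard_normal((100, 2)) + 3.0  # "target", shifted distribution
print(rbf_mmd2(A, B), rbf_mmd2(A, C))
```

In a multi-source setting, one such discrepancy term per labeled source domain is typically added to the training loss, weighted to down-rank irrelevant sources, which is the role the abstract's conditional weighting mechanism plays.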

    Simulation and implementation of novel deep learning hardware architectures for resource constrained devices

    Get PDF
    Corey Lammie designed mixed-signal memristive-complementary metal–oxide–semiconductor (CMOS) and field-programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems during both inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.

    Machine Learning for Microcontroller-Class Hardware -- A Review

    Get PDF
    Advancements in machine learning have opened a new opportunity to bring intelligence to low-end Internet-of-Things nodes such as microcontrollers. Conventional machine learning deployments have a high memory and compute footprint, hindering their direct deployment on ultra resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices. Researchers use a specialized model development workflow for resource-limited applications to ensure the compute and latency budgets stay within device limits while still maintaining the desired performance. We characterize a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and show that several classes of applications adopt a specific instance of it. We present both qualitative and numerical insights into different stages of model development by showcasing several use cases. Finally, we identify the open research challenges and unsolved questions demanding careful consideration moving forward. Comment: Accepted for publication in the IEEE Sensors Journal.
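One concrete step in the microcontroller deployment workflow described above is post-training quantization, which shrinks model weights from 32-bit floats to 8-bit integers. A minimal sketch of symmetric per-tensor int8 quantization (real toolchains such as TensorFlow Lite Micro add calibration, per-channel scales, and quantized kernels):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map the float range
    [-max|w|, max|w|] onto integers [-127, 127] with one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.nbytes, w.nbytes)  # 1000 vs 4000 bytes: 4x smaller
```

The 4x memory reduction (plus faster integer arithmetic on MCUs without an FPU) is what makes the compute and latency budgets of microcontroller-class devices reachable, at the cost of a bounded per-weight rounding error of at most half the scale.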