Shape-based defect classification for Non-Destructive Testing
The aim of this work is to classify the aerospace structure defects detected
by eddy current non-destructive testing. The proposed method is based on the
assumption that the defect is bound to the reaction of the probe coil impedance
during the test. Impedance plane analysis is used to extract a feature vector
from the shape of the coil impedance in the complex plane, through the use of
some geometric parameters. Shape recognition is tested with three different
machine-learning classifiers: decision trees, neural networks, and Naive
Bayes. The performance of the proposed detection system is measured in terms
of accuracy, sensitivity, specificity, precision, and Matthews correlation
coefficient. Several experiments are performed on a dataset of eddy current
signal samples from aircraft structures. The obtained results demonstrate the
usefulness of our approach and its competitiveness against existing descriptors.
Comment: 5 pages, IEEE International Workshop
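As a rough illustration of the impedance-plane idea, the sketch below extracts a few geometric parameters from a coil-impedance trajectory in the complex plane. The feature names and the toy trajectory are illustrative assumptions, not the paper's exact descriptor set:

```python
import math

def shape_features(z):
    """Geometric descriptors of a probe-impedance trajectory in the complex
    plane. This feature set is illustrative, not the paper's exact one."""
    xs = [p.real for p in z]
    ys = [p.imag for p in z]
    width = max(xs) - min(xs)      # extent along the resistive axis
    height = max(ys) - min(ys)     # extent along the reactive axis
    # total arc length of the trajectory in the complex plane
    length = sum(abs(z[i + 1] - z[i]) for i in range(len(z) - 1))
    aspect = height / width if width else 0.0
    return {"width": width, "height": height, "length": length, "aspect": aspect}

# toy trajectory: a half-loop in the impedance plane
traj = [complex(math.cos(t), 2 * math.sin(t))
        for t in (k * math.pi / 10 for k in range(11))]
feats = shape_features(traj)
```

A feature vector like this would then be fed to any of the three classifiers compared in the paper.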
Innovative Techniques for Testing and Diagnosing SoCs
Our everyday welfare relies on the continued functioning of many electronic devices,
most of which embed integrated circuits that are becoming ever cheaper and smaller
while offering improved features. Nowadays, microelectronics can integrate a working computer
with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC).
SoCs are also employed in automotive safety-critical applications, but need to be tested
thoroughly to comply with reliability standards, in particular ISO 26262, the functional
safety standard for road vehicles.
The goal of this Ph.D. thesis is to improve SoC reliability by proposing innovative
techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals,
and GPUs. The proposed approaches, in the order they appear in this thesis, are
as follows:
1. Embedded Memory Diagnosis: Memories are dense and complex circuits which
are susceptible to design and manufacturing errors. Hence, it is important to understand
how faults occur in the memory array. In practice, the logical and physical
array representations differ because of design optimizations that enhance
the device, namely scrambling. This part proposes an accurate memory diagnosis
flow, presenting a software tool able to analyze test results, unscramble
the memory array, map failing syndromes to cell locations, perform cumulative
analysis, and formulate a final fault-model hypothesis. Several SRAM failing
syndromes were analyzed as case studies, gathered on an industrial automotive
32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually,
and the results were confirmed by real photos taken with a microscope.
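The unscrambling step can be pictured with a small sketch. Real scrambling schemes are vendor-specific and proprietary; the scheme below (swapping the two low row-address bits and mirroring the column order) is a toy assumption purely for illustration:

```python
# Toy diagnosis sketch with an assumed scrambling scheme: swap the two low
# row-address bits and mirror the column order. Real schemes differ per vendor.
def logical_to_physical(row, col, n_cols=8):
    phys_row = (row & ~0b11) | ((row & 0b1) << 1) | ((row >> 1) & 0b1)
    phys_col = n_cols - 1 - col  # assumed column mirroring
    return phys_row, phys_col

# map failing syndromes (logical addresses) to physical cell locations
failures = [(2, 0), (2, 1), (2, 2)]   # logical (row, col) of failing bits
physical = [logical_to_physical(r, c) for r, c in failures]
```

After unscrambling, the three failures land on adjacent cells of the same physical row, which is the kind of spatial pattern that would support, for example, a coupling or bridging fault-model hypothesis.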
2. Functional Test Pattern Generation: The key to a successful test is the pattern applied
to the device. Patterns can be structural or functional; the former usually benefit
from embedded test modules targeting manufacturing errors and are only effective
before shipping the component to the client. The latter, on the other hand, can be
applied in mission mode with minimal impact on performance, but are penalized by
high generation time. However, functional test patterns can serve
different goals in functional mission mode. Part III of this Ph.D. thesis proposes
three different functional test pattern generation methods for CPU cores embedded
in SoCs, targeting different test purposes, described as follows:
a. Functional Stress Patterns: suitable for optimizing functional stress during
Operational-life Tests and Burn-in Screening for an optimal device reliability
characterization.
b. Functional Power-Hungry Patterns: suitable for determining the functional
peak power, used to strictly limit the power of structural patterns during manufacturing
tests, thus reducing premature device over-kill while delivering high test
coverage.
c. Software-Based Self-Test (SBST) Patterns: combine the potential of structural patterns
with functional ones, allowing their periodic execution in mission mode.
In addition, an external hardware module communicating with a devised SBST was proposed.
It helps increase fault coverage by 3% by testing critical Hardly
Functionally Testable Faults not covered by conventional SBST patterns.
An automatic functional test pattern generation flow exploiting an evolutionary algorithm
that maximizes metrics related to stress, power, and fault coverage was employed
in the above-mentioned approaches to quickly generate the desired patterns. The
approaches were evaluated on two industrial cases developed by STMicroelectronics:
an 8051-based and a 32-bit Power Architecture SoC. Results show that generation
time was reduced by up to 75% in comparison with older methodologies while
significantly increasing the desired metrics.
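The evolutionary loop can be sketched in miniature. This is not the thesis tool: it is a toy (1+1) evolutionary scheme, and the fitness function (counting bit toggles between consecutive stimuli, a crude stand-in for a stress metric) is an assumption for illustration only:

```python
import random

random.seed(0)  # deterministic toy run

def fitness(pattern):
    # stand-in stress metric: count of bit toggles between consecutive stimuli
    return sum(1 for a, b in zip(pattern, pattern[1:]) if a != b)

def evolve(length=32, generations=200):
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = best[:]
        child[random.randrange(length)] ^= 1   # mutate one stimulus bit
        if fitness(child) >= fitness(best):    # keep equal-or-better offspring
            best = child
    return best

pattern = evolve()
```

The real flow evaluates candidates against simulated stress, power, and fault-coverage figures rather than a closed-form fitness, but the generate-mutate-select skeleton is the same.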
3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices
are suitable for generating structural patterns, testing and activating mitigation techniques,
and validating robust hardware and software applications. GPGPUs are
known for fast parallel computation used in high-performance computing and advanced
driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers
do not provide design description code due to content secrecy. Therefore,
commercial fault injectors based on a GPGPU model are unfeasible, leaving radiation
tests as the only available resource, but these are costly. In the last part of this thesis, we
propose a software-implemented fault injector able to inject bit-flips in memory elements
of a real GPGPU. It exploits a software debugger tool and combines it with the
C-CUDA grammar to determine fault spots and apply bit-flip operations to
program variables. The goal is to validate robust parallel algorithms by studying
fault propagation or activating the redundancy mechanisms they may embed. The
effectiveness of the tool was evaluated on two robust applications: redundant parallel
matrix multiplication and floating-point Fast Fourier Transform.
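The core idea can be shown with a pure-Python stand-in. The thesis tool drives a debugger on a real GPGPU; this sketch merely models a single bit-flip in a stored operand and the duplication-with-comparison check that a robust application such as redundant matrix multiplication might embed:

```python
# Stand-in for software-implemented fault injection: flip one bit of a stored
# matrix element, then let a redundant (fault-free) run expose the corruption.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inject_bitflip(M, row, col, bit):
    # single-event-upset model: flip one bit of one stored element
    faulty = [r[:] for r in M]
    faulty[row][col] ^= 1 << bit
    return faulty

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
golden = matmul(A, B)                               # fault-free reference
faulty = matmul(inject_bitflip(A, 0, 1, bit=3), B)  # run with injected SEU
detected = faulty != golden                         # comparison exposes the flip
```

On real hardware the injector instead pauses the kernel at a chosen point, rewrites a register or memory word through the debugger, and resumes execution.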
Cross layer reliability estimation for digital systems
Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time; however, it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost.
One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency, or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern.
Attention should be paid to tailoring reliability-improvement techniques to a system's requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware, and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are carefully implemented at different layers, from the technology up to the software layer, optimizing the system by exploiting the inherent capability of each layer to mask lower-level faults.
For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools, able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable the exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost and reliability.
This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
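The layer-by-layer masking idea behind cross-layer estimation can be captured in a toy model. All rates and masking fractions below are illustrative assumptions, not measured values from the thesis:

```python
# Toy cross-layer masking model: each layer masks a fraction of the faults
# that reach it, so only the unmasked remainder propagates upward.
RAW_FAULT_RATE = 1e-4   # assumed raw faults per unit time at technology level
HW_MASKING = 0.60       # assumed fraction masked by the hardware layer
SW_MASKING = 0.75       # assumed fraction of survivors masked by software

def system_failure_rate(raw_rate, masking_per_layer):
    rate = raw_rate
    for masked in masking_per_layer:
        rate *= 1.0 - masked   # pass on only the unmasked fraction
    return rate

rate = system_failure_rate(RAW_FAULT_RATE, [HW_MASKING, SW_MASKING])
```

An accurate estimator must of course measure those per-layer masking probabilities on the actual design, which is precisely what the cross-layer evaluation tools developed in the thesis aim to do early in the design cycle.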
Artificial Intelligence in Civil Engineering
Artificial intelligence is a branch of computer science involved in the research, design, and application of intelligent computers. Traditional methods for modeling and optimizing complex structural systems require huge amounts of computing resources, and artificial-intelligence-based solutions can often provide valuable alternatives for efficiently solving problems in civil engineering. This paper summarizes recently developed methods and theories for applications of artificial intelligence in civil engineering, including evolutionary computation, neural networks, fuzzy systems, expert systems, reasoning, classification, and learning, as well as others such as chaos theory, cuckoo search, the firefly algorithm, knowledge-based engineering, and simulated annealing. The main research trends are also pointed out at the end. The paper provides an overview of the advances of artificial intelligence applied in civil engineering.
Turbofan Engine Behaviour Forecasting using Flight Data and Machine Learning Methods
The modern gas turbine engine widely used for aircraft propulsion is a complex
integrated system which undergoes deterioration during operation due to the
degradation of its gas path components. This dissertation outlines the importance of
Engine Condition Monitoring (ECM) for more efficient maintenance planning.
Different Machine Learning (ML) approaches are compared for predicting engine
behaviour, aiming to find the optimal time for engine removal. The selected models
were OLS, ARIMA, NeuralProphet, and Cond-LSTM.
The long operating and maintenance history of two mature CF6-80C2 turbofan engines was
used for the analysis, which allowed the identification of the impact of different
factors on engine performance. These factors were also considered when training the ML
models, which resulted in models capable of performing predictions under specified
operation and flight conditions. The ML models provided
forecasting of the Exhaust Gas Temperature (EGT) parameter at the take-off phase.
Cond-LSTM is shown to be a reliable tool for forecasting engine EGT with a Mean
Absolute Error (MAE) of 7.64 °C, accounting for gradual performance deterioration under
a specific operation type. In addition, forecasting engine performance parameters has
been shown to be useful for identifying the optimal time for performing important
maintenance actions, such as engine gas path cleaning. This thesis has shown that engine
removal forecasts can be made more precise by using sophisticated trend monitoring and
advanced ML methods.
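Of the models compared, OLS is the simplest to sketch. The snippet below fits a linear EGT-versus-cycles trend and extrapolates it; the data are synthetic and the drift rate is an assumption, whereas the thesis works with real CF6-80C2 flight records and also the ARIMA, NeuralProphet, and Cond-LSTM models:

```python
# Minimal OLS trend sketch on synthetic EGT data (assumed drift of 0.5 deg C
# per cycle); the thesis fits real take-off EGT measurements instead.
def ols_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cycles = list(range(10))
egt = [700.0 + 0.5 * c for c in cycles]   # synthetic slow EGT drift, deg C
slope, intercept = ols_fit(cycles, egt)
forecast_50 = slope * 50 + intercept      # projected take-off EGT at cycle 50
```

Comparing such a projection against the engine's EGT margin is one way to reason about the optimal removal or gas-path-cleaning time.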
Deployment and Operation of Complex Software in Heterogeneous Execution Environments
This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC, and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in the automation of deployment and operation steps, these activities still require specific know-how and skills that cannot be found in average teams. The SODALITE framework tackles this problem by offering modelling and smart editing features that allow those we call Application Ops Experts to work without knowing low-level details about the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.
Roadmap on signal processing for next generation measurement systems
Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. The recent advances in artificial intelligence and machine learning are shifting the research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of the state-of-the-art methods and applications aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.
Smart Sensor Monitoring in Machining of Difficult-to-cut Materials
The research activities presented in this thesis are focused on the development of smart sensor monitoring procedures applied to diverse machining processes with particular reference to the machining of difficult-to-cut materials. This work will describe the whole smart sensor monitoring procedure starting from the configuration of the multiple sensor monitoring system for each specific application and proceeding with the methodologies for sensor signal detection and analysis aimed at the extraction of signal features to feed to intelligent decision-making systems based on artificial neural networks. The final aim is to perform tool condition monitoring in advanced machining processes in terms of tool wear diagnosis and forecast, in the perspective of zero defect manufacturing and green technologies.
The work has been addressed within the framework of the national MIUR PON research project CAPRI, acronym for “Carrello per atterraggio con attuazione intelligente” (Landing Gear with Intelligent Actuation), and the research project STEP FAR, acronym for “Sviluppo di materiali e Tecnologie Ecocompatibili, di Processi di Foratura, taglio e di Assemblaggio Robotizzato” (Development of eco-compatible materials and technologies for robotised drilling and assembly processes). Both projects are sponsored by DAC, the Campania Technological Aerospace District, and involve two aerospace industries, Magnaghi Aeronautica S.p.A. and Leonardo S.p.A., respectively. Due to the industrial framework in which the projects were developed and taking advantage of the support from the industrial partners, the project activities have been carried out with the aim to contribute to the scientific research in the field of machining process monitoring as well as to promote the industrial applicability of the results.
The thesis was structured in order to illustrate all the methodologies, the experimental tests and the results obtained from the research activities. It begins with an introduction to “Sensor monitoring of machining processes” (Chapter 2) with particular attention to the main sensor monitoring applications and the types of sensors which are employed in machining. The key methods for advanced sensor signal processing, including the implementation of sensor fusion technology, are discussed in detail as they represent the basic input for cognitive decision-making systems construction. The chapter finally presents a brief discussion on cloud-based manufacturing which will represent one of the future developments of this research work.
Chapters 3 and 4 illustrate the case studies of machining process sensor monitoring investigated in the research work. Within the CAPRI project, the feasibility of the dry turning process of Ti6Al4V alloy (Chapter 3) was studied with particular attention to the optimization of the machining parameters while avoiding the use of coolant fluids. Since very rapid tool wear is experienced during dry machining of titanium alloys, the multiple sensor monitoring system was used in order to develop a methodology based on a smart system for online tool wear detection in terms of maximum flank wear land. Within the STEP FAR project, the drilling process of carbon fibre reinforced plastic (CFRP) composite materials was studied using diverse experimental set-ups. Regarding the tools, three different types of drill bit were employed, including traditional as well as innovative geometry ones. Concerning the investigated materials, two different stack configurations were employed, namely CFRP/CFRP stacks and hybrid Al/CFRP stacks. Consequently, the machining parameters for each experimental campaign were varied, and the methods for signal analysis were also changed to verify the performance of the different methodologies. Finally, for each case different neural network configurations were investigated for cognitive-based decision making. First of all, the applicability of the system was tested in order to perform tool wear diagnosis and forecast. Then, the discussion proceeds with a further aim of the research work, which is the reduction of the number of selected sensor signal features, in order to improve the performance of the cognitive decision-making system, simplify modelling and facilitate the implementation of these methodologies in a cloud manufacturing approach to tool condition monitoring.
Sensor fusion methodologies were applied to the extracted and selected sensor signal features in the perspective of feature reduction, with the purpose of implementing these procedures for big data analytics within the Industry 4.0 framework. In conclusion, the positive impact of the proposed tool condition monitoring methodologies based on multiple sensor signal acquisition and processing is illustrated, with particular reference to the reliable assessment of tool state in order to avoid too early or too late cutting tool substitution, which negatively affects machining time and cost.
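The extract-then-reduce pipeline can be sketched in a few lines. The features (RMS, peak, mean) are classic time-domain descriptors, but the tiny windows and the variance-based selection rule below are illustrative assumptions, not the thesis's actual feature-selection procedure:

```python
import math

def extract_features(window):
    # classic time-domain features used in tool-condition monitoring
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    peak = max(abs(x) for x in window)
    mean = sum(window) / len(window)
    return [rms, peak, mean]

def top_variance_features(rows, k):
    # naive feature reduction: keep the k columns with the highest variance
    n = len(rows)
    variances = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows]
        m = sum(col) / n
        variances.append((sum((v - m) ** 2 for v in col) / n, j))
    return sorted(j for _, j in sorted(variances, reverse=True)[:k])

windows = [[0.1, -0.2, 0.15], [1.0, -1.2, 0.9], [0.2, -0.1, 0.05]]
rows = [extract_features(w) for w in windows]
kept = top_variance_features(rows, 2)   # indices of the retained features
```

The reduced feature vectors would then feed the neural-network decision-making stage, with fewer inputs simplifying both training and cloud deployment.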