Design and validation of a structural health monitoring system for aeronautical structures.
Structural Health Monitoring (SHM) is an area whose main objective is the verification of the state, or health, of a structure in order to ensure proper performance and to save on maintenance costs, using a sensor network attached to the structure, continuous monitoring, and algorithms. Several benefits derive from the implementation of SHM, among them: knowledge of the behavior of the structure under different loads and environmental changes, and knowledge of its current state in order to verify its integrity and determine whether the structure can work properly or needs to be repaired or replaced, thereby reducing maintenance costs. The damage identification paradigm (comparing data collected from the undamaged structure with data from the current
structure in order to determine whether there are any changes) can be tackled as a pattern recognition problem. Statistical techniques such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are very useful for this purpose because they can extract the most relevant information from a large number of variables.
This thesis uses an active piezoelectric system to develop statistical data-driven approaches for the detection, localization, and classification of damage in structures. The active piezoelectric system is permanently attached to the surface of the structure under test in order to apply vibrational excitations and to sense, at different points, the dynamic responses propagated through the structure. PCA is used as the pattern recognition technique to perform the main task of the proposed methodology: to build a baseline model of the undamaged structure and subsequently to compare data from the current structure (under test) with this model. Different damage indices are calculated to detect abnormalities in the structure under test, and the damage can be localized by means of the contribution of each sensor to each index; this contribution is calculated by several different methods, which are compared. To classify different types of damage, the detection methodology is extended with a Self-Organizing Map (SOM), which is trained and validated to build a baseline pattern model from projections of the data onto the PCA model and from the damage detection indices. This baseline is then used as a reference for blind diagnosis tests of structures. Additionally, PCA is replaced by ICA as the pattern recognition technique, and the two methodologies are compared, highlighting their advantages and disadvantages. To study the performance of the damage classification methodology under different scenarios, it is tested using data from a structure subjected to several different temperatures.
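As a concrete illustration of the detection step, the sketch below is an illustrative Python reconstruction, not the thesis code: the function names are assumptions, and the Q (squared prediction error) statistic is used as a stand-in for the thesis's damage indices. It builds a PCA baseline from healthy-structure data, scores new experiments against it, and returns per-variable contributions that could be aggregated per sensor for localization.

```python
import numpy as np

def fit_pca_baseline(X_healthy, n_components):
    """Build a PCA baseline model from healthy-structure data.

    X_healthy: (n_experiments, n_features) matrix of sensed responses.
    Returns the column means, standard deviations, and the retained
    loading matrix P (n_features, n_components).
    """
    mu = X_healthy.mean(axis=0)
    sigma = X_healthy.std(axis=0, ddof=1)
    Xs = (X_healthy - mu) / sigma
    # Loadings come from the SVD of the autoscaled healthy data
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_components].T
    return mu, sigma, P

def q_statistic_and_contributions(X_new, mu, sigma, P):
    """Q (SPE) damage index and per-variable contributions.

    Q is the squared norm of the residual after projecting onto the
    baseline subspace; each variable's squared residual is its
    contribution, so summing the contributions of the channels
    belonging to one sensor points to the damage location.
    """
    Xs = (X_new - mu) / sigma
    residual = Xs - Xs @ P @ P.T     # part not explained by the model
    Q = np.sum(residual**2, axis=1)  # one index value per experiment
    contrib = residual**2            # (n_experiments, n_features)
    return Q, contrib
```

A damaged structure changes how responses propagate, so its data no longer fit the baseline subspace and Q rises well above its healthy-data range.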
The methodologies developed in this work are tested and validated using different structures, in particular an aircraft turbine blade, an aircraft wing skeleton, an aircraft fuselage, several aluminium plates, and several composite-material plates.
Investigation of wireless power transfer-based eddy current non-destructive testing and evaluation
PhD Thesis
Eddy current testing (ECT) is a non-contact inspection method widely used for non-destructive
testing and evaluation (NDT&E) of pipeline and rail lines due to its high sensitivity to surface
and subsurface defects, cheap operating cost, tolerance to harsh environments, and capability
of a customisable probe for complex geometric surfaces. However, the remote field of
transmitter-receiver (Tx-Rx) ECT depends on the Tx-Rx coil gap, orientation, and lift-off
distance, with each coil responding to the sample parameters according to its own lift-off
distance. These dependencies hinder accurate defect detection and characterisation by
weakening the ECT probe's transfer response, reducing sensitivity to the defect, distorting
the amplitude of the extracted features, and yielding fewer feature points under inefficient
energy transfer. Therefore, this study proposed a magnetically coupled resonant wireless
power transfer (WPT)-based ECT (WPTECT) concept that exploits the Tx-Rx relationship
at maximum energy transfer, including the shifting and splitting behaviour of the resonance
frequency.
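The frequency splitting mentioned above follows from standard coupled-resonator theory; the sketch below is a hedged numerical illustration for two identical lossless LC resonators (an idealisation, not the thesis's probe model).

```python
import math

def split_frequencies(f0_hz, k):
    """Resonance splitting of two identical magnetically coupled
    LC resonators with coupling coefficient 0 < k < 1.

    In the over-coupled regime the single resonance f0 splits into
    an even mode at f0 / sqrt(1 + k) and an odd mode at
    f0 / sqrt(1 - k); tracking how these peaks shift as the sample
    changes is the kind of feature a resonant WPT-ECT probe exploits.
    """
    f_low = f0_hz / math.sqrt(1.0 + k)
    f_high = f0_hz / math.sqrt(1.0 - k)
    return f_low, f_high
```

For example, a 1 MHz resonator pair with k = 0.2 splits into peaks near 913 kHz and 1118 kHz; weaker coupling pulls the two peaks back toward f0.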
The proposed WPTECT system was investigated in three studies: (1)
investigated the multiple resonance point features for detection and characterisation of slots on
two different aluminium samples using a series-series (SS) topology of WPTECT; (2) mapped
and scanned pipeline with a natural dent defect using a flexible printed coil (FPC) array probe
based on the parallel-parallel (PP) topology of WPTECT; and (3) evaluated five different
WPTECT topologies for optimal response and extracted features, and characterised all
parameters of inclined angular rolling contact fatigue (RCF) cracks in a rail-line material via
an optimised topology. Multiple feature extraction, selection, and fusion schemes were
evaluated for the defect profile and compared, a capability unattainable by other ECT methods.
The first study's contribution investigated multiple resonances and principal component
analysis (PCA) features of the transfer response from scanning (eight) slots on two aluminium
samples. The results have shown the potential of the multiple features for slot depth and width
characterisation and demonstrated that the eddy-current density is highest at two points
proportionate to the slot width. The second study's contribution provided a larger-area scanning
capability in a single probe amenable to complex geometrical structures such as curved surfaces.
Among the extracted individual and fused features for defect reconstruction, the multi-layer
feed-forward deep-learning-based multiple-feature fusion gave the best 3D defect
reconstruction, whilst the second resonance feature provided better local information than the
first for investigating the pipeline dent area. The third study's contribution optimised the WPTECT topology
for multiple feature points and optimal feature extraction at the desired lift-off
conditions. The PP and combined PP and SS (PS-PS) WPTECT topologies responded with
multiple resonances, whereas the other three topologies showed a single resonance under the
same experimental conditions. The features extracted from the PS-PS topology provided
the lowest sensitivity to lift-off distance and reconstructed the depth, width, and inclined
angle of the RCF cracks with maximum correlation (R²) values of 96.4%, 93.1%, and 79.1%,
respectively, and root-mean-square errors of 0.05 mm, 0.08 mm, and 6.60°, respectively.
The demonstrated magnetically coupled resonant WPTECT Tx-Rx probe characterised
defects in oil and gas pipelines and rail lines through multiple features carrying
multiple-parameter information. Further work could investigate the phase of the transfer
response, which is expected to offer robust features for material characterisation. The
WPTECT system could be miniaturised using WPT IC chips into a portable system to
characterise multiple-layer parameters. It could further evaluate the thickness of and gap
between two concentric conductive tubes, such as the pressure tube encircled by the
calandria tube in nuclear reactor fuel channels.
PTDF Nigeria
Theory and Engineering of Scheduling Parallel Jobs
Scheduling is essential for the efficient utilization of modern parallel computing systems. In this thesis, four main research areas of scheduling are investigated: the interplay and distribution of decision makers, efficient schedule computation, efficient scheduling for the memory hierarchy, and energy efficiency. The main result is a provably fast and efficient scheduling algorithm for malleable jobs. Experiments show the importance and the possibilities of scheduling that takes the memory hierarchy into account.
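A malleable job is one whose processor allotment can be chosen by the scheduler. For illustration only (this is a generic two-phase heuristic with assumed names and a linear-speedup assumption, not the provably efficient algorithm developed in the thesis), malleable-job scheduling can be sketched as: first fix an allotment, then list-schedule the resulting rigid jobs.

```python
import heapq

def schedule_malleable(jobs, m):
    """Toy two-phase malleable-job scheduler.

    jobs: list of (work, max_procs); speedup is assumed linear up to
    max_procs. Returns the makespan of a greedy list schedule on m
    processors.
    """
    # Phase 1: allot each job min(max_procs, m) processors.
    rigid = [(min(p_max, m), work / min(p_max, m)) for work, p_max in jobs]
    # Phase 2: list-schedule in order of decreasing runtime.
    rigid.sort(key=lambda j: -j[1])
    free, t, makespan = m, 0.0, 0.0
    running = []                      # heap of (finish_time, procs)
    for p, d in rigid:
        while free < p:               # advance time until p procs free
            t, released = heapq.heappop(running)
            free += released
        heapq.heappush(running, (t + d, p))
        free -= p
        makespan = max(makespan, t + d)
    return makespan
```

Two jobs of work 8 that can use at most 4 processors each run sequentially on 4 processors (makespan 4) but side by side on 8 (makespan 2), which is the kind of trade-off a malleable scheduler navigates.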
Fingerprinting of complex bioprocess data
PhD Thesis
The focus of the research is on the analysis of complex bioprocess datasets, with the
ultimate goal of forming a link between the data and their underlying biological patterns.
The challenges associated with investigating complex bioprocess data include the high
dimensionality of the underlying measurements, the limited number of “observations”,
and the complexity of selecting meaningful features to characterise the data. Contained
within these data is a wealth of information that can contribute to inferring process
outcomes and providing insight into improving productivity and process efficiency. To
address these challenges, there is a real need for techniques to analyse and extract
knowledge from the data. This thesis investigates an integrated discrete wavelet
transform (DWT) and multiway principal components analysis (MPCA) approach to
extract meaningful information from different types of bioprocess data.
The integrated methodology is demonstrated by application to two types of bioprocess
data: a near infrared (NIR) dataset collected from an industrial monoclonal antibodies
(MAb) process, and an electrospray ionisation mass spectrometry (ESI-MS) dataset
generated during the development of recombinant mammalian cell lines. The objective
of the thesis was to develop a methodology that enabled the extraction of information
from these two data sets. For the industrial NIR dataset, the genealogy (parent-child
relationship) of batches in MAb manufacturing was investigated, whilst for the
ESI-MS dataset the goal was to identify characteristics that would enable differentiation
between high and low cell producers.
The main challenges of the NIR and ESI-MS data sets lay in the complexity of the
spectra. The NIR spectra usually have broad overlapping peaks and baseline shifts.
Furthermore, as the NIR spectra used in this thesis were collected from batch processes,
the data have an extra dimension: the batch. On the one hand, this extra dimension provides
additional information; on the other, it presents a further challenge, as the data are now
three-dimensional and require additional pre-processing, including data matrix unfolding
and batch alignment. Similar to the NIR spectra, the ESI-MS dataset also suffers from
baseline shifts, along with other complexities including a high noise-to-signal ratio, shifts
in the mass-to-charge ratio, and differences in signal intensities. These challenges make it
difficult to extract relevant information about the features of interest. The proposed methodology was
proven effective in extracting meaningful information from both data sets.
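A minimal sketch of the integrated DWT/MPCA idea follows. This is illustrative Python, not the thesis implementation: a hand-rolled one-level Haar transform stands in for a full wavelet library, and PCA is done by SVD after batch-wise unfolding.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns approximation and detail coefficients. Keeping only the
    approximation halves the spectrum length while preserving its
    broad shape, which is the compression/denoising step of the
    DWT stage.
    """
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                 # drop an odd trailing point
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def mpca_scores(batches, n_components):
    """Multiway PCA on three-dimensional batch data.

    batches: array (n_batches, n_times, n_wavelengths). Batch-wise
    unfolding flattens each batch's time/wavelength block into one
    row before ordinary PCA, so every batch becomes a single point
    in score space: its "fingerprint".
    """
    B = np.asarray(batches, dtype=float)
    X = B.reshape(B.shape[0], -1)            # batch-wise unfolding
    X = X - X.mean(axis=0)                   # mean-center across batches
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T           # (n_batches, n_components)
```

Batches that cluster together in the score plot share process behaviour, which is how genealogy-style relationships can be read off the fingerprints.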
In summary, the proposed method, which integrates the discrete wavelet transform with
multiway principal component analysis, was able to differentiate the distinguishing
characteristics of the spectra in the datasets, thereby providing an understanding of the
relationships between the spectral data and the underlying behaviour of the process.
International Islamic University Malaysia, Ministry of Higher Education Malaysia
Data-driven modeling and optimization of sequential batch-continuous process
Driven by the need to lower capital expenditures and operating costs, as well as by competitive pressure to increase product quality and consistency, modern chemical processes have become increasingly complex. These trends are manifest, on the one hand, in complex equipment configurations and, on the other, in a broad array of sensors (and control systems) that generate large quantities of operating data. Of particular interest is the combination of the two traditional routes of chemical processing: batch and continuous. Batch-to-continuous (B2C) processes, which constitute the topic of this dissertation, comprise a batch section, responsible for preparing the materials, followed by a continuous section in which those materials are processed. In addition to merging the modeling, control, and optimization approaches of the batch and continuous operating paradigms, which are radically different in many respects, challenges in analyzing the operation of such processes arise from the multi-phase flow. In particular, we consider the case where a particulate solid is suspended in a liquid "carrier" in the batch stage, and the two-phase mixture is conveyed through the continuous stage. Our explicit goal is to provide a complete operating solution for such processes, starting with the development of meaningful and computationally efficient mathematical models, continuing with a control and fault detection solution, and concluding with a production scheduling concept. Owing to process complexity, we reject out of hand the use of first-principles models, which are inevitably high-dimensional and computationally expensive, and focus on data-driven approaches instead. Raw data obtained from the chemical industry are subject to noise, equipment malfunction, and communication failures; as such, data recorded in process historian databases may contain outliers and measurement noise.
Without proper pretreatment, the accuracy and performance of a model derived from such data may be inadequate. In the next chapter of this dissertation, we address this issue and evaluate several outlier removal and filtering methods using actual production data from an industrial B2C system. We also address a challenge specific to B2C systems: synchronizing the batch data with the data collected from the continuous section of the process. Variable-wise unfolded data (a typical representation for batch processes) exhibit measurement gaps between batches; no such gaps exist in the subsequent continuous section. These gaps affect data analysis and, to address this issue, we provide a method for filling in the missing values: the characteristic values of each batch are assigned to the gaps to match the data length of the continuous process, a procedure that preserves meaningful process correlations. Data-driven modeling techniques such as principal component analysis (PCA) and partial least squares (PLS) regression are well established for modeling batch or continuous processes. In this thesis, we consider them from the perspective of the B2C systems under study. Specific modeling challenges for these systems are related to nonlinearity, which, in turn, is due to multiple operating modes associated with different product types and grades. To deal with this, we propose partitioning the gap-filled data set into subsets using k-means clustering. In this way, a large data set that reflects multiple operating modes and the associated nonlinearity can be broken down into subsets within which the system exhibits approximately linear behavior. To further increase model accuracy, the inputs to the model also need to be refined.
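The partitioning step can be sketched as follows: a minimal Lloyd's k-means in Python, illustrative only (a real pipeline would likely use a library implementation and choose k with a validity index; the variable names are assumptions).

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment
    and center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Each cluster then gets its own locally linear PCA/PLS model, e.g.:
#   labels, _ = kmeans(historian_data, k=3)   # hypothetical data array
#   mode_0 = historian_data[labels == 0]
```

Partitioning by operating mode is what lets linear latent-variable models cover a process whose global behavior is nonlinear.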
Unrelated variables may corrupt the resulting model by introducing unnecessary noise and irrelevant information; by eliminating uninformative variables, both model performance and interpretability can be improved. We use variable selection methods, examining the model coefficients or the variable importance in projection (VIP) values, to determine which variables to retain in the model. Developing a model to estimate final product quality poses different challenges. Measuring and quantifying the final product quality online can be limited by physical and economic constraints: physically, some quantities cannot be measured because of sensor sizes or the surrounding environment; economically, offline "lab" measurements may destroy the sample used for testing. These constraints lead to multiple sampling rates: the process measurements are stored and available continuously in real time, but the quality measurements have a much lower sampling rate. To account for this discrepancy, the online process measurements are down-sampled to match the sampling frequency of the lab measurements, and soft sensors can then be developed to estimate the final product quality. With the soft sensor in place, the process needs to be optimized to maximize plant efficiency. Using real-time optimization, the optimal sequence of manipulated inputs that minimizes off-spec production is calculated. In addition, the optimal sequences of setpoints can be calculated by carrying out the scheduling calculation with the process model. Traditionally, the scheduling calculation is carried out without taking the process dynamics into account, which can result in off-spec products when a disturbance occurs. Incorporating the process dynamics into the scheduling layer poses many numerical challenges.
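A hedged sketch of the VIP-based variable selection: a textbook single-response NIPALS PLS followed by the standard VIP formula, not the dissertation's code. By construction the mean of the squared VIP scores equals 1, which is why "VIP greater than 1" is the usual retention rule.

```python
import numpy as np

def pls1_vip(X, y, n_components):
    """NIPALS PLS1 followed by VIP scores.

    Variables with VIP well below 1 explain little of y and are
    candidates for removal from the model.
    """
    X = X - X.mean(axis=0)
    y = np.asarray(y, dtype=float) - np.mean(y)
    n, p = X.shape
    W, ss = [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)        # unit-norm weight vector
        t = Xr @ w                    # scores
        tt = t @ t
        p_load = Xr.T @ t / tt
        q = yr @ t / tt
        Xr -= np.outer(t, p_load)     # deflate X
        yr -= q * t                   # deflate y
        W.append(w)
        ss.append(q**2 * tt)          # y-variance explained by this LV
    W = np.array(W)                   # (A, p); rows are unit-norm
    ss = np.array(ss)
    # VIP_j = sqrt( p * sum_a ss_a * w_aj^2 / sum_a ss_a )
    return np.sqrt(p * (ss @ W**2) / ss.sum())
```

Running it on data where only the first variable drives y yields the largest VIP on that variable, so the uninformative ones can be pruned before refitting.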
The proposed time-scale-bridging model (SBM) captures the input-output behavior of the process while greatly reducing computational complexity and time.
Chemical Engineering
Development of innovative analytical methods based on spectroscopic techniques and multivariate statistical analysis for quality control in the food and pharmaceutical fields.
The increasing demand for quality assurance and ever more stringent regulations in the food and pharmaceutical fields are driving the need for analytical techniques capable of providing reliable and accurate results. However, traditional analytical methods are labor-intensive, time-consuming, and expensive, and they usually require skilled personnel to perform the analysis. For these reasons, in recent decades, quality control protocols based on spectroscopic methods have been developed for many application fields, including the pharmaceutical and food sectors. Vibrational spectroscopic techniques are an adequate alternative for acquiring both chemical and physical information on the homogeneous and heterogeneous matrices of interest. Moreover, the significant development of powerful data-driven methodologies has produced algorithms for the optimal extraction and processing of complex spectroscopic signals, making it possible to apply combined approaches for quantitative and qualitative purposes.
The present Doctoral Thesis focuses on the development of ad-hoc analytical strategies based on spectroscopic techniques coupled with multivariate data analysis, providing alternative analytical protocols for quality control in the food and pharmaceutical sectors.
Regarding applications in the food sector, excitation-emission fluorescence spectroscopy, Near Infrared Spectroscopy (NIRS), and NIR Hyperspectral Imaging (HSI) were tested on independent case studies. Unsupervised approaches based on Principal Component Analysis (PCA) and Parallel Factor Analysis (PARAFAC) were applied to fluorescence data to characterize green tea samples, while quantitative predictive approaches such as Partial Least Squares regression were used to correlate NIR spectra with quality parameters of extra-virgin olive oil samples. HSI was applied to study the dynamic chemical processes that occur during cheese ripening, with the aim of mapping chemical and sensory changes over time.
Rapid technical progress in spectroscopic instrumentation has led to more flexible portable systems suitable for performing measurements directly in the field or in a manufacturing plant. Within this scenario, NIR spectroscopy has proved to be one of the most powerful Process Analytical Technologies (PAT) for monitoring and controlling complex manufacturing processes. In this thesis, two applications based on miniaturized NIR sensors were developed for the real-time monitoring of powder blending in a pharmaceutical and a food formulation, respectively. The main challenge in blending monitoring is assessing the homogeneity of multicomponent formulations, which is crucial to ensure the safety and effectiveness of a solid pharmaceutical formulation or the quality of a food product. In the third chapter of this thesis, tailor-made qualitative chemometric strategies for obtaining a global understanding of blending processes and for optimizing endpoint detection are presented.
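One common qualitative blend-monitoring statistic, which may differ from the tailor-made strategies developed in the thesis, is the moving-block standard deviation (MBSD): the spectra stop changing as the blend homogenizes, so the statistic decays toward a plateau. A minimal sketch:

```python
import numpy as np

def moving_block_std(spectra, window=5):
    """Moving-block standard deviation (MBSD).

    spectra: (n_times, n_wavelengths) array of consecutive NIR
    spectra. For each sliding block of `window` spectra, compute the
    standard deviation of every wavelength over time and average
    across wavelengths, giving one value per block.
    """
    S = np.asarray(spectra, dtype=float)
    n = S.shape[0] - window + 1
    return np.array([S[i:i + window].std(axis=0, ddof=1).mean()
                     for i in range(n)])

def endpoint_index(mbsd, threshold):
    """First block index at which the MBSD drops below the threshold
    (returns -1 if the blend never stabilises)."""
    below = np.flatnonzero(mbsd < threshold)
    return int(below[0]) if below.size else -1
```

The threshold is chosen from the spectrometer's noise floor; declaring the endpoint when the MBSD stays below it is a simple form of the endpoint detection discussed above.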