
    Widely Linear State Space Filtering of Improper Complex Signals

    Complex signals are the backbone of many modern applications, such as power systems, communication systems, biomedical sciences and military technologies. However, standard complex-valued signal processing approaches are suited to only a subset of complex signals known as proper, and are inadequate for the generality of complex signals, as they do not fully exploit the available information. This is mainly due to the inherent blindness of the algorithms to the complete second-order statistics of the signals, or due to under-modelling of the underlying system. The aim of this thesis is to provide enhanced complex-valued, state space based, signal processing solutions for the generality of complex signals and systems. This is achieved based on recent advances in so-called augmented complex statistics and widely linear modelling, which have brought to light the limitations of conventional statistical complex signal processing approaches. Exploiting these developments, we propose a class of widely linear adaptive state space estimation techniques, which provide a unified framework and enhanced performance for the generality of complex signals, compared with conventional approaches. These include the linear and nonlinear Kalman and particle filters, whereby it is shown that catering for the complete second-order information and system models leads to significant performance gains. The proposed techniques are also extended to the case of cooperative distributed estimation, where nodes in a network collaborate locally to estimate signals, under a framework that caters for general complex signals, as well as the cross-correlations between observation noises, unlike earlier solutions. The analysis of the algorithms is supported by numerous case studies, including frequency estimation in three-phase power systems, DIFAR sonobuoy underwater target tracking, and real-world wind modelling and prediction.
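    To make the augmented-statistics idea concrete, the following is a minimal sketch (Python/NumPy, with assumed names and shapes) of one predict/update cycle of a widely linear, i.e. augmented, complex Kalman filter. It illustrates the general formulation in which the filter propagates the augmented state [x; x*] and its augmented covariance, so both the covariance and the pseudocovariance are exploited; it is not the exact algorithm proposed in the thesis.

```python
import numpy as np

def augment(x):
    """Stack a complex vector with its conjugate: x_a = [x; x*]."""
    return np.concatenate([x, np.conj(x)])

def ackf_step(x_a, P_a, y_a, F_a, H_a, Q_a, R_a):
    """One predict/update cycle of an augmented (widely linear) complex
    Kalman filter.  All quantities are 'augmented': they act on [x; x*],
    so both covariance and pseudocovariance are propagated."""
    # Prediction
    x_pred = F_a @ x_a
    P_pred = F_a @ P_a @ F_a.conj().T + Q_a
    # Update
    S = H_a @ P_pred @ H_a.conj().T + R_a          # innovation covariance
    K = P_pred @ H_a.conj().T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y_a - H_a @ x_pred)
    P_new = (np.eye(len(x_a)) - K @ H_a) @ P_pred
    return x_new, P_new
```

    For a strictly proper signal the pseudocovariance blocks of the augmented matrices vanish and this recursion reduces to the conventional complex Kalman filter.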

    Kernel-based fault diagnosis of inertial sensors using analytical redundancy

    Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so. They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, in the chemical processing industry for example, that these techniques have found broader application. This research work explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults – a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between probably the most widely practised method of FDI in the aerospace domain – the parity space technique – and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA. Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes, a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed, which besides fault diagnosis can contemporaneously perform sensor fusion. It also allows for decoupling faulty sensors from the navigation solution.
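    As a hedged illustration of the kernel PCA monitoring idea described above (not the partial kernel PCA isolation scheme itself), the sketch below fits a kernel PCA model to nominally healthy data and flags samples whose reconstruction error exceeds a threshold learned from that data. The simulated data, kernel parameters and threshold quantile are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Hypothetical "healthy" IMU residual data (n_samples x n_sensors); in the
# thesis these would come from analytical-redundancy relations, here we
# simply simulate correlated noise for illustration.
healthy = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True)
kpca.fit(healthy)

def fault_statistic(X):
    """Reconstruction error after projecting onto the kernel principal
    subspace; large values flag samples the healthy model cannot explain."""
    X_hat = kpca.inverse_transform(kpca.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

# Threshold from healthy data (99th percentile here), then test new samples.
threshold = np.quantile(fault_statistic(healthy), 0.99)
test = healthy[:10] + np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0])  # injected bias fault
print(fault_statistic(test) > threshold)
```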

    Digital Filters

    Recent technological advances mean that a great number of system signals can be measured easily and at low cost. The main problem is that usually only a fraction of the signal is useful for a given purpose, for example in maintenance, DVD recorders, computers, electric/electronic circuits, econometrics or optimization. Digital filters are the most versatile, practical and effective methods for extracting the necessary information from a signal. They can be dynamic, so they can be adjusted automatically or manually to external and internal conditions. Presented in this book are the most advanced digital filters, including different case studies and the most relevant literature.
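    As a small illustration of the kind of filter discussed in the book, the sketch below designs a Butterworth low-pass IIR filter with SciPy and applies it with zero-phase (forward-backward) filtering; the sampling rate, cut-off frequency and test signal are arbitrary choices for the example.

```python
import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)  # 5 Hz tone + noise

# 4th-order Butterworth low-pass filter with a 20 Hz cut-off.
b, a = signal.butter(4, 20.0, btype="low", fs=fs)

# Forward-backward filtering suppresses noise above the cut-off
# without introducing phase distortion.
y = signal.filtfilt(b, a, x)
```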

    NASA Space Engineering Research Center Symposium on VLSI Design

    The NASA Space Engineering Research Center (SERC) is proud to offer, at its second symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories and the electronics industry. These featured speakers share insights into next-generation advances that will serve as a basis for future VLSI design. Questions of reliability in the space environment, along with new directions in CAD and design, are addressed by the featured speakers.

    Predictive Maintenance of an External Gear Pump using Machine Learning Algorithms

    The importance of Predictive Maintenance is critical for engineering industries, such as manufacturing, aerospace and energy. Unexpected failures cause unpredictable downtime, which can be disruptive and incur high costs due to reduced productivity. This forces industries to ensure the reliability of their equipment. In order to increase the reliability of equipment, maintenance actions, such as repairs, replacements, equipment updates, and corrective actions, are employed. These actions affect flexibility, quality of operation and manufacturing time. It is therefore essential to plan maintenance before failure occurs.
    Traditional maintenance techniques rely on checks conducted routinely based on the running hours of the machine. The drawback of this approach is that maintenance is sometimes performed before it is required. Therefore, conducting maintenance based on the actual condition of the equipment is the optimal solution. This requires collecting real-time data on the condition of the equipment, using sensors (to detect events and send information to a computer processor). Predictive Maintenance uses these types of techniques or analytics to inform about the current and future state of the equipment. In the last decade, with the introduction of the Internet of Things (IoT), Machine Learning (ML), cloud computing and Big Data Analytics, the manufacturing industry has moved towards implementing Predictive Maintenance, resulting in increased uptime and quality control, optimisation of maintenance routes, improved worker safety and greater productivity.
    The present thesis describes a novel computational strategy for Predictive Maintenance (fault diagnosis and fault prognosis) with ML and Deep Learning applications for an FG304 series external gear pump, also known as a domino pump. In the absence of a comprehensive set of experimental data, synthetic data generation techniques are implemented for Predictive Maintenance by perturbing the frequency content of time series generated using high-fidelity computational techniques. In addition, various types of feature extraction methods are considered to extract the most discriminatory information from the data. For fault diagnosis, three types of ML classification algorithms are employed, namely Multilayer Perceptron (MLP), Support Vector Machine (SVM) and Naive Bayes (NB) algorithms. For prognosis, ML regression algorithms, such as MLP and SVM, are utilised. Although significant work has been reported by previous authors, it remains difficult to optimise the choice of hyper-parameters (important parameters whose values are used to control the learning process) for each specific ML algorithm, for instance the type of SVM kernel function, or the selection of the MLP activation function and the optimum number of hidden layers (and neurons).
    It is widely understood that the reliability of ML algorithms is strongly dependent upon the existence of a sufficiently large quantity of high-quality training data. In the present thesis, due to the unavailability of experimental data, a novel high-fidelity in-silico dataset is generated via a Computational Fluid Dynamics (CFD) model, which has been used for the training of the underlying ML metamodel. In addition, a large number of scenarios are recreated, ranging from healthy to faulty ones (e.g. clogging, radial gap variations, axial gap variations, viscosity variations, speed variations). Furthermore, the high-fidelity dataset is re-enacted by using degradation functions to predict the remaining useful life (fault prognosis) of an external gear pump.
    The thesis explores and compares the performance of MLP, SVM and NB algorithms for fault diagnosis, and of MLP and SVM for fault prognosis. In order to enable fast training and reliable testing of the MLP algorithm, some predefined network architectures, like 2n neurons per hidden layer, are used to speed up the identification of the precise number of neurons (shown to be useful when the sample data set is sufficiently large). Finally, a series of benchmark tests are presented, enabling the conclusion that, for fault diagnosis, the use of wavelet features and an MLP algorithm provides the best accuracy, while the MLP algorithm provides the best prediction results for fault prognosis. In addition, benchmark examples are simulated to demonstrate mesh convergence for the CFD model, whereas quantification analysis and the influence of noise on the training data are examined for the ML algorithms.
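    As a hedged sketch of the diagnosis pipeline outlined above (wavelet-energy features feeding an MLP classifier with 2n neurons per hidden layer), the example below uses PyWavelets and scikit-learn on simulated stand-in signals; the signal model, wavelet choice and network sizes are assumptions, not the thesis's CFD-generated dataset or tuned hyper-parameters.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def wavelet_energy_features(x, wavelet="db4", level=4):
    """Energy of each wavelet sub-band: a common compact feature set for
    vibration/pressure signals."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def make_signal(faulty):
    """Hypothetical stand-in for the pump signals: class 0 is 'healthy',
    class 1 adds a low-frequency component to mimic a degraded pump."""
    t = np.linspace(0, 1, 1024)
    x = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
    if faulty:
        x += 0.8 * np.sin(2 * np.pi * 5 * t)
    return x

X = np.array([wavelet_energy_features(make_signal(k % 2)) for k in range(400)])
y = np.array([k % 2 for k in range(400)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two hidden layers with 2*n_features neurons each, echoing the predefined
# "2n neurons per hidden layer" architectures mentioned above.
clf = MLPClassifier(hidden_layer_sizes=(2 * X.shape[1],) * 2,
                    activation="relu", max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```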

    Low-complexity transceivers for optical networks

    Traditional coherent (COH) transceivers allow encoding of information in both quadratures and the two orthogonal polarizations of the electric field. Nevertheless, such transceivers used today are based on the intradyne scheme, which requires two 90° optical hybrids and four pairs of balanced photodetectors for dual-polarization transmission systems, making their overall cost unattractive for short-reach applications. Therefore, SSB methods with DD reception, commonly referred to as self-coherent (SCOH) transceivers, can be employed as a cost-effective alternative to traditional COH transceivers. However, the performance of conventional SSB systems is severely degraded by signal-signal beat interference. This work provides a novel SCOH transceiver architecture with improved performance for short-reach applications. In particular, the development of phase reconstruction digital signal processing (DSP) techniques, the development of other DSP subsystems that relax the hardware requirements, and their performance optimization are the main highlights of this research. The fundamental principle of the proposed transceiver is based on the reception of a signal that satisfies the minimum phase condition upon DD. To reconstruct the missing phase information imposed by DD, a novel DC-Value method exploiting the SSB and DC-value properties of the minimum phase signal is developed in this Ph.D. study. The DC-Value method facilitates the phase reconstruction process at the Nyquist sampling rate and requires a low-intensity pilot signal. The experimental validation of the DC-Value method was also successfully carried out for short-reach optical networks. Additionally, an extensive study was performed on the DC-Value method to optimize the system performance. In the optimization process, it was found that the estimation of the carrier contribution factor (CCF) is an important parameter for exploiting all the advantages of the DC-Value method, and a novel CCF estimation technique was proposed. Finally, the performance of the DC-Value method is optimized by employing rate-adaptive probabilistic constellation shaping.
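    As an illustration of the minimum-phase principle the transceiver relies on (and not of the DC-Value method itself, whose details are specific to this thesis), the sketch below reconstructs the phase of a minimum-phase field from its directly detected intensity via the Hilbert-transform relation between log-amplitude and phase; the function name and the numerical guard against log(0) are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def minimum_phase_reconstruct(intensity):
    """Recover a minimum-phase optical field from its direct-detection
    intensity |E(t)|^2: for a minimum-phase signal the phase is the
    Hilbert transform of the log-amplitude."""
    amplitude = np.sqrt(intensity)
    log_amp = np.log(np.maximum(amplitude, 1e-12))  # guard against log(0)
    phase = np.imag(hilbert(log_amp))               # Hilbert transform of log|E|
    return amplitude * np.exp(1j * phase)
```

    This relation holds only for fields that satisfy the minimum phase condition, which is why the proposed transceiver is built around the reception of such signals.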