    Source Separation in the Presence of Side-information

    The source separation problem involves the separation of unknown signals from their mixture. This problem is relevant in a wide range of applications, from audio signal processing, communications, and biomedical signal processing to art investigation. There is a vast literature on this problem, based either on making strong assumptions about the source signals or on the availability of additional data. This thesis proposes new algorithms for source separation with side information, where one observes the linear superposition of two source signals plus two additional signals that are correlated with the mixed ones. The first algorithm is based on two ingredients: first, we learn a Gaussian mixture model (GMM) for the joint distribution of a source signal and the corresponding correlated side-information signal; second, we separate the signals using standard, computationally efficient conditional mean estimators. We also put forth new recovery guarantees for this source separation algorithm. In particular, under the assumption that the signals can be perfectly described by a GMM, we characterize necessary and sufficient conditions for reliable source separation in the asymptotic low-noise regime as a function of the geometry of the underlying signals and their interaction. It is shown that if the subspaces spanned by the innovation components of the source signals with respect to the side-information signals have zero intersection, and provided that we observe a certain number of linear measurements from the mixture, then we can reliably separate the sources; otherwise we cannot. The second algorithm is based on deep learning, where we introduce a novel self-supervised algorithm for the source separation problem. Source separation is intrinsically unsupervised, and the lack of training data makes it a difficult task for artificial intelligence to solve. The proposed framework takes advantage of the available data and delivers near-perfect separation results in real-data scenarios. Our proposed frameworks – which provide new ways to incorporate side information to aid the solution of the source separation problem – are also employed in a real-world art investigation application involving the separation of mixtures of X-ray images. The simulation results showcase the superiority of our algorithm against other state-of-the-art algorithms.
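
    As a hedged illustration of the first ingredient described above, the sketch below implements a conditional-mean (MMSE) separator for the special case of a single jointly Gaussian component rather than a full GMM; the dimension d, correlation rho, and noise level sigma_n are illustrative assumptions, not values from the thesis.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 8                          # per-signal dimension (illustrative)
        rho, sigma_n = 0.8, 0.05       # source/side-info correlation and mixture noise (assumed)

        # Joint covariance of w = [x1, x2, y1, y2]: each source x_i is correlated
        # only with its own side-information signal y_i.
        blk = lambda i, j: (slice(i * d, (i + 1) * d), slice(j * d, (j + 1) * d))
        Sigma = np.eye(4 * d)
        Sigma[blk(0, 2)] = Sigma[blk(2, 0)] = rho * np.eye(d)   # x1 <-> y1
        Sigma[blk(1, 3)] = Sigma[blk(3, 1)] = rho * np.eye(d)   # x2 <-> y2
        mu = np.zeros(4 * d)

        # Draw a ground-truth sample and observe the mixture plus both side-information signals.
        w = rng.multivariate_normal(mu, Sigma)
        x1, x2, y1, y2 = np.split(w, 4)
        z = x1 + x2 + sigma_n * rng.standard_normal(d)

        # Linear observation model v = A w + n with v = [z, y1, y2].
        A = np.zeros((3 * d, 4 * d))
        A[:d, :d] = A[:d, d:2 * d] = np.eye(d)   # mixture block: z = x1 + x2 (+ noise)
        A[d:2 * d, 2 * d:3 * d] = np.eye(d)      # y1 observed directly
        A[2 * d:, 3 * d:] = np.eye(d)            # y2 observed directly
        Sigma_n = np.zeros((3 * d, 3 * d))
        Sigma_n[:d, :d] = sigma_n ** 2 * np.eye(d)
        v = np.concatenate([z, y1, y2])

        # Conditional mean (MMSE) estimate of w given v under the Gaussian model.
        gain = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + Sigma_n)
        w_hat = mu + gain @ (v - A @ mu)
        x1_hat, x2_hat = np.split(w_hat, 4)[:2]
        print("relative error x1:", np.linalg.norm(x1_hat - x1) / np.linalg.norm(x1))
        print("relative error x2:", np.linalg.norm(x2_hat - x2) / np.linalg.norm(x2))

    In the GMM case described in the abstract, the estimator becomes a responsibility-weighted combination of estimators of this linear-Gaussian form, one per mixture component.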

    Sparse machine learning methods with applications in multivariate signal processing

    This thesis details theoretical and empirical work that draws on two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity are employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application-specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real-world datasets was carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; and classification of the genre of musical pieces to which a listener is attending, from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions for future research.
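
    The compressed-sensing radar study mentioned above rests on the standard sparse-recovery idea of reconstructing a signal from far fewer random measurements than samples. The following sketch shows the basic mechanism only; the sizes n, m, k are illustrative and an l1 (Lasso) solver from scikit-learn stands in for whichever solver the thesis actually uses.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, m, k = 256, 80, 8                 # signal length, measurements, non-zeros (assumed)

        # k-sparse ground-truth signal.
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)

        # Random sensing matrix and undersampled noisy measurements y = Phi x + noise.
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        y = Phi @ x + 0.01 * rng.standard_normal(m)

        # l1-regularised least squares: the penalty enforces sparsity in the estimate.
        lasso = Lasso(alpha=0.005, max_iter=50_000)
        lasso.fit(Phi, y)
        x_hat = lasso.coef_

        print("largest recovered coefficients lie on the true support:",
              set(np.argsort(np.abs(x_hat))[-k:]) <= set(support))
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))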

    Analysis of the Response of Modal Parameters to Damage in CFRP Laminates Using a Novel Modal Identification Method

    Nowadays, composite materials are widely used in several industries, e.g. the aeronautical, automotive, and marine industries, due to their excellent properties, such as high stiffness- and strength-to-weight ratios and high resistance to corrosion. However, they are prone to developing Barely Visible Impact Damage (BVID) from low- to medium-energy impacts (i.e. 1–10 m/s and 11–30 m/s respectively), which are reported to occur during both service and maintenance, for example bird strikes, hailstones, and tool drops. Therefore, Structural Health Monitoring (SHM) techniques have been developed to identify damage at an early stage, in an attempt to avoid catastrophic consequences. Vibration measurements were conducted on healthy and damaged Carbon Fibre Reinforced Polymer (CFRP) specimens. Damage was introduced to the specimens through static indentation, and the work done by the hemispherical indenter was measured; this test served mainly to introduce damage into the test samples. In this work, the effects of damage on individual modes were studied to understand the response pattern of the modal parameters. It is intended that the current study will inform the development of a new damage identification method based on the variations between the dynamic results of healthy and damaged specimens. A new modal identification method (“Elliptical Plane”), which uses an alternative plot of the receptance, has been developed in this work. The Elliptical Plane method uses the energy dissipated per cycle of vibration as a starting point to identify modal constants from Frequency Response Functions (FRFs). In comparison with the inverse method, this new method produces accurate results for systems that are lightly damped and have well-spaced modes. The sine of the phase of the receptance is plotted against the amplitude of the receptance, and damping is calculated from the slope of a linear fit to the resulting plot. The results show that there are other relevant properties of the plot that have not yet been explored by researchers. The shape of the plot is elliptical near the resonant frequencies, whereby both the real and imaginary parts of the modal constants can be determined from numerical curve-fitting. The method offers a new perspective on the way the receptance may be represented, in the Elliptical Plane, which may bring valuable insights for other researchers in the field. The novel method is demonstrated through both numerical and experimental examples; it is simple and easy to use. Interestingly, as the impact energy level increases, the percentage changes in both the modal frequency and the damping increase. The linear equations reveal a correlation between the increase in energy and the percentage variation in modal frequency and damping, especially above a threshold energy level determined to be between 15 J and 20 J for the analysed cases. Finally, modal identification was conducted on the healthy and damaged specimens, and the results were analysed with the BETAlab software and the Elliptical Modal identification method. It was observed that the Elliptical Modal identification method provides some interesting results. For instance, a comparison between the modal damping from the ellipse and BETAlab methods revealed that the level of reduction in the modal damping from the ellipse method is higher than that from BETAlab. This behaviour offers a promising future in the area of damage identification in structures.
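
    A minimal single-degree-of-freedom illustration of the slope-based damping estimate described above (my own sketch, not the thesis implementation or BETAlab): for a hysteretically damped SDOF system the receptance satisfies sin(phase) = -eta*k*|alpha|, so a linear fit of sin(phase) against |alpha| near resonance recovers the loss factor. The mass, stiffness, and loss factor values below are assumed.

        import numpy as np

        m, k, eta = 1.0, 1.0e4, 0.02          # mass, stiffness, loss factor (assumed values)
        w_n = np.sqrt(k / m)                  # natural frequency
        w = np.linspace(0.95 * w_n, 1.05 * w_n, 400)   # excitation frequencies near resonance

        # Receptance FRF of a hysteretically damped SDOF system.
        alpha = 1.0 / (k - m * w**2 + 1j * eta * k)
        amplitude = np.abs(alpha)
        sin_phase = np.imag(alpha) / amplitude        # sine of the receptance phase

        # sin(phase) = -eta * k * |alpha|, so the slope of a linear fit gives the damping.
        slope, intercept = np.polyfit(amplitude, sin_phase, 1)
        eta_hat = -slope / k
        print(f"true loss factor   = {eta}")
        print(f"fitted loss factor = {eta_hat:.4f}")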

    Metamaterials for Computational Imaging

    Metamaterials extend the design space, flexibility, and control of optical material systems and so enable fundamentally new computational imaging systems. A computational imaging system relies heavily on the design of its measurement modes, and metamaterials provide a great deal of control over the generation of the measurement modes of an aperture. On the other side of the coin, computational imaging uses the data that can be measured by an imaging system, which may be limited, in an optimal way, thereby producing the best possible image within the physical constraints of the system. The synergy of these two technologies - metamaterials and computational imaging - allows for entirely novel imaging systems. These contributions are realized in the concept of a frequency-diverse metamaterial imaging system presented in this thesis. This 'metaimager' uses the same electromagnetic flexibility that metamaterials have shown in many other contexts to construct an imaging aperture suitable for single-pixel operation that can measure arbitrary measurement modes, constrained only by the size of the aperture and its resonant elements. It has no lenses, no moving parts, a small form factor, and is low-cost.

    In this thesis we present an overview of work done by the author in the area of metamaterial imaging systems. We first discuss novel transformation-optical lenses enabled by metamaterials, which demonstrate the electromagnetic flexibility of metamaterials. We then introduce the theory of computational and compressed imaging using the language of Fourier optics, and derive the forward model needed to apply computational imaging to the metaimager system. We describe the details of the metamaterials used to construct the metaimager and their application to metamaterial antennas. The experimental tools needed to characterize the metaimager, including far-field and near-field antenna characterization, are described. We then describe the design, operation, and characterization of a one-dimensional metaimager capable of collecting two-dimensional images, and then of a two-dimensional metaimager capable of collecting two-dimensional images. The imaging results for the one-dimensional metaimager are presented, including two-dimensional (azimuth and range) images of point scatterers and video-rate imaging. The imaging results for the two-dimensional metaimager are presented, including analysis of the system's resolution, signal-to-noise sensitivity, acquisition rate, imaging of human targets, and integration of optical and structured-light sensors. Finally, we discuss explorations into methods of tuning metamaterial radiators that could significantly increase the capabilities of such a metaimaging system, and describe several systems that have been designed to integrate tuning into metamaterial imaging systems.
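
    The forward model referred to above can be illustrated, in a hedged and highly simplified form, as g = H f + n, where each row of H is one measurement mode. In the sketch below, random complex rows stand in for the frequency-diverse metamaterial modes, the scene is a handful of point scatterers, and reconstruction is a plain least-squares inversion (the compressed, fewer-modes-than-pixels regime would instead use sparsity priors). All sizes are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        n_pixels, n_modes = 256, 320          # scene size and number of measurement modes (assumed)

        # Sparse "scene": a few point scatterers of unit reflectivity.
        f = np.zeros(n_pixels)
        f[rng.choice(n_pixels, size=5, replace=False)] = 1.0

        # Forward model g = H f + n; each row of H is one (here random complex) measurement mode.
        H = (rng.standard_normal((n_modes, n_pixels))
             + 1j * rng.standard_normal((n_modes, n_pixels)))
        noise = 0.01 * (rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes))
        g = H @ f + noise

        # Least-squares reconstruction of the scene from the mode measurements.
        f_hat, *_ = np.linalg.lstsq(H, g, rcond=None)
        print("relative error:", np.linalg.norm(f_hat.real - f) / np.linalg.norm(f))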

    Characterization and Modelling of Composites, Volume II

    Composites have been increasingly used in various structural components in the aerospace, marine, automotive, and wind energy sectors. Material characterization of composites is a vital part of the product development and production process. Physical, mechanical, and chemical characterization helps developers further their understanding of products and materials, thus ensuring quality control. Achieving an in-depth understanding, and a consequent improvement of the general performance of these materials, however, still requires complex material modeling and simulation tools, which are often multiscale and encompass multiphysics. This Special Issue is aimed at soliciting promising recent developments in composite modeling, simulation, and characterization, in both design and manufacturing areas, including experimental as well as industrial-scale case studies. All submitted manuscripts will undergo a rigorous review and will only be considered for publication if they meet journal standards.

    Large scale estimation of distribution algorithms for continuous optimisation

    Modern real-world optimisation problems are increasingly becoming large scale. However, searching in high-dimensional search spaces is notoriously difficult: many methods break down as dimensionality increases, and Estimation of Distribution Algorithms (EDAs) are especially prone to the curse of dimensionality. In this thesis, we devise new EDA variants that are capable of searching in high-dimensional continuous domains. In particular, (i) we investigate heavy-tailed search distributions; (ii) we clarify a controversy in the literature about the capabilities of Gaussian versus Cauchy search distributions; (iii) we construct a new way of projecting a high-dimensional search space onto low-dimensional subspaces that gives us control over the size of the covariance of the search distribution, and we develop adaptation techniques to exploit this; and (iv) we propose a random embedding technique for EDA that takes advantage of the low intrinsic dimensional structure of problems. All these developments provide new techniques to tackle high-dimensional optimisation problems.
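
    A minimal sketch of the general continuous-EDA scheme referred to above (an illustration of the idea, not the thesis algorithms): sample a population from the current search distribution, keep the fittest fraction, refit the distribution, and repeat. Swapping the Gaussian sampler for a Cauchy one gives a heavy-tailed variant in the spirit of points (i) and (ii). The population size, selection fraction, and sphere test function are assumed for illustration.

        import numpy as np

        def eda(objective, dim, pop_size=200, elite_frac=0.3, iters=100,
                heavy_tailed=False, seed=0):
            """Plain continuous EDA: truncation selection with a factorised search distribution."""
            rng = np.random.default_rng(seed)
            mean, scale = np.zeros(dim), np.full(dim, 5.0)     # initial search distribution
            n_elite = int(pop_size * elite_frac)
            for _ in range(iters):
                noise = (rng.standard_cauchy((pop_size, dim)) if heavy_tailed
                         else rng.standard_normal((pop_size, dim)))
                pop = mean + scale * noise                     # sample candidate solutions
                fitness = np.apply_along_axis(objective, 1, pop)
                elite = pop[np.argsort(fitness)[:n_elite]]     # truncation selection
                mean, scale = elite.mean(axis=0), elite.std(axis=0) + 1e-12  # refit distribution
            return mean, objective(mean)

        sphere = lambda x: float(np.sum(x ** 2))               # simple test objective
        for heavy in (False, True):
            best_x, best_f = eda(sphere, dim=20, heavy_tailed=heavy)
            print(f"heavy_tailed={heavy}: f(best) = {best_f:.3e}")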