156 research outputs found

    Adaptive signal processing algorithms for noncircular complex data

    The complex domain provides a natural processing framework for a large class of signals encountered in communications, radar, biomedical engineering and renewable energy. Statistical signal processing in C has traditionally been viewed as a straightforward extension of the corresponding algorithms in the real domain R; however, recent developments in augmented complex statistics show that, in general, this view leads to under-modelling. The direct treatment of complex-valued signals has led to advances in so-called widely linear modelling and to a generalised framework for the differentiability of both analytic and non-analytic complex and quaternion functions. In this thesis, supervised and blind complex adaptive algorithms capable of processing the generality of complex and quaternion signals (both circular and noncircular), in both noise-free and noisy environments, are developed; their usefulness in real-world applications is demonstrated through case studies. The focus of the thesis is on the use of augmented statistics and widely linear modelling. The standard complex least mean square (CLMS) algorithm is extended to perform optimally for the generality of complex-valued signals, and the resulting widely linear algorithm is shown to outperform CLMS. Next, extraction of latent complex-valued signals from large mixtures is addressed by developing several classes of complex blind source extraction algorithms based on fundamental signal properties such as smoothness, predictability and degree of Gaussianity, together with an analysis of the existence and uniqueness of the solutions. These algorithms are shown to facilitate real-time applications, such as those in brain-computer interfacing (BCI). Owing to their modified cost functions and the widely linear mixing model, this class of algorithms performs well in both noise-free and noisy environments.
    Next, based on a widely linear quaternion model, the FastICA algorithm is extended to the quaternion domain to provide separation of the generality of quaternion signals. The enhanced performance of the widely linear algorithms is illustrated in renewable energy and biomedical applications, in particular for the prediction of wind profiles and the extraction of artifacts from EEG recordings.
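The widely linear extension of CLMS described above (usually called the augmented CLMS, ACLMS) models both the input and its complex conjugate, which is what captures noncircularity. The following is a minimal illustrative sketch, not code from the thesis; the function name, filter length and step size are our own assumptions.

```python
import numpy as np

def aclms(x, d, p=4, mu=0.05):
    """Augmented (widely linear) CLMS sketch: y[n] = h.x[n] + g.conj(x[n]).

    x : complex input signal, d : desired signal,
    p : filter length, mu : step size (illustrative choices only).
    Returns the predictions y and the weight vectors (h, g).
    """
    h = np.zeros(p, dtype=complex)        # standard (strictly linear) part
    g = np.zeros(p, dtype=complex)        # conjugate part -> noncircularity
    y = np.zeros(len(d), dtype=complex)
    for n in range(p, len(d)):
        xn = x[n - p:n][::-1]             # most recent p samples
        y[n] = h @ xn + g @ np.conj(xn)   # widely linear estimate
        e = d[n] - y[n]                   # a priori error
        h += mu * e * np.conj(xn)         # CLMS-style weight update
        g += mu * e * xn                  # update of the conjugate weights
    return y, h, g
```

On a widely linear (noncircular) system the conjugate weights g converge to nonzero values; forcing g to zero recovers the standard CLMS, which is why CLMS under-models such signals.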

    BMICA-independent component analysis based on B-spline mutual information estimator

    The information-theoretic concept of mutual information provides a general framework for evaluating dependencies between variables. Its estimation using B-splines, however, has not previously been used as the basis of an approach to Independent Component Analysis. In this paper we present a B-spline estimator of mutual information for finding the independent components in mixed signals. Tested on electroencephalography (EEG) signals, the resulting BMICA (B-Spline Mutual Information Independent Component Analysis) exhibits better performance than the standard Independent Component Analysis algorithms FastICA, JADE, SOBI and EFICA in similar simulations. BMICA was also found to be more reliable than the renowned FastICA.
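The core of a B-spline mutual information estimator is to let each sample contribute fractional weights to adjacent bins through B-spline basis functions, smoothing the bin probabilities that enter the MI sum. A minimal sketch of the idea with linear (order-2) splines; the function name and binning choices are ours, not the paper's.

```python
import numpy as np

def bspline_mi(x, y, bins=8, order=2):
    """Mutual information with B-spline bin weighting (sketch).

    Each sample spreads fractional weight over adjacent bins via
    B-spline basis functions; order=1 degenerates to a plain histogram.
    Returns MI in nats. Illustrative only, not the BMICA implementation.
    """
    def weights(v):
        v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # rescale to [0, 1]
        t = v * (bins - order + 1)        # position on the spline support
        w = np.zeros((len(v), bins))
        for i, ti in enumerate(t):
            k = min(int(ti), bins - order)   # leftmost bin touched
            u = ti - k
            if order == 2:                   # linear B-spline weights
                w[i, k] = 1.0 - u
                w[i, k + 1] = u
            else:                            # order 1: hard binning
                w[i, k] = 1.0
        return w
    wx = weights(np.asarray(x, float))
    wy = weights(np.asarray(y, float))
    px, py = wx.mean(axis=0), wy.mean(axis=0)   # marginal bin probabilities
    pxy = wx.T @ wy / len(wx)                   # joint bin probabilities
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])).sum())
```

Independent inputs give an estimate near zero, while identical inputs give an estimate near log(bins); the fractional weighting reduces the quantisation bias of the plain histogram estimator.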

    Extracting High Temperature Event radiance from satellite images and correcting for saturation using Independent Component Analysis

    We present a novel method for extracting the radiance from High Temperature Events (HTEs) recorded by geostationary imagers using Independent Component Analysis (ICA). We use ICA to decompose the image cube collected by the instrument into a sum of the outer products of independent, maximally non-Gaussian time series and images of their spatial distribution, and then reassemble the image cube using only sources that appear to be HTEs. Integrating spatially gives the time series of total HTE radiance emission. In this study we test the technique on a number of simulated HTE events, and then apply it to a number of volcanic HTEs observed by the SEVIRI instrument. We find that the technique performs well on small localised eruptions and can be used to correct for saturation. The technique offers the advantage of obviating the need for a priori knowledge of the area being imaged, beyond some basic assumptions about the nature of the processes affecting radiance in the scene, namely that (i) HTE sources are statistically independent from other processes, (ii) the radiance registered at the sensor is a linear mixture of the HTE signal and those from other processes, and (iii) HTE sources can be reliably identified for the reconstruction process. This results in only five free parameters: the dimensions of the image cube, an estimate of the data dimensionality and a threshold for distinguishing between HTE and non-HTE sources. While we have focused here on volcanic HTEs, the methodology can, in principle, be extended to studies of other kinds of HTEs such as those associated with biomass burning. This research was undertaken as part of the NERC consortium project “How does the Earth's crust grow at divergent plate boundaries? A unique opportunity in Afar, Ethiopia” (grant number NE/E005535/1). CO is additionally supported by the UK National Centre for Earth Observation “Dynamic Earth and Geohazards” theme (http://comet.nerc.ac.uk/). This is the final published version.
    It first appeared at http://www.sciencedirect.com/science/article/pii/S0034425714004337?np=y#
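The decompose-select-reassemble scheme can be sketched with a minimal symmetric FastICA: flatten the cube to a pixels-by-time matrix, extract maximally non-Gaussian time courses, and rebuild the cube from a chosen subset of sources. This is our own illustration under assumed names, not the paper's implementation.

```python
import numpy as np

def fastica(X, n_comp, iters=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity).

    X : (n_signals, n_samples), e.g. an image cube flattened so each
        row is one pixel's time series.
    Returns sources S (n_comp x n_samples) and mixing A with Xc ~ A @ S,
    so columns of A are spatial maps and rows of S are time courses.
    Illustrative sketch only.
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))                 # PCA for whitening
    top = np.argsort(d)[::-1][:n_comp]                # keep leading components
    Wh = (E[:, top] / np.sqrt(d[top])).T              # whitening matrix
    Z = Wh @ Xc
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_comp, n_comp))
    for _ in range(iters):
        G = np.tanh(W @ Z)
        W1 = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W1)
        W = U @ Vt                                    # symmetric decorrelation
    S = W @ Z
    A = np.linalg.pinv(W @ Wh)                        # Xc ~ A @ S
    return S, A

# Reassembly from HTE-like sources only (keep = chosen source indices):
#   X_hte = A[:, keep] @ S[keep] + per-pixel mean,
# then integrate spatially per frame for the total HTE radiance series.
```

Keeping all components reproduces the centred data exactly, so discarding the non-HTE sources removes only the background processes, which is what allows the saturation correction described above.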

    Integrating Transformations in Probabilistic Circuits

    This study addresses a predictive limitation of probabilistic circuits and introduces transformations as a remedy to overcome it. We demonstrate this limitation in robotic scenarios. We argue that independent component analysis is a sound tool for preserving the independence properties of probabilistic circuits. Our approach is an extension of joint probability trees, which are model-free deterministic circuits. We demonstrate that the proposed approach achieves higher likelihoods while using fewer parameters than joint probability trees on seven benchmark data sets as well as on real robot data. Furthermore, we discuss how to integrate transformations into tree-based learning routines. Finally, we argue that exact inference with transformed quantile-parameterised distributions is not tractable; however, our approach allows for efficient sampling and approximate inference.
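The mechanism behind such transformations is the change-of-variables rule: if a linear map W (e.g. learned by ICA) sends the data into coordinates that are approximately independent, a fully factorised model in that space induces a valid joint density on the original space via log p(x) = sum_i log p_i(z_i) + log|det W|. A toy sketch, using whitening as a stand-in for the ICA step and histogram marginals in place of quantile-parameterised leaves; all names are ours.

```python
import numpy as np

def fit_factorised(X, bins=32):
    """Independent-marginal histogram model over columns of X.

    Returns a log-density function; a crude stand-in for the
    factorised leaves of a probability tree (illustrative only).
    """
    models = []
    for j in range(X.shape[1]):
        hist, edges = np.histogram(X[:, j], bins=bins, density=True)
        models.append((hist, edges))
    def logpdf(Y):
        total = np.zeros(len(Y))
        for j, (hist, edges) in enumerate(models):
            k = np.searchsorted(edges, Y[:, j], side="right") - 1
            k = np.clip(k, 0, len(hist) - 1)          # bin index per sample
            total += np.log(hist[k] + 1e-12)
        return total
    return logpdf

def fit_transformed(X):
    """Whiten (stand-in for the ICA transform), then factorise.

    The Jacobian term log|det W| makes the result a proper density
    on the original coordinates.
    """
    mu = X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X.T))
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T           # ZCA whitening matrix
    logdet = -0.5 * np.log(d).sum()                   # log|det W|
    base = fit_factorised((X - mu) @ W)
    return lambda Y: base((Y - mu) @ W) + logdet
```

For correlated data the transformed model gains roughly the mutual information between the coordinates over the naive factorised model, which is the likelihood improvement the study reports in spirit.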

    Accelerating Audio Data Analysis with In-Network Computing

    Digital transformation will bring massive connectivity and massive data handling, implying a growing demand for computing within communication networks as a result of network softwarization. Moreover, digital transformation will host very sensitive verticals requiring high end-to-end reliability and low latency. Accordingly, the concept of “in-network computing” has emerged: integrating network communications with computing and performing computations on the transport path of the network. This can be used to deliver actionable information directly to end users instead of raw data. However, this change of paradigm raises disruptive challenges for current communication networks. In-network computing (i) expects the network to host general-purpose softwarized network functions and (ii) encourages the packet payload to be modified. Yet today’s networks are designed to focus on packet-forwarding functions, and under the current end-to-end transport mechanisms packet payloads should not be touched on the forwarding path. This dissertation presents full-stack in-network computing solutions, jointly designed from the network and computing perspectives, to accelerate data analysis applications, specifically acoustic data analysis. In the computing domain, two design paradigms for computational logic, progressive computing and traffic filtering, are proposed for data reconstruction and feature extraction tasks. Two widely used practical use cases, Blind Source Separation (BSS) and anomaly detection, are selected to demonstrate the design of computing modules for data reconstruction and feature extraction in the in-network computing scheme. Following these two design paradigms, the dissertation designs two computing modules: progressive ICA (pICA) for BSS and You Only Hear Once (Yoho) for anomaly detection.
    These lightweight computing modules can cooperatively perform computational tasks along the forwarding path. In this way, computational virtual functions can be introduced into the network, addressing the first challenge mentioned above, namely that the network should be able to host general-purpose softwarized network functions. Quantitative simulations show that the computing time of pICA and Yoho in in-network computing scenarios is significantly reduced, since pICA and Yoho run simultaneously with data forwarding. At the same time, pICA guarantees the same computing accuracy, and Yoho’s computing accuracy is improved. Furthermore, the dissertation proposes a stateful transport module in the network domain to support in-network computing under the end-to-end transport architecture. The stateful transport module extends the IP packet header so that network packets carry message-related metadata (message-based packaging). Additionally, the forwarding layer of the network device is optimised to process the packet payload based on the computational state (state-based transport component). The second challenge posed by in-network computing, the modification of packet payloads, is thereby tackled. The two computational modules and the stateful transport module together form the designed in-network computing solutions. By merging pICA and Yoho with the stateful transport module, two emulation systems, in-network pICA and in-network Yoho, have been implemented in the Communication Networks Emulator (ComNetsEmu). Quantitative emulations show that in-network pICA accelerates the overall service time of BSS by up to 32.18%, while in-network Yoho accelerates the overall service time of anomaly detection by up to 30.51%. These are promising results for the design and realisation of future communication networks.
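The reason a progressive split works is that an iterative algorithm such as FastICA carries only a small state, the partial unmixing matrix, between iterations; each forwarding node can therefore run a few iterations and hand the state to the next hop, and the final node obtains the same result as running all iterations in one place. A sketch of this splitting idea (our own illustration, not the dissertation's pICA module):

```python
import numpy as np

def fastica_steps(Z, W, n_steps):
    """Run n_steps symmetric FastICA iterations (tanh nonlinearity)
    on whitened data Z, starting from unmixing matrix W.

    W is the only state a node must hand to the next hop, which is
    what makes a per-node, progressive split of the work possible.
    """
    for _ in range(n_steps):
        G = np.tanh(W @ Z)
        W1 = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W1)      # symmetric decorrelation
        W = U @ Vt
    return W

# Three hops doing 20 iterations each end with the same unmixing matrix
# as a single node doing all 60 iterations:
#   W = W0
#   for hop in range(3):
#       W = fastica_steps(Z, W, 20)       # state W travels with the data
#   # W now equals fastica_steps(Z, W0, 60)
```

Because each iteration is deterministic given (Z, W), splitting the iteration budget across hops changes where the work happens but not the result, which is why pICA can preserve accuracy while overlapping computation with forwarding.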

    Blind source separation via independent and sparse component analysis with application to temporomandibular disorder

    Blind source separation (BSS) addresses the problem of separating multichannel signals, observed by generally spatially separated sensors, into their constituent underlying sources. The passage of these sources through an unknown mixing medium results in the observed multichannel signals. This study focuses on BSS, with special emphasis on its application to temporomandibular joint disorder (TMD). TMD refers to all medical problems related to the temporomandibular joint (TMJ), which connects the lower jaw (mandible) to the temporal bone (skull). The overall objective of the work is to extract the two TMJ sound sources generated by the two TMJs from the bilateral recordings obtained from the auditory canals, so as to aid the clinician in diagnosis and in planning treatment. Firstly, the concept of 'variable tap length' is adopted in convolutive blind source separation. This relatively new concept has attracted attention in the field of adaptive signal processing, notably for the least mean square (LMS) algorithm, but has not yet been introduced in the context of blind signal separation. The flexibility of the tap length in the proposed approach allows the optimum tap length to be found, thereby mitigating computational complexity and catering for fractional delays arising in source separation. Secondly, a novel fixed point BSS algorithm based on Ferrante's affine transformation is proposed. Ferrante's affine transformation provides the freedom to select the eigenvalues of the Jacobian matrix of the fixed point function and thereby improves the convergence properties of the fixed point iteration. Simulation studies demonstrate the improved convergence of the proposed approach compared to the well-known fixed point FastICA algorithm. Thirdly, the underdetermined blind source separation problem is addressed using a filtering approach.
    An extension of the FastICA algorithm is devised which exploits the disparity in the kurtoses of the underlying sources to estimate the mixing matrix, and thereafter achieves source recovery by employing the l1-norm algorithm. Additionally, it is shown that FastICA can also be utilised to extract the sources, and it is illustrated how this scenario is particularly suitable for the separation of TMJ sounds. Finally, estimation of fractional delays between the mixtures of the TMJ sources is proposed as a means for TMJ separation. The estimation of fractional delays is shown to simplify the source separation to a case of instantaneous BSS. The estimated delay then allows for an alignment of the TMJ mixtures, thereby overcoming a spacing constraint imposed by a well-known BSS technique, the DUET algorithm. The delay found from the TMJ bilateral recordings corroborates the range reported in the literature. Furthermore, TMJ source localisation is also addressed as an aid to the dental specialist. EThOS - Electronic Theses Online Service, United Kingdom
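The fractional-delay estimation that underpins the alignment step can be illustrated with cross-correlation plus parabolic peak interpolation: the integer lag comes from the correlation maximum, and the sub-sample part from a parabola fitted through the peak and its two neighbours. A sketch of this standard technique (ours, not the thesis implementation):

```python
import numpy as np

def fractional_delay(a, b):
    """Estimate the (possibly fractional) delay of b relative to a.

    Finds the cross-correlation peak, then refines it by fitting a
    parabola through the peak and its neighbours. Assumes a and b
    have equal length. Illustrative sketch only.
    """
    n = len(a)
    c = np.correlate(b, a, mode="full")        # lags -(n-1) .. (n-1)
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:                     # parabolic refinement
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    else:
        frac = 0.0                             # peak at the edge: no refinement
    return (k - (n - 1)) + frac
```

Once the fractional delay is known, one recording can be resampled to align with the other, reducing the delayed mixing to the instantaneous case that standard BSS algorithms assume.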