
    Tensor decomposition techniques for analysing time-varying networks

    The aim of this Ph.D. thesis is the study of time-varying networks via theoretical and data-driven approaches. Networks are natural objects for representing a vast variety of systems in nature, e.g., communication networks (phone calls and e-mails), online social networks (Facebook, Twitter), and infrastructural networks. Considering the temporal dimension of networks helps to better understand and predict complex phenomena, by taking into account both the fact that links in the network are not continuously active over time and the potential relation between multiple dimensions, such as space and time. A fundamental challenge in this area is the definition of mathematical models and tools able to capture topological and dynamical aspects and to reproduce properties observed in the real dynamics of networks. The purpose of this thesis is therefore threefold: 1) we will focus on the analysis of the complex mesoscale patterns, such as community-like structures and their evolution in time, that characterize time-varying networks; 2) we will study how these patterns impact dynamical processes that occur over the network; 3) we will sketch a generative model to study the interplay between the topological and temporal patterns of time-varying networks and dynamical processes occurring over them, e.g., disease spreading. To tackle these problems, we adopt and extend an approach at the intersection of multi-linear algebra and machine learning: the decomposition of time-varying networks represented as tensors (multi-dimensional arrays). In particular, we focus on Non-negative Tensor Factorization (NTF) techniques to detect complex topological and temporal patterns in the network. We first extend the NTF framework to tackle the problem of detecting anomalies in time-varying networks. Then, we propose a technique to approximate and reconstruct time-varying networks affected by missing information, both to recover the missing values and to reproduce dynamical processes on top of the network. Finally, we focus on the analysis of the interplay between the discovered patterns and dynamical processes. To this aim, we use the NTF as a hint to devise a generative model of time-varying networks, in which we can control both the topological and temporal patterns, so as to identify which of them has the greater impact on the dynamics.
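    As a concrete illustration of the kind of decomposition described above, the sketch below applies non-negative CP factorization to a toy (nodes × nodes × time) tensor using the open-source tensorly library. This is a minimal sketch under stated assumptions, not the thesis's actual pipeline: the tensor sizes, rank, and the random toy network are all illustrative.

```python
# Minimal sketch: non-negative tensor factorization of a time-varying network.
# All sizes and the toy data are illustrative assumptions, not from the thesis.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

n_nodes, n_steps, rank = 50, 100, 4

# Toy time-varying network: random sparse adjacency snapshots stacked in time,
# giving a (nodes x nodes x time) tensor of link activations.
rng = np.random.default_rng(0)
snapshots = (rng.random((n_nodes, n_nodes, n_steps)) < 0.05).astype(float)
tensor = tl.tensor(snapshots)

# Non-negative CP decomposition: each component couples a sender factor,
# a receiver factor, and a temporal activity profile.
weights, (senders, receivers, activity) = non_negative_parafac(
    tensor, rank=rank, n_iter_max=200, tol=1e-7
)

# Nodes with large loadings in the same component form a community-like
# mesoscale structure; that component's column in `activity` shows when
# the structure is active in time.
for r in range(rank):
    members = np.argsort(senders[:, r])[-5:]
    print(f"component {r}: top sender nodes {members.tolist()}")
```

    On a real dataset, thresholding the node loadings recovers the community-like structures the thesis analyses, and anomalies or missing values show up as snapshots poorly explained by the low-rank reconstruction.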

    Design of large polyphase filters in the Quadratic Residue Number System


    Robust Learning Enabled Intelligence for the Internet-of-Things: A Survey From the Perspectives of Noisy Data and Adversarial Examples

    This is the author accepted manuscript; the final version is available from IEEE via the DOI in this record. The Internet-of-Things (IoT) has been widely adopted in a range of verticals, e.g., automation, health, energy, and manufacturing. Many of the applications in these sectors, such as self-driving cars and remote surgery, are critical, high-stakes applications, calling for advanced machine learning (ML) models for data analytics. The training and testing data collected by massive numbers of IoT devices may contain noise (e.g., abnormal data, incorrect labels, and incomplete information) and adversarial examples, so ML models must be highly robust to make reliable decisions for IoT applications. Research on robust ML has received tremendous attention from both academia and industry in recent years. This paper investigates the state-of-the-art and representative works on robust ML models that can enable high resilience and reliability of IoT intelligence. We focus on two aspects of robustness: training data that contains noise, and training data that contains adversarial examples, both of which typically arise in real-world IoT scenarios. In addition, we investigate the reliability of both neural networks and the reinforcement learning framework, two ML paradigms widely used to handle data in IoT scenarios. Potential research challenges and open issues are discussed to provide future research directions. Engineering and Physical Sciences Research Council (EPSRC).
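    To make the notion of adversarial examples concrete, the sketch below implements the fast gradient sign method (FGSM), one canonical way such examples are crafted. It is a generic PyTorch illustration, not code from the survey; the tiny model, epsilon value, and random inputs are all assumptions for demonstration.

```python
# Minimal sketch of FGSM adversarial example generation (illustrative only).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb inputs x in the direction that maximally increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage: a toy linear classifier on random "images" to exercise the function.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```

    A robust model, in the survey's sense, is one whose predictions change little under such bounded perturbations, or one trained on data containing them (adversarial training).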

    Employing data fusion & diversity in the applications of adaptive signal processing

    The paradigm of adaptive signal processing is a simple yet powerful method for the class of system identification problems. Classical approaches consider standard one-dimensional signals, for which the model can be formulated in a flat-view matrix/vector framework. However, the rapidly increasing availability of large-scale multisensor/multinode measurement technology has rendered the traditional way of representing data no longer sufficient. To this end, the author (referred to from this point onward as `we', `us', and `our', to acknowledge the supporting contributors: the supervisor, colleagues, and overseas academics specializing in specific pieces of the research in this thesis) has applied the adaptive filtering framework to problems that employ the techniques of data diversity and fusion, covering quaternions, tensors, and graphs. At first glance, all these structures share one important feature: invertible isomorphism; in other words, they are algebraically one-to-one related in a real vector space. Furthermore, our continual course of research affords a natural segue through all three data types. Firstly, we propose novel quaternion-valued adaptive algorithms, the n-moment widely linear quaternion least mean squares (WL-QLMS) and the c-moment WL-LMS. Both are as fast as the recursive-least-squares method but more numerically robust, thanks to the absence of matrix inversion. Secondly, the adaptive filtering method is applied to a more complex task, online tensor dictionary learning, via the proposed online multilinear dictionary learning (OMDL) algorithm. OMDL is partly inspired by the derivation of the c-moment WL-LMS and its parsimonious formulae. In addition, a sequential higher-order compressed sensing (HO-CS) scheme is developed to couple with OMDL, so as to maximally exploit the learned dictionary for the best possible compression. Lastly, we consider graph random processes, which are multivariate random processes with a spatiotemporal (or vertex-time) relationship. As with tensor dictionary learning, one of the main challenges in graph signal processing is the sparsity constraint on the graph topology, a particularly difficult issue for online methods. We introduce a novel splitting gradient projection into adaptive graph filtering to achieve a sparse topology. Extensive experiments were conducted to support the analysis of all the algorithms proposed in this thesis, as well as to point out potentials, limitations, and as-yet-unaddressed issues in these research endeavors.
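    For readers unfamiliar with the baseline these quaternion extensions build on, the following is a minimal sketch of the classical real-valued least-mean-squares (LMS) filter applied to system identification. The filter length, step size, and toy system below are illustrative assumptions; the thesis's WL-QLMS and c-moment WL-LMS operate on quaternion-valued data instead.

```python
# Minimal sketch: classical real-valued LMS for FIR system identification.
# Parameters and the toy system are illustrative, not from the thesis.
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Identify an unknown FIR system from input x and desired output d."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        e = d[n] - w @ u                     # a priori estimation error
        w = w + mu * e * u                   # stochastic gradient step
    return w

# Usage: recover a known 4-tap system from noisy observations.
rng = np.random.default_rng(1)
h = np.array([0.9, -0.4, 0.2, 0.1])          # the "unknown" system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(lms_identify(x, d), 2))       # should approach h
```

    Note that each update uses only the current error and regressor, with no matrix inversion: this is the numerical-robustness property the abstract contrasts with recursive least squares.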