
    Matrix completion of noisy graph signals via proximal gradient minimization

    This paper takes on the problem of recovering the missing entries of an incomplete matrix, known as matrix completion, when the columns of the matrix are signals that lie on a graph and the available observations are noisy. We solve a version of the problem regularized with the Laplacian quadratic form by means of the proximal gradient method, and derive theoretical bounds on the recovery error. Moreover, in order to speed up the convergence of the proximal gradient, we propose an initialization method that utilizes the structural information contained in the Laplacian matrix of the graph.
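
    A minimal sketch of the proximal gradient iteration described above, under an assumed objective that combines a squared fit to the observed entries, the Laplacian quadratic form over the columns, and a nuclear-norm term handled by the proximal (singular value thresholding) step; the paper's exact objective, step size, and Laplacian-based initialization may differ, and the masked-observation initialization below is only a placeholder.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def graph_matrix_completion(Y, mask, L, gamma=1.0, lam=1.0, n_iter=200):
    """Proximal gradient for the (assumed) objective
        min_X  0.5*||P_Omega(X - Y)||_F^2 + 0.5*gamma*tr(X^T L X) + lam*||X||_*
    where each column of X is a signal on a graph with Laplacian L and
    mask is the 0/1 matrix of observed entries."""
    X = mask * Y                                   # placeholder initialization
    # Step size from the Lipschitz constant of the smooth part: 1 + gamma*lambda_max(L)
    step = 1.0 / (1.0 + gamma * np.linalg.eigvalsh(L).max())
    for _ in range(n_iter):
        grad = mask * (X - Y) + gamma * (L @ X)    # gradient of the smooth terms
        X = svt(X - step * grad, step * lam)       # proximal (shrinkage) step
    return X
```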

    Matrix completion and extrapolation via kernel regression

    Matrix completion and extrapolation (MCEX) are dealt with here over reproducing kernel Hilbert spaces (RKHSs) in order to account for prior information present in the available data. Aiming at a faster, low-complexity solver, the task is formulated as a kernel ridge regression. The resultant MCEX algorithm also lends itself to online implementation, while the adopted class of kernel functions encompasses several existing approaches to MC with prior information. Numerical tests on synthetic and real datasets show that the novel approach runs faster than widespread methods such as alternating least squares (ALS) or stochastic gradient descent (SGD), and that the recovery error is reduced, especially when dealing with noisy data.
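
    As a rough illustration of the kernel ridge regression formulation, the sketch below assumes a product kernel built from row and column Gram matrices `Kr` and `Kc` that encode the prior information; these names, the batch (non-online) solver, and the regularization scaling are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def kernel_mc_extrapolate(obs_idx, obs_val, Kr, Kc, mu=1e-2):
    """Kernel-ridge-regression view of matrix completion/extrapolation.

    obs_idx : list of (i, j) indices of observed entries
    obs_val : observed (noisy) values at those entries
    Kr, Kc  : row- and column-kernel Gram matrices encoding prior information
    mu      : ridge (regularization) parameter
    """
    y = np.asarray(obs_val, dtype=float)
    n_obs = len(obs_idx)
    # Gram matrix of the product kernel restricted to the observed entries
    K = np.array([[Kr[i1, i2] * Kc[j1, j2] for (i2, j2) in obs_idx]
                  for (i1, j1) in obs_idx])
    alpha = np.linalg.solve(K + mu * n_obs * np.eye(n_obs), y)

    def predict(i, j):
        # Kernel evaluations between the query entry and the observed entries
        k = np.array([Kr[i, i2] * Kc[j, j2] for (i2, j2) in obs_idx])
        return k @ alpha

    return predict
```

    Because the predictor is defined through the kernels rather than through observed rows and columns alone, it can also extrapolate to entries whose row or column has no observations at all, which is the MCEX setting described above.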

    Generalization error bounds for kernel matrix completion and extrapolation

    Prior information can be incorporated in matrix completion to improve estimation accuracy and extrapolate the missing entries. Reproducing kernel Hilbert spaces provide tools to leverage the said prior information and derive more reliable algorithms. This paper analyzes the generalization error of such approaches and presents numerical tests confirming the theoretical results. This work is supported by ERDF funds (TEC2013-41315-R and TEC2016-75067-C4-2), the Catalan Government (2017 SGR 578), and NSF grants (1500713, 1514056, 1711471 and 1509040).

    Compressive sensing-based data uploading in time-driven public sensing applications

    Over the last few years, mobile phone technology has advanced greatly. People gain and upload more and more information through their mobile phones in an easy way. Accordingly, a new sensing technology has emerged, referred to as public sensing (PS). The core idea behind PS is to exploit the crowd of smart mobile devices to opportunistically provide real-time sensor data, considering spatial and environmental dimensions. Recently, PS has been applied in many different application scenarios, such as environmental monitoring, traffic analysis, and indoor mapping. However, PS applications face several challenges. One of the most prominent is users' acceptance to participate in PS applications. To convince users to participate, several incentive mechanisms have been developed. However, the two main requirements that should be met by any PS application are user privacy and the energy cost of running the PS application. In fact, there are several energy consumers in PS applications. For example, many PS applications require the mobile devices to obtain a position fix and frequently send this position data to the PS server. Similarly, the mobile devices waste energy when they receive sensing queries outside the sensing areas. However, the most energy-expensive task is to frequently acquire and send data to the PS server. In this thesis, we tackle the problem of energy consumption in a special category of PS applications in which the participating mobile devices are periodically queried for sensor data, such as acceleration and images. To reduce the energy overhead of uploading large amounts of information, we exploit the fact that processing approximately one thousand instructions consumes energy roughly equal to that of transmitting one bit of information. Accordingly, we exploit data compression to reduce the number of bits transmitted from the participating mobile devices to the PS server. Although the technical literature offers many compression methods, such as derivative-based prediction, the cosine transform, and the wavelet transform, we designed a framework based on compressive sensing (CS) theory. In the last decade, CS has proven to be a promising candidate for compressing N-dimensional data. Moreover, it shows satisfactory results when used for inferring missing data. Accordingly, we exploit CS to compress 1D data (e.g. acceleration, gravity) and 2D data (e.g. images). To efficiently utilize the CS method on resource-constrained devices such as smart mobile devices, we start by identifying the most lightweight measurement matrices to be implemented on the mobile devices. We examine several matrices, such as the random measurement matrix, the random Gaussian matrix, and the Toeplitz matrix. Our analysis is mainly based on the recovery accuracy and the energy drawn from the mobile device's battery. Additionally, we perform a comparative study with other compressors, including the cosine transform and the lossless ZIP compressor. To further confirm that CS has high recovery accuracy, we implemented an activity recognition algorithm at the server side. To this end, we exploit the dynamic time warping (DTW) algorithm as a pattern-matching tool between a set of stored patterns and the recovered data. Several experiments show the high accuracy of both CS and DTW in recovering several activities such as walking, running, and jogging. In terms of energy, CS significantly reduces the battery consumption relative to the other baseline compressors. Finally, we demonstrate the possibility of exploiting the CS-based compression method for handling 1D data as well as 2D data, i.e. images. The main challenge is to perform image encoding on the mobile devices, despite the complex matrix operations between the image pixels and the sensing matrices. To overcome this problem, we divide the image into a number of cells and subsequently perform the encoding process on each cell individually, so the compression is carried out iteratively. The evaluation results are promising for CS-based 2D compression in terms of saved energy and recovery accuracy.
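
    To make the compress-on-device / reconstruct-at-server idea concrete, here is a small sketch assuming a random ±1 (Bernoulli-type) measurement matrix on the device and orthogonal matching pursuit in a DCT basis at the server; the matrix, solver, and synthetic signal are illustrative stand-ins for the configurations actually evaluated in the thesis.

```python
import numpy as np
from scipy.fft import idct

def cs_encode(x, Phi):
    """Lightweight CS encoding on the device: y = Phi @ x."""
    return Phi @ x

def cs_decode_omp(y, Phi, n_nonzero=10):
    """Server-side recovery via orthogonal matching pursuit, assuming the
    signal is sparse in the DCT domain (illustrative reconstruction only)."""
    N = Phi.shape[1]
    # Effective sensing matrix in the sparsifying domain: y = Phi @ idct(s) = A @ s
    A = Phi @ idct(np.eye(N), norm='ortho', axis=0)
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None) # refit on support
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(N)
    s_hat[support] = coef
    return idct(s_hat, norm='ortho')                             # back to signal domain

# Example: a synthetic 1-D trace that is sparse in the DCT domain,
# compressed to 25% of its samples
rng = np.random.default_rng(0)
N, M = 256, 64
s_true = np.zeros(N)
s_true[[3, 17, 40]] = [1.0, -0.7, 0.4]                   # three active DCT coefficients
x = idct(s_true, norm='ortho')
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)  # cheap Bernoulli-type matrix
x_rec = cs_decode_omp(cs_encode(x, Phi), Phi, n_nonzero=5)
print('relative error:', np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```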

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes, ranging from very small sensory readings to comparatively huge video frames. Such a massive amount of data, as in the case of sensory networks, is also continuously captured at varying rates and increases the load on the network itself, which can hinder transmission efficiency. However, it also opens up possibilities to exploit the many correlations in the transmitted data. Reductions based on these correlations enable networks to keep up with the new wave of big-data-driven communications by investing in select techniques that efficiently utilize the resources of the communication system. One of the solutions to tackle erroneous data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome such issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Owing to the heterogeneity of the packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets. Moreover, the RaSOR scheme codes by XORing shifted packets, without the need for coding coefficients, thus yielding linear encoding and decoding complexities. Another facet of IoT applications can be found in sensory data, which are known to be highly correlated and where compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as the ones advocated by Industry 4.0. This design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
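
    To make the padding problem concrete, the following toy sketch (not one of the proposed schemes) shows what conventional coding over unequal packets entails: every packet is zero-padded to the longest length before being combined, and the padding itself becomes transmitted overhead; a binary XOR stands in here for RLNC's finite-field combinations.

```python
import os

def zero_pad(packets):
    """Pad every packet to the longest length so they can be combined
    symbol-by-symbol; returns the padded packets and the padding overhead."""
    max_len = max(len(p) for p in packets)
    padded = [p + bytes(max_len - len(p)) for p in packets]
    overhead = sum(max_len - len(p) for p in packets)
    return padded, overhead

def xor_combine(padded):
    """One coded packet as the XOR of the padded packets (binary-coefficient
    stand-in for an RLNC combination over a larger field)."""
    out = bytearray(len(padded[0]))
    for p in padded:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

# Example: a tiny sensor reading coded together with a large video-frame fragment
packets = [os.urandom(16), os.urandom(1200), os.urandom(64)]
padded, overhead = zero_pad(packets)
coded = xor_combine(padded)
print(f'padding overhead: {overhead} bytes on top of '
      f'{sum(len(p) for p in packets)} payload bytes')
```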

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as a base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly reduce the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy.
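
    The robustness argument can be illustrated in a few lines: under a Gaussian observation density, a single outlying feature receives an extremely small likelihood, which can dominate the HMM state posteriors, whereas the heavier tails of the Student's t-distribution bound that penalty. The numbers and parameters below are made up for illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

# Feature samples with one gross outlier (e.g., a mis-tracked body point)
x = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 8.0])

# Observation log-likelihoods under a Gaussian and a Student's t with the
# same location/scale (nu = 3 degrees of freedom); values are illustrative.
ll_gauss = stats.norm.logpdf(x, loc=0.0, scale=0.5)
ll_t = stats.t.logpdf(x, df=3, loc=0.0, scale=0.5)

print('Gaussian log-lik of outlier  :', ll_gauss[-1])  # huge negative value
print("Student's t log-lik of outlier:", ll_t[-1])     # much milder penalty
```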

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies that target both long-standing and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.