
    Green compressive sampling reconstruction in IoT networks

    In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of the CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in terms of consumed energy, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a decision function that determines which architecture outperforms the other in terms of energy efficiency. This decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to move beyond the classical performance comparison of different CS reconstruction algorithms, which is usually carried out only w.r.t. the achieved accuracy: we also consider the consumed energy and analyze the energy vs. accuracy trade-off. The presented approach, which jointly considers signal processing and IoT network issues, is a relevant contribution towards designing green compressive sampling architectures in IoT networks.
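    To make the architecture-selection criterion concrete, the following sketch (in Python) compares the energy charged to the IoT network under the two computing architectures described above. The energy model, parameter names, and numerical values are illustrative assumptions only, not the decision function derived in the paper.

        def prefer_in_network_reconstruction(n_measurements, signal_length,
                                             e_cpu_per_op, e_tx_per_bit,
                                             bits_per_measurement, coded_bits_per_sample,
                                             recon_ops_per_sample, sink_links=1):
            """Hypothetical decision function comparing the two architectures."""
            # Architecture 1: reconstruct inside the IoT network, encode the
            # reconstructed signal and transmit it out of the network.
            e_in_network = (recon_ops_per_sample * signal_length * e_cpu_per_op
                            + signal_length * coded_bits_per_sample * e_tx_per_bit / sink_links)
            # Architecture 2: forward all raw CS measurements to off-network
            # devices; reconstruction energy is not charged to the IoT network.
            e_off_loaded = n_measurements * bits_per_measurement * e_tx_per_bit / sink_links
            # True -> in-network reconstruction is greener; False -> off-load it.
            return e_in_network < e_off_loaded

        # Toy usage with made-up energy figures (joules per operation / per bit)
        print(prefer_in_network_reconstruction(
            n_measurements=128, signal_length=512,
            e_cpu_per_op=1e-9, e_tx_per_bit=5e-8,
            bits_per_measurement=12, coded_bits_per_sample=2,
            recon_ops_per_sample=2000, sink_links=2))

    The point of the sketch is only that the comparison reduces to a handful of network-level parameters (number of measurements, signal length, sink connectivity, per-bit and per-operation energies), mirroring the dependence stated in the abstract.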

    Generalized Tensor Summation Compressive Sensing Network (GTSNET): An Easy to Learn Compressive Sensing Operation

    The efforts in the compressive sensing (CS) literature can be divided into two groups: finding a measurement matrix that preserves the compressed information at its maximum level, and finding a robust reconstruction algorithm. In the traditional CS setup, the measurement matrices are selected as random matrices, and optimization-based iterative solutions are used to recover the signals. Using random matrices when handling large or multi-dimensional signals is cumbersome, especially when it comes to iterative optimization. Recent deep learning-based solutions increase reconstruction accuracy while speeding up recovery, but jointly learning the whole measurement matrix remains challenging. For this reason, state-of-the-art deep learning CS solutions such as the convolutional compressive sensing network (CSNET) use block-wise CS schemes to facilitate learning. In this work, we introduce a separable multi-linear learning of the CS matrix by representing the measurement signal as the summation of an arbitrary number of tensors. As compared to block-wise CS, tensorial learning mitigates blocking artifacts and improves performance, especially at low measurement rates (MRs), such as MR < 0.1. The software implementation of the proposed network is publicly shared at https://github.com/mehmetyamac/GTSNET.
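    A minimal NumPy sketch of the tensor-summation measurement idea follows: a 2-D signal is projected by a sum of separable (multi-linear) terms instead of a single dense matrix applied to its vectorization. The shapes, the number of summed terms, and the random projection matrices below are illustrative assumptions, not the learned operators of GTSNET.

        import numpy as np

        def tensor_summation_measure(x, A_list, B_list):
            """Separable multi-linear CS measurement of a 2-D signal X:
            Y = sum_t A_t @ X @ B_t.T, with X of shape (n1, n2),
            A_t of shape (m1, n1) and B_t of shape (m2, n2)."""
            return sum(A @ x @ B.T for A, B in zip(A_list, B_list))

        # Toy example: 64x64 signal, T = 3 tensor terms,
        # measurement rate MR = (16 * 16) / (64 * 64) ~= 0.06
        rng = np.random.default_rng(0)
        x = rng.standard_normal((64, 64))
        A_list = [rng.standard_normal((16, 64)) / np.sqrt(64) for _ in range(3)]
        B_list = [rng.standard_normal((16, 64)) / np.sqrt(64) for _ in range(3)]
        y = tensor_summation_measure(x, A_list, B_list)
        print(y.shape)  # (16, 16)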

    System-on-Chip Solution for Patients Biometric: A Compressive Sensing-Based Approach

    The ever-increasing demand for biometric solutions in Internet of Things (IoT)-based connected health applications is mainly driven by the need to tackle fraud issues, along with the imperative to improve patient privacy, safety, and personalized medical assistance. However, the advantages offered by IoT platforms come with the burden of big data and its associated challenges in terms of computing complexity, bandwidth availability, and power consumption. This paper proposes a solution to tackle both privacy issues and big data transmission by incorporating the theory of compressive sensing (CS) and a simple, yet efficient, identification mechanism using the electrocardiogram (ECG) signal as a biometric trait. Moreover, the paper presents the hardware implementation of the proposed solution on a system-on-chip (SoC) platform with an optimized architecture to further reduce hardware resource usage. First, we investigate the feasibility of compressing the ECG data while maintaining a high identification quality. The obtained results show a 98.88% identification rate using a compression ratio of only 30%. Furthermore, the proposed system has been implemented on a Zynq SoC using a heterogeneous software/hardware solution, which accelerates the software implementation by a factor of 7.73 with a power consumption of 2.318 W.
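    The compression side of such a pipeline can be sketched as follows. A sparse binary sensing matrix is assumed here because it is a common low-complexity choice for hardware CS front-ends, not because the paper states it; the ECG window is a placeholder signal, and the sizes are chosen only to reproduce the quoted 30% compression ratio.

        import numpy as np

        def sparse_binary_matrix(m, n, d=4, seed=0):
            """m x n sensing matrix with d ones per column (illustrative choice)."""
            rng = np.random.default_rng(seed)
            phi = np.zeros((m, n))
            for col in range(n):
                phi[rng.choice(m, size=d, replace=False), col] = 1.0
            return phi

        # One ECG window of n samples compressed to m = 0.30 * n measurements,
        # i.e. the 30% compression ratio quoted in the abstract (toy signal here).
        n = 256
        m = int(0.30 * n)
        ecg_window = np.sin(2 * np.pi * np.arange(n) / 64)   # placeholder for real ECG
        phi = sparse_binary_matrix(m, n)
        measurements = phi @ ecg_window                       # what the SoC transmits
        print(measurements.shape)                             # (76,)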

    Optimized Biosignals Processing Algorithms for New Designs of Human Machine Interfaces on Parallel Ultra-Low Power Architectures

    The aim of this dissertation is to explore Human Machine Interfaces (HMIs) in a variety of biomedical scenarios. The research addresses typical challenges in wearable and implantable devices for diagnostic, monitoring, and prosthetic purposes, suggesting a methodology for tailoring such applications to cutting-edge embedded architectures. The main challenge is the enhancement of high-level applications, also introducing Machine Learning (ML) algorithms, using parallel programming and specialized hardware to improve the performance. The majority of these algorithms are computationally intensive, posing significant challenges for deployment on embedded devices, which have several limitations in terms of memory size, maximum operating frequency, and battery duration. The proposed solutions take advantage of a Parallel Ultra-Low Power (PULP) architecture, enhancing the elaboration on specific target architectures and heavily optimizing the execution by exploiting software and hardware resources. The thesis starts by describing a methodology that can be considered a guideline to efficiently implement algorithms on embedded architectures. This is followed by several case studies in the biomedical field: first, the analysis of a Hand Gesture Recognition application based on the Hyperdimensional Computing algorithm, which allows fast on-chip re-training, together with a comparison against the state-of-the-art Support Vector Machine (SVM); then, a Brain Machine Interface (BMI) to detect the response of the brain to a visual stimulus. Furthermore, a seizure detection application is presented, exploring different solutions for the dimensionality reduction of the input signals. The last part is dedicated to an exploration of typical modules for the development of optimized ECG-based applications.
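    As a rough illustration of why Hyperdimensional Computing lends itself to fast on-chip re-training, the sketch below implements a minimal HDC classifier: training (and re-training) amounts to accumulating encoded hypervectors into class prototypes, and classification is a nearest-prototype search. The encoder, dimensionality, and feature sizes are illustrative assumptions, not the dissertation's implementation.

        import numpy as np

        D = 10_000  # hypervector dimensionality (typical order of magnitude for HDC)

        def encode(sample, projection):
            """Map a real-valued feature vector to a bipolar hypervector via
            random projection and sign thresholding (one common HDC encoder)."""
            return np.sign(projection @ sample)

        def train(samples, labels, projection, n_classes):
            """Class prototypes are plain sums (bundles) of encoded samples,
            so adding new training data later is a cheap in-place update."""
            prototypes = np.zeros((n_classes, D))
            for x, y in zip(samples, labels):
                prototypes[y] += encode(x, projection)
            return prototypes

        def classify(sample, prototypes, projection):
            hv = encode(sample, projection)
            sims = prototypes @ hv / (np.linalg.norm(prototypes, axis=1)
                                      * np.linalg.norm(hv) + 1e-12)
            return int(np.argmax(sims))

        # Toy usage: 8-channel feature vectors, 4 gesture classes
        rng = np.random.default_rng(1)
        projection = rng.standard_normal((D, 8))
        X = rng.standard_normal((40, 8))
        y = rng.integers(0, 4, 40)
        protos = train(X, y, projection, n_classes=4)
        print(classify(X[0], protos, projection))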

    Compressive Sensing with Low-Power Transfer and Accurate Reconstruction of EEG Signals

    Tele-monitoring of EEG in a Wireless Body Area Network (WBAN) is essential, as EEG is the most powerful physiological parameter for diagnosing neurological disorders. Generally, the EEG signal needs to be recorded over long periods, which results in a large volume of data and, consequently, large storage and communication bandwidth requirements in the WBAN. Moreover, WBAN sensor nodes are battery operated and consume considerable energy. The aim of this research is, therefore, low-power transmission of the EEG signal over the WBAN and its accurate reconstruction at the receiver, to enable continuous online monitoring of EEG and real-time feedback to patients from medical experts. To reduce the data rate and consequently the power consumption, compressive sensing (CS) may be employed prior to transmission. Nonetheless, for EEG signals, the accuracy of reconstruction with CS depends on a suitable dictionary in which the signal is sparse. As the EEG signal is not sparse in either the time or the frequency domain, identifying an appropriate dictionary is paramount. There is a plethora of choices for the dictionary to be used. Wavelet bases are of interest due to the availability of associated systems and methods. However, the attributes of wavelet bases that lead to good quality of reconstruction are not well understood. For the first time in this study, it is demonstrated that, in selecting wavelet dictionaries, the incoherence with the sensing matrix and the number of vanishing moments of the dictionary should be considered at the same time. In this research, a framework is proposed for the selection of an appropriate wavelet dictionary for the EEG signal, which is used in tandem with a sparse binary matrix (SBM) as the sensing matrix and the ST-SBL method as the reconstruction algorithm. Beylkin (highly incoherent with the SBM and with a relatively high number of vanishing moments) is identified as the best dictionary amongst those evaluated in this thesis. The power requirements of the proposed framework are also quantified using a power model. The outcomes help to assess the computational complexity and the online implementation requirements of CS for transmitting EEG in a WBAN. The proposed approach brings the energy budget well into the microwatt range, ensuring significant savings in battery life and overall system power. The study is intended to create a strong base for the use of EEG in high-accuracy, low-power biomedical applications in WBANs.
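    The dictionary-selection criterion can be made concrete with a small sketch that measures the mutual coherence between a sparse binary sensing matrix and a wavelet synthesis dictionary built with PyWavelets. The Beylkin wavelet evaluated in the thesis is not available in PyWavelets, so common Daubechies/Coiflet/Symlet bases are used here as stand-ins, and the matrix sizes and sparsity are assumptions.

        import numpy as np
        import pywt

        def sparse_binary_matrix(m, n, d=3, seed=0):
            """m x n sensing matrix with d ones per column (illustrative SBM)."""
            rng = np.random.default_rng(seed)
            phi = np.zeros((m, n))
            for col in range(n):
                phi[rng.choice(m, size=d, replace=False), col] = 1.0
            return phi

        def wavelet_dictionary(n, wavelet, level=4):
            """n x n synthesis dictionary: reconstruct each unit coefficient
            vector (assumes n is a power of two; periodized transform)."""
            template = pywt.wavedec(np.zeros(n), wavelet, level=level,
                                    mode='periodization')
            _, slices = pywt.coeffs_to_array(template)
            D = np.zeros((n, n))
            for i in range(n):
                c = np.zeros(n)
                c[i] = 1.0
                coeffs = pywt.array_to_coeffs(c, slices, output_format='wavedec')
                D[:, i] = pywt.waverec(coeffs, wavelet, mode='periodization')[:n]
            return D

        def mutual_coherence(phi, D):
            """Largest normalized inner product between sensing rows and atoms."""
            P = phi / (np.linalg.norm(phi, axis=1, keepdims=True) + 1e-12)
            A = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
            return float(np.max(np.abs(P @ A)))

        n, m = 256, 64
        phi = sparse_binary_matrix(m, n)
        for w in ['db4', 'coif3', 'sym8']:   # stand-ins; Beylkin is not in PyWavelets
            print(w, mutual_coherence(phi, wavelet_dictionary(n, w)))

    A lower coherence value indicates a dictionary that is more incoherent with the SBM, which, together with the number of vanishing moments, is the selection criterion advocated above.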

    A HIGHLY-SCALABLE DC-COUPLED DIRECT-ADC NEURAL RECORDING CHANNEL ARCHITECTURE WITH INPUT-ADAPTIVE RESOLUTION

    This thesis presents the design, development, and characterization of a novel neural recording channel architecture with (a) quantization resolution that is adaptive to the input signal's level of activity, (b) fully dynamic power consumption that is linearly proportional to the recording resolution, and (c) immunity to DC offset and drift at the input. Our results demonstrate the proposed design's capability to conduct neural recording with near-lossless, input-adaptive data compression, leading to a significant reduction in the energy required for both recording and data transmission, and hence allowing the number of recording channels integrated on a single implanted microchip to be scaled up substantially without increasing the power budget. The proposed channel, with the implemented compression technique, is implemented in a standard 130 nm CMOS technology with an overall power consumption of 7.6 µW and an active area of 92 µm × 92 µm for the implemented digital back-end.
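    A purely behavioural sketch of the input-adaptive-resolution idea follows: the number of quantization bits per sample tracks the recent activity of the input, so quiet baseline segments are digitized coarsely and bursts of activity finely, which is also what lets the (bit-proportional) dynamic power and output data rate drop when the input is idle. The sliding-window policy, bit range, and thresholds are illustrative assumptions, not the thesis's circuit-level design.

        import numpy as np

        def adaptive_resolution_quantize(signal, window=32, min_bits=4, max_bits=10,
                                         full_scale=1.0):
            """Quantize each sample with a bit depth that follows the standard
            deviation of the signal over a short trailing window."""
            out = np.empty(len(signal))
            bits_used = np.empty(len(signal), dtype=int)
            for i, x in enumerate(signal):
                recent = signal[max(0, i - window):i + 1]
                activity = np.std(recent) / full_scale        # ~0 (quiet) .. ~1 (busy)
                b = int(np.clip(min_bits + activity * (max_bits - min_bits),
                                min_bits, max_bits))
                bits_used[i] = b
                step = 2 * full_scale / (2 ** b)
                out[i] = np.clip(np.round(x / step) * step, -full_scale, full_scale)
            return out, bits_used

        # Toy neural trace: quiet baseline with a burst of activity in the middle
        rng = np.random.default_rng(2)
        sig = 0.02 * rng.standard_normal(2000)
        sig[900:1100] += 0.6 * np.sin(2 * np.pi * np.arange(200) / 20)
        _, bits = adaptive_resolution_quantize(sig)
        print(bits[:5], bits[1000:1005])   # few bits at baseline, more during the burst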
