
    ECG Signal Reconstruction on the IoT-Gateway and Efficacy of Compressive Sensing Under Real-time Constraints

    Remote health monitoring is becoming indispensable, yet Internet of Things (IoT)-based solutions face many implementation challenges, including energy consumption at the sensing node and the delay and instability introduced by cloud computing. Compressive sensing (CS) has been explored as a method to extend the battery lifetime of medical wearable devices. However, it is usually associated with computational complexity at the decoding end, increasing the latency of the system. Meanwhile, mobile processors are becoming computationally stronger and more efficient. Heterogeneous multicore platforms (HMPs) offer a local processing solution that can alleviate the limitations of remote signal processing. This paper demonstrates the real-time performance of compressed ECG reconstruction on ARM's big.LITTLE HMP and the advantages it provides as the primary processing unit of the IoT architecture. It also investigates the efficacy of CS in minimizing the power consumption of a wearable device under real-time and hardware constraints. Results show that both the orthogonal matching pursuit and subspace pursuit reconstruction algorithms can be executed on the platform in real time and yield optimum performance on a single A15 core at minimum frequency. CS extends the battery life of wearable medical devices by up to 15.4% for ECGs suitable for wellness applications and by up to 6.6% for clinical-grade ECGs. Energy consumption at the gateway is largely due to an active internet connection; hence, processing the signals locally both mitigates the system's latency and improves the gateway's battery life. Many remote health solutions can benefit from an architecture centered around the use of HMPs, a step toward better remote health monitoring systems.
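    As an illustration of the decoding step discussed above, here is a minimal sketch of orthogonal matching pursuit (OMP), one of the two reconstruction algorithms the paper benchmarks. The sensing matrix, sparsity level, and signal sizes below are generic placeholders, not the paper's ECG dictionary or parameters.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-solve least squares on the enlarged support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Toy example: recover a 5-sparse vector from 60 random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 60, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x
print(np.linalg.norm(omp(Phi, y, k) - x))  # reconstruction error should be tiny
```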

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
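    As a rough illustration of the first thrust, balancing on-node processing against communication, the sketch below compares the energy of transmitting a raw data block with that of compressing it first. All energy constants and the compression ratio are hypothetical placeholders, not values from the paper.

```python
# Hypothetical first-order energy model: is local compression worth it?
E_TX_PER_BIT = 50e-9   # J/bit to transmit (assumed radio cost)
E_CPU_PER_BIT = 5e-9   # J/bit to compress (assumed processing cost)

def block_energy(n_bits, compression_ratio=1.0):
    """Energy to (optionally) compress and then transmit one block."""
    if compression_ratio > 1.0:
        cpu = n_bits * E_CPU_PER_BIT
        tx = (n_bits / compression_ratio) * E_TX_PER_BIT
        return cpu + tx
    return n_bits * E_TX_PER_BIT

raw = block_energy(8_000)                                  # send uncompressed
compressed = block_energy(8_000, compression_ratio=4.0)    # compress first
print(f"raw: {raw * 1e6:.1f} uJ, compressed: {compressed * 1e6:.1f} uJ")
# Compression wins whenever E_CPU_PER_BIT < E_TX_PER_BIT * (1 - 1/ratio).
```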

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained processing environment, is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$ or $D = O(R_{net} 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed is always finite. (Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory.)
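    A small numerical sketch of the quoted scaling laws, with all hidden constants set to one purely for illustration: it tabulates $R_{net} = \Theta((\log N)^2)$ and $D = O((\log N)^2/N)$ for a few sensor densities.

```python
import math

# Illustrative constants-set-to-one view of the paper's scaling laws.
for N in (10, 100, 1_000, 10_000):    # one-bit sensors per Nyquist-interval
    R_net = math.log(N) ** 2          # total network bitrate ~ Theta((log N)^2)
    D = math.log(N) ** 2 / N          # pointwise distortion ~ O((log N)^2 / N)
    print(f"N={N:>6}: R_net ~ {R_net:7.2f} bits, D ~ {D:.2e}")
# Distortion decays almost linearly in N: density can substitute for precision.
```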

    On Distributed Linear Estimation With Observation Model Uncertainties

    We consider distributed estimation of a Gaussian source in a heterogeneous, bandwidth-constrained sensor network, where the source is corrupted by independent multiplicative and additive observation noises, with incomplete statistical knowledge of the multiplicative noise. For multi-bit quantizers, we derive the closed-form mean-square-error (MSE) expression for the linear minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free and erroneous communication channels, we propose several rate allocation methods, named longest root-to-leaf path, greedy, and integer relaxation, to (i) minimize the MSE given a network bandwidth constraint, and (ii) minimize the required network bandwidth given a target MSE. We also derive the Bayesian Cramér-Rao lower bound (CRLB) and compare the MSE performance of our proposed methods against the CRLB. Our results corroborate that, for low-power multiplicative observation noises and adequate network bandwidth, the gaps between the MSE of our proposed methods and the CRLB are negligible, while the performance of other methods, such as individual rate allocation and uniform allocation, is not satisfactory.
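    For reference, the LMMSE estimator underlying the paper's closed-form MSE expression has the standard form $\hat{\theta} = C_{\theta y} C_{yy}^{-1} y$ for zero-mean variables. The sketch below applies it to a scalar Gaussian source observed through additive noise only; the multiplicative noise and quantization treated in the paper are omitted.

```python
import numpy as np

# LMMSE fusion of K noisy sensor observations y_k = theta + v_k of a
# zero-mean Gaussian source theta (simplified: no multiplicative noise,
# no quantization).
rng = np.random.default_rng(1)
K, var_theta, var_v = 8, 1.0, 0.5
theta = rng.normal(0, np.sqrt(var_theta))
y = theta + rng.normal(0, np.sqrt(var_v), size=K)

C_yy = var_theta * np.ones((K, K)) + var_v * np.eye(K)  # observation covariance
C_ty = var_theta * np.ones(K)                           # cross-covariance
theta_hat = C_ty @ np.linalg.solve(C_yy, y)             # LMMSE estimate
mse = var_theta - C_ty @ np.linalg.solve(C_yy, C_ty)    # closed-form MSE
print(f"theta={theta:+.3f}, estimate={theta_hat:+.3f}, MSE={mse:.4f}")
```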

    Energy Harvesting Wireless Communications: A Review of Recent Advances

    This article summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits and moving to transmission scheduling policies and resource allocation, medium access and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail, covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes. (Comment: To appear in the IEEE Journal on Selected Areas in Communications, Special Issue: Wireless Communications Powered by Energy Harvesting and Wireless Energy Transfer.)

    Bayesian Spatial Field Reconstruction with Unknown Distortions in Sensor Networks

    Spatial regression of random fields based on potentially biased sensing information is proposed in this paper. One major concern in such applications is that, since the accuracy of the data collected by each sensor is not known a priori, performance can be negatively affected if the collected information is not fused appropriately. For example, the data collector may measure the phenomenon inappropriately, or the sensors could be out of calibration, introducing random gain and bias into the measurement process. Such readings would be systematically distorted, leading to incorrect estimation of the spatial field. To combat this detrimental effect, we develop a robust version of the spatial field model based on a mixture of Gaussian process experts. We then develop two different approaches for Bayesian spatial field reconstruction: the first algorithm is the Spatial Best Linear Unbiased Estimator (S-BLUE), which adopts the quadratic loss function and restricts the estimator to the linear family of transformations; the second algorithm is based on empirical Bayes, which utilises a two-stage estimation procedure to produce accurate predictive inference in the presence of "misbehaving" sensors. In addition, we develop distributed versions of these two approaches to drastically improve computational efficiency in large-scale settings. We present extensive simulation results using both synthetic datasets and semi-synthetic datasets with real temperature measurements and simulated distortions to draw useful conclusions regarding the performance of each of the algorithms.
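    The S-BLUE restricts the estimator to linear transformations under quadratic loss; for a plain Gaussian-process field without the paper's mixture-of-experts distortion model, this reduces to standard kriging, sketched below with an assumed squared-exponential kernel and hand-picked noise level.

```python
import numpy as np

def se_kernel(a, b, length=0.5):
    """Squared-exponential covariance between two sets of 1-D locations."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Noisy sensor readings of a spatial field at scattered locations.
rng = np.random.default_rng(2)
xs = rng.uniform(0, 1, size=12)              # sensor positions
f = np.sin(2 * np.pi * xs)                   # latent field values
y = f + rng.normal(0, 0.1, size=12)          # noisy readings

# Linear (BLUE-type) predictor at query points: K_qx (K_xx + sigma^2 I)^-1 y
xq = np.linspace(0, 1, 5)
K_xx = se_kernel(xs, xs) + 0.1 ** 2 * np.eye(12)
K_qx = se_kernel(xq, xs)
field_hat = K_qx @ np.linalg.solve(K_xx, y)
print(np.round(field_hat, 3))                # reconstructed field estimate
```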

    A Comprehensive Review of Distributed Coding Algorithms for Visual Sensor Network (VSN)

    Since the advent of low-cost cameras, they have been widely incorporated into sensor nodes in Wireless Sensor Networks (WSNs) to form Visual Sensor Networks (VSNs). However, the use of cameras brings with it a set of new challenges, because all the sensor nodes are powered by batteries. Hence, energy consumption is one of the most critical issues that must be taken into consideration. In addition, the use of batteries also limits the resources (memory, processor) that can be incorporated into the sensor node. The lifetime of a VSN decreases quickly as images are transferred to the destination. One solution to this problem is to reduce the data transferred over the network by using image compression. In this paper, a comprehensive survey and analysis of distributed coding algorithms that can be used to encode images in a VSN is provided. This includes an overview of these algorithms, together with their advantages and deficiencies when implemented in a VSN. The algorithms are then compared to determine which is more suitable for VSNs.

    Steered mixture-of-experts for light field images and video: representation and coding

    Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, such 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about the light rays arriving at a certain region from any angle. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art for low-to-mid-range bitrates with respect to the subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4x in bitrate at the same quality. At least equally important is the fact that our method inherently provides functionality desired for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
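    A toy 1-D rendition of the mixture-of-experts idea, not the paper's actual LF model: each kernel pairs a Gaussian gate with a local linear expert, and the reconstruction is the gate-weighted sum of expert outputs, giving a continuous approximation of the signal. Centers, bandwidth, and expert parameters are hand-picked for illustration.

```python
import numpy as np

# Toy 1-D steered mixture-of-experts: Gaussian gates x local linear experts.
centers = np.array([0.2, 0.5, 0.8])      # kernel centers (hand-picked)
bandwidth = 0.15                         # shared kernel width (hand-picked)
slopes = np.array([2.0, -1.0, 0.5])      # each expert's local slope
offsets = np.array([0.0, 1.5, 0.2])      # each expert's local offset

def smoe(x):
    """Continuous reconstruction: softmax-gated sum of linear experts."""
    gates = np.exp(-0.5 * ((x[:, None] - centers) / bandwidth) ** 2)
    gates /= gates.sum(axis=1, keepdims=True)            # soft partition of space
    experts = offsets + slopes * (x[:, None] - centers)  # local linear models
    return (gates * experts).sum(axis=1)

x = np.linspace(0, 1, 11)
print(np.round(smoe(x), 3))   # smooth, continuous approximation of the signal
```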