
    Towards Energy Neutrality in Energy Harvesting Wireless Sensor Networks: A Case for Distributed Compressive Sensing?

    This paper advocates the use of the emerging distributed compressive sensing (DCS) paradigm to deploy energy harvesting (EH) wireless sensor networks (WSNs) with practical network lifetimes and data gathering rates substantially higher than the state of the art. In particular, we argue that there are two fundamental mechanisms in an EH WSN: (i) the energy diversity associated with the EH process, which entails that the harvested energy can vary from sensor node to sensor node; and (ii) the sensing diversity associated with the DCS process, which entails that the energy consumption can also vary across the sensor nodes without compromising data recovery. We also argue that such mechanisms offer the means to closely match the energy demand to the energy supply, unlocking the possibility of energy-neutral WSNs that leverage EH capability. A number of analytic and simulation results are presented to illustrate the potential of the approach. Comment: 6 pages. This work will be presented at the 2013 IEEE Global Communications Conference (GLOBECOM), Atlanta, US, December 2013.

    Query Processing For The Internet-of-Things: Coupling Of Device Energy Consumption And Cloud Infrastructure Billing


    Cloud Instance Management and Resource Prediction For Computation-as-a-Service Platforms

    Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing the scalability of Software-as-a-Service against the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints; (ii) accurate resource prediction; (iii) efficient control of the number of cloud instances servicing workloads, in order to balance completing workloads in a timely fashion against reducing resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints; (ii) Kalman-filter estimates for resource prediction; and (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) for the control of the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide more than a 27% reduction in Amazon EC2 spot instance cost against methods based on reactive resource prediction, and a 38% to 60% reduction of the billing cost against the current state of the art in CaaS platforms (Amazon Lambda and Autoscale).
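    The AIMD control in (iii) reduces to a few lines of logic. A minimal sketch in Python, with all parameter values and the backlog signal assumed for illustration rather than taken from the paper:

```python
def aimd_step(units, backlog_growing, alpha=1, beta=0.5, min_units=1):
    """One AIMD update of the compute-unit pool: add 'alpha' units while
    the workload backlog grows, scale down by 'beta' once it drains.
    The parameter values here are illustrative, not the paper's tuning."""
    if backlog_growing:
        return units + alpha                      # additive increase
    return max(min_units, int(units * beta))      # multiplicative decrease

# Simulated backlog: growing for five control periods, then draining.
units, trace = 1, []
for step in range(8):
    units = aimd_step(units, backlog_growing=(step < 5))
    trace.append(units)
```

The multiplicative decrease is what lets the pool shrink quickly once demand falls, keeping billed instance-hours close to the actual workload.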

    Energy Harvesting for the Internet-of-Things: Measurements and Probability Models

    The success of future Internet-of-Things (IoT) based application deployments depends on the ability of wireless sensor platforms to sustain uninterrupted operation based on environmental energy harvesting. In this paper, we deploy a multi-transducer platform for photovoltaic and piezoelectric energy harvesting and collect raw data about the harvested power in commonly encountered outdoor and indoor scenarios. We couple the generated power profiles with probability mixture models and make our data and processing code freely available to the research community for wireless sensors and IoT-oriented applications. Our aim is to provide data-driven probability models that characterize the energy production process, which will substantially facilitate the coupling of energy harvesting statistics with energy consumption models for processing and transceiver designs within upcoming IoT deployments.
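    As an illustration of how such mixture models characterize harvested power, the sketch below evaluates the density of a two-component Gaussian mixture; the weights, means and deviations are placeholders for a two-regime scenario (e.g. shaded vs. direct light), not the paper's fitted values:

```python
import math

def mixture_pdf(x, weights, means, sigmas):
    """Density of a 1-D Gaussian mixture model evaluated at x.
    Parameters are illustrative, not the paper's fitted models."""
    total = 0.0
    for w, m, s in zip(weights, means, sigmas):
        total += w * math.exp(-0.5 * ((x - m) / s) ** 2) \
                   / (s * math.sqrt(2.0 * math.pi))
    return total

# Hypothetical two-regime model of harvested power (arbitrary units).
weights, means, sigmas = [0.7, 0.3], [2.0, 10.0], [0.5, 2.0]
```

A model of this shape captures the bimodality of environmental sources: most samples cluster around a low-power regime, with a heavier-tailed high-power regime appearing intermittently.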

    Media Query Processing for the Internet-of-Things: Coupling of Device Energy Consumption and Cloud Infrastructure Billing

    Audio/visual recognition and retrieval applications have recently garnered significant attention within Internet-of-Things (IoT) oriented services, given that video cameras and audio processing chipsets are now ubiquitous even in low-end embedded systems. In the most typical scenario for such services, each device extracts audio/visual features and compacts them into feature descriptors, which comprise media queries. These queries are uploaded to a remote cloud computing service that performs content matching for classification or retrieval applications. Two of the most crucial aspects of such services are: (i) controlling the device energy consumption when using the service; and (ii) reducing the billing cost incurred from the cloud infrastructure provider. In this paper, we derive analytic conditions for the optimal coupling between the device energy consumption and the incurred cloud infrastructure billing. Our framework encapsulates: the energy consumption to produce and transmit audio/visual queries; the billing rates of the cloud infrastructure; the number of devices concurrently connected to the same cloud server; the query volume constraint of each cluster of devices; and the statistics of the query data production volume per device. Our analytic results are validated via a deployment with: (i) the device side comprising compact image descriptors (queries) computed on Beaglebone Linux embedded platforms and transmitted to the Amazon Web Services (AWS) Simple Storage Service; and (ii) the cloud side carrying out image similarity detection via AWS Elastic Compute Cloud (EC2) instances, with AWS Auto Scaling used to control the number of instances according to demand. This work was supported in part by the European Union (Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant 655282 – F. Renna), in part by EPSRC under Grant EP/M00113X/1 and Grant EP/K033166/1, and in part by Innovate U.K. (project ACAME under Grant 131983).
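    The flavour of this coupling can be conveyed with a toy cost model: heavier on-device feature compression shrinks the uploaded query volume (so fewer cloud instances are billed) but costs more device energy. Every quantity below is an illustrative assumption, not the paper's analytic model:

```python
import math

def hourly_cost(compression_level, n_devices=50, queries_per_dev=120,
                e_query_j=2.0, joule_weight=1e-5,
                instance_rate=1.0, queries_per_instance=3000):
    """Toy device-energy / cloud-billing trade-off (hypothetical numbers).
    Higher compression multiplies device energy per query but divides the
    query volume the cloud side must absorb."""
    energy_j = n_devices * queries_per_dev * e_query_j * (1 + compression_level)
    query_volume = n_devices * queries_per_dev / (1 + compression_level)
    instances = max(1, math.ceil(query_volume / queries_per_instance))
    # Joules are folded into the billing currency via an assumed weight.
    return joule_weight * energy_j + instance_rate * instances

best = min(range(10), key=hourly_cost)
```

Even this crude model exhibits the interior optimum the paper's analytic conditions characterize: compressing too little wastes billing, compressing too much wastes device energy.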

    Physics-guided machine learning approaches to predict the ideal stability properties of fusion plasmas

    One of the biggest challenges in achieving the goal of producing fusion energy in tokamak devices is the necessity of avoiding disruptions of the plasma current due to instabilities. The disruption event characterization and forecasting (DECAF) framework has been developed for this purpose, integrating physics models of many causal events that can lead to a disruption. Two different machine learning approaches are proposed to improve the ideal magnetohydrodynamic (MHD) no-wall limit component of the kinetic stability model included in DECAF. First, a random forest regressor (RFR) was adopted to reproduce the DCON-computed change in plasma potential energy without wall effects, δW, for a large database of equilibria from the National Spherical Torus Experiment (NSTX). This tree-based method provides an analysis of the importance of each input feature, giving insight into the underlying physics phenomena. Second, a fully connected neural network was trained on sets of calculations with the DCON code to obtain an improved closed-form equation of the no-wall limit as a function of the relevant plasma parameters indicated by the RFR. The neural network was guided by the physics theory of ideal MHD in its extension outside the domain of the NSTX experimental data. The estimated value of δW has been incorporated into the DECAF kinetic stability model and tested against a set of experimentally stable and unstable discharges. Moreover, the neural network results were used to simulate a real-time stability assessment using only quantities available in real time. Finally, the portability of the model was investigated, showing encouraging results when testing the NSTX-trained algorithm on the Mega Ampere Spherical Tokamak (MAST).
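    A real-time assessment of the kind described above reduces to comparing an estimated energy change against its marginal value. A minimal sketch, using the classic no-wall scaling beta_N ≈ 4·li as a purely hypothetical surrogate for the trained network's closed form (this is not the DECAF model or the paper's fit):

```python
def surrogate_no_wall_dw(beta_n, internal_inductance):
    """Illustrative stand-in for a closed-form no-wall energy estimate:
    the margin is taken as the distance of normalized beta from the
    classic beta_N ~ 4*li scaling. Hypothetical, for structure only."""
    beta_limit = 4.0 * internal_inductance
    return beta_limit - beta_n

def flag_no_wall_stable(beta_n, li):
    # Stable while the estimated energy change remains positive.
    return surrogate_no_wall_dw(beta_n, li) > 0.0
```

The point of the surrogate approach is that a check of this shape can run in real time, since it needs only plasma parameters that are available during a discharge.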

    Studies of Shock Wave Interactions with Homogeneous and Isotropic Turbulence

    A nearly homogeneous, nearly isotropic compressible turbulent flow interacting with a normal shock wave has been studied experimentally in a large shock tube facility. Spatial resolution of the order of 8 Kolmogorov viscous length scales was achieved in the measurements of turbulence. A variety of turbulence-generating grids provided a wide range of turbulence scales. Integral length scales were found to decrease substantially through the interaction with the shock wave in all investigated cases, with flow Mach numbers ranging from 0.3 to 0.7 and shock Mach numbers from 1.2 to 1.6. The outcome of the interaction depends strongly on the state of compressibility of the incoming turbulence. The length scales in the lateral direction are amplified at small Mach numbers and attenuated at large Mach numbers; even at large Mach numbers, amplification of lateral length scales has been observed in the case of fine grids. In addition to the interaction with the shock, the present work has documented substantial compressibility effects in the incoming homogeneous and isotropic turbulent flow. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. The decay coefficient and the decay exponent decrease with increasing Mach number, while the virtual origin increases with increasing Mach number. A mechanism possibly responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid.
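    Power-law decay of the kind described is conventionally fitted by linear least squares in log-log coordinates. A minimal sketch, with the virtual origin taken as zero and a synthetic exponent chosen for illustration (not the measured values):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = c * x**p in log-log space; returns (c, p).
    Mirrors the decay-law fitting described above, with the virtual
    origin assumed to be zero for simplicity."""
    lx, ly = [math.log(v) for v in x], [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    p = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) \
        / sum((a - mx) ** 2 for a in lx)
    return math.exp(my - p * mx), p

# Synthetic decay data with an assumed exponent of -1.3 (illustrative only).
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [3.0 * v ** -1.3 for v in xs]
c, p = fit_power_law(xs, ys)
```

With a nonzero virtual origin x0, the same fit applies after shifting the abscissa to (x - x0), which is how the Mach-number dependence of the virtual origin would enter.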

    Dithen: A Computation-as-a-Service Cloud Platform For Large-Scale Multimedia Processing

    We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) for the control of the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide more than a 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction of the billing cost against the current state of the art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
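    The Kalman-filter resource estimation in (ii) can be illustrated with a scalar filter tracking a noisy per-task measurement (e.g. task duration). The state model and noise parameters below are assumptions for the sketch, not Dithen's actual tuning:

```python
def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model -- a minimal
    stand-in for a resource (task-duration) predictor. q is process
    noise, r is measurement noise; all values here are illustrative."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: state uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy task-duration samples (seconds, synthetic).
est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
```

The smoothed estimate, rather than the latest raw measurement, is what a controller like the AIMD loop above would consume, which is why the combination beats purely reactive estimation.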