140 research outputs found

    EETA: An Energy Efficient Transmission Alignment for Wireless Sensor Network Applications

    Get PDF
    Energy-conserving MAC protocols that perform adaptive duty cycling have been widely studied to improve energy efficiency in Wireless Sensor Networks (WSNs). In particular, asynchronous Low Power Listening (LPL) MAC protocols such as B-MAC, X-MAC, and ContikiMAC transmit a long preamble or consecutive data packets to achieve an efficient rendezvous between senders and receivers. However, this rendezvous leads to unnecessary channel utilization, since the senders occupy a large portion of the medium. Furthermore, when a node's traffic generation time overlaps with those of neighbouring nodes, the nodes frequently suffer spatially-correlated contention, incurring excessive channel contention. In this paper, we propose a novel traffic distribution scheme called Energy Efficient Transmission Alignment (EETA), which shifts the traffic generation time at the application layer. Using MAC-layer feedback that includes contention information, the cross-layer framework determines whether a node should delay its transmission, as sketched below. EETA is robust in heavily contending environments thanks to its traffic distribution feature. We evaluate the performance of EETA through diverse experiments on the TelosB platform. The results show that EETA improves overall energy efficiency by up to 35% and reduces latency by up to 48% compared to the existing scheme.
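    As a hedged illustration of the cross-layer decision described above (not the paper's implementation), the sketch below shows an application-layer sender consulting a MAC-layer contention estimate and deciding how far to shift its next traffic generation time; the contention metric, threshold, and shift window are assumptions.

```python
import random

# Illustrative sketch (not EETA's code): the application layer consults MAC-layer
# contention feedback and decides whether to shift its traffic generation time.
# The threshold and the maximum shift window below are assumptions.
DELAY_THRESHOLD = 0.6   # normalized contention level above which we defer
MAX_SHIFT_S = 2.0       # maximum shift of the generation time, in seconds

def next_transmission_offset(contention_level: float) -> float:
    """Return how long the app layer should postpone its next packet.

    contention_level: assumed MAC feedback in [0, 1], e.g. the fraction of
    recent clear-channel assessments that found the channel busy.
    """
    if contention_level < DELAY_THRESHOLD:
        return 0.0  # channel lightly loaded: send as originally scheduled
    # Spread correlated senders over time to relieve spatially-correlated contention.
    return random.uniform(0.0, MAX_SHIFT_S) * contention_level

if __name__ == "__main__":
    for c in (0.2, 0.7, 0.95):
        print(f"contention={c:.2f} -> shift by {next_transmission_offset(c):.2f}s")
```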

    Online Nonparametric Anomaly Detection based on Geometric Entropy Minimization

    Full text link
    We consider the online and nonparametric detection of abrupt and persistent anomalies, such as a change in the regular system dynamics at some time instance due to an anomalous event (e.g., a failure or a malicious activity). Combining the simplicity of the nonparametric Geometric Entropy Minimization (GEM) method with the timely detection capability of the Cumulative Sum (CUSUM) algorithm, we propose a computationally efficient online anomaly detection method that is applicable to high-dimensional datasets and, at the same time, achieves near-optimum average detection delay under a given false alarm constraint; the underlying CUSUM recursion is recalled below. We provide new insights into both GEM and CUSUM, including a new asymptotic analysis for GEM, which enables soft decisions for outlier detection, and a novel interpretation of CUSUM in terms of discrepancy theory, which helps us generalize it to the nonparametric GEM statistic. We numerically show, using both simulated and real datasets, that the proposed nonparametric algorithm attains performance close to that of the clairvoyant parametric CUSUM test. Comment: to appear in IEEE International Symposium on Information Theory (ISIT) 201
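    For reference, the CUSUM recursion that the method builds on can be written as follows, with $s_t$ a per-sample anomaly score (in the paper, derived from the nonparametric GEM statistic; its exact form is not reproduced here):

```latex
% Generic CUSUM recursion driven by a per-sample score s_t; the paper derives
% s_t from the nonparametric GEM statistic, whose exact form is not shown here.
\[
  W_t \;=\; \max\bigl(0,\; W_{t-1} + s_t\bigr), \qquad W_0 = 0,
\]
\[
  T \;=\; \inf\{\, t \ge 1 : W_t \ge h \,\},
\]
% The threshold h > 0 trades average detection delay against the false alarm rate.
```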

    PowerForecaster: Predicting Smartphone Power Impact of Continuous Sensing Applications at Pre-installation Time

    Get PDF
    Today's smartphone application (hereinafter 'app') markets miss a key piece of information: the power consumption of apps. This causes a severe problem for continuous sensing apps, as they consume significant power without users' awareness. Users have no choice but to repeatedly install one app after another and experience their power use firsthand. To break this exhaustive cycle, we propose PowerForecaster, a system that provides users with the power use of sensing apps at pre-installation time. Such advance power estimation is extremely challenging since the power cost of a sensing app varies greatly with users' physical activities and phone use patterns. We observe that the time an app spends on active sensing and processing can vary by up to a factor of three across sensor traces collected from 27 people over three weeks. PowerForecaster adopts a novel power emulator that emulates the power use of a sensing app while reproducing users' physical activities and phone use patterns, achieving accurate, personalized power estimation; the sketch below illustrates the trace-driven estimation idea. Our experiments with three commercial apps and two research prototypes show that PowerForecaster achieves 93.4% accuracy over 20 use cases. We also optimize the system to accelerate emulation and reduce overhead, and show the effectiveness of these optimization techniques.
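    To illustrate the trace-driven estimation idea (not PowerForecaster's actual emulator), the sketch below replays a user's activity and phone-use trace and accumulates the energy of each hardware component for the time the sensing app keeps it active; the component names and power numbers are placeholders, not measured values.

```python
# Illustrative sketch of trace-driven power estimation: replay a user's trace and
# sum the energy of each component while the sensing app keeps it active.
# Power numbers (mW) below are placeholders, not measurements.
COMPONENT_POWER_MW = {"accelerometer": 1.5, "cpu_active": 120.0, "wifi_tx": 250.0}

def estimate_energy_mj(trace):
    """trace: list of (component, active_seconds) tuples from the replayed user log."""
    return sum(COMPONENT_POWER_MW[c] * t for c, t in trace)  # mW * s = mJ

if __name__ == "__main__":
    user_trace = [("accelerometer", 3600), ("cpu_active", 420), ("wifi_tx", 15)]
    print(f"Estimated energy over the trace: {estimate_energy_mj(user_trace) / 1000:.1f} J")
```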

    ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜ ์„ผ์„œ ๋ณด์ • ๋ฐ ๋ชจ๋ฐ”์ผ ์„ผ์„œ ๋ฐฐ์น˜๋ฅผ ํ†ตํ•œ ๋„์‹œ ๋ฏธ์„ธ๋จผ์ง€ ์„ผ์„œ๋„คํŠธ์›Œํฌ ์ •ํ™•๋„ ํ–ฅ์ƒ

    Get PDF
    Master's thesis -- Seoul National University Graduate School: College of Engineering, Department of Mechanical Engineering, August 2020. Advisor: 이동준 (Dongjun Lee). Particulate matter (PM) sensors have been widely deployed to increase spatiotemporal resolution in the urban environment. As a cost-effective PM monitoring solution, low-cost PM sensors are natural candidates for dense sensor network nodes. However, doubts remain about the reliability of their data. In this thesis, we investigate the accuracy of a low-cost PM sensor by co-locating it with a governmental beta attenuation monitor (BAM) for 7.5 months and improve its accuracy with data-driven calibration. We study linear and nonlinear calibration, namely multiple linear regression (MLR) and a multilayer perceptron (MLP), and introduce a novel combined calibration; a simplified sketch is given after the table of contents below. The methods are evaluated in field experiments and compared with other methods and studies. In addition, the data-driven calibration model can be used not only by the co-located sensor node but also by other sensor nodes via the sensor network; the feasibility of this sensor network calibration is evaluated experimentally. (Korean abstract, translated:) To increase the spatiotemporal resolution of urban air quality measurement, particulate matter sensors are being deployed widely. Low-cost PM sensors are the representative practical option for high-resolution PM measurement, but questions about the reliability of their measurements remain unresolved. This study performs a long-term accuracy evaluation of a low-cost PM sensor: a multi-sensor platform was built and co-located with a high-reliability governmental monitoring station. Data-driven models were built using a multiple linear regression model (linear) and a multilayer perceptron neural network (nonlinear), and a combined estimation model was developed. These methods were evaluated through outdoor deployment experiments and compared against other estimation models and other studies. In addition, a data-driven model built at the monitoring station can be transferred to other nodes through the sensor network, and the feasibility of this approach was verified experimentally.
    Contents: 1 Introduction; 2 System Description (2.1 System Elements: 2.1.1 Beta Attenuation Monitor, 2.1.2 Multi-Sensor Platform; 2.2 System Configuration: 2.2.1 Sensor Platform Deployment, 2.2.2 Calibration Procedures and Evaluation); 3 Data-Driven Sensor Calibration (3.1 Related Studies: 3.1.1 Without a Calibration Model, 3.1.2 Previous Research; 3.2 Linear/Nonlinear Calibration: 3.2.1 Multiple Linear Regression, 3.2.2 Multilayer Perceptron, 3.2.3 Limitations of Linear/Nonlinear Calibration; 3.3 SMART Calibration: 3.3.1 Concepts of Calibration, 3.3.2 Procedures of SMART Calibration; 3.4 Experiments and Results: 3.4.1 Comparison with Other Calibration Methods, 3.4.2 Comparison with Other Studies, 3.4.3 Further Analysis of the Calibration Model); 4 Sensor Network Calibration (4.1 Related Study: 4.1.1 Sensor Network Calibration, 4.1.2 Mobile Sensor Node; 4.2 Transfer Calibration: 4.2.1 Concepts of Transfer Calibration; 4.3 Rendezvous Calibration; 4.4 Experiments and Results); 5 Conclusion and Future Work (5.1 Conclusion; 5.2 Future Work).
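    As a rough illustration of the linear/nonlinear calibration idea (not the thesis's SMART calibration procedure, whose combination rule is not described in the abstract), the sketch below fits a multiple linear regression and a multilayer perceptron to synthetic co-location data and blends their outputs; the features (raw PM reading, humidity, temperature), the synthetic reference values, and the 50/50 blend are all assumptions.

```python
# Minimal sketch of MLR/MLP calibration against a reference monitor, with a naive
# 50/50 blend standing in for the thesis's combined (SMART) calibration rule.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([0, 20, -5], [150, 90, 35], size=(500, 3))   # raw PM, humidity, temperature
y = 0.6 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, 500)    # synthetic BAM reference

mlr = LinearRegression().fit(X, y)                            # linear calibration
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)                  # nonlinear calibration

def combined_calibration(x):
    """Blend the linear and nonlinear estimates (simple average as a placeholder)."""
    x = np.atleast_2d(x)
    return 0.5 * mlr.predict(x) + 0.5 * mlp.predict(x)

print(combined_calibration([80.0, 55.0, 12.0]))
```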

    Distributed degree-based link scheduling for collision avoidance in wireless sensor networks.

    Get PDF
    Wireless sensor networks (WSNs) consist of multiple sensor nodes that communicate with each other under constrained energy resources. Retransmissions caused by collisions and interference during communication among sensor nodes increase the overall network delay. Since network delay grows with nodes' waiting times, network performance degrades. A link scheduling scheme is therefore needed so that nodes can communicate without collision and interference. In a distributed WSN environment, a sensor node has only limited information about its neighbouring nodes, so a comprehensive link scheduling scheme is required for distributed WSNs. Many schemes in the literature prevent collisions and interference through the time division multiple access (TDMA) protocol. However, accounting for collisions and interference in a TDMA-based schedule increases delay and decreases communication efficiency. This paper proposes the distributed degree-based link scheduling (DDLS) scheme, based on TDMA. DDLS achieves link scheduling more efficiently than existing schemes and attains low delay and a low duty cycle in the distributed environment. Communication between sensor nodes in DDLS is based on a collision-avoidance maximal independent link set, which enables collision-free timeslots to be assigned to sensor nodes while decreasing the number of timeslots needed, the delay, and the duty cycle; a toy slot-assignment example is sketched below. Simulation results show that DDLS reduces the scheduling length by 81% on average, the transmission delay by 82%, and the duty cycle by over 85% compared with a distributed collision-free low-latency scheduling scheme.
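    To make the slot-assignment idea concrete, the toy sketch below schedules links in decreasing order of endpoint degree and gives each link the smallest timeslot not used by a conflicting link (here, any link sharing an endpoint). This is only an illustration of degree-based TDMA scheduling, not the DDLS algorithm; the topology and the conflict model are assumptions.

```python
# Toy degree-based TDMA link scheduling: higher-degree links first, smallest free slot.
from collections import defaultdict

links = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")]

def conflicts(l1, l2):
    return bool(set(l1) & set(l2)) and l1 != l2   # shared endpoint => cannot share a slot

degree = defaultdict(int)
for u, v in links:
    degree[u] += 1
    degree[v] += 1

schedule = {}
for link in sorted(links, key=lambda l: degree[l[0]] + degree[l[1]], reverse=True):
    busy = {schedule[other] for other in schedule if conflicts(link, other)}
    slot = 0
    while slot in busy:          # take the smallest timeslot free of conflicts
        slot += 1
    schedule[link] = slot

print(schedule)
```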

    A Survey on Resource Management in IoT Operating Systems

    Get PDF
    Recently, the Internet of Things (IoT) concept has attracted considerable attention due to its capability to translate our physical world into a digital cyber world with meaningful information. IoT devices are small in size, vast in number, contain less memory, use less energy, and have more computational capabilities than ever before. These scarce resources are managed by small operating systems (OSs) that are specially designed to support IoT devices' diverse applications and operational requirements. These IoT OSs are responsible for managing the constrained resources of IoT devices efficiently and in a timely manner. In this paper, we discuss IoT devices and OS resource management. In particular, the resource management mechanisms of state-of-the-art IoT OSs, such as Contiki, TinyOS, and FreeRTOS, are investigated. The different dimensions of their resource management approaches (including process management, memory management, energy management, communication management, and file management) are studied, and their advantages and limitations are highlighted.

    A Case for Time Slotted Channel Hopping for ICN in the IoT

    Full text link
    Recent proposals to simplify the operation of the IoT include the use of Information Centric Networking (ICN) paradigms. While this is promising, several challenges remain. In this paper, our core contributions (a) leverage ICN communication patterns to dynamically optimize the use of TSCH (Time Slotted Channel Hopping), a wireless link-layer technology increasingly popular in the IoT, and (b) make IoT-style routing adaptive to names, resources, and traffic patterns throughout the network -- both without cross-layering. Through a series of experiments on the FIT IoT-LAB testbed interconnecting typical IoT hardware, we find that our approach is fully robust against wireless interference and almost halves the energy consumed for transmission compared to CSMA. Most importantly, our adaptive scheduling prevents the time-slotted MAC layer from sacrificing throughput and delay; a toy illustration of traffic-proportional cell allocation is sketched below.
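    As a loose illustration of adapting a time-slotted schedule to ICN traffic (not the paper's scheduler), the sketch below allocates TSCH transmission cells in a slotframe roughly in proportion to the Interest rate observed per name prefix; the slotframe length, prefixes, and rates are made up for the example.

```python
# Illustrative only: distribute slotframe cells proportionally to per-prefix Interest rates.
SLOTFRAME_CELLS = 16

def allocate_cells(interest_rates):
    """interest_rates: dict mapping ICN name prefix -> Interests/s observed locally."""
    total = sum(interest_rates.values())
    # Round per-prefix shares; a real scheduler would also clamp to the slotframe size.
    return {p: max(1, round(SLOTFRAME_CELLS * r / total)) for p, r in interest_rates.items()}

print(allocate_cells({"/building1/temp": 4.0, "/building1/hvac": 1.0, "/alarm": 0.5}))
```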

    Near Lossless Time Series Data Compression Methods using Statistics and Deviation

    Full text link
    The last two decades have seen tremendous growth in data collection driven by recent technologies, including the Internet of Things (IoT), e-health, Industrial IoT 4.0, autonomous vehicles, etc. The challenges of data transmission and storage can be handled by state-of-the-art data compression methods. Recent data compression methods based on deep learning perform better than conventional methods, but they require large amounts of data and resources for training. Furthermore, it is difficult to deploy such deep learning-based solutions on IoT devices due to their resource-constrained nature. In this paper, we propose lightweight data compression methods based on data statistics and deviation; a minimal sketch of the idea appears below. The proposed method outperforms the deep learning method in terms of compression ratio (CR). We simulate and compare the proposed data compression methods for various time series signals, e.g., accelerometer, gas sensor, gyroscope, and electrical power consumption data. In particular, the proposed method achieves 250.8%, 94.3%, and 205% higher CR than the deep learning method for the GYS, Gactive, and ACM datasets, respectively. The code and data are available at https://github.com/vidhi0206/data-compression . Comment: 6 pages, 2 figures and 9 tables are included
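    A minimal sketch of a deviation-based, near-lossless compressor in the spirit described above (not the paper's exact method): a sample is kept only when it deviates from the last kept sample by more than a tolerance derived from the signal's standard deviation; the tolerance factor is an assumption.

```python
# Deviation-thresholded, near-lossless compression of a time series.
import statistics

def compress(samples, k=0.1):
    tol = k * statistics.pstdev(samples)      # deviation budget from signal statistics
    kept = [samples[0]]
    for x in samples[1:]:
        if abs(x - kept[-1]) > tol:           # keep only samples that move beyond the budget
            kept.append(x)
    return kept, len(samples) / len(kept)     # (compressed series, compression ratio)

signal = [20.0, 20.1, 20.1, 20.2, 23.0, 23.1, 23.0, 25.5, 25.6, 25.5]
kept, cr = compress(signal)
print(kept, f"CR = {cr:.2f}")
```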