34 research outputs found

    A survey of network lifetime maximization techniques in wireless sensor networks

    Emerging technologies, such as the Internet of Things, smart applications, smart grids and machine-to-machine networks, stimulate the deployment of autonomous, self-configuring, large-scale wireless sensor networks (WSNs). Efficient energy utilization is crucial to maintaining a fully operational network for the longest possible period of time. Network lifetime (NL) maximization techniques have therefore attracted a lot of research attention owing to their importance in extending the flawless operation of battery-constrained WSNs. In this paper, we review recent developments in WSNs, including their applications, design constraints and lifetime estimation models. Commencing with a portrayal of the rich variety of NL definitions used as design objectives for WSNs, the family of NL maximization techniques is introduced, and design guidelines with examples are provided to show the potential improvements attainable under the different design criteria.
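The "rich variety of NL definitions" mentioned above can be made concrete with a small sketch. Assuming we already know each node's battery-depletion time, two common lifetime definitions (time until the first node dies, and time until the alive fraction drops below a target) might be computed as follows; the function names and the simple depletion-time model are illustrative assumptions, not the survey's own formulation:

```python
def lifetime_first_death(death_times):
    """NL defined as the time until the first sensor node depletes its battery."""
    return min(death_times)

def lifetime_fraction_alive(death_times, fraction=0.5):
    """NL defined as the time until fewer than `fraction` of nodes remain alive."""
    ordered = sorted(death_times)
    # The death that drops the alive share below `fraction` of the network
    k = int(len(ordered) * (1.0 - fraction))
    return ordered[k]

# Toy example: five nodes with battery-depletion times in hours
deaths = [40.0, 55.0, 80.0, 100.0, 120.0]
strict_nl = lifetime_first_death(deaths)        # conservative definition
relaxed_nl = lifetime_fraction_alive(deaths)    # tolerates partial node loss
```

The gap between `strict_nl` and `relaxed_nl` on the same deployment illustrates why the choice of NL definition changes which maximization technique looks best.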

    Extending Wireless Rechargeable Sensor Network Life without Full Knowledge

    When extending the life of Wireless Rechargeable Sensor Networks (WRSNs), one challenge is charging the network as it grows larger. Overcoming this limitation would render a WRSN more practical and highly adaptable to growth in the real world. Most charging algorithms require a priori full knowledge of the sensor nodes' power levels in order to determine which nodes require charging. In this work, we present a probabilistic algorithm that extends the life of a scalable WRSN without a priori power knowledge and without full network exploration. We develop a probability bound on the power level of the sensor nodes and utilize this bound to make decisions while exploring a WRSN. We verify the algorithm by simulating a wireless power transfer unmanned aerial vehicle charging a WRSN to extend its life. Our results show that, without such knowledge, our proposed algorithm extends the life of a WRSN to, on average, 90% of what an optimal full-knowledge algorithm can achieve. This means that the charging robot does not need to explore the whole network, which enables the scaling of WRSNs. We analyze the impact of network parameters on our algorithm and show that it is insensitive to a large range of parameter values.
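The core idea, deciding whether to charge a node using a probability bound on its power level rather than querying it, can be sketched as follows. This is an illustrative model assuming each node's energy drain per unit time has a known mean and standard deviation (so the cumulative drain is approximately normal); the paper's actual bound and parameter names may differ:

```python
import math

def prob_below_threshold(initial_energy, mean_drain, drain_std, elapsed, threshold):
    """Estimate the probability that a node's remaining energy has fallen below
    `threshold` after `elapsed` time units, without querying the node.
    Assumes cumulative drain ~ Normal(mean_drain * elapsed, drain_std**2 * elapsed)."""
    expected_remaining = initial_energy - mean_drain * elapsed
    spread = drain_std * math.sqrt(elapsed)
    z = (threshold - expected_remaining) / spread
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def should_charge(probability, risk=0.2):
    """Charge a node when the chance it is critically low exceeds the risk budget."""
    return probability >= risk
```

A UAV exploring the network can call `should_charge` per node and skip nodes whose depletion probability is still negligible, which is what removes the need for full network exploration.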

    Increased energy efficiency in LTE networks through reduced early handover

    “A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy”. Long Term Evolution (LTE) has been widely adopted by mobile operators and introduced as a solution to fulfil the ever-growing data requirements of users (UEs) in cellular networks. Enlarged data demands engage resource blocks over prolonged time intervals, resulting in greater dynamic power consumption on the downlink at the base station. The realisation of UE requests therefore comes at the cost of increased power consumption, which directly affects operators' operational expenditure; it also contributes to increased CO2 emissions and hence to global warming. According to research, global Information and Communication Technology (ICT) systems consume approximately 1200 to 1800 terawatt hours of electricity annually. The mobile communication industry is accountable for more than one third of this ICT power consumption, owing to increased data requirements, numbers of UEs and coverage areas. In terms of global warming, telecommunication is responsible for 0.3 to 0.4 percent of worldwide CO2 emissions. Moreover, user data volume is expected to increase by a factor of 10 every five years, which results in a 16 to 20 percent increase in associated energy consumption and thus further enlarges the impact on our environment. This research work focuses on the importance of energy saving in LTE and initially proposes a bandwidth expansion based energy saving scheme, which combines two resource blocks together to form a single super resource block (RB), thereby reducing the Physical Downlink Control Channel (PDCCH) overhead. The decreased PDCCH overhead reduces dynamic power consumption by up to 28 percent. Subsequently, a novel reduced early handover (REHO) based idea is proposed and combined with bandwidth expansion to form an enhanced energy saving scheme.
    System level simulations are performed to investigate the performance of the REHO scheme; it was found that reduced early handover provided around 35% improved energy saving compared to the LTE standard in a 3rd Generation Partnership Project (3GPP) based scenario. Since there is a direct relationship between energy consumption, CO2 emissions and vendors' operational expenditure (OPEX), the reduced power consumption and increased energy efficiency of REHO make it a step towards greener communication, with a smaller CO2 footprint and reduced operational expenditure. The main idea of REHO lies in the fact that it initiates handovers earlier, and turns off the freed resource blocks, compared to the LTE standard. The time difference (in Transmission Time Intervals) between a REHO based early handover and an LTE standard handover is therefore the key component of the energy saving achieved, and it is estimated through an axiom of Euclidean geometry. Moreover, overall system efficiency is investigated through the analysis of numerous performance related parameters under REHO and the LTE standard. This led to a key finding that can guide vendors on the trade-off between energy saving, radio link failure and other important parameters.
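The energy-saving mechanism described above, switching off freed resource blocks for the interval between the earlier REHO handover and the standard LTE handover, reduces to a simple arithmetic model. The sketch below is an illustrative back-of-the-envelope calculation; the per-RB power figure and parameter names are assumptions, not values from the thesis:

```python
def reho_energy_saving(p_rb_watts, freed_rbs, delta_tti_ms, handovers):
    """Energy saved (joules) when handovers occur `delta_tti_ms` earlier than
    the LTE standard and the freed resource blocks are powered off for that
    window. Illustrative model: saving = power per RB * RBs freed * time * count."""
    per_handover_joules = p_rb_watts * freed_rbs * (delta_tti_ms / 1000.0)
    return per_handover_joules * handovers

# Hypothetical scenario: 0.8 W per RB, 10 RBs freed, handover 50 ms earlier,
# 100 handovers over the observation window
saved = reho_energy_saving(0.8, 10, 50.0, 100)
```

The model makes the dependency explicit: the saving scales linearly with the time difference between the REHO and standard handover instants, which is why estimating that difference accurately is central to the scheme.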

    Detecting IoT user behavior and sensitive information in encrypted IoT-app traffic

    Many people use smart-home devices, also known as the Internet of Things (IoT), in their daily lives. Most IoT devices come with a companion mobile application that users need to install on their smartphone or tablet in order to control, configure, and interface with the IoT device. IoT devices send information about their users from their app directly to the IoT manufacturer's cloud; we call this the ''app-to-cloud way''. In this research, we present a tool called the IoT-app privacy inspector that can automatically infer the following from IoT network traffic: the packet that reveals the user's interaction type with the IoT device via its app (e.g. login), the packets that carry sensitive Personally Identifiable Information (PII), and the content type of such sensitive information (e.g. the user's location). We use the Random Forest classifier, a supervised machine learning algorithm, on features extracted from the network traffic. To train and test the three different multi-class classifiers, we collect and label network traffic from different IoT devices via their apps. We obtain the following classification accuracy values for the three aforementioned types of information: 99.4%, 99.8%, and 99.8%. This tool can help IoT users take an active role in protecting their privacy.
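Because the traffic is encrypted, classifiers of this kind rely on flow metadata rather than payloads. A minimal sketch of the feature-extraction step that would feed a Random Forest is shown below; the exact feature set and packet-record layout are assumptions for illustration, not the paper's published pipeline:

```python
from statistics import mean, stdev

def packet_features(packets):
    """Turn one flow's packet records into a fixed-length feature vector of the
    kind typically fed to a Random Forest traffic classifier.
    Each packet is (timestamp_s, size_bytes, direction), where direction 'out'
    means app-to-cloud. Field choices here are illustrative assumptions."""
    sizes = [size for _, size, _ in packets]
    gaps = [b[0] - a[0] for a, b in zip(packets, packets[1:])]
    out_ratio = sum(1 for p in packets if p[2] == 'out') / len(packets)
    return [
        len(packets),                           # flow length in packets
        sum(sizes),                             # total bytes transferred
        mean(sizes),                            # mean packet size
        stdev(sizes) if len(sizes) > 1 else 0.0,  # packet-size variability
        out_ratio,                              # share of app-to-cloud packets
        mean(gaps) if gaps else 0.0,            # mean inter-arrival time
    ]
```

Vectors like these, labeled by the user action that produced them (e.g. login), are what the three multi-class classifiers would be trained on.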

    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity have both become critical factors affecting the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating the various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of missing packets by exchanging network coded packets. Although performance improvements are directly proportional to increases in computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
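The defining property of instant decodability can be sketched in a few lines: an XOR combination of packets is instantly decodable for a receiver exactly when that receiver misses one packet of the combination and already holds the rest. The check below is a minimal illustration of that property, not any particular packet-selection algorithm from the survey:

```python
def instantly_decodable_for(combo, wants, has):
    """A coded packet formed by XOR-ing the packets in `combo` is instantly
    decodable for a receiver iff the receiver misses exactly one packet in the
    combination and already holds all the others; it then recovers the missing
    packet by XOR-ing the known ones out of the coded packet."""
    missing = [p for p in combo if p in wants]
    known = [p for p in combo if p not in wants]
    return len(missing) == 1 and all(p in has for p in known)

# Toy example over packets {1, 2, 3}: receiver A misses packet 1, receiver B
# misses packet 2. Sending 1 XOR 2 serves both receivers in one transmission.
has_A, wants_A = {2, 3}, {1}
has_B, wants_B = {1, 3}, {2}
combo = (1, 2)
```

A centralized scheduler (strict or generalized IDNC) searches for combinations like `combo` that are instantly decodable for as many targeted receivers as possible per transmission.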

    Advancing Data Collection, Management, and Analysis for Quantifying Residential Water Use via Low Cost, Open Source, Smart Metering Infrastructure

    Urbanization, climate change, aging infrastructure, and the cost of delivering water to residential customers make it vital that we achieve higher efficiency in the management of urban water resources. Understanding how water is used at the household level is vital for this objective. Water meters measure water use for billing purposes, commonly at a monthly or coarser temporal resolution. This is insufficient to understand where water is used (i.e., the distribution of water use across different fixtures like toilets, showers, and outdoor irrigation), when water is used (i.e., identifying peaks of consumption, instantaneous or at hourly, daily, or weekly intervals), the efficiency of water-using fixtures, or water use behaviors across different households. Most smart meters available today are not capable of collecting data at the temporal resolutions needed to fully characterize residential water use, and managing these data represents a challenge given the rapidly increasing volume generated. The research in this dissertation presents low cost, open source cyberinfrastructure (datalogging and data management systems) to collect and manage high temporal resolution, residential water use data. Performance testing of the cyberinfrastructure demonstrated the scalability of the system to multiple hundreds of simultaneous data collection devices. Using this cyberinfrastructure, we conducted a case study in the cities of Logan and Providence, Utah, where we found significant variability in the temporal distribution, timing, and volumes of indoor water use. This variability can impact the design of water conservation programs, estimation and forecasting of water demand, and the sizing of future water infrastructure. Outdoor water use was the largest component of residential water use, yet homeowners were not significantly overwatering their landscapes. Opportunities to improve the efficiency of water-using fixtures and to conserve water by promoting behavior changes exist among participants.
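The "when water is used" analysis described above starts by aggregating high-resolution meter readings into coarser bins where consumption peaks become visible. A minimal sketch of that step is below; the reading format (timestamp in seconds, volume in liters) and function names are illustrative assumptions, not the dissertation's actual data model:

```python
from collections import defaultdict

def hourly_volumes(readings):
    """Aggregate high-resolution meter readings (timestamp_s, liters) into
    hourly totals, a first step toward locating peaks of consumption."""
    hours = defaultdict(float)
    for timestamp_s, liters in readings:
        hours[int(timestamp_s // 3600)] += liters
    return dict(hours)

def peak_hour(readings):
    """Hour index with the largest total volume."""
    volumes = hourly_volumes(readings)
    return max(volumes, key=volumes.get)

# Toy example: sub-hourly pulses concentrated in the second hour
readings = [(0, 1.0), (1800, 2.0), (3600, 5.0), (4000, 1.0), (7200, 0.5)]
```

The same binning at daily or weekly resolution supports the demand-forecasting and infrastructure-sizing uses mentioned in the abstract.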