305 research outputs found

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    Get PDF
    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low-power circuit and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
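
    As a toy illustration of the first technical thrust (balancing signal-dependent processing against communication), the sketch below models the total energy of a compress-then-transmit strategy against raw transmission. It is a back-of-the-envelope model only; the per-bit energy constants and the linear processing cost are invented assumptions, not values from the paper.

    ```python
    # Toy energy model: compress n_bits by `ratio`, then transmit the result.
    # e_tx   = radio energy per transmitted bit (assumed)
    # e_proc = CPU energy per input bit processed by the compressor (assumed)

    def total_energy(n_bits, ratio, e_tx=50e-9, e_proc=5e-9):
        e_compression = n_bits * e_proc            # signal-dependent processing
        e_transmission = (n_bits / ratio) * e_tx   # fewer bits over the radio
        return e_compression + e_transmission

    if __name__ == "__main__":
        n = 1_000_000                              # one megabit of sensor data
        print(f"raw transmission -> {n * 50e-9 * 1e3:.3f} mJ")
        for ratio in (2, 4, 8, 16):
            print(f"ratio {ratio:2d}x -> {total_energy(n, ratio) * 1e3:.3f} mJ")
    ```

    Under these assumed constants, compression pays off whenever the processing energy spent per bit is smaller than the transmission energy saved, which is exactly the balance the framework's first thrust optimizes.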

    Edge Computing for IoT

    Full text link
    Over the past few years, the idea of edge computing has seen substantial expansion in both academic and industrial circles. This computing approach has garnered attention due to its integrating role in advancing various state-of-the-art technologies such as the Internet of Things (IoT), 5G, artificial intelligence, and augmented reality. In this chapter, we introduce computing paradigms for IoT, offering an overview of the current cutting-edge computing approaches that can be used with IoT. Furthermore, we go deeper into edge computing paradigms, specifically focusing on cloudlet and mobile edge computing. After that, we investigate the architecture of edge computing-based IoT, its advantages, and the technologies that make edge computing-based IoT possible, including artificial intelligence and lightweight virtualization. Additionally, we review real-life case studies of how edge computing is applied in IoT-based intelligent systems, including areas like healthcare, manufacturing, agriculture, and transportation. Finally, we discuss current research obstacles and outline potential future directions for further investigation in this domain. Comment: 19 pages, 5 figures, book chapter in: Donta, P.K., Hazra, A., Lovén, L. (eds) Learning Techniques for the Internet of Things. Springer, Cham.

    DESIGN FRAMEWORK FOR INTERNET OF THINGS BASED NEXT GENERATION VIDEO SURVEILLANCE

    Get PDF
    Modern artificial intelligence and machine learning open up a new era for video surveillance systems. Next-generation video surveillance in the Internet of Things (IoT) environment is an emerging research area because of its high bandwidth demands, big-data generation, resource-constrained video surveillance nodes, and high energy consumption for real-time applications. In this thesis, various opportunities and functional requirements that a next-generation video surveillance system should achieve with the power of video analytics, artificial intelligence and machine learning are discussed. This thesis also proposes a new video surveillance system architecture that introduces fog computing into the IoT-based system, and describes the facilities and benefits of the proposed system, which can meet the forthcoming requirements of surveillance. Different challenges and issues faced by video surveillance in the IoT environment are discussed, and a fog-cloud integrated architecture is evaluated to identify and eliminate those issues. The focus of this thesis is to evaluate the IoT-based video surveillance system. To this end, two case studies were performed to demonstrate the value of an energy- and bandwidth-efficient video surveillance system. In one case study, an IoT-based power-efficient color frame transmission and generation algorithm for video surveillance applications is presented. The conventional way is to transmit all R, G and B components of all frames. Using the proposed technique, instead of sending all components, first one color frame is sent, followed by a series of gray-scale frames. After a certain number of gray-scale frames, another color frame is sent, followed by the same number of gray-scale frames. This process is repeated throughout the video surveillance session. In the decoder, color information is extracted from the color frame and then used to colorize the gray-scale frames. In the other case study, a bandwidth-efficient and low-complexity frame reproduction technique that is also applicable to IoT-based video surveillance applications is presented. Using the second technique, only pixel intensities that differ heavily from the corresponding pixels of the previous frame are sent. If a pixel intensity is similar or nearly similar to that of the previous frame, the information is not transferred. With this objective, a bit stream is created for every frame with a predefined protocol. On the cloud side, the frame information can be reproduced by applying the reverse protocol to the bit stream. Experimental results of the two case studies show that the proposed IoT-based approach gives better results than traditional techniques in terms of both energy efficiency and quality of the video, and therefore can enable sensor nodes in IoT to perform more operations under energy constraints.
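
    The first case study's transmission schedule, and one naive way a decoder could reuse color information, can be sketched as follows. This is an illustrative reading of the scheme, not the thesis code: the period of 8 frames and the chroma-ratio colorization are assumptions (the abstract does not specify the actual colorization algorithm).

    ```python
    import numpy as np

    PERIOD = 8   # 1 color frame followed by PERIOD - 1 gray-scale frames (assumed)
    EPS = 1e-6

    def encode(frames):
        """Yield ('C', rgb) or ('G', gray) packets; gray is ~1/3 the data."""
        for i, frame in enumerate(frames):           # frame: HxWx3 float array
            if i % PERIOD == 0:
                yield ("C", frame)                   # full RGB color frame
            else:
                yield ("G", frame.mean(axis=2))      # luminance only

    def decode(packets):
        chroma = None                                # set by the first 'C' packet
        for kind, data in packets:
            if kind == "C":
                gray = data.mean(axis=2)
                chroma = data / (gray[..., None] + EPS)  # per-pixel color ratios
                yield data
            else:                                    # colorize a gray-scale frame
                yield np.clip(data[..., None] * chroma, 0, 255)

    frames = [np.random.default_rng(i).integers(0, 256, (4, 4, 3)).astype(float)
              for i in range(10)]
    reconstructed = list(decode(encode(frames)))
    ```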

    Distributed Anomaly Detection using Autoencoder Neural Networks in WSN for IoT

    Full text link
    Wireless sensor networks (WSN) are fundamental to the Internet of Things (IoT) by bridging the gap between the physical and the cyber worlds. Anomaly detection is a critical task in this context as it is responsible for identifying various events of interest such as equipment faults and undiscovered phenomena. However, this task is challenging because of the elusive nature of anomalies and the volatility of the ambient environments. In a resource-scarce setting like WSN, this challenge is further elevated and weakens the suitability of many existing solutions. In this paper, for the first time, we introduce autoencoder neural networks into WSN to solve the anomaly detection problem. We design a two-part algorithm that resides on sensors and the IoT cloud respectively, such that (i) anomalies can be detected at sensors in a fully distributed manner without the need for communicating with any other sensors or the cloud, and (ii) the relatively more computation-intensive learning task can be handled by the cloud with a much lower (and configurable) frequency. In addition to the minimal communication overhead, the computational load on sensors is also very low (of polynomial complexity) and readily affordable by most COTS sensors. Using a real WSN indoor testbed and sensor data collected over 4 consecutive months, we demonstrate via experiments that our proposed autoencoder-based anomaly detection mechanism achieves high detection accuracy and a low false alarm rate. It is also able to adapt to unforeseeable and new changes in a non-stationary environment, thanks to the unsupervised learning feature of our chosen autoencoder neural networks. Comment: 6 pages, 7 figures, IEEE ICC 2018.
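
    The sensor/cloud split can be illustrated with a deliberately tiny linear autoencoder in plain numpy. The paper's actual network, features, and thresholding rule are not reproduced here; the 3-sigma threshold and all hyper-parameters below are assumptions.

    ```python
    import numpy as np

    def train_autoencoder(X, k=2, lr=0.1, epochs=500):
        """Cloud side: fit encoder W1 (d x k) and decoder W2 (k x d) by gradient descent."""
        rng = np.random.default_rng(0)
        n, d = X.shape
        W1 = rng.normal(scale=0.1, size=(d, k))
        W2 = rng.normal(scale=0.1, size=(k, d))
        for _ in range(epochs):
            E = X @ W1 @ W2 - X                      # reconstruction errors
            g2 = W1.T @ (X.T @ E) * 2 / (n * d)
            g1 = X.T @ (E @ W2.T) * 2 / (n * d)
            W1 -= lr * g1
            W2 -= lr * g2
        return W1, W2

    def detect(x, W1, W2, threshold):
        """Sensor side: flag an anomaly locally, with no uplink needed."""
        return np.mean((x @ W1 @ W2 - x) ** 2) > threshold

    # The cloud trains on 'normal' data, then ships weights + threshold to sensors.
    X = np.random.default_rng(1).normal(size=(200, 4))
    X[:, 1] = 0.5 * X[:, 0]                          # correlated 'normal' structure
    W1, W2 = train_autoencoder(X)
    errs = np.mean((X @ W1 @ W2 - X) ** 2, axis=1)
    threshold = errs.mean() + 3 * errs.std()         # assumed 3-sigma rule
    print(detect(np.array([5.0, -5.0, 0.0, 0.0]), W1, W2, threshold))  # True
    ```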

    Extension of Cloud Computing to Small Satellites

    Get PDF
    Time-to-insight is a critical measure in a number of satellite mission applications: detection and warning of fast-moving events like fires and floods, or identification and tracking of satellites or missiles, for example. Current data flows delay the time-to-insight by minutes or hours, as all collected data must be downlinked in one or more contact windows, then transited over terrestrial networks to the location of the analytic software. Additionally, mission applications on spacecraft are often static: built prior to launch, they cannot rapidly adapt to changing needs based on these insights. To reduce time-to-insight and provide a dynamic application update capability, Amazon Web Services (AWS), D-Orbit, and Unibap conducted a joint experiment in which we deployed AWS edge compute and network management software onto Unibap’s SpaceCloud® iX5 platform for edge computing in space, integrated onto a D-Orbit ION Satellite Carrier launched into low-earth orbit (LEO) in early 2022. In this paper, we present the results of this experiment. We will discuss the software specifics and network management capabilities we developed to write mission applications and to update those applications on-orbit, and detail the process of mission deployment and modification, communications latency, and data volume reduction. We will also discuss how the space and satellite community can use this capability to deploy new applications, performing complex tasks and reducing time-to-insight, to cloud-enabled satellites immediately without needing to wait for a new launch.

    When Edge Computing Meets Compact Data Structures

    Full text link
    Edge computing enables data processing and storage closer to where the data are created. Given the largely distributed compute environment and the significantly dispersed data distribution, there are increasing demands for data sharing and collaborative processing on the edge. Since data shuffling can dominate the overall execution time of collaborative processing jobs, and considering the limited power supply and bandwidth resources in edge environments, it is crucial and valuable to reduce the communication overhead across edge devices. Compared with data compression, compact data structures (CDS) seem more suitable in this case, given their capability of allowing data to be queried, navigated, and manipulated directly in a compact form. However, the relevant work on applying CDS to edge computing generally focuses on the intuitive benefit of reduced data size, while few discussions of the challenges are given, not to mention empirical investigations into real-world edge use cases. This research highlights the challenges, opportunities, and potential scenarios of CDS implementation in edge computing. Driven by the use case of shuffling-intensive data analytics, we propose a three-layer architecture for CDS-aided data processing and particularly study the feasibility and efficiency of the CDS layer. We expect this research to foster conjoint research efforts on CDS-aided edge data analytics and to make wider practical impacts.
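
    As a concrete example of data that can be queried directly in a compact form, the sketch below implements one classic CDS building block: a bit vector with sampled prefix counts that answers rank queries without scanning the whole sequence. It illustrates the interface only; a real CDS packs the bits into machine words rather than a Python list, and the block size is an arbitrary assumption.

    ```python
    class RankBitVector:
        """Bit vector supporting rank1(i) = number of 1-bits in bits[0:i]."""
        BLOCK = 64                                   # sampling interval (assumed)

        def __init__(self, bits):
            self.bits = list(bits)
            self.samples = [0]                       # 1-bits before each block
            for j in range(0, len(self.bits), self.BLOCK):
                self.samples.append(self.samples[-1] + sum(self.bits[j:j + self.BLOCK]))

        def rank1(self, i):
            j = i // self.BLOCK                      # jump to the sampled count,
            return self.samples[j] + sum(self.bits[j * self.BLOCK:i])  # scan the rest

    bv = RankBitVector([1, 0, 1, 1, 0, 1])
    print(bv.rank1(4))                               # -> 3
    ```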

    FORTE: an extensible framework for robustness and efficiency in data transfer pipelines

    Get PDF
    In the age of big data and growing product complexity, it is common to monitor many aspects of a product or system, in order to extract well-founded intelligence and draw conclusions, to continue driving innovation. Automating and scaling processes in data pipelines becomes essential to keep pace with the increasing rates of data generated by such practices, while meeting security, governance, scalability and resource-efficiency demands. We present FORTE, an extensible framework for robustness and transfer-efficiency in data pipelines. We identify sources of potential bottlenecks and explore the design space of approaches to deal with the challenges they pose. We study and evaluate the synergetic effects of data compression and in-memory processing, as well as task scheduling, in association with pipeline performance. A prototype of FORTE is implemented and studied in a use case at Volvo Trucks for high-volume production-level data sets, on the order of hundreds of gigabytes to terabytes per burst. Various general-purpose lossless data compression algorithms are evaluated, in order to balance compression effectiveness and time in the pipeline. All in all, FORTE makes it possible to deal with trade-offs and achieve benefits in latency and sustainable rate (up to 1.8 times better) and effectiveness in resource utilisation, all while also enabling additional features such as integrity verification, logging, monitoring and traceability, as well as cataloguing of transferred data. We also note that the resource-efficiency improvements achievable with FORTE, and its extensibility, can imply further benefits regarding scheduling, orchestration and energy-efficiency in such pipelines.
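
    The compression-effectiveness-versus-time evaluation can be reproduced in miniature with Python's standard lossless codecs as stand-ins for the algorithms FORTE actually benchmarks (which the abstract does not name); the synthetic payload is an assumption.

    ```python
    import bz2, lzma, time, zlib

    payload = b"timestamp,signal,value\n" * 200_000   # synthetic burst, ~4.6 MB

    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        t0 = time.perf_counter()
        out = compress(payload)
        dt = time.perf_counter() - t0
        print(f"{name:4s} ratio={len(payload) / len(out):7.1f}x  time={dt:.3f} s")
    ```

    The pipeline-level trade-off is then which codec maximizes the sustainable transfer rate once both compression time and transmitted volume are accounted for.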

    A Survey on Semantic Communications for Intelligent Wireless Networks

    Get PDF
    With the deployment of 6G technology, it is envisioned that the competitive edge of wireless networks will be sustained and the next decade's communication requirements will be satisfied. 6G will also aim to aid the development of a human society that is ubiquitous and mobile, while simultaneously providing solutions to key challenges such as coverage and capacity. In addition, 6G will focus on providing intelligent use cases and applications using higher data rates over millimeter-wave and terahertz frequencies. However, at higher frequencies multiple undesired phenomena such as atmospheric absorption and blocking occur, which create a bottleneck owing to resource (spectrum and energy) scarcity. Hence, following the same trend of trying to reproduce at the receiver the exact information that was sent by the transmitter will result in a never-ending need for higher bandwidth. A possible solution to this challenge lies in semantic communications, which focuses on the meaning (context) of the received data as opposed to merely reproducing the transmitted data correctly. This in turn requires less bandwidth and reduces the bottleneck caused by the various undesired phenomena. In this respect, the current article presents a detailed survey of recent technological trends in semantic communications for intelligent wireless networks. We focus on the semantic communications architecture, including the model and source and channel coding. Next, we detail cross-layer interaction and various goal-oriented communication applications. We also present overall semantic communications trends in detail, and identify challenges that need timely solutions before the practical implementation of semantic communications within 6G wireless technology. Our survey article is an attempt to contribute significantly towards initiating future research directions in the area of semantic communications for intelligent 6G wireless networks.
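
    The core idea of transmitting meaning rather than exact bits can be shown with a deliberately simple toy: the sender maps a sentence to a task-level intent identifier and transmits only that. The vocabulary and one-byte encoding below are invented for illustration and do not correspond to any concrete semantic coding scheme from the survey.

    ```python
    # Goal-oriented toy: both ends share a task vocabulary, so one byte of
    # "meaning" replaces the full-fidelity byte stream.
    INTENTS = {"turn the living room lights on": 0,
               "switch off the heating": 1,
               "lock the front door": 2}

    def semantic_encode(sentence: str) -> bytes:
        return INTENTS[sentence].to_bytes(1, "big")  # 1-byte semantic payload

    def bit_level_encode(sentence: str) -> bytes:
        return sentence.encode("utf-8")              # exact-reproduction payload

    msg = "turn the living room lights on"
    print(len(bit_level_encode(msg)), "bytes vs", len(semantic_encode(msg)), "byte")
    ```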

    Design for energy-efficient and reliable fog-assisted healthcare IoT systems

    Get PDF
    Cardiovascular disease and diabetes are two of the most dangerous diseases, as they are the leading causes of death in all ages. Unfortunately, they cannot be completely cured with the current knowledge and existing technologies. However, they can be effectively managed by applying methods of continuous health monitoring. Nonetheless, it is difficult to achieve a high quality of healthcare with current health monitoring systems, which often have several limitations such as a lack of mobility support, energy inefficiency, and an insufficiency of advanced services. Therefore, this thesis presents a Fog computing approach focusing on four main tracks, and proposes it as a solution to the existing limitations. In the first track, the main goal is to introduce Fog computing and Fog services into remote health monitoring systems in order to enhance the quality of healthcare. In the second track, a Fog approach providing mobility support in a real-time health monitoring IoT system is proposed. The handover mechanism run by Fog-assisted smart gateways helps to maintain the connection between sensor nodes and the gateways with minimized latency. Results show that the handover latency of the proposed Fog approach is 10%-50% lower than that of other state-of-the-art mobility support approaches. In the third track, the designs of four energy-efficient health monitoring IoT systems are discussed and developed. Each energy-efficient system and its sensor nodes are designed to serve a specific purpose such as glucose monitoring, ECG monitoring, or fall detection; with the exception of the fourth system, which is an advanced and combined system for simultaneously monitoring several diseases such as diabetes and cardiovascular disease. Results show that these sensor nodes can work continuously for up to 70-155 hours, depending on the application, when using a 1000 mAh lithium battery. The fourth track provides a Fog-assisted remote health monitoring IoT system for diabetic patients with cardiovascular disease. Via several proposed algorithms such as QT interval extraction, activity status categorization, and fall detection algorithms, the system can process data and detect abnormalities in real-time. Results show that the proposed system using Fog services is a promising approach for improving the treatment of diabetic patients with cardiovascular disease.
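
    Of the algorithms mentioned, fall detection lends itself to a compact sketch: a common approach looks for a free-fall dip in acceleration magnitude followed shortly by an impact spike. The thresholds, window, and logic below are generic assumptions, not the thesis' algorithm.

    ```python
    import math

    FREE_FALL_G = 0.4   # magnitude below this suggests free fall (assumed)
    IMPACT_G = 2.5      # magnitude above this suggests an impact (assumed)
    WINDOW = 10         # samples allowed between the two events (assumed)

    def detect_fall(samples):
        """samples: list of (ax, ay, az) accelerations in units of g."""
        mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
        for i, m in enumerate(mags):
            if m < FREE_FALL_G and any(v > IMPACT_G for v in mags[i + 1:i + 1 + WINDOW]):
                return True
        return False

    # Standing still, brief free fall, hard impact, then still again.
    print(detect_fall([(0, 0, 1)] * 5 + [(0, 0, 0.1), (0.5, 2.8, 0.4)] + [(0, 0, 1)] * 5))
    ```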

    Design and Analysis of Multiplexer based Approximate Adder for Low Power Applications

    Get PDF
    Low power consumption is crucial for error-tolerant multimedia devices, with picture compression approaches leveraging various digital processing architectures and algorithms. Humans can assemble useful information from partially inaccurate outputs in many multimedia applications. As a result, producing exact outputs is not required. The demand for an exact outcome is fading because new innovative systems are forgiving of faults. In domains where error tolerance is accepted, approximate computing is a new paradigm that relaxes the requirement for accurate computation while offering power, time, and delay benefits. Adders are an essential arithmetic module for regulating power and memory usage in digital systems. The recent implementation and use of approximate adders have been supported by trade-off characteristics such as lower delay and power consumption. This study examines the delay and power consumption of conventional and approximate adders. Also, a simple, fast, and power-efficient multiplexer-based approximate adder is proposed, and its performance outperforms that of existing adders. The proposed adder can be utilized in error-tolerant and various digital signal processing applications where exact results are not required. The proposed and existing adders are designed using EDA software for the performance calculations. With a delay of 81 ps, the proposed adder circuit reduces power consumption compared to the exact one. The experiments show that the designed approximate adder can be used to implement circuits for image processing systems because it has a smaller delay and uses less energy.
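
    The abstract does not describe the proposed multiplexer-based design itself, so the sketch below illustrates the general approximate-adder trade-off with a different, well-known scheme: a lower-part-OR adder that adds the upper bits exactly and approximates the k lower bits with a cheap bitwise OR. The width, k, and the error metric are assumptions.

    ```python
    def approx_add(a, b, width=8, k=3):
        """Exact upper part, OR-approximated lower k bits (no carry into the upper part)."""
        mask = (1 << k) - 1
        low = (a | b) & mask                   # cheap approximation of the low bits
        high = ((a >> k) + (b >> k)) << k      # exact addition of the high bits
        return (high | low) & ((1 << (width + 1)) - 1)

    # Mean absolute error over all 8-bit operand pairs, versus exact addition.
    errs = [abs(approx_add(a, b) - (a + b)) for a in range(256) for b in range(256)]
    print(f"mean abs error = {sum(errs) / len(errs):.2f}")
    ```

    Dropping the lower-bit carry chain is what buys the shorter critical path and lower switching power that such designs target; the cost is a bounded, application-tolerable arithmetic error.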