
    IoT Approaches for Distributed Computing


    Modeling the Internet of Things: a simulation perspective

    This paper deals with the problem of properly simulating the Internet of Things (IoT). Simulating an IoT allows evaluating strategies that can be employed to deploy smart services over different kinds of territories. However, the heterogeneity of scenarios seriously complicates this task and imposes the use of sophisticated modeling and simulation techniques. We discuss novel approaches for the provision of scalable simulation scenarios that enable the real-time execution of massively populated IoT environments. Attention is given to novel hybrid and multi-level simulation techniques that, when combined with agent-based, adaptive Parallel and Distributed Simulation (PADS) approaches, provide the means to perform highly detailed simulations on demand. To support this claim, we detail a use case concerned with the simulation of vehicular transportation systems. Comment: Proceedings of the IEEE 2017 International Conference on High Performance Computing and Simulation (HPCS 2017).
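    The multi-level idea described above can be made concrete with a small sketch. The snippet below is purely illustrative (it is not the authors' simulator, and all names such as VehicleAgent are hypothetical): agents are advanced with a coarse model by default and switched to a detailed model on demand, here when they enter a region of interest.

```python
# Illustrative sketch (not the authors' simulator): agents switch between a
# coarse macroscopic model and a detailed microscopic model on demand, which
# is the core idea behind multi-level simulation.
import random


class VehicleAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.position = 0.0
        self.detailed = False  # start at the coarse level

    def step(self, dt):
        if self.detailed:
            # fine-grained model: per-agent speed with noise (placeholder dynamics)
            speed = random.gauss(13.9, 2.0)  # ~50 km/h in m/s
        else:
            # coarse model: constant average flow speed
            speed = 13.9
        self.position += speed * dt


def run(agents, steps, dt, hotspot_start, hotspot_end):
    """Advance all agents; refine only those inside a region of interest."""
    for _ in range(steps):
        for a in agents:
            # switch level of detail on demand, based on where the agent is
            a.detailed = hotspot_start <= a.position <= hotspot_end
            a.step(dt)


if __name__ == "__main__":
    fleet = [VehicleAgent(i) for i in range(1000)]
    run(fleet, steps=600, dt=1.0, hotspot_start=2000.0, hotspot_end=4000.0)
    print(sum(a.detailed for a in fleet), "agents currently simulated in detail")
```

    A full PADS setup would additionally partition the agents across logical processes and synchronize them; the level-of-detail switch shown here is only the multi-level ingredient.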

    A fully distributed robust optimal control approach for air-conditioning systems considering uncertainties of communication link in IoT-enabled building automation systems

    Internet of Things (IoT) technologies are increasingly implemented in buildings as the cost-effective smart sensing infrastructure of building automation systems (BASs). They also provide dispersed computing resources for novel distributed optimal control approaches. However, wireless communication networks are critical to fulfilling these tasks, and their performance is influenced by inherent network uncertainties, e.g., the unpredictable occurrence of link failures. Centralized and hierarchical distributed approaches are vulnerable to link failures, while the robustness of fully distributed approaches depends on the algorithms adopted. This study therefore proposes a fully distributed robust optimal control approach for air-conditioning systems that considers the uncertainties of communication links in IoT-enabled BASs. The adopted distributed algorithm requires that agents know their out-neighbors only. Agents directly coordinate with their connected neighbors for global optimization. Tests are conducted to validate the proposed approach by comparing it with existing approaches, i.e., the centralized, the hierarchical distributed, and the fully distributed approaches. Results show that different approaches are vulnerable to communication-link uncertainties to different extents. The proposed approach always guarantees optimal control performance under normal conditions and under link failures, verifying its high robustness. It also has low computational complexity and high optimization efficiency, and is thus applicable to IoT-enabled BASs.
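    As a rough illustration of how agents that know only their out-neighbors can still agree on a global quantity, the sketch below runs push-sum gossip over a directed, strongly connected graph. This is an assumed setup, not the paper's control algorithm; the agent identifiers, graph, and "cooling load" values are hypothetical.

```python
# Illustrative sketch (assumed setup, not the paper's algorithm): push-sum
# gossip over a directed graph in which each agent knows only its
# out-neighbors; the ratio x/w at every agent converges to the network-wide
# average, which a fully distributed controller could then act on.


def push_sum(values, out_neighbors, rounds=100):
    """values: initial local measurement per agent (e.g., local cooling load).
    out_neighbors: dict agent -> list of out-neighbors (strongly connected graph)."""
    x = dict(values)
    w = {i: 1.0 for i in values}
    for _ in range(rounds):
        new_x = {i: 0.0 for i in values}
        new_w = {i: 0.0 for i in values}
        for i in values:
            targets = out_neighbors[i] + [i]   # keep one share for itself
            share_x = x[i] / len(targets)
            share_w = w[i] / len(targets)
            for j in targets:
                new_x[j] += share_x
                new_w[j] += share_w
        x, w = new_x, new_w
    return {i: x[i] / w[i] for i in values}   # estimate of the global average


if __name__ == "__main__":
    loads = {0: 4.0, 1: 6.0, 2: 9.0, 3: 1.0}        # kW per zone (toy values)
    graph = {0: [1], 1: [2], 2: [3, 0], 3: [0]}     # out-neighbor lists
    print(push_sum(loads, graph))                   # every estimate close to 5.0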

    On Lightweight Privacy-Preserving Collaborative Learning for IoT Objects

    The Internet of Things (IoT) will be a main data generation infrastructure for achieving better system intelligence. This paper considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on the data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies independent Gaussian random projection at each IoT object to obfuscate data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces light computation overhead to the IoT objects and moves most of the workload to the coordinator, which can have sufficient computing resources. Although the independent projections performed by the IoT objects address the potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. In this paper, we leverage the superior learning capability of deep learning in capturing sophisticated patterns to maintain good learning performance. Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning in applications with light data pattern complexities. Comment: 12 pages, IoTDI 201
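    The obfuscation step named in the abstract, Gaussian random projection, is easy to sketch. The snippet below is a generic illustration of that technique, not the authors' implementation; the dimensions, seed handling, and function names are assumptions. Each object keeps its own projection matrix private and uploads only the projected samples.

```python
# Illustrative sketch of Gaussian random projection for data obfuscation
# (generic technique, not the paper's exact pipeline): each IoT object
# multiplies its feature vectors by its own random Gaussian matrix before
# uploading, so the coordinator never sees the raw samples.
import numpy as np


def make_projection(in_dim, out_dim, seed):
    """Per-object projection matrix; the seed stays on the device."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(in_dim, out_dim))


def obfuscate(samples, projection):
    """Project raw samples (n x in_dim) into an out_dim-dimensional space."""
    return samples @ projection


if __name__ == "__main__":
    raw = np.random.default_rng(0).normal(size=(32, 64))   # 32 local samples
    proj = make_projection(in_dim=64, out_dim=16, seed=1234)
    uploaded = obfuscate(raw, proj)     # only this leaves the IoT object
    print(uploaded.shape)               # (32, 16)
```

    Because each object draws its own matrix, a coordinator colluding with one compromised object still cannot invert the projections of the others, which is the collusion-resistance property the abstract refers to.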

    Load Balancing for Resource Optimization in Internet of Things (IoT) Systems

    The Internet of Things (IoT) has been recognised as a promising area for automating numerous processes; however, a major problem with IoT is its potential for rising complexity. Several approaches have shifted attention to the edge nodes associated with IoT; hence, the concepts of edge computing, resource allocation, and load balancing are paramount to a more robust heterogeneous IoT. The resource optimization terrain comes with several complications for resource allocation and scheduling algorithms. Load balancing, one of the key strategies for improving system performance and resource utilization in distributed and parallel computing, generally treats an effective load balancer as a 'traffic controller' that directs tasks to available and capable resources. In this paper, a framework appropriate for modelling and reasoning about IoT resource optimization is developed. Further, an optimized resource allocation algorithm that takes into consideration users' quality of experience (QoE) and quality of service (QoS) is implemented. Simulation results corroborate the analysis and validate the improved performance over existing work.
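    To make the 'traffic controller' view concrete, here is a minimal sketch of a least-loaded-first dispatcher. The node names and capacity model are invented for illustration and this is not the paper's framework: tasks go to the node with the lowest current load, and a task that would exceed that node's capacity is simply skipped.

```python
# Illustrative least-loaded-first load balancer (toy model, hypothetical names).
import heapq


def dispatch(tasks, nodes):
    """tasks: list of (task_id, demand); nodes: dict node_id -> capacity.
    Returns a task -> node assignment using a least-loaded-first policy."""
    heap = [(0.0, node) for node in nodes]      # (current load, node_id)
    heapq.heapify(heap)
    assignment = {}
    for task_id, demand in tasks:
        load, node = heapq.heappop(heap)
        if load + demand > nodes[node]:
            # the least-loaded node cannot take the task; in a real system it
            # would be queued or rejected - here we just skip it
            heapq.heappush(heap, (load, node))
            continue
        assignment[task_id] = node
        heapq.heappush(heap, (load + demand, node))
    return assignment


if __name__ == "__main__":
    edge_nodes = {"edge-a": 10.0, "edge-b": 8.0, "edge-c": 12.0}
    jobs = [(f"t{i}", 2.5) for i in range(8)]
    print(dispatch(jobs, edge_nodes))
```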

    A novel approach for energy- and memory-efficient data loss prevention to support Internet of Things networks

    The Internet of Things integrates various technologies, including wireless sensor networks, edge computing, and cloud computing, to support a wide range of applications such as environmental monitoring and disaster surveillance. In these types of applications, IoT devices operate with limited resources in terms of battery, communication bandwidth, processing, and memory capacities. In this context, load balancing, fault tolerance, and energy and memory efficiency are among the most important issues related to data dissemination in IoT networks. In order to successfully cope with these issues, two main approaches, data-centric storage and distributed data storage, have been proposed in the literature. Both approaches suffer from data loss due to memory and/or energy depletion in the storage nodes. Even though several techniques have been proposed so far to overcome these problems, the proposed solutions typically focus on one issue at a time. In this article, we propose a cross-layer optimization approach to increase memory and energy efficiency as well as support load balancing. The optimization problem is a mixed-integer nonlinear programming problem, which we solve using a genetic algorithm. Moreover, we integrate data-centric storage features into distributed data storage mechanisms and present a novel heuristic approach, denoted Collaborative Memory and Energy Management, to solve the underlying optimization problem. We also propose analytical and simulation frameworks for performance evaluation. Our results show that the proposed method outperforms existing approaches in various IoT scenarios.
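    A minimal sketch of the genetic-algorithm idea follows, under an assumed toy cost model (the item sizes, transmission costs, and weights are invented for illustration and are not the paper's formulation): a chromosome assigns each data item to a storage node, and the fitness penalises both transmission energy and memory imbalance, mirroring a cross-layer objective.

```python
# Illustrative genetic algorithm for data placement (toy cost model).
import random

N_ITEMS, N_NODES = 20, 5
ITEM_SIZE = [random.randint(1, 10) for _ in range(N_ITEMS)]
TX_COST = [[random.uniform(0.1, 1.0) for _ in range(N_NODES)] for _ in range(N_ITEMS)]


def cost(chromosome):
    energy = sum(TX_COST[i][n] for i, n in enumerate(chromosome))
    mem = [0] * N_NODES
    for i, n in enumerate(chromosome):
        mem[n] += ITEM_SIZE[i]
    imbalance = max(mem) - min(mem)
    return energy + 0.5 * imbalance          # weighted cross-layer objective


def evolve(pop_size=40, generations=200, mutation=0.1):
    pop = [[random.randrange(N_NODES) for _ in range(N_ITEMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_ITEMS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:               # point mutation
                child[random.randrange(N_ITEMS)] = random.randrange(N_NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)


if __name__ == "__main__":
    best = evolve()
    print("best assignment:", best, "cost:", round(cost(best), 2))
```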

    Mobile Edge Computing for Future Internet-of-Things

    University of Technology Sydney, Faculty of Engineering and Information Technology. Integrating sensors, the Internet, and wireless systems, the Internet-of-Things (IoT) provides a new paradigm of ubiquitous connectivity and pervasive intelligence. The key enabling technology underlying IoT is mobile edge computing (MEC), which is anticipated to realize and reap the promising benefits of IoT applications by placing various cloud resources, such as computing and storage resources, closer to smart devices and objects. Challenges in designing efficient and scalable MEC platforms for future IoT arise from the physical limitations of the computing and battery resources of IoT devices, the heterogeneity of computing and wireless communication capabilities of IoT networks, the large volume of data arrivals and massive numbers of connections, and large-scale data storage and delivery across the edge network. To address these challenges, this thesis proposes four efficient and scalable task offloading and cooperative caching approaches. Firstly, for the multi-user single-cell MEC scenario, the base station (BS) can only have outdated knowledge of IoT device channel conditions due to the time-varying nature of practical wireless channels. To this end, a hybrid learning approach is proposed to optimize the real-time local processing and predictive computation offloading decisions in a distributed manner. Secondly, for the multi-user multi-cell MEC scenario, an energy-efficient resource management approach is developed based on distributed online learning to tackle the heterogeneity of the computing and wireless transmission capabilities of edge servers and IoT devices. The proposed approach optimizes the decisions on task offloading, processing, and result delivery between edge servers and IoT devices to minimize the time-average energy consumption of MEC. Thirdly, for computing resource allocation in large-scale networks, a distributed online collaborative computing approach based on Lyapunov optimization is proposed for data analysis in IoT applications, minimizing the time-average energy consumption of the network. Finally, for storage resource allocation in large-scale networks, a distributed IoT data delivery approach based on online learning is proposed for caching in mobile applications. A new profitable cooperative region is established for every IoT data request admitted at an edge server, to avoid invalid request dispatching.
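    The Lyapunov-optimization ingredient mentioned in the third contribution can be illustrated with a very small sketch. The model below is assumed for illustration only (the action set, energy figures, and arrival process are invented, and this is not the thesis' algorithm): each slot, the device picks the action minimising the drift-plus-penalty term V*energy - Q*service, trading energy against queue backlog.

```python
# Illustrative Lyapunov drift-plus-penalty offloading rule (toy model).
import random

V = 50.0                                  # energy/backlog trade-off weight
ACTIONS = {                               # (Mb served per slot, Joules per slot)
    "idle":    (0.0, 0.0),
    "local":   (2.0, 0.9),                # process on the device CPU
    "offload": (5.0, 0.6),                # transmit to the edge server
}


def choose_action(queue_mb):
    """Pick the action minimising the per-slot term V*energy - Q*service."""
    return min(ACTIONS, key=lambda a: V * ACTIONS[a][1] - queue_mb * ACTIONS[a][0])


def simulate(slots=1000, mean_arrival_mb=3.0):
    queue, total_energy = 0.0, 0.0
    for _ in range(slots):
        action = choose_action(queue)
        served, energy = ACTIONS[action]
        total_energy += energy
        queue = max(queue - served, 0.0) + random.expovariate(1.0 / mean_arrival_mb)
    return queue, total_energy / slots


if __name__ == "__main__":
    backlog, avg_energy = simulate()
    print(f"final backlog {backlog:.1f} Mb, average energy {avg_energy:.2f} J/slot")
```

    A larger V pushes the rule toward lower average energy at the cost of a longer backlog, which is the standard trade-off in Lyapunov drift-plus-penalty methods.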