
    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics due in part to human interaction. These call for foundational innovations in network design and management. Ideally, the design should allow efficient adaptation to changing environments and low-cost implementation that scales to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network-architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on.
    Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks
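
    The "blind or bandit" feedback regime mentioned in this abstract can be illustrated with a one-point zeroth-order gradient estimator, where the learner only observes a noisy evaluation of an unknown cost. The sketch below is a minimal, generic illustration of that idea; the function names and the toy cost are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def one_point_gradient_estimate(cost, x, delta=0.1, rng=None):
    """Estimate the gradient of an unknown cost at x from a single
    bandit-style (zeroth-order) function evaluation."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                    # random unit perturbation
    d = x.size
    # One noisy evaluation of the perturbed point is the only feedback used.
    return (d / delta) * cost(x + delta * u) * u

def online_bandit_descent(cost, x0, steps=1000, lr=0.01, delta=0.1):
    """Online projected descent driven only by bandit feedback."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = one_point_gradient_estimate(cost, x, delta)
        x = np.clip(x - lr * g, 0.0, 1.0)     # keep iterates in a feasible box
    return x

# Toy usage: tune a resource vector to minimize an unknown noisy quadratic cost.
if __name__ == "__main__":
    target = np.array([0.2, 0.7, 0.5])
    noisy_cost = lambda x: np.sum((x - target) ** 2) + 0.01 * np.random.randn()
    print(online_bandit_descent(noisy_cost, x0=np.full(3, 0.5)))
```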

    Evaluating Population Based Training on Small Datasets

    Recently, there has been increased interest in using artificial neural networks on the severely resource-constrained devices found in Internet-of-Things networks, in order to perform actions learned from the raw sensor data gathered by these devices. Unfortunately, training neural networks to achieve optimal prediction accuracy requires tuning multiple hyper-parameters, a process which has traditionally taken many times the computation time of a single training run of the neural network. In this paper, we empirically evaluate the Population Based Training algorithm, a method which simultaneously trains and tunes a neural network, on datasets of similar size to what we might encounter in an IoT scenario. We determine that the Population Based Training algorithm achieves prediction accuracy comparable to a traditional grid or random search on small datasets, and achieves state-of-the-art results for the Biodeg dataset.
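
    A minimal sketch of the exploit-and-explore step at the core of Population Based Training is given below, assuming each population member carries a hyper-parameter dictionary and a validation score; the `Member` structure and the perturbation factors are illustrative assumptions, not details from the paper.

```python
import copy
import random
from dataclasses import dataclass, field

@dataclass
class Member:
    hyperparams: dict                              # e.g. {"lr": 1e-3, "dropout": 0.2}
    score: float = 0.0                             # latest validation accuracy
    weights: dict = field(default_factory=dict)    # model parameters (stand-in)

def exploit_and_explore(population, bottom_frac=0.2, perturb=(0.8, 1.2)):
    """One PBT step: members in the bottom fraction copy a top member's
    weights and hyper-parameters, then randomly perturb the hyper-parameters."""
    ranked = sorted(population, key=lambda m: m.score, reverse=True)
    cutoff = max(1, int(len(ranked) * bottom_frac))
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    for loser in bottom:
        winner = random.choice(top)
        loser.weights = copy.deepcopy(winner.weights)      # exploit
        loser.hyperparams = {
            k: v * random.choice(perturb)                  # explore
            for k, v in winner.hyperparams.items()
        }
    return population
```

    In practice this step alternates with a fixed number of ordinary training iterations per member, so tuning happens within a single training budget rather than as a separate grid or random search.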

    Latency Minimization for Multiuser Computation Offloading in Fog-Radio Access Networks

    This paper considers computation offloading in fog-radio access networks (F-RAN), where multiple user equipments (UEs) offload their computation tasks to the F-RAN through a number of fog nodes. Each UE can choose one of the fog nodes to offload its task, and each fog node may simultaneously serve multiple UEs. Depending on the computation burden at the fog nodes, the tasks may be computed by the fog nodes or further offloaded to the cloud via capacity-limited fronthaul links. To compute all UEs' tasks as fast as possible, joint optimization of the UE-fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a mixed integer nonlinear program (MINP). We first show that the MINP can be reformulated as a continuous optimization problem, and then employ the majorization-minimization (MM) approach to find a solution. The MM approach developed herein is unconventional in that each MM subproblem is solved inexactly, yet with the same provable convergence guarantee as conventional exact MM. We also consider a cooperative offloading model, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled again by the inexact MM approach. Simulation results show that the proposed algorithms outperform some heuristic offloading strategies, and that cooperative offloading is generally better than non-cooperative offloading.
    Comment: 11 pages, 8 figures
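
    The inexact-MM idea can be illustrated with a generic majorization-minimization loop: at each iterate, a surrogate that upper-bounds the objective and is tight at the current point is only approximately minimized. The sketch below is a minimal illustration under a simplifying assumption (an L-smooth objective majorized by its quadratic upper bound); it is not the paper's latency model, and the inner solver is deliberately capped at a few iterations to mimic inexact MM.

```python
import numpy as np

def mm_minimize(grad, L, x0, outer_iters=50, inner_iters=3):
    """Generic majorization-minimization for an L-smooth objective f.

    Surrogate at iterate x_k:
        g(x) = f(x_k) + grad(x_k)^T (x - x_k) + (L/2) ||x - x_k||^2,
    which upper-bounds f and touches it at x_k. Each surrogate is only
    *approximately* minimized (a few gradient steps), i.e. inexact MM.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        xk, gk = x.copy(), grad(x)
        for _ in range(inner_iters):              # inexact inner solve
            surrogate_grad = gk + L * (x - xk)
            x = x - surrogate_grad / L
    return x

# Toy usage: a smooth nonconvex scalar objective f(x) = x^4/4 - x^2/2.
if __name__ == "__main__":
    grad = lambda x: x**3 - x
    print(mm_minimize(grad, L=10.0, x0=np.array([2.0])))
```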

    Adaptive trading of Cloud of Things resources

    Cloud of Things (CoT) consists of heterogeneous Cloud and Internet of Things (IoT) resources. CoT increasingly requires adaptive run-time management due to CoT dynamism, environmental uncertainties, and unpredictable changes in IoT resources. Adapting to these changes particularly benefits the trading of CoT resources, where the adaptability of traded resources and applications remains a challenge. Run-time changes in CoT trading environments can impact vital aspects including resource allocation, resource utilisation and application performance. This paper adopts the monitoring, analysis, planning and execution (MAPE) model from autonomic computing to support adaptations when trading CoT resources. This is achieved by applying the MAPE model to systematically capture and identify changes in the CoT environment. Based on the identified adaptations, an adaptive model is proposed to react to these changes.
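
    A minimal sketch of how a MAPE (monitor-analyse-plan-execute) control loop might be wired for traded CoT resources is given below; the class and callable names are illustrative placeholders, not an API from the paper.

```python
import time

class MapeLoop:
    """Skeleton autonomic-computing control loop for traded CoT resources."""

    def __init__(self, monitor, analyze, plan, execute, knowledge=None):
        # Each phase is injected as a plain callable so the loop stays generic.
        self.monitor, self.analyze = monitor, analyze
        self.plan, self.execute = plan, execute
        self.knowledge = knowledge or {}        # shared state across phases

    def run_once(self):
        metrics = self.monitor()                           # Monitor
        symptoms = self.analyze(metrics, self.knowledge)   # Analyze
        if symptoms:                                       # adaptation needed?
            actions = self.plan(symptoms, self.knowledge)  # Plan
            self.execute(actions)                          # Execute
        self.knowledge["last_metrics"] = metrics

    def run_forever(self, period_s=5.0):
        while True:
            self.run_once()
            time.sleep(period_s)

# Toy wiring: flag over-utilised resources and "re-allocate" them.
loop = MapeLoop(
    monitor=lambda: {"cpu_util": 0.93},
    analyze=lambda m, k: ["cpu_overload"] if m["cpu_util"] > 0.9 else [],
    plan=lambda s, k: [("scale_out", "cpu")] if "cpu_overload" in s else [],
    execute=lambda actions: print("executing", actions),
)
loop.run_once()
```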

    Efficient Methods for Distributed Machine Learning and Resource Management in the Internet-of-Things

    University of Minnesota Ph.D. dissertation. June 2019. Major: Electrical/Computer Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); x, 190 pages.
    Undoubtedly, this century evolves in a world of interconnected entities, where the notion of Internet-of-Things (IoT) plays a central role in the proliferation of linked devices and objects. In this context, the present dissertation deals with large-scale networked systems, including IoT, that consist of heterogeneous components and can operate in unknown environments. The focus is on the theoretical and algorithmic issues at the intersection of optimization, machine learning, and networked systems. Specifically, the research objectives and innovative claims include: (T1) scalable distributed machine learning approaches for efficient IoT implementation; and (T2) enhanced resource management policies for IoT that leverage machine learning advances. Conventional machine learning approaches require centralizing the users' data on one machine or in a data center. Considering the massive number of IoT devices, centralized learning becomes computationally intractable and raises serious privacy concerns. The widespread consensus today is that besides data centers at the cloud, future machine learning tasks have to be performed starting from the network edge, namely mobile devices. The first contribution offers innovative distributed learning methods tailored to heterogeneous IoT setups and with reduced communication overhead. The resultant distributed algorithm affords provably reduced communication complexity in distributed machine learning. From learning to control, reinforcement learning will play a critical role in many complex IoT tasks such as autonomous vehicles. In this context, the thesis introduces a distributed reinforcement learning approach featuring high communication efficiency. Optimally allocating computing and communication resources is a crucial task in IoT. The second novelty pertains to learning-aided optimization tools tailored to resource management tasks. To date, most resource management schemes are based on a pure optimization viewpoint (e.g., the dual (sub)gradient method), which incurs suboptimal performance. From the vantage point of IoT, the idea is to leverage the abundant historical data collected by devices, and formulate the resource management problem as an empirical risk minimization task, a central topic in machine learning research. By cross-fertilizing advances in optimization and learning theory, a learn-and-adapt resource management framework is developed. An upshot of the second part is its ability to account for the feedback-limited nature of tasks in IoT. Typically, solving resource allocation problems necessitates knowledge of the models that map a resource variable to its cost or utility. Targeting scenarios where models are not available, a model-free learning scheme is developed in this thesis, along with its bandit version. These algorithms come with provable performance guarantees, even when knowledge about the underlying systems is obtained only through repeated interactions with the environment. The overarching objective of this dissertation is to wed state-of-the-art optimization and machine learning tools with the emerging IoT paradigm, so that they can inspire and reinforce the development of each other, with the ultimate goal of benefiting daily life.
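
    The "learn-and-adapt" viewpoint described in this abstract can be illustrated by casting resource allocation as empirical risk minimization over historical network states. The sketch below assumes a linear allocation policy, a synthetic per-sample cost, and plain stochastic gradient descent; none of these specifics are taken from the dissertation.

```python
import numpy as np

def erm_resource_policy(states, cost_grad, dim_action, epochs=20, lr=0.05):
    """Learn a linear allocation policy  a = W s  by empirical risk minimization
    over historical network states, using stochastic gradient descent.

    states    : (N, d_s) array of historical network states
    cost_grad : callable(state, action) -> gradient of the per-sample cost
                with respect to the action (e.g. delay plus resource price)
    """
    rng = np.random.default_rng(0)
    W = np.zeros((dim_action, states.shape[1]))
    for _ in range(epochs):
        for s in rng.permutation(states):
            a = W @ s                                  # current allocation
            g_a = cost_grad(s, a)                      # dCost/dAction
            W -= lr * np.outer(g_a, s)                 # chain rule: dCost/dW
    return W

# Toy usage: cost = ||a - demand(s)||^2 with demand(s) = 0.5 * s.
if __name__ == "__main__":
    hist_states = np.random.default_rng(1).uniform(0, 1, size=(200, 4))
    grad = lambda s, a: 2 * (a - 0.5 * s)
    W = erm_resource_policy(hist_states, grad, dim_action=4)
    print(np.round(W, 2))   # should approach 0.5 * identity
```

    The bandit variant mentioned in the abstract corresponds to the feedback-limited setting where such cost gradients are unavailable and must be estimated from function evaluations alone, as in the zeroth-order sketch shown earlier in this listing.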

    Heterogeneous Online Learning for “Thing-Adaptive” Fog Computing in IoT

    No full text available.