
    Follow Me at the Edge: Mobility-Aware Dynamic Service Placement for Mobile Edge Computing

Mobile edge computing is a new computing paradigm that pushes cloud computing capabilities from the centralized cloud to the network edge. However, as computing capabilities sink to the edge, a new challenge arises from user mobility: since end-users typically move erratically, services must be dynamically migrated among multiple edge sites to maintain service performance, i.e., user-perceived latency. Tackling this problem is non-trivial, since frequent service migration would greatly increase the operational cost. To address this challenge in terms of the performance-cost trade-off, in this paper we study the mobile edge service performance optimization problem under a long-term cost budget constraint. To cope with user mobility, which is typically unpredictable, we apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems that do not require a priori knowledge such as user mobility. As the decomposed problem is NP-hard, we first design an approximation algorithm based on Markov approximation to seek a near-optimal solution. To make our solution scalable and amenable to future 5G application scenarios with large-scale user devices, we further propose a distributed approximation scheme with greatly reduced time complexity, based on the technique of best response update. Rigorous theoretical analysis and extensive evaluations demonstrate the efficacy of the proposed centralized and distributed schemes.
Comment: The paper is accepted by IEEE Journal on Selected Areas in Communications, Aug. 201
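As a rough illustration of the Lyapunov drift-plus-penalty decomposition the abstract describes, the minimal sketch below trades per-slot latency against migration cost via a virtual budget queue. The latency/cost models, candidate sites, and the trade-off knob V are invented placeholders, not the paper's actual formulation.

```python
# Minimal sketch of a Lyapunov drift-plus-penalty loop for mobility-aware
# service placement under a long-term cost budget. All names (latency,
# cost, PLACEMENTS, V, BUDGET_PER_SLOT) are illustrative placeholders.
import random

PLACEMENTS = ["edge-A", "edge-B", "edge-C"]   # candidate edge sites
V = 10.0                 # performance/cost trade-off knob
BUDGET_PER_SLOT = 1.0    # long-term budget averaged per time slot

def latency(placement, user_pos):
    # placeholder: pretend latency grows with "distance" to the site
    return abs(hash((placement, user_pos))) % 100 / 10.0

def cost(placement, prev_placement):
    # placeholder: migrating to a new site incurs a fixed cost
    return 0.0 if placement == prev_placement else 2.0

Q = 0.0                  # virtual queue tracking accumulated budget violation
prev = PLACEMENTS[0]
for t in range(1000):
    user_pos = random.randint(0, 50)          # unpredictable mobility
    # per-slot problem: minimize V * latency + Q * migration cost,
    # with no a priori knowledge of future mobility
    choice = min(PLACEMENTS,
                 key=lambda p: V * latency(p, user_pos) + Q * cost(p, prev))
    # queue update: backlog grows whenever the slot overspends the budget
    Q = max(Q + cost(choice, prev) - BUDGET_PER_SLOT, 0.0)
    prev = choice
```

Larger V favors latency at the expense of budget slack; the queue Q automatically penalizes migration once spending runs ahead of the long-term budget.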

    Combining Trajectory Optimization, Supervised Machine Learning, and Model Structure for Mitigating the Curse of Dimensionality in the Control of Bipedal Robots

To overcome the obstructions imposed by high-dimensional bipedal models, we embed a stable walking motion in an attractive low-dimensional surface of the system's state space. The process begins with trajectory optimization to design an open-loop periodic walking motion of the high-dimensional model, and then adds to this solution a carefully selected set of additional open-loop trajectories of the model that steer toward the nominal motion. A drawback of trajectories is that they provide little information on how to respond to a disturbance. To address this shortcoming, supervised machine learning is used to extract a low-dimensional state-variable realization of the open-loop trajectories. The periodic orbit is now an attractor of the low-dimensional state-variable model, but is not attractive in the full-order system. We then use the special structure of mechanical models associated with bipedal robots to embed the low-dimensional model in the original model in such a manner that the desired walking motions are locally exponentially stable. The design procedure is first developed for ordinary differential equations and illustrated on a simple model. The methods are subsequently extended to a class of hybrid models and then realized experimentally on an ATRIAS-series 3D bipedal robot.
Comment: Paper was submitted to the International Journal of Robotics Research (IJRR) in Nov. 201
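To make the "extract a low-dimensional realization from trajectories" step concrete, here is a minimal sketch using PCA via SVD as a stand-in for the paper's supervised-learning procedure; the dimensions and the random trajectory data are made up for illustration.

```python
# Sketch: compress sampled states from a set of open-loop trajectories of
# a high-dimensional model into low-dimensional coordinates. PCA is only a
# stand-in for the paper's supervised-learning step.
import numpy as np

n_traj, T, n_full, n_low = 20, 200, 30, 4    # hypothetical sizes

# stack sampled states from all steering trajectories: (n_traj*T, n_full)
X = np.random.randn(n_traj * T, n_full)      # placeholder trajectory data
X_mean = X.mean(axis=0)

# low-dimensional surface: span of the top n_low principal directions
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
basis = Vt[:n_low]                           # (n_low, n_full)

def to_low(x_full):
    """Project a full state onto the low-dimensional coordinates."""
    return basis @ (x_full - X_mean)

def to_full(z_low):
    """Embed low-dimensional coordinates back into the full state space."""
    return X_mean + basis.T @ z_low

# round trip: states near the learned surface are reproduced closely
x = X[0]
print(np.linalg.norm(x - to_full(to_low(x))))
```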

    A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on backhaul links and long latency. Therefore, new architectures that bring network functions and content to the network edge, i.e., mobile edge computing and caching, have been proposed. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we present an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are then discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented.

    Deep Visual Perception for Dynamic Walking on Discrete Terrain

Dynamic bipedal walking on discrete terrain, like stepping stones, is a challenging problem requiring feedback controllers to enforce safety-critical constraints. To enforce such constraints in real-world experiments, fast and accurate perception for foothold detection and estimation is needed. In this work, a deep visual perception model is designed to accurately estimate the length of the next step, which serves as input to the feedback controller to enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict the step length to the next foothold using a sampled image preview of the upcoming terrain at foot impact. The visual input is offered only at the beginning of each step and is shown to be sufficient for dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot is able to walk autonomously for over 100 steps without failure on discrete terrain with footholds randomly positioned within a step-length range of 45-85 centimeters.
Comment: Presented at Humanoids 201
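A minimal sketch of the kind of CNN regressor the abstract describes, mapping a terrain-image preview to a scalar step length. The architecture, input size, and channel counts are invented for illustration (PyTorch assumed); the paper's actual network is not specified here.

```python
# Sketch of a CNN that regresses the next step length from a sampled
# terrain preview. All layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class StepLengthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),   # grayscale preview
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),          # scalar step length (meters)
        )

    def forward(self, img):
        return self.head(self.features(img))

net = StepLengthNet()
preview = torch.randn(1, 1, 64, 64)    # one image sampled at foot impact
step_len = net(preview)                # fed to the feedback controller
print(step_len.shape)                  # torch.Size([1, 1])
```

Note the design point from the abstract: the network is queried once per step, at the beginning, rather than continuously in the control loop.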

    Reinforcement Learning for Robotics and Control with Active Uncertainty Reduction

Model-free reinforcement learning methods such as Proximal Policy Optimization or Q-learning typically require thousands of interactions with the environment to approximate the optimal controller, which may not always be feasible in robotics due to safety and time constraints. Model-based methods such as PILCO or Black-DROPS, while data-efficient, provide solutions with limited robustness and complexity. To address this trade-off, we introduce active uncertainty reduction-based virtual environments, which are formed through limited trials conducted in the original environment. We provide an efficient method for uncertainty management, which is used as a metric for self-improvement by identifying the points with maximum expected improvement through adaptive sampling. Capturing the uncertainty also allows for better mimicking of the reward responses of the original system. Our approach enables the use of complex policy structures and reward functions through a unique combination of model-based and model-free methods, while still retaining data efficiency. We demonstrate the validity of our method on several classic reinforcement learning problems in OpenAI Gym. We show that our approach offers better modeling capacity for complex system dynamics than established methods.
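The core loop of "sample the point with maximum expected improvement" can be sketched with a Gaussian-process surrogate, shown below. The GP, the 1-D toy system, and the sampling budget are stand-ins chosen for brevity, not the paper's actual models.

```python
# Sketch of expected-improvement-driven adaptive sampling for building a
# surrogate of a real system from limited trials. Toy 1-D example.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def real_system(x):                  # placeholder "trial on the real system"
    return np.sin(3 * x) + 0.1 * np.random.randn()

X = np.array([[0.2], [2.5]])         # limited initial trials
y = np.array([real_system(v[0]) for v in X])
grid = np.linspace(0, 3, 200).reshape(-1, 1)

gp = GaussianProcessRegressor()
for _ in range(10):                  # adaptive-sampling budget
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    # expected improvement balances predicted gain against uncertainty
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]     # most informative next trial
    X = np.vstack([X, [x_next]])
    y = np.append(y, real_system(x_next[0]))
```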

    A Survey on State Estimation Techniques and Challenges in Smart Distribution Systems

This paper presents a review of the literature on State Estimation (SE) in power systems. While covering some works related to SE in transmission systems, the main focus is Distribution System State Estimation (DSSE). The paper discusses several critical topics of DSSE, including mathematical problem formulation, the application of pseudo-measurements, metering instrument placement, network topology issues, the impact of renewable penetration, and cyber-security. Both conventional and modern data-driven and probabilistic techniques are reviewed. This paper provides researchers and utility engineers with insights into the technical achievements, barriers, and future research directions of DSSE.
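For readers new to the area, the weighted-least-squares (WLS) formulation underlying much of the classical SE literature is sketched below; a linear measurement model keeps the example short (real SE iterates Gauss-Newton on a nonlinear h(x)), and all sizes and noise levels are invented.

```python
# Sketch of WLS state estimation: minimize (z - Hx)^T W (z - Hx), where
# weights are inverse measurement variances. Toy linear example.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_meas = 3, 8

H = rng.standard_normal((n_meas, n_states))   # measurement Jacobian
x_true = rng.standard_normal(n_states)
sigmas = np.full(n_meas, 0.05)
sigmas[-2:] = 0.5          # last two act like noisy pseudo-measurements
z = H @ x_true + sigmas * rng.standard_normal(n_meas)

W = np.diag(1.0 / sigmas**2)   # weights = inverse measurement variances
# normal equations: (H^T W H) x = H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(np.linalg.norm(x_hat - x_true))
```

The down-weighting of the high-variance rows mirrors how DSSE treats pseudo-measurements: they keep the system observable without dominating the estimate.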

    Learning Manipulation Skills Via Hierarchical Spatial Attention

Learning generalizable skills in robotic manipulation has long been challenging due to real-world-sized observation and action spaces. One method for addressing this problem is attention focus: the robot learns where to attend with its sensors, and irrelevant details are ignored. However, such methods have largely not caught on due to the difficulty of learning a good attention policy and the added partial observability induced by a narrowed window of focus. This article addresses the first issue by constraining gazes to a spatial hierarchy. For the second issue, we identify a case where the partial observability induced by attention does not prevent Q-learning from finding an optimal policy. We conclude with real-robot experiments on challenging pick-and-place tasks demonstrating the applicability of the approach.
Comment: IEEE Transactions on Robotics, March 2020. Video: https://youtu.be/4dZ6WiDX3-s. Source code: https://github.com/mgualti/Seq6DofMani
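The idea of constraining gazes to a spatial hierarchy can be sketched as a coarse-to-fine descent: split the current window into quadrants, score each with a (here, placeholder) Q-function, and recurse into the winner. This is an illustration of the general principle, not the paper's algorithm.

```python
# Sketch of coarse-to-fine spatial attention: a few quadrant decisions
# replace one decision over the full-resolution space.
import numpy as np

def q_value(patch):
    # placeholder for a learned Q-function over gaze candidates
    return patch.mean()

def hierarchical_gaze(image, levels=3):
    r0, c0, r1, c1 = 0, 0, image.shape[0], image.shape[1]
    for _ in range(levels):
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        quads = [(r0, c0, rm, cm), (r0, cm, rm, c1),
                 (rm, c0, r1, cm), (rm, cm, r1, c1)]
        # greedy descent into the quadrant with the highest Q-value
        r0, c0, r1, c1 = max(
            quads, key=lambda q: q_value(image[q[0]:q[2], q[1]:q[3]]))
    return (r0 + r1) // 2, (c0 + c1) // 2    # final attended pixel

img = np.random.rand(64, 64)
print(hierarchical_gaze(img))
```

With 3 levels, 12 quadrant evaluations stand in for scoring all 4,096 pixels, which is the scaling argument behind attention focus.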

    The Wireless Control Plane: An Overview and Directions for Future Research

Software-defined networking (SDN), which has been successfully deployed in the management of complex data centers, has recently been incorporated into a myriad of 5G networks to intelligently manage a wide range of heterogeneous wireless devices, software systems, and wireless access technologies. Thus, the SDN control plane needs to communicate wirelessly with the wireless data plane, either directly or indirectly. The uncertainties in the wireless SDN control plane (WCP) make its design challenging. Both WCP schemes (direct WCP, D-WCP, and indirect WCP, I-WCP) have been incorporated into recent 5G networks; however, a discussion of their design principles and limitations is missing. This paper introduces an overview of WCP design (I-WCP and D-WCP) and discusses its intricacies by reviewing its deployment in recent 5G networks. Furthermore, to facilitate synthesizing a robust WCP, this paper proposes a generic WCP framework based on deep reinforcement learning (DRL) principles and presents a roadmap for future research.
Comment: This paper has been accepted to appear in the Elsevier Journal of Network and Computer Applications. It has 34 pages, 8 figures, and two tables.

    Budget-constrained Edge Service Provisioning with Demand Estimation via Bandit Learning

Shared edge computing platforms, which enable Application Service Providers (ASPs) to deploy applications in close proximity to mobile users, provide ultra-low latency and location-awareness to a rich portfolio of services. Although ubiquitous edge service provisioning, i.e., deploying the application at all possible edge sites, is always preferable, it is impractical due to the often limited operational budget of ASPs. In this case, an ASP must cautiously decide where to deploy its edge service and how much budget it is willing to spend. A central issue is that the service demand received by each edge site, the key factor in the benefit of deployment, is unknown to the ASP a priori. More complicated still, this demand pattern varies temporally and spatially across geographically distributed edge sites. In this paper, we investigate an edge resource rental problem in which the ASP learns the service demand patterns of individual edge sites while renting computation resources at these sites to host its applications for edge service provisioning. An online algorithm, called Context-aware Online Edge Resource Rental (COERR), is proposed based on the framework of the Contextual Combinatorial Multi-armed Bandit (CC-MAB). COERR observes side-information (context) to learn the demand patterns of edge sites and makes rental decisions (where to rent computation resources and how much to rent) to maximize the ASP's utility under a limited budget. COERR provides provable performance, achieving sublinear regret compared to an Oracle that knows exactly the expected service demand of each edge site. Experiments are carried out on a real-world dataset, and the results show that COERR significantly outperforms other benchmarks.
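A minimal sketch of a CC-MAB-style rental loop in the spirit of COERR: per-site demand is estimated per context bucket, a UCB bonus drives exploration, and rentals are made greedily by optimism-per-cost under the remaining budget. The constants, demand model, and one-rental-per-slot simplification are all invented for illustration, not the paper's algorithm.

```python
# Sketch of a contextual UCB rental loop under a hard budget.
import math
import random

SITES = ["s1", "s2", "s3", "s4"]
RENT_COST = {"s1": 1.0, "s2": 1.5, "s3": 1.0, "s4": 2.0}
budget = 60.0

counts = {(s, c): 0 for s in SITES for c in range(3)}   # 3 context buckets
means = {(s, c): 0.0 for s in SITES for c in range(3)}

def true_demand(site, ctx):           # hidden, context-dependent demand
    return random.random() * (1 + ctx) * (2 if site == "s3" else 1)

t = 0
while True:
    t += 1
    ctx = random.randrange(3)         # observed side-information (context)
    affordable = [s for s in SITES if RENT_COST[s] <= budget]
    if not affordable:
        break                         # budget exhausted

    def ucb(s):
        n = counts[(s, ctx)]
        if n == 0:
            return float("inf")       # force initial exploration
        return means[(s, ctx)] + math.sqrt(2 * math.log(t) / n)

    s = max(affordable, key=lambda s: ucb(s) / RENT_COST[s])
    budget -= RENT_COST[s]
    d = true_demand(s, ctx)           # observe served demand after renting
    counts[(s, ctx)] += 1
    means[(s, ctx)] += (d - means[(s, ctx)]) / counts[(s, ctx)]
```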

    Heterogeneous MacroTasking (HeMT) for Parallel Processing in the Public Cloud

Using tiny, equal-sized tasks (Homogeneous microTasking, HomT) has long been regarded as an effective way of load balancing in parallel computing systems. When combined with nodes pulling in work upon becoming idle, HomT has the desirable property of automatically adapting its load distribution to the processing capacities of participating nodes: more powerful nodes finish their work sooner and therefore pull in additional work faster. As a result, HomT is deemed especially desirable in settings with heterogeneous (and possibly dynamically changing) processing capacities. However, HomT does incur additional scheduling and I/O overheads that may make this load balancing scheme costly in some scenarios. In this paper, we first analyze these advantages and disadvantages of HomT. We then propose an alternative load balancing scheme, Heterogeneous MacroTasking (HeMT), wherein the workload is intentionally partitioned according to each node's processing capacity. Our goal is to study when HeMT is able to overcome the performance disadvantages of HomT. We implement a prototype of HeMT within the Apache Spark application framework, with complementary enhancements to the Apache Mesos cluster manager. Spark's built-in scheduler, when parameterized appropriately, implements HomT. Our experimental results show that HeMT outperforms HomT when accurate workload-specific estimates of nodes' processing capacities are learned. As a representative result, Spark with HeMT offers about 10% better average completion times for realistic data processing workloads than the default system.
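The contrast between the two schemes reduces to how one stage of work is cut up, sketched below. The work size, node capacities, and task count are made-up numbers; HeMT assumes capacities are known, matching the paper's premise that accurate capacity estimates have been learned.

```python
# Sketch contrasting HomT and HeMT partitioning for one stage of work.
WORK = 1200                       # total work units in this stage
capacities = {"fast": 4.0, "mid": 2.0, "slow": 1.0}   # units/second

# HomT: many tiny equal tasks; idle nodes pull work, so the load adapts
# automatically, but every task pays scheduling/I-O overhead.
homt_tasks = [WORK // 120] * 120
print(f"HomT: {len(homt_tasks)} tasks of {homt_tasks[0]} units each")

# HeMT: one macrotask per node, sized proportionally to capacity, so all
# nodes finish together without per-task overhead.
total = sum(capacities.values())
hemt_tasks = {n: WORK * c / total for n, c in capacities.items()}
for n, w in hemt_tasks.items():
    print(f"{n}: {w:.0f} units -> {w / capacities[n]:.1f}s")  # equal finish
```

The trade-off the paper studies falls directly out of this picture: HeMT wins when the capacity estimates are accurate, while HomT's pull-based adaptivity is safer when they are not.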