2,901 research outputs found

    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine fine-grained, independently scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on the users' connection to the microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split, with only minimal changes to a legacy microservices application. Locality awareness based on network coordinates further enables service splits to be migrated automatically so that they follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
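
    The routing decision at the heart of such a discovery layer can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of locality-aware forwarding of a REST call to the closest service split based on network coordinates; the registry, URLs, coordinate values, and function names are invented for the example and do not reflect Koala's actual interface.

```python
import math
import requests  # assumed available; any HTTP client would do

# Hypothetical registry of split instances for one logical microservice.
# The 2D coordinates stand in for network coordinates (e.g. Vivaldi-style);
# all names and values here are illustrative only.
INSTANCES = [
    {"url": "http://core.example.org:8080", "coord": (0.0, 0.0)},
    {"url": "http://edge-a.example.org:8080", "coord": (4.0, 1.0)},
    {"url": "http://edge-b.example.org:8080", "coord": (-3.0, 2.5)},
]

def nearest_instance(client_coord):
    """Pick the split whose network coordinate is closest to the client."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(INSTANCES, key=lambda inst: dist(inst["coord"], client_coord))

def forward(client_coord, path):
    """Forward a REST call to the closest split, transparently to the caller."""
    target = nearest_instance(client_coord)
    return requests.get(target["url"] + path, timeout=5)

# Example: a client estimated at coordinate (3.5, 1.2) would be routed to edge-a.
# response = forward((3.5, 1.2), "/project/42/document")
```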

    Distribution on Warp Maps for Alignment of Open and Closed Curves

    Alignment of curve data is an integral part of their statistical analysis and can be achieved using model- or optimization-based approaches. The parameter space is usually the set of monotone, continuous warp maps of a domain. The infinite-dimensional nature of the parameter space encourages sampling-based approaches, which require a distribution on the set of warp maps. Moreover, the distribution should also enable sampling in the presence of important landmark information on the curves, which constrains the warp maps. For alignment of closed and open curves in $\mathbb{R}^d$, $d=1,2,3$, possibly with landmark information, we provide a constructive, point-process-based definition of a distribution on the set of warp maps of $[0,1]$ and of the unit circle $\mathbb{S}^1$ that is (1) simple to sample from, and (2) possesses the desiderata for decomposing the alignment problem with landmark constraints into multiple unconstrained ones. For warp maps on $[0,1]$, the distribution is related to the Dirichlet process. We demonstrate its utility by using it as a prior distribution on warp maps in a Bayesian model for alignment of two univariate curves, and as a proposal distribution in a stochastic algorithm that optimizes a suitable alignment functional for higher-dimensional curves. Several examples from simulated and real datasets are provided.
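
    To make the idea of a sampleable distribution on warp maps concrete, the sketch below samples a random monotone map of $[0,1]$ using the standard Dirichlet-increment construction (uniform knots, Dirichlet-distributed increments). It is a generic illustration of the notion, assuming only NumPy, and is not the paper's specific point-process construction.

```python
import numpy as np

def sample_warp(n_knots=10, concentration=1.0, rng=None):
    """Sample a random monotone warp map of [0, 1].

    Knot locations are uniform order statistics; the increments of the warp
    at those knots are Dirichlet distributed, so the resulting piecewise-linear
    map is increasing with gamma(0) = 0 and gamma(1) = 1.
    """
    rng = np.random.default_rng(rng)
    # Interior knots in the domain, plus the fixed endpoints 0 and 1.
    knots = np.concatenate(([0.0], np.sort(rng.uniform(size=n_knots)), [1.0]))
    # Dirichlet increments in the range: positive and summing to 1.
    increments = rng.dirichlet(np.full(n_knots + 1, concentration))
    values = np.concatenate(([0.0], np.cumsum(increments)))
    return lambda t: np.interp(t, knots, values)

# Example: evaluate one sampled warp on a regular grid.
gamma = sample_warp(n_knots=20, concentration=2.0, rng=0)
t = np.linspace(0.0, 1.0, 200)
warped_t = gamma(t)   # a curve f is then aligned by evaluating f(warped_t)
```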

    Distributed Sparse Computing and Communication for Big Graph Analytics and Deep Learning

    Sparsity can be found in the underlying structure of many real-world, computationally expensive problems, including big graph analytics and large-scale sparse deep neural networks. In addition, if carefully investigated, many of these problems contain a broad substratum of parallelism suitable for parallel and distributed execution of sparse computation. Usually, however, dense computation is preferred to its sparse alternative, since sparse computation is not only hard to parallelize due to the irregular nature of sparse data, but also complicated to implement in terms of rewriting a dense algorithm into a sparse one. Hence, robust sparse computation requires customized data structures that encode the sparsity of the data and new algorithms that mask the complexity of the sparse computation. By carefully exploiting sparse data structures and algorithms, however, sparse computation can reduce memory consumption, communication volume, and processing power, and thus move the scalability boundaries well beyond those of its dense equivalent. In this dissertation, I explain how to use parallel and distributed computing techniques in the presence of sparsity to solve large scientific problems, including graph analytics and deep learning. To this end, I leverage the duality between graph theory and sparse linear algebra primitives, and thus solve graph analytics and deep learning problems with sparse matrix operations. My contributions are fourfold: (1) the design and implementation of a new distributed compressed sparse matrix data structure that reduces both computation and communication volumes and is suitable for sparse matrix-vector and sparse matrix-matrix operations, (2) the new MPI*X parallelism model, which treats threads as the basic units of computing and communication, (3) the optimization of sparse matrix-matrix multiplication by employing different hashing techniques, and (4) the new data-then-model parallelism, which mitigates the effect of stragglers in sparse deep learning by combining data and model parallelism. Altogether, these contributions provide a set of data structures and algorithms that accelerate and scale sparse computation and communication.
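
    The kind of compressed representation the dissertation builds on can be illustrated with a plain compressed sparse row (CSR) matrix and a sparse matrix-vector multiply. The sketch below is a single-node Python illustration of why sparse storage saves memory and work; it does not reproduce the distributed data structure or the MPI*X model described above.

```python
import numpy as np

class CSRMatrix:
    """Minimal compressed sparse row (CSR) matrix, for illustration only."""

    def __init__(self, dense):
        dense = np.asarray(dense, dtype=float)
        self.shape = dense.shape
        self.indptr = [0]          # row pointers into indices/data
        self.indices = []          # column index of each stored nonzero
        self.data = []             # value of each stored nonzero
        for row in dense:
            nz = np.nonzero(row)[0]
            self.indices.extend(nz.tolist())
            self.data.extend(row[nz].tolist())
            self.indptr.append(len(self.indices))

    def spmv(self, x):
        """Sparse matrix-vector product y = A @ x, touching only nonzeros."""
        y = np.zeros(self.shape[0])
        for i in range(self.shape[0]):
            for k in range(self.indptr[i], self.indptr[i + 1]):
                y[i] += self.data[k] * x[self.indices[k]]
        return y

# Example: only the stored nonzeros are visited, which is what reduces memory
# and communication when the matrix (e.g. a graph adjacency matrix) is sparse.
A = CSRMatrix([[0, 2, 0], [1, 0, 0], [0, 0, 3]])
print(A.spmv(np.array([1.0, 1.0, 1.0])))   # -> [2. 1. 3.]
```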

    The opaque square

    The problem of finding small sets that block every line passing through a unit square was first considered by Mazurkiewicz in 1916. We call such a set {\em opaque} or a {\em barrier} for the square. The shortest known barrier has length $\sqrt{2} + \frac{\sqrt{6}}{2} = 2.6389\ldots$. The current best lower bound for the length of a (not necessarily connected) barrier is $2$, as established by Jones about 50 years ago. No better lower bound is known even if the barrier is restricted to lie in the square or in its close vicinity. Under a suitable locality assumption, we replace this lower bound by $2 + 10^{-12}$, which represents the first, albeit small, step in a long time toward finding the length of the shortest barrier. A sharper bound is obtained for interior barriers: the length of any interior barrier for the unit square is at least $2 + 10^{-5}$. Two of the key elements in our proofs are: (i) formulas established by Sylvester for the measure of all lines that meet two disjoint planar convex bodies, and (ii) a procedure for detecting lines that are witnesses to the invalidity of a short bogus barrier for the square. Comment: 23 pages, 8 figures.
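
    For context, the arithmetic behind the $2.6389\ldots$ figure can be made explicit. The decomposition below follows the classical barrier construction usually credited with this length (a Steiner tree spanning three corners of the square plus half a diagonal reaching the fourth corner); this background construction is stated here as supplementary context, not taken from the abstract itself.

```latex
% Length of the classical barrier for the unit square:
% a Steiner tree spanning three corners plus half of the diagonal to the fourth corner.
\[
  \underbrace{\frac{\sqrt{2}+\sqrt{6}}{2}}_{\text{Steiner tree of three corners}}
  \;+\;
  \underbrace{\frac{\sqrt{2}}{2}}_{\text{half diagonal to the fourth corner}}
  \;=\; \sqrt{2} + \frac{\sqrt{6}}{2} \;\approx\; 2.6389,
\]
\[
  \text{to be compared with the lower bounds } 2,\quad
  2+10^{-12}\ \text{(local barriers)},\quad
  2+10^{-5}\ \text{(interior barriers)}.
\]
```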

    mgm: Estimating Time-Varying Mixed Graphical Models in High-Dimensional Data

    We present the R package mgm for the estimation of k-order Mixed Graphical Models (MGMs) and mixed Vector Autoregressive (mVAR) models in high-dimensional data. These are useful extensions of graphical models for a single variable type, since data sets consisting of mixed types of variables (continuous, count, categorical) are ubiquitous. In addition, we allow the stationarity assumption of both models to be relaxed by introducing time-varying versions of MGMs and mVAR models based on a kernel-weighting approach. Time-varying models offer a rich description of temporally evolving systems and allow one to identify external influences on the model structure, such as the impact of interventions. We describe the background of all implemented methods and provide fully reproducible examples that illustrate how to use the package.
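
    The kernel-weighting idea behind the time-varying models can be sketched in a few lines. The example below uses a Gaussian kernel and a weighted least-squares fit, in Python, purely to illustrate the principle of local estimation around a time point; it is not the mgm interface (mgm itself is an R package based on regularized nodewise regression), and all names here are illustrative.

```python
import numpy as np

def kernel_weights(timepoints, t0, bandwidth):
    """Gaussian kernel weights centred on the estimation point t0."""
    w = np.exp(-0.5 * ((timepoints - t0) / bandwidth) ** 2)
    return w / w.sum()

def local_fit(X, y, timepoints, t0, bandwidth):
    """Weighted least squares: observations near t0 dominate the estimate.

    In a time-varying graphical model the same weighting is applied to each
    nodewise (regularized) regression, re-estimated on a grid of t0 values.
    """
    w = kernel_weights(timepoints, t0, bandwidth)
    W_sqrt = np.sqrt(np.diag(w))
    beta, *_ = np.linalg.lstsq(W_sqrt @ X, W_sqrt @ y, rcond=None)
    return beta

# Example: the effect of X[:, 0] drifts over time, so the local estimates
# at t0 = 0.2 and t0 = 0.8 differ; this is how time-varying structure is traced out.
rng = np.random.default_rng(0)
n = 200
timepoints = np.linspace(0, 1, n)
X = rng.normal(size=(n, 2))
y = (1 + 2 * timepoints) * X[:, 0] + rng.normal(scale=0.1, size=n)
print(local_fit(X, y, timepoints, 0.2, 0.1))
print(local_fit(X, y, timepoints, 0.8, 0.1))
```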

    VIRTUALIZED BASEBAND UNITS CONSOLIDATION IN ADVANCED LTE NETWORKS USING MOBILITY- AND POWER-AWARE ALGORITHMS

    Virtualization of baseband units in Advanced Long-Term Evolution (LTE) networks and the rapid performance growth of general-purpose processors naturally raise interest in resource multiplexing. The concept of resource sharing and management between virtualized instances is not new and is extensively used in data centers. We adopt some of these resource management techniques to organize virtualized baseband units on a pool of hosts and investigate the behavior of the system in order to identify features that are particularly relevant to the mobile environment. Subsequently, we introduce our own resource management algorithm, specifically targeted at some of the peculiarities identified by the experimental results.
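
    At its core, consolidating virtualized baseband units onto a host pool is a bin-packing problem. The sketch below shows a generic first-fit-decreasing placement of baseband-unit loads onto hosts, which tends to minimize the number of active (powered-on) hosts; it is an illustration of the problem setting only, not the mobility- and power-aware algorithm proposed in this work, and all names and numbers are invented for the example.

```python
def consolidate(bbu_loads, host_capacity):
    """First-fit-decreasing placement of virtual BBU loads onto hosts.

    Packing the heaviest loads first tends to keep the number of active hosts
    small, so idle hosts can be powered down. Mobility-driven load changes,
    which the thesis's algorithm accounts for, are not modeled here.
    """
    hosts = []  # each host tracks its remaining capacity and hosted BBUs
    for name, load in sorted(bbu_loads.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if host["free"] >= load:
                host["free"] -= load
                host["bbus"].append(name)
                break
        else:
            hosts.append({"free": host_capacity - load, "bbus": [name]})
    return hosts

# Example: six virtualized BBUs (load in arbitrary CPU units) on hosts of capacity 10.
placement = consolidate(
    {"cell-1": 6, "cell-2": 5, "cell-3": 4, "cell-4": 3, "cell-5": 2, "cell-6": 2},
    host_capacity=10,
)
for i, host in enumerate(placement):
    print(f"host {i}: {host['bbus']} (free capacity {host['free']})")
```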