
    Enabling Disaster Resilient 4G Mobile Communication Networks

    4G Long Term Evolution (LTE) is the cellular technology expected to outperform previous generations and, to some extent, revolutionize the user experience by exploiting the most advanced radio access techniques (i.e., OFDMA, SC-FDMA, MIMO). However, the strong dependencies between user equipment (UEs), base stations (eNBs), and the Evolved Packet Core (EPC) limit the flexibility, manageability, and resiliency of such networks: if the UE-eNB or eNB-EPC communication links are disrupted, UEs are unable to communicate. In this article, we reshape the 4G mobile network towards more virtualized and distributed architectures that improve disaster resilience by drastically reducing the dependency between UEs, eNBs, and the EPC. The contribution of this work is twofold. First, we present the Flexible Management Entity (FME), a distributed entity that leverages virtualized EPC functionalities in 4G cellular systems. Second, we introduce a simple and novel device-to-device (D2D) communication scheme that allows UEs in physical proximity to communicate directly without coordination by an eNB.

    Comment: Submitted to IEEE Communications Magazine
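    The abstract does not spell out the D2D scheme itself, so the following is only a minimal sketch of the idea under stated assumptions: a UE prefers the infrastructure path through its eNB and falls back to direct delivery to peers in physical proximity when that link is disrupted. The UE and ENB classes, the d2d_range parameter, and all method names are hypothetical illustrations, not the paper's protocol.

```python
# Minimal sketch of an eNB-to-D2D fallback; all names here are hypothetical.

class ENB:
    def relay(self, payload, peer):
        # Normal infrastructure path: the eNB forwards traffic to the peer.
        return peer.receive(payload, via="enb")

class UE:
    def __init__(self, ue_id, position):
        self.ue_id = ue_id
        self.position = position          # (x, y) in meters
        self.enb_reachable = True         # assume a healthy link initially

    def in_proximity(self, other, d2d_range=100.0):
        # Euclidean distance as a stand-in for proximity discovery.
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return (dx * dx + dy * dy) ** 0.5 <= d2d_range

    def send(self, payload, peer, enb=None):
        # Prefer the infrastructure path; fall back to direct D2D delivery
        # when the eNB link is disrupted, as motivated in the abstract.
        if self.enb_reachable and enb is not None:
            return enb.relay(payload, peer)
        if self.in_proximity(peer):
            return peer.receive(payload, via="d2d")
        return False                      # no eNB and peer out of D2D range

    def receive(self, payload, via):
        print(f"UE {self.ue_id} got {payload!r} via {via}")
        return True

# Example: backhaul disrupted by a disaster, peers within D2D range.
a, b = UE("A", (0.0, 0.0)), UE("B", (30.0, 40.0))
a.enb_reachable = False
a.send("hello", b)                        # -> UE B got 'hello' via d2d
```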

    Structure-Aware Dynamic Scheduler for Parallel Machine Learning

    Training large machine learning (ML) models with many variables or parameters can take a long time with sequential procedures, even with stochastic updates. A natural solution is to turn to distributed computing on a cluster; however, naive, unstructured parallelization of ML algorithms usually does not yield a proportional speedup and can even cause divergence, because dependencies between model elements attenuate the computational gains from parallelization and compromise the correctness of inference. Recent efforts to address this issue have benefited from exploiting the static, a priori block structures residing in ML algorithms. In this paper, we take this path further by exploring the dynamic block structures and workloads that arise during ML program execution, which offer new opportunities for improving convergence, correctness, and load balancing in distributed ML. We propose and showcase STRADS, a general-purpose scheduler for coordinating distributed updates in ML algorithms that harnesses these opportunities in a systematic way. We provide theoretical guarantees for our scheduler and demonstrate its efficacy over static block structures on Lasso and Matrix Factorization.
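    As a rough illustration of the dependency-aware, dynamic scheduling the abstract describes, the toy Python sketch below greedily picks a block of mutually weakly-correlated coordinates to update in parallel (so coupled parameters are never updated together), prioritized by the magnitude of each coordinate's most recent update as a stand-in for the dynamic workload signal. The function schedule_block, the correlation-threshold heuristic, and all parameters are assumptions for illustration, not the actual STRADS implementation.

```python
import numpy as np

def schedule_block(corr, deltas, block_size, corr_threshold=0.1):
    """Greedily select up to `block_size` mutually weakly-correlated
    coordinates, favoring those with the largest recent updates."""
    order = np.argsort(-np.abs(deltas))   # dynamic priority: biggest movers first
    chosen = []
    for j in order:
        # Only admit j if it is weakly coupled to everything already chosen.
        if all(abs(corr[j, k]) < corr_threshold for k in chosen):
            chosen.append(j)
        if len(chosen) == block_size:
            break
    return chosen

# Example: 5 features with one strongly coupled pair (0, 1), as in Lasso
# where correlated features updated in parallel can harm convergence.
corr = np.eye(5)
corr[0, 1] = corr[1, 0] = 0.9
deltas = np.array([0.5, 0.4, 0.05, 0.2, 0.1])
print(schedule_block(corr, deltas, block_size=3))  # -> [0, 3, 4]; excludes 1
```

    A scheduler built this way can revise its blocks every iteration as the deltas change, which is the "dynamic" aspect the abstract contrasts with static, a priori block partitions.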