
    OSCAR: A Collaborative Bandwidth Aggregation System

    The exponential increase in mobile data demand, coupled with growing user expectation to be connected in all places at all times, has introduced novel challenges for researchers to address. Fortunately, the widespread deployment of various network technologies and the increased adoption of multi-interface enabled devices have allowed researchers to develop solutions for these challenges. Such solutions aim to exploit the available interfaces on such devices in both solitary and collaborative forms. These solutions, however, have faced a steep deployment barrier. In this paper, we present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system. We present the OSCAR architecture, which introduces no intermediate hardware and requires no changes to current applications or legacy servers. The architecture is designed to automatically estimate the system's context, dynamically schedule various connections and/or packets to different interfaces, remain backwards compatible with the current Internet architecture, and provide the user with incentives for collaboration. We also formulate the OSCAR scheduler as a multi-objective, multi-modal scheduler that maximizes system throughput while minimizing energy consumption or financial cost. We evaluate OSCAR via a Linux implementation, as well as via simulation, and compare our results to the optimal achievable throughput, cost, and energy consumption. Our evaluation shows that, in the throughput maximization mode, OSCAR provides up to 150% throughput enhancement compared to current operating systems, without any changes to legacy servers. Moreover, this performance gain increases further with the availability of connection-resume-supporting or OSCAR-enabled servers, reaching the maximum achievable upper-bound throughput.
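
    As a rough illustration of what such a multi-modal scheduling policy might look like, the Python sketch below picks an interface for the next connection according to the selected objective (throughput, energy, or cost). The interface names, cost figures, and greedy selection rule are illustrative assumptions, not OSCAR's actual algorithm.

    # Hypothetical sketch of a multi-objective interface scheduler in the spirit of
    # the design described above; all values and the greedy policy are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Interface:
        name: str
        est_throughput_mbps: float  # estimated from recent measurements
        energy_j_per_mb: float      # estimated energy cost per megabyte
        price_per_mb: float         # monetary cost per megabyte

    def schedule(interfaces, mode="throughput"):
        """Pick an interface for the next connection or packet batch.

        mode selects the optimization objective: maximize throughput,
        or minimize energy or financial cost (multi-modal operation).
        """
        if mode == "throughput":
            return max(interfaces, key=lambda i: i.est_throughput_mbps)
        if mode == "energy":
            return min(interfaces, key=lambda i: i.energy_j_per_mb)
        if mode == "cost":
            return min(interfaces, key=lambda i: i.price_per_mb)
        raise ValueError(f"unknown mode: {mode}")

    ifaces = [
        Interface("wlan0", est_throughput_mbps=40.0, energy_j_per_mb=0.8, price_per_mb=0.0),
        Interface("wwan0", est_throughput_mbps=25.0, energy_j_per_mb=1.5, price_per_mb=0.01),
    ]
    print(schedule(ifaces, mode="throughput").name)  # -> wlan0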

    Cross-Layer Optimization of Fast Video Delivery in Cache-Enabled Relaying Networks

    This paper investigates the cross-layer optimization of fast video delivery and caching for minimizing the overall video delivery time in a two-hop relaying network. The half-duplex relay nodes are equipped with both a cache and a buffer, which facilitate joint scheduling of fetching and delivery to exploit channel diversity and improve the overall delivery performance. The fast delivery control is formulated as a two-stage functional non-convex optimization problem. By exploiting the underlying convex and quasi-convex structures, the problem can be solved exactly and efficiently by the developed algorithm. Simulation results show that significant caching and buffering gains can be achieved with the proposed framework, which translates into a reduction of the overall video delivery time. In addition, a trade-off between caching and buffering gains is unveiled. Comment: 7 pages, 4 figures; accepted for presentation at IEEE Globecom, San Diego, CA, Dec. 201
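
    To make the fetch-and-deliver scheduling idea concrete, the following is a minimal sketch of a generic two-hop delivery-time minimization. The notation (video size V, initial cache content C_0, per-slot fetch and delivery rate limits S_k and R_k, half-duplex indicator x_k) is assumed here for illustration and is not the paper's actual two-stage formulation.

    % Illustrative sketch only; not the model used in the paper.
    \begin{align*}
      \min_{T,\,\{s_k, r_k, x_k\}} \; & T \\
      \text{s.t.}\quad & \textstyle\sum_{i=1}^{k} r_i \le C_0 + \sum_{i=1}^{k} s_i, \quad k = 1,\dots,T
        && \text{(only cached or already-fetched data can be delivered)} \\
      & \textstyle\sum_{k=1}^{T} r_k \ge V
        && \text{(the whole video of size $V$ is delivered by time $T$)} \\
      & 0 \le s_k \le x_k S_k, \quad 0 \le r_k \le (1-x_k) R_k, \quad x_k \in \{0,1\}
        && \text{(half-duplex: fetch or deliver, not both, per slot)}
    \end{align*}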

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and to enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
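
    As a toy illustration of the kind of data-driven decision discussed above (here, predicting whether a candidate lightpath will meet its quality-of-transmission target), the following Python sketch trains a random forest on synthetic data. The features, the synthetic label rule, and the thresholds are assumptions for illustration and are not drawn from the paper.

    # Illustrative only: toy QoT classification from simple lightpath features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    length_km = rng.uniform(50, 2000, n)        # lightpath length
    num_spans = np.ceil(length_km / 80)         # assuming ~80 km spans
    launch_dbm = rng.uniform(-2, 3, n)          # launch power
    mod_order = rng.choice([2, 4, 16, 64], n)   # modulation order (QPSK .. 64-QAM)

    # Synthetic "ground truth": longer paths and denser constellations fail more often.
    snr_margin = 30 - 0.01 * length_km - 2 * np.log2(mod_order) + launch_dbm + rng.normal(0, 1, n)
    label = (snr_margin > 5).astype(int)        # 1 = QoT target met

    X = np.column_stack([length_km, num_spans, launch_dbm, mod_order])
    X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")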

    Via Minimization in VLSI Chip Design - Application of a Planar Max-Cut Algorithm

    The design of very large scale integrated (VLSI) chips is an exciting area of applied discrete mathematics. Due to the intractability of the majority of the problems, and also due to the huge instance sizes, the design process is decomposed into various sub-problems. In this paper, for a given detailed routing solution, we revisit the assignment of layers to net segments. For connected metalized nets, a layer change is accomplished by a vertical interconnection area (via). We seek to minimize the use of these vias, as vias not only reduce the electrical reliability and performance of the chip but also substantially decrease the manufacturing yield. In the general case, the via minimization problem is NP-hard. However, it is known that the two-layer via minimization problem can be solved as a maximum cut problem on a planar graph, which is a polynomially solvable task. The focus of this paper is to apply this approach to modern real-world chips. From the roughly two dozen wiring layers present, we take two adjacent ones for the via minimization. As a core routine, we use a fast maximum cut algorithm on planar graphs. To make the solutions usable in practice, we integrate practically relevant design rule constraints at the expense of potentially using further vias. Thus, our solution satisfies the additional constraints present in actual current designs. The computational results show that our implementation is fast on real-world instances, usually computing a solution within only a few minutes of CPU time. Moreover, a considerable number of vias can often be saved.
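
    The reduction sketched above can be made concrete on a toy instance: build a conflict graph over wire-segment clusters, weight each edge by the vias incurred if its endpoints share a layer, and look for a maximum cut. The tiny brute-force search in the Python sketch below is only meant to illustrate the idea; the paper relies on an exact polynomial-time planar max-cut routine, and the graph and weights here are made up.

    # Toy illustration of layer assignment as max-cut; graph and weights are assumptions.
    from itertools import product

    # Nodes are wire-segment clusters; an edge (u, v, w) means that placing u and v
    # on the SAME layer would cost w extra vias, so heavy edges should be separated.
    edges = [(0, 1, 3), (1, 2, 1), (2, 3, 2), (3, 0, 2), (0, 2, 1)]
    n = 4

    best_cut, best_sides = -1, None
    for sides in product([0, 1], repeat=n):          # brute force: fine for tiny n only
        cut = sum(w for u, v, w in edges if sides[u] != sides[v])
        if cut > best_cut:
            best_cut, best_sides = cut, sides

    total = sum(w for _, _, w in edges)
    print("layer assignment:", best_sides)           # side 0 -> layer A, side 1 -> layer B
    print("vias avoided:", best_cut, "of", total)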

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. Comment: 46 pages, 22 fig
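
    As one small, concrete instance of the reinforcement learning techniques surveyed here, the Python sketch below uses a stateless Q-learning (multi-armed bandit) update to let a cognitive-radio user learn which channel is most often free of primary users. The channel statistics and parameters are assumptions for illustration only.

    # Illustrative toy only: learn a preferred channel from simulated availability.
    import random

    random.seed(0)
    n_channels = 4
    p_idle = [0.2, 0.5, 0.7, 0.9]          # probability each channel is free of primary users
    q = [0.0] * n_channels                  # single-state problem, so Q is just per-action
    alpha, eps = 0.1, 0.1                   # learning rate and exploration rate

    for step in range(5000):
        explore = random.random() < eps
        a = random.randrange(n_channels) if explore else max(range(n_channels), key=q.__getitem__)
        reward = 1.0 if random.random() < p_idle[a] else 0.0   # 1 if the transmission succeeds
        q[a] += alpha * (reward - q[a])     # stateless Q-learning (bandit) update

    print("learned values:", [round(v, 2) for v in q])
    print("preferred channel:", max(range(n_channels), key=q.__getitem__))  # likely channel 3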