761 research outputs found

    Delay Minimizing User Association in Cellular Networks via Hierarchically Well-Separated Trees

    Full text link
    We study downlink delay minimization within the context of cellular user association policies that map mobile users to base stations. We note that the delay-minimizing user association problem fits within a broader class of network utility maximization and can be posed as a non-convex quadratic program. This non-convexity motivates a split quadratic objective function that captures the original problem's inherent tradeoff: association with the station that provides the highest signal-to-interference-plus-noise ratio (SINR) vs. the station that is least congested. We find the split-term formulation is amenable to linearization by embedding the base stations in a hierarchically well-separated tree (HST), which offers a linear approximation with constant distortion. We provide a numerical comparison of several problem formulations and find that, with appropriate optimization parameter selection, the quadratic reformulation produces association policies with sum delays close to those of the original network utility maximization. We also comment on the more difficult problem in which idle base stations (those without associated users) are deactivated.

    Comment: 6 pages, 5 figures. Submitted on 2013-10-03 to the 2015 IEEE International Conference on Communications (ICC). Accepted on 2015-01-09 to the 2015 IEEE International Conference on Communications (ICC).
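    A toy sketch of the SINR-vs-congestion tradeoff the split objective captures, not the paper's HST-based algorithm: each user picks the station minimizing a delay proxy that inflates the inverse rate by the station's current load. All names and the weight `beta` are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_stations = 20, 4
    sinr = rng.uniform(1.0, 20.0, size=(n_users, n_stations))  # linear SINR per user/station pair
    rate = np.log2(1.0 + sinr)                                 # Shannon rate proxy

    def associate(rate, beta=0.5):
        """Greedy one-pass association trading off rate against congestion.

        beta weights the load term; beta = 0 recovers max-SINR association.
        """
        load = np.zeros(rate.shape[1])           # current users per station
        assignment = np.empty(rate.shape[0], dtype=int)
        for u in range(rate.shape[0]):
            # delay proxy: inverse rate inflated by the station's load (split objective)
            cost = (1.0 + beta * load) / rate[u]
            assignment[u] = np.argmin(cost)
            load[assignment[u]] += 1
        return assignment, load

    assignment, load = associate(rate)
    print(load)  # per-station congestion under the tradeoff
    ```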

    A Submodular Optimization Framework for Outage-Aware Cell Association in Heterogeneous Cellular Networks

    Get PDF
    In cellular heterogeneous networks (HetNets), offloading users to small cell base stations (SBSs) leads to a degradation in signal-to-interference-plus-noise ratio (SINR) and results in high outage probabilities for offloaded users. In this paper, we propose a novel framework for solving the cell association problem with the intention of improving user outage performance while achieving load balancing across different tiers of BSs. We formulate a combinatorial utility maximization problem with weighted BS loads that achieves proportional fairness among users and takes user outage performance into account. A formulation of the weighting parameters is proposed to discourage assigning users to BSs with high outage probabilities. In addition, we show that the combinatorial optimization problem can be reformulated as a monotone submodular maximization problem and readily solved via a greedy algorithm with lazy evaluations. The obtained solution offers a constant performance guarantee for the cell association problem. Simulation results show that our proposed approach leads to over a 30% reduction in outage probabilities for offloaded users and achieves load balancing across macrocell and small cell BSs.
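    The lazy-evaluation trick the abstract relies on is generic: because a monotone submodular function has diminishing marginal gains, stale gains stored in a max-heap remain valid upper bounds, so most re-evaluations can be skipped. A minimal sketch with a placeholder coverage objective, not the paper's outage-aware utility:

    ```python
    import heapq, itertools

    def lazy_greedy(f, ground_set, k):
        """Select k elements maximizing a monotone submodular set function f
        using lazy evaluations: a stale marginal gain in the heap is an upper
        bound by submodularity, so an element is only re-evaluated when it
        surfaces at the top of the heap."""
        selected = set()
        f_sel = f(selected)
        tie = itertools.count()                    # tiebreaker so elements are never compared
        heap = [(-(f({e}) - f_sel), next(tie), e, 0) for e in ground_set]
        heapq.heapify(heap)
        for rnd in range(1, min(k, len(ground_set)) + 1):
            while heap:
                neg_gain, _, e, stamp = heapq.heappop(heap)
                if stamp == rnd:                   # gain is fresh this round: greedily add e
                    selected.add(e)
                    f_sel -= neg_gain
                    break
                gain = f(selected | {e}) - f_sel   # refresh a stale gain and re-queue
                heapq.heappush(heap, (-gain, next(tie), e, rnd))
        return selected

    # toy usage: set coverage, a standard monotone submodular function
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
    cover = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
    print(lazy_greedy(cover, sets.keys(), 2))      # a 2-element selection covering 3 items
    ```

    The same greedy routine carries the classic (1 - 1/e) approximation guarantee for monotone submodular maximization, which is the kind of constant performance guarantee the abstract refers to.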

    Software-Driven and Virtualized Architectures for Scalable 5G Networks

    Full text link
    In this dissertation, we argue that it is essential to rearchitect 4G cellular core networks, sitting between the Internet and the radio access network, to meet the scalability, performance, and flexibility requirements of 5G networks. Today, there is a growing consensus among operators and the research community that the software-defined networking (SDN), network function virtualization (NFV), and mobile edge computing (MEC) paradigms will be the key ingredients of next-generation cellular networks. Motivated by these trends, we design and optimize three core network architectures, SoftMoW, SoftBox, and SkyCore, for different network scales, objectives, and conditions. SoftMoW provides global control over nationwide core networks with the ultimate goal of enabling new routing and mobility optimizations. SoftBox attempts to enhance policy enforcement in statewide core networks to enable low-latency, signaling-efficient, and customized services for mobile devices. SkyCore is aimed at realizing a compact core network for citywide UAV-based radio networks that are going to serve first responders in the future. Network slicing techniques make it possible to deploy these solutions on the same infrastructure in parallel. To better support mobility and provide verifiable security, these architectures can use an addressing scheme that separates network locations and identities with self-certifying, flat, and non-aggregatable address components. To support the proposed architectures, we designed a high-speed and memory-efficient router, called Caesar, for this type of addressing scheme.

    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146130/1/moradi_1.pd
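    The location/identity split with self-certifying addresses can be pictured in a few lines: the identifier is a hash of the host's public key, so any party can verify the binding without a PKI, and the flat digest is inherently non-aggregatable. This is a generic sketch of the concept, not Caesar's actual on-the-wire format; the field names are hypothetical.

    ```python
    import hashlib

    def self_certifying_id(public_key: bytes) -> bytes:
        """Flat, non-aggregatable identifier derived from a public key."""
        return hashlib.sha256(public_key).digest()[:20]  # 160-bit flat ID

    def verify_binding(claimed_id: bytes, public_key: bytes) -> bool:
        """Anyone can check the ID/key binding locally, with no PKI lookup."""
        return claimed_id == self_certifying_id(public_key)

    # a packet header then carries (locator, identifier) instead of one address,
    # so the identifier stays stable while the locator changes with mobility
    header = {"locator": "198.51.100.7", "identifier": self_certifying_id(b"example-pubkey")}
    ```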

    Delivering IoT Services in Smart Cities and Environmental Monitoring through Collective Awareness, Mobile Crowdsensing and Open Data

    Get PDF
    The Internet of Things (IoT) is the paradigm that allows us to interact with the real world by means of networking-enabled devices and to convert physical phenomena into valuable digital knowledge. Such a rapidly evolving field has leveraged the explosion of a number of technologies, standards, and platforms. Consequently, different IoT ecosystems behave as closed islands and do not interoperate with each other, so the potential of the number of connected objects in the world is far from being fully unleashed. Typically, research efforts tackling this challenge propose new IoT platforms or standards; however, such solutions struggle to keep up with the pace at which the field is evolving. Our work is different in that it originates from the following observation: in use cases that depend on common phenomena, such as Smart Cities or environmental monitoring, much of the data useful to applications is already in place somewhere, or devices capable of collecting such data are already deployed. For such scenarios, we propose and study the use of Collective Awareness Paradigms (CAPs), which offload data collection to a crowd of participants. We bring three main contributions: (1) we study the feasibility of using Open Data coming from heterogeneous sources, focusing particularly on crowdsourced and user-contributed data that has the drawback of being incomplete, and we then propose a state-of-the-art algorithm that automatically classifies raw crowdsourced sensor data; (2) we design a data collection framework that uses Mobile Crowdsensing (MCS) and puts the participants and the stakeholders in a coordinated interaction, together with a distributed data collection algorithm that prevents users from collecting too much or too little data; (3) we design a Service Oriented Architecture that constitutes a single interface to the raw data collected through CAPs by aggregating it into ad-hoc services; moreover, we provide a prototype implementation.
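    One way to picture the "not too much, not too little" collection constraint: give each spatial cell a target sample count and accept a participant's reading only while its cell is under target, steering effort toward sparse areas. This is a hypothetical minimal sketch, not the dissertation's distributed algorithm; `TARGET` and the cell granularity are assumptions.

    ```python
    from collections import defaultdict

    TARGET = 10                    # desired samples per spatial cell (hypothetical quota)
    counts = defaultdict(int)      # samples collected so far, keyed by cell

    def accept_reading(cell_id: str) -> bool:
        """Accept a sample only while the cell is under-sampled, so saturated
        cells stop consuming participant effort and battery."""
        if counts[cell_id] >= TARGET:
            return False           # cell saturated: redirect the participant elsewhere
        counts[cell_id] += 1
        return True
    ```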

    High-Performance Modelling and Simulation for Big Data Applications

    Get PDF
    This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Features extraction using random matrix theory.

    Get PDF
    Representing complex data in a concise and accurate way is a special stage in the data mining methodology. Redundant and noisy data affect the generalization power of any classification algorithm, undermine the results of any clustering algorithm, and encumber the monitoring of large dynamic systems. This work provides several efficient approaches to all the aforementioned sides of the analysis. We established that a notable difference can be made if results from the theory of ensembles of random matrices are employed. A particularly important result of our study is a discovered family of methods based on projecting the data set onto different subsets of the correlation spectrum. Generally, we start with the traditional correlation matrix of a given data set. We perform singular value decomposition and establish boundaries between essential and unimportant eigencomponents of the spectrum. Then, depending on the nature of the problem at hand, we use either the former or the latter part for the projection. Projecting onto the spectrum of interest is a common technique in linear and non-linear spectral methods such as Principal Component Analysis, Independent Component Analysis, and Kernel Principal Component Analysis. Usually the part of the spectrum to project onto is defined by the amount of variance of the overall data, or of the feature space in the non-linear case. The applicability of these spectral methods is limited by the assumption that larger variance carries the important dynamics, i.e. that the data has a high signal-to-noise ratio. If this holds, projection onto the principal components targets two problems in data mining: reduction in the number of features and selection of the more important features. Our methodology does not assume a high signal-to-noise ratio; instead, using the rigorous instruments of Random Matrix Theory (RMT), it identifies the presence of noise and establishes its boundaries. Knowledge of the structure of the spectrum gives us the possibility of making more insightful projections. For instance, in the application to router network traffic, the reconstruction-error procedure for anomaly detection is based on projection onto the noisy part of the spectrum, whereas in the bioinformatics application of clustering different types of leukemia, implicit denoising of the correlation matrix is achieved by decomposing the spectrum into random and non-random parts. For temporal high-dimensional data, the spectrum and eigenvectors of its correlation matrix are another representation of the data. Thus, eigenvalues, components of the eigenvectors, the inverse participation ratio of eigenvector components, and other operators of eigenanalysis are spectral features of a dynamic system. In our work we proposed to extract spectral features using RMT. We demonstrated that with the extracted spectral features we can monitor the changing dynamics of network traffic. Experimenting with the delayed correlation matrices of network traffic and extracting their spectral features, we visualized the delayed processes in the system. We demonstrated in our work that a broad range of applications in feature extraction can benefit from the novel RMT-based approach to the spectral representation of the data.
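    The spectrum-splitting step generalizes as follows: for T observations of N i.i.d. noise variables, the Marchenko-Pastur law bounds the correlation eigenvalues by lambda_pm = (1 +- sqrt(N/T))^2, so eigencomponents above the upper edge are treated as signal and the rest as noise. A minimal sketch of that split plus the inverse participation ratio; the thesis's exact boundary-estimation procedure may differ.

    ```python
    import numpy as np

    def split_spectrum(X):
        """Split the correlation spectrum of X (T samples x N features) into
        noise-band and signal eigencomponents via the Marchenko-Pastur bound."""
        T, N = X.shape
        Z = (X - X.mean(0)) / X.std(0)             # standardize each feature
        C = Z.T @ Z / T                            # empirical correlation matrix
        vals, vecs = np.linalg.eigh(C)             # eigenvalues ascending
        lam_plus = (1 + np.sqrt(N / T)) ** 2       # Marchenko-Pastur upper edge
        signal = vals > lam_plus
        return vals[signal], vecs[:, signal], vals[~signal], vecs[:, ~signal]

    def ipr(v):
        """Inverse participation ratio of an eigenvector: a large value means
        the component is localized on few features."""
        return np.sum(v ** 4) / np.sum(v ** 2) ** 2
    ```

    Projecting data onto the noise-band eigenvectors and measuring the reconstruction error is one common way to flag anomalies, in the spirit of the network traffic application described above.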

    The 4th Conference of PhD Students in Computer Science

    Get PDF