
    How user throughput depends on the traffic demand in large cellular networks

    Little's law allows one to express the mean user throughput in any region of the network as the ratio of the mean traffic demand to the steady-state mean number of users in this region. The corresponding statistics are usually collected in operational networks for each cell. Using ergodic arguments and the Palm-theoretic formalism, we show that the global mean user throughput in the network is equal to the ratio of these two means in the steady state of the "typical cell". Here, both means account for double averaging, over time and network geometry, and can be related to the per-surface traffic demand, the base-station density and the spatial distribution of the SINR. The latter accounts for network irregularities, shadowing and idling cells via cell-load equations. We validate our approach by comparing analytical and simulation results for a Poisson network model to real-network cell measurements.
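
    As a hedged illustration of the ratio-of-means identity described in this abstract, the following minimal Python sketch (the per-cell measurements are hypothetical) estimates the global mean user throughput as the mean traffic demand divided by the mean number of users, with both means taken over cells.

```python
# Minimal sketch, assuming hypothetical per-cell statistics collected over the
# same observation period: by Little's law, the global mean user throughput is
# the ratio of the cell-averaged traffic demand to the cell-averaged number of
# active users (a ratio of means, not a mean of per-cell ratios).

# Each tuple: (mean traffic demand in Mbit/s, mean number of active users).
cells = [
    (12.0, 3.1),
    (8.5, 1.9),
    (20.3, 6.4),
    (5.1, 0.8),
]

mean_demand = sum(rho for rho, _ in cells) / len(cells)   # Mbit/s per cell
mean_users = sum(n for _, n in cells) / len(cells)        # users per cell

global_throughput = mean_demand / mean_users
print(f"global mean user throughput ~ {global_throughput:.2f} Mbit/s")
```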

    Using Poisson processes to model lattice cellular networks

    An almost ubiquitous assumption made in the stochastic-analytic study of the quality of service in cellular networks is the Poisson distribution of base stations. It is usually justified by various irregularities in the real placement of base stations, which ideally should form a hexagonal pattern. We provide a different and rigorous argument justifying the Poisson assumption under sufficiently strong log-normal shadowing observed in the network, in the evaluation of a natural class of the typical-user service characteristics, including its SINR. Namely, we present a Poisson-convergence result for a broad range of stationary (including lattice) networks subject to log-normal shadowing of increasing variance. We also show, for the Poisson model, that the distribution of all these characteristics does not depend on the particular form of the additional fading distribution. Our approach involves a mapping of the 2D network model to a 1D image of it "perceived" by the typical user. For this image we prove our convergence result and the invariance of the Poisson limit with respect to the distribution of the additional shadowing or fading. Moreover, we present some new results for the Poisson model allowing one to calculate the distribution function of the SINR in its whole domain. We use them to study and optimize the mean energy efficiency in cellular networks.
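
    A hedged Monte Carlo sketch of the convergence claim (the lattice geometry, shadowing variance and window size are assumptions, not the paper's exact setup): the propagation losses seen at the origin of a square-lattice network with strong log-normal shadowing should exhibit the power-law mean count in t^(2/beta) characteristic of the 1D Poisson image.

```python
import math
import random

# Minimal sketch, assuming a square lattice with mean-1 log-normal shadowing:
# count the stations whose propagation loss |x|^beta / S falls below t and
# check that the mean count grows roughly like t^(2/beta), the power-law
# intensity of the Poisson limit perceived by the typical user.

random.seed(0)
beta = 4.0                                   # path-loss exponent (assumption)
sigma = 12.0 * math.log(10.0) / 10.0         # 12 dB shadowing, natural-log scale
half = 30                                    # half-width of the simulated lattice

def propagation_losses():
    losses = []
    for i in range(-half, half + 1):
        for j in range(-half, half + 1):
            r = math.hypot(i + 0.5, j + 0.5)                   # user sits off-grid
            s = math.exp(random.gauss(-sigma**2 / 2, sigma))   # mean-1 shadowing
            losses.append(r**beta / s)
    return losses

runs = [propagation_losses() for _ in range(50)]
for t in (1e2, 1e3, 1e4):
    mean_count = sum(sum(1 for l in run if l <= t) for run in runs) / len(runs)
    print(f"t={t:8.0f}  mean count={mean_count:7.2f}  count/t^(2/beta)={mean_count / t**(2 / beta):.3f}")
```

    The last column should stay roughly constant across thresholds, which is the power-law behaviour expected under the Poisson limit.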

    SINR-based k-coverage probability in cellular networks with arbitrary shadowing

    We give numerically tractable, explicit integral expressions for the distribution of the signal-to-interference-and-noise ratio (SINR) experienced by a typical user in the downlink channel from the k-th strongest base station of a cellular network modelled by a Poisson point process on the plane. Our signal propagation-loss model comprises a power-law path-loss function with arbitrarily distributed shadowing, independent across all base stations, with and without Rayleigh fading. Our results are valid in the whole domain of the SINR, in particular for SINR < 1, where one observes multiple coverage. In this latter aspect our paper complements previous studies reported in [Dhillon et al. JSAC 2012].
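
    The integral expressions themselves are given in the paper; as a hedged cross-check, the following Monte Carlo sketch (density, path-loss exponent, shadowing, noise and threshold are all assumed values) estimates the probability that the SINR from the k-th strongest station exceeds a threshold in a Poisson network with log-normal shadowing and Rayleigh fading.

```python
import numpy as np

# Hedged Monte Carlo sketch, with assumed parameters: estimate the probability
# that the SINR from the k-th strongest base station exceeds tau for a typical
# user at the origin of a Poisson network with log-normal shadowing and
# Rayleigh fading.

rng = np.random.default_rng(1)
lam, beta = 1.0, 3.5                  # base-station density, path-loss exponent
sigma = 8.0 * np.log(10.0) / 10.0     # 8 dB shadowing, natural-log scale
noise, radius = 1e-4, 20.0            # noise power, simulation window radius
k, tau, n_runs = 2, 0.5, 2000         # k-th strongest station, SINR threshold, runs

hits = 0
for _ in range(n_runs):
    n = rng.poisson(lam * np.pi * radius**2)
    if n < k:
        continue
    r = radius * np.sqrt(rng.uniform(size=n))               # uniform points in the disc
    shadow = np.exp(rng.normal(-sigma**2 / 2, sigma, n))     # mean-1 log-normal shadowing
    fading = rng.exponential(1.0, n)                         # Rayleigh fading (power)
    power = fading * shadow * r**(-beta)                     # received powers
    p_k = np.sort(power)[-k]                                 # k-th strongest signal
    interference = power.sum() - p_k                         # everything else interferes
    hits += p_k / (interference + noise) > tau

print(f"P(SINR of {k}-th strongest > {tau}) ~ {hits / n_runs:.3f}")
```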

    Wireless networks appear Poissonian due to strong shadowing

    The geographic locations of cellular base stations can sometimes be well fitted with spatially homogeneous Poisson point processes. In this paper we make a complementary observation: in the presence of log-normal shadowing of sufficiently high variance, the statistics of the propagation loss of a single user with respect to different network stations are invariant with respect to their geographic positioning, whether regular or not, for a wide class of empirically homogeneous networks. Even in the perfectly hexagonal case they appear as though they were realized in a Poisson network model, i.e., they form an inhomogeneous Poisson point process on the positive half-line with a power-law density characterized by the path-loss exponent. At the same time, the conditional distances to the corresponding base stations, given their observed propagation losses, become independent and log-normally distributed, which can be seen as a decoupling between the real and the model geometry. The result applies also to the Suzuki (Rayleigh-log-normal) propagation model. We use the Kolmogorov-Smirnov test to empirically study the quality of the Poisson approximation and use it to build a linear-regression method for the statistical estimation of the value of the path-loss exponent.
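
    A hedged sketch of the regression idea mentioned at the end of the abstract (details are assumptions, not the paper's exact estimator): under the Poisson approximation the n-th smallest propagation loss L_(n) satisfies n ~ a * L_(n)^(2/beta), so regressing log L_(n) on log n gives a slope of about beta/2, from which the path-loss exponent can be read off.

```python
import numpy as np

# Hedged sketch, assuming a square lattice with strong log-normal shadowing:
# under the Poisson approximation the n-th smallest propagation loss L_(n)
# satisfies n ~ a * L_(n)^(2/beta), so the slope of log L_(n) against log n
# is about beta/2 and yields an estimate of the path-loss exponent.

rng = np.random.default_rng(2)
beta_true = 4.0
sigma = 12.0 * np.log(10.0) / 10.0           # 12 dB shadowing, natural-log scale

coords = np.array([(i + 0.5, j + 0.5) for i in range(-40, 40) for j in range(-40, 40)])
dist = np.hypot(coords[:, 0], coords[:, 1])
shadow = np.exp(rng.normal(-sigma**2 / 2, sigma, dist.size))
losses = np.sort(dist**beta_true / shadow)   # propagation losses, smallest first

n_use = 200                                  # only the strongest (smallest-loss) stations
ranks = np.arange(1, n_use + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(losses[:n_use]), 1)
print(f"estimated path-loss exponent ~ {2 * slope:.2f} (true value {beta_true})")
```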

    What frequency bandwidth to run cellular network in a given country? - a downlink dimensioning problem

    We propose an analytic approach to the frequency bandwidth dimensioning problem faced by cellular network operators who deploy or upgrade their networks in various geographical regions (countries) with inhomogeneous urbanization. We present a model allowing one to capture fundamental relations between users' quality-of-service parameters (mean downlink throughput), traffic demand, the density of base-station deployment, and the available frequency bandwidth. These relations depend on the applied cellular technology (3G or 4G, impacting the user peak bit-rate) and on the path-loss characteristics observed in different (urban, sub-urban and rural) areas. We observe that if the distance between base stations is kept inversely proportional to the distance coefficient of the path-loss function, then the performance of the typical cells of these different areas is similar when serving the same (per-cell) traffic demand. In this case, the frequency bandwidth dimensioning problem can be solved uniformly across the country applying the mean cell approach proposed in [Blaszczyszyn et al. WiOpt2014] (http://dx.doi.org/10.1109/WIOPT.2014.6850355). We validate our approach by comparing the analytical results to measurements in operational networks in various geographical zones of different countries.
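
    As a hedged toy illustration of the kind of relation involved (this is a simple processor-sharing cell model, not the mean-cell approach of the cited WiOpt 2014 paper), the sketch below ties bandwidth, an assumed average spectral efficiency, per-cell traffic demand and mean user throughput together: in an M/G/1 processor-sharing cell of capacity C, the mean user throughput equals C minus the traffic demand whenever the cell is stable.

```python
# Hedged toy model, not the mean-cell approach of the cited paper: a cell is an
# M/G/1 processor-sharing queue whose capacity is bandwidth times an assumed
# average spectral efficiency; when stable, the mean user throughput equals
# capacity minus the per-cell traffic demand.

def mean_user_throughput(bandwidth_mhz, spectral_eff, demand_mbps):
    """Mean user throughput (Mbit/s) in the toy cell, or 0.0 if the cell is overloaded."""
    capacity_mbps = bandwidth_mhz * spectral_eff    # MHz * bit/s/Hz = Mbit/s
    return max(0.0, capacity_mbps - demand_mbps)

def required_bandwidth(target_mbps, spectral_eff, demand_mbps):
    """Smallest bandwidth (MHz) meeting a target mean user throughput in the toy model."""
    return (target_mbps + demand_mbps) / spectral_eff

# Hypothetical numbers: 1.8 bit/s/Hz average spectral efficiency (4G-like),
# 25 Mbit/s per-cell traffic demand, 10 Mbit/s target mean user throughput.
print(mean_user_throughput(20.0, 1.8, 25.0))   # throughput with 20 MHz
print(required_bandwidth(10.0, 1.8, 25.0))     # MHz needed to reach the target
```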

    Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce

    The kernel k-means is an effective method for data clustering which extends the commonly used k-means algorithm to work on a similarity matrix over complex data structures. The kernel k-means algorithm is, however, computationally very complex, as it requires the complete data matrix to be calculated and stored. Further, the kernelized nature of the kernel k-means algorithm hinders the parallelization of its computations on modern infrastructures for distributed computing. In this paper, we define a family of kernel-based low-dimensional embeddings that allows for scaling kernel k-means on MapReduce via an efficient and unified parallelization strategy. Afterwards, we propose two methods for low-dimensional embedding that adhere to our definition of the embedding family. Exploiting the proposed parallelization strategy, we present two scalable MapReduce algorithms for kernel k-means. We demonstrate the effectiveness and efficiency of the proposed algorithms through an empirical evaluation on benchmark data sets.
    Comment: Appears in Proceedings of the SIAM International Conference on Data Mining (SDM), 201
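
    A hedged sketch of the general idea (a Nystroem-style embedding is used here for concreteness; it is not necessarily one of the two embeddings proposed in the paper): each point is mapped, independently of the others, to a low-dimensional vector whose inner products approximate the kernel, after which plain k-means runs on the embedded points. The per-point independence of the map is what makes the embedding step easy to parallelize, e.g. over MapReduce.

```python
import numpy as np

# Hedged sketch: a Nystroem-style low-dimensional embedding (not necessarily
# one of the paper's two embeddings) followed by plain k-means on the embedded
# points. Each point is embedded independently given a fixed set of landmarks,
# which is what makes the map step parallelizable, e.g. over MapReduce.

def rbf(a, b, gamma=0.5):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * (d * d).sum(axis=-1))

def nystroem_embed(points, landmarks, gamma=0.5):
    w = rbf(landmarks, landmarks, gamma)                   # m x m kernel on landmarks
    evals, evecs = np.linalg.eigh(w)
    w_inv_sqrt = evecs @ np.diag(np.clip(evals, 1e-12, None) ** -0.5) @ evecs.T
    return rbf(points, landmarks, gamma) @ w_inv_sqrt      # n x m embedding

def kmeans(x, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
landmarks = data[rng.choice(len(data), 10, replace=False)]
labels = kmeans(nystroem_embed(data, landmarks), k=2)
print(labels[:50], labels[50:])   # the two blobs should end up in different clusters
```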

    Towards a Self-Healing approach to sustain Web Services Reliability.

    Web service technology expands the role of the Web from a simple data carrier to a service provider. To sustain this role, some issues, such as reliability, that continue to hinder the widespread use of Web services need to be addressed. Autonomic computing seems to offer solutions to the specific issue of reliability. These solutions let Web services self-heal in response to errors that are detected and then fixed. Self-healing is simply defined as the capacity of a system to restore itself to a normal state without human intervention. In this paper, we design and implement a self-healing approach to achieve Web service reliability. Two steps are identified in this approach: (1) model a Web service using two behaviors, known as operational and control; and (2) monitor the execution of a Web service using a control interface that sits between these two behaviors. This control interface is implemented in compliance with the principles of aspect-oriented programming and case-based reasoning.
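
    A hedged Python sketch of the control-interface idea (the operation and the case base are hypothetical; the paper's implementation relies on aspect-oriented programming and case-based reasoning rather than a decorator): an interceptor sits between the caller and the operational behavior, detects errors, and applies a recovery action retrieved from a small case base.

```python
import functools

# Hedged sketch of the control-interface idea with a hypothetical operation and
# case base; the paper implements it with aspect-oriented programming and
# case-based reasoning rather than this simplified decorator.

# Tiny "case base": maps an observed error type to a recovery action.
CASE_BASE = {
    TimeoutError: lambda op, *a, **kw: op(*a, **kw),   # case: retry once
    ValueError: lambda op, *a, **kw: None,             # case: degrade gracefully
}

def self_healing(operation):
    """Control interface sitting between the operational and control behaviors."""
    @functools.wraps(operation)
    def monitored(*args, **kwargs):
        try:
            return operation(*args, **kwargs)            # normal (operational) path
        except Exception as err:                         # error detected by the control behavior
            recovery = CASE_BASE.get(type(err))
            if recovery is None:
                raise                                    # no matching case: escalate
            return recovery(operation, *args, **kwargs)  # apply the retrieved case
    return monitored

@self_healing
def get_quote(symbol):                                   # hypothetical Web service operation
    if symbol == "???":
        raise ValueError("unknown symbol")
    return {"symbol": symbol, "price": 42.0}

print(get_quote("ACME"))   # normal execution
print(get_quote("???"))    # healed: degraded response instead of a fault
```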

    A contextual semantic mediator for a distributed cooperative maintenance platform.

    E-maintenance platforms have expanded maintenance systems from centralized systems into platforms integrating various cooperative distributed systems and maintenance applications. This evolution has enriched the services offered to maintenance actors by integrating more intelligent applications, providing decision support and facilitating access to the needed data. To manage this evolution, e-maintenance platforms must address a major challenge: ensuring interoperable communication between their integrated systems. By combining different techniques used in previous works, we propose in this work a semantic mediator system that ensures a high level of interoperability between the systems of the maintenance platform.

    A formal ontology for industrial maintenance

    The rapid advancement of information and communication technologies has resulted in a variety of maintenance support systems and tools covering all sub-domains of maintenance. Most of these systems are based on different models that are sometimes redundant or incoherent, and always heterogeneous. This problem has led to the development of maintenance platforms integrating all of these support systems. The main problem confronted by these integration platforms is providing semantic interoperability between the different applications within the same environment. To this end, we have developed an ontology for the field of industrial maintenance, called IMAMO (Industrial MAintenance Management Ontology), adopting the METHONTOLOGY approach to manage its life-cycle development. This ontology can be used not only to ensure semantic interoperability but also to generate new knowledge that supports decision making in the maintenance process. This paper presents and discusses tests that evaluate the ontology and show how it can ensure semantic interoperability and generate new knowledge within the platform.

    PETRA: Process Evolution using a TRAce-based system on a maintenance platform

    To meet increasing needs in the field of maintenance, we studied the dynamic aspect of processes and services on a maintenance platform, a major challenge in process mining and knowledge engineering. Hence, we propose a dynamic experience-feedback approach to exploit maintenance-process behaviors observed during real executions of the maintenance platform. An active learning process exploiting event logs is introduced, taking into account the dynamic aspect of knowledge through trace engineering. Our proposal makes explicit the underlying knowledge of platform users by means of a trace-based system called "PETRA". The goal of this system is to extract new knowledge rules about transitions and activities in maintenance processes from previous platform executions as well as its users' (i.e., maintenance operators') interactions. Following a Knowledge Traces Discovery process and handling the maintenance ontology IMAMO, "PETRA" is composed of three main subsystems: tracking, learning and knowledge capitalization. The capitalized rules are shared in the platform knowledge base in order to be reused in future process executions. The feasibility of this method is demonstrated through concrete use cases involving four maintenance processes and their simulation.
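
    A hedged sketch of the learning subsystem's basic ingredient (the traces and the support threshold are hypothetical; the actual PETRA system follows a Knowledge Traces Discovery process over the IMAMO ontology rather than a simple frequency count): mine activity transitions from recorded maintenance-process traces and keep the frequent ones as candidate rules to capitalize in the knowledge base.

```python
from collections import Counter

# Hedged sketch of the learning step with hypothetical traces and threshold;
# the actual PETRA system follows a Knowledge Traces Discovery process over
# the IMAMO ontology rather than this simple frequency count. The idea: mine
# activity transitions from recorded process traces and keep the frequent ones
# as candidate rules to capitalize in the knowledge base.

traces = [
    ["diagnose", "order_part", "replace_part", "test", "close"],
    ["diagnose", "replace_part", "test", "close"],
    ["diagnose", "order_part", "replace_part", "test", "close"],
]

transitions = Counter(t for trace in traces for t in zip(trace, trace[1:]))

# Keep transitions observed in at least two thirds of the traces as candidate rules.
min_support = 2 / 3
rules = [(a, b) for (a, b), c in transitions.items() if c / len(traces) >= min_support]
for a, b in rules:
    print(f"candidate rule: after '{a}', activity '{b}' usually follows")
```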