13,422 research outputs found

    Can Hardware Distortion Correlation be Neglected When Analyzing Uplink SE in Massive MIMO?

    Get PDF
    This paper analyzes how the distortion created by hardware impairments in a multiple-antenna base station affects the uplink spectral efficiency (SE), with focus on Massive MIMO. The distortion is correlated across the antennas, but it has often been approximated as uncorrelated to facilitate a tractable SE analysis. To determine when this approximation is accurate, basic properties of the distortion correlation are first uncovered. Then, we focus on third-order non-linearities and prove, analytically and numerically, that the correlation can be neglected in the SE analysis when there are many users. In i.i.d. Rayleigh fading with equal signal-to-noise ratios, this already occurs with five users.
    Comment: 5 pages, 3 figures, IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2018
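    The claim that cross-antenna distortion correlation washes out as the number of users grows can be illustrated numerically. Below is a minimal Monte Carlo sketch assuming a memoryless third-order non-linearity y = u + a|u|^2 u per antenna and i.i.d. Rayleigh fading; the model, the coefficient a, and the sample sizes are illustrative assumptions, not the paper's exact setup.

```python
# Minimal Monte Carlo sketch (illustrative assumptions, not the paper's exact
# model): estimate how the cross-antenna correlation of third-order distortion
# behaves as the number of users K grows, under i.i.d. Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)

def normalized_distortion_correlation(M=8, K=1, a=0.05, n_samples=100_000):
    """Average off-diagonal magnitude of the empirical distortion correlation
    matrix, normalized by the average distortion power per antenna."""
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    s = (rng.standard_normal((K, n_samples)) + 1j * rng.standard_normal((K, n_samples))) / np.sqrt(2)
    u = H @ s                                      # noise-free received signal at each antenna
    p = np.mean(np.abs(u) ** 2, axis=1, keepdims=True)
    d = a * np.abs(u) ** 2 * u - 2 * a * p * u     # third-order distortion, Bussgang linear part removed
    C = (d @ d.conj().T) / n_samples               # empirical distortion correlation matrix
    off_diag = np.abs(C[~np.eye(M, dtype=bool)]).mean()
    return off_diag / np.abs(np.diag(C)).mean()

for K in (1, 5, 20):
    print(f"K = {K:2d}: normalized cross-antenna distortion correlation ≈ "
          f"{normalized_distortion_correlation(K=K):.3f}")
```

    In this toy setup the off-diagonal entries of the empirical distortion correlation matrix should shrink relative to the diagonal as K grows, which is the qualitative effect the paper analyzes.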

    RF measurements I: signal receiving techniques

    Full text link
    For the characterization of components, systems and signals in the RF and microwave range, several dedicated instruments are in use. In this paper the fundamentals of the RF-signal sampling technique, which has found widespread application in 'digital' oscilloscopes and sampling scopes, are discussed. The key element in these front-ends is the Schottky diode, which can be used either as an RF mixer or as a single sampler. The spectrum analyser has become an absolutely indispensable tool for RF signal analysis. Here the front-end is again the RF mixer, while the RF section of modern spectrum analysers has a rather complex architecture. The reasons for this complexity, certain working principles, and limitations are discussed. In addition, an overview of the development of scalar and vector signal analysers is given. For the determination of the noise temperature of a one-port and the noise figure of a two-port, basic concepts and relations are shown. A brief discussion of commonly used noise measurement techniques concludes the paper.
    Comment: 24 pages, contribution to the CAS - CERN Accelerator School: Specialised Course on RF for Accelerators; 8 - 17 Jun 2010, Ebeltoft, Denmark
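    The noise-characterization part of the paper rests on two standard relations: the link between noise figure and equivalent noise temperature, F = 1 + Te/T0 with T0 = 290 K, and the Friis formula for cascaded two-ports. The short sketch below evaluates both numerically; the LNA and mixer figures are made-up illustrative values, not measurements from the paper.

```python
# Numeric sketch of two standard relations touched on in the paper: noise
# figure vs. equivalent noise temperature (F = 1 + Te/T0, T0 = 290 K) and the
# Friis formula for cascaded two-ports. The stage values are made-up
# illustrative numbers, not measurements from the paper.
import math

T0 = 290.0                        # standard reference temperature (kelvin)
db = lambda x: 10 * math.log10(x)
lin = lambda x_db: 10 ** (x_db / 10)

def noise_factor_from_temperature(Te):
    """Linear noise factor of a two-port with equivalent noise temperature Te (K)."""
    return 1.0 + Te / T0

def friis_cascade(stages):
    """Total linear noise factor of cascaded stages given as (noise_factor, gain) pairs."""
    F_total, running_gain = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        F_total += F if i == 0 else (F - 1.0) / running_gain
        running_gain *= G
    return F_total

# Hypothetical front-end: LNA (NF 1 dB, gain 20 dB) followed by a mixer (NF 8 dB, conversion gain -7 dB).
chain = [(lin(1.0), lin(20.0)), (lin(8.0), lin(-7.0))]
F = friis_cascade(chain)
print(f"cascade noise figure ≈ {db(F):.2f} dB, equivalent Te ≈ {(F - 1.0) * T0:.0f} K")
print(f"a 50 K amplifier alone has NF ≈ {db(noise_factor_from_temperature(50.0)):.2f} dB")
```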

    Green Cellular Networks: A Survey, Some Research Issues and Challenges

    Full text link
    Energy efficiency in cellular networks is a growing concern for cellular operators, not only to maintain profitability but also to reduce the overall environmental impact. This emerging trend of achieving energy efficiency in cellular networks is motivating standardization bodies and network operators to continuously explore future technologies that bring improvements across the entire network infrastructure. In this article, we present a brief survey of methods to improve the power efficiency of cellular networks, explore some research issues and challenges, and suggest some techniques to enable an energy-efficient or "green" cellular network. Since base stations consume the largest portion of the total energy used in a cellular system, we first provide a comprehensive survey of techniques to obtain energy savings in base stations. Next, we discuss how heterogeneous network deployment based on micro, pico and femto-cells can be used to achieve this goal. Since cognitive radio and cooperative relaying are widely regarded as key future technologies in this regard, we propose a research vision to make these technologies more energy efficient. Lastly, we explore some broader perspectives in realizing a "green" cellular network technology.
    Comment: 16 pages, 5 figures, 2 tables
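    As a back-of-the-envelope illustration of why base-station techniques come first in the survey, the sketch below applies a simple linear load-dependent power model with a sleep mode during idle hours. The model form and every number in it are assumed illustrative values, not figures from the article.

```python
# Back-of-the-envelope sketch of base-station energy accounting. The linear
# load-dependent power model and every number below are assumed illustrative
# values, not figures from the article.

def bs_input_power(load, p_static=130.0, delta_p=4.7, p_tx_max=20.0):
    """Input power (W) of one transceiver chain: static part plus a load-proportional RF part."""
    return p_static + delta_p * load * p_tx_max    # load in [0, 1]

def daily_energy_kwh(hourly_load, sleep_power=None):
    """Energy over a 24-hour load profile; optionally drop to a sleep mode when the cell is idle."""
    total_wh = 0.0
    for load in hourly_load:
        if load == 0.0 and sleep_power is not None:
            total_wh += sleep_power                # deep-sleep consumption for that hour
        else:
            total_wh += bs_input_power(load)
    return total_wh / 1000.0

# Hypothetical daily load profile (one value per hour): idle at night, busy during the day.
profile = [0.0] * 5 + [0.2, 0.4, 0.6, 0.8, 0.9, 0.9, 0.8] + [0.7] * 7 + [0.4, 0.3, 0.2, 0.1, 0.0]

print(f"always-on : {daily_energy_kwh(profile):.2f} kWh/day")
print(f"with sleep: {daily_energy_kwh(profile, sleep_power=15.0):.2f} kWh/day")
```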

    BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore Architectures

    Full text link
    We introduce BriskStream, an in-memory data stream processing system (DSPS) specifically designed for modern shared-memory multicore architectures. BriskStream's key contribution is an execution plan optimization paradigm, namely RLAS, which takes the relative location (i.e., NUMA distance) of each pair of producer-consumer operators into consideration. We propose a branch-and-bound based approach with three heuristics to resolve the resulting non-trivial optimization problem. The experimental evaluations demonstrate that BriskStream yields much higher throughput and better scalability than existing DSPSs on multicore architectures when processing different types of workloads.
    Comment: To appear in SIGMOD'19
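    To give a feel for the kind of optimization the abstract describes, here is a deliberately simplified sketch: operators are assigned to NUMA sockets so that traffic-weighted NUMA distance is minimized, with a basic branch-and-bound prune. This is not BriskStream's actual RLAS algorithm or cost model; the operators, tuple rates, distance matrix, and capacity are toy assumptions.

```python
# Deliberately simplified sketch of NUMA-aware operator placement with branch
# and bound. This is NOT BriskStream's actual RLAS algorithm or cost model;
# the operators, tuple rates, distance matrix, and capacity are toy assumptions.

edges = {("src", "parse"): 100, ("parse", "join"): 80, ("join", "sink"): 60}  # tuples/s between operator pairs
operators = ["src", "parse", "join", "sink"]
numa_distance = [[10, 21], [21, 10]]    # e.g. relative local vs. remote memory-access cost
capacity = 2                            # operators that fit on one socket

def plan_cost(assign):
    """Traffic-weighted NUMA distance over all producer-consumer pairs placed so far."""
    return sum(rate * numa_distance[assign[a]][assign[b]]
               for (a, b), rate in edges.items() if a in assign and b in assign)

best = {"cost": float("inf"), "plan": None}

def branch(i, assign, used):
    if plan_cost(assign) >= best["cost"]:          # bound: the partial cost can only grow
        return
    if i == len(operators):                        # complete plan: record it
        best["cost"], best["plan"] = plan_cost(assign), dict(assign)
        return
    op = operators[i]
    for socket in range(len(numa_distance)):       # branch: try each socket with free capacity
        if used[socket] < capacity:
            assign[op] = socket
            used[socket] += 1
            branch(i + 1, assign, used)
            used[socket] -= 1
            del assign[op]

branch(0, {}, [0] * len(numa_distance))
print(best)   # in this toy instance the heaviest edge (src -> parse) stays socket-local
```

    A real optimizer would add heuristics to order the branching and tighten the bound, which is where the three heuristics mentioned in the abstract come in.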

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Get PDF
    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) that are enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
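    As a concrete flavour of the ML tasks surveyed, the sketch below trains a small classifier to predict whether a candidate lightpath meets a quality-of-transmission threshold from simple path descriptors. The data are synthetic and the features, labels, and threshold are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch of an ML task of the kind surveyed in the paper: predicting
# whether a candidate lightpath meets a quality-of-transmission threshold from
# simple path descriptors. All data below are synthetic illustrative values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Hypothetical features: total length (km), number of spans, modulation order (bits/symbol).
length = rng.uniform(50, 2500, n)
spans = np.ceil(length / 80) + rng.integers(0, 3, n)
mod_order = rng.choice([2, 3, 4, 6], n)            # e.g. QPSK up to 64-QAM
X = np.column_stack([length, spans, mod_order])

# Synthetic label: longer paths and denser modulation formats are harder to receive correctly.
margin = 18 - 0.006 * length - 0.2 * spans - 1.5 * mod_order + rng.normal(0, 1.0, n)
y = (margin > 0).astype(int)                       # 1 = acceptable quality of transmission

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```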