
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
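    Of the four ML families the abstract enumerates, reinforcement learning is the one most directly tied to "interactive decision making" in wireless settings. Below is a minimal, self-contained sketch of tabular Q-learning applied to a toy channel-selection task; the two-channel environment, its idle probabilities and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

# Toy environment: each slot the agent picks one of two channels; a channel is
# idle (reward 1) with a fixed probability, otherwise busy (reward 0).
# The idle probabilities are illustrative assumptions.
IDLE_PROB = [0.3, 0.8]

def sense(channel):
    """Reward 1 if the chosen channel happens to be idle this slot."""
    return 1.0 if random.random() < IDLE_PROB[channel] else 0.0

def q_learning(slots=5000, alpha=0.1, epsilon=0.1):
    """Stateless (bandit-style) tabular Q-learning: one Q-value per channel."""
    q = [0.0, 0.0]
    for _ in range(slots):
        # Epsilon-greedy exploration over the two channels.
        if random.random() < epsilon:
            a = random.randrange(len(q))
        else:
            a = max(range(len(q)), key=lambda i: q[i])
        r = sense(a)
        # Single-state Q-update: move the estimate toward the observed reward.
        q[a] += alpha * (r - q[a])
    return q

if __name__ == "__main__":
    print(q_learning())  # the second channel should end up with the higher Q-value
```

    In a full deep-reinforcement-learning treatment, the table would be replaced by a neural network and the action space by realistic resource-allocation decisions, but the update rule keeps the same shape.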

    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is slower than processing on a central computer (it entails a larger computational delay). However, while the nodes can process their data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to a central computer for processing, it incurs a communication delay. This leads to a fundamental communication-computation trade-off, where each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we provide a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors perform the same computation) that monitors a continuous-time scalar linear system, the optimal amount of local preprocessing maximizing the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multi-variate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a sensor subset when computational constraints make using all sensors suboptimal. Numerical simulations show that selecting the sensors is crucial. Moreover, we show that if the nodes apply the preprocessing policy suggested by our algorithms, they can substantially improve the network estimation performance. Comment: 15 pages, 16 figures. Accepted journal version
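    To make the trade-off concrete, here is a toy numerical sketch (not the paper's formulation): it sweeps the fraction of processing done locally and scores each choice by the prediction-error variance of a scalar linear system observed through the resulting end-to-end delay. The delay constants, the quadratic compression model and the error expression are all illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not from the paper):
C_LOCAL = 4.0     # time for one unit of processing at the node (slower CPU)
C_CENTRAL = 1.0   # time for one unit of processing at the central computer
T_COMM_RAW = 6.0  # time to ship completely raw data to the center

A, Q, R = 1.05, 0.2, 1.0  # scalar system x+ = A*x + w (var Q), measurement noise R

def end_to_end_delay(beta):
    """beta = fraction of the processing done locally before transmission.
    Local work adds beta*C_LOCAL; partially processed data is assumed to shrink
    quadratically with beta (a modeling assumption), so the communication term
    is T_COMM_RAW*(1-beta)**2; the remaining work is finished centrally."""
    return beta * C_LOCAL + (1.0 - beta) ** 2 * T_COMM_RAW + (1.0 - beta) * C_CENTRAL

def prediction_error(delay):
    """Variance of predicting the current state from a measurement that is
    `delay` time units old: the old uncertainty is amplified by A**(2*delay)
    and process noise accumulates over the delay (geometric-sum closed form)."""
    growth = A ** (2.0 * delay)
    return growth * R + Q * (growth - 1.0) / (A ** 2 - 1.0)

if __name__ == "__main__":
    betas = np.linspace(0.0, 1.0, 101)
    errors = [prediction_error(end_to_end_delay(b)) for b in betas]
    best = betas[int(np.argmin(errors))]
    print(f"best preprocessing fraction ~ {best:.2f}, error {min(errors):.3f}")
```

    Depending on the constants, the optimum sits at a boundary (all-local or all-central processing) or in the interior, which is exactly the kind of per-node decision the paper's algorithms automate.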

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Data centers are large-scale, energy-hungry infrastructures serving ever-increasing computational demands as the world becomes more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the internet of things (IoT) and big data analytics has accelerated the growth of global data centers, leading to high energy consumption. This upsurge in data-center energy consumption not only drives up operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center environment requires cognizance of the correlation between system- and hardware-level performance counters and the power consumption. Power consumption modeling captures this correlation and is crucial in designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these power models have been evaluated using different benchmarking applications, power measurement techniques and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
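    As a point of reference for what a "software-based power model" in such a taxonomy can look like, the sketch below fits the simplest common variant, a linear function of CPU utilization, by least squares and scores it with the mean absolute percentage error. The calibration samples are synthetic, and the model choice and error formula are assumptions used only for illustration.

```python
import numpy as np

# Synthetic calibration data (illustrative assumptions): CPU utilization in
# [0, 1] and the corresponding measured wall power in watts.
util = np.array([0.05, 0.20, 0.40, 0.60, 0.80, 0.95])
power_w = np.array([92.0, 110.0, 131.0, 155.0, 176.0, 191.0])

# Fit the classic linear utilization model  P(u) = P_idle + k * u
# by ordinary least squares.
X = np.column_stack([np.ones_like(util), util])
(p_idle, k), *_ = np.linalg.lstsq(X, power_w, rcond=None)

def predict_power(u):
    """Estimated server power (W) at CPU utilization u in [0, 1]."""
    return p_idle + k * u

# Mean absolute percentage error (MAPE), one common way to score such models.
pred = predict_power(util)
mape = 100.0 * np.mean(np.abs(pred - power_w) / power_w)
print(f"P_idle ~ {p_idle:.1f} W, slope ~ {k:.1f} W/unit utilization, MAPE {mape:.2f}%")
```

    More elaborate models add further counters (memory, disk, network activity) or use nonlinear regressors, which is precisely why a unified benchmarking environment is needed to compare them fairly.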

    Fast Power and Energy Efficiency Analysis of FPGA-based Wireless Base-band Processing

    Demands for high performance keep increasing in the wireless communication domain. This leads to a steady rise in complexity, and designing such systems has become a challenging task. In this context, energy efficiency is considered a key topic, especially for embedded systems whose design space is often very constrained. In this paper, a fast and accurate power estimation approach for FPGA-based hardware systems is applied to a typical wireless communication system. It aims at providing power estimates of complete systems prior to their implementation. This is made possible by using a dedicated library of high-level models that are representative of hardware IPs. Based on high-level simulations, design space exploration becomes much faster and easier. The definition of a scenario and the monitoring of each IP's time activity facilitate the comparison of several domain-specific systems. The proposed approach and its benefits are demonstrated through a typical use case in the wireless communication domain. Comment: Presented at HIP3ES, 201
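    The core idea, estimating system power before implementation from per-IP characterizations and activities monitored in a high-level simulation, can be sketched in a few lines. The IP names, power figures and activity rates below are illustrative assumptions, not values from the paper's library.

```python
# Minimal sketch of activity-based power estimation for an FPGA design:
# each IP block has a characterized static power and an energy cost per
# activation; a high-level simulation provides each block's activity rate.

IP_LIBRARY = {
    # ip name: (static power [W], dynamic energy per activation [J])
    "fft_64":      (0.020, 3.0e-9),
    "viterbi_dec": (0.035, 8.0e-9),
    "fir_filter":  (0.010, 1.5e-9),
}

# Activity rates (activations per second) monitored during a high-level
# simulation of one transmission scenario.
SCENARIO_ACTIVITY = {
    "fft_64": 2.0e6,
    "viterbi_dec": 0.5e6,
    "fir_filter": 8.0e6,
}

def estimate_power(library, activity):
    """Return (total power [W], per-IP breakdown) for the given scenario."""
    breakdown = {}
    for ip, (static_w, energy_per_act) in library.items():
        dynamic_w = energy_per_act * activity.get(ip, 0.0)
        breakdown[ip] = static_w + dynamic_w
    return sum(breakdown.values()), breakdown

if __name__ == "__main__":
    total, per_ip = estimate_power(IP_LIBRARY, SCENARIO_ACTIVITY)
    for ip, watts in per_ip.items():
        print(f"{ip:12s} {watts * 1e3:7.2f} mW")
    print(f"{'total':12s} {total * 1e3:7.2f} mW")
```

    Comparing two candidate architectures then amounts to re-running the same scenario with a different IP library or different monitored activities.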

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
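    A representative supervised-learning use case in this area is predicting a lightpath's quality of transmission from simple path features before it is provisioned. The sketch below fits a linear regressor on synthetic data; the feature set, coefficients and noise level are invented purely for illustration and are not taken from the survey.

```python
import numpy as np

# Synthetic lightpath dataset (illustrative assumptions only).
rng = np.random.default_rng(0)
n = 200
length_km = rng.uniform(80, 2000, n)      # total path length
n_spans = np.round(length_km / 80.0)      # amplifier spans (~80 km each)
n_hops = rng.integers(1, 10, n)           # traversed ROADM hops

# Synthetic "ground truth": quality degrades with spans and hops, plus noise.
osnr_db = 35.0 - 0.45 * n_spans - 0.8 * n_hops + rng.normal(0.0, 0.5, n)

# Fit a linear regressor by least squares on the first 150 samples,
# then evaluate on the held-out 50.
X = np.column_stack([np.ones(n), n_spans, n_hops])
train, test = slice(0, 150), slice(150, n)
coef, *_ = np.linalg.lstsq(X[train], osnr_db[train], rcond=None)

pred = X[test] @ coef
mae = np.mean(np.abs(pred - osnr_db[test]))
print(f"learned coefficients {np.round(coef, 2)}, held-out MAE {mae:.2f} dB")
```

    Real quality-of-transmission estimators use richer features and nonlinear models, but the workflow (collect monitored data, fit, validate on held-out lightpaths) is the same.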

    Byzantine Attack and Defense in Cognitive Radio Networks: A Survey

    The Byzantine attack in cooperative spectrum sensing (CSS), also known in the literature as the spectrum sensing data falsification (SSDF) attack, is one of the key adversaries to the success of cognitive radio networks (CRNs). In the past couple of years, research on Byzantine attack and defense strategies has gained increasing attention worldwide. In this paper, we provide a comprehensive survey and tutorial on the recent advances in Byzantine attack and defense for CSS in CRNs. Specifically, we first briefly present the preliminaries of CSS for general readers, including signal detection techniques, hypothesis testing, and data fusion. Second, we analyze the spear-and-shield relation between Byzantine attack and defense from three aspects: the vulnerability of CSS to attack, the obstacles in CSS to defense, and the games between attack and defense. Then, we propose a taxonomy of the existing Byzantine attack behaviors and elaborate on the corresponding attack parameters, which determine where, who, how, and when to launch attacks. Next, from the perspective of homogeneous or heterogeneous scenarios, we classify the existing defense algorithms and provide an in-depth tutorial on the state-of-the-art Byzantine defense schemes, commonly known in the literature as robust or secure CSS. Furthermore, we highlight the unsolved research challenges and outline future research directions. Comment: Accepted by IEEE Communications Surveys and Tutorials
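    To illustrate why data fusion is the natural battleground for this attack, here is a toy simulation (not from the survey) of hard-decision fusion under always-flipping SSDF attackers, comparing a plain majority vote with a simple reputation-weighted defense. The sensor counts, detection probability and attack model are illustrative assumptions.

```python
import random

# Illustrative assumptions: 12 honest sensors, 6 Byzantine sensors that always
# flip their one-bit report, and honest sensors that report the true channel
# state with probability 0.9.
N_HONEST, N_BYZANTINE = 12, 6
P_CORRECT = 0.9
ROUNDS = 2000

def local_reports(truth):
    """One-bit reports: honest sensors err occasionally, attackers always flip."""
    honest = [truth if random.random() < P_CORRECT else 1 - truth
              for _ in range(N_HONEST)]
    byzantine = [1 - truth for _ in range(N_BYZANTINE)]
    return honest + byzantine

def fuse(reports, weights):
    """Weighted majority vote over one-bit reports."""
    score = sum(w if r == 1 else -w for r, w in zip(reports, weights))
    return 1 if score > 0 else 0

def run(defended):
    n = N_HONEST + N_BYZANTINE
    reputation = [1.0] * n
    correct = 0
    for _ in range(ROUNDS):
        truth = random.randint(0, 1)
        reports = local_reports(truth)
        weights = reputation if defended else [1.0] * n
        decision = fuse(reports, weights)
        correct += int(decision == truth)
        if defended:
            # Simple reputation defense: reward agreement with the fused
            # decision, penalize disagreement, never drop below zero.
            reputation = [max(0.0, rep + (0.1 if r == decision else -0.1))
                          for r, rep in zip(reports, reputation)]
    return correct / ROUNDS

if __name__ == "__main__":
    print(f"plain majority fusion accuracy: {run(defended=False):.3f}")
    print(f"reputation-weighted fusion:     {run(defended=True):.3f}")
```

    Smarter (probabilistic or colluding) attackers and more principled defenses are exactly what the survey's taxonomy and tutorial cover.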