4,197 research outputs found

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to help readers clarify the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures
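
    As a concrete illustration of the reinforcement-learning branch this survey covers, the sketch below applies tabular Q-learning, in its stateless bandit special case, to a toy cognitive-radio channel-selection task. The environment, idle probabilities, reward, and all parameter names are illustrative assumptions, not the article's own formulation.

        import random

        # Toy cognitive-radio setting: each slot the agent picks one of N channels;
        # each channel is idle with a fixed probability unknown to the agent.
        N_CHANNELS = 4
        IDLE_PROB = [0.2, 0.5, 0.8, 0.35]   # assumed ground truth, hidden from the agent

        ALPHA, EPSILON, SLOTS = 0.1, 0.1, 5000
        q = [0.0] * N_CHANNELS              # single state, so Q reduces to per-channel values

        for _ in range(SLOTS):
            # epsilon-greedy exploration
            if random.random() < EPSILON:
                a = random.randrange(N_CHANNELS)
            else:
                a = max(range(N_CHANNELS), key=lambda c: q[c])
            # reward: 1 for a successful transmission on an idle channel, else 0
            r = 1.0 if random.random() < IDLE_PROB[a] else 0.0
            # Q-learning update (the discount plays no role in the one-state case)
            q[a] += ALPHA * (r - q[a])

        print("Learned channel values:", [round(v, 2) for v in q])

    With enough slots, the learned values approach the idle probabilities, so the greedy action converges on the best channel; the same update generalizes to multi-state problems such as handover or power control.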

    Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Time series forecasting is an important predictive methodology that can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To this end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous, intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An online learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with every new datum that arrives in the system, without saving enormous quantities of data to build a historical database as is usual, i.e., without prior knowledge. To validate the approach, a simulation study using a Bayesian baseline model was run against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources.
    Comment: 28 pages, published 21 April 2015 in MDPI's journal "Sensors"
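
    A minimal sketch of the per-sample (online) back-propagation update described above, here for one-step-ahead temperature forecasting with a single hidden layer. The window length, layer sizes, learning rate, and the synthetic series are assumptions for illustration; the paper's actual algorithm targets an 8051-class MCU and differs in its details.

        import math, random

        # One-step-ahead forecaster: predict x[t] from the previous WINDOW readings.
        WINDOW, HIDDEN, LR = 4, 6, 0.05
        w1 = [[random.uniform(-0.5, 0.5) for _ in range(WINDOW)] for _ in range(HIDDEN)]
        b1 = [0.0] * HIDDEN
        w2 = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN)]
        b2 = 0.0

        def forward(x):
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(w1, b1)]
            return h, sum(w * hi for w, hi in zip(w2, h)) + b2

        def online_step(x, target):
            """One BP update from a single new sample -- no stored history needed."""
            global b2
            h, y = forward(x)
            err = y - target                              # d(0.5*err^2)/dy
            for j in range(HIDDEN):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # tanh'(z) = 1 - tanh(z)^2
                w2[j] -= LR * err * h[j]
                for i in range(WINDOW):
                    w1[j][i] -= LR * grad_h * x[i]
                b1[j] -= LR * grad_h
            b2 -= LR * err

        # Synthetic indoor-temperature-like series: daily sinusoid plus noise.
        series = [21 + 2 * math.sin(2 * math.pi * t / 96) + random.gauss(0, 0.1)
                  for t in range(2000)]
        for t in range(WINDOW, len(series)):
            online_step(series[t - WINDOW:t], series[t])

    Because each sample is consumed once and then discarded, memory use stays constant, which is what makes this style of training plausible on a node with a few kilobytes of RAM.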

    On Distributed Linear Estimation With Observation Model Uncertainties

    We consider distributed estimation of a Gaussian source in a heterogeneous, bandwidth-constrained sensor network, where the source is corrupted by independent multiplicative and additive observation noises, with incomplete statistical knowledge of the multiplicative noise. For multi-bit quantizers, we derive the closed-form mean-square-error (MSE) expression for the linear minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free and erroneous communication channels, we propose several rate-allocation methods, named longest root-to-leaf path, greedy, and integer relaxation, to (i) minimize the MSE given a network bandwidth constraint, and (ii) minimize the required network bandwidth given a target MSE. We also derive the Bayesian Cramer-Rao lower bound (CRLB) and compare the MSE performance of our proposed methods against the CRLB. Our results corroborate that, for low-power multiplicative observation noises and adequate network bandwidth, the gaps between the MSE of our proposed methods and the CRLB are negligible, while the performance of other methods, such as individual and uniform rate allocation, is not satisfactory.
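
    Of the rate-allocation schemes named above, the greedy one admits a compact sketch: grant bits one at a time to whichever sensor yields the largest drop in total MSE. The per-sensor distortion model below is a generic surrogate (a noise floor plus quantization distortion decaying as 4^-b), standing in for the paper's closed-form LMMSE expression; the sensor parameters are assumptions.

        import heapq

        def mse(sensor, b):
            """Surrogate distortion of one sensor quantized with b bits."""
            floor, scale = sensor
            return floor + scale * 4.0 ** (-b)

        def greedy_allocation(sensors, budget):
            """Assign `budget` bits, one at a time, to the sensor with the largest MSE reduction."""
            bits = [0] * len(sensors)
            # max-heap keyed on the (negated) gain of granting one more bit
            heap = [(-(mse(s, 0) - mse(s, 1)), i) for i, s in enumerate(sensors)]
            heapq.heapify(heap)
            for _ in range(budget):
                _, i = heapq.heappop(heap)
                bits[i] += 1
                gain = mse(sensors[i], bits[i]) - mse(sensors[i], bits[i] + 1)
                heapq.heappush(heap, (-gain, i))
            return bits

        sensors = [(0.02, 1.0), (0.05, 0.6), (0.01, 1.5)]  # (floor, scale) pairs, assumed
        print(greedy_allocation(sensors, budget=12))       # bits granted per sensor

    Each sensor keeps exactly one up-to-date entry in the heap, and because its marginal gains shrink as bits accumulate (the surrogate is convex and decreasing in b), the greedy choice is optimal for this separable model.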

    Cramer-Rao bounds in the estimation of time of arrival in fading channels

    This paper computes the Cramer-Rao bounds for time-of-arrival estimation in a multipath Rician and Rayleigh fading scenario, conditioned on the prior estimation of a set of propagation channels, since these channel estimates (correlations between the received signal and the pilot sequence) are sufficient statistics for the estimation of delays. Furthermore, channel estimation is a constitutive block in receivers, so we can take advantage of this information to improve timing estimation by using time and space diversity. The received signal is modeled as coming from a scattering environment that disperses the signal both in space and time. Spatial scattering is modeled with a Gaussian distribution and temporal dispersion as an exponential random variable. The impact of the sampling rate, the roll-off factor, the spatial and temporal correlation among channel estimates, the number of channel estimates, and the use of multiple sensors in the receive antenna is studied and related to the problem of mobile subscriber positioning. To our knowledge, this is the only model of its kind relating space-time diversity to the accuracy of timing estimation.
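
    For reference, the single-path AWGN baseline that any such bound generalizes: the Cramer-Rao bound for time-of-arrival ties the achievable variance to the SNR and the effective (root-mean-square) bandwidth of the pulse. The form below is the standard textbook result with SNR taken as E/N0, not the paper's fading- and diversity-conditioned expression.

        \operatorname{var}(\hat{\tau}) \;\ge\; \frac{1}{8\pi^{2}\,\beta^{2}\,\mathrm{SNR}},
        \qquad
        \beta^{2} \;=\; \frac{\int_{-\infty}^{\infty} f^{2}\,\lvert S(f)\rvert^{2}\,df}
                             {\int_{-\infty}^{\infty} \lvert S(f)\rvert^{2}\,df}

    Here S(f) is the pulse spectrum; a wider effective bandwidth tightens the bound, which is why pulse-shaping choices such as the roll-off factor affect the achievable timing accuracy.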