
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    Comment: 46 pages, 22 figures

    A decentralized motion coordination strategy for dynamic target tracking

    This paper presents a decentralized motion planning algorithm for the distributed sensing of a noisy dynamical process by multiple cooperating mobile sensor agents. The problem is motivated by the localization and tracking of dynamic targets. Our gradient-descent method is based on a cost function that measures the overall quality of sensing. We also investigate the role of imperfect communication between sensor agents in this framework, and examine the trade-offs in performance between sensing and communication. Simulations illustrate the basic characteristics of the algorithms.
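
    The abstract describes a gradient-descent coordination step on a sensing-quality cost. Below is a minimal sketch of that idea; purely for illustration it assumes the cost is the sum of squared agent-to-target distances (not necessarily the cost function used in the paper), and each agent updates its position using only its own local gradient term, without a central coordinator.

        import numpy as np

        def sensing_cost(agents, target):
            """Illustrative cost: sum of squared distances from the agents to the target estimate."""
            return np.sum(np.linalg.norm(agents - target, axis=1) ** 2)

        def decentralized_step(agents, target, step_size=0.1):
            """Each agent descends the gradient of its own cost term only."""
            grads = 2.0 * (agents - target)        # d/dx_i ||x_i - target||^2
            return agents - step_size * grads

        agents = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])   # sensor positions
        target = np.array([2.0, 2.0])                             # noisy target estimate
        for _ in range(20):
            agents = decentralized_step(agents, target)
        print(sensing_cost(agents, target))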

    Content Sharing in Mobile Networks with Infrastructure: Planning and Management

    This thesis focuses on mobile ad-hoc networks (with pedestrian or vehicular mobility) that have infrastructure support. We deal with the problems of designing, deploying and managing such networks. A first issue concerns the infrastructure itself: how pervasive should it be for the network to operate both efficiently and cost-effectively? How should the units composing it (e.g., access points) be placed? There are several approaches to such questions in the literature, and this thesis studies and compares them. Furthermore, in order to design the infrastructure effectively, we need to understand how, and how much, it will be used. For example, what is the relationship between infrastructure-to-node and node-to-node communication? How far, in time and space, do data travel before reaching their destination? A common assumption made when dealing with such problems is that perfect knowledge of current and future node mobility is available. In this thesis, we also assess the impact that imperfect, limited knowledge has on network performance. As far as network management is concerned, this thesis presents a variant of the paradigm known as publish-and-subscribe. With respect to the original paradigm, our goal is to ensure a high probability of finding the requested content, even in the presence of selfish, uncooperative nodes, or nodes whose precise goal is to harm the system. Each node is allowed to obtain from the network an amount of content corresponding to the amount of content it has provided to other nodes. Nodes with caching capabilities are assisted in using their cache so as to increase the amount of content they can offer.
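
    The give-to-get fairness rule mentioned in the last sentences can be illustrated with a simple credit ledger. The class and method names below are hypothetical, not taken from the thesis; this is only a minimal sketch of the idea that a node may fetch roughly as much content as it has served to others.

        class NodeLedger:
            """Hypothetical per-node accounting: downloads are bounded by prior uploads."""

            def __init__(self, initial_credit=1):
                self.credit = initial_credit      # small bootstrap allowance for new nodes

            def record_upload(self, size):
                """Serving content to peers earns credit."""
                self.credit += size

            def can_download(self, size):
                return self.credit >= size

            def record_download(self, size):
                if not self.can_download(size):
                    raise ValueError("insufficient credit")
                self.credit -= size

        ledger = NodeLedger()
        ledger.record_upload(3)
        ledger.record_download(2)                 # allowed: credit 1 + 3 >= 2
        print(ledger.credit)                      # prints 2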

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions, based on the network-generated data, pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Resource-aware IoT Control: Saving Communication through Predictive Triggering

    The Internet of Things (IoT) interconnects multiple physical devices in large-scale networks. When the 'things' coordinate decisions and act collectively on shared information, feedback is introduced between them. Multiple feedback loops are thus closed over a shared, general-purpose network. Traditional feedback control is unsuitable for the design of IoT control because it relies on high-rate periodic communication and is ignorant of the shared network resource. Therefore, recent event-based estimation methods are applied herein for resource-aware IoT control, allowing agents to decide online whether or not communication with other agents is needed. While this can reduce network traffic significantly, a severe limitation of typical event-based approaches is the need for instantaneous triggering decisions, which leave no time to reallocate freed resources (e.g., communication slots), so these remain unused. To address this problem, novel predictive and self-triggering protocols are proposed herein. From a unified Bayesian decision framework, two schemes are developed: self-triggers, which predict, at the current triggering instant, when the next one will occur; and predictive triggers, which check at every time step whether communication will be needed at a given prediction horizon. The suitability of these triggers for feedback control is demonstrated in hardware experiments on a cart-pole system, and scalability is discussed with a multi-vehicle simulation.

    Comment: 16 pages, 15 figures, accepted article to appear in IEEE Internet of Things Journal. arXiv admin note: text overlap with arXiv:1609.0753
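
    A minimal sketch of the predictive-trigger idea follows. It assumes a scalar linear process whose estimation-error variance grows open-loop between transmissions; the model parameters and threshold are hypothetical and are not taken from the paper, which develops the triggers in a Bayesian decision framework.

        def predicted_variance(p, a, q, horizon):
            """Propagate the error variance 'horizon' steps ahead with no measurement updates."""
            for _ in range(horizon):
                p = a * p * a + q                 # open-loop variance recursion for x_{k+1} = a*x_k + w_k
            return p

        def predictive_trigger(p_now, a=1.0, q=0.1, horizon=5, threshold=0.5):
            """Announce, 'horizon' steps in advance, whether a communication slot will be needed."""
            return predicted_variance(p_now, a, q, horizon) > threshold

        print(predictive_trigger(p_now=0.1))      # True: 0.1 + 5*0.1 = 0.6 exceeds the 0.5 bound

    Announcing the decision a horizon ahead is what allows the freed communication slots to be reallocated instead of remaining unused.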

    Fade Depth Prediction Using Human Presence for Real Life WSN Deployment

    A current problem in real-life WSN deployment is determining the fade depth in an indoor propagation scenario for link power budget analysis via the fade margin parameter. Since human presence impacts the performance of wireless networks, this paper proposes a statistical approach to shadow-fading prediction based on various real-life parameters. The parameters considered in this paper include statistically mapped human presence and the number of people over time, compared against the received signal strength. The paper proposes an empirical fade depth prediction model derived from a comprehensive set of data measured in an indoor propagation scenario. It is shown that the measured fade depth is highly correlated with the number of people under non-line-of-sight conditions, providing a solid foundation for the fade depth prediction model; under line-of-sight conditions this correlation is significantly lower. By using the proposed model in real-life WSN deployment scenarios, data loss and power consumption can be reduced by means of intelligent planning and design of the Wireless Sensor Network.
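
    Below is a minimal sketch of such an empirical predictor. The occupancy and fade-depth numbers are invented for illustration, and the linear fit is only one plausible model form, not the model actually derived from the paper's measurement campaign.

        import numpy as np

        people  = np.array([0, 2, 4, 6, 8, 10])                   # hypothetical occupancy samples (NLOS)
        fade_db = np.array([3.1, 4.0, 5.2, 6.1, 7.3, 8.0])        # hypothetical measured fade depth [dB]

        slope, intercept = np.polyfit(people, fade_db, deg=1)     # least-squares linear fit

        def predict_fade_margin(n_people):
            """Predicted fade depth, usable as the fade-margin term in a link power budget."""
            return slope * n_people + intercept

        print(round(predict_fade_margin(5), 2))                   # predicted fade depth for 5 people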

    Channel Prediction with Location Uncertainty for Ad-Hoc Networks

    Multi-agent systems (MAS) rely on positioning technologies to determine their physical location, and on wireless communication technologies to exchange information. Both positioning and communication are affected by uncertainties, which should be accounted for. This paper considers an agent placement problem to optimize end-to-end communication quality in a MAS in the presence of such uncertainties. Using Gaussian processes (GPs) operating on an input space of location distributions, we are able to model, learn, and predict the wireless channel. Predictions, in the form of distributions, are fed into the communication optimization problem. This approach inherently avoids regions of the workspace with high position uncertainty and leads to better average communication performance. We illustrate the benefits of our approach via extensive simulations based on real wireless channel measurements. Finally, we demonstrate the improved channel learning and prediction performance, as well as the increased robustness in agent placement.
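
    A minimal sketch of GP channel prediction under location uncertainty is shown below. It uses scikit-learn's standard GP regressor and a Monte Carlo average over the query position's distribution, which is a simpler stand-in for the uncertain-input GP formulation described in the paper; the synthetic path-loss data are invented for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        train_pos = rng.uniform(0, 10, size=(50, 2))                   # known measurement locations
        train_rss = -40.0 - 2.0 * np.linalg.norm(train_pos, axis=1)    # synthetic received signal strength [dBm]

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1.0)
        gp.fit(train_pos, train_rss)

        # Query position known only as a Gaussian: mean location plus covariance (position uncertainty).
        mean_pos, cov_pos = np.array([5.0, 5.0]), 0.5 * np.eye(2)
        samples = rng.multivariate_normal(mean_pos, cov_pos, size=200)
        mu, std = gp.predict(samples, return_std=True)

        # Law of total variance: channel-model spread plus spread induced by location uncertainty.
        pred_mean = mu.mean()
        pred_var = np.mean(std ** 2) + np.var(mu)
        print(pred_mean, pred_var)

    The second variance term grows where the channel changes quickly across uncertain positions, which is why the placement optimization naturally avoids such regions.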