
    Sensor network scheduling algorithm considering estimation error variance and communication energy

    Kanazawa University (College of Science and Engineering), Keio University
    This paper deals with a sensor scheduling algorithm that considers estimation error variance and communication energy in sensor-networked feedback systems. We propose a novel decentralized estimation algorithm with unknown inputs at each sensor node. Most existing works treat sensor networks purely as sensing systems, which makes them difficult to apply to real physical feedback control systems. In the proposed method, local estimates are merged, the merged estimates can be optimized, and the estimation error covariance has a unique positive-definite solution under some assumptions. Next, we propose a novel sensor scheduling algorithm in which each sensor node transmits information. A sensor node consumes energy when communicating with other sensor nodes or with the plant. The proposed algorithm achieves a sub-optimal network topology with minimum energy and a desired variance. Experimental results show the effectiveness of the proposed method. © 2010 IEEE
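    The abstract does not spell out how the local estimates are merged; a standard choice for fusing independent local estimates with known error covariances is inverse-covariance (information) fusion, sketched below purely as an illustrative assumption, not the paper's actual merge rule:

```python
# Illustrative inverse-covariance (information) fusion of local estimates.
# This is a standard fusion rule assumed for illustration, not necessarily
# the merge step the paper proposes.
import numpy as np

def fuse_estimates(estimates, covariances):
    """Fuse independent local estimates x_i with error covariances P_i:
    P = (sum_i P_i^{-1})^{-1},  x_hat = P @ sum_i (P_i^{-1} @ x_i)."""
    info = sum(np.linalg.inv(P) for P in covariances)
    P = np.linalg.inv(info)
    x = P @ sum(np.linalg.inv(Pi) @ xi for xi, Pi in zip(estimates, covariances))
    return x, P

# Two local estimates of a 2-D state: each sensor is confident in one axis.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.0, 1.0]), np.diag([4.0, 1.0])
x, P = fuse_estimates([x1, x2], [P1, P2])
print(x, np.diag(P))  # fused variance is below either local variance
```

    Each sensor dominates on the axis where its variance is small, and the fused covariance is smaller than either local covariance, which is the property such a merge is expected to preserve.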

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures

    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is slower than processing on a central computer (it entails a larger computational delay). However, while nodes can process data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to a central computer for processing, it incurs communication delay. This leads to a fundamental communication-computation trade-off, where each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we provide a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors have the same computational capability) that monitors a continuous-time scalar linear system, the optimal amount of local preprocessing maximizing the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multivariate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a sensor subset when computational constraints make using all sensors suboptimal. Numerical simulations show that selecting the sensors is crucial. Moreover, we show that if the nodes apply the preprocessing policy suggested by our algorithms, they can largely improve the network estimation performance.
    Comment: 15 pages, 16 figures. Accepted journal version
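    The communication-computation trade-off can be illustrated with a toy delay model (our own simplification, not the paper's formulation): local preprocessing delay grows with the fraction f of work done at the node, while communication delay shrinks as preprocessing compresses the data, so minimizing total delay over f yields an interior optimum:

```python
# Toy model of the communication-computation trade-off (a simplification for
# illustration, not the paper's formulation). A node preprocesses a fraction
# f of its data locally: local delay grows linearly in f, while communication
# delay decays as preprocessing compresses the payload.
import numpy as np

def total_delay(f, t_local=2.0, t_comm=5.0, alpha=3.0):
    local = t_local * f                  # local preprocessing delay
    comm = t_comm * np.exp(-alpha * f)   # shipping less-processed data costs more
    return local + comm

fs = np.linspace(0.0, 1.0, 101)
best_f = fs[np.argmin([total_delay(f) for f in fs])]
# Interior optimum: some preprocessing helps, but full preprocessing does not.
print(best_f)
```

    The qualitative point matches the abstract: neither sending raw data (f = 0) nor fully preprocessing everything locally (f = 1) minimizes the end-to-end delay under these assumed cost curves.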

    Decentralized Estimation over Orthogonal Multiple-access Fading Channels in Wireless Sensor Networks - Optimal and Suboptimal Estimators

    Optimal and suboptimal decentralized estimators in wireless sensor networks (WSNs) over orthogonal multiple-access fading channels are studied in this paper. Considering multiple-bit quantization before digital transmission, we develop maximum likelihood estimators (MLEs) with both known and unknown channel state information (CSI). When training symbols are available, we derive an MLE that is a special case of the MLE with unknown CSI; it implicitly uses the training symbols to estimate the channel coefficients and exploits the estimated CSI in an optimal way. To reduce the computational complexity, we propose suboptimal estimators that exploit redundant information at both the signal and data levels to improve estimation performance. The proposed MLEs reduce to traditional fusion-based or diversity-based estimators when the communications or observations are perfect. By introducing a general message function, the proposed estimators can be applied when various analog or digital transmission schemes are used. Simulations show that estimators using digital communications with multiple-bit quantization outperform the estimator using analog-and-forward transmission in fading channels. Under total bandwidth and energy constraints, the MLE using multiple-bit quantization is superior to the one using binary quantization at medium and high observation signal-to-noise ratio levels.
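    As a rough sketch of ML estimation from multi-bit quantized observations (assuming a Gaussian sensing model and an ideal, error-free channel; the paper's fading-channel model and suboptimal estimators are not reproduced here), one can maximize the likelihood of the observed quantizer bins:

```python
# Rough sketch: ML estimation of a scalar theta from multi-bit quantized noisy
# observations y_k = Q(theta + n_k), n_k ~ N(0, sigma^2), over an ideal channel.
# The quantizer thresholds and sensing model below are assumptions for
# illustration, not the paper's setup.
import numpy as np
from math import erf, sqrt, log, inf

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)
theta_true, sigma = 0.3, 0.5
edges = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]    # uniform quantizer thresholds

samples = theta_true + sigma * rng.standard_normal(2000)
bins = np.searchsorted(edges, samples)             # quantized bin indices 0..7
counts = np.bincount(bins, minlength=len(edges) + 1)

def neg_log_lik(theta):
    # Probability that a reading falls in each bin, given theta.
    lo, hi = [-inf] + edges, edges + [inf]
    return -sum(c * log(max(Phi((h - theta) / sigma)
                            - Phi((l - theta) / sigma), 1e-300))
                for c, l, h in zip(counts, lo, hi))

grid = np.linspace(-1.0, 1.0, 201)
theta_hat = grid[np.argmin([neg_log_lik(t) for t in grid])]
print(theta_hat)  # close to theta_true despite the quantization
```

    This mirrors the abstract's qualitative claim: with enough quantization levels, the MLE over bin probabilities recovers the parameter nearly as well as unquantized estimation would.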