
    Consensus-based Networked Tracking in Presence of Heterogeneous Time-Delays

    We propose a distributed (single) target tracking scheme based on networked estimation and consensus algorithms over static sensor networks. The tracking part builds on the linear time-difference-of-arrival (TDOA) measurement model proposed in our previous works. This paper, in particular, develops delay-tolerant distributed filtering solutions over sparse data-transmission networks. We assume arbitrary heterogeneous delays at different links. This may occur in many realistic large-scale applications where data-sharing between nodes is subject to latency, due either to communication-resource constraints or to large spatially distributed sensor networks. The solution we propose shows improved performance (verified by both theory and simulations) in such scenarios. Another advantage of such distributed schemes is the possibility of adding localized fault-detection and isolation (FDI) strategies along with survivable graph-theoretic design, which opens many follow-up avenues to this research. To the best of our knowledge, no such delay-tolerant distributed linear algorithm is given in the existing distributed tracking literature. Comment: ICRoM2
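
    The delay-tolerant consensus filtering described above can be illustrated with a small toy: each node runs a local prediction/update on its own measurement and then performs a consensus step on neighbor estimates that arrive with link-specific delays, propagating each stale estimate forward through the target model before fusing it. This is a minimal sketch under assumed models and gains (constant-velocity target, scalar linearized measurement, randomly drawn integer delays), not the paper's exact TDOA-based filter.

        import numpy as np

        rng = np.random.default_rng(0)
        n, T = 4, 50
        A = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity target model (assumed)
        H = np.array([[1.0, 0.0]])                # linearized scalar measurement (assumed)
        gain, eps = 0.5, 0.1                      # local filter gain, consensus step size
        delay = {(i, j): int(rng.integers(1, 4))  # heterogeneous link delays, in steps
                 for i in range(n) for j in range(n) if i != j}

        x = np.array([0.0, 1.0])                  # true target state
        hist = [[np.zeros(2)] for _ in range(n)]  # each node's estimate history

        for t in range(T):
            x = A @ x                             # target moves
            new = []
            for i in range(n):
                pred = A @ hist[i][-1]            # local time update
                z = (H @ x)[0] + 0.1 * rng.standard_normal()
                est = pred + gain * H.ravel() * (z - (H @ pred)[0])
                for j in range(n):                # consensus on *delayed* neighbor data
                    if j == i:
                        continue
                    d = delay[(i, j)]
                    stale = hist[j][max(0, len(hist[j]) - 1 - d)]
                    fwd = np.linalg.matrix_power(A, d) @ stale   # propagate to "now"
                    est = est + eps * (fwd - est)
                new.append(est)
            for i in range(n):
                hist[i].append(new[i])

        print("final estimation errors per node:",
              [round(float(np.linalg.norm(hist[i][-1] - x)), 3) for i in range(n)])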

    Collaborative signal and information processing for target detection with heterogeneous sensor networks

    In this paper, an approach to target detection and acquisition with heterogeneous sensor networks through strategic resource allocation and coordination is presented. Based on sensor management and collaborative signal and information processing, low-capacity, low-cost sensors are strategically deployed to guide and cue the scarce high-performance sensors in the network to improve data quality, so that the mission is eventually completed more efficiently and at lower cost. We focus on the problem of designing such a network system, in which issues of resource selection and allocation, system behaviour and capacity, target behaviour and patterns, the environment, and multiple constraints such as cost must be addressed simultaneously. Simulation results offer significant insight into sensor selection and network operation, and demonstrate the great benefits introduced by guided search in an application of hunting down and capturing hostile vehicles on the battlefield.
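
    As a sanity check of the cueing idea, the toy below deploys many unreliable low-cost detectors over a grid of cells and tasks a single high-performance sensor only with the cells they flag, rather than a full sweep. The detection probabilities, grid size and perfect-confirmation assumption are illustrative, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cells = 100
        targets = rng.random(n_cells) < 0.03      # ground truth: sparse hostile vehicles
        p_detect, p_false = 0.8, 0.1              # low-cost sensor characteristics (assumed)
        cheap_hits = ((targets & (rng.random(n_cells) < p_detect)) |
                      (~targets & (rng.random(n_cells) < p_false)))

        # Without cueing, the high-performance sensor would sweep all cells;
        # with cueing it only confirms the flagged ones (assumed perfect confirmation).
        cued = np.flatnonzero(cheap_hits)
        confirmed = [int(c) for c in cued if targets[c]]
        print(f"high-performance sensor tasked on {len(cued)} of {n_cells} cells; "
              f"targets confirmed: {len(confirmed)}")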

    Gravitational Clustering: A Simple, Robust and Adaptive Approach for Distributed Networks

    Distributed signal processing for wireless sensor networks enables different devices to cooperate in solving different signal processing tasks. A crucial first step is to answer the question: who observes what? Recently, several distributed algorithms have been proposed that frame the signal/object labelling problem in terms of cluster analysis after extracting source-specific features; however, the number of clusters is assumed to be known. We propose a new method called Gravitational Clustering (GC) to adaptively estimate the time-varying number of clusters based on a set of feature vectors. The key idea is to exploit the physical principle of gravitational force between mass units: streaming-in feature vectors are considered as mass units of fixed position in the feature space, around which mobile mass units are injected at each time instant. The cluster enumeration exploits the fact that the highest attraction on the mobile mass units is exerted by regions with a high density of feature vectors, i.e., gravitational clusters. By sharing estimates among neighboring nodes via a diffusion-adaptation scheme, cooperative and distributed cluster enumeration is achieved. Numerical experiments concerning robustness against outliers, convergence and computational complexity are conducted. The application in a distributed cooperative multi-view camera network illustrates the applicability to real-world problems. Comment: 12 pages, 9 figures
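
    The gravitational mechanism lends itself to a compact illustration: fixed mass units (feature vectors) pull injected mobile units toward dense regions, and the number of distinct rest points in dense areas gives the cluster-count estimate. The sketch below is a single-node toy under assumed step sizes, merge radius and synthetic two-cluster data; the actual GC algorithm adds streaming updates and the diffusion-based network cooperation described above.

        import numpy as np

        rng = np.random.default_rng(2)
        # two true clusters of feature vectors, acting as fixed mass units (assumed data)
        fixed = np.vstack([rng.normal([0.0, 0.0], 0.3, (50, 2)),
                           rng.normal([4.0, 4.0], 0.3, (50, 2))])

        mobile = rng.uniform(-1.0, 5.0, (20, 2))  # injected mobile mass units
        for _ in range(400):                      # let the gravity-like pull act
            diff = fixed[None, :, :] - mobile[:, None, :]         # (20, 100, 2)
            dist = np.linalg.norm(diff, axis=2, keepdims=True) + 1e-3
            force = (diff / dist**3).sum(axis=1)                  # ~1/r^2 attraction
            step = force / (np.linalg.norm(force, axis=1, keepdims=True) + 1e-9)
            mobile += 0.02 * step                 # constant-speed drift toward mass

        # count distinct rest points, keeping only those that settled in dense regions
        centers = []
        for m in mobile:
            density = int((np.linalg.norm(fixed - m, axis=1) < 1.0).sum())
            if density >= 5 and all(np.linalg.norm(m - c) > 1.0 for c in centers):
                centers.append(m)
        print("estimated number of clusters:", len(centers))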

    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is still slower than a central computer (it entails a larger computational delay). However, while nodes can process the data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to a central computer for processing, it incurs communication delay. This leads to a fundamental communication-computation trade-off, where each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we provide a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors perform the same computation) that monitors a continuous-time scalar linear system, the optimal amount of local preprocessing maximizing the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multi-variate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a sensor subset when computational constraints make using all sensors suboptimal. Numerical simulations show that selecting the sensors is crucial. Moreover, we show that if the nodes apply the preprocessing policy suggested by our algorithms, they can largely improve the network estimation performance. Comment: 15 pages, 16 figures. Accepted journal version
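
    A back-of-the-envelope model makes the trade-off concrete: preprocessing a fraction a of the work locally is slow but parallel and shrinks the message, while the remainder is processed quickly but sequentially at the center. The constants below, and the assumed superlinear local-processing cost, are illustrative only; the paper derives the optimal preprocessing analytically for the homogeneous scalar case.

        import numpy as np

        n_nodes = 8
        local_speed, central_speed = 1.0, 10.0    # ops per ms: local nodes are slower
        work = 100.0                              # processing ops per raw measurement
        raw_bits, compressed_bits = 1000.0, 100.0 # message size before/after full preprocessing
        link_rate = 50.0                          # bits per ms on each link

        def total_delay(a):
            """End-to-end delay if each node preprocesses a fraction `a` locally."""
            local = (a ** 2) * work / local_speed                 # parallel; superlinear cost (assumed)
            bits = raw_bits - a * (raw_bits - compressed_bits)    # compression from preprocessing
            comm = bits / link_rate
            central = n_nodes * (1.0 - a) * work / central_speed  # sequential at the center
            return local + comm + central

        alphas = np.linspace(0.0, 1.0, 101)
        best = alphas[int(np.argmin([total_delay(a) for a in alphas]))]
        print(f"best preprocessing fraction ~ {best:.2f}: "
              f"{total_delay(best):.1f} ms vs {total_delay(0.0):.1f} ms with raw forwarding")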

    LQG Control and Sensing Co-Design

    We investigate a Linear-Quadratic-Gaussian (LQG) control and sensing co-design problem, where one jointly designs sensing and control policies. We focus on the realistic case where the sensing design is selected among a finite set of available sensors, each associated with a different cost (e.g., power consumption). We consider two dual problem instances: sensing-constrained LQG control, where one maximizes control performance subject to a sensor-cost budget, and minimum-sensing LQG control, where one minimizes sensor cost subject to performance constraints. We prove that no polynomial-time algorithm can guarantee, across all problem instances, a constant approximation factor from the optimal. Nonetheless, we present the first polynomial-time algorithms with per-instance suboptimality guarantees. To this end, we leverage a separation principle that partially decouples the design of sensing and control. Then, we frame LQG co-design as the optimization of approximately supermodular set functions; we develop novel algorithms to solve the problems; and we prove original results on the performance of the algorithms, establishing connections between their suboptimality and control-theoretic quantities. We conclude the paper by discussing two applications, namely sensing-constrained formation control and resource-constrained robot navigation. Comment: Accepted to IEEE TAC. Includes contributions to the submodular function optimization literature, and extends conference paper arXiv:1709.0882
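
    The flavor of the set-function optimization can be sketched with a generic cost-benefit greedy: under a sensor-cost budget, repeatedly add the sensor with the best error-reduction-per-cost, here scored by the trace of a posterior covariance for a static Gaussian state. The measurement model, costs and the plain greedy rule are assumptions for illustration, not the paper's exact algorithms or LQG objective.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sensors, dim, budget = 10, 3, 4.0
        C = rng.standard_normal((n_sensors, dim))   # sensor i measures C[i] @ x (assumed)
        cost = rng.uniform(0.5, 2.0, n_sensors)     # per-sensor cost (assumed)
        prior_info = np.eye(dim)                    # prior information matrix

        def error(S):
            """Posterior MSE surrogate: trace of the covariance given sensor set S."""
            info = prior_info + sum(np.outer(C[i], C[i]) for i in S)
            return float(np.trace(np.linalg.inv(info)))

        chosen, spent = [], 0.0
        while True:
            best, best_gain = None, 0.0
            for i in range(n_sensors):
                if i in chosen or spent + cost[i] > budget:
                    continue
                gain = (error(chosen) - error(chosen + [i])) / cost[i]
                if gain > best_gain:                # best error reduction per unit cost
                    best, best_gain = i, gain
            if best is None:
                break
            chosen.append(best)
            spent += cost[best]

        print("chosen sensors:", chosen, "cost:", round(spent, 2),
              "error:", round(error(chosen), 3))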

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. Comment: 46 pages, 22 figures

    On Distributed Linear Estimation With Observation Model Uncertainties

    We consider distributed estimation of a Gaussian source in a heterogeneous bandwidth-constrained sensor network, where the source is corrupted by independent multiplicative and additive observation noises, with incomplete statistical knowledge of the multiplicative noise. For multi-bit quantizers, we derive the closed-form mean-square-error (MSE) expression for the linear minimum MSE (LMMSE) estimator at the fusion center (FC). For both error-free and erroneous communication channels, we propose several rate allocation methods, named longest root-to-leaf path, greedy, and integer relaxation, to (i) minimize the MSE given a network bandwidth constraint, and (ii) minimize the required network bandwidth given a target MSE. We also derive the Bayesian Cramér-Rao lower bound (CRLB) and compare the MSE performance of our proposed methods against the CRLB. Our results corroborate that, for low-power multiplicative observation noises and adequate network bandwidth, the gaps between the MSE of our proposed methods and the CRLB are negligible, while the performance of other methods, such as individual and uniform rate allocation, is not satisfactory.
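
    A minimal sketch of the greedy style of rate allocation: each additional bit at a sensor shrinks its quantization noise roughly as 2^(-2b), and bits are assigned one at a time wherever they most reduce a simple LMMSE-style error surrogate. The noise model, the quantizer constant and the surrogate are assumptions for illustration, not the closed-form MSE expressions derived in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sensors, total_bits = 5, 20
        sigma_source = 1.0                             # source standard deviation
        add_noise = rng.uniform(0.1, 0.5, n_sensors)   # additive noise variances (assumed)
        q_const = 1.0                                  # quantizer constant (assumed)

        def mse(bits):
            # effective noise per sensor: additive plus quantization ~ 2^(-2b)
            eff = add_noise + q_const * 2.0 ** (-2 * bits)
            # LMMSE of fusing independent noisy looks at the source
            return 1.0 / (1.0 / sigma_source**2 + np.sum(1.0 / eff))

        bits = np.zeros(n_sensors)
        for _ in range(total_bits):
            gains = []
            for i in range(n_sensors):                 # give the next bit where it helps most
                trial = bits.copy()
                trial[i] += 1
                gains.append(mse(bits) - mse(trial))
            bits[int(np.argmax(gains))] += 1

        print("bits per sensor:", bits.astype(int), " MSE:", round(mse(bits), 4))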