
    An efficient Monte Carlo approach for optimizing decentralized estimation networks constrained by undirected topologies

    We consider a decentralized estimation network subject to communication constraints such that nearby platforms can communicate with each other through low-capacity links, rendering an undirected graph. After transmitting symbols based on its measurement, each node outputs an estimate for the random variable it is associated with as a function of both the measurement and the incoming messages from its neighbors. We are concerned with the underlying design problem and handle it through a Bayesian risk that penalizes the cost of communications as well as estimation errors, while constraining the feasible set of communication and estimation rules local to each node by the undirected communication graph. We adopt an iterative solution previously proposed for decentralized detection networks, which can be carried out in a message-passing fashion under certain conditions. For the estimation case, the integral operators involved do not yield closed-form solutions in general, so we utilize Monte Carlo methods. We obtain an iterative algorithm that yields an approximation to an optimal decentralized estimation strategy, in a person-by-person sense, subject to such constraints. In an example, we quantify the trade-off between the estimation accuracy and the cost of communications using the proposed algorithm.
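
    The computational core of each node's rule is a conditional expectation with no closed form once messages are discrete and distributions are arbitrary. The following is a minimal sketch, not the paper's algorithm, of how such an expectation can be approximated with samples: a scalar Gaussian state, a hypothetical one-bit message produced by thresholding a neighbor's measurement, and a local MMSE estimate computed by self-normalized importance weighting over particles consistent with the received message (all model parameters below are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: scalar state X ~ N(0, 1) observed by two sensors.
# The neighbor transmits a one-bit message m = 1{y_nbr > 0}; the local node
# forms an estimate from its own measurement y and the received bit m.
N = 100_000
x = rng.normal(0.0, 1.0, N)            # prior samples of the state
y_nbr = x + rng.normal(0.0, 0.5, N)    # neighbor measurement samples
m = (y_nbr > 0.0).astype(int)          # assumed one-bit message rule

def mmse_estimate(y_obs, m_obs, sigma=0.5):
    """Particle approximation of E[X | Y = y_obs, M = m_obs].

    Conditioning on the discrete message is handled by keeping only the
    particles whose simulated message matches m_obs; conditioning on the
    continuous local measurement is handled by importance weights
    proportional to the local likelihood p(y_obs | x).
    """
    keep = (m == m_obs)
    w = np.exp(-0.5 * ((y_obs - x[keep]) / sigma) ** 2)
    return np.sum(w * x[keep]) / np.sum(w)

print(mmse_estimate(0.8, 1))   # estimate given y = 0.8 and message m = 1
print(mmse_estimate(0.8, 0))   # same measurement, opposite message
```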

    Regional variance for multi-object filtering

    Recent progress in multi-object filtering has led to algorithms that compute the first-order moment of multi-object distributions based on sensor measurements. The number of targets in arbitrarily selected regions can be estimated using the first-order moment. In this work, we introduce explicit formulae for the computation of the second-order statistic of the target number. The proposed concept of regional variance quantifies the level of confidence in target-number estimates in arbitrary regions and facilitates information-based decisions. We provide algorithms for its computation for the Probability Hypothesis Density (PHD) and the Cardinalized Probability Hypothesis Density (CPHD) filters. We demonstrate the behaviour of the regional statistics through simulation examples.
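
    For the PHD filter, whose posterior is summarized by an intensity function under a Poisson assumption, the number of targets in a region is Poisson distributed, so the regional variance coincides with the regional mean; the CPHD expressions in the paper additionally involve the cardinality distribution and are not reproduced here. Below is a minimal sketch of the PHD case with a hypothetical particle representation of the intensity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical particle representation of a PHD (intensity function) over
# 2-D positions: the weights integrate to the expected total target count.
particles = rng.uniform(0.0, 100.0, size=(5000, 2))   # particle positions
weights = np.full(5000, 3.0 / 5000)                   # ~3 expected targets overall

def regional_count_stats(particles, weights, xlim, ylim):
    """Expected target count in a rectangular region and, under the PHD
    filter's Poisson assumption, the variance of that count (equal to the
    mean, since a Poisson-distributed cardinality has variance = mean)."""
    inside = ((particles[:, 0] >= xlim[0]) & (particles[:, 0] <= xlim[1]) &
              (particles[:, 1] >= ylim[0]) & (particles[:, 1] <= ylim[1]))
    mean = float(weights[inside].sum())
    return mean, mean   # (regional mean, regional variance)

print(regional_count_stats(particles, weights, (0, 50), (0, 50)))
```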

    Monte Carlo optimization approach for decentralized estimation networks under communication constraints

    We consider designing decentralized estimation schemes over bandwidth-limited communication links, with a particular interest in the trade-off between estimation accuracy and the cost of communications due to, e.g., energy consumption. We take into account two classes of in-network processing strategies, which yield graph representations by modeling the sensor platforms as vertices and the communication links as edges, together with a tractable Bayesian risk that comprises the cost of transmissions and a penalty for the estimation errors. This approach captures a broad range of possibilities for “online” processing of observations as well as the constraints imposed, and enables a rigorous design setting in the form of a constrained optimization problem. Similar schemes, as well as the structures exhibited by the solutions to the design problem, have been studied previously in the context of decentralized detection. Under reasonable assumptions, the optimization can be carried out in a message-passing fashion. We adopt this framework for estimation; however, the corresponding optimization schemes involve integral operators that cannot be evaluated exactly in general. We develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both classes of in-network processing strategies and their optimization. The proposed Monte Carlo optimization procedures operate in a scalable and efficient fashion and, owing to their non-parametric nature, can produce results for any distributions provided that samples can be generated from the marginals. In addition, this approach exhibits graceful degradation of the estimation accuracy asymptotically as the communication becomes more costly, through a parameterized Bayesian risk.
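
    The accuracy-versus-communication trade-off can be pictured with a far simpler stand-in problem than the networks treated in the paper. The sketch below assumes a scalar linear-Gaussian model and a hypothetical censoring rule (transmit the measurement only when its magnitude exceeds a threshold), evaluates a Bayesian risk of the form MSE + λ·P(transmit) by Monte Carlo, and sweeps the communication price λ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in (not the paper's network): scalar state X ~ N(0, 1),
# measurement Y = X + V with V ~ N(0, 0.25). The sensor transmits Y only
# when |Y| exceeds a threshold; otherwise the estimator falls back to the
# prior mean. The risk trades squared error against transmission frequency.
N = 200_000
x = rng.normal(0.0, 1.0, N)
y = x + rng.normal(0.0, 0.5, N)

def risk(threshold, lam):
    transmit = np.abs(y) > threshold
    # Linear-Gaussian MMSE estimate when y is received, prior mean otherwise.
    xhat = np.where(transmit, y / 1.25, 0.0)
    mse = np.mean((x - xhat) ** 2)
    return mse + lam * np.mean(transmit), mse, np.mean(transmit)

thresholds = np.linspace(0.0, 3.0, 61)
for lam in (0.0, 0.2, 1.0):   # increasing cost of communication
    j, mse, rate = min((risk(t, lam) for t in thresholds), key=lambda r: r[0])
    print(f"lambda={lam:.1f}  risk={j:.3f}  mse={mse:.3f}  P(transmit)={rate:.2f}")
```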

    Graphical model-based approaches to target tracking in sensor networks: an overview of some recent work and challenges

    Sensor networks have provided a technology base for distributed target tracking applications, among others. Conventional centralized approaches to the problem lack scalability in such a scenario, where a large number of sensors provide measurements simultaneously in a possibly non-collaborative environment. Therefore, research efforts have focused on scalable, robust, and distributed algorithms for the inference tasks related to target tracking, i.e., localization, data association, and track maintenance. Graphical models provide a rigorous tool for the development of such algorithms by modeling the information structure of a given task and providing distributed solutions through message-passing algorithms. However, the limited communication capabilities and energy resources of sensor networks pose the additional difficulty of considering the trade-off between the communication cost and the accuracy of the result. Also, the network structure and the information structure are different aspects of the problem, and a mapping between the physical entities and the information structure is needed. In this paper, we discuss available formalisms based on graphical models for target tracking in sensor networks, with a focus on the aforementioned issues. We point out additional constraints that must be asserted in order to achieve further insight and more effective solutions.
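
    As a concrete, if simplified, picture of the message-passing algorithms the overview refers to, the sketch below runs sum-product (forward-backward) inference for a single target on a three-step chain over a small grid of candidate positions. It only illustrates how posterior marginals decompose into local message computations; the models and numbers are made up for illustration and are not taken from any of the surveyed papers.

```python
import numpy as np

# Minimal sum-product message passing on a 3-step chain x1 - x2 - x3, each
# variable taking values on a small 1-D grid of candidate target positions.
K = 5                                             # grid size

# Transition model: the target tends to stay in place or move one cell.
trans = np.array([[np.exp(-abs(i - j)) for j in range(K)] for i in range(K)])
trans /= trans.sum(axis=1, keepdims=True)

# Local evidence (likelihoods) from three sensors, one per time step.
evidence = np.array([
    [0.10, 0.60, 0.20, 0.05, 0.05],   # sensor 1 favors cell 1
    [0.05, 0.20, 0.50, 0.20, 0.05],   # sensor 2 favors cell 2
    [0.05, 0.05, 0.20, 0.50, 0.20],   # sensor 3 favors cell 3
])

# Forward and backward messages along the chain (uniform initial prior).
fwd = [evidence[0]]
for t in (1, 2):
    fwd.append(evidence[t] * (fwd[-1] @ trans))
bwd = [None, None, np.ones(K)]
for t in (1, 0):
    bwd[t] = trans @ (evidence[t + 1] * bwd[t + 1])

for t in range(3):
    marginal = fwd[t] * bwd[t]
    marginal /= marginal.sum()
    print(f"t={t + 1}: posterior over cells =", np.round(marginal, 3))
```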

    Developing an Overbooking Fuzzy-Based Mathematical Optimization Model for Multi-Leg Flights

    Overbooking is one of the most vital revenue management practices used in the airline industry. Identifying an overbooking level is a challenging task due to the uncertainties associated with external factors, such as demand for tickets; inappropriate overbooking levels may cause revenue losses as well as loss of reputation and customer loyalty. Therefore, the aim of this paper is to propose a fuzzy linear programming model and Genetic Algorithms (GAs) to maximize the overall revenue of a large-scale multi-leg flight network by minimizing the number of empty seats and the number of denied passengers. A fuzzy logic technique is used to model the fuzzy ticket demand for overbooking, and a metaheuristic GA technique is adopted to solve the large-scale multi-leg flight problem. As part of model verification, the proposed GA is applied to solve a small multi-leg flight linear programming model with a fuzzified demand factor. In addition, experiments on large-scale problems with different input parameter settings, such as penalty rate, show-up rate, and demand level, are conducted to understand the behavior of the developed model. The validation results show that the proposed GA produces results almost identical to those of the small-scale multi-leg flight problem. In addition, the response of the large-scale multi-leg flight network to changes in these input parameters is also reported in terms of a number of KPIs, including total bookings, denied passengers, and net overbooking profit.
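
    The essence of the model, balancing empty seats against denied boardings under uncertain demand and show-up behaviour, can be sketched with a toy simulation and a small genetic algorithm. The code below is not the paper's fuzzy linear programming formulation: it treats three legs independently, replaces the fuzzy demand with a Poisson draw, and all capacities, fares, and penalties are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the overbooking problem: three independent flight legs,
# each with capacity 150. A chromosome is a vector of booking limits, one
# per leg. Expected profit is estimated by simulating uncertain demand and
# a random show-up outcome; denied boardings incur a penalty.
CAPACITY = 150
FARE = 100.0
DENIED_PENALTY = 250.0
N_LEGS, N_SIM = 3, 2000

def expected_profit(limits):
    demand = rng.poisson(lam=170, size=(N_SIM, N_LEGS))
    bookings = np.minimum(demand, limits)
    show_ups = rng.binomial(bookings, 0.9)            # ~90% show-up rate
    boarded = np.minimum(show_ups, CAPACITY)
    denied = show_ups - boarded
    return np.mean(FARE * boarded - DENIED_PENALTY * denied)

def genetic_search(pop_size=30, generations=40):
    pop = rng.integers(CAPACITY, CAPACITY + 40, size=(pop_size, N_LEGS))
    for _ in range(generations):
        fitness = np.array([expected_profit(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]
        elite = pop[order[: pop_size // 2]]           # selection: keep best half
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.integers(0, 2, size=(pop_size, N_LEGS)).astype(bool)
        children = np.where(mask, parents[:, 0], parents[:, 1])   # uniform crossover
        children += rng.integers(-2, 3, size=children.shape)      # small mutation
        pop = np.clip(children, CAPACITY, CAPACITY + 60)
        pop[0] = elite[0]                             # elitism
    return pop[0], expected_profit(pop[0])

best_limits, profit = genetic_search()
print("booking limits per leg:", best_limits, " estimated profit:", round(profit, 1))
```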

    An efficient Monte Carlo approach for optimizing communication constrained decentralized estimation networks

    We consider the design problem of a decentralized estimation network under communication constraints. The underlying low-capacity links are modeled by introducing a directed acyclic graph in which each node corresponds to a sensor platform. The operation of the platforms is constrained by the graph such that each node, based on its measurement and the incoming messages from its parents, produces a local estimate and outgoing messages to its children. A Bayesian risk that captures both the estimation error penalty and the cost of communications, e.g., due to consumption of the limited resource of energy, together with constraining the feasible set of strategies by the graph, yields a rigorous problem definition. We adopt an iterative solution that converges to an optimal strategy in a person-by-person sense, previously proposed for decentralized detection networks under a team-theoretic investigation. Provided that some reasonable assumptions hold, the solution admits a message-passing interpretation exhibiting linear complexity in the number of nodes. However, the corresponding expressions in the estimation setting contain integral operators with no closed-form solutions in general. We propose particle representations and approximate computational schemes through Monte Carlo methods in order not to compromise model accuracy, and we achieve an optimization method which results in an approximation to an optimal strategy for decentralized estimation networks under communication constraints. Through an example, we present a quantification of the trade-off between the estimation accuracy and the cost of communications, where the former degrades as the latter is increased.
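
    The person-by-person idea can be illustrated on a two-node toy problem: each node's rule is optimized in turn while the other's is held fixed, against a Monte Carlo estimate of the risk. The sketch below (a hypothetical threshold quantizer at node 1 and a per-message affine estimator at node 2, squared-error cost only, no communication charge) is meant only to show the alternating best-response structure, not the paper's message-passing algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-node toy network: node 1 sees y1 and sends a one-bit message
# m = 1{y1 > tau}; node 2 sees y2 and m and outputs a per-message affine
# estimate of x. The two rules are optimized alternately, each with the
# other held fixed, against a Monte Carlo estimate of the squared error.
N = 50_000
x = rng.normal(0.0, 1.0, N)
y1 = x + rng.normal(0.0, 0.7, N)
y2 = x + rng.normal(0.0, 0.7, N)

def fit_estimator(tau):
    """Node 2's best response: least-squares affine estimator per message value."""
    m = y1 > tau
    coeffs = {}
    for bit in (False, True):
        A = np.column_stack([y2[m == bit], np.ones(np.sum(m == bit))])
        coeffs[bit], *_ = np.linalg.lstsq(A, x[m == bit], rcond=None)
    return coeffs

def risk(tau, coeffs):
    m = y1 > tau
    xhat = np.where(m, coeffs[True][0] * y2 + coeffs[True][1],
                       coeffs[False][0] * y2 + coeffs[False][1])
    return np.mean((x - xhat) ** 2)

tau = 0.0
for it in range(5):
    coeffs = fit_estimator(tau)                       # node 2 best-responds
    grid = np.linspace(-2.0, 2.0, 81)
    tau = min(grid, key=lambda t: risk(t, coeffs))    # node 1 best-responds
    print(f"iteration {it + 1}: tau = {tau:+.2f}, risk = {risk(tau, coeffs):.4f}")
```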

    Simultaneous tracking and long time integration for detection in collaborative array radars


    Information measures in distributed multitarget tracking

    In this paper, we consider the role that different information measures play in the problem of decentralised multi-target tracking. In many sensor networks, it is not possible to maintain the full joint probability distribution, and so suboptimal algorithms must be used. We use a distributed form of the Probability Hypothesis Density (PHD) filter based on a generalisation of covariance intersection known as exponential mixture densities (EMDs). However, EMD-based fusion must be actively controlled to optimise the relative weights placed on different information sources. We explore the performance consequences of using different information measures to optimise the update. By considering approaches that minimise absolute information (entropy and Rényi entropy) or equalise divergence (Kullback-Leibler divergence and Rényi divergence), we show that the divergence measures are both simpler and easier to work with. Furthermore, in our simulation scenario, the performance is very similar across all the information measures considered, suggesting that the simpler measures can be used. © 2011 IEEE
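
    For a single pair of Gaussian densities, the EMD reduces to covariance intersection, and one of the weight-selection rules mentioned above, equalising the Kullback-Leibler divergence from the fused density to each input, can be sketched directly. The means and covariances below are invented for illustration, and the multi-target (PHD) case treated in the paper is not reproduced here.

```python
import numpy as np

# Exponential mixture density (EMD) fusion of two Gaussians: the normalised
# product p1^omega * p2^(1-omega) is again Gaussian, with the covariance-
# intersection form for its moments.
def emd_fuse(m1, P1, m2, P2, omega):
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    m = P @ (omega * I1 @ m1 + (1.0 - omega) * I2 @ m2)
    return m, P

def kl_gauss(m0, P0, m1, P1):
    """KL(N(m0, P0) || N(m1, P1))."""
    d = len(m0)
    I1 = np.linalg.inv(P1)
    diff = m1 - m0
    return 0.5 * (np.trace(I1 @ P0) + diff @ I1 @ diff - d
                  + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

# Invented inputs standing in for two sensors' local posteriors.
m1, P1 = np.array([0.0, 0.0]), np.diag([1.0, 4.0])
m2, P2 = np.array([1.0, 0.5]), np.diag([3.0, 1.0])

# Grid search for the weight that equalises the two divergences.
def imbalance(w):
    m, P = emd_fuse(m1, P1, m2, P2, w)
    return abs(kl_gauss(m, P, m1, P1) - kl_gauss(m, P, m2, P2))

omega = min(np.linspace(0.01, 0.99, 99), key=imbalance)
m, P = emd_fuse(m1, P1, m2, P2, omega)
print("omega =", round(float(omega), 2))
print("fused mean =", m, "\nfused covariance =\n", P)
```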