21 research outputs found

    Dynamic Reconfiguration in Camera Networks: A Short Survey

    There is a clear trend in camera networks towards enhanced functionality and flexibility, and a fixed static deployment is typically not sufficient to fulfill these increased requirements. Dynamic network reconfiguration helps to optimize network performance for the currently required tasks while considering the available resources. Although several reconfiguration methods have recently been proposed, e.g., for maximizing global scene coverage or maximizing the image quality of specific targets, a general framework highlighting the key components shared by all these systems is still lacking. In this paper, we propose a reference framework for network reconfiguration and present a short survey of some of the most relevant state-of-the-art works in this field, showing how they can be reformulated in our framework. Finally, we discuss the main open research challenges in camera network reconfiguration.

    Bayesian Search Under Dynamic Disaster Scenarios

    Search and Rescue (SAR) is a hard decision-making context in which a limited amount of resources must be strategically allocated over the search region in order to find missing people in time. In this thesis, we consider SAR scenarios in which the search region is affected by some type of dynamic threat, such as a wildfire or a hurricane. Despite the large number of SAR missions that consistently take place under these circumstances, and although Search Theory is a research area dating back more than half a century, to the best of our knowledge this kind of search problem has not been considered in any previous research. Here we propose a bi-objective mathematical optimization model and three solution methods for the problem: (1) epsilon-constraint; (2) lexicographic; and (3) an ant-colony-based heuristic. One objective of our model pursues the allocation of resources to the riskiest zones; it attempts to find victims located in the regions closest to the threat, which present a high risk of being reached by the disaster. In contrast, the second objective is oriented to allocating resources in the regions where the victim is most likely to be found. Furthermore, we implemented a receding-horizon approach that provides our planning methodology with the ability to adapt to the disaster's behavior based on updated information gathered during the mission. All our products were validated through computational experiments. Maestría. Magister en Ingeniería Industria
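The epsilon-constraint method listed above can be illustrated on a toy instance: one objective (detection probability) is maximised while the other (search effort placed in risky zones) is enforced as a constraint whose bound epsilon is swept to trace an approximate Pareto front. The cell probabilities, risk values, and exponential detection law below are invented for illustration, not taken from the thesis.

```python
from itertools import combinations_with_replacement
from collections import Counter

# Hypothetical 4-cell search region: per-cell probability that the victim
# is there (poc) and per-cell threat risk (risk). Values are illustrative.
poc  = [0.10, 0.40, 0.30, 0.20]
risk = [0.90, 0.20, 0.60, 0.30]
GLIMPSE = 0.5  # single-unit detection probability in a cell (assumed)

def detection(alloc):
    # Expected probability of finding the victim (exponential detection law).
    return sum(p * (1 - (1 - GLIMPSE) ** a) for p, a in zip(poc, alloc))

def risk_cover(alloc):
    # Risk-weighted effort: how much search effort sits in the riskiest zones.
    return sum(r * a for r, a in zip(risk, alloc))

def eps_constraint(units, eps):
    # Maximise detection subject to risk_cover(alloc) >= eps,
    # by brute force over all ways to drop `units` on the 4 cells.
    best = None
    for combo in combinations_with_replacement(range(4), units):
        alloc = [Counter(combo)[c] for c in range(4)]
        if risk_cover(alloc) >= eps and (best is None or detection(alloc) > detection(best)):
            best = alloc
    return best

# Sweeping epsilon trades detection probability against risk coverage.
for eps in (0.0, 1.5, 2.5):
    alloc = eps_constraint(3, eps)
    print(eps, alloc, round(detection(alloc), 3))
```

Tightening epsilon pushes effort towards the high-risk cells at the price of expected detection, which is exactly the tension between the two objectives described in the abstract.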

    Predictive Control of Networked Multiagent Systems via Cloud Computing


    Autonomous Monitoring of Contaminants in Fluids

    The litigation and mitigation of maritime incidents suffer from a lack of information, first at the incident location and then throughout the evolution of contaminants, such as spilled oil, through the surrounding environment. Prior work addresses this through ocean and oil models, model-directed sensor guidance, and other observation methods such as satellites. However, each of these approaches and research fields has shortcomings when viewed in the context of fast response to an incident and of constructing an all-in-one framework for monitoring contaminants using autonomous mobile sensors. In summary: models often lack consideration of data-assimilation or sensor-guidance requirements; sensor guidance is specific to source locating, oil mapping, or fluid measuring, and not all three; and data assimilation methods can have stringent requirements on model structure or computation time that may not be feasible. This thesis presents a model-based adaptive monitoring framework for the estimation of oil spills using mobile sensors. In the first of a four-stage process, simulation of a combined ocean, wind and oil model provides a state trajectory over a finite time horizon, used in the second stage to solve an adjoint optimisation problem for sensing locations. In the third stage, a reduced-order model is identified from the state trajectory and utilised alongside measurements to produce smoothed state estimates in the fourth stage, which update and re-initialise the first-stage simulation. In the second stage, sensors are directed to optimal sensing locations via the solution of a Partial Differential Equation (PDE) constrained optimisation problem. This problem formulation represents a key contributory idea, utilising the definition of spill uncertainty as a scalar PDE to be minimised subject to sensor, ocean, wind and oil constraints. Spill uncertainty is a function of uncertainty in (i) the bespoke model of the ocean, wind and oil spill, (ii) the reduced-order model identified from sensor data, and (iii) the data assimilation method employed to estimate the states of the environment and spill. The uncertainty minimisation is spatio-temporally weighted by a function of spill probability and information utility, prioritising critical measurements. In the penultimate chapter, numerical case studies spanning a 2500 km² coastal area are presented. Here the monitoring framework is compared to an industry-standard method in three scenarios: a spill monitoring and prediction problem, a retrodiction and monitoring problem, and a source-locating problem.
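The four-stage cycle described above (simulate, place sensors, identify a reduced-order model, assimilate and re-initialise) can be sketched as a toy receding-horizon loop. Everything concrete here is an illustrative assumption rather than the thesis's actual models: a 1-D diffusion "spill" stands in for the ocean-wind-oil model, a change-magnitude proxy stands in for the adjoint sensing criterion, a scalar least-squares fit stands in for reduced-order identification, and a fixed-gain blend stands in for the smoother.

```python
import random
random.seed(0)

N, HORIZON, CYCLES = 20, 5, 3
DIFF = 0.25  # diffusion coefficient of the toy 1-D "spill" model (assumed)

def step(c):
    # Toy stage-1 dynamics: explicit 1-D diffusion, zero-flux boundaries.
    return [c[i] + DIFF * (c[max(i - 1, 0)] - 2 * c[i] + c[min(i + 1, N - 1)])
            for i in range(N)]

def simulate(c0, n):
    traj = [c0]
    for _ in range(n):
        traj.append(step(traj[-1]))
    return traj

true_state = [1.0 if i == 10 else 0.0 for i in range(N)]
estimate   = [1.0 if i == 8  else 0.0 for i in range(N)]  # biased initial guess

for cycle in range(CYCLES):
    # Stage 1: forward simulation over a finite horizon.
    traj = simulate(estimate, HORIZON)
    # Stage 2: pick the sensing cell where predicted change is largest
    # (a crude stand-in for the adjoint uncertainty-minimisation problem).
    sensor = max(range(N), key=lambda i: abs(traj[-1][i] - traj[0][i]))
    # Stage 3: identify a scalar reduced-order decay factor `a` by least
    # squares on the trajectory (used here only to illustrate the stage).
    num = sum(traj[t + 1][sensor] * traj[t][sensor] for t in range(HORIZON))
    den = sum(traj[t][sensor] ** 2 for t in range(HORIZON)) or 1.0
    a = num / den
    # Stage 4: take a noisy measurement of the true field and blend it in.
    true_state = step(true_state)
    z = true_state[sensor] + random.gauss(0, 0.01)
    estimate = step(estimate)
    estimate[sensor] += 0.8 * (z - estimate[sensor])  # fixed-gain update
    # The updated estimate re-initialises the stage-1 simulation next cycle.
```

The value of the loop structure is that each cycle's assimilated estimate becomes the initial condition for the next forward simulation, which is the re-initialisation step the abstract describes.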

    Particle Gaussian Mixture Filters for Nonlinear Non-Gaussian Bayesian Estimation

    Nonlinear filtering is the problem of estimating the state of a stochastic nonlinear dynamical system using noisy observations. It is well known that the posterior state estimates in nonlinear problems may assume non-Gaussian multimodal probability densities. We present an unscented Kalman-particle hybrid filtering framework for tracking the three-dimensional motion of a space object. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse, without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. The performance of the hybrid filtering approach is assessed by simulating two test cases of space objects that are assumed to undergo full three-dimensional orbital motion. Having established its performance in the space object tracking problem, we extend the hybrid approach to the general multimodal estimation problem. We propose a particle Gaussian mixture-I (PGM-I) filter for nonlinear estimation that is free of the particle depletion problem inherent to most particle filters. The PGM-I filter employs an ensemble of randomly sampled states for the propagation of the state probability density. A Gaussian mixture model (GMM) of the propagated PDF is then recovered by clustering the ensemble. The posterior density is obtained subsequently through a Kalman measurement update of the mixture modes. We prove the convergence in probability of the resultant density to the true filter density, assuming exponential forgetting of initial conditions by the true filter. The PGM-I filter is capable of handling the non-Gaussianity of the state PDF arising from dynamics, initial conditions or process noise. A more general estimation scheme, the PGM-II filter, that can also handle non-Gaussianity related to the measurement update is considered next. The PGM-II filter employs a parallel Markov chain Monte Carlo (MCMC) method to sample from the posterior PDF. The PGM-II filter update is asymptotically exact and does not enforce any assumptions on the number of Gaussian modes. We test the performance of the PGM filters on a number of benchmark filtering problems chosen from recent literature. The results indicate that the PGM filters can perform on par with or better than other general-purpose nonlinear filters such as the feedback particle filter (FPF) and the log-homotopy-based particle flow filters. Based on the results, we derive important guidelines on the choice between the PGM-I and PGM-II filters. Furthermore, we conceive an extension of the PGM-I filter, namely the augmented PGM-I filter, for handling the nonlinear/non-Gaussian measurement update without incurring a large computational penalty. A preliminary design for a decentralized PGM-I filter for the distributed estimation problem is also obtained. Finally, we conduct a more detailed study on the performance of the parallel MCMC algorithm. It is found that running several parallel Markov chains can lead to significant computational savings in sampling problems that involve multimodal target densities. We also show that the parallel MCMC method can be used to solve global optimization problems.
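The PGM-I cycle described above (propagate an ensemble, cluster it into a Gaussian mixture, Kalman-update each mode) can be sketched in a scalar setting. The bimodal dynamics, noise levels, observation value, and crude two-cluster k-means below are illustrative assumptions, not components of the actual filter.

```python
import random, statistics
random.seed(1)

def dynamics(x):
    # Bimodal-inducing nonlinear map with process noise (illustrative).
    return 0.9 * x + 2.0 * (1 if x > 0 else -1) + random.gauss(0, 0.3)

def kmeans2(pts, iters=10):
    # Crude 2-cluster k-means used to recover a Gaussian mixture model.
    c = [min(pts), max(pts)]
    for _ in range(iters):
        groups = [[], []]
        for p in pts:
            groups[abs(p - c[0]) > abs(p - c[1])].append(p)
        c = [statistics.mean(g) if g else c[k] for k, g in enumerate(groups)]
    return groups

R = 0.5  # measurement noise variance, model z = x + v (assumed)
particles = [random.gauss(0, 1) for _ in range(500)]

# One PGM-I cycle: propagate ensemble, cluster to GMM, update each mode.
particles = [dynamics(x) for x in particles]
z = 2.5  # an assumed observation
modes = []
for g in kmeans2(particles):
    if len(g) < 2:
        continue
    m, P = statistics.mean(g), statistics.variance(g)
    K = P / (P + R)  # scalar Kalman gain for this mode
    modes.append((len(g) / len(particles), m + K * (z - m), (1 - K) * P))

# `modes` is the posterior GMM as (weight, mean, variance) triples.
```

Because the measurement update acts on the fitted mixture modes rather than on individual particle weights, no particle is ever starved of weight, which is the sense in which the construction avoids depletion.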

    Information theoretic sensor management

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 195-203).

    Sensor management may be defined as those stochastic control problems in which control values are selected to influence sensing parameters in order to maximize the utility of the resulting measurements for an underlying detection or estimation problem. While problems of this type can be formulated as a dynamic program, the state space of the program is in general infinite, and traditional solution techniques are inapplicable. Despite this fact, many authors have applied simple heuristics such as greedy or myopic controllers with great success. This thesis studies sensor management problems in which information theoretic quantities such as entropy are utilized to measure detection or estimation performance. The work has two emphases: firstly, we seek performance bounds which guarantee the performance of the greedy heuristic and derivatives thereof in certain classes of problems; secondly, we seek to extend these basic heuristic controllers to find algorithms that provide improved performance and are applicable in larger classes of problems for which the performance bounds do not apply. The primary problem of interest is multiple object tracking and identification; application areas include sensor network management and multifunction radar control. Utilizing the property of submodularity, as proposed for related problems by different authors, we show that the greedy heuristic applied to sequential selection problems with information theoretic objectives is guaranteed to achieve at least half of the optimal reward. Tighter guarantees are obtained for diffusive problems and for problems involving discounted rewards. Online computable guarantees also provide tighter bounds in specific problems. The basic result applies to open loop selections, where all decisions are made before any observation values are received; we also show that the closed loop greedy heuristic, which utilizes observations received in the interim in its subsequent decisions, possesses the same guarantee relative to the open loop optimal, and that no such guarantee exists relative to the optimal closed loop performance. The same mathematical property is utilized to obtain an algorithm that exploits the structure of selection problems involving multiple independent objects. The algorithm involves a sequence of integer programs which provide progressively tighter upper bounds to the true optimal reward. An auxiliary problem provides progressively tighter lower bounds, which can be used to terminate when a near-optimal solution has been found. The formulation involves an abstract resource consumption model, which allows observations that expend different amounts of available time. Finally, we present a heuristic approximation for an object tracking problem in a sensor network, which permits a direct trade-off between estimation performance and energy consumption. We approach the trade-off through a constrained optimization framework, seeking either to optimize estimation performance over a rolling horizon subject to a constraint on energy consumption, or to optimize energy consumption subject to a constraint on estimation performance. Lagrangian relaxation is used alongside a series of heuristic approximations to find a tractable solution that captures the essential structure in the problem.

    by Jason L. Williams. Ph.D.
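The greedy heuristic analysed above can be made concrete on a toy multi-object problem: at each step, observe the object whose measurement yields the largest immediate entropy reduction. The scalar-Gaussian object models and all numbers below are illustrative assumptions, not the thesis's experiments.

```python
import math

# Toy multi-object tracking instance (illustrative): each object has a
# scalar Gaussian state with prior variance P; measuring object i costs one
# step and has noise variance R[i].
prior_var = {"a": 4.0, "b": 9.0, "c": 1.0}
R         = {"a": 1.0, "b": 4.0, "c": 0.5}

def posterior_var(P, r):
    # Scalar Kalman measurement update of the variance.
    return P * r / (P + r)

def entropy(P):
    # Differential entropy of a scalar Gaussian with variance P.
    return 0.5 * math.log(2 * math.pi * math.e * P)

def greedy_schedule(steps):
    # Greedy heuristic: at each step observe the object whose measurement
    # buys the largest immediate entropy reduction.
    var = dict(prior_var)
    plan = []
    for _ in range(steps):
        gain = {i: entropy(var[i]) - entropy(posterior_var(var[i], R[i]))
                for i in var}
        i = max(gain, key=gain.get)
        var[i] = posterior_var(var[i], R[i])
        plan.append(i)
    return plan, var

plan, var = greedy_schedule(4)
```

Because entropy reduction here is submodular in the set of measurements per object (each further look at the same object buys less), this is the setting in which the sequential greedy choice carries the half-of-optimal guarantee stated in the abstract.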

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.

    Robust distributed planning strategies for autonomous multi-agent teams

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. Cataloged from department-submitted PDF version of thesis. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 225-244).

    The increased use of autonomous robotic agents, such as unmanned aerial vehicles (UAVs) and ground rovers, for complex missions has motivated the development of autonomous task allocation and planning methods that ensure spatial and temporal coordination for teams of cooperating agents. The basic problem can be formulated as a combinatorial optimization (mixed-integer program) involving nonlinear and time-varying system dynamics. For most problems of interest, optimal solution methods are computationally intractable (NP-hard), and centralized planning approaches, which usually require high-bandwidth connections with a ground station (e.g., to transmit received sensor data and to dispense agent plans), are resource intensive and react slowly to local changes in dynamic environments. Distributed approximate algorithms, where agents plan individually and coordinate with each other locally through consensus protocols, can alleviate many of these issues and have been successfully used to develop real-time conflict-free solutions for heterogeneous networked teams. An important issue associated with autonomous planning is that many of the algorithms rely on underlying system models and parameters which are often subject to uncertainty. This uncertainty can result from many sources, including: inaccurate modeling due to simplifications, assumptions, and/or parameter errors; fundamentally nondeterministic processes (e.g., sensor readings, stochastic dynamics); and dynamic local information changes. As discrepancies between the planner models and the actual system dynamics increase, mission performance typically degrades. The impact of these discrepancies on the overall quality of the plan is usually hard to quantify in advance due to nonlinear effects, coupling between tasks and agents, and interdependencies between system constraints. However, if uncertainty models of planning parameters are available, they can be leveraged to create robust plans that explicitly hedge against the inherent uncertainty given allowable risk thresholds. This thesis presents real-time robust distributed planning strategies for multi-agent networked teams operating in stochastic and dynamic environments. One class of distributed combinatorial planning algorithms uses auction algorithms augmented with consensus protocols to allocate tasks amongst a team of agents while resolving conflicting assignments locally between the agents. A particular algorithm in this class is the Consensus-Based Bundle Algorithm (CBBA), a distributed auction protocol that guarantees conflict-free solutions despite inconsistencies in situational awareness across the team. CBBA runs in polynomial time, demonstrating good scalability with increasing numbers of agents and tasks. This thesis builds upon the CBBA framework to address many realistic considerations associated with planning for networked teams, including time-critical mission constraints, limited communication between agents, and stochastic operating environments. A particular focus of this work is a robust extension to CBBA that handles distributed planning in stochastic environments given probabilistic parameter models and different stochastic metrics. The Robust CBBA algorithm proposed in this thesis provides a distributed real-time framework which can leverage different stochastic metrics to hedge against parameter uncertainty. In mission scenarios where a low probability of failure is required, a chance-constrained stochastic metric can be used to provide probabilistic guarantees on achievable mission performance given allowable risk thresholds. This thesis proposes a distributed chance-constrained approximation that can be used within the Robust CBBA framework, and derives constraints on individual risk allocations to guarantee equivalence between the centralized chance-constrained optimization and the distributed approximation. Different risk allocation strategies for homogeneous and heterogeneous teams are proposed that approximate the agent and mission score distributions a priori, and results are provided showing improved performance in time-critical mission scenarios given allowable risk thresholds.

    by Sameera S. Ponda. Ph.D.
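The auction-plus-consensus idea behind CBBA can be sketched in one bidding round: agents compute marginal scores for unassigned tasks, and a max-consensus step awards each task to the single highest bidder, so the resulting assignment is conflict-free by construction. The 1-D positions and distance-decay score below are illustrative assumptions, not the thesis's scoring model, and a real CBBA round also involves bundle release and asynchronous consensus that this sketch omits.

```python
# Hypothetical 1-D world: two agents and three tasks (illustrative values).
AGENT_POS = {"A1": 0.0, "A2": 10.0}
TASK_POS  = {"t1": 2.0, "t2": 9.0, "t3": 5.0}

def score(agent, bundle, task):
    # Marginal score: a base reward minus travel from the agent's last
    # bundled task (or its start position when the bundle is empty).
    pos = TASK_POS[bundle[-1]] if bundle else AGENT_POS[agent]
    return 10.0 - abs(TASK_POS[task] - pos)

bundles = {a: [] for a in AGENT_POS}
unassigned = set(TASK_POS)
while unassigned:
    # Every agent bids on every unassigned task; the max-consensus outcome
    # keeps only the single highest bid, so no task is assigned twice.
    bid, agent, task = max((score(a, bundles[a], t), a, t)
                           for a in bundles for t in sorted(unassigned))
    bundles[agent].append(task)
    unassigned.remove(task)

# `bundles` now holds each agent's conflict-free task sequence.
```

Each agent's bid already accounts for the tasks it has won so far, which is the "bundle" aspect: marginal scores shrink as an agent's route grows, spreading tasks across the team.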

    Joint University Program for Air Transportation Research, 1988-1989

    The research conducted from 1988 to 1989 under the NASA/FAA-sponsored Joint University Program for Air Transportation Research is summarized. The Joint University Program is a coordinated set of three grants sponsored by NASA Langley Research Center and the Federal Aviation Administration, one each with the Massachusetts Institute of Technology, Ohio University, and Princeton University. Completed works, status reports, and annotated bibliographies are presented for research topics, which include computer science, guidance and control theory and practice, aircraft performance, flight dynamics, and applied experimental psychology. An overview of the year's activities at each university is also presented.